==> slurm-slurm-15-08-7-1/.gitignore <==
*.o
*.la
*.lo
.deps
.libs
Makefile
/config.h
/config.log
/config.status
/config.xml
/contribs/cray/opt_modulefiles_slurm
/contribs/cray/slurmconfgen.py
/contribs/perlapi/libslurm/perl/Makefile.PL
/contribs/perlapi/libslurmdb/perl/Makefile.PL
/contribs/phpext/slurm_php/config.m4
/doc/html/*.html
!/doc/html/review_release.html
/etc/cgroup.release_common.example
/etc/init.d.slurm
/etc/init.d.slurmdbd
/etc/slurmctld.service
/etc/slurmd.service
/etc/slurmdbd.service
/libtool
/slurm/slurm.h
/slurm/stamp-h2
/src/api/pmi_version.map
/src/api/version.map
/src/common/global_defaults.c
/src/db_api/version.map
/src/plugins/checkpoint/blcr/cr_checkpoint.sh
/src/plugins/checkpoint/blcr/cr_restart.sh
/stamp-h1
/src/sacct/sacct
/src/sacctmgr/sacctmgr
/src/salloc/salloc
/src/sattach/sattach
/src/sbatch/sbatch
/src/sbcast/sbcast
/src/scancel/scancel
/src/scontrol/scontrol
/src/sdiag/sdiag
/src/sinfo/sinfo
/src/slurmctld/slurmctld
/src/slurmd/slurmd/slurmd
/src/slurmd/slurmstepd/slurmstepd
/src/slurmdbd/slurmdbd
/src/smap/smap
/src/sprio/sprio
/src/squeue/squeue
/src/sreport/sreport
/src/srun/srun
/src/sshare/sshare
/src/sstat/sstat
/src/strigger/strigger
/src/sview/sview
testsuite/expect/globals.local
TAGS

==> slurm-slurm-15-08-7-1/AUTHORS <==
Don Albert
Ernest Artiaga
Danny Auble
Susanne Balle
Anton Blanchard
Hongjia Cao
Chuck Clouston
Daniel Christians
Gilles Civario
Chris Dunlap
Joey Ekstrom
Kent Engstrom
Jim Garlick
Didier Gazen
Mark Grondona
Takao Hatazaki
Matthieu Hautreux
David Hoppner <0xffea(at)googlemail.com>
Christopher Holmes
Nathan Huff
David Jackson
Greg Johnson
Morris Jette
Jason King
Nancy Kritkausky
Eric Lin
Bernard Li
Puenlap Lee
Steven McDougall
Donna Mecozzi
Chris Morrone
Bryan O'Sullivan
Gennaro Oliva
Daniel Palermo
Dan Phung
Ashley Pittman
Vijay Ramasubramanian
Andy Riebs
Asier Roa
Miguel Ros
Federico Sacerdoti
Jeff Squyres
Keven Tew
Prashanth Tamraparni
Jay Windley
Ann-Marie Wunderlin

==> slurm-slurm-15-08-7-1/BUILD.NOTES <==
This information is meant primarily for the Slurm developers. System
administrators should read the instructions at
http://slurm.schedmd.com/quickstart_admin.html (also found in the file
doc/html/quickstart_admin.shtml). The "INSTALL" file contains generic Linux
build instructions.

Simple build/install on Linux:
   ./configure --enable-debug --prefix= --sysconfdir=
   make
   make install

To build the files in the contribs directory:
   make contrib
   make install-contrib
(The RPMs are built by default.)

If you make changes to any auxdir/* or Makefile.am file, then run _snowflake_
(where there are recent versions of autoconf, automake and libtool installed):
   ./autogen.sh
then check in the new Makefile.am and Makefile.in files.

Here is a step-by-step HOWTO for creating a new release of Slurm on a Linux
cluster (see the BlueGene- and AIX-specific notes below for some differences).

0. Get current copies of Slurm and buildfarm:
   > git clone https://@github.com/chaos/slurm.git
   > svn co https://eris.llnl.gov/svn/chaos/private/buildfarm/trunk buildfarm
   Place the buildfarm directory in your search path:
   > export PATH=~/buildfarm:$PATH

1. Update the NEWS and META files for the new release. In the META file, the
   API, Major, Minor, Micro, Version, and Release fields must all be
   up-to-date.
   **** DON'T UPDATE META UNTIL RIGHT BEFORE THE TAG ****
   The Release field should always be 1 unless one of the following is true:
   - Changes were made to the spec file, documentation, or example files,
     but not to code.
   - This is a prerelease (Release = 0.preX).

2. Tag the repository with the appropriate name for the new version. Note
   that the first three digits are the version number. For a proper release,
   the last digit is "1" (except for a rebuild without code changes, which
   could be "2"). For pre-releases, the last digit should be "0" followed by
   "pre#" or "rc#".
   > git tag -a slurm-2-6-7-1 -m "create tag v2.6.7"
   OR
   > git tag -a slurm-2-7-0-0pre5 -m "create tag v2.7.0-pre5"
   > git push --tags

3. Use the rpm make target to create the new RPMs. This requires a .rpmmacros
   (.rpmrc for newer versions of rpmbuild) file containing:
      %_slurm_sysconfdir /etc/slurm
      %_with_debug 1
      %_with_sgijob 1
      %_with_elan 1    (ONLY ON SYSTEMS WITH ELAN SWITCH)
   NOTE: build will make a tar-ball based upon ALL of the files in your
   current local directory. If that includes scratch files, everyone will get
   those files in the tar-ball. For that reason, it is a good idea to clone a
   clean copy of the repository and build from that:
   > git clone https://@github.com/chaos/slurm.git
   Build using the following syntax:
   > build --snapshot -s
   OR
   > build --nosnapshot -s
   --nosnapshot will name the tar-ball and RPMs based upon the META file.
   --snapshot will name the tar-ball and RPMs based upon the META file plus a
   timestamp. Do this to make a tar-ball for a non-tagged release.
   NOTE: should be a fully-qualified pathname

4. scp the files to schedmd.com into ~/www/download/latest or
   ~/www/download/development. Move the older files to ~/www/download/archive,
   log in to schedmd.com, cd to ~/download, and execute "php process.php" to
   update the web pages.

BlueGene build notes:

0. If on a bgp system and you want sview, export these variables:
   > export CFLAGS="-I/opt/gnome/lib/gtk-2.0/include -I/opt/gnome/lib/glib-2.0/include $CFLAGS"
   > export LIBS="-L/usr/X11R6/lib64 $LIBS"
   > export CMD_LDFLAGS='-L/usr/X11R6/lib64'
   > export PKG_CONFIG_PATH="/opt/gnome/lib64/pkgconfig/:$PKG_CONFIG_PATH"

1. Use the rpm make target to create the new RPMs.
   This requires a .rpmmacros (.rpmrc for newer versions of rpmbuild) file
   containing:
      %_prefix /usr
      %_slurm_sysconfdir /etc/slurm
      %_with_bluegene 1
      %_without_pam 1
      %_with_debug 1
   Build on the Service Node using the following syntax:
   > rpmbuild -ta slurm-...bz2
   The RPM files get written to the directory /usr/src/packages/RPMS/ppc64.

To build and run on AIX:

0. Get current copies of Slurm and buildfarm:
   > git clone https://@github.com/chaos/slurm.git
   > svn co https://eris.llnl.gov/svn/chaos/private/buildfarm/trunk buildfarm
   Put the buildfarm directory in your search path:
   > export PATH=~/buildfarm:$PATH
   Also, you will need several commands to appear FIRST in your PATH:
      /usr/local/tools/gnu/aix_5_64_fed/bin/install
      /usr/local/gnu/bin/tar
      /usr/bin/gcc
   I do this by making symlinks to those commands in the buildfarm directory,
   then making the buildfarm directory the first one in my PATH. Also, make
   certain that the "proctrack" rpm is installed.

1. Export some environment variables:
   > export OBJECT_MODE=32
   > export PKG_CONFIG="/usr/bin/pkg-config"

2. Build with:
   > ./configure --enable-debug --prefix=/opt/freeware \
        --sysconfdir=/opt/freeware/etc/slurm \
        --with-ssl=/opt/freeware --with-munge=/opt/freeware \
        --with-proctrack=/opt/freeware
   > make
   > make uninstall    # remove old shared libraries; AIX caches them
   > make install

3. To build RPMs (NOTE: GNU tools early in PATH as described above in #0),
   create a .rpmmacros file specifying system-specific files:
      #
      # RPM Macros for use with Slurm on AIX
      # The system-wide macros for RPM are in /usr/lib/rpm/macros
      # and this overrides a few of them
      #
      %_prefix /opt/freeware
      %_slurm_sysconfdir %{_prefix}/etc/slurm
      %_defaultdocdir %{_prefix}/doc
      %_with_debug 1
      %_with_aix 1
      %with_ssl "--with-ssl=/opt/freeware"
      %with_munge "--with-munge=/opt/freeware"
      %with_proctrack "--with-proctrack=/opt/freeware"
   Log in to the machine "uP". uP is currently the lowest-common-denominator
   AIX machine.
   NOTE: build will make a tar-ball based upon ALL of the files in your
   current local directory. If that includes scratch files, everyone will get
   those files in the tar-ball. For that reason, it is a good idea to clone a
   clean copy of the repository and build from that:
   > git clone https://@github.com/chaos/slurm.git
   Build using the following syntax:
   > export CC=/usr/bin/gcc
   > build --snapshot -s
   OR
   > build --nosnapshot -s
   --nosnapshot will name the tar-ball and RPMs based upon the META file.
   --snapshot will name the tar-ball and RPMs based upon the META file plus a
   timestamp. Do this to make a tar-ball for a non-tagged release.

4. Test POE after telling POE where to find Slurm's LoadLeveler wrapper:
   > export MP_RMLIB=./slurm_ll_api.so
   > export CHECKPOINT=yes

5. > poe hostname -rmpool debug

6. To debug, set SLURM_LL_API_DEBUG=3 before running poe; this will create a
   file /tmp/slurm.*  It can also be helpful to use the poe options
   "-ilevel 6 -pmdlog yes". There will be a log file created named
   /tmp/mplog..

7. If you update proctrack, be sure to run "slibclean" to clear the cached
   version.

8. Remove the RPMs that we don't want:
   > rm -f slurm-perlapi*rpm slurm-torque*rpm
   and install the other RPMs into /usr/admin/inst.images/slurm/aix5.3 on an
   OCF AIX machine (pdev is a good choice).

Debian build notes:

Since Debian doesn't have RPMs, the rpmbuild program cannot locate
dependencies, so build without them by patching the build program:

Index: build
===================================================================
--- build	(revision 173)
+++ build	(working copy)
@@ -798,6 +798,7 @@
 	$cmd .= " --define \"_tmppath $rpmdir/TMP\"";
 	$cmd .= " --define \"_topdir $rpmdir\"";
 	$cmd .= " --define \"build_bin_rpm 1\"";
+	$cmd .= " --nodeps";
 	if (defined $conf{rpm_dist}) {
 		my $dist = length $conf{rpm_dist} ?
 			$conf{rpm_dist} : "%{nil}";
 		$cmd .= " --define \"dist $dist\"";

AIX/Federation switch window problems:
   To clean switch windows:  ntblclean =w 8 -a sni0
   To get switch window status:  ntblstatus

BlueGene bglblock boot problem diagnosis:
   - Log on to the Service Node (bglsn, ubglsn)
   - Execute /admin/bglscripts/fatalras
     This will produce a list of failures including Rack and Midplane
     number R M
   - Translate the Rack and Midplane to the Slurm node id: smap -R r
   - Drain only the bad Slurm node; return the others to service using
     scontrol

Configuration file update procedures:
   - cd /usr/bgl/dist/slurm (on bgli)
   - co -l
   - vi
   - ci -u
   - make install
   - then run "dist_local slurm" on the SN and FENs to update /etc/slurm

Some RPM commands:
   rpm -qa | grep slurm                   (determine what is installed)
   rpm -qpl slurm-1.1.9-1.rpm             (check contents of an rpm)
   rpm -e slurm-1.1.8-1                   (erase an rpm)
   rpm --upgrade slurm-1.1.9-1.rpm        (replace existing rpm with new version)
   rpm -i --ignoresize slurm-1.1.9-1.rpm  (install a new rpm)

For main Slurm plugin installation on the BGL service node:
   rpm -i --force --nodeps --ignoresize slurm-1.1.9-1.rpm
   rpm -U --force --nodeps --ignoresize slurm-1.1.9-1.rpm   (upgrade option)

To clear a wedged job:
   /bgl/startMMCSconsole
   > delete bgljob ####
   > free RMP###

Starting and stopping daemons on Linux:
   /etc/init.d/slurm stop
   /etc/init.d/slurm start

Patches:
   - cd to the top-level src directory
   - Run the patch command with epilog_complete.patch as stdin:
     patch -p[path_level_to_filter] [--dry-run] < epilog_complete.patch

To get the process and job IDs with proctrack/sgi_job:
   - jstat -p

CVS and gnats:
   Include a "gnats:" reference, e.g. "(gnats:123)", as part of a cvs commit
   to automatically record that update in the gnats database.
   NOTE: This does not change the gnats bug state, but records the source
   files associated with the bug.
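The --dry-run option shown in the Patches section above can be scripted so a patch is applied only when it is already known to apply cleanly. Below is a minimal self-contained sketch of that pattern; the scratch directory, greeting.txt, and demo.patch are all made up for the demonstration and are not part of the Slurm tree:

```shell
#!/bin/sh
# Demonstrate the dry-run-then-apply pattern from the Patches section,
# using a scratch directory and a generated patch.
set -e
work=$(mktemp -d)
cd "$work"
mkdir -p a b
printf 'hello\n'       > a/greeting.txt
printf 'hello world\n' > b/greeting.txt
# diff exits non-zero when the files differ, so tolerate that under set -e
diff -u a/greeting.txt b/greeting.txt > demo.patch || true
cd a
# --dry-run reports whether the patch applies without modifying any files;
# apply it for real only if the dry run succeeds (-p1 strips the leading
# "a/" or "b/" path component, as in the real invocation above)
if patch -p1 --dry-run < ../demo.patch > /dev/null; then
    patch -p1 < ../demo.patch > /dev/null
else
    echo 'demo.patch does not apply cleanly; aborting' >&2
    exit 1
fi
cat greeting.txt    # hello world
```

The same if/else guard works unchanged for epilog_complete.patch, since patch's exit status under --dry-run tells you whether the real run would succeed.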
For memory leaks (for AIX use zerofault, zf; for Linux use valgrind):
   - Run configure with the option "--enable-memory-leak-debug" to completely
     release allocated memory when the daemons exit
   - valgrind --tool=memcheck --leak-check=yes --num-callers=8 \
        --leak-resolution=med ./slurmctld -Dc >valg.ctld.out 2>&1
   - valgrind --tool=memcheck --leak-check=yes --num-callers=8 \
        --leak-resolution=med ./slurmd -Dc >valg.slurmd.out 2>&1
     (probably on only one node of the cluster)
   - valgrind --tool=memcheck --leak-check=yes --num-callers=8 \
        --leak-resolution=med ./slurmdbd -D >valg.dbd.out 2>&1
   - Run the regression test. In the globals.local file include:
     "set enable_memory_leak_debug 1"
   - Shut down the daemons using "scontrol shutdown"
   - Examine the end of the log files for leaks. pthread_create() and
     dlopen() have small memory leaks on some systems, which do not grow
     over time.
   - Functions in the plugins will not have their symbols preserved when the
     plugin is unloaded, and the function names will appear as "???" in the
     valgrind backtrace after shutdown. Rebuilding the daemons without the
     configure option "--enable-memory-leak-debug" typically prevents the
     plugin from being unloaded, so the symbols will be properly reported.
     However, many memory leaks will then be reported due to the plugins not
     being unloaded. You will need to match the call sequence from the first
     log (with "--enable-memory-leak-debug") to the second log (without
     "--enable-memory-leak-debug", ignoring reported leaks that are not real
     leaks in the second log) to identify the full code path through the
     plugins.

Job profiling:
   - "export CFLAGS=-pg", then run "configure" and "make install" as usual
   - Run the slurm daemons through a stress test and exit normally
   - Run "gprof [executable-file] >outfile"

Before a new major release:
   - Test on ia64, i386, x86_64, BGQ, NRT/POE, Cray
   - Test on Elan and IB switches
   - Test fail-over of slurmctld
   - Test for memory leaks in slurmctld, slurmd and slurmdbd with various
     plugins
   - Change the API version number
   - Run "make check" (requires the "dejagnu" package)
   - Test that the prolog and epilog run
   - Run the test suite with SlurmUser NOT being self
   - Test for errors reported by the CLANG tool
     (NOTE: run "configure" with the "--enable-developer" option so assert
     functions take effect):
       scan-build -k -v make >m.sb.out 2>&1
     and look for output in /tmp/scan-build-

==> slurm-slurm-15-08-7-1/COPYING <==
                         SLURM LICENSE AGREEMENT

All Slurm code and documentation is available under the GNU General Public
License. Some tools in the "contribs" directory have other licenses. See the
documentation for individual contributed tools for details.

In addition, as a special exception, the copyright holders give permission to
link the code of portions of this program with the OpenSSL library under
certain conditions as described in each individual source file, and
distribute linked combinations including the two. You must obey the GNU
General Public License in all respects for all of the code used other than
OpenSSL. If you modify file(s) with this exception, you may extend this
exception to your version of the file(s), but you are not obligated to do so.
If you do not wish to do so, delete this exception statement from your
version. If you delete this exception statement from all source files in the
program, then also delete it here.

NO WARRANTY: Because the program is licensed free of charge, there is no
warranty for the program. See section 11 below for full details.
============================================================================= OUR NOTICE AND TERMS OF AND CONDITIONS OF THE GNU GENERAL PUBLIC LICENSE Auspices Portions of this work were performed under the auspices of the U.S. Department of Energy by Lawrence Livermore National Laboratory under Contract DE-AC52-07NA27344. Disclaimer This work was sponsored by an agency of the United States government. Neither the United States Government nor Lawrence Livermore National Security, LLC, nor any of their employees, makes any warranty, express or implied, or assumes any liability or responsibility for the accuracy, completeness, or usefulness of any information, apparatus, product, or process disclosed, or represents that its use would not infringe privately owned rights. References herein to any specific commercial products, process, or services by trade names, trademark, manufacturer or otherwise does not necessarily constitute or imply its endorsement, recommendation, or favoring by the United States Government or the Lawrence Livermore National Security, LLC. The views and opinions of authors expressed herein do not necessarily state or reflect those of the United States government or Lawrence Livermore National Security, LLC, and shall not be used for advertising or product endorsement purposes. ============================================================================= GNU GENERAL PUBLIC LICENSE Version 2, June 1991 Copyright (C) 1989, 1991 Free Software Foundation, Inc. 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA Everyone is permitted to copy and distribute verbatim copies of this license document, but changing it is not allowed. Preamble The licenses for most software are designed to take away your freedom to share and change it. By contrast, the GNU General Public License is intended to guarantee your freedom to share and change free software--to make sure the software is free for all its users. 
This General Public License applies to most of the Free Software Foundation's software and to any other program whose authors commit to using it. (Some other Free Software Foundation software is covered by the GNU Library General Public License instead.) You can apply it to your programs, too. When we speak of free software, we are referring to freedom, not price. Our General Public Licenses are designed to make sure that you have the freedom to distribute copies of free software (and charge for this service if you wish), that you receive source code or can get it if you want it, that you can change the software or use pieces of it in new free programs; and that you know you can do these things. To protect your rights, we need to make restrictions that forbid anyone to deny you these rights or to ask you to surrender the rights. These restrictions translate to certain responsibilities for you if you distribute copies of the software, or if you modify it. For example, if you distribute copies of such a program, whether gratis or for a fee, you must give the recipients all the rights that you have. You must make sure that they, too, receive or can get the source code. And you must show them these terms so they know their rights. We protect your rights with two steps: (1) copyright the software, and (2) offer you this license which gives you legal permission to copy, distribute and/or modify the software. Also, for each author's protection and ours, we want to make certain that everyone understands that there is no warranty for this free software. If the software is modified by someone else and passed on, we want its recipients to know that what they have is not the original, so that any problems introduced by others will not reflect on the original authors' reputations. Finally, any free program is threatened constantly by software patents. 
We wish to avoid the danger that redistributors of a free program will individually obtain patent licenses, in effect making the program proprietary. To prevent this, we have made it clear that any patent must be licensed for everyone's free use or not licensed at all. The precise terms and conditions for copying, distribution and modification follow. GNU GENERAL PUBLIC LICENSE TERMS AND CONDITIONS FOR COPYING, DISTRIBUTION AND MODIFICATION 0. This License applies to any program or other work which contains a notice placed by the copyright holder saying it may be distributed under the terms of this General Public License. The "Program", below, refers to any such program or work, and a "work based on the Program" means either the Program or any derivative work under copyright law: that is to say, a work containing the Program or a portion of it, either verbatim or with modifications and/or translated into another language. (Hereinafter, translation is included without limitation in the term "modification".) Each licensee is addressed as "you". Activities other than copying, distribution and modification are not covered by this License; they are outside its scope. The act of running the Program is not restricted, and the output from the Program is covered only if its contents constitute a work based on the Program (independent of having been made by running the Program). Whether that is true depends on what the Program does. 1. You may copy and distribute verbatim copies of the Program's source code as you receive it, in any medium, provided that you conspicuously and appropriately publish on each copy an appropriate copyright notice and disclaimer of warranty; keep intact all the notices that refer to this License and to the absence of any warranty; and give any other recipients of the Program a copy of this License along with the Program. 
You may charge a fee for the physical act of transferring a copy, and you may at your option offer warranty protection in exchange for a fee. 2. You may modify your copy or copies of the Program or any portion of it, thus forming a work based on the Program, and copy and distribute such modifications or work under the terms of Section 1 above, provided that you also meet all of these conditions: a) You must cause the modified files to carry prominent notices stating that you changed the files and the date of any change. b) You must cause any work that you distribute or publish, that in whole or in part contains or is derived from the Program or any part thereof, to be licensed as a whole at no charge to all third parties under the terms of this License. c) If the modified program normally reads commands interactively when run, you must cause it, when started running for such interactive use in the most ordinary way, to print or display an announcement including an appropriate copyright notice and a notice that there is no warranty (or else, saying that you provide a warranty) and that users may redistribute the program under these conditions, and telling the user how to view a copy of this License. (Exception: if the Program itself is interactive but does not normally print such an announcement, your work based on the Program is not required to print an announcement.) These requirements apply to the modified work as a whole. If identifiable sections of that work are not derived from the Program, and can be reasonably considered independent and separate works in themselves, then this License, and its terms, do not apply to those sections when you distribute them as separate works. 
But when you distribute the same sections as part of a whole which is a work based on the Program, the distribution of the whole must be on the terms of this License, whose permissions for other licensees extend to the entire whole, and thus to each and every part regardless of who wrote it. Thus, it is not the intent of this section to claim rights or contest your rights to work written entirely by you; rather, the intent is to exercise the right to control the distribution of derivative or collective works based on the Program. In addition, mere aggregation of another work not based on the Program with the Program (or with a work based on the Program) on a volume of a storage or distribution medium does not bring the other work under the scope of this License. 3. You may copy and distribute the Program (or a work based on it, under Section 2) in object code or executable form under the terms of Sections 1 and 2 above provided that you also do one of the following: a) Accompany it with the complete corresponding machine-readable source code, which must be distributed under the terms of Sections 1 and 2 above on a medium customarily used for software interchange; or, b) Accompany it with a written offer, valid for at least three years, to give any third party, for a charge no more than your cost of physically performing source distribution, a complete machine-readable copy of the corresponding source code, to be distributed under the terms of Sections 1 and 2 above on a medium customarily used for software interchange; or, c) Accompany it with the information you received as to the offer to distribute corresponding source code. (This alternative is allowed only for noncommercial distribution and only if you received the program in object code or executable form with such an offer, in accord with Subsection b above.) The source code for a work means the preferred form of the work for making modifications to it. 
For an executable work, complete source code means all the source code for all modules it contains, plus any associated interface definition files, plus the scripts used to control compilation and installation of the executable. However, as a special exception, the source code distributed need not include anything that is normally distributed (in either source or binary form) with the major components (compiler, kernel, and so on) of the operating system on which the executable runs, unless that component itself accompanies the executable. If distribution of executable or object code is made by offering access to copy from a designated place, then offering equivalent access to copy the source code from the same place counts as distribution of the source code, even though third parties are not compelled to copy the source along with the object code. 4. You may not copy, modify, sublicense, or distribute the Program except as expressly provided under this License. Any attempt otherwise to copy, modify, sublicense or distribute the Program is void, and will automatically terminate your rights under this License. However, parties who have received copies, or rights, from you under this License will not have their licenses terminated so long as such parties remain in full compliance. 5. You are not required to accept this License, since you have not signed it. However, nothing else grants you permission to modify or distribute the Program or its derivative works. These actions are prohibited by law if you do not accept this License. Therefore, by modifying or distributing the Program (or any work based on the Program), you indicate your acceptance of this License to do so, and all its terms and conditions for copying, distributing or modifying the Program or works based on it. 6. 
Each time you redistribute the Program (or any work based on the Program), the recipient automatically receives a license from the original licensor to copy, distribute or modify the Program subject to these terms and conditions. You may not impose any further restrictions on the recipients' exercise of the rights granted herein. You are not responsible for enforcing compliance by third parties to this License. 7. If, as a consequence of a court judgment or allegation of patent infringement or for any other reason (not limited to patent issues), conditions are imposed on you (whether by court order, agreement or otherwise) that contradict the conditions of this License, they do not excuse you from the conditions of this License. If you cannot distribute so as to satisfy simultaneously your obligations under this License and any other pertinent obligations, then as a consequence you may not distribute the Program at all. For example, if a patent license would not permit royalty-free redistribution of the Program by all those who receive copies directly or indirectly through you, then the only way you could satisfy both it and this License would be to refrain entirely from distribution of the Program. If any portion of this section is held invalid or unenforceable under any particular circumstance, the balance of the section is intended to apply and the section as a whole is intended to apply in other circumstances. It is not the purpose of this section to induce you to infringe any patents or other property right claims or to contest validity of any such claims; this section has the sole purpose of protecting the integrity of the free software distribution system, which is implemented by public license practices. 
Many people have made generous contributions to the wide range of software distributed through that system in reliance on consistent application of that system; it is up to the author/donor to decide if he or she is willing to distribute software through any other system and a licensee cannot impose that choice. This section is intended to make thoroughly clear what is believed to be a consequence of the rest of this License. 8. If the distribution and/or use of the Program is restricted in certain countries either by patents or by copyrighted interfaces, the original copyright holder who places the Program under this License may add an explicit geographical distribution limitation excluding those countries, so that distribution is permitted only in or among countries not thus excluded. In such case, this License incorporates the limitation as if written in the body of this License. 9. The Free Software Foundation may publish revised and/or new versions of the General Public License from time to time. Such new versions will be similar in spirit to the present version, but may differ in detail to address new problems or concerns. Each version is given a distinguishing version number. If the Program specifies a version number of this License which applies to it and "any later version", you have the option of following the terms and conditions either of that version or of any later version published by the Free Software Foundation. If the Program does not specify a version number of this License, you may choose any version ever published by the Free Software Foundation. 10. If you wish to incorporate parts of the Program into other free programs whose distribution conditions are different, write to the author to ask for permission. For software which is copyrighted by the Free Software Foundation, write to the Free Software Foundation; we sometimes make exceptions for this. 
Our decision will be guided by the two goals of preserving the free status of all derivatives of our free software and of promoting the sharing and reuse of software generally. NO WARRANTY 11. BECAUSE THE PROGRAM IS LICENSED FREE OF CHARGE, THERE IS NO WARRANTY FOR THE PROGRAM, TO THE EXTENT PERMITTED BY APPLICABLE LAW. EXCEPT WHEN OTHERWISE STATED IN WRITING THE COPYRIGHT HOLDERS AND/OR OTHER PARTIES PROVIDE THE PROGRAM "AS IS" WITHOUT WARRANTY OF ANY KIND, EITHER EXPRESSED OR IMPLIED, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE. THE ENTIRE RISK AS TO THE QUALITY AND PERFORMANCE OF THE PROGRAM IS WITH YOU. SHOULD THE PROGRAM PROVE DEFECTIVE, YOU ASSUME THE COST OF ALL NECESSARY SERVICING, REPAIR OR CORRECTION. 12. IN NO EVENT UNLESS REQUIRED BY APPLICABLE LAW OR AGREED TO IN WRITING WILL ANY COPYRIGHT HOLDER, OR ANY OTHER PARTY WHO MAY MODIFY AND/OR REDISTRIBUTE THE PROGRAM AS PERMITTED ABOVE, BE LIABLE TO YOU FOR DAMAGES, INCLUDING ANY GENERAL, SPECIAL, INCIDENTAL OR CONSEQUENTIAL DAMAGES ARISING OUT OF THE USE OR INABILITY TO USE THE PROGRAM (INCLUDING BUT NOT LIMITED TO LOSS OF DATA OR DATA BEING RENDERED INACCURATE OR LOSSES SUSTAINED BY YOU OR THIRD PARTIES OR A FAILURE OF THE PROGRAM TO OPERATE WITH ANY OTHER PROGRAMS), EVEN IF SUCH HOLDER OR OTHER PARTY HAS BEEN ADVISED OF THE POSSIBILITY OF SUCH DAMAGES. END OF TERMS AND CONDITIONS How to Apply These Terms to Your New Programs If you develop a new program, and you want it to be of the greatest possible use to the public, the best way to achieve this is to make it free software which everyone can redistribute and change under these terms. To do so, attach the following notices to the program. It is safest to attach them to the start of each source file to most effectively convey the exclusion of warranty; and each file should have at least the "copyright" line and a pointer to where the full notice is found. 
Copyright (C) This program is free software; you can redistribute it and/or modify it under the terms of the GNU General Public License as published by the Free Software Foundation; either version 2 of the License, or (at your option) any later version. This program is distributed in the hope that it will be useful, but WITHOUT ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU General Public License for more details. You should have received a copy of the GNU General Public License along with this program; if not, write to the Free Software Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA Also add information on how to contact you by electronic and paper mail. If the program is interactive, make it output a short notice like this when it starts in an interactive mode: Gnomovision version 69, Copyright (C) year name of author Gnomovision comes with ABSOLUTELY NO WARRANTY; for details type `show w'. This is free software, and you are welcome to redistribute it under certain conditions; type `show c' for details. The hypothetical commands `show w' and `show c' should show the appropriate parts of the General Public License. Of course, the commands you use may be called something other than `show w' and `show c'; they could even be mouse-clicks or menu items--whatever suits your program. You should also get your employer (if you work as a programmer) or your school, if any, to sign a "copyright disclaimer" for the program, if necessary. Here is a sample; alter the names: Yoyodyne, Inc., hereby disclaims all copyright interest in the program `Gnomovision' (which makes passes at compilers) written by James Hacker. , 1 April 1989 Ty Coon, President of Vice This General Public License does not permit incorporating your program into proprietary programs. 
If your program is a subroutine library, you may consider it more useful to permit linking proprietary applications with the library. If this is what you want to do, use the GNU Library General Public License instead of this License.

==> slurm-slurm-15-08-7-1/ChangeLog <== (empty)

==> slurm-slurm-15-08-7-1/DISCLAIMER <==

SLURM was produced at Lawrence Livermore National Laboratory in collaboration with various organizations.

Copyright (C) 2012-2013 Los Alamos National Security, LLC.
Copyright (C) 2011 Trinity Centre for High Performance Computing
Copyright (C) 2010-2015 SchedMD LLC
Copyright (C) 2009-2013 CEA/DAM/DIF
Copyright (C) 2009-2011 Centro Svizzero di Calcolo Scientifico (CSCS)
Copyright (C) 2008-2011 Lawrence Livermore National Security
Copyright (C) 2008 Vijay Ramasubramanian
Copyright (C) 2007-2008 Red Hat, Inc.
Copyright (C) 2007-2013 National University of Defense Technology, China
Copyright (C) 2007-2015 Bull
Copyright (C) 2005-2008 Hewlett-Packard Development Company, L.P.
Copyright (C) 2004-2009, Marcus Holland-Moritz
Copyright (C) 2002-2007 The Regents of the University of California
Copyright (C) 2002-2003 Linux NetworX
Copyright (C) 2002 University of Chicago
Copyright (C) 2001, Paul Marquess
Copyright (C) 2000 Markus Friedl
Copyright (C) 1999, Kenneth Albanowski
Copyright (C) 1998 Todd C. Miller
Copyright (C) 1996-2003 Maximum Entropy Data Consultants Ltd,
Copyright (C) 1995 Tatu Ylonen, Espoo, Finland
Copyright (C) 1989-1994, 1996-1999, 2001 Free Software Foundation, Inc.

Many other organizations contributed code and/or documentation without including a copyright notice.
Written by:

Amjad Majid Ali (Colorado State University)
Par Andersson (National Supercomputer Centre, Sweden)
Don Albert (Bull)
Ernest Artiaga (Barcelona Supercomputer Center, Spain)
Danny Auble (LLNL, SchedMD LLC)
Susanne Balle (HP)
Anton Blanchard (Samba)
Janne Blomqvist (Aalto University, Finland)
David Bremer (LLNL)
Jon Bringhurst (LANL)
Bill Brophy (Bull)
Hongjia Cao (National University of Defense Technology, China)
Daniel Christians (HP)
Gilles Civario (Bull)
Chuck Clouston (Bull)
Joseph Donaghy (LLNL)
Chris Dunlap (LLNL)
Joey Ekstrom (LLNL/Brigham Young University)
Josh England (TGS Management Corporation)
Kent Engstrom (National Supercomputer Centre, Sweden)
Jim Garlick (LLNL)
Didier Gazen (Laboratoire d'Aerologie, France)
Raphael Geissert (Debian)
Yiannis Georgiou (Bull)
Andriy Grytsenko (Massive Solutions Limited, Ukraine)
Mark Grondona (LLNL)
Takao Hatazaki (HP, Japan)
Matthieu Hautreux (CEA, France)
Chris Holmes (HP)
David Hoppner
Nathan Huff (North Dakota State University)
David Jackson (Adaptive Computing)
Morris Jette (LLNL, SchedMD LLC)
Klaus Joas (University Karlsruhe, Germany)
Greg Johnson (LANL)
Jason King (LLNL)
Aaron Knister (Environmental Protection Agency)
Nancy Kritkausky (Bull)
Roman Kurakin (Institute of Natural Science and Ecology, Russia)
Eric Lin (Bull)
Don Lipari (LLNL)
Puenlap Lee (Bull)
Dennis Leepow
Bernard Li (Genome Sciences Centre, Canada)
Donald Lipari (LLNL)
Steven McDougall (SiCortex)
Donna Mecozzi (LLNL)
Bjorn-Helge Mevik (University of Oslo, Norway)
Chris Morrone (LLNL)
Pere Munt (Barcelona Supercomputer Center, Spain)
Michal Novotny (Masaryk University, Czech Republic)
Bryan O'Sullivan (Pathscale)
Gennaro Oliva (Institute of High Performance Computing and Networking, Italy)
Alejandro Lucero Palau (Barcelona Supercomputer Center, Spain)
Daniel Palermo (HP)
Dan Phung (LLNL/Columbia University)
Ashley Pittman (Quadrics, UK)
Vijay Ramasubramanian (University of Maryland)
Krishnakumar Ravi[KK] (HP)
Petter Reinholdtsen
(University of Oslo, Norway)
Gerrit Renker (Swiss National Computer Centre)
Andy Riebs (HP)
Asier Roa (Barcelona Supercomputer Center, Spain)
Miguel Ros (Barcelona Supercomputer Center, Spain)
Beat Rubischon (DALCO AG, Switzerland)
Dan Rusak (Bull)
Eygene Ryabinkin (Kurchatov Institute, Russia)
Federico Sacerdoti (D.E. Shaw)
Rod Schultz (Bull)
Tyler Strickland (University of Florida)
Jeff Squyres (LAM MPI)
Prashanth Tamraparni (HP, India)
Jimmy Tang (Trinity College, Ireland)
Kevin Tew (LLNL/Brigham Young University)
Adam Todorski (Rensselaer Polytechnic Institute)
Nathan Weeks (Iowa State University)
Tim Wickberg (Rensselaer Polytechnic Institute)
Ramiro Brito Willmersdorf (Universidade Federal de Pernambuco, Brazil)
Jay Windley (Linux NetworX)
Anne-Marie Wunderlin (Bull)

CODE-OCEC-09-009.

All rights reserved.

This file is part of SLURM, a resource management program. For details, see . Please also read the supplied file: DISCLAIMER.

SLURM is free software; you can redistribute it and/or modify it under the terms of the GNU General Public License as published by the Free Software Foundation; either version 2 of the License, or (at your option) any later version.

SLURM is distributed in the hope that it will be useful, but WITHOUT ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU General Public License for more details.

You should have received a copy of the GNU General Public License along with SLURM; if not, write to the Free Software Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA.

OUR NOTICE AND TERMS AND CONDITIONS OF THE GNU GENERAL PUBLIC LICENSE

Our Preamble Notice

Auspices

This work was performed under the auspices of the U.S. Department of Energy by Lawrence Livermore National Laboratory under Contract DE-AC52-07NA27344.

Disclaimer

This work was sponsored by an agency of the United States government.
Neither the United States Government nor Lawrence Livermore National Security, LLC, nor any of their employees, makes any warranty, express or implied, or assumes any liability or responsibility for the accuracy, completeness, or usefulness of any information, apparatus, product, or process disclosed, or represents that its use would not infringe privately owned rights. References herein to any specific commercial products, process, or services by trade names, trademark, manufacturer or otherwise do not necessarily constitute or imply endorsement, recommendation, or favoring by the United States Government or Lawrence Livermore National Security, LLC. The views and opinions of authors expressed herein do not necessarily state or reflect those of the United States government or Lawrence Livermore National Security, LLC, and shall not be used for advertising or product endorsement purposes.

The precise terms and conditions for copying, distribution and modification are provided in the file named "COPYING" in this directory.

==> slurm-slurm-15-08-7-1/INSTALL <==

Copyright 1994, 1995, 1996, 1999, 2000, 2001, 2002 Free Software Foundation, Inc.

This file is free documentation; the Free Software Foundation gives unlimited permission to copy, distribute and modify it.

Basic Installation
==================

These are generic Linux installation instructions. Build instructions specific to Slurm are available at http://slurm.schedmd.com/quickstart_admin.html (also found in the file doc/html/quickstart_admin.shtml).

The `configure' shell script attempts to guess correct values for various system-dependent variables used during compilation. It uses those values to create a `Makefile' in each directory of the package. It may also create one or more `.h' files containing system-dependent definitions.
Finally, it creates a shell script `config.status' that you can run in the future to recreate the current configuration, and a file `config.log' containing compiler output (useful mainly for debugging `configure').

It can also use an optional file (typically called `config.cache' and enabled with `--cache-file=config.cache' or simply `-C') that saves the results of its tests to speed up reconfiguring. (Caching is disabled by default to prevent problems with accidental use of stale cache files.)

If you need to do unusual things to compile the package, please try to figure out how `configure' could check whether to do them, and mail diffs or instructions to the address given in the `README' so they can be considered for the next release. If you are using the cache, and at some point `config.cache' contains results you don't want to keep, you may remove or edit it.

The file `configure.ac' (or `configure.in') is used to create `configure' by a program called `autoconf'. You only need `configure.ac' if you want to change it or regenerate `configure' using a newer version of `autoconf'.

The simplest way to build and install this package is:

  1. `cd' to the directory containing the package's source code and type
     `./configure' to configure the package for your system. If you're
     using `csh' on an old version of System V, you might need to type
     `sh ./configure' instead to prevent `csh' from trying to execute
     `configure' itself.

     Running `configure' takes awhile. While running, it prints some
     messages telling which features it is checking for.

  2. Type `make' to compile the package.

  3. Optionally, type `make check' to run any self-tests that come with
     the package.

  4. Type `make install' to install the programs and any data files and
     documentation.

  5. You can remove the program binaries and object files from the
     source code directory by typing `make clean'.
To also remove the files that `configure' created (so you can compile the package for a different kind of computer), type `make distclean'. There is also a `make maintainer-clean' target, but that is intended mainly for the package's developers. If you use it, you may have to get all sorts of other programs in order to regenerate files that came with the distribution.

Alternate build and installation instructions for systems supporting RPM:

  1. rpmbuild -ta slurm.*.tar.bz2
  2. rpm --install

Compilers and Options
=====================

Some systems require unusual options for compilation or linking that the `configure' script does not know about. Run `./configure --help' for details on some of the pertinent environment variables.

You can give `configure' initial values for variables by setting them in the environment. You can do that on the command line like this:

  ./configure CC=c89 CFLAGS=-O2 LIBS=-lposix

*Note Defining Variables::, for more details.

Compiling For Multiple Architectures
====================================

You can compile the package for more than one kind of computer at the same time, by placing the object files for each architecture in their own directory. To do this, you must use a version of `make' that supports the `VPATH' variable, such as GNU `make'. `cd' to the directory where you want the object files and executables to go and run the `configure' script. `configure' automatically checks for the source code in the directory that `configure' is in and in `..'.

If you have to use a `make' that does not support the `VPATH' variable, you have to compile the package for one architecture at a time in the source code directory. After you have installed the package for one architecture, use `make distclean' before reconfiguring for another architecture.

Installation Names
==================

By default, `make install' will install the package's files in `/usr/local/bin', `/usr/local/man', etc.
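All of the default paths above derive from a single `prefix` variable; options such as `--exec-prefix' and `--bindir' simply override individual links in that chain. The following sketch mimics autoconf's directory-variable layering in plain shell (the variable names mirror autoconf's conventions; this is illustrative only and is not part of the Slurm build itself):

```shell
# Everything defaults from $prefix unless overridden, just as
# `./configure --prefix=PATH' (and friends) would do.
prefix=/usr/local            # what --prefix=PATH sets
exec_prefix=$prefix          # what --exec-prefix=PATH would override
bindir=$exec_prefix/bin      # architecture-specific files (programs)
mandir=$prefix/man           # architecture-independent files (man pages)
echo "$bindir"               # -> /usr/local/bin
echo "$mandir"               # -> /usr/local/man
```

Change `prefix` (e.g. to /opt/slurm) and the whole tree moves accordingly, while `--exec-prefix' lets architecture-specific files live under a different root than the documentation.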
You can specify an installation prefix other than `/usr/local' by giving `configure' the option `--prefix=PATH'.

You can specify separate installation prefixes for architecture-specific files and architecture-independent files. If you give `configure' the option `--exec-prefix=PATH', the package will use PATH as the prefix for installing programs and libraries. Documentation and other data files will still use the regular prefix.

In addition, if you use an unusual directory layout you can give options like `--bindir=PATH' to specify different values for particular kinds of files. Run `configure --help' for a list of the directories you can set and what kinds of files go in them.

If the package supports it, you can cause programs to be installed with an extra prefix or suffix on their names by giving `configure' the option `--program-prefix=PREFIX' or `--program-suffix=SUFFIX'.

Optional Features
=================

Some packages pay attention to `--enable-FEATURE' options to `configure', where FEATURE indicates an optional part of the package. They may also pay attention to `--with-PACKAGE' options, where PACKAGE is something like `gnu-as' or `x' (for the X Window System). The `README' should mention any `--enable-' and `--with-' options that the package recognizes.

For packages that use the X Window System, `configure' can usually find the X include and library files automatically, but if it doesn't, you can use the `configure' options `--x-includes=DIR' and `--x-libraries=DIR' to specify their locations.

Specifying the System Type
==========================

There may be some features `configure' cannot figure out automatically, but needs to determine by the type of machine the package will run on. Usually, assuming the package is built to be run on the _same_ architectures, `configure' can figure that out, but if it prints a message saying it cannot guess the machine type, give it the `--build=TYPE' option.
TYPE can either be a short name for the system type, such as `sun4', or a canonical name which has the form:

  CPU-COMPANY-SYSTEM

where SYSTEM can have one of these forms:

  OS KERNEL-OS

See the file `config.sub' for the possible values of each field. If `config.sub' isn't included in this package, then this package doesn't need to know the machine type.

If you are _building_ compiler tools for cross-compiling, you should use the `--target=TYPE' option to select the type of system they will produce code for.

If you want to _use_ a cross compiler, that generates code for a platform different from the build platform, you should specify the "host" platform (i.e., that on which the generated programs will eventually be run) with `--host=TYPE'.

Sharing Defaults
================

If you want to set default values for `configure' scripts to share, you can create a site shell script called `config.site' that gives default values for variables like `CC', `cache_file', and `prefix'. `configure' looks for `PREFIX/share/config.site' if it exists, then `PREFIX/etc/config.site' if it exists. Or, you can set the `CONFIG_SITE' environment variable to the location of the site script. A warning: not all `configure' scripts look for a site script.

Defining Variables
==================

Variables not defined in a site shell script can be set in the environment passed to `configure'. However, some packages may run configure again during the build, and the customized values of these variables may be lost. In order to avoid this problem, you should set them in the `configure' command line, using `VAR=value'. For example:

  ./configure CC=/usr/local2/bin/gcc

will cause the specified gcc to be used as the C compiler (unless it is overridden in the site shell script).

`configure' Invocation
======================

`configure' recognizes the following options to control how it operates.

`--help'
`-h'
     Print a summary of the options to `configure', and exit.
`--version'
`-V'
     Print the version of Autoconf used to generate the `configure'
     script, and exit.

`--cache-file=FILE'
     Enable the cache: use and save the results of the tests in FILE,
     traditionally `config.cache'. FILE defaults to `/dev/null' to
     disable caching.

`--config-cache'
`-C'
     Alias for `--cache-file=config.cache'.

`--quiet'
`--silent'
`-q'
     Do not print messages saying which checks are being made. To
     suppress all normal output, redirect it to `/dev/null' (any error
     messages will still be shown).

`--srcdir=DIR'
     Look for the package's source code in directory DIR. Usually
     `configure' can determine that directory automatically.

`configure' also accepts some other, not widely useful, options. Run `configure --help' for more details.

==> slurm-slurm-15-08-7-1/LICENSE.OpenSSL <==

/*
 * (c) 2002, 2003, 2004 by Jason McLaughlin and Riadh Elloumi
 *
 * This program is free software; you can redistribute it and/or
 * modify it under the terms of the GNU General Public License as
 * published by the Free Software Foundation; either version 2 of the
 * License, or (at your option) any later version.
 *
 * This program is distributed in the hope that it will be useful, but
 * is provided AS IS, WITHOUT ANY WARRANTY; without even the implied
 * warranty of MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE, and
 * NON-INFRINGEMENT. See the GNU General Public License for more details.
 *
 * You should have received a copy of the GNU General Public License
 * along with this program; if not, write to the Free Software
 * Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston,
 * MA 02110-1301 USA.
 *
 * In addition, as a special exception, the copyright holders give
 * permission to link the code of portions of this program with the
 * OpenSSL library under certain conditions as described in each
 * individual source file, and distribute linked combinations
 * including the two.
* You must obey the GNU General Public License in all respects * for all of the code used other than OpenSSL. If you modify * file(s) with this exception, you may extend this exception to your * version of the file(s), but you are not obligated to do so. If you * do not wish to do so, delete this exception statement from your * version. If you delete this exception statement from all source * files in the program, then also delete it here. */ Certain source files in this program permit linking with the OpenSSL library (http://www.openssl.org), which otherwise wouldn't be allowed under the GPL. For purposes of identifying OpenSSL, most source files giving this permission limit it to versions of OpenSSL having a license identical to that listed in this file (LICENSE.OpenSSL). It is not necessary for the copyright years to match between this file and the OpenSSL version in question. However, note that because this file is an extension of the license statements of these source files, this file may not be changed except with permission from all copyright holders of source files in this program which reference this file. LICENSE ISSUES ============== The OpenSSL toolkit stays under a dual license, i.e. both the conditions of the OpenSSL License and the original SSLeay license apply to the toolkit. See below for the actual license texts. Actually both licenses are BSD-style Open Source licenses. In case of any license issues related to OpenSSL please contact openssl-core@openssl.org. OpenSSL License --------------- /* ==================================================================== * Copyright (c) 1998-2001 The OpenSSL Project. All rights reserved. * * Redistribution and use in source and binary forms, with or without * modification, are permitted provided that the following conditions * are met: * * 1. Redistributions of source code must retain the above copyright * notice, this list of conditions and the following disclaimer. * * 2. 
Redistributions in binary form must reproduce the above copyright * notice, this list of conditions and the following disclaimer in * the documentation and/or other materials provided with the * distribution. * * 3. All advertising materials mentioning features or use of this * software must display the following acknowledgment: * "This product includes software developed by the OpenSSL Project * for use in the OpenSSL Toolkit. (http://www.openssl.org/)" * * 4. The names "OpenSSL Toolkit" and "OpenSSL Project" must not be used to * endorse or promote products derived from this software without * prior written permission. For written permission, please contact * openssl-core@openssl.org. * * 5. Products derived from this software may not be called "OpenSSL" * nor may "OpenSSL" appear in their names without prior written * permission of the OpenSSL Project. * * 6. Redistributions of any form whatsoever must retain the following * acknowledgment: * "This product includes software developed by the OpenSSL Project * for use in the OpenSSL Toolkit (http://www.openssl.org/)" * * THIS SOFTWARE IS PROVIDED BY THE OpenSSL PROJECT ``AS IS'' AND ANY * EXPRESSED OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE * IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR * PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE OpenSSL PROJECT OR * ITS CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, * SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT * NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; * LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) * HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, * STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) * ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED * OF THE POSSIBILITY OF SUCH DAMAGE. 
* ==================================================================== * * This product includes cryptographic software written by Eric Young * (eay@cryptsoft.com). This product includes software written by Tim * Hudson (tjh@cryptsoft.com). * */ Original SSLeay License ----------------------- /* Copyright (C) 1995-1998 Eric Young (eay@cryptsoft.com) * All rights reserved. * * This package is an SSL implementation written * by Eric Young (eay@cryptsoft.com). * The implementation was written so as to conform with Netscapes SSL. * * This library is free for commercial and non-commercial use as long as * the following conditions are aheared to. The following conditions * apply to all code found in this distribution, be it the RC4, RSA, * lhash, DES, etc., code; not just the SSL code. The SSL documentation * included with this distribution is covered by the same copyright terms * except that the holder is Tim Hudson (tjh@cryptsoft.com). * * Copyright remains Eric Young's, and as such any Copyright notices in * the code are not to be removed. * If this package is used in a product, Eric Young should be given attribution * as the author of the parts of the library used. * This can be in the form of a textual message at program startup or * in documentation (online or textual) provided with the package. * * Redistribution and use in source and binary forms, with or without * modification, are permitted provided that the following conditions * are met: * 1. Redistributions of source code must retain the copyright * notice, this list of conditions and the following disclaimer. * 2. Redistributions in binary form must reproduce the above copyright * notice, this list of conditions and the following disclaimer in the * documentation and/or other materials provided with the distribution. * 3. 
All advertising materials mentioning features or use of this software * must display the following acknowledgement: * "This product includes cryptographic software written by * Eric Young (eay@cryptsoft.com)" * The word 'cryptographic' can be left out if the rouines from the library * being used are not cryptographic related :-). * 4. If you include any Windows specific code (or a derivative thereof) from * the apps directory (application code) you must include an acknowledgement: * "This product includes software written by Tim Hudson (tjh@cryptsoft.com)" * * THIS SOFTWARE IS PROVIDED BY ERIC YOUNG ``AS IS'' AND * ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE * IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE * ARE DISCLAIMED. IN NO EVENT SHALL THE AUTHOR OR CONTRIBUTORS BE LIABLE * FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL * DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS * OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) * HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT * LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY * OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF * SUCH DAMAGE. * * The licence and distribution terms for any publically available version or * derivative of this code cannot be changed. i.e. this code cannot simply be * copied and put under another distribution licence * [including the GNU Public Licence.] 
 */

==> slurm-slurm-15-08-7-1/META <==

##
# $Id$
##
# Metadata for RPM/TAR makefile targets
##
# See src/api/Makefile.am for guidance on setting API_ values
##
Meta: 1
Name: slurm
Major: 15
Minor: 08
Micro: 7
Version: 15.08.7
Release: 1

##
# When making a new Major/Minor version, update
# src/common/slurm_protocol_common.h with a new SLURM_PROTOCOL_VERSION
# signifying the old version, so the slurmdbd can continue to send the
# old protocol version. In src/common/slurm_protocol_util.c,
# check_header_version() and _get_slurm_version() also need to be
# updated when such changes are made.
#
# NOTE: The API version can not be the same as the Slurm version above.
# The version in the code is referenced as a uint16_t, which, if 1403
# were the API_CURRENT, would go over the limit. So keep it a
# relatively small number.
#
# NOTE: The values below are used to set up environment variables in
# the config.h file that may be used throughout Slurm, so don't remove
# them.
##
API_CURRENT: 29
API_AGE: 0
API_REVISION: 0

==> slurm-slurm-15-08-7-1/Makefile.am <==

AUTOMAKE_OPTIONS = foreign
ACLOCAL_AMFLAGS = -I auxdir

SUBDIRS = auxdir src testsuite doc

EXTRA_DIST = \
	etc/bluegene.conf.example \
	etc/cgroup.conf.example \
	etc/cgroup.release_common.example.in \
	etc/cgroup_allowed_devices_file.conf.example \
	etc/init.d.slurm.in \
	etc/init.d.slurmdbd.in \
	etc/layouts.d.power.conf.example \
	etc/slurm.conf.example \
	etc/slurm.epilog.clean \
	etc/slurmctld.service.in \
	etc/slurmd.service.in \
	etc/slurmdbd.conf.example \
	etc/slurmdbd.service.in \
	autogen.sh \
	slurm.spec \
	README.rst \
	RELEASE_NOTES \
	DISCLAIMER \
	COPYING \
	AUTHORS \
	INSTALL \
	LICENSE.OpenSSL \
	NEWS \
	ChangeLog \
	META \
	config.xml

pkginclude_HEADERS = \
	slurm/pmi.h \
	slurm/slurm.h \
	slurm/slurmdb.h \
	slurm/slurm_errno.h \
	slurm/smd_ns.h \
	slurm/spank.h

MAINTAINERCLEANFILES = \
	aclocal.m4 config.guess config.xml \
	config.h.in config.sub configure install-sh \
	ltconfig ltmain.sh missing mkinstalldirs \
	slurm/slurm.h \
	stamp-h.in

distclean-local:
	-(cd $(top_srcdir) && rm -rf autom4te*.cache autoscan.*)
	-(cd $(top_srcdir) && rm -rf $(PACKAGE)-*)

mrproper: distclean-local clean
	-(cd $(top_srcdir) && rm -rf autom4te.cache config.h config.log)
	-(cd $(top_srcdir) && rm -rf config.status libtool stamp-h1)
	-(cd $(top_srcdir)/auxdir && rm -rf mkinstalldirs)
	-(cd $(top_srcdir)/slurm && rm -rf stamp-h2 slurm.h)
	-find $(top_srcdir)/src -name "Makefile" -exec rm {} \;
	-find $(top_srcdir) -depth -name ".deps" -exec rm -rf {} \;

contrib:
	@cd contribs && \
	$(MAKE) && \
	cd ..;

install-contrib:
	@cd contribs && \
	$(MAKE) DESTDIR=$(DESTDIR) install && \
	cd ..;

==> slurm-slurm-15-08-7-1/Makefile.in <==

# Makefile.in generated by automake 1.14.1 from Makefile.am.
# @configure_input@

# Copyright (C) 1994-2013 Free Software Foundation, Inc.
# This Makefile.in is free software; the Free Software Foundation # gives unlimited permission to copy and/or distribute it, # with or without modifications, as long as this notice is preserved. # This program is distributed in the hope that it will be useful, # but WITHOUT ANY WARRANTY, to the extent permitted by law; without # even the implied warranty of MERCHANTABILITY or FITNESS FOR A # PARTICULAR PURPOSE. @SET_MAKE@ VPATH = @srcdir@ am__is_gnu_make = test -n '$(MAKEFILE_LIST)' && test -n '$(MAKELEVEL)' am__make_running_with_option = \ case $${target_option-} in \ ?) ;; \ *) echo "am__make_running_with_option: internal error: invalid" \ "target option '$${target_option-}' specified" >&2; \ exit 1;; \ esac; \ has_opt=no; \ sane_makeflags=$$MAKEFLAGS; \ if $(am__is_gnu_make); then \ sane_makeflags=$$MFLAGS; \ else \ case $$MAKEFLAGS in \ *\\[\ \ ]*) \ bs=\\; \ sane_makeflags=`printf '%s\n' "$$MAKEFLAGS" \ | sed "s/$$bs$$bs[$$bs $$bs ]*//g"`;; \ esac; \ fi; \ skip_next=no; \ strip_trailopt () \ { \ flg=`printf '%s\n' "$$flg" | sed "s/$$1.*$$//"`; \ }; \ for flg in $$sane_makeflags; do \ test $$skip_next = yes && { skip_next=no; continue; }; \ case $$flg in \ *=*|--*) continue;; \ -*I) strip_trailopt 'I'; skip_next=yes;; \ -*I?*) strip_trailopt 'I';; \ -*O) strip_trailopt 'O'; skip_next=yes;; \ -*O?*) strip_trailopt 'O';; \ -*l) strip_trailopt 'l'; skip_next=yes;; \ -*l?*) strip_trailopt 'l';; \ -[dEDm]) skip_next=yes;; \ -[JT]) skip_next=yes;; \ esac; \ case $$flg in \ *$$target_option*) has_opt=yes; break;; \ esac; \ done; \ test $$has_opt = yes am__make_dryrun = (target_option=n; $(am__make_running_with_option)) am__make_keepgoing = (target_option=k; $(am__make_running_with_option)) pkgdatadir = $(datadir)/@PACKAGE@ pkgincludedir = $(includedir)/@PACKAGE@ pkglibdir = $(libdir)/@PACKAGE@ pkglibexecdir = $(libexecdir)/@PACKAGE@ am__cd = CDPATH="$${ZSH_VERSION+.}$(PATH_SEPARATOR)" && cd install_sh_DATA = $(install_sh) -c -m 644 install_sh_PROGRAM = $(install_sh) 
-c install_sh_SCRIPT = $(install_sh) -c INSTALL_HEADER = $(INSTALL_DATA) transform = $(program_transform_name) NORMAL_INSTALL = : PRE_INSTALL = : POST_INSTALL = : NORMAL_UNINSTALL = : PRE_UNINSTALL = : POST_UNINSTALL = : build_triplet = @build@ host_triplet = @host@ target_triplet = @target@ subdir = . DIST_COMMON = $(srcdir)/Makefile.in $(srcdir)/Makefile.am \ $(top_srcdir)/configure $(am__configure_deps) \ $(srcdir)/config.h.in $(top_srcdir)/slurm/slurm.h.in \ $(srcdir)/config.xml.in \ $(top_srcdir)/contribs/perlapi/libslurm/perl/Makefile.PL.in \ $(top_srcdir)/contribs/perlapi/libslurmdb/perl/Makefile.PL.in \ $(top_srcdir)/contribs/phpext/slurm_php/config.m4.in \ $(top_srcdir)/etc/cgroup.release_common.example.in \ $(top_srcdir)/etc/init.d.slurm.in \ $(top_srcdir)/etc/init.d.slurmdbd.in \ $(top_srcdir)/etc/slurmctld.service.in \ $(top_srcdir)/etc/slurmd.service.in \ $(top_srcdir)/etc/slurmdbd.service.in $(pkginclude_HEADERS) \ AUTHORS COPYING ChangeLog INSTALL NEWS \ $(top_srcdir)/auxdir/compile $(top_srcdir)/auxdir/config.guess \ $(top_srcdir)/auxdir/config.sub \ $(top_srcdir)/auxdir/install-sh $(top_srcdir)/auxdir/ltmain.sh \ $(top_srcdir)/auxdir/missing ACLOCAL_M4 = $(top_srcdir)/aclocal.m4 am__aclocal_m4_deps = $(top_srcdir)/auxdir/ax_lib_hdf5.m4 \ $(top_srcdir)/auxdir/ax_pthread.m4 \ $(top_srcdir)/auxdir/libtool.m4 \ $(top_srcdir)/auxdir/ltoptions.m4 \ $(top_srcdir)/auxdir/ltsugar.m4 \ $(top_srcdir)/auxdir/ltversion.m4 \ $(top_srcdir)/auxdir/lt~obsolete.m4 \ $(top_srcdir)/auxdir/slurm.m4 \ $(top_srcdir)/auxdir/x_ac__system_configuration.m4 \ $(top_srcdir)/auxdir/x_ac_affinity.m4 \ $(top_srcdir)/auxdir/x_ac_aix.m4 \ $(top_srcdir)/auxdir/x_ac_blcr.m4 \ $(top_srcdir)/auxdir/x_ac_bluegene.m4 \ $(top_srcdir)/auxdir/x_ac_cflags.m4 \ $(top_srcdir)/auxdir/x_ac_cray.m4 \ $(top_srcdir)/auxdir/x_ac_curl.m4 \ $(top_srcdir)/auxdir/x_ac_databases.m4 \ $(top_srcdir)/auxdir/x_ac_debug.m4 \ $(top_srcdir)/auxdir/x_ac_dlfcn.m4 \ $(top_srcdir)/auxdir/x_ac_env.m4 \ 
$(top_srcdir)/auxdir/x_ac_freeipmi.m4 \ $(top_srcdir)/auxdir/x_ac_gpl_licensed.m4 \ $(top_srcdir)/auxdir/x_ac_hwloc.m4 \ $(top_srcdir)/auxdir/x_ac_iso.m4 \ $(top_srcdir)/auxdir/x_ac_json.m4 \ $(top_srcdir)/auxdir/x_ac_lua.m4 \ $(top_srcdir)/auxdir/x_ac_man2html.m4 \ $(top_srcdir)/auxdir/x_ac_munge.m4 \ $(top_srcdir)/auxdir/x_ac_ncurses.m4 \ $(top_srcdir)/auxdir/x_ac_netloc.m4 \ $(top_srcdir)/auxdir/x_ac_nrt.m4 \ $(top_srcdir)/auxdir/x_ac_ofed.m4 \ $(top_srcdir)/auxdir/x_ac_pam.m4 \ $(top_srcdir)/auxdir/x_ac_printf_null.m4 \ $(top_srcdir)/auxdir/x_ac_ptrace.m4 \ $(top_srcdir)/auxdir/x_ac_readline.m4 \ $(top_srcdir)/auxdir/x_ac_rrdtool.m4 \ $(top_srcdir)/auxdir/x_ac_setpgrp.m4 \ $(top_srcdir)/auxdir/x_ac_setproctitle.m4 \ $(top_srcdir)/auxdir/x_ac_sgi_job.m4 \ $(top_srcdir)/auxdir/x_ac_slurm_ssl.m4 \ $(top_srcdir)/auxdir/x_ac_sun_const.m4 \ $(top_srcdir)/configure.ac am__configure_deps = $(am__aclocal_m4_deps) $(CONFIGURE_DEPENDENCIES) \ $(ACLOCAL_M4) am__CONFIG_DISTCLEAN_FILES = config.status config.cache config.log \ configure.lineno config.status.lineno mkinstalldirs = $(install_sh) -d CONFIG_HEADER = config.h $(top_builddir)/slurm/slurm.h CONFIG_CLEAN_FILES = config.xml \ contribs/perlapi/libslurm/perl/Makefile.PL \ contribs/perlapi/libslurmdb/perl/Makefile.PL \ contribs/phpext/slurm_php/config.m4 \ etc/cgroup.release_common.example etc/init.d.slurm \ etc/init.d.slurmdbd etc/slurmctld.service etc/slurmd.service \ etc/slurmdbd.service CONFIG_CLEAN_VPATH_FILES = AM_V_P = $(am__v_P_@AM_V@) am__v_P_ = $(am__v_P_@AM_DEFAULT_V@) am__v_P_0 = false am__v_P_1 = : AM_V_GEN = $(am__v_GEN_@AM_V@) am__v_GEN_ = $(am__v_GEN_@AM_DEFAULT_V@) am__v_GEN_0 = @echo " GEN " $@; am__v_GEN_1 = AM_V_at = $(am__v_at_@AM_V@) am__v_at_ = $(am__v_at_@AM_DEFAULT_V@) am__v_at_0 = @ am__v_at_1 = SOURCES = DIST_SOURCES = RECURSIVE_TARGETS = all-recursive check-recursive cscopelist-recursive \ ctags-recursive dvi-recursive html-recursive info-recursive \ install-data-recursive 
install-dvi-recursive \ install-exec-recursive install-html-recursive \ install-info-recursive install-pdf-recursive \ install-ps-recursive install-recursive installcheck-recursive \ installdirs-recursive pdf-recursive ps-recursive \ tags-recursive uninstall-recursive am__can_run_installinfo = \ case $$AM_UPDATE_INFO_DIR in \ n|no|NO) false;; \ *) (install-info --version) >/dev/null 2>&1;; \ esac am__vpath_adj_setup = srcdirstrip=`echo "$(srcdir)" | sed 's|.|.|g'`; am__vpath_adj = case $$p in \ $(srcdir)/*) f=`echo "$$p" | sed "s|^$$srcdirstrip/||"`;; \ *) f=$$p;; \ esac; am__strip_dir = f=`echo $$p | sed -e 's|^.*/||'`; am__install_max = 40 am__nobase_strip_setup = \ srcdirstrip=`echo "$(srcdir)" | sed 's/[].[^$$\\*|]/\\\\&/g'` am__nobase_strip = \ for p in $$list; do echo "$$p"; done | sed -e "s|$$srcdirstrip/||" am__nobase_list = $(am__nobase_strip_setup); \ for p in $$list; do echo "$$p $$p"; done | \ sed "s| $$srcdirstrip/| |;"' / .*\//!s/ .*/ ./; s,\( .*\)/[^/]*$$,\1,' | \ $(AWK) 'BEGIN { files["."] = "" } { files[$$2] = files[$$2] " " $$1; \ if (++n[$$2] == $(am__install_max)) \ { print $$2, files[$$2]; n[$$2] = 0; files[$$2] = "" } } \ END { for (dir in files) print dir, files[dir] }' am__base_list = \ sed '$$!N;$$!N;$$!N;$$!N;$$!N;$$!N;$$!N;s/\n/ /g' | \ sed '$$!N;$$!N;$$!N;$$!N;s/\n/ /g' am__uninstall_files_from_dir = { \ test -z "$$files" \ || { test ! -d "$$dir" && test ! -f "$$dir" && test ! 
-r "$$dir"; } \ || { echo " ( cd '$$dir' && rm -f" $$files ")"; \ $(am__cd) "$$dir" && rm -f $$files; }; \ } am__installdirs = "$(DESTDIR)$(pkgincludedir)" HEADERS = $(pkginclude_HEADERS) RECURSIVE_CLEAN_TARGETS = mostlyclean-recursive clean-recursive \ distclean-recursive maintainer-clean-recursive am__recursive_targets = \ $(RECURSIVE_TARGETS) \ $(RECURSIVE_CLEAN_TARGETS) \ $(am__extra_recursive_targets) AM_RECURSIVE_TARGETS = $(am__recursive_targets:-recursive=) TAGS CTAGS \ cscope distdir dist dist-all distcheck am__tagged_files = $(HEADERS) $(SOURCES) $(TAGS_FILES) \ $(LISP)config.h.in # Read a list of newline-separated strings from the standard input, # and print each of them once, without duplicates. Input order is # *not* preserved. am__uniquify_input = $(AWK) '\ BEGIN { nonempty = 0; } \ { items[$$0] = 1; nonempty = 1; } \ END { if (nonempty) { for (i in items) print i; }; } \ ' # Make sure the list of sources is unique. This is necessary because, # e.g., the same source file might be shared among _SOURCES variables # for different programs/libraries. am__define_uniq_tagged_files = \ list='$(am__tagged_files)'; \ unique=`for i in $$list; do \ if test -f "$$i"; then echo $$i; else echo $(srcdir)/$$i; fi; \ done | $(am__uniquify_input)` ETAGS = etags CTAGS = ctags CSCOPE = cscope DIST_SUBDIRS = $(SUBDIRS) DISTFILES = $(DIST_COMMON) $(DIST_SOURCES) $(TEXINFOS) $(EXTRA_DIST) distdir = $(PACKAGE)-$(VERSION) top_distdir = $(distdir) am__remove_distdir = \ if test -d "$(distdir)"; then \ find "$(distdir)" -type d ! 
-perm -200 -exec chmod u+w {} ';' \ && rm -rf "$(distdir)" \ || { sleep 5 && rm -rf "$(distdir)"; }; \ else :; fi am__post_remove_distdir = $(am__remove_distdir) am__relativize = \ dir0=`pwd`; \ sed_first='s,^\([^/]*\)/.*$$,\1,'; \ sed_rest='s,^[^/]*/*,,'; \ sed_last='s,^.*/\([^/]*\)$$,\1,'; \ sed_butlast='s,/*[^/]*$$,,'; \ while test -n "$$dir1"; do \ first=`echo "$$dir1" | sed -e "$$sed_first"`; \ if test "$$first" != "."; then \ if test "$$first" = ".."; then \ dir2=`echo "$$dir0" | sed -e "$$sed_last"`/"$$dir2"; \ dir0=`echo "$$dir0" | sed -e "$$sed_butlast"`; \ else \ first2=`echo "$$dir2" | sed -e "$$sed_first"`; \ if test "$$first2" = "$$first"; then \ dir2=`echo "$$dir2" | sed -e "$$sed_rest"`; \ else \ dir2="../$$dir2"; \ fi; \ dir0="$$dir0"/"$$first"; \ fi; \ fi; \ dir1=`echo "$$dir1" | sed -e "$$sed_rest"`; \ done; \ reldir="$$dir2" DIST_ARCHIVES = $(distdir).tar.gz GZIP_ENV = --best DIST_TARGETS = dist-gzip distuninstallcheck_listfiles = find . -type f -print am__distuninstallcheck_listfiles = $(distuninstallcheck_listfiles) \ | sed 's|^\./|$(prefix)/|' | grep -v '$(infodir)/dir$$' distcleancheck_listfiles = find . 
-type f -print ACLOCAL = @ACLOCAL@ AMTAR = @AMTAR@ AM_DEFAULT_VERBOSITY = @AM_DEFAULT_VERBOSITY@ AR = @AR@ AUTHD_CFLAGS = @AUTHD_CFLAGS@ AUTHD_LIBS = @AUTHD_LIBS@ AUTOCONF = @AUTOCONF@ AUTOHEADER = @AUTOHEADER@ AUTOMAKE = @AUTOMAKE@ AWK = @AWK@ BGL_LOADED = @BGL_LOADED@ BGQ_LOADED = @BGQ_LOADED@ BG_INCLUDES = @BG_INCLUDES@ BG_LDFLAGS = @BG_LDFLAGS@ BG_L_P_LOADED = @BG_L_P_LOADED@ BLCR_CPPFLAGS = @BLCR_CPPFLAGS@ BLCR_HOME = @BLCR_HOME@ BLCR_LDFLAGS = @BLCR_LDFLAGS@ BLCR_LIBS = @BLCR_LIBS@ BLUEGENE_LOADED = @BLUEGENE_LOADED@ CC = @CC@ CCDEPMODE = @CCDEPMODE@ CFLAGS = @CFLAGS@ CHECK_CFLAGS = @CHECK_CFLAGS@ CHECK_LIBS = @CHECK_LIBS@ CMD_LDFLAGS = @CMD_LDFLAGS@ CPP = @CPP@ CPPFLAGS = @CPPFLAGS@ CRAY_JOB_CPPFLAGS = @CRAY_JOB_CPPFLAGS@ CRAY_JOB_LDFLAGS = @CRAY_JOB_LDFLAGS@ CRAY_SELECT_CPPFLAGS = @CRAY_SELECT_CPPFLAGS@ CRAY_SELECT_LDFLAGS = @CRAY_SELECT_LDFLAGS@ CRAY_SWITCH_CPPFLAGS = @CRAY_SWITCH_CPPFLAGS@ CRAY_SWITCH_LDFLAGS = @CRAY_SWITCH_LDFLAGS@ CRAY_TASK_CPPFLAGS = @CRAY_TASK_CPPFLAGS@ CRAY_TASK_LDFLAGS = @CRAY_TASK_LDFLAGS@ CXX = @CXX@ CXXCPP = @CXXCPP@ CXXDEPMODE = @CXXDEPMODE@ CXXFLAGS = @CXXFLAGS@ CYGPATH_W = @CYGPATH_W@ DATAWARP_CPPFLAGS = @DATAWARP_CPPFLAGS@ DATAWARP_LDFLAGS = @DATAWARP_LDFLAGS@ DEFS = @DEFS@ DEPDIR = @DEPDIR@ DLLTOOL = @DLLTOOL@ DL_LIBS = @DL_LIBS@ DSYMUTIL = @DSYMUTIL@ DUMPBIN = @DUMPBIN@ ECHO_C = @ECHO_C@ ECHO_N = @ECHO_N@ ECHO_T = @ECHO_T@ EGREP = @EGREP@ EXEEXT = @EXEEXT@ FGREP = @FGREP@ FREEIPMI_CPPFLAGS = @FREEIPMI_CPPFLAGS@ FREEIPMI_LDFLAGS = @FREEIPMI_LDFLAGS@ FREEIPMI_LIBS = @FREEIPMI_LIBS@ GLIB_CFLAGS = @GLIB_CFLAGS@ GLIB_COMPILE_RESOURCES = @GLIB_COMPILE_RESOURCES@ GLIB_GENMARSHAL = @GLIB_GENMARSHAL@ GLIB_LIBS = @GLIB_LIBS@ GLIB_MKENUMS = @GLIB_MKENUMS@ GOBJECT_QUERY = @GOBJECT_QUERY@ GREP = @GREP@ GTK_CFLAGS = @GTK_CFLAGS@ GTK_LIBS = @GTK_LIBS@ H5CC = @H5CC@ H5FC = @H5FC@ HAVEMYSQLCONFIG = @HAVEMYSQLCONFIG@ HAVE_AIX = @HAVE_AIX@ HAVE_MAN2HTML = @HAVE_MAN2HTML@ HAVE_NRT = @HAVE_NRT@ HAVE_OPENSSL = @HAVE_OPENSSL@ HAVE_SOME_CURSES = 
@HAVE_SOME_CURSES@ HDF5_CC = @HDF5_CC@ HDF5_CFLAGS = @HDF5_CFLAGS@ HDF5_CPPFLAGS = @HDF5_CPPFLAGS@ HDF5_FC = @HDF5_FC@ HDF5_FFLAGS = @HDF5_FFLAGS@ HDF5_FLIBS = @HDF5_FLIBS@ HDF5_LDFLAGS = @HDF5_LDFLAGS@ HDF5_LIBS = @HDF5_LIBS@ HDF5_VERSION = @HDF5_VERSION@ HWLOC_CPPFLAGS = @HWLOC_CPPFLAGS@ HWLOC_LDFLAGS = @HWLOC_LDFLAGS@ HWLOC_LIBS = @HWLOC_LIBS@ INSTALL = @INSTALL@ INSTALL_DATA = @INSTALL_DATA@ INSTALL_PROGRAM = @INSTALL_PROGRAM@ INSTALL_SCRIPT = @INSTALL_SCRIPT@ INSTALL_STRIP_PROGRAM = @INSTALL_STRIP_PROGRAM@ JSON_CPPFLAGS = @JSON_CPPFLAGS@ JSON_LDFLAGS = @JSON_LDFLAGS@ LD = @LD@ LDFLAGS = @LDFLAGS@ LIBCURL = @LIBCURL@ LIBCURL_CPPFLAGS = @LIBCURL_CPPFLAGS@ LIBOBJS = @LIBOBJS@ LIBS = @LIBS@ LIBTOOL = @LIBTOOL@ LIB_LDFLAGS = @LIB_LDFLAGS@ LIPO = @LIPO@ LN_S = @LN_S@ LTLIBOBJS = @LTLIBOBJS@ MAINT = @MAINT@ MAKEINFO = @MAKEINFO@ MANIFEST_TOOL = @MANIFEST_TOOL@ MKDIR_P = @MKDIR_P@ MUNGE_CPPFLAGS = @MUNGE_CPPFLAGS@ MUNGE_DIR = @MUNGE_DIR@ MUNGE_LDFLAGS = @MUNGE_LDFLAGS@ MUNGE_LIBS = @MUNGE_LIBS@ MYSQL_CFLAGS = @MYSQL_CFLAGS@ MYSQL_LIBS = @MYSQL_LIBS@ NCURSES = @NCURSES@ NETLOC_CPPFLAGS = @NETLOC_CPPFLAGS@ NETLOC_LDFLAGS = @NETLOC_LDFLAGS@ NETLOC_LIBS = @NETLOC_LIBS@ NM = @NM@ NMEDIT = @NMEDIT@ NRT_CPPFLAGS = @NRT_CPPFLAGS@ NUMA_LIBS = @NUMA_LIBS@ OBJDUMP = @OBJDUMP@ OBJEXT = @OBJEXT@ OFED_CPPFLAGS = @OFED_CPPFLAGS@ OFED_LDFLAGS = @OFED_LDFLAGS@ OFED_LIBS = @OFED_LIBS@ OTOOL = @OTOOL@ OTOOL64 = @OTOOL64@ PACKAGE = @PACKAGE@ PACKAGE_BUGREPORT = @PACKAGE_BUGREPORT@ PACKAGE_NAME = @PACKAGE_NAME@ PACKAGE_STRING = @PACKAGE_STRING@ PACKAGE_TARNAME = @PACKAGE_TARNAME@ PACKAGE_URL = @PACKAGE_URL@ PACKAGE_VERSION = @PACKAGE_VERSION@ PAM_DIR = @PAM_DIR@ PAM_LIBS = @PAM_LIBS@ PATH_SEPARATOR = @PATH_SEPARATOR@ PKG_CONFIG = @PKG_CONFIG@ PKG_CONFIG_LIBDIR = @PKG_CONFIG_LIBDIR@ PKG_CONFIG_PATH = @PKG_CONFIG_PATH@ PROCTRACKDIR = @PROCTRACKDIR@ PROJECT = @PROJECT@ PTHREAD_CC = @PTHREAD_CC@ PTHREAD_CFLAGS = @PTHREAD_CFLAGS@ PTHREAD_LIBS = @PTHREAD_LIBS@ RANLIB = @RANLIB@ READLINE_LIBS = 
@READLINE_LIBS@ REAL_BGQ_LOADED = @REAL_BGQ_LOADED@ REAL_BG_L_P_LOADED = @REAL_BG_L_P_LOADED@ RELEASE = @RELEASE@ RRDTOOL_CPPFLAGS = @RRDTOOL_CPPFLAGS@ RRDTOOL_LDFLAGS = @RRDTOOL_LDFLAGS@ RRDTOOL_LIBS = @RRDTOOL_LIBS@ RUNJOB_LDFLAGS = @RUNJOB_LDFLAGS@ SED = @SED@ SEMAPHORE_LIBS = @SEMAPHORE_LIBS@ SEMAPHORE_SOURCES = @SEMAPHORE_SOURCES@ SET_MAKE = @SET_MAKE@ SHELL = @SHELL@ SLEEP_CMD = @SLEEP_CMD@ SLURMCTLD_PORT = @SLURMCTLD_PORT@ SLURMCTLD_PORT_COUNT = @SLURMCTLD_PORT_COUNT@ SLURMDBD_PORT = @SLURMDBD_PORT@ SLURMD_PORT = @SLURMD_PORT@ SLURM_API_AGE = @SLURM_API_AGE@ SLURM_API_CURRENT = @SLURM_API_CURRENT@ SLURM_API_MAJOR = @SLURM_API_MAJOR@ SLURM_API_REVISION = @SLURM_API_REVISION@ SLURM_API_VERSION = @SLURM_API_VERSION@ SLURM_MAJOR = @SLURM_MAJOR@ SLURM_MICRO = @SLURM_MICRO@ SLURM_MINOR = @SLURM_MINOR@ SLURM_PREFIX = @SLURM_PREFIX@ SLURM_VERSION_NUMBER = @SLURM_VERSION_NUMBER@ SLURM_VERSION_STRING = @SLURM_VERSION_STRING@ SO_LDFLAGS = @SO_LDFLAGS@ SSL_CPPFLAGS = @SSL_CPPFLAGS@ SSL_LDFLAGS = @SSL_LDFLAGS@ SSL_LIBS = @SSL_LIBS@ STRIP = @STRIP@ SUCMD = @SUCMD@ UTIL_LIBS = @UTIL_LIBS@ VERSION = @VERSION@ _libcurl_config = @_libcurl_config@ abs_builddir = @abs_builddir@ abs_srcdir = @abs_srcdir@ abs_top_builddir = @abs_top_builddir@ abs_top_srcdir = @abs_top_srcdir@ ac_ct_AR = @ac_ct_AR@ ac_ct_CC = @ac_ct_CC@ ac_ct_CXX = @ac_ct_CXX@ ac_ct_DUMPBIN = @ac_ct_DUMPBIN@ ac_have_man2html = @ac_have_man2html@ am__include = @am__include@ am__leading_dot = @am__leading_dot@ am__quote = @am__quote@ am__tar = @am__tar@ am__untar = @am__untar@ ax_pthread_config = @ax_pthread_config@ bindir = @bindir@ build = @build@ build_alias = @build_alias@ build_cpu = @build_cpu@ build_os = @build_os@ build_vendor = @build_vendor@ builddir = @builddir@ datadir = @datadir@ datarootdir = @datarootdir@ docdir = @docdir@ dvidir = @dvidir@ exec_prefix = @exec_prefix@ host = @host@ host_alias = @host_alias@ host_cpu = @host_cpu@ host_os = @host_os@ host_vendor = @host_vendor@ htmldir = @htmldir@ 
includedir = @includedir@ infodir = @infodir@ install_sh = @install_sh@ libdir = @libdir@ libexecdir = @libexecdir@ localedir = @localedir@ localstatedir = @localstatedir@ lua_CFLAGS = @lua_CFLAGS@ lua_LIBS = @lua_LIBS@ mandir = @mandir@ mkdir_p = @mkdir_p@ oldincludedir = @oldincludedir@ pdfdir = @pdfdir@ prefix = @prefix@ program_transform_name = @program_transform_name@ psdir = @psdir@ sbindir = @sbindir@ sharedstatedir = @sharedstatedir@ srcdir = @srcdir@ sysconfdir = @sysconfdir@ target = @target@ target_alias = @target_alias@ target_cpu = @target_cpu@ target_os = @target_os@ target_vendor = @target_vendor@ top_build_prefix = @top_build_prefix@ top_builddir = @top_builddir@ top_srcdir = @top_srcdir@ AUTOMAKE_OPTIONS = foreign ACLOCAL_AMFLAGS = -I auxdir SUBDIRS = auxdir src testsuite doc EXTRA_DIST = \ etc/bluegene.conf.example \ etc/cgroup.conf.example \ etc/cgroup.release_common.example.in \ etc/cgroup_allowed_devices_file.conf.example \ etc/init.d.slurm.in \ etc/init.d.slurmdbd.in \ etc/layouts.d.power.conf.example \ etc/slurm.conf.example \ etc/slurm.epilog.clean \ etc/slurmctld.service.in \ etc/slurmd.service.in \ etc/slurmdbd.conf.example \ etc/slurmdbd.service.in \ autogen.sh \ slurm.spec \ README.rst \ RELEASE_NOTES \ DISCLAIMER \ COPYING \ AUTHORS \ INSTALL \ LICENSE.OpenSSL \ NEWS \ ChangeLog \ META \ config.xml pkginclude_HEADERS = \ slurm/pmi.h \ slurm/slurm.h \ slurm/slurmdb.h \ slurm/slurm_errno.h \ slurm/smd_ns.h \ slurm/spank.h MAINTAINERCLEANFILES = \ aclocal.m4 config.guess config.xml \ config.h.in config.sub configure install-sh \ ltconfig ltmain.sh missing mkinstalldirs \ slurm/slurm.h \ stamp-h.in all: config.h $(MAKE) $(AM_MAKEFLAGS) all-recursive .SUFFIXES: am--refresh: Makefile @: $(srcdir)/Makefile.in: @MAINTAINER_MODE_TRUE@ $(srcdir)/Makefile.am $(am__configure_deps) @for dep in $?; do \ case '$(am__configure_deps)' in \ *$$dep*) \ echo ' cd $(srcdir) && $(AUTOMAKE) --foreign'; \ $(am__cd) $(srcdir) && $(AUTOMAKE) --foreign \ && exit 
0; \ exit 1;; \ esac; \ done; \ echo ' cd $(top_srcdir) && $(AUTOMAKE) --foreign Makefile'; \ $(am__cd) $(top_srcdir) && \ $(AUTOMAKE) --foreign Makefile .PRECIOUS: Makefile Makefile: $(srcdir)/Makefile.in $(top_builddir)/config.status @case '$?' in \ *config.status*) \ echo ' $(SHELL) ./config.status'; \ $(SHELL) ./config.status;; \ *) \ echo ' cd $(top_builddir) && $(SHELL) ./config.status $@ $(am__depfiles_maybe)'; \ cd $(top_builddir) && $(SHELL) ./config.status $@ $(am__depfiles_maybe);; \ esac; $(top_builddir)/config.status: $(top_srcdir)/configure $(CONFIG_STATUS_DEPENDENCIES) $(SHELL) ./config.status --recheck $(top_srcdir)/configure: @MAINTAINER_MODE_TRUE@ $(am__configure_deps) $(am__cd) $(srcdir) && $(AUTOCONF) $(ACLOCAL_M4): @MAINTAINER_MODE_TRUE@ $(am__aclocal_m4_deps) $(am__cd) $(srcdir) && $(ACLOCAL) $(ACLOCAL_AMFLAGS) $(am__aclocal_m4_deps): config.h: stamp-h1 @test -f $@ || rm -f stamp-h1 @test -f $@ || $(MAKE) $(AM_MAKEFLAGS) stamp-h1 stamp-h1: $(srcdir)/config.h.in $(top_builddir)/config.status @rm -f stamp-h1 cd $(top_builddir) && $(SHELL) ./config.status config.h $(srcdir)/config.h.in: @MAINTAINER_MODE_TRUE@ $(am__configure_deps) ($(am__cd) $(top_srcdir) && $(AUTOHEADER)) rm -f stamp-h1 touch $@ slurm/slurm.h: slurm/stamp-h2 @test -f $@ || rm -f slurm/stamp-h2 @test -f $@ || $(MAKE) $(AM_MAKEFLAGS) slurm/stamp-h2 slurm/stamp-h2: $(top_srcdir)/slurm/slurm.h.in $(top_builddir)/config.status @rm -f slurm/stamp-h2 cd $(top_builddir) && $(SHELL) ./config.status slurm/slurm.h distclean-hdr: -rm -f config.h stamp-h1 slurm/slurm.h slurm/stamp-h2 config.xml: $(top_builddir)/config.status $(srcdir)/config.xml.in cd $(top_builddir) && $(SHELL) ./config.status $@ contribs/perlapi/libslurm/perl/Makefile.PL: $(top_builddir)/config.status $(top_srcdir)/contribs/perlapi/libslurm/perl/Makefile.PL.in cd $(top_builddir) && $(SHELL) ./config.status $@ contribs/perlapi/libslurmdb/perl/Makefile.PL: $(top_builddir)/config.status 
$(top_srcdir)/contribs/perlapi/libslurmdb/perl/Makefile.PL.in cd $(top_builddir) && $(SHELL) ./config.status $@ contribs/phpext/slurm_php/config.m4: $(top_builddir)/config.status $(top_srcdir)/contribs/phpext/slurm_php/config.m4.in cd $(top_builddir) && $(SHELL) ./config.status $@ etc/cgroup.release_common.example: $(top_builddir)/config.status $(top_srcdir)/etc/cgroup.release_common.example.in cd $(top_builddir) && $(SHELL) ./config.status $@ etc/init.d.slurm: $(top_builddir)/config.status $(top_srcdir)/etc/init.d.slurm.in cd $(top_builddir) && $(SHELL) ./config.status $@ etc/init.d.slurmdbd: $(top_builddir)/config.status $(top_srcdir)/etc/init.d.slurmdbd.in cd $(top_builddir) && $(SHELL) ./config.status $@ etc/slurmctld.service: $(top_builddir)/config.status $(top_srcdir)/etc/slurmctld.service.in cd $(top_builddir) && $(SHELL) ./config.status $@ etc/slurmd.service: $(top_builddir)/config.status $(top_srcdir)/etc/slurmd.service.in cd $(top_builddir) && $(SHELL) ./config.status $@ etc/slurmdbd.service: $(top_builddir)/config.status $(top_srcdir)/etc/slurmdbd.service.in cd $(top_builddir) && $(SHELL) ./config.status $@ mostlyclean-libtool: -rm -f *.lo clean-libtool: -rm -rf .libs _libs distclean-libtool: -rm -f libtool config.lt install-pkgincludeHEADERS: $(pkginclude_HEADERS) @$(NORMAL_INSTALL) @list='$(pkginclude_HEADERS)'; test -n "$(pkgincludedir)" || list=; \ if test -n "$$list"; then \ echo " $(MKDIR_P) '$(DESTDIR)$(pkgincludedir)'"; \ $(MKDIR_P) "$(DESTDIR)$(pkgincludedir)" || exit 1; \ fi; \ for p in $$list; do \ if test -f "$$p"; then d=; else d="$(srcdir)/"; fi; \ echo "$$d$$p"; \ done | $(am__base_list) | \ while read files; do \ echo " $(INSTALL_HEADER) $$files '$(DESTDIR)$(pkgincludedir)'"; \ $(INSTALL_HEADER) $$files "$(DESTDIR)$(pkgincludedir)" || exit $$?; \ done uninstall-pkgincludeHEADERS: @$(NORMAL_UNINSTALL) @list='$(pkginclude_HEADERS)'; test -n "$(pkgincludedir)" || list=; \ files=`for p in $$list; do echo $$p; done | sed -e 's|^.*/||'`; \ 
dir='$(DESTDIR)$(pkgincludedir)'; $(am__uninstall_files_from_dir) # This directory's subdirectories are mostly independent; you can cd # into them and run 'make' without going through this Makefile. # To change the values of 'make' variables: instead of editing Makefiles, # (1) if the variable is set in 'config.status', edit 'config.status' # (which will cause the Makefiles to be regenerated when you run 'make'); # (2) otherwise, pass the desired values on the 'make' command line. $(am__recursive_targets): @fail=; \ if $(am__make_keepgoing); then \ failcom='fail=yes'; \ else \ failcom='exit 1'; \ fi; \ dot_seen=no; \ target=`echo $@ | sed s/-recursive//`; \ case "$@" in \ distclean-* | maintainer-clean-*) list='$(DIST_SUBDIRS)' ;; \ *) list='$(SUBDIRS)' ;; \ esac; \ for subdir in $$list; do \ echo "Making $$target in $$subdir"; \ if test "$$subdir" = "."; then \ dot_seen=yes; \ local_target="$$target-am"; \ else \ local_target="$$target"; \ fi; \ ($(am__cd) $$subdir && $(MAKE) $(AM_MAKEFLAGS) $$local_target) \ || eval $$failcom; \ done; \ if test "$$dot_seen" = "no"; then \ $(MAKE) $(AM_MAKEFLAGS) "$$target-am" || exit 1; \ fi; test -z "$$fail" ID: $(am__tagged_files) $(am__define_uniq_tagged_files); mkid -fID $$unique tags: tags-recursive TAGS: tags tags-am: $(TAGS_DEPENDENCIES) $(am__tagged_files) set x; \ here=`pwd`; \ if ($(ETAGS) --etags-include --version) >/dev/null 2>&1; then \ include_option=--etags-include; \ empty_fix=.; \ else \ include_option=--include; \ empty_fix=; \ fi; \ list='$(SUBDIRS)'; for subdir in $$list; do \ if test "$$subdir" = .; then :; else \ test ! 
-f $$subdir/TAGS || \ set "$$@" "$$include_option=$$here/$$subdir/TAGS"; \ fi; \ done; \ $(am__define_uniq_tagged_files); \ shift; \ if test -z "$(ETAGS_ARGS)$$*$$unique"; then :; else \ test -n "$$unique" || unique=$$empty_fix; \ if test $$# -gt 0; then \ $(ETAGS) $(ETAGSFLAGS) $(AM_ETAGSFLAGS) $(ETAGS_ARGS) \ "$$@" $$unique; \ else \ $(ETAGS) $(ETAGSFLAGS) $(AM_ETAGSFLAGS) $(ETAGS_ARGS) \ $$unique; \ fi; \ fi ctags: ctags-recursive CTAGS: ctags ctags-am: $(TAGS_DEPENDENCIES) $(am__tagged_files) $(am__define_uniq_tagged_files); \ test -z "$(CTAGS_ARGS)$$unique" \ || $(CTAGS) $(CTAGSFLAGS) $(AM_CTAGSFLAGS) $(CTAGS_ARGS) \ $$unique GTAGS: here=`$(am__cd) $(top_builddir) && pwd` \ && $(am__cd) $(top_srcdir) \ && gtags -i $(GTAGS_ARGS) "$$here" cscope: cscope.files test ! -s cscope.files \ || $(CSCOPE) -b -q $(AM_CSCOPEFLAGS) $(CSCOPEFLAGS) -i cscope.files $(CSCOPE_ARGS) clean-cscope: -rm -f cscope.files cscope.files: clean-cscope cscopelist cscopelist: cscopelist-recursive cscopelist-am: $(am__tagged_files) list='$(am__tagged_files)'; \ case "$(srcdir)" in \ [\\/]* | ?:[\\/]*) sdir="$(srcdir)" ;; \ *) sdir=$(subdir)/$(srcdir) ;; \ esac; \ for i in $$list; do \ if test -f "$$i"; then \ echo "$(subdir)/$$i"; \ else \ echo "$$sdir/$$i"; \ fi; \ done >> $(top_builddir)/cscope.files distclean-tags: -rm -f TAGS ID GTAGS GRTAGS GSYMS GPATH tags -rm -f cscope.out cscope.in.out cscope.po.out cscope.files distdir: $(DISTFILES) $(am__remove_distdir) test -d "$(distdir)" || mkdir "$(distdir)" @srcdirstrip=`echo "$(srcdir)" | sed 's/[].[^$$\\*]/\\\\&/g'`; \ topsrcdirstrip=`echo "$(top_srcdir)" | sed 's/[].[^$$\\*]/\\\\&/g'`; \ list='$(DISTFILES)'; \ dist_files=`for file in $$list; do echo $$file; done | \ sed -e "s|^$$srcdirstrip/||;t" \ -e "s|^$$topsrcdirstrip/|$(top_builddir)/|;t"`; \ case $$dist_files in \ */*) $(MKDIR_P) `echo "$$dist_files" | \ sed '/\//!d;s|^|$(distdir)/|;s,/[^/]*$$,,' | \ sort -u` ;; \ esac; \ for file in $$dist_files; do \ if test -f $$file || test -d 
$$file; then d=.; else d=$(srcdir); fi; \ if test -d $$d/$$file; then \ dir=`echo "/$$file" | sed -e 's,/[^/]*$$,,'`; \ if test -d "$(distdir)/$$file"; then \ find "$(distdir)/$$file" -type d ! -perm -700 -exec chmod u+rwx {} \;; \ fi; \ if test -d $(srcdir)/$$file && test $$d != $(srcdir); then \ cp -fpR $(srcdir)/$$file "$(distdir)$$dir" || exit 1; \ find "$(distdir)/$$file" -type d ! -perm -700 -exec chmod u+rwx {} \;; \ fi; \ cp -fpR $$d/$$file "$(distdir)$$dir" || exit 1; \ else \ test -f "$(distdir)/$$file" \ || cp -p $$d/$$file "$(distdir)/$$file" \ || exit 1; \ fi; \ done @list='$(DIST_SUBDIRS)'; for subdir in $$list; do \ if test "$$subdir" = .; then :; else \ $(am__make_dryrun) \ || test -d "$(distdir)/$$subdir" \ || $(MKDIR_P) "$(distdir)/$$subdir" \ || exit 1; \ dir1=$$subdir; dir2="$(distdir)/$$subdir"; \ $(am__relativize); \ new_distdir=$$reldir; \ dir1=$$subdir; dir2="$(top_distdir)"; \ $(am__relativize); \ new_top_distdir=$$reldir; \ echo " (cd $$subdir && $(MAKE) $(AM_MAKEFLAGS) top_distdir="$$new_top_distdir" distdir="$$new_distdir" \\"; \ echo " am__remove_distdir=: am__skip_length_check=: am__skip_mode_fix=: distdir)"; \ ($(am__cd) $$subdir && \ $(MAKE) $(AM_MAKEFLAGS) \ top_distdir="$$new_top_distdir" \ distdir="$$new_distdir" \ am__remove_distdir=: \ am__skip_length_check=: \ am__skip_mode_fix=: \ distdir) \ || exit 1; \ fi; \ done -test -n "$(am__skip_mode_fix)" \ || find "$(distdir)" -type d ! -perm -755 \ -exec chmod u+rwx,go+rx {} \; -o \ ! -type d ! -perm -444 -links 1 -exec chmod a+r {} \; -o \ ! -type d ! -perm -400 -exec chmod a+r {} \; -o \ ! -type d ! 
-perm -444 -exec $(install_sh) -c -m a+r {} {} \; \ || chmod -R a+r "$(distdir)" dist-gzip: distdir tardir=$(distdir) && $(am__tar) | GZIP=$(GZIP_ENV) gzip -c >$(distdir).tar.gz $(am__post_remove_distdir) dist-bzip2: distdir tardir=$(distdir) && $(am__tar) | BZIP2=$${BZIP2--9} bzip2 -c >$(distdir).tar.bz2 $(am__post_remove_distdir) dist-lzip: distdir tardir=$(distdir) && $(am__tar) | lzip -c $${LZIP_OPT--9} >$(distdir).tar.lz $(am__post_remove_distdir) dist-xz: distdir tardir=$(distdir) && $(am__tar) | XZ_OPT=$${XZ_OPT--e} xz -c >$(distdir).tar.xz $(am__post_remove_distdir) dist-tarZ: distdir @echo WARNING: "Support for shar distribution archives is" \ "deprecated." >&2 @echo WARNING: "It will be removed altogether in Automake 2.0" >&2 tardir=$(distdir) && $(am__tar) | compress -c >$(distdir).tar.Z $(am__post_remove_distdir) dist-shar: distdir @echo WARNING: "Support for distribution archives compressed with" \ "legacy program 'compress' is deprecated." >&2 @echo WARNING: "It will be removed altogether in Automake 2.0" >&2 shar $(distdir) | GZIP=$(GZIP_ENV) gzip -c >$(distdir).shar.gz $(am__post_remove_distdir) dist-zip: distdir -rm -f $(distdir).zip zip -rq $(distdir).zip $(distdir) $(am__post_remove_distdir) dist dist-all: $(MAKE) $(AM_MAKEFLAGS) $(DIST_TARGETS) am__post_remove_distdir='@:' $(am__post_remove_distdir) # This target untars the dist file and tries a VPATH configuration. Then # it guarantees that the distribution is self-contained by making another # tarfile. 
distcheck: dist case '$(DIST_ARCHIVES)' in \ *.tar.gz*) \ GZIP=$(GZIP_ENV) gzip -dc $(distdir).tar.gz | $(am__untar) ;;\ *.tar.bz2*) \ bzip2 -dc $(distdir).tar.bz2 | $(am__untar) ;;\ *.tar.lz*) \ lzip -dc $(distdir).tar.lz | $(am__untar) ;;\ *.tar.xz*) \ xz -dc $(distdir).tar.xz | $(am__untar) ;;\ *.tar.Z*) \ uncompress -c $(distdir).tar.Z | $(am__untar) ;;\ *.shar.gz*) \ GZIP=$(GZIP_ENV) gzip -dc $(distdir).shar.gz | unshar ;;\ *.zip*) \ unzip $(distdir).zip ;;\ esac chmod -R a-w $(distdir) chmod u+w $(distdir) mkdir $(distdir)/_build $(distdir)/_inst chmod a-w $(distdir) test -d $(distdir)/_build || exit 0; \ dc_install_base=`$(am__cd) $(distdir)/_inst && pwd | sed -e 's,^[^:\\/]:[\\/],/,'` \ && dc_destdir="$${TMPDIR-/tmp}/am-dc-$$$$/" \ && am__cwd=`pwd` \ && $(am__cd) $(distdir)/_build \ && ../configure \ $(AM_DISTCHECK_CONFIGURE_FLAGS) \ $(DISTCHECK_CONFIGURE_FLAGS) \ --srcdir=.. --prefix="$$dc_install_base" \ && $(MAKE) $(AM_MAKEFLAGS) \ && $(MAKE) $(AM_MAKEFLAGS) dvi \ && $(MAKE) $(AM_MAKEFLAGS) check \ && $(MAKE) $(AM_MAKEFLAGS) install \ && $(MAKE) $(AM_MAKEFLAGS) installcheck \ && $(MAKE) $(AM_MAKEFLAGS) uninstall \ && $(MAKE) $(AM_MAKEFLAGS) distuninstallcheck_dir="$$dc_install_base" \ distuninstallcheck \ && chmod -R a-w "$$dc_install_base" \ && ({ \ (cd ../.. 
&& umask 077 && mkdir "$$dc_destdir") \ && $(MAKE) $(AM_MAKEFLAGS) DESTDIR="$$dc_destdir" install \ && $(MAKE) $(AM_MAKEFLAGS) DESTDIR="$$dc_destdir" uninstall \ && $(MAKE) $(AM_MAKEFLAGS) DESTDIR="$$dc_destdir" \ distuninstallcheck_dir="$$dc_destdir" distuninstallcheck; \ } || { rm -rf "$$dc_destdir"; exit 1; }) \ && rm -rf "$$dc_destdir" \ && $(MAKE) $(AM_MAKEFLAGS) dist \ && rm -rf $(DIST_ARCHIVES) \ && $(MAKE) $(AM_MAKEFLAGS) distcleancheck \ && cd "$$am__cwd" \ || exit 1 $(am__post_remove_distdir) @(echo "$(distdir) archives ready for distribution: "; \ list='$(DIST_ARCHIVES)'; for i in $$list; do echo $$i; done) | \ sed -e 1h -e 1s/./=/g -e 1p -e 1x -e '$$p' -e '$$x' distuninstallcheck: @test -n '$(distuninstallcheck_dir)' || { \ echo 'ERROR: trying to run $@ with an empty' \ '$$(distuninstallcheck_dir)' >&2; \ exit 1; \ }; \ $(am__cd) '$(distuninstallcheck_dir)' || { \ echo 'ERROR: cannot chdir into $(distuninstallcheck_dir)' >&2; \ exit 1; \ }; \ test `$(am__distuninstallcheck_listfiles) | wc -l` -eq 0 \ || { echo "ERROR: files left after uninstall:" ; \ if test -n "$(DESTDIR)"; then \ echo " (check DESTDIR support)"; \ fi ; \ $(distuninstallcheck_listfiles) ; \ exit 1; } >&2 distcleancheck: distclean @if test '$(srcdir)' = . 
; then \ echo "ERROR: distcleancheck can only run from a VPATH build" ; \ exit 1 ; \ fi @test `$(distcleancheck_listfiles) | wc -l` -eq 0 \ || { echo "ERROR: files left in build directory after distclean:" ; \ $(distcleancheck_listfiles) ; \ exit 1; } >&2 check-am: all-am check: check-recursive all-am: Makefile $(HEADERS) config.h installdirs: installdirs-recursive installdirs-am: for dir in "$(DESTDIR)$(pkgincludedir)"; do \ test -z "$$dir" || $(MKDIR_P) "$$dir"; \ done install: install-recursive install-exec: install-exec-recursive install-data: install-data-recursive uninstall: uninstall-recursive install-am: all-am @$(MAKE) $(AM_MAKEFLAGS) install-exec-am install-data-am installcheck: installcheck-recursive install-strip: if test -z '$(STRIP)'; then \ $(MAKE) $(AM_MAKEFLAGS) INSTALL_PROGRAM="$(INSTALL_STRIP_PROGRAM)" \ install_sh_PROGRAM="$(INSTALL_STRIP_PROGRAM)" INSTALL_STRIP_FLAG=-s \ install; \ else \ $(MAKE) $(AM_MAKEFLAGS) INSTALL_PROGRAM="$(INSTALL_STRIP_PROGRAM)" \ install_sh_PROGRAM="$(INSTALL_STRIP_PROGRAM)" INSTALL_STRIP_FLAG=-s \ "INSTALL_PROGRAM_ENV=STRIPPROG='$(STRIP)'" install; \ fi mostlyclean-generic: clean-generic: distclean-generic: -test -z "$(CONFIG_CLEAN_FILES)" || rm -f $(CONFIG_CLEAN_FILES) -test . = "$(srcdir)" || test -z "$(CONFIG_CLEAN_VPATH_FILES)" || rm -f $(CONFIG_CLEAN_VPATH_FILES) maintainer-clean-generic: @echo "This command is intended for maintainers to use" @echo "it deletes files that may require special tools to rebuild." 
-test -z "$(MAINTAINERCLEANFILES)" || rm -f $(MAINTAINERCLEANFILES) clean: clean-recursive clean-am: clean-generic clean-libtool mostlyclean-am distclean: distclean-recursive -rm -f $(am__CONFIG_DISTCLEAN_FILES) -rm -f Makefile distclean-am: clean-am distclean-generic distclean-hdr \ distclean-libtool distclean-local distclean-tags dvi: dvi-recursive dvi-am: html: html-recursive html-am: info: info-recursive info-am: install-data-am: install-pkgincludeHEADERS install-dvi: install-dvi-recursive install-dvi-am: install-exec-am: install-html: install-html-recursive install-html-am: install-info: install-info-recursive install-info-am: install-man: install-pdf: install-pdf-recursive install-pdf-am: install-ps: install-ps-recursive install-ps-am: installcheck-am: maintainer-clean: maintainer-clean-recursive -rm -f $(am__CONFIG_DISTCLEAN_FILES) -rm -rf $(top_srcdir)/autom4te.cache -rm -f Makefile maintainer-clean-am: distclean-am maintainer-clean-generic mostlyclean: mostlyclean-recursive mostlyclean-am: mostlyclean-generic mostlyclean-libtool pdf: pdf-recursive pdf-am: ps: ps-recursive ps-am: uninstall-am: uninstall-pkgincludeHEADERS .MAKE: $(am__recursive_targets) all install-am install-strip .PHONY: $(am__recursive_targets) CTAGS GTAGS TAGS all all-am \ am--refresh check check-am clean clean-cscope clean-generic \ clean-libtool cscope cscopelist-am ctags ctags-am dist \ dist-all dist-bzip2 dist-gzip dist-lzip dist-shar dist-tarZ \ dist-xz dist-zip distcheck distclean distclean-generic \ distclean-hdr distclean-libtool distclean-local distclean-tags \ distcleancheck distdir distuninstallcheck dvi dvi-am html \ html-am info info-am install install-am install-data \ install-data-am install-dvi install-dvi-am install-exec \ install-exec-am install-html install-html-am install-info \ install-info-am install-man install-pdf install-pdf-am \ install-pkgincludeHEADERS install-ps install-ps-am \ install-strip installcheck installcheck-am installdirs \ installdirs-am 
	maintainer-clean maintainer-clean-generic \
	mostlyclean mostlyclean-generic mostlyclean-libtool pdf pdf-am \
	ps ps-am tags tags-am uninstall uninstall-am \
	uninstall-pkgincludeHEADERS

distclean-local:
	-(cd $(top_srcdir) && rm -rf autom4te*.cache autoscan.*)
	-(cd $(top_srcdir) && rm -rf $(PACKAGE)-*)

mrproper: distclean-local clean
	-(cd $(top_srcdir) && rm -rf autom4te.cache config.h config.log)
	-(cd $(top_srcdir) && rm -rf config.status libtool stamp-h1)
	-(cd $(top_srcdir)/auxdir && rm -rf mkinstalldirs)
	-(cd $(top_srcdir)/slurm && rm -rf stamp-h2 slurm.h)
	-find $(top_srcdir)/src -name "Makefile" -exec rm {} \;
	-find $(top_srcdir) -depth -name ".deps" -exec rm -rf {} \;

contrib:
	@cd contribs && \
	$(MAKE) && \
	cd ..;

install-contrib:
	@cd contribs && \
	$(MAKE) DESTDIR=$(DESTDIR) install && \
	cd ..;

# Tell versions [3.59,3.63) of GNU make to not export all variables.
# Otherwise a system limit (for SysV at least) may be exceeded.
.NOEXPORT:

This file describes changes in recent versions of Slurm. It primarily
documents those changes that are of interest to users and administrators.

* Changes in Slurm 15.08.7
==========================
 -- sched/backfill: If a job can not be started within the configured
    backfill_window, set its start time to 0 (unknown) rather than the end
    of the backfill_window.
 -- Remove the 1024-character limit on lines in batch scripts.
 -- burst_buffer/cray: Round up swap size by configured granularity.
 -- select/cray: Log repeated aeld reconnects.
 -- task/affinity: Disable core-level task binding if more CPUs are required
    than available cores.
 -- Preemption/gang scheduling: If a job is suspended at slurmctld restart
    or reconfiguration time, then leave it suspended rather than
    resume+suspend.
 -- Don't use lower weight nodes for job allocation when topology/tree is
    used.
 -- BGQ - If a cable goes into an error state, remove the underlying block
    on a dynamic system and mark the block in error on a static/overlap
    system.
 -- BGQ - Fix regression in 9cc4ae8add7f where blocks would be deleted on
    static/overlap systems when a hardware issue occurred while restarting
    the slurmctld.
 -- Log if a CLOUD node is configured without a resume/suspend program or
    suspend time.
 -- MYSQL - Better locking around g_qos_count, which was previously
    unprotected.
 -- Correct the size of the buffer used for jobid2str to avoid truncation.
 -- Fix allocation/distribution of tasks across multiple nodes when
    --hint=nomultithread is requested.
 -- If a reservation's nodes value is "all", then track the current nodes in
    the system, even if those nodes change.
 -- Fix formatting when using the "tree" option with sreport.
 -- Make sreport print a line for non-existent TRES instead of an error
    message.
 -- Set a job's reason to "Priority" when a higher priority job in that
    partition (or reservation) can not start, rather than leaving the reason
    set to "Resources".
 -- Fix memory corruption when a new non-generic TRES is added to the DBD
    for the first time. The corruption is only noticed at shutdown.
 -- burst_buffer/cray - Improve tracking of allocated resources to handle a
    race condition when reading state while buffer allocation is in
    progress.
 -- If a job is submitted only with the -c option and numcpus is updated
    before the job starts, update the cpus_per_task appropriately.
 -- Update salloc/sbatch/srun documentation to mention time granularity.
 -- Fix memory leak when freeing assoc_mgr_info_msg_t.
 -- Prevent possible use of an empty reservation core bitmap, causing an
    abort.
 -- Remove unneeded pack32's from qos_rec when qos_rec is NULL.
 -- Make sacctmgr print MaxJobsPerUser when adding/altering a QOS.
 -- Correct dependency formatting to print array task ids if set.
 -- Update sacctmgr help with current QOS options.
 -- Update slurmstepd to initialize authentication before task launch.
 -- burst_buffer/cray: Eliminate need for dedicated nodes.
 -- If no MsgAggregationParams is set, don't set the internal string to anything. The slurmd will process things correctly after the fact.
 -- Fix output from api when printing job step not found.
 -- Don't allow user specified reservation names to disrupt the normal reservation sequence numbering scheme.
 -- Fix scontrol to be able to accept TRES as an option when creating a reservation.
 -- contrib/torque/qstat.pl - return exit code of zero even with no records printed for 'qstat -u'.
 -- When a reservation is created or updated, compress user provided node names using hostlist functions (e.g. translate user input of "Nodes=tux1,tux2" into "Nodes=tux[1-2]").
 -- Change output routines for scontrol show partition/reservation to handle unexpectedly large strings.
 -- Add more partition fields to "scontrol write config" output file.
 -- Backfill scheduling fix: If a job can't be started due to a "group" resource limit, don't reserve any resources for it rather than reserving resources for it when the next job ends.
 -- Avoid slurmstepd abort if malloc fails during accounting gather operation.
 -- Prevent nodes from being overallocated when an allocation straddles multiple nodes.
 -- Fix memory leak in slurmctld job array logic.
 -- Prevent decrementing of TRESRunMins when AccountingStorageEnforce=limits is not set.
 -- Fix backfill scheduling bug which could postpone the scheduling of jobs due to avoidance of nodes in COMPLETING state.
 -- Properly account for memory, CPUs and GRES when slurmctld is reconfigured while there is a suspended job. Previous logic would add the CPUs, but not memory or GPUs, resulting in underflow/overflow errors in the select cons_res plugin.
 -- Strip flags from a job state in the qstat wrapper before evaluating.
 -- Add missing job states to the qstat wrapper.

* Changes in Slurm 15.08.6
==========================
 -- In slurmctld log file, log duplicate job ID found by slurmd.
    Previously this was being logged as a prolog/epilog failure.
 -- If a job is requeued while in the process of being launched, remove its job ID from slurmd's record of active jobs in order to avoid generating a duplicate job ID error when launched for the second time (which would drain the node).
 -- Cleanup messages when handling job script and environment variables in older directory structure formats.
 -- Prevent triggering gang scheduling within a partition if configured with PreemptType=partition_prio and PreemptMode=suspend,gang.
 -- Decrease parallelism in job cancel request to prevent denial of service when cancelling huge numbers of jobs.
 -- If all ephemeral ports are in use, try using other port numbers.
 -- Revert way lib lua is handled when doing a dlopen, fixing a regression in 15.08.5.
 -- Set the debug level of the rmdir message in xcgroup_delete() to debug2.
 -- Fix the qstat wrapper when a user is removed from the system but still has running jobs.
 -- Log the request to terminate a job at info level if DebugFlags includes the Steps keyword.
 -- Fix potential memory corruption in _slurm_rpc_epilog_complete as well as _slurm_rpc_complete_job_allocation.
 -- Fix cosmetic display of AccountingStorageEnforce option "nosteps" when in use.
 -- If a job can never be started due to unsatisfied job dependencies, report the full original job dependency specification rather than the dependencies remaining to be satisfied (typically NULL).
 -- Refactor logic to synchronize active batch jobs and their script/environment files, reducing overhead dramatically for large numbers of active jobs.
 -- Avoid hard-link/copy of script/environment files for job arrays. Use the master job record file for all tasks of the job array. NOTE: Job arrays submitted to Slurm version 15.08.6 or later will fail if the slurmctld daemon is downgraded to an earlier version of Slurm.
 -- Move slurmctld mail handler to separate thread for improved performance.
 -- Fix containment of adopted processes from pam_slurm_adopt.
 -- If a pending job array has multiple reasons for being in a pending state, then print all reasons in a comma separated list.

* Changes in Slurm 15.08.5
==========================
 -- Prevent "scontrol update job" from updating jobs that have already finished.
 -- Show requested TRES in "squeue -O tres" when job is pending.
 -- Backfill scheduler: Test association and QOS node limits before reserving resources for pending job.
 -- burst_buffer/cray: If teardown operation fails, sleep and retry.
 -- Clean up the external pids when using the PrologFlags=Contain feature and the job finishes.
 -- burst_buffer/cray: Support file staging when job lacks job-specific buffer (i.e. only persistent burst buffers).
 -- Added srun option of --bcast to copy executable file to compute nodes.
 -- Fix for advanced reservation of burst buffer space.
 -- BurstBuffer/cray: Add logic to terminate dw_wlm_cli child processes at shutdown.
 -- If a job can't be launched or requeued, then terminate it.
 -- BurstBuffer/cray: Enable clearing of burst buffer string on a completed job as a means of recovering from a failure mode.
 -- Fix wrong memory free when parsing SrunPortRange=0-0 configuration.
 -- BurstBuffer/cray: Fix job record purging if cancelled from pending state.
 -- BGQ - Handle database throw correctly when syncing users on blocks.
 -- MySQL - Make sure we don't have a NULL string returned when not requesting any specific association.
 -- sched/backfill: If max_rpc_cnt is configured and the backlog of RPCs has not cleared after yielding locks, then continue to sleep.
 -- Preserve the job dependency description displayed in 'scontrol show job' even if the dependee job was terminated and cleaned, causing the dependent job to never run because of DependencyNeverSatisfied.
 -- Correct job task count calculation if only node count and ntasks-per-node options supplied.
 -- Make sure the association manager converts any string to lower case, as all the associations from the database will be lower case.
 -- Sanity check for xcgroup_delete() to verify incoming parameter is valid.
 -- Fix formatting for sacct with variables that switched from uint32_t to uint64_t.
 -- Fix a typo in sacct man page.
 -- Set up extern step to track any children of an ssh if it leaves anything else behind.
 -- Prevent slurmdbd divide by zero if no associations defined at rollup time.
 -- Multifactor - Add sanity check to make sure pending jobs are handled correctly when PriorityFlags=CALCULATE_RUNNING is set.
 -- Add slurmdb_find_tres_count_in_string() to slurm db perl api.
 -- Make lua dlopen() conditional on version found at build.
 -- sched/backfill - Delay backfill scheduler for completing jobs only if CompleteWait configuration parameter is set (make code match documentation).
 -- Release a job's allocated licenses only after epilog runs on all nodes rather than at start of termination process.
 -- Cray job NHC delayed until after burst buffer released and epilog completes on all allocated nodes.
 -- Fix abort of srun if using PrologFlags=NoHold.
 -- Let devices step_extern cgroup inherit attributes of job cgroup.
 -- Add new hook to Task plugin to be able to put adopted processes in the step_extern cgroups.
 -- Fix AllowUsers documentation in burst_buffer.conf man page. Usernames are comma separated, not colon delimited.
 -- Fix issue with time limit not being set correctly from a QOS when a job requests no time limit.
 -- Various CLANG fixes.
 -- In both sched/basic and backfill: If a job can not be started due to some account/qos limit, then don't start other jobs which could delay it. The old logic would skip the job and start other jobs, which could delay the higher priority job.
 -- select/cray: Prevent NHC from running more than once per job or step.
 -- Fix fields not properly printed when adding an account through sacctmgr.
 -- Update LBNL Node Health Check (NHC) link on FAQ.
 -- Fix multifactor plugin to prevent slurmctld from getting segmentation fault should the tres_alloc_cnt be NULL.
 -- sbatch/salloc - Move nodelist logic before the time min_nodes is used so we can set it correctly before tasks are set.

* Changes in Slurm 15.08.4
==========================
 -- Fix typo for the "devices" cgroup subsystem in pam_slurm_adopt.c
 -- Fix TRES_MAX flag to work correctly.
 -- Improve the systemd startup files.
 -- Added burst_buffer.conf flag parameter of "TeardownFailure" which will tear down and remove a burst buffer after failed stage-in or stage-out. By default, the buffer will be preserved for analysis and manual teardown.
 -- Prevent a core dump in srun if the signal handler runs during the job allocation causing the step context to be NULL.
 -- Don't fail job if multiple prolog operations in progress at slurmctld restart time.
 -- Burst_buffer/cray: Fix to purge terminated jobs with burst buffer errors.
 -- Burst_buffer/cray: Don't stall scheduling of other jobs while a stage-in is in progress.
 -- Make it possible to query 'extern' step with sstat.
 -- Make 'extern' step show up in the database.
 -- MYSQL - Quote assoc table name in mysql query.
 -- Make SLURM_ARRAY_TASK_MIN, SLURM_ARRAY_TASK_MAX, and SLURM_ARRAY_TASK_STEP environment variables available to PrologSlurmctld and EpilogSlurmctld.
 -- Fix slurmctld bug in which a pending job array could be canceled by a user different from the owner or the administrator.
 -- Support taking node out of FUTURE state with "scontrol reconfig" command.
 -- Sched/backfill: Fix to properly enforce SchedulerParameters of bf_max_job_array_resv.
 -- Enable operator to reset sdiag data.
 -- jobcomp/elasticsearch plugin: Add array_job_id and array_task_id fields.
 -- Remove duplicate #define IS_NODE_POWER_UP.
 -- Added SchedulerParameters option of max_script_size.
 -- Add REQUEST_ADD_EXTERN_PID option to add pid to the slurmstepd's extern step.
 -- Add unique identifiers to anchor tags in HTML generated from the man pages.
 -- Add with_freeipmi option to spec file.
 -- Minor elasticsearch code improvements.

* Changes in Slurm 15.08.3
==========================
 -- Correct Slurm's RPM build if Munge is not installed.
 -- Job array termination status email ExitCode based upon highest exit code from any task in the job array rather than the last task. Also change the state from "Ended" or "Failed" to "Mixed" where appropriate.
 -- Squeue recombines pending job array records only if their name and partition are identical.
 -- Fix some minor leaks in the job info and step info API.
 -- Export missing QOS id when filling in association with the association manager.
 -- Fix invalid reference if a lua job_submit plugin references a default qos when a user doesn't exist in the database.
 -- Use association enforcement in the lua plugin.
 -- Fix a few spots missing defines of accounting_enforce or acct_db_conn in the plugins.
 -- Show requested TRES in scontrol show jobs when job is pending.
 -- Improve sched/backfill support for job features, especially the XOR construct.
 -- Correct scheduling logic for job features option with XOR construct that could delay a job's initiation.
 -- Remove unneeded frees when creating a tres string.
 -- Send a tres_alloc_str for the batch step.
 -- Fix incorrect check for slurmdb_find_tres_count_in_string in various places; it needed to check for INFINITE64 instead of zero.
 -- Don't allow scontrol to create partitions with the name "DEFAULT".
 -- burst_buffer/cray: Change error from "invalid request" to "permission denied" if a non-authorized user tries to create/destroy a persistent buffer.
 -- PrologFlags work: Setting a flag of "Contain" implicitly sets the "Alloc" flag. Fix code path which could prevent execution of the Prolog when the "Alloc" or "Contain" flags were set.
 -- Fix for acct_gather_energy/cray|ibmaem to work with missed enum.
 -- MYSQL - When inserting a job and begin_time is 0 do not set it to submit_time. 0 means the job isn't eligible yet, so we need to treat it so.
 -- MYSQL - Don't display ineligible jobs when querying for a window of time.
 -- Fix creation of advanced reservation of cores on nodes which are DOWN.
 -- Return permission denied if regular user tries to release a job held by an administrator.
 -- MYSQL - Fix rollups for multiple jobs running by the same association in an hour counting multiple times.
 -- burst_buffer/cray plugin - Fix for persistent burst buffer use. Don't call paths if no #DW options.
 -- Modifications to pam_slurm_adopt to work correctly for the "extern" step.
 -- Alphabetize debugflags when printing them out.
 -- Fix systemd's slurmd service from killing slurmstepds on shutdown.
 -- Fixed counter of not indexed jobs; error_cnt post-increment changed to pre-increment.

* Changes in Slurm 15.08.2
==========================
 -- Fix for tracking node state when jobs that have been allocated exclusive access to nodes (i.e. entire nodes) later relinquish some nodes. Nodes would previously appear partly allocated and prevent use by other jobs.
 -- Correct some cgroup paths ("step_batch" vs. "step_4294967294", "step_exter" vs. "step_extern", and "step_extern" vs. "step_4294967295").
 -- Fix advanced reservation core selection logic with network topology.
 -- MYSQL - Remove restriction to have to be at least an operator to query TRES values.
 -- For pending jobs have sacct print 0 for nnodes instead of the bogus 2.
 -- Fix updating job in db after extending job's timelimit past partition's timelimit.
 -- Fix srun -I from flooding the controller with step create requests.
 -- Requeue/hold batch job launch request if job already running (possible if node went to DOWN state, but jobs remained active).
 -- If a job's CPUs/task ratio is increased due to configured MaxMemPerCPU, then increase its allocated CPU count in order to enforce CPU limits.
 -- Don't mark powered down node as not responding. This could be triggered by race condition of the node suspend and ping logic, preventing use of the node.
 -- Don't requeue RPC going out from slurmctld to DOWN nodes (can generate repeating communication errors).
 -- Propagate sbatch "--dist=plane=#" option to srun.
 -- Add acct_gather_energy/ibmaem plugin for systems with IBM Systems Director Active Energy Manager.
 -- Fix spec file to look for mariadb or mysql devel packages for build requirements.
 -- MySQL - Improve the code when asking for jobs in a suspended state.
 -- Fix slurmctld allowing root to see job steps using squeue -s.
 -- Do not send burst buffer stage out email unless the job uses burst buffers.
 -- Fix sacct to not return all jobs if the -j option is given with a trailing ','.
 -- Permit job_submit plugin to set a job's priority.
 -- Fix occasional srun segfault.
 -- Fix issue with sacct printing 0_0 for arrays that had finished in the database but whose start record hadn't made it yet.
 -- sacctmgr - Don't allow default account associations to be removed from a user.
 -- Fix sacct -j, (nothing but a comma) to not return all jobs.
 -- Fixed slurmctld not sending cold-start messages correctly to the database when a cold-start (-c) happens to the slurmctld.
 -- Fix case where the backup slurmdbd would be killed if it had existing connections when it gave up control.
 -- Fix task/cgroup affinity to work correctly with multi-socket single-threaded cores. A regression caused only 1 socket to be used on this kind of node instead of all that were available.
 -- MYSQL - Fix minor issue after an index was added to the database; it would previously take 2 restarts of the slurmdbd to make it stick correctly.
 -- Add hv_to_qos_cond() and qos_rec_to_hv() functions to the Perl interface.
 -- Add new burst_buffer.conf parameters: ValidateTimeout and OtherTimeout. See man page for details.
 -- Fix burst_buffer/cray support for interactive allocations >4GB.
 -- Correct backfill scheduling logic for job with INFINITE time limit.
 -- Fix issue where on a scontrol reconfig all available GRES/TRES would be zeroed out.
 -- Set SLURM_HINT environment variable when --hint is used with sbatch or salloc.
 -- Add scancel -f/--full option to signal all steps including batch script and all of its child processes.
 -- Fix salloc -I to accept an argument.
 -- Avoid reporting more allocated CPUs than exist on a node. This can be triggered by resuming a previously suspended job, resulting in oversubscription of CPUs.
 -- Fix the pty window manager in slurmstepd not to retry IO operation with srun if it read EOF from the connection with it.
 -- sbatch --ntasks option to take precedence over --ntasks-per-node plus node count, as documented. Set SLURM_NTASKS/SLURM_NPROCS environment variables accordingly.
 -- MYSQL - Make sure suspended time is only subtracted from the CPU TRES as it is the only TRES that can be given to another job while suspended.
 -- Clarify how TRESBillingWeights operates on memory and burst buffers.

* Changes in Slurm 15.08.1
==========================
 -- Fix test21.30 and 21.34 to check grpwall better.
 -- Add time to the partition QOS the job is running on instead of just the job QOS.
 -- Print usage for GrpJobs, GrpSubmitJobs and GrpWall even if there is no limit.
 -- If AccountingEnforce=safe is set make sure a job can finish before going over the limit with grpwall on a QOS or association.
 -- burst_buffer/cray - Major updates based upon recent Cray changes.
 -- Improve clean up logic of pmi2 plugin.
 -- Improve job state reason string when required nodes not available.
 -- Fix missing else when packing an update partition message.
 -- Fix srun from inheriting the SLURM_CPU_BIND and SLURM_MEM_BIND environment variables when running in an existing srun (e.g. an srun within an salloc).
 -- Use more flexible mechanism to find json installation.
 -- Make sure safe_limits was initialized before processing limits in the slurmctld.
 -- Fix for burst_buffer/cray to parse type option correctly.
 -- Fix memory error and version number in the nonstop plugin and reservation code.
 -- When requesting GRES in a step, check for correct variable for the count.
 -- Fix issue with GRES in steps so that if you have multiple exclusive steps and you use all the GRES up, instead of reporting the configuration isn't available, hold the requesting step until the GRES is available.
 -- MYSQL - Change debug to print out with DebugFlags=DB_Step instead of debug4.
 -- Simplify code when user is selecting a job/step/array id and removed anomaly when only asking for 1 (task_id was never set to INFINITE).
 -- MYSQL - If user is requesting various task_ids only return requested steps.
 -- Fix issue when tres cnt for energy is 0 for total reported.
 -- Resolved scalability issues of power adaptive scheduling with layouts.
 -- burst_buffer/cray bug - Fix teardown race condition that can result in infinite loop.
 -- Add support for --mail-type=NONE option.
 -- Job "--reboot" option automatically sets exclusive node mode.
 -- Fix memory leak when using PrologFlags=Alloc.
 -- Fix truncation of job reason in squeue.
 -- If a node is in DOWN or DRAIN state, leave it unavailable for allocation when powered down.
 -- Update the slurm.conf man page to better document the nohold_on_prolog_fail variable.
 -- Don't truncate task ID information in "squeue --array/-r" or "sview".
 -- Fix a bug which caused scontrol to core dump when releasing or holding a job by name.
 -- Fix unit conversion bug in slurmd which caused wrong memory calculation for cgroups.
 -- Fix slurmdbd backup to use DbdAddr when contacting the primary.
 -- Fix error in MPI documentation.
 -- Fix to handle arrays with respect to number of jobs submitted. Previously only 1 job was accounted for (against MaxSubmitJob) when an array was submitted.
 -- Correct counting for job array limits; job count limit underflow possible upon cancellation of the master job record.
 -- Combine 2 _valid_uid_gid functions into a single function to avoid divergence.
 -- Pending job array records will be combined into single line by default, even if started and requeued or modified.
 -- Fix sacct --format=nnodes to print out correct information for pending jobs.
 -- Make it so "scontrol update job 1234 qos=''" will set the qos back to the default qos for the association.
 -- Add [Alloc|Req]Nodes to sacct to be more like cpus.
 -- Fix sacct documentation about [Alloc|Req]TRES.
 -- Put node count in TRES string for steps.
 -- Fix issue with wrong protocol version when using the srun --no-allocate option.
 -- Fix TRES counts on GRES on a clean start of the slurmctld.
 -- Add ability to change a job array's maximum running task count: "scontrol update jobid=# arraytaskthrottle=#"

* Changes in Slurm 15.08.0
==========================
 -- Fix issue with frontend systems (outside ALPS or BlueGene) where srun wouldn't get the correct protocol version to launch a step.
 -- Fix for message aggregation return rpcs where none of the messages are intended for the head of the tree.
 -- Fix segfault in sreport when there was no response from the dbd.
 -- ALPS - Fix compile to not link against -ljob and -lexpat with every lib or binary.
 -- Fix testing for CR_Memory when CR_Memory and CR_ONE_TASK_PER_CORE are used with select/linear.
 -- When restarting or reconfiguring the slurmctld, if a job is completing handle accounting correctly to avoid meaningless errors about overflow.
 -- Add AccountingStorageTRES to scontrol show config.
 -- MySQL - Fix minor memory leak if a connection ever goes away while using it.
 -- ALPS - Make it so srun --hint=nomultithread works correctly.
 -- Make MaxTRESPerUser work in sacctmgr.
 -- Fix handling of requeued jobs with steps that are still finishing.
 -- Cleaner copy for PriorityWeightTRES; it also fixes a core dump when trying to free it otherwise.
 -- Add environment variables SLURM_ARRAY_TASK_MAX, SLURM_ARRAY_TASK_MIN, SLURM_ARRAY_TASK_STEP for job arrays.
 -- Fix srun to use the NoInAddrAny TopologyParam option.
 -- Change QOS flag name from PartitionQOS to OverPartQOS to be a better description.
 -- Fix rpmbuild issue on Centos7.

* Changes in Slurm 15.08.0rc1
==============================
 -- Added power_cpufreq layout.
 -- Make complete_batch_script RPC work with message aggregation.
 -- Do not count slurmctld threads waiting in a "throttle" lock against the daemon's thread limit as they are not contending for resources.
 -- Modify slurmctld outgoing RPC logic to support more parallel tasks (up to 85 RPCs and 256 pthreads; the old logic supported up to 21 RPCs and 256 threads). This change can dramatically improve performance for RPCs operating on small node counts.
 -- Increase total backfill scheduler run time in stats_info_response_msg data structure from 32 to 64 bits in order to prevent overflow.
 -- Add NoInAddrAny option to TopologyParam in the slurm.conf which allows binding to the interface returned by gethostname instead of any address on the node, which avoids RSIP issues on Cray systems. This is most likely useful on other systems as well.
 -- Fix memory leak in Slurm::load_jobs perl api call.
 -- Added --noconvert option to sacct, sstat, squeue and sinfo which allows values to be displayed in their original unit types (e.g. 2048M won't be converted to 2G).
 -- Fix spelling of node_rescrs to node_resrcs in Perl API.
 -- Fix node state race condition, UNKNOWN->IDLE without configuration info.
 -- Cray: Disable LDAP references from slurmstepd on job launch for improved scalability.
 -- Remove srun "read header error" due to application termination race condition.
 -- Optimize sacct queries with additional db indexes.
 -- Add SLURM_TOPO_LEN env variable for scontrol show topology.
 -- Add free_mem to node information.
 -- Fix abort of batch launch if prolog is running; wait for prolog instead.
 -- Fix case where job would get the wrong cpu count when using --ntasks-per-core and --cpus-per-task together.
 -- Add TRESBillingWeights to partitions in slurm.conf which allows taking into consideration any TRES Type when calculating the usage of a job.
 -- Add PriorityWeightTRES slurm.conf option to be able to configure priority factors for TRES types.

* Changes in Slurm 15.08.0pre6
==============================
 -- Add scontrol options to view and modify layouts tables.
 -- Add MsgAggregationParams which controls a reverse tree to the slurmctld which can be used to aggregate messages to the slurmctld into a single message to reduce communication to the slurmctld. Currently only epilog complete messages and node registration messages use this logic.
 -- Add sacct and squeue options to print trackable resources.
 -- Add sacctmgr option to display trackable resources.
 -- If an salloc or srun command is executed on a "front-end" configuration, that job will be assigned a slurmd shepherd daemon on the same host as used to execute the command when possible rather than an slurmd daemon on an arbitrary front-end node.
 -- Add srun --accel-bind option to control how tasks are bound to GPUs and NIC Generic RESources (GRES).
 -- gres/nic plugin modified to set OMPI_MCA_btl_openib_if_include environment variable based upon allocated devices (usable with OpenMPI and Mellanox).
 -- Make it so info options for srun/salloc/sbatch print with just 1 -v instead of 4.
 -- Add "no_backup_scheduling" SchedulerParameter to prevent jobs from being scheduled when the backup takes over. Jobs can be submitted, modified and cancelled while the backup is in control.
 -- Enable native Slurm backup controller to reside on an external Cray node when the "no_backup_scheduling" SchedulerParameter is used.
 -- Removed TICKET_BASED fairshare. Consider using the FAIR_TREE algorithm.
 -- Disable advanced reservation "REPLACE" option on IBM Bluegene systems.
 -- Add support for controlling distribution of tasks across cores (in addition to existing support for nodes and sockets), e.g. "block", "cyclic" or "fcyclic" task distribution at 3 levels in the hardware rather than 2.
 -- Create db index on _assoc_table.acct. Deleting accounts that didn't have jobs in the job table could take a long time.
 -- The performance of Profiling with HDF5 is improved. In addition, internal structures are changed to make it easier to add new profile types, particularly energy sensors. sh5util will continue to work with either format.
 -- Add partition information to sshare output if the --partition option is specified on the sshare command line.
 -- Add sreport -T/--tres option to identify Trackable RESources (TRES) to report.
 -- Display job in sacct when a single step's cpus are different from the job allocation.
 -- Add association usage information to "scontrol show cache" command output.
 -- MPI/MVAPICH plugin now requires Munge for authentication.
 -- job_submit/lua: Add default_qos fields. Add job record qos. Add partition record allow_qos and qos_char fields.

* Changes in Slurm 15.08.0pre5
==============================
 -- Add jobcomp/elasticsearch plugin. Libcurl is required for build.
    Configure the server as follows:
    "JobCompLoc=http://YOUR_ELASTICSEARCH_SERVER:9200".
 -- Scancel logic largely re-written to better support job arrays.
 -- Added a slurm.conf parameter PrologEpilogTimeout to control how long prolog/epilog can run.
 -- Added TRES (Trackable resources) to track Mem, GRES, license, etc. utilization.
 -- Add re-entrant versions of glibc time functions (e.g. localtime) to Slurm in order to eliminate rare deadlock of slurmstepd fork and exec calls.
 -- Constrain kernel memory (if available) in cgroups.
 -- Add PrologFlags option of "Contain" to create a proctrack container at job resource allocation time.
 -- Disable the OOM Killer in slurmd and slurmstepd's memory cgroup when using MemSpecLimit.

* Changes in Slurm 15.08.0pre4
==============================
 -- Burst_buffer/cray - Convert logic to use new commands/API names (e.g. "dws_setup" rather than "bbs_setup").
 -- Remove the MinJobAge size limitation. It can now exceed 65533 as it is represented using an unsigned integer.
 -- Verify that all plugin version numbers are identical to the component attempting to load them. Without this verification, the plugin can reference Slurm functions in the caller which differ (e.g. the underlying function's arguments could have changed between Slurm versions). NOTE: All plugins (except SPANK) must be built against the identical version of Slurm in order to be used by any Slurm command or daemon. This should eliminate some very difficult to diagnose problems due to use of old plugins.
 -- Increase the MAX_PACK_MEM_LEN define to avoid PMI2 failure when fencing with large amount of ranks (to 1GB).
 -- Requests by normal user to reset a job priority (even to lower it) will result in an error saying to change the job's nice value instead.
 -- SPANK naming changes: For environment variables set using the spank_job_control_setenv() function, the values were available in the slurm_spank_job_prolog() and slurm_spank_job_epilog() functions using getenv where the name was given a prefix of "SPANK_". That prefix has been removed for consistency with the environment variables available in the Prolog and Epilog scripts.
 -- Major additions to the layouts framework code.
 -- Add "TopologyParam" configuration parameter. Optional value of "dragonfly" is supported.
 -- Optimize resource allocation for systems with dragonfly networks.
 -- Add "--thread-spec" option to salloc, sbatch and srun commands. This is the count of threads reserved for system use per node.
 -- job_submit/lua: Enable reading and writing job environment variables. For example: if (job_desc.environment.LANGUAGE == "en_US") then ...
 -- Added two new APIs slurm_job_cpus_allocated_str_on_node_id() and slurm_job_cpus_allocated_str_on_node() to print the CPU ids allocated to a job.
 -- Specialized memory (a node's MemSpecLimit configuration parameter) is not available for allocation to jobs.
 -- Modify scontrol update job to allow jobid specification without the = sign. 'scontrol update job=123 ...' and 'scontrol update job 123 ...' are both valid syntax.
 -- Archive a month at a time when there are lots of records to archive.
 -- Introduce new sbatch option '--kill-on-invalid-dep=yes|no' which allows users to specify which behavior they want if a job dependency is not satisfied.
 -- Add Slurmdb::qos_get() interface to perl api.
 -- If a job fails to start, set the requeue reason to be: job requeued in held state.
 -- Implemented a new MPI key,value PMIX_RING() exchange algorithm as an alternative to PMI2.
 -- Remove possible deadlocks in the slurmctld when the slurmdbd is busy archiving/purging.
 -- Add DB_ARCHIVE debug flag for filtering out debug messages in the slurmdbd when the slurmdbd is archiving/purging.
 -- Fix some power_save mode issues: Parsing of SuspendTime in slurm.conf was bad, powered down nodes would get set non-responding if there was an in-flight message, and permit nodes to be powered down from any state.
 -- Initialize variables in consumable resource plugin to prevent core dump.

* Changes in Slurm 15.08.0pre3
==============================
 -- CRAY - addition of acct_gather_energy/cray plugin.
 -- Add job credential to "Run Prolog" RPC used with a configuration of PrologFlags=alloc. This allows the Prolog to be passed identification of GPUs allocated to the job.
 -- Add SLURM_JOB_CONSTRAINTS to environment variables available to the Prolog.
 -- Added "--mail=stage_out" option to job submission commands to notify user when burst buffer stage out is complete.
 -- Require a "Reason" when using scontrol to set a node state to DOWN.
 -- Mail notifications on job BEGIN, END and FAIL now apply to a job array as a whole rather than generating individual email messages for each task in the job array.
 -- task/affinity - Fix memory binding to NUMA with cpusets.
 -- Display job's estimated NodeCount based off of partition's configured resources rather than the whole system's.
 -- Add AuthInfo option of "cred_expire=#" to specify the lifetime of a job step credential. The default value was changed from 1200 to 120 seconds.
 -- Set the delay time for job requeue to the job credential lifetime (120 seconds by default). This ensures that prolog runs on every node when a job is requeued. (This change will slow down launch of re-queued jobs).
 -- Remove srun --max-launch-time option. The option has not been functional since Slurm version 2.0.
 -- Add sockets and cores to TaskPluginParams' autobind option.
 -- Added LaunchParameters configuration parameter. Have srun command test locally for the executable file if LaunchParameters=test_exec or the environment variable SLURM_TEST_EXEC is set.
    Without this an invalid command will generate one error message per task launched.
 -- Fix the slurm /etc/init.d script to return 0 upon stopping the daemons and return 1 in case of failure.
 -- Add the ability for a compute node to be allocated to multiple jobs, but restricted to a single user. Added "--exclusive=user" option to salloc, sbatch and srun commands. Added "owner" field to node record, visible using the scontrol and sview commands. Added new partition configuration parameter "ExclusiveUser=yes|no".

* Changes in Slurm 15.08.0pre2
==============================
 -- Add the environment variables SLURM_JOB_ACCOUNT, SLURM_JOB_QOS and SLURM_JOB_RESERVATION in the batch/srun jobs.
 -- Add sview burst buffer display.
 -- Properly enforce partition Shared=YES option. Previously oversubscribing resources required gang scheduling to be configured.
 -- Enable per-partition gang scheduling resource resolution (e.g. the partition can have SelectTypeParameters=CR_CORE, while the global value is CR_SOCKET).
 -- Make it so a newer version of a slurmstepd can talk to an older srun.
    allocation. Nodes could have been added while waiting for an allocation.
 -- Expanded --cpu-freq parameters to include min-max:governor specifications. --cpu-freq now supported on salloc and sbatch.
 -- Add support for optimized job allocations with respect to SGI Hypercube topology. NOTE: Only supported with select/linear plugin. NOTE: The program contribs/sgi/netloc_to_topology can be used to build Slurm's topology.conf file.
 -- Remove 64k validation of incoming RPC nodelist size. Validated at 64MB when unpacking.
 -- In slurmstepd() add the user's primary group if it is not part of the groups sent from the client.
 -- Added BurstBuffer field to advanced reservations.
 -- For advanced reservation, replace flag "License_only" with flag "Any_Nodes". It can be used to indicate that an advanced reservation's resources (licenses and/or burst buffers) can be used with any compute nodes.
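The "--exclusive=user" and "cred_expire" entries above both map to configuration settings. A minimal sketch, assuming invented partition and node names, might be:

```
# Hypothetical slurm.conf fragment -- partition/node names invented
AuthType=auth/munge
AuthInfo=cred_expire=120                 # job step credential lifetime, seconds

# Nodes in this partition may be shared by jobs, but only by one user at a time
PartitionName=shared Nodes=tux[001-064] ExclusiveUser=yes Default=yes
```

Per the entries above, a user can also request the same behavior per job with "srun --exclusive=user", and "scontrol show node" will display the new "owner" field.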
 -- Allow users to specify the srun --resv-ports as 0 in which case no ports will be reserved. The default behaviour is to allocate one port per task.
 -- Interpret a partition configuration of "Nodes=ALL" in slurm.conf as including all nodes defined in the cluster.
 -- Added new configuration parameters PowerParameters and PowerPlugin.
 -- Added power management plugin infrastructure.
 -- If a job has already exceeded one of its QOS/Accounting limits, do not return an error if the user modifies QOS-unrelated job settings.
 -- Added DebugFlags value of "Power".
 -- When caching user ids of AllowGroups use both getgrnam_r() and getgrent_r(), then remove any duplicate entries.
 -- Remove rpm dependency between slurm-pam and slurm-devel.
 -- Remove support for the XCPU (cluster management) package.
 -- Add Slurmdb::jobs_get() interface to perl api.
 -- Performance improvement when sending data from srun to stepds when processing fencing.
 -- Add the ability to specify an arbitrary field separator when running sacct -p or sacct -P. The command line option is --separator.
 -- Introduce slurm.conf parameter to use Proportional Set Size (PSS) instead of RSS to determine the memory footprint of a job. Add a slurm.conf option not to kill jobs that are over the memory limit.
 -- Add job submission command options: --sicp (available for inter-cluster dependencies) and --power (specify power management options) to salloc, sbatch, and srun commands.
 -- Add DebugFlags option of SICP (inter-cluster option logging).
 -- In order to support inter-cluster job dependencies, the MaxJobID configuration parameter default value has been reduced from 4,294,901,760 to 2,147,418,112 and its maximum value is now 2,147,463,647. ANY JOBS WITH A JOB ID ABOVE 2,147,463,647 WILL BE PURGED WHEN SLURM IS UPGRADED FROM AN OLDER VERSION!
 -- Add QOS name to the output of a partition in squeue/scontrol/sview/smap.
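The power management and MaxJobID entries above could be exercised with a fragment like the one below; the plugin name and wattage value are assumptions for illustration, not values confirmed by this changelog:

```
# Hypothetical slurm.conf fragment -- plugin name and values are assumptions
PowerPlugin=power/cray               # assumed plugin name
PowerParameters=cap_watts=100000     # assumed parameter/value
MaxJobID=2147418112                  # new default; maximum is now 2147463647
DebugFlags=Power,SICP                # new debug flags from the entries above
```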
* Changes in Slurm 15.08.0pre1
==============================
 -- Add sbcast support for file transfer to resources allocated to a job step rather than a job allocation.
 -- Change structures with association in them to assoc to save space.
 -- Add support for job dependencies joined with the OR operator (e.g. "--depend=afterok:123?afternotok:124").
 -- Add "--bb" (burst buffer specification) option to salloc, sbatch, and srun.
 -- Added configuration parameters BurstBufferParameters and BurstBufferType.
 -- Added burst_buffer plugin infrastructure (needs many more functions).
 -- When the fanout logic comes across a node that is down, abandon the tree to avoid worst-case scenarios in which the entire branch is down and each node must be tried serially.
 -- Add better error reporting of invalid partitions at submission time.
 -- Move will-run test for multiple clusters from the sbatch code into the API so that it can be used with DRMAA.
 -- If a non-exclusive allocation requests --hint=nomultithread on a CR_CORE/SOCKET system, lay out tasks correctly.
 -- Avoid including unused CPUs in a job's allocation when cores or sockets are allocated.
 -- Added new job state of STOPPED indicating processes have been stopped with a SIGSTOP (using scancel or sview), but the job retains its allocated CPUs. Job state returns to RUNNING when SIGCONT is sent (also using scancel or sview).
 -- Added EioTimeout parameter to slurm.conf. It is the number of seconds srun waits for slurmstepd to close the TCP/IP connection used to relay data between the user application and srun when the user application terminates.
 -- Remove slurmctld/dynalloc plugin as the work was never completed, so it is not worth the effort of continued support at this time.
 -- Remove DynAllocPort configuration parameter.
 -- Add advanced reservation flag of "replace" that causes allocated resources to be replaced with idle resources. This maintains a pool of available resources of constant size (to the extent possible).
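The OR dependency, burst buffer, and EioTimeout entries above could be combined as follows; the plugin name is an assumption and the job IDs are invented:

```
# Hypothetical slurm.conf fragment -- plugin name assumed for illustration
BurstBufferType=burst_buffer/generic   # assumed plugin name
EioTimeout=60                          # seconds srun waits for slurmstepd to
                                       # close the I/O relay connection

# Corresponding submission sketch (job IDs invented):
#   sbatch --bb="capacity=1TB" --depend=afterok:123?afternotok:124 job.sh
```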
 -- Added SchedulerParameters option of "bf_busy_nodes". When selecting resources for pending jobs to reserve for future execution (i.e. the job cannot be started immediately), preferentially select nodes that are in use. This will tend to leave currently idle resources available for backfilling longer running jobs, but may result in allocations having less than optimal network topology. This option is currently only supported by the select/cons_res plugin.
 -- Permit "SuspendTime=NONE" as slurm.conf value rather than only a numeric value to match "scontrol show config" output.
 -- Add the 'scontrol show cache' command which displays the associations in slurmctld.
 -- Test more frequently for node boot completion before starting a job. Provides better responsiveness.
 -- Fix PMI2 singleton initialization.
 -- Permit PreemptType=qos and PreemptMode=suspend,gang to be used together. A high-priority QOS job will now oversubscribe resources and gang schedule, but only if there are insufficient resources for the job to be started without preemption. NOTE: With PreemptType=qos, the partition's Shared=FORCE:# configuration option will permit one more job per resource to be run than specified, but only if started by preemption.
 -- Remove the CR_ALLOCATE_FULL_SOCKET configuration option. It is now the default.
 -- Fix a race condition in PMI2 in which fencing counters can be out of sync.
 -- Increase the MAX_PACK_MEM_LEN define to avoid PMI2 failure when fencing with a large number of ranks.
 -- Add QOS option to a partition. This will allow a partition to have all the limits a QOS has. If a limit is set in both QOSes, the partition QOS will override the job's QOS unless the job's QOS has the OverPartQOS flag set.
 -- The task_dist_states variable has been split into "flags" and "base" components. Added SLURM_DIST_PACK_NODES and SLURM_DIST_NO_PACK_NODES values to give users greater control over task distribution.
    The srun --dist option has been modified to accept "Pack" and "NoPack" options. These options can be used to override the CR_PACK_NODE configuration option.

* Changes in Slurm 14.11.12
===========================
 -- Correct dependency formatting to print array task ids if set.
 -- Fix for configuration of "AuthType=munge" and "AuthInfo=socket=..." with alternate munge socket path.

* Changes in Slurm 14.11.11
===========================
 -- Prevent systemd's slurmd service from killing slurmstepds on shutdown.
 -- Fix the qstat wrapper when a user is removed from the system but still has running jobs.
 -- Log the request to terminate a job at info level if DebugFlags includes the Steps keyword.
 -- Fix potential memory corruption in _slurm_rpc_epilog_complete as well as _slurm_rpc_complete_job_allocation.
 -- Fix incorrectly sized buffer used by jobid2str which could cause a buffer overflow in slurmctld. (Bug 2295.)

* Changes in Slurm 14.11.10
===========================
 -- Fix truncation of job reason in squeue.
 -- If a node is in DOWN or DRAIN state, leave it unavailable for allocation when powered down.
 -- Update the slurm.conf man page to better document the nohold_on_prolog_fail variable.
 -- Don't truncate task ID information in "squeue --array/-r" or "sview".
 -- Fix a bug which caused scontrol to core dump when releasing or holding a job by name.
 -- Fix unit conversion bug in slurmd which caused wrong memory calculation for cgroups.
 -- Fix issue with GRES in steps so that if multiple exclusive steps use up all the GRES, the requesting step is held until GRES become available, instead of reporting that the configuration isn't available.
 -- Fix slurmdbd backup to use DbdAddr when contacting the primary.
 -- Fix error in MPI documentation.
 -- Fix to handle arrays with respect to number of jobs submitted. Previously only 1 job was counted (against MaxSubmitJobs) when an array was submitted.
 -- Correct counting for job array limits; a job count limit underflow was possible upon cancellation of the master job record.
 -- For pending jobs have sacct print 0 for nnodes instead of the bogus 2.
 -- Fix for tracking node state when jobs that have been allocated exclusive access to nodes (i.e. entire nodes) later relinquish some nodes. Nodes would previously appear partly allocated and prevent use by other jobs.
 -- Fix updating job in db after extending job's timelimit past partition's timelimit.
 -- Prevent srun -I from flooding the controller with step create requests.
 -- Requeue/hold batch job launch request if job already running (possible if node went to DOWN state, but jobs remained active).
 -- If a job's CPUs/task ratio is increased due to configured MaxMemPerCPU, then increase its allocated CPU count in order to enforce CPU limits.
 -- Don't mark a powered down node as not responding. This could be triggered by a race condition between the node suspend and ping logic.
 -- Don't requeue RPCs going out from slurmctld to DOWN nodes (can generate repeating communication errors).
 -- Propagate sbatch "--dist=plane=#" option to srun.
 -- Fix sacct to not return all jobs if the -j option is given with a trailing ','.
 -- Permit job_submit plugin to set a job's priority.
 -- Fix occasional srun segfault.
 -- Fix issue with sacct printing 0_0 for arrays that had finished in the database but the start record hadn't made it yet.
 -- Fix sacct -j, (nothing but a comma) to not return all jobs.
 -- Prevent slurmstepd from core dumping if /proc//stat has unexpected format.

* Changes in Slurm 14.11.9
==========================
 -- Correct "sdiag" backfill cycle time calculation if it yields locks. A microsecond value was being treated as a second value resulting in an overflow in the calculation.
 -- Fix segfault when updating timelimit on a job array task.
 -- Fix to job array update logic that can result in a task ID of 4294967294.
 -- Fix job array update; previous logic could fail to update some tasks of a job array for some fields.
 -- CRAY - Fix seg fault if a blade is replaced and slurmctld is restarted.
 -- Fix plane distribution to allocate in blocks rather than cyclically.
 -- squeue - Remove newline from job array ID value printed.
 -- squeue - Enable filtering for job state SPECIAL_EXIT.
 -- Prevent job array task ID from being inappropriately set to NO_VAL.
 -- MYSQL - Make it so you don't have to restart the slurmctld to gain the correct limit when a parent account is root and you remove a subaccount's limit which exists on the parent account.
 -- MYSQL - Close chance of setting the wrong limit on an association when removing a limit from an association on multiple clusters at the same time.
 -- MYSQL - Fix minor memory leak when modifying an association but no change was made.
 -- An srun command line option of either --mem or --mem-per-cpu will override both the SLURM_MEM_PER_CPU and SLURM_MEM_PER_NODE environment variables.
 -- Prevent slurmctld abort on update of an advanced reservation that contains no nodes.
 -- ALPS - Revert commit 2c95e2d22 which also removes commit 2e2de6a4, allowing Cray with the SubAllocate option to work as it did with 2.5.
 -- Properly parse CPU frequency data on POWER systems.
 -- Correct sacct man pages describing the -i option.
 -- Capture salloc/srun information in sdiag statistics.
 -- Fix bug in node selection with topology optimization.
 -- Don't set distribution when srun requests 0 memory.
 -- Read in correct number of nodes from SLURM_HOSTFILE when specifying nodes and --distribution=arbitrary.
 -- Fix segfault in Bluegene setups where RebootQOSList is defined in bluegene.conf and accounting is not set up.
 -- MYSQL - Update mod_time when updating a start job record or adding one.
 -- MYSQL - Fix issue where, if an association id ever changes while at least a portion of a job array is pending after its initial start in the database, another row could be created for the remaining array instead of using the already existing row.
 -- Fix scheduling anomaly with job arrays submitted to multiple partitions; jobs could be started out of priority order.
 -- If a host has suspended jobs do not reboot it. Reboot only hosts with no jobs in any state.
 -- ALPS - Fix issue when using --exclusive flag on srun to do the correct thing (-F exclusive) instead of -F share.
 -- Fix various memory leaks in the Perl API.
 -- Fix a bug in the controller which displayed jobs in CF state as RUNNING.
 -- Preserve advanced _core_ reservation when nodes are added/removed/resized on slurmctld restart. Rebuild core_bitmap as needed.
 -- Fix for non-standard Munge port location for srun/pmi use.
 -- Fix gang scheduling/preemption issue that could cancel a job at startup.
 -- Fix a bug in squeue which prevented squeue -tPD from printing array jobs.
 -- Sort job arrays in job queue according to array_task_id when priorities are equal.
 -- Fix segfault in sreport when there was no response from the dbd.
 -- ALPS - Fix compile to not link against -ljob and -lexpat with every lib or binary.
 -- Fix testing for CR_Memory when CR_Memory and CR_ONE_TASK_PER_CORE are used with select/linear.
 -- MySQL - Fix minor memory leak if a connection ever goes away while in use.
 -- ALPS - Make it so srun --hint=nomultithread works correctly.
 -- Prevent job array task ID from being reported as NO_VAL if the last task in the array gets requeued.
 -- Fix some potential deadlock issues when state files don't exist in the association manager.
 -- Correct RebootProgram logic when executed outside of a maintenance reservation.
 -- Requeue job if possible when slurmstepd aborts.

* Changes in Slurm 14.11.8
==========================
 -- Eliminate need for user to set user_id on job_update calls.
 -- Correct list of unavailable nodes reported in a job's "reason" field when that job cannot start.
 -- Map job --mem-per-cpu=0 to --mem=0.
 -- Fix squeue -o %m and %d unit conversion to Megabytes.
 -- Fix issue with incorrect time calculation in the priority plugin when a job runs past its time limit.
 -- Prevent users from setting a job's partition to an invalid partition.
 -- Fix sreport core dump when requesting 'job SizesByAccount grouping=individual'.
 -- select/linear: Correct count of CPUs allocated to a job on a system with hyperthreads.
 -- Fix race condition where the last array task might not get updated in the db.
 -- CRAY - Remove libpmi from rpm install.
 -- Fix squeue -o %X output to correctly handle NO_VAL and suffix.
 -- When deleting a job from the system, set the job_id to 0 to avoid memory corruption if a thread uses the pointer, basing validity off the id.
 -- Fix issue where sbatch would set ntasks-per-node to 0, making any subsequent srun cause a divide-by-zero error.
 -- switch/cray: Refine logic to set PMI_CRAY_NO_SMP_ENV environment variable.
 -- When sacctmgr loads archives with a version less than 14.11, set the array task id to NO_VAL so sacct can display the job ids correctly.
 -- When using the memory cgroup, if a task uses more memory than requested the failures are logged into the memory.failcnt count file by cgroup and the user is notified about it by slurmstepd.
 -- Fix scheduling inconsistency with GRES bound to specific CPUs.
 -- If a user belongs to a group which has split entries in /etc/group, search for the username in all groups.
 -- Do not consider nodes explicitly powered up as DOWN with reason of "Node unexpectedly rebooted".
 -- Use correct slurmd spooldir when creating cpu-frequency locks.
 -- Note that TICKET_BASED fairshare will be deprecated in the future. Consider using the FAIR_TREE algorithm instead.
 -- Set job's reason to BadConstraints when the job can't run on any node.
 -- Prevent abort on update of reservation with no nodes (licenses only).
 -- Prevent slurmctld from dumping core if job_resrcs is missing in the job data structure.
 -- Fix squeue to print array task ids according to the man page when SLURM_BITSTR_LEN is defined in the environment.
 -- In squeue, sort jobs based on array job ID if available.
 -- Fix the calculation of job energy by not including the NO_VAL values.
 -- Advanced reservation fixes: enable update of bluegene reservation, avoid abort on multi-core reservations.
 -- Set the totalview_stepid to the value of the job step instead of NO_VAL.
 -- Fix slurmdbd core dump if the daemon does not have a connection with the database.
 -- Display error message when attempting to modify the priority of a held job.
 -- Backfill scheduler: The configured backfill_interval value (default 30 seconds) is now interpreted as a maximum run time for the backfill scheduler. Once reached, the scheduler will build a new job queue and start over, even if not all jobs have been tested.
 -- Backfill scheduler now considers OverTimeLimit and KillWait configuration parameters to estimate when running jobs will exit.
 -- Correct task layout with CR_Pack_Node option and more than 1 CPU per task.
 -- Fix the scontrol man page describing the release argument.
 -- When job QOS is modified, do so before attempting to change the partition in order to validate the partition's Allow/DenyQOS parameter.

* Changes in Slurm 14.11.7
==========================
 -- Initialize some variables used with the srun --no-alloc option that may cause random failures.
 -- Add SchedulerParameters option of sched_min_interval that controls the minimum time interval between any job scheduling action. The default value is zero (disabled).
 -- Change default SchedulerParameters=max_sched_time from 4 seconds to 2.
 -- Refactor scancel so that all pending jobs are cancelled before starting cancellation of running jobs. Otherwise they happen in parallel and the pending jobs can be scheduled on resources as the running jobs are being cancelled.
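The scheduler tuning knobs mentioned above (backfill_interval, sched_min_interval, max_sched_time) are all SchedulerParameters values; a hypothetical fragment might read (the interval values are invented for illustration, not recommended defaults):

```
# Hypothetical slurm.conf fragment -- values invented for illustration
SchedulerType=sched/backfill
SchedulerParameters=bf_interval=30,sched_min_interval=100,max_sched_time=2
```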
 -- ALPS - Add new cray.conf variable NoAPIDSignalOnKill. When set to yes this will make it so the slurmctld will not signal the apids in a batch job. Instead it relies on the rpc coming from the slurmctld to kill the job to end things correctly.
 -- ALPS - Have the slurmstepd running a batch job wait for an ALPS release before ending the job.
 -- Initialize variables in consumable resource plugin to prevent core dump.
 -- Fix scancel bug which could return an error on attempt to signal a job step.
 -- In slurmctld communication agent, make the thread timeout be the configured value of MessageTimeout rather than 30 seconds.
 -- The sshare -U/--Users only flag was used uninitialized.
 -- On Cray systems, add "plugstack.conf.template" sample SPANK configuration file.
 -- BLUEGENE - Set DB2NOEXITLIST when starting the slurmctld daemon to avoid random crashing in db2 when the slurmctld is exiting.
 -- Make full node reservations display the core count correctly instead of the cpu count.
 -- Preserve original errno on execve() failure in task plugin.
 -- Add SLURM_JOB_NAME env variable to an salloc's environment.
 -- Overwrite SLURM_JOB_NAME in an srun when it gets an allocation.
 -- Make sure each job has a wckey if that is something that is tracked.
 -- Make sure old step data is cleared when a job is requeued.
 -- Load libtinfo as needed when building ncurses tools.
 -- Fix small memory leak in backup controller.
 -- Fix segfault when backup controller takes control for the second time.
 -- Cray - Fix backup controller running native Slurm.
 -- Provide prototypes for init_setproctitle()/fini_setproctitle() on NetBSD.
 -- Add configuration test to find out the full path to the su command.
 -- preempt/job_prio plugin: Fix for possible infinite loop when identifying preemptable jobs.
 -- preempt/job_prio plugin: Implement the concept of Warm-up Time here. Use the QoS GraceTime as the amount of time to wait before preempting. Basically, skip preemption if your time is not up.
 -- Make srun wait KillWait time when a task is cancelled.
 -- switch/cray: Revert logic added to 14.11.6 that set "PMI_CRAY_NO_SMP_ENV=1" if CR_PACK_NODES is configured.

* Changes in Slurm 14.11.6
==========================
 -- If SchedulerParameters value of bf_min_age_reserve is configured, then a newly submitted job can start immediately even if there is a higher priority non-runnable job which has been waiting for less time than bf_min_age_reserve.
 -- qsub wrapper modified to export "all" with -V option.
 -- RequeueExit and RequeueExitHold configuration parameters modified to accept numeric ranges. For example "RequeueExit=1,2,3,4" and "RequeueExit=1-4" are equivalent.
 -- Correct the job array specification parser to accept brackets in job array expression (e.g. "123_[4,7-9]").
 -- Fix for misleading job submit failure errors sent to users. The previous error could indicate why specific nodes could not be used (e.g. too small memory) when other nodes could be used, but were not for another reason.
 -- Fix squeue --array to display the array elements correctly when the % separator is specified at array submission time.
 -- Fix priority not being calculated correctly due to memory issues.
 -- Fix a transient pending reason 'JobId=job_id has invalid QOS'.
 -- A non-administrator change to job priority will not be persistent except for holding the job. Users wanting to change a job priority on a persistent basis should reset its "nice" value.
 -- Print buffer sizes as unsigned values when packing messages fails.
 -- Fix race condition where sprio would print factors without weights applied.
 -- Document the sacct option JobIDRaw which for arrays prints the jobid instead of the arrayTaskId.
 -- Allow users to modify MinCPUsNode, MinMemoryNode and MinTmpDiskNode of their own jobs.
 -- Increase the jobid print field in SQUEUE_FORMAT in opt_modulefiles_slurm.in.
 -- Enable compiling without optimizations and with debugging symbols by default.
    Disable this by configuring with --disable-debug.
 -- job_submit/lua plugin: Add mail_type and mail_user fields.
 -- Correct output message from sshare.
 -- Use standard statvfs(2) syscall if available, in preference to non-standard statfs.
 -- Add a new option -U/--Users to sshare to display only user information; parents and ancestors are not printed.
 -- Purge 50000 records at a time so that locks can be released periodically.
 -- Fix potentially uninitialized variables.
 -- ALPS - Fix issue where a frontend node could become unresponsive and never be added back into the system.
 -- Gate epilog complete messages as done with other messages.
 -- If we have more than a certain number of agents (50), wait longer when gating rpcs.
 -- FrontEnd - Ping non-responding or down nodes.
 -- switch/cray: If CR_PACK_NODES is configured, then set the environment variable "PMI_CRAY_NO_SMP_ENV=1".
 -- Fix invalid memory reference in SlurmDBD when putting a node up.
 -- Allow opening of plugstack.conf even when it is a symlink.
 -- Fix scontrol reboot so that rebooted nodes will not be set down with reason 'Node xyz unexpectedly rebooted' but will be correctly put back into service.
 -- CRAY - Throttle the post NHC operations so as not to hog the job write lock if many steps/jobs finish at once.
 -- Disable changes to GRES count while jobs are running on the node.
 -- CRAY - Fix issue with scontrol reconfig.
 -- slurmd: Remove wrong reporting of "Error reading step ... memory limit". The logic was treating success as an error.
 -- Eliminate "Node ping apparently hung" error messages.
 -- Fix average CPU frequency calculation.
 -- When allocating resources with resolution of sockets, charge the job for all CPUs on allocated sockets rather than just the CPUs on used cores.
 -- Prevent slurmdbd error if a cluster is added or removed while rollup is in progress. Removing a cluster can cause slurmdbd to abort. Adding a cluster can cause the slurmdbd rollup to hang.
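The new numeric-range support for RequeueExit/RequeueExitHold described above can be sketched as follows (the exit-code choices are invented for illustration):

```
# Hypothetical slurm.conf fragment -- exit codes invented for illustration
# Requeue batch jobs that exit with codes 1 through 4
RequeueExit=1-4
# Requeue and hold (special_exit state) jobs exiting with code 100
RequeueExitHold=100
```

Per the entry above, "RequeueExit=1,2,3,4" would be an equivalent spelling of the first line.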
 -- sview - When right-clicking on a tab, make sure we don't display the page list, but only the column list.
 -- FRONTEND - If doing a clean start, make sure the nodes are brought up in the database.
 -- MySQL - Fix issue when using TrackSlurmctldDown and nodes are down at the same time; don't double-bill the down time.
 -- MySQL - Various memory leak fixes.
 -- sreport - Fix Energy displays.
 -- Fix node manager logic to keep an unexpectedly rebooted node in state NODE_STATE_DOWN even if already down when rebooted.
 -- Fix for array jobs submitted to multiple partitions not starting.
 -- CRAY - Enable ALPS mpp compatibility code in sbatch for native Slurm.
 -- ALPS - Move basil_inventory to a less confusing function.
 -- Add SchedulerParameters option of "sched_max_job_start=" to limit the number of jobs that can be started in any single execution of the main scheduling logic.
 -- Fixed compiler warnings generated by gcc version >= 4.6.
 -- sbatch now stops parsing the script for "#SBATCH" directives after the first command, which matches the documentation.
 -- Overwrite SLURM_JOB_NAME in sbatch if it already exists in the environment, using the one specified by the command line option --job-name.
 -- Remove xmalloc_nz from unpack functions. If the unpack ever failed, the free afterwards would not have zeroed out memory on the variables that didn't get unpacked.
 -- Improve database interaction from the controller.
 -- Fix for data shift when loading job archives.
 -- ALPS - Added new SchedulerParameters=inventory_interval to specify how often an inventory request is handled.
 -- ALPS - Don't run a release on a reservation on the slurmctld for a batch job. This is already handled on the stepd when the script finishes.

* Changes in Slurm 14.11.5
==========================
 -- Correct the squeue command, taking into account that a node can have a NULL name if it is not in DNS but still in slurm.conf.
 -- Fix slurmdbd regression which would cause a segfault when a node is set down with no reason.
 -- BGQ - Fix issue with job arrays not being handled correctly in the runjob_mux plugin.
 -- Print FAIR_TREE, if configured, in "scontrol show config" output for PriorityFlags.
 -- Add SLURM_JOB_GPUS environment variable to those available in the Prolog.
 -- Load lua-5.2 library if using lua5.2 for the lua job submit plugin.
 -- GRES logic: Prevent bad node_offset due to not preserving the no_consume flag.
 -- Fix wrong variables used in the wrapper functions needed for systems that don't support strong_alias.
 -- Fix code for Apple computers where SOL_TCP is not defined.
 -- Cray/BASIL - Check for mysql credentials in /root/.my.cnf.
 -- Fix sprio showing the wrong priority for job arrays until priority is recalculated.
 -- Account to the batch step all CPUs that are allocated to a job, not just one, since the batch step has access to all CPUs like other steps.
 -- Fix a job getting EligibleTime set before meeting dependency requirements.
 -- Correct the initialization of the QOS MinCPUs per-job limit.
 -- Set the debug level of information messages in the cgroup plugin to debug2.
 -- For a job running under a debugger, if the exec of the task fails, then cancel its I/O and abort immediately rather than waiting 60 seconds for the I/O timeout.
 -- Fix associations not getting the default qos set until after a restart.
 -- Set the value of total_cpus not to be zero before invoking acct_policy_job_runnable_post_select.
 -- MySQL - When requesting cluster resources, only return resources for the cluster(s) requested.
 -- Add TaskPluginParam=autobind=threads option to set a default binding in the case that "auto binding" doesn't find a match.
 -- Introduce a new SchedulerParameters variable nohold_on_prolog_fail. If configured, jobs requeued due to a Prolog failure are not placed in a held state.
 -- Make it so sched_params isn't read over and over when an epilog complete message comes in.
 -- Fix squeue -L not filtering out jobs with licenses.
 -- Changed the implementation of xcpuinfo_abs_to_mac() to be identical to _abs_to_mac() to fix CPU allocation using the cpuset cgroup.
 -- Improve the explanation of the unbuffered feature in the srun man page.
 -- Make taskplugin=cgroup work for core specialization. Previously it was necessary to have task/cgroup.
 -- Fix reports not using the month usage table.
 -- BGQ - Sanity check added for translating small blocks into slurm bg_records.
 -- Fix bug preventing the requeue/hold or requeue/special_exit of a job from the completing state.
 -- Cray - Fix for launching batch step within an existing job allocation.
 -- Cray - Add ALPS_APP_ID_ENV environment variable.
 -- Increase maximum MaxArraySize configuration parameter value from 1,000,001 to 4,000,001.
 -- Added new SchedulerParameters value of bf_min_age_reserve. The backfill scheduler will not reserve resources for pending jobs until they have been pending for at least the specified number of seconds. This can be valuable if jobs lack time limits or all time limits have the same value.
 -- Fix support for --mem=0 (all memory of a node) with the select/cons_res plugin.
 -- Fix bug that can permit someone to kill a job array belonging to another user.
 -- Don't set the default partition on a license-only reservation.
 -- Show a NodeCnt=0, instead of NO_VAL, in "scontrol show res" for a license-only reservation.
 -- BGQ - When using static small blocks, make sure the block is restored to its original state when clearing the job.
 -- Start job allocation using the lowest numbered sockets for block task distribution for consistency with cyclic distribution.

* Changes in Slurm 14.11.4
==========================
 -- Make sure assoc_mgr locks are initialized correctly.
 -- Correct check of enforcement when filling in an association.
 -- Make sacctmgr print out classification correctly for clusters.
 -- Add array_task_str to the perlapi job info.
 -- Fix for slurmctld abort with GRES types configured and no CPU binding.
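The bf_min_age_reserve and MaxArraySize entries above could be set together as follows (the 600-second age threshold is an invented example value):

```
# Hypothetical slurm.conf fragment -- threshold value invented
# Do not reserve backfill resources for jobs pending less than 10 minutes
SchedulerParameters=bf_min_age_reserve=600
MaxArraySize=4000001        # new maximum permitted value
```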
 -- Fix for GRES scheduling where count > 1 per topology type (or GRES types).
 -- Make CR_ONE_TASK_PER_CORE work correctly with task/affinity.
 -- job_submit/pbs - Fix possible deadlock.
 -- job_submit/lua - Add "alloc_node" to job information available.
 -- Fix memory leak in mysql accounting when usage rollup happens.
 -- If users specify ALL together with other variables using the --export sbatch/srun command line option, propagate the user's environment to the execution side.
 -- Fix job array scheduling anomaly that can stop scheduling of valid tasks.
 -- Fix perl api tests for libslurmdb to work correctly.
 -- Remove some misleading logs related to non-consumable GRES.
 -- Allow --ignore-pbs to take effect when read as an #SBATCH argument.
 -- Fix Slurmdb::clusters_get() in the perl api not returning information.
 -- Fix TaskPluginParam=Cpusets logging an error message about not being able to remove the cpuset dir which was already removed by the release_agent.
 -- Fix sorting by time left in squeue.
 -- Fix the file name substitution for job stderr when %A, %a, %j and %u are specified.
 -- Remove minor warning when compiling slurmstepd.
 -- Fix database resources so they can add new clusters to them after they have initially been added.
 -- Use the slurm_getpwuid_r wrapper of getpwuid_r to handle possible interrupts.
 -- Correct the scontrol man page and command listing which node states can be set by the command.
 -- Stop sacct from printing non-existent stat information for Front End systems.
 -- Correct srun and acct_gather.conf man pages, mention Filesystem instead of Lustre.
 -- When a job using multiple partitions starts, send to slurmdbd only the partition in which the job runs.
 -- ALPS - Fix depth for MemoryAllocation in BASIL with CLE 5.2.3.
 -- Fix assoc_mgr hash to deal with users that don't have a uid yet when making reservations.
 -- When a job uses multiple partitions, set the environment variable SLURM_JOB_PARTITION to be the one in which the job started.
-- Print spurious message about the absence of cgroup.conf at log level debug2 instead of info.
-- Enable CUDA v7.0+ use with a Slurm configuration of TaskPlugin=task/cgroup ConstrainDevices=yes (in cgroup.conf). With that configuration CUDA_VISIBLE_DEVICES will start at 0 rather than the device number.
-- Fix job array logic that can cause slurmctld to abort.
-- Report job "shared" field properly in scontrol, squeue, and sview.
-- If a job is requeued because of RequeueExit or RequeueExitHold, send a REQUEUED event to slurmdbd.
-- Fix build if hwloc is in a non-standard location.
-- Fix slurmctld job recovery logic which could cause the last task in a job array to be lost.
-- Fix slurmctld initialization problem which could cause requeue of the last task in a job array to fail if executed prior to the slurmctld loading the maximum size of a job array into a variable in the job_mgr.c module.
-- Fix fatal in controller when deleting a user association of a user who had been previously removed from the system.
-- MySQL - If a node state and reason are the same on a node state change, don't insert a new row in the event table.
-- Fix issue with "sreport cluster AccountUtilizationByUser" when using PrivateData=users.
-- Fix perlapi tests for libslurm perl module.
-- MySQL - Fix potential issue when PrivateData=Usage and a normal user runs certain sreport reports.

* Changes in Slurm 14.11.3
==========================
-- Prevent vestigial job record when canceling a pending job array record.
-- Fixed squeue core dump.
-- Fix job array hash table bug, could result in slurmctld infinite loop or invalid memory reference.
-- In srun, honor ntasks_per_node before looking at cpu count when the user doesn't request a number of tasks.
-- Fix ghost job when submitting a job after all jobids are exhausted.
-- MySQL - Enhanced coordinator security checks.
-- Fix for task/affinity if an admin configures a node for having threads but then sets CPUs to only represent the number of cores on the node. -- Make it so previous versions of salloc/srun work with newer versions of Slurm daemons. -- Avoid delay on commit for PMI rank 0 to improve performance with some MPI implementations. -- auth/munge - Correct logic to read old format AccountingStoragePass. -- Reset node "RESERVED" state as appropriate when deleting a maintenance reservation. -- Prevent a job manually suspended from being resumed by gang scheduler once free resources are available. -- Prevent invalid job array task ID value if a task is started using gang scheduling. -- Fixes for clean build on FreeBSD. -- Fix documentation bugs in slurm.conf.5. DenyAccount should be DenyAccounts. -- For backward compatibility with older versions of OMPI not compiled with --with-pmi restore the SLURM_STEP_RESV_PORTS in the job environment. -- Update the html documentation describing the integration with openmpi. -- Fix sacct when searching by nodelist. -- Fix cosmetic info statements when dealing with a job array task instead of a normal job. -- Fix segfault with job arrays. -- Correct the sbatch pbs parser to process -j. -- BGQ - Put print statement under a DebugFlag. This was just an oversight. -- BLUEGENE - Remove check that would erroneously remove the CONFIGURING flag from a job while the job is waiting for a block to boot. -- Fix segfault in slurmstepd when job exceeded memory limit. -- Fix race condition that could start a job that is dependent upon a job array before all tasks of that job array complete. -- PMI2 race condition fix. * Changes in Slurm 14.11.2 ========================== -- Fix Centos5 compile errors. -- Fix issue with association hash not getting the correct index which could result in seg fault. -- Fix salloc/sbatch -B segfault. -- Avoid huge malloc if GRES configured with "Type" and huge "Count". 
-- Fix jobs from starting in overlapping reservations that won't finish before a "maint" reservation begins.
-- When a node gets drained while in the mixed state, display its status as draining in sinfo output.
-- Allow priority/multifactor to work with sched/wiki(2) if all priorities have no weight. This allows association and QOS decay limits to work.
-- Fix "squeue --start" to override the SQUEUE_FORMAT env variable.
-- Fix scancel to be able to cancel multiple jobs that are space delimited.
-- Log Cray MPI job calling exit() without mpi_fini(), but do not treat it as a fatal error. This partially reverts logic added in version 14.03.9.
-- sview - Fix displaying of suspended steps' elapsed times.
-- Increase number of messages that get cached before throwing them away when the DBD is down.
-- Restore GRES functionality with the select/linear plugin. It was broken in version 14.03.10.
-- Fix bug with GRES having multiple types that can cause slurmctld abort.
-- Fix squeue issue with not recognizing "localhost" in the --nodelist option.
-- Make sure the bitstrings for a partition's Allow/DenyQOS are up to date when running from cache.
-- Add smap support for job arrays and larger job ID values.
-- Fix possible race condition when attempting to use QOS on a system running accounting_storage/filetxt.
-- Fix issue with accounting_storage/filetxt and job arrays not being printed correctly.
-- In proctrack/linuxproc and proctrack/pgid, check the result of strtol() for an error condition rather than errno, which might have a vestigial error code.
-- Improve information recording for jobs deferred due to an advanced reservation.
-- Export eio_new_initial_obj to the plugins and initialize kvs_seq on mpi/pmi2 setup to support launching.

* Changes in Slurm 14.11.1
==========================
-- Get libs correct when doing the xtree/xhash make check.
-- Update xhash/tree make check to work correctly with current code.
-- Remove the reference 'experimental' for the jobacct_gather/cgroup plugin.
-- Add QOS manipulation examples to the qos.html documentation page.
-- If 'squeue -w node_name' specifies an unknown host name, print an error message and return 1.
-- Fix race condition in job_submit plugin logic that could cause slurmctld to deadlock.
-- Job wait reason of "ReqNodeNotAvail" expanded to identify unavailable nodes (e.g. "ReqNodeNotAvail(Unavailable:tux[3-6])").

* Changes in Slurm 14.11.0
==========================
-- ALPS - Fix issue with core_spec warning.
-- Allow multiple partitions to be specified in sinfo -p.
-- Install the service files in /usr/lib/systemd/system.
-- MYSQL - Add id_array_job and id_resv keys to $CLUSTER_job_table. THIS COULD TAKE A WHILE TO CREATE THE KEYS SO BE PATIENT.
-- CRAY - Resize bitmaps on a restart if we find we have more blades than before.
-- Add new eio API function for removing unused connections.
-- ALPS - Fix issue where batch allocations weren't correctly confirmed or released.
-- Define DEFAULT_MAX_TASKS_PER_NODE based on MAX_TASKS_PER_NODE from slurm.h as per documentation.
-- Update the FAQ about relocating slurmctld.
-- In the memory cgroup, enable memory.use_hierarchy in the cgroup root.
-- Export eio.c functions for use by MPI/PMI2.
-- Add SLURM_CLUSTER_NAME to the job environment.

* Changes in Slurm 14.11.0rc3
=============================
-- Allow envs to override autotools binaries in autogen.sh.
-- Added system services files.
-- If a job pends with DependencyNeverSatisfied, keep it pending even after the job which it was depending upon was cleaned.
-- Let operators (in addition to user root and SlurmUser) see the job script for other users' jobs.
-- Perl API modified to return node state of MIXED rather than ALLOCATED if only some CPUs are allocated.
-- Double Munge connect retry timeout from 1 to 2 seconds.
-- sview - Remove unneeded code that was resolved globally in commit 98e24b0dedc. -- Collect and report the accounting of the batch step and its children. -- Add configure checks for faccessat and eaccess, and make use of one of them if available. -- Make configure --enable-developer also set --enable-debug -- Introduce a SchedulerParameters variable kill_invalid_depend, if set then jobs pending with invalid dependency are going to be terminated. -- Move spank_user_task() call in slurmstepd after the task_g_pre_launch() so that the task affinity information is available to spank. -- Make /etc/init.d/slurm script return value 3 when the daemon is not running. This is required by Linux Standard Base Core Specification 3.1 * Changes in Slurm 14.11.0rc2 ============================= -- Logs for jobs which are explicitly requeued will say so rather than saying that a node in their allocation failed. -- Updated the documentation about the remote licenses served by the Slurm database. -- Insure that slurm_spank_exit() is only called once from srun. -- Change the signature of net_set_low_water() to use 4 bytes instead of 8. -- Export working_cluster_rec in libslurmdb.so as well as move some function definitions needed for drmaa. -- If using cons_res or serial cause a fatal in the plugin instead of causing the SelectTypeParameters to magically set to CR_CPU. -- Enhance task/affinity auto binding to consider tasks * cpus-per-task. -- Fix regression the priority/multifactor which would cause memory corruption. Issue is only in rc1. -- Add PrivateData value of "cloud". If set, powered down nodes in the cloud will be visible. -- Sched/backfill - Eliminate clearing start_time of running jobs. -- Fix various backwards compatibility issues. -- If failed to launch a batch job, requeue it in hold. 
* Changes in Slurm 14.11.0rc1
=============================
-- When using cgroup, name the batch step step_batch instead of batch_4294967294.
-- Changed LEVEL_BASED priority to be "Fair_Tree".
-- Port to NetBSD.
-- BGQ - Add cnode based reservations.
-- Alongside totalview_jobid, implement totalview_stepid available to sattach.
-- Add ability to include other files in slurm.conf based upon the ClusterName.
-- Update strlcpy to latest upstream version.
-- Add reservation information in the sacct and sreport output.
-- Add job priority calculation check for overflow and fix memory leak.
-- Add SchedulerParameters option of pack_serial_at_end to put serial jobs at the end of the available nodes rather than using a best fit algorithm.
-- Allow regular users to view default sinfo output when PrivateData=reservations is set.
-- PrivateData=reservation modified to permit users to view the reservations which they have access to (rather than preventing them from seeing ANY reservation).
-- job_submit/lua: Fix job_desc set field logic.

* Changes in Slurm 14.11.0pre5
==============================
-- Fix sbatch --export=ALL, it was treated by srun as a request to explicitly export only the environment variable named "ALL".
-- Improve scheduling of jobs in reservations that overlap other reservations.
-- Modify sgather to make global file systems easier to configure.
-- Added sacctmgr reconfig to reread the slurmdbd.conf in the slurmdbd.
-- Modify scontrol job operations to accept a comma delimited list of job IDs. Applies to job update, hold, release, suspend, resume, requeue, and requeuehold operations.
-- Refactor job_submit/lua interface. LUA FUNCTIONS NEED TO CHANGE! The lua script no longer needs to explicitly load meta-tables, but information is available directly using names slurm.reservations, slurm.jobs, slurm.log_info, etc. Also, the job_submit.lua script is reloaded when updated without restarting the slurmctld daemon.
-- Allow users to specify --resv_ports to have value 0.
-- Cray MPMD (Multiple-Program Multiple-Data) support completed.
-- Added ability for "scontrol update" to reference jobs by JobName (and optionally filtered by UserID).
-- Add support for an advanced reservation start time that remains constant relative to the current time. This can be used to prevent the starting of longer running jobs on select nodes for maintenance purposes. See the reservation flag "TIME_FLOAT" for more information.
-- Enlarge the jobid field to 18 characters in squeue output.
-- Added "scontrol write config" option to save a copy of the current configuration in a file containing a time stamp.
-- Eliminate native Cray specific port management. Native Cray systems must now use the MpiParams configuration parameter to specify ports to be used for communications. When upgrading Native Cray systems from version 14.03, all running jobs should be killed and the switch_cray_state file (in StateSaveLocation of the nodes where the slurmctld daemon runs) must be explicitly deleted.

* Changes in Slurm 14.11.0pre4
==============================
-- Added job array data structure and removed 64k array size restriction.
-- Added SchedulerParameters options of bf_max_job_array_resv to control how many tasks of a job array should have resources reserved for them.
-- Added more validity checking of incoming job submit requests.
-- Added srun --export option to set/export specific environment variables.
-- Scontrol modified to print separate error messages for job arrays with different exit codes on the different tasks of the job array. Applies to job suspend and resume operations.
-- Fix race condition in CPU frequency set with job preemption.
-- Always call select plugin on step termination, even if the job is also complete.
-- Srun executable names beginning with "." will be resolved based upon the working directory and path on the compute node rather than the submit node.
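The "scontrol write config" option and the TIME_FLOAT reservation flag above can be illustrated with hedged sketches; the reservation name, node list, and times below are made up for the example and the saved file's name/location is determined by slurmctld:

```
# Save a time-stamped copy of the current configuration:
scontrol write config

# Floating-start reservation: the start time remains a constant
# offset from "now" until the reservation begins (illustrative values):
scontrol create reservation reservationname=maint_float \
         starttime=now+60minutes duration=120 nodes=tux[0-3] \
         users=root flags=TIME_FLOAT
```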
-- Add node state string suffix of "$" to identify nodes in a maintenance reservation or scheduled for reboot. This applies to scontrol, sinfo, and sview commands.
-- Enable scontrol to clear a node's scheduled reboot by setting its state to "RESUME".
-- As per the sbatch and srun documentation, when the --signal option is used, signal only the job steps unless, in the case of a batch job, "B" is specified, in which case signal only the batch script.
-- Modify AuthInfo configuration parameter to accept a credential lifetime option.
-- Modify crypto/munge plugin to use the socket and timeout specified in AuthInfo.
-- If we have a state for a step on completion, put that in the database instead of guessing from the exit_code.
-- Added squeue -P/--priority option that can be used to display pending jobs in the same order as used by the Slurm scheduler even if jobs are submitted to multiple partitions (job is reported once per usable partition).
-- Improve the pending reason description for various QOS limits. For each QOS limit that causes a job to be pending, print its specific reason. For example, if a job pends because of GrpCpus, the squeue command will print QOSGrpCpuLimit as the pending reason.
-- sched/backfill - Set expected start time of a job submitted to multiple partitions to the earliest start time on any of the partitions.
-- Introduce a MAX_BATCH_REQUEUE define that indicates how many times a job can be requeued upon prolog failure. When the number is reached the job is put on hold with reason JobHoldMaxRequeue.
-- Add sbatch job array option to limit the number of simultaneously running tasks from a job array (e.g. "--array=0-15%4").
-- Implemented a new QOS limit MinCPUs. Users running under a QOS must request a minimum number of CPUs which is at least MinCPUs, otherwise their job will pend.
-- Introduced a new pending reason WAIT_QOS_MIN_CPUS to reflect the new QOS limit.
-- Job array dependency based upon state is now dependent upon the state of the array as a whole (e.g.
afterok requires ALL tasks to complete successfully, afternotok is true if ANY task does not complete successfully, and after requires all tasks to at least be started).
-- The srun -u/--unbuffered option sets the stdout of the task launched by srun to be line buffered.
-- The srun options -l/--label and -u/--unbuffered can now be specified together; this limitation has been removed.
-- Provide sacct display of gres accounting information per job.
-- Change the node status size from uint16_t to uint32_t.

* Changes in Slurm 14.11.0pre3
==============================
-- Move xcpuinfo.[c|h] to the slurmd since it isn't needed anywhere else and will avoid the need for all the daemons to link to libhwloc.
-- Add memory test to job_submit/partition plugin.
-- Added new internal Slurm functions xmalloc_nz() and xrealloc_nz(), which do not initialize the allocated memory to zero for improved performance.
-- Modify hostlist function to dynamically allocate buffer space for improved performance.
-- In the job_submit plugin: Remove all slurmctld locks prior to job_submit() being called for improved performance. If any slurmctld data structures are read or modified, add locks directly in the plugin.
-- Added PriorityFlag LEVEL_BASED described in doc/html/level_based.shtml
-- If Fairshare=parent is set on an account, that account's children will be effectively reparented for fairshare calculations to the first parent of their parent that is not Fairshare=parent. Limits remain the same, only its fairshare value is affected.

* Changes in Slurm 14.11.0pre2
==============================
-- Added AllowSpecResourcesUsage configuration parameter in slurm.conf. This allows jobs to use specialized resources on nodes allocated to them if the job designates --core-spec=0.
-- Add new SchedulerParameters option of build_queue_timeout to throttle how much time can be consumed building the job queue for scheduling.
-- Added HealthCheckNodeState option of "cycle" to cycle through the compute nodes over the course of HealthCheckInterval rather than running all at the same time.
-- Add job "reboot" option for Linux clusters. This invokes the configured RebootProgram to reboot nodes allocated to a job before it begins execution.
-- Added squeue -O/--Format option that makes all job and step fields available for printing.
-- Improve database slurmctld entry speed dramatically.
-- Add "CPUs" count to output of "scontrol show step".
-- Add support for lua5.2.
-- scancel -b signals only the batch step, not any other step nor any children of the shell script.
-- MySQL - enforce NO_ENGINE_SUBSTITUTION.
-- Added CpuFreqDef configuration parameter in slurm.conf to specify the default CPU frequency and governor to be set at job end.
-- Added support for job email triggers: TIME_LIMIT, TIME_LIMIT_90 (reached 90% of time limit), TIME_LIMIT_80 (reached 80% of time limit), and TIME_LIMIT_50 (reached 50% of time limit). Applies to salloc, sbatch and srun commands.
-- In slurm.conf add the parameter SrunPortRange=min-max. If this is configured then srun will use its dynamic ports only from the configured range.
-- Make debug_flags 64 bit to handle more flags.

* Changes in Slurm 14.11.0pre1
==============================
-- Modify etc/cgroup.release_common.example to specify the full path to the scontrol command. Also find cgroup mount point by reading the cgroup.conf file.
-- Improve qsub wrapper support for passing environment variables.
-- Modify sdiag to report Slurm RPC traffic by user, type, count and time consumed.
-- In select plugins, stop triggering extra logging based upon the debug flag CPU_Bind and use SelectType instead.
-- Added SchedulerParameters options of bf_yield_interval and bf_yield_sleep to control how frequently and for how long the backfill scheduler will relinquish its locks.
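The SrunPortRange parameter above is a plain slurm.conf setting; a hedged example fragment (the port values are illustrative only, chosen here to match a hypothetical firewall rule):

```
# slurm.conf excerpt (illustrative values)
# Restrict srun's dynamically allocated listening ports to a
# fixed range, e.g. so firewalls can permit only these ports:
SrunPortRange=60001-63000
```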
-- To support larger numbers of jobs when the StateSaveLocation is on a file system that supports a limited number of files in a directory, add a subdirectory called "hash.#" based upon the last digit of the job ID.
-- More gracefully handle a missing batch script file. Just kill the job and do not drain the compute node.
-- Add support for allocation of GRES by model type for heterogeneous systems (e.g. request a Kepler GPU, a Tesla GPU, or a GPU of any type).
-- Record and enable display of nodes anticipated to be used for pending jobs.
-- Modify squeue --start option to print the nodes expected to be used for a pending job (in addition to expected start time, etc.).
-- Add association hash to the assoc_mgr.
-- Better logic to handle resized jobs when the DBD is down.
-- Introduce MemLimitEnforce yes|no in slurm.conf. If set to no, Slurm will not terminate jobs if they exceed requested memory.
-- Add support for non-consumable generic resources for resources that are limited, but can be shared between jobs.
-- Introduce 5 new Slurm errors in slurm_errno.h related to jobs to better report error conditions.
-- Modify scontrol to print an error message for each array task when updating the entire array.
-- Added gres_drain and gres_used fields to node_info_t.
-- Added PriorityParameters configuration parameter in slurm.conf.
-- Introduce automatic job requeue policy based on exit value. See RequeueExit and RequeueExitHold descriptions in the slurm.conf man page.
-- Modify slurmd to cache launched job IDs for more responsive job suspend and gang scheduling.
-- Permit job steps full control over cpu_bind options if specialized cores are included in the job allocation.
-- Added ChosLoc configuration parameter to specify the pathname of the Chroot OS tool.
-- Send SIGCONT/SIGTERM when a job is selected for preemption with GraceTime configured rather than waiting for GraceTime to be reached before notifying the job.
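The "hash.#" layout described above can be sketched briefly. This is an illustrative sketch, not Slurm's implementation; the helper name and the example path are invented, only the "last digit of the job ID" rule comes from the entry:

```c
#include <assert.h>
#include <stddef.h>
#include <stdio.h>
#include <string.h>

/* Illustrative sketch (not Slurm code): spread job state files across
 * subdirectories "hash.#" of the state save directory, where # is the
 * last digit of the job ID, limiting files per directory. */
static void job_state_path(char *buf, size_t len,
                           const char *state_save_loc, unsigned job_id)
{
	snprintf(buf, len, "%s/hash.%u/job.%u",
	         state_save_loc, job_id % 10, job_id);
}
```

Job 12345 would thus land under "hash.5", job 12346 under "hash.6", and so on, keeping each subdirectory to roughly a tenth of the total job files.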
-- Do not resume a job with specialized cores on a node running another job with specialized cores (only one can run at a time).
-- Add specialized core count to job suspend/resume calls.
-- task/affinity and task/cgroup - Correct specialized core task binding with user supplied invalid CPU mask or map.
-- Add srun --cpu-freq options to set the CPU governor (OnDemand, Performance, PowerSave or UserSpace).
-- Add support for a job step's CPU governor and/or frequency to be reset on suspend/resume (or gang scheduling). The default for an idle CPU will now be "ondemand" rather than "userspace" with the lowest frequency (to recover from hard slurmd failures and support gang scheduling).
-- Added PriorityFlags option of Calculate_Running to continue recalculating the priority of running jobs.
-- Replace round-robin front-end node selection with a least-loaded algorithm.
-- CRAY - Improve support of XC30 systems when running natively.
-- Add new node configuration parameters CoreSpecCount, CPUSpecList and MemSpecLimit which support the reservation of resources for system use with Linux cgroup.
-- Add child_forked() function to the slurm_acct_gather_profile plugin to close open files, leaving the application with no extra open file descriptors.
-- Cray/ALPS system - Enable backup controller to run outside of the Cray to accept new job submissions and most other operations on the pending jobs.
-- Have sacct print job and task array id's for job arrays.
-- Smooth out fanout logic.
-- If <sys/prctl.h> is present, name major threads in slurmctld, for example backfill thread: slurmctld_bckfl, the rpc manager: slurmctld_rpcmg, etc. The names can be seen for example using top -H.
-- sview - Better job_array support.
-- Provide more precise error message when a job allocation can not be satisfied (e.g. memory, disk, cpu count, etc. rather than just "node configuration not available").
-- Create a new DebugFlags named TraceJobs in slurm.conf to print detailed information about jobs in slurmctld.
The information includes job ids, state and node count.
-- When a job dependency can never be satisfied, do not cancel the job but keep it pending with reason WAIT_DEP_INVALID (DependencyNeverSatisfied).

* Changes in Slurm 14.03.12
===========================
-- Make it so previous versions of salloc/srun work with newer versions of Slurm daemons.
-- PMI2 race condition fix.
-- Avoid delay on commit for PMI rank 0 to improve performance with some MPI implementations.
-- Correct the sbatch pbs parser to process -j.
-- Squeue modified to not merge tasks of a job array if their wait reasons differ.
-- Use the slurm_getpwuid_r wrapper of getpwuid_r to handle possible interrupts.
-- Allow --ignore-pbs to take effect when read as an #SBATCH argument.
-- Do not launch a step if the job was killed while the prolog was running.

* Changes in Slurm 14.03.11
===========================
-- ALPS - Fix depth for Memory items in BASIL with CLE 5.2 (changed starting in 5.2.3).
-- ALPS - Fix issue when tracking memory on a PerNode basis instead of PerCPU.
-- Modify assoc_mgr_fill_in_qos() to allow for a flag to know if the QOS read lock was locked outside of the function or not.
-- Give even better estimates on pending node count if no node count is requested.
-- Fix jobcomp/mysql plugin for MariaDB 10+/MySQL 5.6+ to work with the reserved word "partition".
-- If requested (scontrol reboot node_name), reboot a node even if it has a maintenance reservation that is not active yet.
-- Fix issue where exclusive allocations wouldn't lay tasks out correctly with CR_PACK_NODES.
-- Do not requeue a batch job from the slurmd daemon if it is killed while in the process of being launched (a race condition introduced in v14.03.9).
-- Do not let srun overwrite SLURM_JOB_NUM_NODES if already in an allocation.
-- Prevent a job's end_time from being too small after a basil reservation error.
-- Fix sbatch --ntasks-per-core option from setting an invalid SLURM_NTASKS_PER_CORE environment value.
-- Prevent scancel abort when no job satisfies filter options. -- ALPS - Fix --ntasks-per-core option on multiple nodes. -- Double max string that Slurm can pack from 16MB to 32MB to support larger MPI2 configurations. -- Fix Centos5 compile issues. -- Log Cray MPI job calling exit() without mpi_fini(), but do not treat it as a fatal error. This partially reverts logic added in version 14.03.9. -- sview - Fix displaying of suspended steps elapsed times. -- Increase number of messages that get cached before throwing them away when the DBD is down. -- Fix jobs from starting in overlapping reservations that won't finish before a "maint" reservation begins. -- Fix "squeue --start" to override SQUEUE_FORMAT env variable. -- Restore GRES functionality with select/linear plugin. It was broken in version 14.03.10. -- Fix possible race condition when attempting to use QOS on a system running accounting_storage/filetxt. -- Sanity check for Correct QOS on startup. * Changes in Slurm 14.03.10 =========================== -- Fix a few sacctmgr error messages. -- Treat non-zero SlurmSchedLogLevel without SlurmSchedLogFile as a fatal error. -- Correct sched_config.html documentation SchedulingParameters should be SchedulerParameters. -- When using gres and cgroup ConstrainDevices set correct access permission for the batch step. -- Fix minor memory leak in jobcomp/mysql on slurmctld reconfig. -- Fix bug that prevented preservation of a job's GRES bitmap on slurmctld restart or reconfigure (bug was introduced in 14.03.5 "Clear record of a job's gres when requeued" and only applies when GRES mapped to specific files). -- BGQ: Fix race condition when job fails due to hardware failure and is requeued. Previous code could result in slurmctld abort with NULL pointer. -- Prevent negative job array index, which could cause slurmctld to crash. -- Fix issue with squeue/scontrol showing correct node_cnt when only tasks are specified. 
-- Check the status of the database connection before using it.
-- ALPS - If an allocation requests -n, set the BASIL -N option to the number of tasks / number of nodes.
-- ALPS - Don't set the env var APRUN_DEFAULT_MEMORY, it is not needed anymore.
-- Fix potential buffer overflow.
-- Give better estimates on pending node count if no node count is requested.
-- BLUEGENE - Fix issue where requeuing jobs could cause an assert.

* Changes in Slurm 14.03.9
==========================
-- If slurmd fails to stat(2) the configuration, print the string describing the error code.
-- Fix for mixing core based reservations with whole node based reservations to avoid overlapping erroneously.
-- BLUEGENE - Remove references to Base Partition.
-- sview - If compiled on a non-bluegene system then used to view a BGQ, fix to allow sview to display blocks correctly.
-- Fix bug in update reservation. When modifying the reservation the end time was set incorrectly.
-- The start time of a reservation that is in ACTIVE state cannot be modified.
-- Update the cgroup documentation about the release agent for devices.
-- MYSQL - fix for setting up the preempt list on a QOS for multiple QOS.
-- Correct a minor error in the scancel.1 man page related to the --signal option.
-- Enhance the scancel.1 man page to document the sequence of signals sent.
-- Fix slurmstepd core dump if the cgroup hierarchy is not completed when terminating the job.
-- Fix hostlist_shift to be able to give correct node names on names with a different number of dimensions than the cluster.
-- BLUEGENE - Fix invalid pointer in corner case in the plugin.
-- Make sure on a reconfigure the select information for a node is preserved.
-- Correct logic to support job GRES specification over 31 bits (problem in logic converting int to uint32_t).
-- Remove logic that was creating GRES bitmap for a node when not needed (only needed when GRES mapped to specific files).
-- BLUEGENE - Fix sinfo -tr; before, it would only print idle nodes correctly.
-- BLUEGENE - Fix for licenses_only reservation on bluegene systems.
-- sview - Verify pointer before using strchr.
-- -M option on tools talking to a Cray from a non-Cray fixed.
-- CRAY - Fix rpmbuild issue for missing file slurm.conf.template.
-- Fix race condition when dealing with removing many associations at different times when reservations are using the associations that are being deleted.
-- When a node's state is set to power_down/power_up, then execute SuspendProgram/ResumeProgram even if previously executed for that node.
-- Fix logic determining when job configuration (i.e. running node power up logic) is complete.
-- Setting the state of a node in powered down state to "resume" will no longer cause it to reboot, but only clear the "drain" state flag.
-- Fix srun documentation to remove SLURM_NODELIST being equivalent to the -w option (since it isn't).
-- Fix issue with --hint=nomultithread and allocations with steps running arbitrary layouts (test1.59).
-- PrivateData=reservation modified to permit users to view the reservations which they have access to (rather than preventing them from seeing ANY reservation). Backport from 14.11 commit 77c2bd25c.
-- Fix PrivateData=reservation when using associations to give privileges to a reservation.
-- Better checking to see if the select plugin is linear or not.
-- Add support for time specification of "fika" (3 PM).
-- Standardize qstat wrapper more.
-- Provide better estimate of minimum node count for pending jobs using more job parameters.
-- ALPS - Add SubAllocate to cray.conf file for those who like the way <=2.5 did the ALPS reservation.
-- Safer check to avoid invalid reads when shutting down the slurmctld with lots of jobs.
-- Fix minor memory leak in the backfill scheduler when shutting down.
-- Add ArchiveResvs to the output of sacctmgr show config and init the variable on slurmdbd startup.
-- SLURMDBD - Only set the archive flag if purging the object (i.e. ArchiveJobs PurgeJobs).
This is only a cosmetic change.
-- Fix for job step memory allocation logic if the step requests GRES and memory allocations are not managed.
-- Fix sinfo to display mixed nodes as allocated in '%F' output.
-- Sview - Fix cpu and node counts for partitions.
-- Ignore NO_VAL in SLURMDB_PURGE_* macros.
-- ALPS - Don't drain nodes if the epilog fails. It leaves them in drain state with no way to get them out.
-- Fix issue with task/affinity oversubscribing cpus erroneously when using --ntasks-per-node.
-- MYSQL - Fix load of archive files.
-- Treat Cray MPI job calling exit() without mpi_fini() as a fatal error for that specific task and let srun handle all timeout logic.
-- Fix small memory leak in jobcomp/mysql.
-- Correct tracking of licenses for suspended jobs on slurmctld reconfigure or restart.
-- If a batch job fails to launch, requeue it in hold.

* Changes in Slurm 14.03.8
==========================
-- Fix minor memory leak when a job doesn't have nodes on it (meaning the job has finished).
-- Fix sinfo/sview to be able to query against nodes in reserved and other states.
-- Make sbatch/salloc read in (SLURM|(SBATCH|SALLOC))_HINT in order to handle sruns in the script that will use it.
-- srun properly interprets a leading "." in the executable name based upon the working directory of the compute node rather than the submit host.
-- Fix Lustre misspellings in the hdf5 guide.
-- Fix wrong reference in the slurm.conf man page to what --profile option should be used for AcctGatherFilesystemType.
-- Update HDF5 document to point out the SlurmdUser is who creates the ProfileHDF5Dir directory as well as all its sub-directories and files.
-- CRAY NATIVE - Remove error message for sruns run inside an salloc that had --network= specified.
-- Defer job step initiation if required GRES are in use by other steps rather than immediately returning an error.
-- Deprecate --cpu_bind from sbatch and salloc.
    These never worked correctly and only caused confusion; since the
    cpu_bind options mostly refer to a step, we opted to only allow srun to
    set them in future versions.
 -- Modify sgather to work if NodeName and NodeHostname differ.
 -- Changed use of JobContainerPlugin where it should be JobContainerType.
 -- Fix for possible error if job has GRES, but the step explicitly
    requests a GRES count of zero.
 -- Make "srun --gres=none ..." work when executed without a job
    allocation.
 -- Change the global eio_shutdown_time to a field in eio handle.
 -- Advanced reservation fixes for heterogeneous systems, especially when
    reserving cores.
 -- If --hint=nomultithread is used in a job allocation make sure any sruns
    run inside the allocation can read the environment correctly.
 -- If the batch directory can't be made, set errno correctly so the
    slurmctld is notified correctly.
 -- Remove repeated batch complete if batch directory isn't able to be made
    since the slurmd will send the same message.
 -- sacctmgr fix default format for list transactions.
 -- BLUEGENE - Fix backfill issue with backfilling jobs on blocks already
    reserved for higher priority jobs.
 -- When creating job arrays the job specification files for each element
    are hard links to the first element's specification files. If the
    controller fails to make the links the files are copied instead.
 -- Fix error handling for job array create failure due to inability to
    copy job files (script and environment).
 -- Added patch in the contribs directory for integrating make version 4.0
    with Slurm and renamed the previous patch "make-3.81.slurm.patch".
 -- Don't wait for an update message from the DBD to finish before sending
    rc message back. In slow systems with many associations this could
    speed responsiveness in sacctmgr after adding associations.
 -- Eliminate race condition in enforcement of MaxJobCount limit for job
    arrays.
 -- Fix anomaly allocating cores for GRES with specific device/CPU mapping.
 -- cons_res - When requesting exclusive access make sure we set the number
    of cpus in the job_resources_t structure so as nodes finish the correct
    cpu count is displayed in the user tools.
 -- If the job_submit plugin calls take longer than 1 second to run, print
    a warning.
 -- Make sure transfer_s_p_options transfers all the portions of the
    s_p_options_t struct.
 -- Correct the srun man page: the SLURM_CPU_BIND_VERBOSE,
    SLURM_CPU_BIND_TYPE and SLURM_CPU_BIND_LIST environment variables are
    set only when the task/affinity plugin is configured.
 -- sacct - Initialize variables correctly to avoid incorrect structure
    reference.
 -- Performance adjustment to avoid calling a function multiple times when
    it only needs to be called once.
 -- Give more correct waiting reason if job is waiting on association/QOS
    MaxNode limit.
 -- DB - When sending lft updates to the slurmctld only send non-deleted
    lfts.
 -- BLUEGENE - Fix documentation on how to build a reservation less than a
    midplane.
 -- If slurmctld fails to read the job environment consider it an error and
    abort the job.
 -- Add the name of the node a job is running on to the message printed by
    slurmstepd when terminating a job.
 -- Remove unsupported options from sacctmgr help and the dump function.
 -- Update sacctmgr man page removing reference to obsolete parameter
    MaxProcSecondsPerJob.
 -- Added more validity checking of incoming job submit requests.
* Changes in Slurm 14.03.7
==========================
 -- Correct typos in man pages.
 -- Add note about MaxNodesPerUser and multiple jobs running on the same
    node counting as multiple nodes.
 -- PerlAPI - fix renamed call from slurm_api_set_conf_file to
    slurm_conf_reinit.
 -- Fix gres race condition that could result in job deallocation error
    message.
 -- Correct NumCPUs count for jobs with --exclusive option.
 -- When creating reservation with CoreCnt, check that Slurm uses
    SelectType=select/cons_res, otherwise don't send the request to
    slurmctld and return an error.
 -- Save the state of scheduled node reboots so they will not be lost
    should the slurmctld restart.
 -- In select/cons_res plugin - Insure the node count does not exceed the
    task count.
 -- switch/nrt - Do not explicitly unload windows for a job on termination,
    only unload its table (which automatically unloads its windows).
 -- When HealthCheckNodeState is configured as IDLE don't run the
    HealthCheckProgram for nodes in any state other than IDLE.
 -- Remove all slurmctld locks prior to job_submit() being called in
    plugins. If any slurmctld data structures are read or modified, add
    locks directly in the plugin.
 -- Minor sanity check to verify the string sent in isn't NULL when using
    bit_unfmt.
 -- CRAY NATIVE - Fix issue on heavy systems to only run the NHC once per
    job/step completion.
 -- Remove unneeded step cleanup for pending steps.
 -- Fix issue where if a batch job was manually requeued the batch step
    information wasn't stored in accounting.
 -- When a job is released from a requeue hold state clean up its previous
    exit code.
 -- Correct the srun man page about how the output from the user
    application is sent to srun.
 -- Increase the timeout of the main thread while waiting for the i/o
    thread. Allow up to 180 seconds for the i/o thread to complete.
 -- When using sacct -c to read the job completion data compute the correct
    job elapsed time.
 -- Perl package: Define some missing node states.
 -- When using AccountingStorageType=accounting_storage/mysql zero out the
    database index for the array elements avoiding duplicate database
    values.
 -- Reword the explanation of cputime and cputimeraw in the sacct man page.
 -- JobCompType allows "jobcomp/mysql" as valid name but the code used
    "job_comp/mysql" setting an incorrect default database.
 -- Try to load libslurm.so only when necessary.
 -- When nodes are scheduled for reboot, set state to DOWN rather than
    FUTURE so they are still visible to sinfo. State set to IDLE after
    reboot completes.
 -- Apply BatchStartTimeout configuration to task launch and avoid aborting
    srun commands due to long running Prolog scripts.
 -- Fix minor memory leaks when freeing node_info_t structure.
 -- Fix various memory leaks in sview.
 -- If a batch script is requeued, running steps get the correct exit
    code/signal; previously it was always -2.
 -- If step exitcode hasn't been set, have sacct display the -2 instead of
    treating it as a signal and exit code.
 -- Send calculated step_rc for batch step instead of raw status as done
    for normal steps.
 -- If a job times out, set the exit code in accounting to 1 instead of the
    signal 1.
 -- Update the acct_gather.conf.5 man page removing the reference to
    InfinibandOFEDFrequency.
 -- Fix gang scheduling for jobs submitted to multiple partitions.
 -- Enable srun to submit job to multiple partitions.
 -- Update slurm.conf man page. When Epilog or Prolog fail the node state
    is set to DRAIN.
 -- Start a job in the highest priority partition possible, even if it
    requires preempting other jobs and delaying initiation, rather than
    using a lower priority partition. Previous logic would preempt lower
    priority jobs, but then might start the job in a lower priority
    partition and not use the resources released by the preempted jobs.
 -- Fix SelectTypeParameters=CR_PACK_NODES for srun making both job and
    step resource allocation.
 -- BGQ - Make it possible to pack multiple tasks on a core when not using
    the entire cnode.
 -- MYSQL - if unable to connect to mysqld close connection that was
    initialized.
 -- DBD - when connecting make sure we wait MessageTimeout + 5 since the
    timeout when talking to the database is the same timeout, so a race
    condition could occur in the requesting client when receiving the
    response if the database is unresponsive.
* Changes in Slurm 14.03.6
==========================
 -- Added examples to demonstrate the use of the sacct -T option to the man
    page.
 -- Fix for regression in 14.03.5 with sacctmgr load when Parent has "'"
    around it.
 -- Update comments in sacctmgr dump header.
 -- Fix for possible abort on change in GRES configuration.
 -- CRAY - fix modules file (backport from 14.11 commit 78fe86192b).
 -- Fix race condition which could result in requeue if batch job exit and
    node registration occur at the same time.
 -- switch/nrt - Unload job tables (in addition to windows) in user space
    mode.
 -- Differentiate between two identical debug messages about purging
    vestigial job scripts.
 -- If the socket used by slurmstepd to communicate with slurmd exists when
    slurmstepd attempts to create it, for example left over from a previous
    requeue or crash, delete it and recreate it.
* Changes in Slurm 14.03.5
==========================
 -- If a srun runs in an exclusive allocation and doesn't use the entire
    allocation and CR_PACK_NODES is set, lay out tasks appropriately.
 -- Correct Shared field in job state information seen by scontrol, sview,
    etc.
 -- Print Slurm error string in scontrol update job and reset the Slurm
    errno before each call to the API.
 -- Fix task/cgroup to handle -mblock:fcyclic correctly.
 -- Fix for core-based advanced reservations where the distribution of
    cores across nodes is not even.
 -- Fix issue where association maxnodes wouldn't be evaluated correctly if
    a QOS had a GrpNodes set.
 -- GRES fix with multiple files defined per line in gres.conf.
 -- When a job is requeued make sure accounting marks it as such.
 -- Print the state of requeued job as REQUEUED.
 -- If a job's partition was taken away from it don't allow a requeue.
 -- Make sure we lock on the conf when sending slurmd's conf to the
    slurmstepd.
 -- Fix issue with sacctmgr 'load' not able to gracefully handle a badly
    formatted file.
 -- sched/backfill: Correct job start time estimate with advanced
    reservations.
 -- Error message added when in proctrack/cgroup the step freezer path
    isn't able to be destroyed, for debug.
 -- Added extra indexes into the database for better performance when
    deleting users.
 -- Fix issue with wckeys when tracking wckeys, but not enforcing them;
    you could get multiple '*' wckeys.
 -- Fix bug which could report to squeue the wrong partition for a running
    job that is submitted to multiple partitions.
 -- Report correct CPU count allocated to job when allocated whole node
    even if not using all CPUs.
 -- If job's constraints cannot be satisfied put it in pending state with
    reason BadConstraints and don't remove it.
 -- sched/backfill - If job started with infinite time limit, set its
    end_time one year in the future.
 -- Clear record of a job's gres when requeued.
 -- Clear QOS GrpUsedCPUs when resetting raw usage if QOS is not using any
    cpus.
 -- Remove log message left over from debugging.
 -- When using CR_PACK_NODES make --ntasks-per-node work correctly.
 -- Report correct partition associated with a step if the job is submitted
    to multiple partitions.
 -- Fix to allow removing of preemption from a QOS.
 -- If the proctrack plugins fail to destroy the job container print an
    error message and avoid looping forever; give up after 120 seconds.
 -- Make srun obey POSIX convention and increase the exit code by 128 when
    the process is terminated by a signal.
 -- Sanity check for acct_gather_energy/rapl.
 -- If the sbatch command specifies the option --signal=B:signum send the
    signal to the batch script only.
 -- If we cancel a task and we have no other exit code send the signal and
    exit code.
 -- Added note about InnoDB storage engine being used with MySQL.
 -- Set the job exit code when the job is signaled and set the log level to
    debug2() when processing an already completed job.
 -- Reset diagnostics time stamp when "sdiag --reset" is called.
 -- squeue and scontrol to report a job's "shared" value based upon
    partition options rather than reporting "unknown" if job submission
    does not use --exclusive or --shared option.
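The POSIX exit-code convention that srun now follows (a task terminated by signal N is reported as 128 + N) can be illustrated with a short sketch. This is illustrative Python, not Slurm source; the helper name is ours:

```python
import subprocess
import sys

def shell_style_exit_code(returncode: int) -> int:
    """Map a waitpid-style returncode to the POSIX shell convention:
    a process terminated by signal N is reported as exit code 128 + N."""
    if returncode < 0:            # subprocess encodes "killed by signal N" as -N
        return 128 - returncode   # i.e. 128 + N
    return returncode

# Child process that terminates itself with SIGTERM (signal 15 on Linux).
proc = subprocess.run(
    [sys.executable, "-c",
     "import os, signal; os.kill(os.getpid(), signal.SIGTERM)"])
print(shell_style_exit_code(proc.returncode))  # prints 143 on Linux (128 + 15)
```

This matches what a shell reports for a signal-killed child, so srun's exit code now agrees with the surrounding tooling.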
 -- task/cgroup - Fix cpuset binding for batch script.
 -- sched/backfill - Fix anomaly that could result in jobs being scheduled
    out of order.
 -- Expand pseudo-terminal size data structure field sizes from 8 to 16
    bits.
 -- Set the job exit code when the job is signaled and set the log level to
    debug2() when processing an already completed job.
 -- Distinguish between two identical error messages.
 -- If using accounting_storage/mysql directly without a DBD fix issue with
    start of requeued jobs.
 -- If a job fails because of batch node failure and the job is requeued
    and an epilog complete message comes from that node, do not process the
    batch step information since the job has already been requeued; the
    epilog script running isn't guaranteed in this situation.
 -- Change message to note a NO_VAL for return code could have come from
    node failure as well as interactive user.
 -- Modify test4.5 to only look at one partition instead of all of them.
 -- Fix sh5util -u to accept username different from the user that runs the
    command.
 -- Corrections to man pages: salloc.1 sbatch.1 srun.1 nonstop.conf.5
    slurm.conf.5.
 -- Restore srun --pty resize ability.
 -- Have sacctmgr dump cluster handle situations where users or such have
    special characters in their names like ':'.
 -- Add more debugging information should the job run on the wrong node and
    should there be problems accessing the state files.
* Changes in Slurm 14.03.4
==========================
 -- Fix issue where not enforcing QOS but a partition either allows or
    denies them.
 -- CRAY - Make switch/cray default when running on a Cray natively.
 -- CRAY - Make job_container/cncu default when running on a Cray natively.
 -- Disable job time limit change if its preemption is in progress.
 -- Correct logic to properly enforce job preemption GraceTime.
 -- Fix sinfo -R to print each down/drained node once, rather than once per
    partition.
 -- If a job has a non-responding node, retry job step create rather than
    returning with DOWN node error.
 -- Support SLURM_CONF path which does not have "slurm.conf" as the file
    name.
 -- Fix issue where batch cpuset wasn't looked at correctly in
    jobacct_gather/cgroup.
 -- Correct squeue's job node and CPU counts for requeued jobs.
 -- Correct SelectTypeParameters=CR_LLN with job selection of specific
    nodes.
 -- Only if ALL of their partitions are hidden will a job be hidden by
    default.
 -- Run EpilogSlurmctld for a job that is killed during slurmctld
    reconfiguration.
 -- Close window with srun where, if waiting for an allocation and printing
    output, a signal could produce a deadlock.
 -- Add SelectTypeParameters option of CR_PACK_NODES to pack a job's tasks
    tightly on its allocated nodes rather than distributing them evenly
    across the allocated nodes.
 -- cpus-per-task support: Try to pack all CPUs of each task onto one
    socket. Previous logic could spread a task's CPUs across multiple
    sockets.
 -- Add new distribution method fcyclic so when a task is using multiple
    cpus it can bind cyclically across sockets.
 -- task/affinity - When using --hint=nomultithread only bind to the first
    thread in a core.
 -- Make cgroup task layout (block | cyclic) method mirror that of
    task/affinity.
 -- If TaskProlog sets SLURM_PROLOG_CPU_MASK reset affinity for that task
    based on the mask given.
 -- Keep supporting 'srun -N x --pty bash' for historical reasons.
 -- If EnforcePartLimits=Yes and the QOS the job is using can override
    limits, allow it.
 -- Fix issues if partition allows or denies accounts or QOSes and either
    are not set.
 -- If a job requests a partition and it doesn't allow a QOS or account the
    job is requesting, the job pends unless EnforcePartLimits=Yes. Before
    it would always kill the job at submit.
 -- Fix format output of scontrol command when printing node state.
 -- Improve the clean up of the cgroup hierarchy when using the
    jobacct_gather/cgroup plugin.
 -- Added SchedulerParameters value of Ignore_NUMA.
 -- Fix issues with code when using automake 1.14.1.
 -- select/cons_res plugin: Fix memory leak related to job preemption.
 -- After reconfig rebuild the job node counters only for jobs that have
    not finished yet, otherwise if requeued the job may enter an invalid
    COMPLETING state.
 -- Do not purge the script and environment files for completed jobs on
    slurmctld reconfiguration or restart (they might be later requeued).
 -- scontrol now accepts the option job=xxx or jobid=xxx for the requeue,
    requeuehold and release operations.
 -- task/cgroup - fix to bind batch job in the proper CPUs.
 -- Added strigger option of -N, --noheader to not print the header when
    displaying a list of triggers.
 -- Modify strigger to accept arguments to the program to execute when an
    event trigger occurs.
 -- Attempt to create duplicate event trigger now generates
    ESLURM_TRIGGER_DUP ("Duplicate event trigger").
 -- Treat special characters like %A, %s etc. literally in the file names
    when specified escaped, e.g. sbatch -o /home/zebra\\%s will not expand
    %s as the stepid of the running job.
 -- CRAY/ALPS - Add better support for CLE 5.2 when running Slurm over
    ALPS.
 -- Test time when job_state file was written to detect multiple primary
    slurmctld daemons (e.g. both backup and primary are functioning as
    primary and there is a split brain problem).
 -- Fix scontrol to accept update jobid=# numtasks=#.
 -- If the backup slurmctld assumes primary status, then do NOT purge any
    job state files (batch script and environment files) and do not re-use
    them. This may indicate that multiple primary slurmctld daemons are
    active (e.g. both backup and primary are functioning as primary and
    there is a split brain problem).
 -- Set correct error code when requeuing a completing/pending job.
 -- When checking for dependencies of type afterany, afterok and
    afternotok, don't clear the dependency if the job is completing.
 -- Cleanup the JOB_COMPLETING flag and eventually requeue the job when the
    last epilog completes, either slurmd epilog or slurmctld epilog,
    whichever comes last.
 -- When attempting to requeue a job distinguish the case in which the job
    is JOB_COMPLETING or already pending.
 -- When reconfiguring the controller don't restart the slurmctld epilog if
    it is already running.
 -- Email messages for job array events now print the job ID using the
    format "#_# (#)" rather than just the internal job ID.
 -- Set the number of free licenses to be 0 if the global license count
    decreases and total is less than in use.
 -- Add DebugFlag of BackfillMap. Previously a DebugFlag value of Backfill
    logged information about what it was doing plus a map of expected
    resource use in the future. Now that very verbose resource use map is
    only logged with a DebugFlag value of BackfillMap.
 -- Fix slurmstepd core dump.
 -- Modify the description of -E and -S option of sacct command as point in
    time 'before' or 'after' the database records are returned.
 -- Correct support for partition with Shared=YES configuration.
 -- If job requests --exclusive then do not use nodes which have any cores
    in an advanced reservation. Also prevents case where nodes can be
    shared by other jobs.
 -- For "scontrol --details show job" report the correct CPU_IDs when there
    are multiple threads per core (we are translating a core bitmap to CPU
    IDs).
 -- If DebugFlags=Protocol is configured in slurm.conf print details of the
    connection, ip address and port accepted by the controller.
 -- Fix minor memory leak when reading in incomplete node data checkpoint
    file.
 -- Enlarge the width specifier when printing partition SHARE to display
    larger sharing values.
 -- sinfo locks added to prevent possibly duplicate record printing for
    resources in multiple partitions.
* Changes in Slurm 14.03.3-2
============================
 -- BGQ - Fix issue with uninitialized variable.
* Changes in Slurm 14.03.3
==========================
 -- Correction to default batch output file name. In version 14.03.2 it was
    using "slurm__4294967294.out" due to error in job array logic.
 -- In slurm.spec file, replace "Requires cray-MySQL-devel-enterprise" with
    "Requires mysql-devel".
* Changes in Slurm 14.03.2
==========================
 -- Fix race condition if PrologFlags=Alloc,NoHold is used.
 -- Cray - Make NPC only limit running other NPC jobs on shared blades
    instead of limiting non-NPC jobs.
 -- Fix for sbatch #PBS -m (mail) option parsing.
 -- Fix job dependency bug. Jobs dependent upon multiple other jobs may
    start prematurely.
 -- Set "Reason" field for all elements of a job array on short-circuited
    scheduling for job arrays.
 -- Allow -D option of salloc/srun/sbatch to specify relative path.
 -- Added SchedulerParameter of batch_sched_delay to permit many batch jobs
    to be submitted between each scheduling attempt to reduce overhead of
    scheduling logic.
 -- Added job reason of "SchedTimeout" if the scheduler was not able to
    reach the job to attempt scheduling it.
 -- Add job's exit state and exit code to email message.
 -- scontrol hold/release accepts job name option (in addition to job ID).
 -- Better handle trying to cancel a step that hasn't started yet.
 -- Handle Max/GrpCPU limits better.
 -- Add --priority option to salloc, sbatch and srun commands.
 -- Honor partition priorities over job priorities.
 -- Fix sacct -c when using jobcomp/filetxt to read newer variables.
 -- Fix segfault of sacct -c if spaces are in the variables.
 -- Release held job only with "scontrol release <jobid>" and not by
    resetting the job's priority. This is needed to support job arrays
    better.
 -- Correct squeue command not to merge jobs with state pending and
    completing together.
 -- Fix issue where user is requesting --acctg-freq=0 and no memory limits.
 -- Fix issue with GrpCPURunMins if a job's timelimit is altered while the
    job is running.
 -- Temporary fix for handling our typemap for the perl api with newer
    perl.
 -- Fix allowgroup on bad group seg fault with the controller.
 -- Handle node ranges better when dealing with accounting max node limits.
* Changes in Slurm 14.03.1-2
============================
 -- Update configure to set correct version without having to run
    autogen.sh.
* Changes in Slurm 14.03.1
==========================
 -- Add support for job std_in, std_out and std_err fields in Perl API.
 -- Add "Scheduling Configuration Guide" web page.
 -- BGQ - fix check for jobinfo when it is NULL.
 -- Do not check cleaning on "pending" steps.
 -- task/cgroup plugin - Fix for building on older hwloc (v1.0.2).
 -- In the PMI implementation by default don't check for duplicate keys.
    Set the SLURM_PMI_KVS_DUP_KEYS if you want the code to check for
    duplicate keys.
 -- Add job submission time to squeue.
 -- Permit user root to propagate resource limits higher than the hard
    limit slurmd has on that compute node (i.e. raise both current and
    maximum limits).
 -- Fix issue with license used count when doing an scontrol reconfig.
 -- Fix the PMI iterator to not report duplicated keys.
 -- Fix issue with sinfo when -o is used without the %P option.
 -- Rather than immediately invoking an execution of the scheduling logic
    on every event type that can enable the execution of a new job, queue
    its execution. This permits faster execution of some operations, such
    as modifying large counts of jobs, by executing the scheduling logic
    less frequently, but still in a timely fashion.
 -- If the environment variable is greater than MAX_ENV_STRLEN don't set it
    in the job env otherwise the exec() fails.
 -- Optimize scontrol hold/release logic for job arrays.
 -- Modify srun to report an exit code of zero rather than nine if some
    tasks exit with a return code of zero and others are killed with
    SIGKILL. Only an exit code of zero did this.
 -- Fix a typo in scontrol man page.
 -- Avoid slurmctld crash getting job info if detail_ptr is NULL.
 -- Fix sacctmgr add user where both defaultaccount and accounts are
    specified.
 -- Added SchedulerParameters option of max_sched_time to limit how long
    the main scheduling loop can execute for.
 -- Added SchedulerParameters option of sched_interval to control how
    frequently the main scheduling loop will execute.
 -- Move start time of main scheduling loop timeout until after locks are
    acquired.
 -- Add squeue job format option of "%y" to print a job's nice value.
 -- Update scontrol update jobID logic to operate on entire job arrays.
 -- Fix PrologFlags=Alloc to run the prolog on each of the nodes in the
    allocation instead of just the first.
 -- Fix race condition if a step is starting while the slurmd is being
    restarted.
 -- Make sure a job's prolog has run before starting a step.
 -- BGQ - Fix invalid memory read when using DefaultConnType in the
    bluegene.conf.
 -- Make sure we send node state to the DBD on clean start of controller.
 -- Fix some sinfo and squeue sorting anomalies due to differences in data
    types.
 -- Only send message back to slurmctld when PrologFlags=Alloc is used on a
    Cray/ALPS system, otherwise use the slurmd to wait on the prolog to
    gate the start of the step.
 -- Remove need to check PrologFlags=Alloc in slurmd since we can tell if
    the prolog has run yet or not.
 -- Fix squeue to use a correct macro to check job state.
 -- BGQ - Fix incorrect logic issues if MaxBlockInError=0 in the
    bluegene.conf.
 -- priority/basic - Insure job priorities continue to decrease when jobs
    are submitted with the --nice option.
 -- Make the PrologFlag=Alloc work on batch scripts.
 -- Make PrologFlag=NoHold (automatically sets PrologFlag=Alloc) not hold
    in salloc/srun, instead wait in the slurmd when a step hits a node and
    the prolog is still running.
 -- Added --cpu-freq=highm1 (high minus one) option.
 -- Expand StdIn/Out/Err string length output by "scontrol show job" from
    128 to 1024 bytes.
 -- squeue %F format will now print the job ID for non-array jobs.
 -- Use quicksort for all priority based job sorting, which improves
    performance significantly with large job counts.
 -- If a job has already been released from a held state ignore successive
    release requests.
 -- Fix srun/salloc/sbatch man pages for the --no-kill option.
 -- Add squeue -L/--licenses option to filter jobs by license names.
 -- Handle abort job on node on front end systems without core dumping.
 -- Fix dependency support for job arrays.
 -- When updating jobs verify the update request is not identical to the
    current settings.
 -- When sorting jobs and priorities are equal sort by job_id.
 -- Do not overwrite existing reason for node being down or drained.
 -- Requeue batch job if Munge is down and credential can not be created.
 -- Make _slurm_init_msg_engine() tolerate bug in bind() returning a busy
    ephemeral port.
 -- Don't block scheduling of entire job array if it could run in multiple
    partitions.
 -- Introduce a new debug flag Protocol to print protocol requests received
    together with the remote IP address and port.
 -- CRAY - Set up the network even when only using 1 node.
 -- CRAY - Greatly reduce the number of error messages produced from the
    task plugin and provide more information in the message.
* Changes in Slurm 14.03.0
==========================
 -- job_submit/lua: Fix invalid memory reference if script returns error
    message for user.
 -- Add logic to sleep and retry if slurm.conf can't be read.
 -- Reset a node's CpuLoad value at least once each SlurmdTimeout seconds.
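The 14.03.1 ordering rule above (sort by priority and, when priorities are equal, by job_id) can be sketched as follows. This is illustrative Python, not Slurm's C quicksort, and the record layout is ours:

```python
# Illustrative sketch: order jobs by descending priority; when priorities
# are equal, break the tie by ascending job_id so the order is stable
# and deterministic.
jobs = [
    {"job_id": 102, "priority": 50},
    {"job_id": 101, "priority": 50},
    {"job_id": 103, "priority": 90},
]

ordered = sorted(jobs, key=lambda j: (-j["priority"], j["job_id"]))
print([j["job_id"] for j in ordered])  # prints [103, 101, 102]
```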
 -- Scheduler enhancements for reservations: When a job needs to run in a
    reservation, but can not due to busy resources, then do not block all
    jobs in that partition from being scheduled, but only the jobs in that
    reservation.
 -- Export "SLURM*" environment variables from sbatch even if
    --export=NONE.
 -- When recovering node state, if the Slurm version is 2.6 or 2.5 set the
    protocol version to be SLURM_2_5_PROTOCOL_VERSION which is the minimum
    supported version.
 -- Update the scancel man page documenting the -s option.
 -- Update sacctmgr man page documenting how to modify account's QOS.
 -- Fix for sjstat which currently does not print >1TB memory values
    correctly.
 -- Change xmalloc()/xfree() to malloc()/free() in hostlist.c for better
    performance.
 -- Update squeue.1 man page describing the SPECIAL_EXIT state.
 -- Added scontrol option of errnumstr to return error message given a
    slurm error number.
 -- If srun is invoked with the --multi-prog option, but no task count,
    then use the task count provided in the MPMD configuration file.
 -- Prevent sview abort on some systems when adding or removing columns to
    the display for nodes, jobs, partitions, etc.
 -- Add job array hash table for improved performance.
 -- Make AccountingStorageEnforce=all not include nojobs or nosteps.
 -- Added sacctmgr mod qos set RawUsage=0.
 -- Modify hostlist functions to accept more than two numeric ranges (e.g.
    "row[1-3]rack[0-8]slot[0-63]").
* Changes in Slurm 14.03.0rc1
==============================
 -- Fixed typos in srun_cr man page.
 -- Run job scheduling logic immediately when nodes enter service.
 -- Added sbatch '--parsable' option to output only the job id number and
    the cluster name separated by a semicolon. Errors will still be
    displayed.
 -- Added failure management "slurmctld/nonstop" plugin.
 -- Prevent jobs being killed when a checkpoint plugin is enabled or
    disabled.
 -- Update the documentation about SLURM_PMI_KVS_NO_DUP_KEYS environment
    variable.
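The multi-range hostlist expressions mentioned above (e.g. "row[1-3]rack[0-8]slot[0-63]") expand to the cartesian product of their numeric ranges. A minimal sketch of that expansion, in illustrative Python rather than Slurm's C hostlist code (it ignores the zero-padding Slurm preserves and assumes well-formed input):

```python
import itertools
import re

def expand_hostlist(expr: str) -> list:
    """Expand every [lo-hi] numeric range in expr, taking the cartesian
    product across ranges (no zero-padding handling)."""
    # Split on ranges, capturing the lo/hi bounds; literals and bounds
    # alternate in the resulting list.
    parts = re.split(r"\[(\d+)-(\d+)\]", expr)
    literals = parts[0::3]                       # text between the ranges
    ranges = [range(int(lo), int(hi) + 1)
              for lo, hi in zip(parts[1::3], parts[2::3])]
    hosts = []
    for combo in itertools.product(*ranges):
        out = literals[0]
        for num, lit in zip(combo, literals[1:]):
            out += str(num) + lit
        hosts.append(out)
    return hosts

print(len(expand_hostlist("row[1-3]rack[0-8]slot[0-63]")))  # prints 1728
```

The 3 x 9 x 64 product shows why accepting more than two ranges matters on hierarchically named clusters.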
 -- select/cons_res bug fix for range of node counts with --cpus-per-task
    option (e.g. "srun -N2-3 -c2 hostname" would allocate 2 CPUs on the
    first node and 0 CPUs on the second node).
 -- Change reservation flags field from 16 to 32-bits.
 -- Add reservation flag value of "FIRST_CORES".
 -- Added the idea of Resources to the database. Framework for handling
    license servers outside of Slurm.
 -- When starting the slurmctld only send past job/node state information
    to accounting if running for the first time (should speed up startup
    dramatically on systems with lots of nodes or lots of jobs).
 -- Compile and run on FreeBSD 8.4.
 -- Make job array expressions more flexible to accept multiple step counts
    in the expression (e.g. "--array=1-10:2,50-60:5,123").
 -- switch/cray - add state save/restore logic tracking allocated ports.
 -- SchedulerParameters - Replace max_job_bf with bf_max_job_start (both
    will work for now).
 -- Add SchedulerParameters options of preempt_reorder_count and
    preempt_strict_order.
 -- Make memory types in acct_gather uint64_t to handle systems with more
    than 4TB of memory on them.
 -- BGQ - --export=NONE option for srun to make it so only the SLURM_JOB_ID
    and SLURM_STEP_ID env vars are set.
 -- Munge plugins - Add sleep between retries if can't connect to socket.
 -- Added DebugFlags value of "License".
 -- Added --enable-developer which will give you -Werror when compiling.
 -- Fix for job request with GRES count of zero.
 -- Fix a potential memory leak in hostlist.
 -- Job array dependency logic: Cache results for major performance
    improvement.
 -- Modify squeue to support filter on job states Special_Exit and
    Resizing.
 -- Defer purging job record until after EpilogSlurmctld completes.
 -- Add -j option for jobid to sbcast.
 -- Fix handling RPCs from a 14.03 slurmctld to a 2.6 slurmd.
* Changes in Slurm 14.03.0pre6
==============================
 -- Modify slurmstepd to log messages according to the LogTimeFormat
    parameter in slurm.conf.
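The flexible array expression above ("--array=1-10:2,50-60:5,123") combines comma-separated elements, each either a single task ID or a lo-hi range with an optional ":step" count. A minimal parser sketch, in illustrative Python rather than Slurm source:

```python
def parse_array_expr(expr: str) -> list:
    """Parse a job array expression: comma-separated elements, each a
    single task ID or a lo-hi range with an optional :step suffix."""
    tasks = []
    for element in expr.split(","):
        step = 1
        if ":" in element:                       # strip optional step count
            element, step_text = element.split(":")
            step = int(step_text)
        if "-" in element:                       # expand a lo-hi range
            lo, hi = map(int, element.split("-"))
            tasks.extend(range(lo, hi + 1, step))
        else:                                    # single task ID
            tasks.append(int(element))
    return tasks

print(parse_array_expr("1-10:2,50-60:5,123"))
# prints [1, 3, 5, 7, 9, 50, 55, 60, 123]
```

Each range carries its own step, which is exactly what "multiple step counts in the expression" buys over the earlier single-range form.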
 -- Insure that overlapping reservations do not oversubscribe available
    licenses.
 -- Added core specialization logic to select/cons_res plugin.
 -- Added whole_node field to job_resources structure and enable gang
    scheduling for jobs with core specialization.
 -- When using FastSchedule=1 the nodes with less than configured resources
    are no longer set DOWN, they are set to DRAIN instead.
 -- Modified 'sacctmgr show associations' command to show GrpCPURunMins by
    default.
 -- Replace the hostlist_push() function with a more efficient
    hostlist_push_host().
 -- Modify the reading of lustre file system statistics to print more
    information when debugging and when an io error occurs.
 -- Add specialized core count field to job credential data. NOTE: This
    changes the communications protocol from other pre-releases of version
    14.03. All programs must be cancelled and daemons upgraded from
    previous pre-releases of version 14.03. Upgrades from version 2.6 or
    earlier can take place without loss of jobs.
 -- Add version number to node and front-end configuration information
    visible using the scontrol tool.
 -- Add idea of a RESERVED flag for node state so idle resources are marked
    not "idle" when in a reservation.
 -- Added core specialization plugin infrastructure.
 -- Added new job_submit/throttle plugin to control the rate at which a
    user can submit jobs.
 -- CRAY - added network performance counters option.
 -- Allow scontrol suspend/resume to accept jobid in the format
    jobid_taskid to suspend/resume array elements.
 -- In the slurmctld job record, split "shared" variable into "share_res"
    (share resource) and "whole_node" fields.
 -- Fix the format of SLURM_STEP_RESV_PORTS. It was generated incorrectly
    when using the hostlist_push_host function and input surrounded by [].
 -- Modify the srun --slurmd-debug option to accept debug string tags
    (quiet, fatal, error, info, verbose) beside the numerical values.
 -- Fix the bug where --cpu_bind=map_cpu is interpreted as mask_cpu.
 -- Update the documentation regarding the state of cpu frequencies after a step using --cpu-freq completes.
 -- CRAY - Fix issue when a job is requeued and nhc is still running as it is being scheduled to run again. This would erase the previous job info that was still needed to clean up the nodes from the previous job run. (Bug 526).
 -- Set SLURM_JOB_PARTITION environment variable for all job allocations.
 -- Set SLURM_JOB_PARTITION environment variable for Prolog program.
 -- Added SchedulerParameters option of partition_job_depth to limit scheduling logic depth by partition.
 -- Handle the case in which errno is not reset to 0 after calling getgrent_r(), which causes the controller to core dump.

* Changes in Slurm 14.03.0pre5
==============================
 -- Added squeue format option of "%X" (core specialization count).
 -- Added core specialization web page (just a start for now).
 -- Added the SLURM_ARRAY_JOB_ID and SLURM_ARRAY_TASK_ID to the epilog slurmctld environment.
 -- Fix bug in job step allocation failing due to memory limit.
 -- Modify the pbsnodes script to reflect its output on a TORQUE system.
 -- Add ability to clear a node's DRAIN flag using scontrol or sview by setting its state to "UNDRAIN". The node's base state (e.g. "DOWN" or "IDLE") will not be changed.
 -- Modify the output of 'scontrol show partition' by displaying DefMemPerCPU=UNLIMITED and MaxMemPerCPU=UNLIMITED when these limits are configured as 0.
 -- mpirun-mic - Major re-write of the command wrapper for Xeon Phi use.
 -- Add new configuration parameter of AuthInfo to specify port used by authentication plugin.
 -- Fixed conditional RPM compiling.
 -- Corrected slurmstepd ident name when logging to syslog.
 -- Fixed sh5util loop when there are no node-step files.
 -- Add SLURM_CLUSTER_NAME to environment variables passed to PrologSlurmctld, Prolog, EpilogSlurmctld, and Epilog.
 -- Add the idea of running a prolog as soon as an allocation happens instead of when first running on the node.
 -- If user runs 'scontrol reconfig' but hostnames or the host count have changed, the slurmctld throws a fatal error.
 -- gres.conf - Add "NodeName" specification so that a single gres.conf file can be used for a heterogeneous cluster.
 -- Add flag to accounting RPC to indicate whether job data is packed or not.
 -- After all srun tasks have terminated on a node, close the stdout/stderr channel with the slurmstepd on that node.
 -- In case of an I/O error with slurmstepd, log an error message and abort the job.
 -- Add --test-only option to sbatch command to validate the script and options. The response includes expected start time and resources to be allocated.

* Changes in Slurm 14.03.0pre4
==============================
 -- Remove the ThreadID documentation from slurm.conf. This functionality has been obsoleted by the LogTimeFormat.
 -- Sched plugins - Rename global and plugin function names for consistency with other plugin types.
 -- BGQ - Added RebootQOSList option to bluegene.conf to allow an implicit reboot of a block if only jobs in the list are running on it when cnodes go into a failure state.
 -- Correct task count of pending job steps.
 -- Improve limit enforcement for jobs: set RLIMIT_RSS, RLIMIT_AS and/or RLIMIT_DATA to enforce memory limit.
 -- Pending job steps will have step_id of INFINITE rather than NO_VAL and will be reported as "TBD" by scontrol and squeue commands.
 -- Add logic so PMI_Abort or PMI2_Abort can propagate an exit code.
 -- Added SlurmdPlugstack configuration parameter.
 -- Added PriorityFlag DEPTH_OBLIVIOUS so that the depth of an association does not affect its priority.
 -- Multi-thread the sinfo command (one thread per partition).
 -- Added sgather tool to gather files from a job's compute nodes into a central location.
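The gres.conf "NodeName" entry above lets one shared gres.conf describe a heterogeneous cluster. A hedged sketch of what such a file might look like (host ranges and device paths are hypothetical, not from the source):

```
# Hypothetical gres.conf fragment for a heterogeneous cluster;
# node names and device files are illustrative only.
NodeName=tux[0-7]   Name=gpu File=/dev/nvidia[0-1]
NodeName=tux[8-15]  Name=gpu File=/dev/nvidia[0-3]
```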
 -- Added configuration parameter FairShareDampeningFactor to offer a greater priority range based upon utilization.
 -- Change MaxArraySize and job's array_task_id from 16-bit to 32-bit field. Additional Slurm enhancements will be required to support larger job arrays.
 -- Added -S/--core-spec option to salloc, sbatch and srun commands to reserve specialized cores for system use. Modify scontrol and sview to get/set the new field. No enforcement exists yet for these new options.
    Struct job_info / slurm_job_info_t: Added core_spec
    Struct job_descriptor / job_desc_msg_t: Added core_spec

* Changes in Slurm 14.03.0pre3
==============================
 -- Do not set SLURM_NODEID environment variable on front-end systems.
 -- Convert bitmap functions to use int32_t instead of int in data structures and function arguments. This is to reliably enable use of bitmaps containing up to 4 billion elements. Several data structures containing index values were also changed from data type int to int32_t:
    - Struct job_info / slurm_job_info_t: Changed exc_node_inx, node_inx, and req_node_inx from type int to type int32_t
    - job_step_info_t: Changed node_inx from type int to type int32_t
    - Struct partition_info / partition_info_t: Changed node_inx from type int to type int32_t
    - block_job_info_t: Changed cnode_inx from type int to type int32_t
    - block_info_t: Changed ionode_inx and mp_inx from type int to type int32_t
    - Struct reserve_info / reserve_info_t: Changed node_inx from type int to type int32_t
 -- Modify qsub wrapper output to match torque command output: just print the job ID rather than "Submitted batch job #".
 -- Change Slurm error string for ESLURM_MISSING_TIME_LIMIT from "Missing time limit" to "Time limit specification required, but not provided".
 -- Change salloc job_allocate error message header from "Failed to allocate resources" to "Job submit/allocate failed".
 -- Modify slurmctld message retry logic to support Cray cold-standby SDB.
* Changes in Slurm 14.03.0pre2
==============================
 -- Added "JobAcctGatherParams" configuration parameter. Value of "NoShare" disables accounting for shared memory.
 -- Added fields to "scontrol show job" output: boards_per_node, sockets_per_board, ntasks_per_node, ntasks_per_board, ntasks_per_socket, ntasks_per_core, and nice.
 -- Add squeue output format options for job command and working directory (%o and %Z respectively).
 -- Add stdin/out/err to sview job output.
 -- Add new job_state of JOB_BOOT_FAIL for job terminations due to failure to boot its allocated nodes or BlueGene block.
 -- CRAY - Add SelectTypeParameters NHC_NO_STEPS and NHC_NO, which disable the node health check script for steps and allocations respectively.
 -- Reservation with CoreCnt: Avoid possible invalid memory reference.
 -- Add new error code for attempt to create a reservation with duplicate name.
 -- Validate that a hostlist file contains text (i.e. not a binary).
 -- switch/generic - Propagate switch information from srun down to slurmd and slurmstepd.
 -- CRAY - Do not package Slurm's libpmi or libpmi2 libraries. The Cray version of those libraries must be used.
 -- Added a new option to the scontrol command to view licenses that are configured, in use, and available: 'scontrol show licenses'.
 -- MySQL - Made Slurm compatible with MySQL 5.6.

* Changes in Slurm 14.03.0pre1
==============================
 -- sview - Improve scalability.
 -- Add task pointer to the task_post_term() function in task plugins. The terminating task's PID is available in task->pid.
 -- Move select/cray to select/alps.
 -- Defer sending SIGKILL signal to processes while a core dump is in progress.
 -- Added JobContainerPlugin configuration parameter and plugin infrastructure.
 -- Added partition configuration parameters AllowAccounts, AllowQOS, DenyAccounts and DenyQOS.
 -- The rpmbuild option for a cray system with ALPS has changed from %_with_cray to %_with_cray_alps.
 -- The log file timestamp format can now be selected at runtime via the LogTimeFormat configuration option. See the slurm.conf and slurmdbd.conf man pages for details.
 -- Added switch/generic plugin to convey a job's network topology.
 -- BLUEGENE - If block is in 'D' state or has more cnodes in error than MaxBlockInError, set the job wait reason appropriately.
 -- API use: Generate an error return rather than a fatal error and exit if the configuration file is absent or invalid. This will permit Slurm APIs to be more reliably used by other programs.
 -- Add support for load-based scheduling: allocate jobs to the nodes with the largest number of available CPUs. Added SelectTypeParameters value of "CR_LLN" and partition parameter of "LLN=yes|no".
 -- Added job_info() and step_info() functions to the gres plugins to extract plugin specific fields from the job's or step's GRES data structure.
 -- Added sbatch --signal option of "B:" to signal the batch shell rather than only the spawned job steps.
 -- Added sinfo and squeue format option of "%all" to print all fields available for the data type with a vertical bar separating each field.
 -- Add mechanism for job_submit plugin to generate an error message for srun, salloc or sbatch to stderr. New argument added to job_submit function in the plugin.
 -- Add StdIn, StdOut, and StdErr paths to job information dumped with "scontrol show job".
 -- Permit Slurm administrator to submit a batch job as any user.
 -- Set a job's RLIMIT_AS limit based upon its memory limit and VSizeFactor configuration value.
 -- Remove Postgres plugins.
 -- Make jobacct_gather/cgroup work correctly and also make all jobacct_gather plugins more maintainable.
 -- Proctrack/pgid - Add support for proctrack_p_plugin_get_pids() function.
 -- Sched/backfill - Change default max_job_bf parameter from 50 to 100.
 -- Added -I|--item-extract option to sh5util to extract a data item from a series.
* Changes in Slurm 2.6.10
=========================
 -- Switch/nrt - On switch resource allocation failure, free partial allocation.
 -- Switch/nrt - Properly track usage of CAU and RDMA resources with multiple tasks per compute node.
 -- Fix issue where user is requesting --acctg-freq=0 and no memory limits.
 -- BGQ - Temp fix for issue where job could be left on job_list after it finished.
 -- BGQ - Fix issue where limits were checked on midplane counts instead of cnode counts.
 -- BGQ - Move code to only start job on a block after limits are checked.
 -- Handle node ranges better when dealing with accounting max node limits.
 -- Fix perlapi to compile correctly with perl 5.18.
 -- BGQ - Fix issue with uninitialized variable.
 -- Correct sinfo --sort fields to match documentation: E -> Reason, H -> Reason Time (new), R -> Partition Name, u/U -> Reason user (new).
 -- If an invalid assoc_ptr comes in, don't use the id to verify it.
 -- Sched/backfill modified to avoid using nodes in completing state.
 -- Correct support for job --profile=none option and related documentation.
 -- Properly enforce job --requeue and --norequeue options.
 -- If a job --mem-per-cpu limit exceeds the partition or system limit, then scale the job's memory limit and CPUs per task to satisfy the limit.
 -- Correct logic to support Power7 processor with 1 or 2 threads per core (CPU IDs are not consecutive).

* Changes in Slurm 2.6.9
========================
 -- Fix sinfo to work correctly with draining/mixed nodes as well as filtering on Mixed state.
 -- Fix sacctmgr update user with no "where" condition.
 -- Fix logic bugs for SchedulerParameters option of max_rpc_cnt.

* Changes in Slurm 2.6.8
========================
 -- Add support for Torque/PBS job array options and environment variables.
 -- CRAY/ALPS - Add support for CLE52.
 -- Fix issue where jobs still pending after a reservation would remain in waiting reason ReqNodeNotAvail.
 -- Update last_job_update when a job's state_reason was modified.
 -- Free job_ptr->state_desc wherever state_reason is set.
 -- Fixed sacct.1 and srun.1 manual pages, which contained a hyphen where a minus sign for options was intended.
 -- sinfo - Make sure that if a partition name is long and is the default partition, the last character doesn't get chopped off.
 -- task/affinity - Protect against divide by zero when simulating more hardware than you really have.
 -- NRT - Fix issue with 1 node jobs. It turns out the network does need to be set up for 1 node jobs.
 -- Fix recovery of job dependency on a task of a job array when slurmctld restarts.
 -- mysql - Fix invalid memory reference.
 -- Lock the /cgroup/freezer subsystem when creating files for tracking processes.
 -- Fix preempt/partition_prio to avoid preempting jobs in partitions with PreemptMode=OFF.
 -- launch/poe - Implicitly set --network in job step create request as needed.
 -- Permit multiple batch job submissions to be made for each run of the scheduler logic if the job submissions occur at nearly the same time.
 -- Fix issue where associations weren't correct if the backup takes control and new associations were added since it was started.
 -- Fix race condition in corner case with backup slurmctld.
 -- With the backup slurmctld, make sure we reinit beginning values in the slurmdbd plugin.
 -- Fix sinfo to work correctly with draining/mixed nodes.
 -- MySQL - Fix it so a lock isn't held unnecessarily.
 -- Added new SchedulerParameters option of max_rpc_cnt for use when too many RPCs are active.
 -- BGQ - Fix deny_pass to work correctly.
 -- BGQ - Fix sub-block steps using a block when the block has passthroughs in it.

* Changes in Slurm 2.6.7
========================
 -- Properly enforce a job's cpus-per-task option when a job's allocation is constrained on some nodes by the mem-per-cpu option.
 -- Correct the slurm.conf man pages and checkpoint_blcr.html page describing that jobs must be drained from the cluster before deploying any checkpoint plugin. Corrected in version 14.03.
 -- Fix issue where, if using munge and munge wasn't running and a slurmd needed to forward a message, the slurmd would core dump.
 -- Update srun.1 man page documenting the PMI2 support.
 -- Fix slurmctld core dump when a job gets its QOS updated but there is not a corresponding association.
 -- If a job requires specific nodes and can not run due to those nodes being busy, the main scheduling loop will block those specific nodes rather than the entire queue/partition.
 -- Fix minor memory leak when updating a job's name.
 -- Fix minor memory leak when updating a reservation on a partition using "ALL" nodes.
 -- Fix minor memory leak when adding a reservation with a nodelist and core count.
 -- Update sacct man page description of job states.
 -- BGQ - Fix minor memory leak when selecting blocks that can't immediately be placed.
 -- Fixed minor memory leak in backfill scheduler.
 -- MYSQL - Fixed memory leak when querying clusters.
 -- MYSQL - Fix when updating QOS on an association.
 -- NRT - Fix to supply correct error messages to poe/pmd when a launch fails.
 -- Add SLURM_STEP_ID to Prolog environment.
 -- Add support for SchedulerParameters value of bf_max_job_start that limits the total number of jobs that can be started in a single iteration of the backfill scheduler.
 -- Don't print a negative number when dealing with large memory sizes with sacct.
 -- Fix sinfo output so that hosts in state allocated and mixed will not be merged together.
 -- GRES: Avoid crash if GRES configuration is inconsistent.
 -- Make S_SLURM_RESTART_COUNT item available to SPANK.
 -- Munge plugins - Add sleep between retries if unable to connect to socket.
 -- Fix the database query to return all pending jobs in a given time interval.
 -- switch/nrt - Correct logic to get dynamic window count.
 -- Remove need to use job->ctx_params in the launch plugin, just to simplify code.
 -- NRT - Fix possible memory leak if using multiple adapters.
 -- NRT - Fix issue where there are more than NRT_MAXADAPTERS on a system.
 -- NRT - Increase maximum number of adapters from 8 to 9.
 -- NRT - Initialize missing variables when the PMD is starting a job.
 -- NRT - Fix issue where hosts were launched out of numerical order, which would cause pmds to hang.
 -- NRT - Change xmallocs to malloc just to be safe.
 -- NRT - Sanity check to make sure a jobinfo is there before packing.
 -- Add missing options to the print of TaskPluginParam.
 -- Fix a couple of issues with scontrol reconfig and adding nodes to slurm.conf. Restarting the daemons after adding nodes to slurm.conf is highly recommended.

* Changes in Slurm 2.6.6
========================
 -- sched/backfill - Fix bug that could result in failing to reserve resources for high priority jobs.
 -- Correct job RunTime if requeued from suspended state.
 -- Reset job priority from zero (held) on manual resume from suspend state.
 -- If FastSchedule=0 then do not DOWN a node with low memory or disk size.
 -- Remove vestigial note.
 -- Update sshare.1 man page, making it consistent with sacctmgr.1.
 -- Do not reset a job's priority when the slurmctld restarts if previously set to some specific value.
 -- sview - Fix regression where the Node tab wasn't able to add/remove columns.
 -- Fix slurmstepd lock when job terminates inside the infiniband network traffic accounting plugin.
 -- Correct the documentation to read filesystem instead of Lustre. Update the srun help.
 -- Fix acct_gather_filesystem_lustre.c to compute the Lustre accounting data correctly, accumulating differences between sampling intervals. Fix the data structure mismatch between acct_gather_filesystem_lustre.c and slurm_jobacct_gather.h which caused the hdf5 plugin to log incorrect data.
 -- Don't allow PMI_TIME to be zero, which would cause a floating point exception.
 -- Fix purging of old reservation errors in database.
 -- MYSQL - If starting the plugin and the database isn't up, attempt to connect in a loop instead of producing a fatal error.
 -- BLUEGENE - If IONodesPerMP changes in bluegene.conf, recalculate bitmaps based on ionode count correctly on slurmctld restart.
 -- Fix step allocation when some CPUs are not available due to memory limits. This happens when one step is active and using memory that blocks the scheduling of another step on a portion of the CPUs needed. The new step is now delayed rather than aborting with "Requested node configuration is not available".
 -- Make sure node limits get assessed if no node count was given in the request.
 -- Removed obsolete slurm_terminate_job() API.
 -- Update documentation about QOS limits.
 -- Retry task exit message from slurmstepd to srun on message timeout.
 -- Correction to logic reserving all nodes in a specified partition.
 -- Added support for selecting AMD GPUs by setting the GPU_DEVICE_ORDINAL env var.
 -- Properly enforce GrpSubmit limit for job arrays.
 -- CRAY - Fix issue with using CR_ONE_TASK_PER_CORE.
 -- CRAY - Fix memory leak when using accelerators.

* Changes in Slurm 2.6.5
========================
 -- Correction to hostlist parsing bug introduced in v2.6.4 for hostlists with more than one numeric range in brackets (e.g. "rack[0-3]_blade[0-63]").
 -- Add notification if using proctrack/cgroup and task/cgroup when OOM hits.
 -- Corrections to advanced reservation logic with overlapping jobs.
 -- job_submit/lua - Add cpus_per_task field to those available.
 -- Add cpu_load to the node information available using the Perl API.
 -- Correct a job's GRES allocation data in accounting records for non-Cray systems.
 -- Substantial performance improvement for systems with Shared=YES or FORCE and large numbers of running jobs (replace bubble sort with quick sort).
 -- proctrack/cgroup - Add locking to prevent race condition where one job step is ending for a user or job at the same time another job step is starting and the user or job container is deleted from under the starting job step.
 -- Fixed sh5util loop when there are no node-step files.
 -- Fix race condition on batch job termination that could result in a job exit code of 0xfffffffe if the slurmd on node zero registers its active jobs at the same time that slurmstepd is recording the job's exit code.
 -- Correct logic returning remaining job dependencies in job information reported by scontrol and squeue. Eliminates vestigial descriptors with no job ID values (e.g. "afterany").
 -- Improve performance of REQUEST_JOB_INFO_SINGLE RPC by removing unnecessary locks and using a hash function to find the desired job.
 -- jobcomp/filetxt - Reopen the file when the slurmctld daemon is reconfigured or gets SIGHUP.
 -- Remove notice of CVE with very old/deprecated versions of Slurm in news.html.
 -- Fix if hwloc_get_nbobjs_by_type() returns zero core count (set to 1).
 -- Added ApbasilTimeout parameter to the cray.conf configuration file.
 -- Handle in the API if parts of the node structure are NULL.
 -- Fix srun hang when IO fails to start at launch.
 -- Fix for GRES bitmap not matching the GRES count, resulting in abort (requires manual resetting of GRES count, changes to gres.conf file, and slurmd restarts).
 -- Modify sview to better support job arrays.
 -- Modify squeue to support longer job ID values (for many job array tasks).
 -- Fix race condition in authentication credential creation that could corrupt memory. (NOTE: This race condition has existed since 2003 and would be exceedingly rare.)
 -- HDF5 - Fix minor memory leak.
 -- Slurmstepd variable initialization - Without this patch, free() is called on a random memory location (i.e. whatever is on the stack), which can result in slurmstepd dying and a completed job not being purged in a timely fashion.
 -- Fix slurmstepd race condition when separate threads are reading and modifying the job's environment, which can result in the slurmstepd failing with an invalid memory reference.
 -- Fix erroneous error messages when running gang scheduling.
 -- Fix minor memory leak.
 -- scontrol modified to suspend, resume, hold, uhold, or release multiple jobs in a space separated list.
 -- Fix minor debug error when a connection goes away at the end of a job.
 -- Validate return code from calls to slurm_get_peer_addr.
 -- BGQ - Fix issues with making sure all cnodes are accounted for when multiple steps cause multiple cnodes in one allocation to go into error at the same time.
 -- scontrol show job - Correct NumNodes value calculated based upon job specifications.
 -- BGQ - Fix issue if user runs multiple sub-block jobs inside a multiple midplane block that starts on a higher coordinate than it ends (i.e. if a block has midplanes [0010,0013], 0013 is the start even though it is listed second in the hostlist).
 -- BGQ - Add midplane to the total_cnodes used in the runjob_mux plugin for better debugging.
 -- Update AllocNodes paragraph in slurm.conf.5.

* Changes in Slurm 2.6.4
========================
 -- Fixed sh5util to print its usage.
 -- Corrected commit f9a3c7e4e8ec.
 -- Honor ntasks-per-node option with exclusive node allocations.
 -- sched/backfill - Prevent invalid memory reference if the bf_continue option is configured and slurm is reconfigured during one of the sleep cycles, or if there are any changes to the partition configuration, or if the normal scheduler runs and starts a job that the backfill scheduler is actively working on.
 -- Update man pages information about acct-freq and JobAcctGatherFrequency to reflect only the latest supported format.
 -- Minor document update to include note about PrivateData=Usage for the slurm.conf when using the DBD.
 -- Expand information reported with DebugFlags=backfill.
 -- Initiate jobs pending to run in a reservation as soon as the reservation becomes active.
 -- Purge expired reservations even if they have pending jobs.
 -- Corrections to calculation of a pending job's expected start time.
 -- Remove some vestigial logic treating job priority of 1 as a special case.
 -- Free memory to avoid minor memory leaks when daemons close.
 -- Updated documentation to give correct units being displayed.
 -- Report AccountingStorageBackupHost with "scontrol show config".
 -- init scripts ignore quotes around Pid file name specifications.
 -- Fixed typo about command case in quickstart.html.
 -- task/cgroup - Handle new cpuset files, similar to commit c4223940.
 -- Replace the tempname() function call with mkstemp().
 -- Fix for --cpu_bind=map_cpu/mask_cpu/map_ldom/mask_ldom plus --mem_bind=map_mem/mask_mem options, broken in 2.6.2.
 -- Restore default behavior of allocating cores to jobs on a cyclic basis across the sockets unless SelectTypeParameters=CR_CORE_DEFAULT_DIST_BLOCK or user specifies other distribution options.
 -- Enforce JobRequeue configuration parameter on node failure. Previously the job was always requeued.
 -- acct_gather_energy/ipmi - Add delay before retry on read error.
 -- select/cons_res with GRES and multiple threads per core: fix possible infinite loop.
 -- proctrack/cgroup - Add cgroup create retry logic in case one step is starting at the same time as another step is ending and the logic to create and delete cgroups overlaps.
 -- Improve setting of job wait "Reason" field.
 -- Correct sbatch documentation and job_submit/pbs plugin: "%j" is job ID, not "%J" (which is job_id.step_id).
 -- Improvements to sinfo performance, especially for large numbers of partitions.
 -- SlurmdDebug - Permit changes to slurmd debug level with "scontrol reconfig".
 -- smap - Avoid invalid memory reference with hidden nodes.
 -- Fix sacctmgr modify qos set preempt+/-=.
 -- BLUEGENE - Fix issue where node count wasn't set up correctly when srun performs the allocation; regression in 2.6.3.
 -- Add support for dependencies of job array elements (e.g. "sbatch --depend=afterok:123_4 ...") or all elements of a job array (e.g. "sbatch --depend=afterok:123 ...").
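The job array dependency entry above distinguishes "jobid" (whole array) from "jobid_taskid" (one element). A small sketch, not Slurm code, of how such a dependency specification splits apart (the helper name is hypothetical):

```python
def parse_dependency(spec):
    """Split a dependency like 'afterok:123_4' into (type, job_id, task_id).

    task_id is None when the dependency covers the whole job array,
    e.g. 'afterok:123'.
    """
    dep_type, _, target = spec.partition(":")
    job, _, task = target.partition("_")
    return (dep_type, int(job), int(task) if task else None)

print(parse_dependency("afterok:123_4"))  # ('afterok', 123, 4)
print(parse_dependency("afterok:123"))    # ('afterok', 123, None)
```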
 -- Add support for new options in sbatch qsub wrapper:
    -W block=true (wait for job completion)
    Clear PBS_NODEFILE environment variable
 -- Fixed the MaxSubmitJobsPerUser limit in QOS, which limited job submissions too early.
 -- sched/wiki, sched/wiki2 - Fix to work with changed logic introduced in version 2.6.3 preventing Maui/Moab from starting jobs.
 -- Updated the QOS limits documentation and man page.

* Changes in Slurm 2.6.3
========================
 -- Add support for some new #PBS options in sbatch scripts and qsub wrapper:
    -l accelerator=true|false (GPU use)
    -l mpiprocs=# (processors per node)
    -l naccelerators=# (GPU count)
    -l select=# (node count)
    -l ncpus=# (task count)
    -v key=value (environment variable)
    -W depend=opts (job dependencies, including "on" and "before" options)
    -W umask=# (set job's umask)
 -- Added qalter and qrerun commands to torque package.
 -- Corrections to qstat logic: job CPU count and partition time format.
 -- Add job_submit/pbs plugin to translate PBS job dependency options to the extent possible (no support for PBS "before" options) and set some PBS environment variables.
 -- Add spank/pbs plugin to set a bunch of PBS environment variables.
 -- Backported sh5util from master to 2.6, as there are some important bugfixes and the new item extraction feature.
 -- select/cons_res - Correct MaxCPUsPerNode partition constraint for CR_Socket.
 -- scontrol - For setdebugflags command, avoid parsing "-flagname" as an scontrol command line option.
 -- Fix issue with step accounting if a job is requeued.
 -- Close file descriptors on exec of prolog, epilog, etc.
 -- Fix issue when a user has held a job and then sets the begin time into the future.
 -- Scontrol - Enable changing a job's stdout file.
 -- Fix issues where memory or node count of a srun job is altered while the srun is pending. The step creation would use the old values and possibly hang srun, since the step wouldn't be able to be created in the modified allocation.
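The #PBS options listed above appear as directives at the top of an sbatch script. A hedged sketch of such a script header (resource values, the variable name, and the final command are hypothetical; only the options in the list above are supported):

```
#!/bin/sh
# Hypothetical sbatch script using #PBS directives from the list above.
#PBS -l select=2
#PBS -l ncpus=16
#PBS -l naccelerators=1
#PBS -v MYVAR=example
#PBS -W umask=022
srun hostname
```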
 -- Add support for new SchedulerParameters value of "bf_max_job_part", the maximum depth the backfill scheduler should go in any single partition.
 -- acct_gather/infiniband plugin - Correct packets_in/out values.
 -- BLUEGENE - Don't ignore a conn-type request from the user.
 -- BGQ - Force a request on a Q for a MESH to be a TORUS in a dimension that can only be a TORUS (1).
 -- Change max message length from 100MB to 1GB before generating "Insane message length" error.
 -- sched/backfill - Prevent possible memory corruption due to use of bf_continue option and long running scheduling cycle (pending jobs could have been cancelled and purged).
 -- CRAY - Fix AcceleratorAllocation depth correctly for basil 1.3.
 -- Created the environment variable SLURM_JOB_NUM_NODES for srun jobs and updated the srun man page.
 -- BLUEGENE/CRAY - Don't set env variables that pertain to a node when Slurm isn't doing the launching.
 -- gres/gpu and gres/mic - Do not treat the existence of an empty gres.conf file as a fatal error.
 -- Fix for time specification "days-0:min" not being parsed correctly when hours are specified as 0.
 -- switch/nrt - Fix for memory leak.
 -- Subtract the PMII_COMMANDLEN_SIZE in contribs/pmi2/pmi2_api.c to prevent certain implementations of snprintf() from segfaulting.

* Changes in Slurm 2.6.2
========================
 -- Fix issue with reconfig and GrpCPURunMins.
 -- Fix wrong node/job state problem after reconfig.
 -- Allow users who are coordinators to update their own limits in the accounts they are coordinators over.
 -- BackupController - Make sure we have a connection to the DBD first thing to avoid it thinking we don't have a cluster name.
 -- Correct value of min_nodes returned by loading job information to consider the job's task count and maximum CPUs per node.
 -- If running jobacct_gather/none, fix issue on unpacking step completion.
 -- Reservation with CoreCnt: Avoid possible invalid memory reference.
 -- sjstat - Add man page when generating rpms.
 -- Make sure GrpCPURunMins is added when creating a user, account or QOS with sacctmgr.
 -- Fix for invalid memory reference due to multiple free calls caused by job arrays submitted to multiple partitions.
 -- Enforce --ntasks-per-socket=1 job option when allocating by socket.
 -- Validate permissions of key directories at slurmctld startup. Report anything that is world writable.
 -- Improve GRES support for CPU topology. Previous logic would pick CPUs then reject jobs that can not match GRES to the allocated CPUs. New logic first filters out CPUs that can not use the GRES, next picks CPUs for the job, and finally picks the GRES that best match those CPUs.
 -- Switch/nrt - Prevent invalid memory reference when allocating a single adapter per node of a specific adapter type.
 -- CRAY - Make Slurm work with CLE 5.1.1.
 -- Fix segfault if submitting to multiple partitions and holding the job.
 -- Use MAXPATHLEN instead of the hardcoded value 1024 for maximum file path lengths.
 -- If OverTimeLimit is defined, do not declare failed those jobs that ended within the OverTimeLimit interval.

* Changes in Slurm 2.6.1
========================
 -- slurmdbd - Allow job derived exit code and comments to be modified by non-root users.
 -- Fix issue with job name being truncated to 24 chars when sending a mail message.
 -- Fix minor issues with spec file: missing files and including files erroneously on a bluegene system.
 -- sacct - Fix --name and --partition options when using accounting_storage/filetxt.
 -- squeue - Remove extra whitespace from default printout.
 -- BGQ - Added head ppcfloor as an include dir when building.
 -- BGQ - Better debug messages in runjob_mux plugin.
 -- PMI2 - Updated the Makefile.am to build a versioned library.
 -- CRAY - Fix srun --mem_bind=local option with launch/aprun.
 -- PMI2 - Corrected buffer size computation in the pmi2_api.c module.
 -- GRES accounting data wrong in database: gres_alloc, gres_req, and gres_used fields were empty if the job was not started immediately.
 -- Fix sbatch and srun task count logic when --ntasks-per-node is specified, but no explicit task count.
 -- Corrected the hdf5 profile user guide and the acct_gather.conf documentation.
 -- IPMI - Fix math bug getting new wattage.
 -- Corrected the AcctGatherProfileType documentation in slurm.conf.
 -- Corrected the sh5util program to print the header in the csv file only once, set the debug messages at debug() level, make the argument check case insensitive, and avoid printing duplicate \n.
 -- If energy values cannot be collected, send a message to the controller to drain the node and log an error to the slurmd log file.
 -- Handle complete removal of CPURunMins time at the end of the job instead of at multifactor poll.
 -- sview - Add missing debug_flag options.
 -- PGSQL - Notes about Postgres functionality being removed in the next version of Slurm.
 -- MYSQL - Fix issue when rolling up usage and events that happened when a cluster was down (slurmctld not running) during that time period.
 -- sched/wiki2 - Insure that Moab gets current CPU load information.
 -- Prevent infinite loop when parsing configuration if an included file contains one blank line.
 -- Fix pack and unpack between 2.6 and 2.5.
 -- Fix job state recovery logic in which a job's accounting frequency was not set. This would result in a value of 65534 seconds being used (the equivalent of NO_VAL in uint16_t), which could result in the job being requeued or aborted.
 -- Validate a job's accounting frequency at submission time rather than waiting for its initiation to possibly fail.
 -- Fix CPURunMins if a job is requeued from a failed launch.
 -- Fix in accounting_storage/filetxt to correct start times, which sometimes could end up before the job started.
 -- Fix issue with potentially referencing past an array in parse_time().
 -- CRAY - Fix issue with accelerators on a cray when parsing BASIL 1.3 XML.
 -- Fix issue with a 2.5 slurmstepd locking up when talking to a 2.6 slurmd.
 -- Add argument to priority plugin's priority_p_reconfig function to note when the association and QOS used_cpu_run_secs field has been reset.
* Changes in Slurm 2.6.0
========================
 -- Fix it so bluegene and serial systems don't get warnings over new NODEDATA enum.
 -- When a job is aborted send a message for any tasks that have completed.
 -- Correction to memory per CPU calculation on system with threads and allocating cores or sockets.
 -- Requeue batch job if its node reboots (used to abort the job).
 -- Enlarge maximum size of srun's hostlist file.
 -- IPMI - Fix first poll to get correct consumed_energy for a step.
 -- Correction to job state recovery logic that could result in assert failure.
 -- Record partial step accounting record if allocated nodes fail abnormally.
 -- Accounting - fix issue where PrivateData=jobs or users could potentially show information to users that had no associations on the system.
 -- Make PrivateData in slurmdbd.conf case insensitive.
 -- sacct/sstat - Add format option ConsumedEnergyRaw to print full energy values.
* Changes in Slurm 2.6.0rc2
===========================
 -- HDF5 - Fix issue with Ubuntu where HDF5 development headers are overwritten by the parallel versions, making it necessary to handle both cases.
 -- ACCT_GATHER - handle suspending correctly for polling threads.
 -- Make SLURM_DISTRIBUTION env var hold both types of distribution if specified.
 -- Remove hardcoded /usr/local from slurm.spec.
 -- Modify slurmctld locking to improve performance under heavy load with very large numbers of batch job submissions or job cancellations.
 -- sstat - Fix issue so that if -j isn't given, the last argument is checked as the job/step id.
 -- IPMI - fix adjustment on poll when using EnergyIPMICalcAdjustment.
* Changes in Slurm 2.6.0rc1
===========================
 -- Added helper script for launching symmetric and MIC-only MPI tasks within SLURM (in contribs/mic/mpirun-mic).
 -- Change maximum delay for state save from 2 secs to 5 secs. Make timeout configurable at build time by defining SAVE_MAX_WAIT.
 -- Modify slurmctld data structure locking to interleave read and write locks rather than always favor write locks over read locks.
 -- Added sacct format option of "ALL" to print all fields.
 -- Deprecate the SchedulerParameters value of "interval"; use "bf_interval" instead, as documented.
 -- Add acct_gather_profile/hdf5 to profile jobs with hdf5.
 -- Added MaxCPUsPerNode partition configuration parameter. This can be especially useful to schedule systems with GPUs.
 -- Permit "scontrol reboot_node" for nodes in MAINT reservation.
 -- Added "PriorityFlags" value of "SMALL_RELATIVE_TO_TIME". If set, the job's size component will be based not upon the job size alone, but the job's size divided by its time limit.
 -- Added sbatch option "--ignore-pbs" to ignore "#PBS" options in the batch script.
 -- Rename slurm_step_ctx_params_t field from "mem_per_cpu" to "pn_min_memory". Job step now accepts memory specification on either a per-cpu or per-node basis.
 -- Add ability to specify host repetition count in the srun hostfile (e.g. "host1*2" is equivalent to "host1,host1").
* Changes in Slurm 2.6.0pre3
============================
 -- Add milliseconds to default log message header (both RFC 5424 and ISO 8601 time formats). Disable milliseconds logging using the configure parameter "--disable-log-time-msec". Default time format changes to ISO 8601 (without time zone information). Specify "--enable-rfc5424time" to restore the time zone information.
 -- Add username (%u) to the filename pattern in the batch script.
 -- Added options for front end nodes of AllowGroups, AllowUsers, DenyGroups, and DenyUsers.
 -- Fix sched/backfill logic to initiate jobs with a maximum time limit over the partition limit, but whose minimum time limit permits them to start.
 -- gres/gpu - Fix for gres.conf file with multiple files on a single line using a slurm expression (e.g. "File=/dev/nvidia[0-1]").
 -- Replaced ipmi.conf with generic acct_gather.conf file for all acct_gather plugins. Developers wishing to use this should follow the model set forth in the acct_gather_energy_ipmi plugin.
 -- Added more options to update a step's information.
 -- Add DebugFlags=ThreadID which will print the thread id of the calling thread.
 -- CRAY - Allocate whole node (CPUs) in reservation despite what the user requests. We have found any srun/aprun afterwards will work on a subset of resources.
* Changes in Slurm 2.6.0pre2
============================
 -- Do not purge inactive interactive jobs that lack a port to ping (added for MR+ operation).
 -- Advanced reservations with hostname and core counts now support asymmetric reservations (e.g. a different core count for each node).
 -- Added slurmctld/dynalloc plugin for MapReduce+ support.
 -- Added "DynAllocPort" configuration parameter.
 -- Added partition parameter of SelectTypeParameters to override the system-wide value.
 -- Added cr_type to partition_info data structure.
 -- Added allocated memory to node information available (within the existing select_nodeinfo field of the node_info_t data structure). Added Allocated Memory to node information displayed by sview and scontrol commands.
 -- Make sched/backfill the default scheduling plugin rather than sched/builtin (FIFO).
 -- Added support for a job having different priorities in different partitions.
 -- Added new SchedulerParameters configuration parameter of "bf_continue" which permits the backfill scheduler to continue considering jobs for backfill scheduling after yielding locks even if new jobs have been submitted. This can result in lower priority jobs being backfill scheduled instead of newly arrived higher priority jobs, but will permit more queued jobs to be considered for backfill scheduling.
 -- Added support to purge reservation records from accounting.
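Several of the 2.6.0 entries above add slurm.conf parameters (bf_interval, bf_continue, MaxCPUsPerNode, DebugFlags=ThreadID). A minimal illustrative fragment; the parameter names come from the entries, but the values and node/partition names are invented examples, not recommendations:

```
# Illustrative slurm.conf fragment (example values only)
SchedulerType=sched/backfill
SchedulerParameters=bf_interval=60,bf_continue
DebugFlags=ThreadID
# MaxCPUsPerNode is a partition parameter, e.g. to cap CPU use on GPU nodes:
PartitionName=gpu Nodes=tux[0-31] MaxCPUsPerNode=14 Default=NO
```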
 -- Cray - Add support for Basil 1.3.
* Changes in SLURM 2.6.0pre1
============================
 -- Add "state" field to job step information reported by scontrol.
 -- Notify srun to retry step creation upon completion of other job steps rather than polling. This results in much faster throughput for job step execution with the --exclusive option.
 -- Added "ResvEpilog" and "ResvProlog" configuration parameters to execute a program at the beginning and end of each reservation.
 -- Added "slurm_load_job_user" function. This is a variation of "slurm_load_jobs", but accepts a user ID argument, potentially resulting in substantial performance improvement for "squeue --user=ID".
 -- Added "slurm_load_node_single" function. This is a variation of "slurm_load_nodes", but accepts a node name argument, potentially resulting in substantial performance improvement for "sinfo --nodes=NAME".
 -- Added "HealthCheckNodeState" configuration parameter to identify node states on which HealthCheckProgram should be executed.
 -- Remove sacct --dump --formatted-dump options which were deprecated in 2.5.
 -- Added support for job arrays (phase 1 of effort). See "man sbatch" option -a/--array for details.
 -- Add new AccountStorageEnforce options of 'nojobs' and 'nosteps' which will allow the use of accounting features like associations, qos and limits but not keep track of jobs or steps in accounting.
 -- Cray - Add new cray.conf parameter of "AlpsEngine" to specify the communication protocol to be used for ALPS/BASIL.
 -- select/cons_res plugin: Correction to CPU allocation count logic for cores without hyperthreading.
 -- Added new SelectTypeParameter value of "CR_ALLOCATE_FULL_SOCKET".
 -- Added PriorityFlags value of "TICKET_BASED" and merged priority/multifactor2 plugin into priority/multifactor plugin.
 -- Add "KeepAliveTime" configuration parameter controlling how long sockets used for srun/slurmstepd communications are kept alive after disconnect.
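The job array support noted above (see "man sbatch", -a/--array) can be sketched as follows. The script contents are an invented example; the sbatch submission line is commented out since it requires a running cluster:

```shell
# Hedged sketch of submitting a job array (Slurm 2.6.0pre1 feature).
# Each array task receives its own index in SLURM_ARRAY_TASK_ID.
cat > array_job.sh <<'EOF'
#!/bin/sh
# Example payload: report this array task's index.
echo "task $SLURM_ARRAY_TASK_ID"
EOF
chmod +x array_job.sh
# sbatch --array=0-15 array_job.sh   # would submit 16 tasks, indices 0..15
head -n 1 array_job.sh
```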
 -- Added SLURM_SUBMIT_HOST to salloc, sbatch and srun job environment.
 -- Added SLURM_ARRAY_TASK_ID to the environment of job array tasks.
 -- Added squeue --array/-r option to optimize output for job arrays.
 -- Added "SlurmctldPlugstack" configuration parameter for generic stack of slurmctld daemon plugins.
 -- Removed contribs/arrayrun tool. Use native support for job arrays.
 -- Modify default installation locations for RPMs to match "make install":
    _prefix /usr/local
    _slurm_sysconfdir %{_prefix}/etc/slurm
    _mandir %{_prefix}/share/man
    _infodir %{_prefix}/share/info
 -- Add acct_gather_energy/ipmi which works off freeipmi for energy gathering.
* Changes in Slurm 2.5.8
========================
 -- Fix for slurmctld segfault on NULL front-end reason field.
 -- Avoid gres step allocation errors when a job shrinks in size due to either down nodes or explicit resizing. Generated slurmctld errors of this type: "step_test ... gres_bit_alloc is NULL"
 -- Fix bug that would leak memory and over-write the AllowGroups field on "scontrol reconfig" when AllowNodes is manually changed using scontrol.
 -- Get html/man files to install in correct places with rpms.
 -- Remove --program-prefix from spec file since it appears to be added by default and appeared to break other things.
 -- Updated the automake min version in autogen.sh to be correct.
 -- Select/cons_res - Correct total CPU count allocated to a job with --exclusive and --cpus-per-task options.
 -- switch/nrt - Don't allocate network resources unless job step has 2+ nodes.
 -- select/cons_res - Avoid extraneous "oversubscribe" error messages.
 -- Reorder get config logic to avoid deadlock.
 -- Enforce QOS MaxCPUsMin limit when job submission contains no user-specified time limit.
 -- EpilogSlurmctld pthread is passed required arguments rather than a pointer to the job record, which under some conditions could be purged and result in an invalid memory reference.
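The RPM install-location change above can be overridden per build with rpm macros. A hedged sketch; the macro names are taken from the entry, while the file name and the /opt/slurm prefix are invented examples (in practice the definitions would go in ~/.rpmmacros or be passed via rpmbuild --define):

```shell
# Write an example rpm macro file overriding the default prefixes.
cat > rpmmacros.example <<'EOF'
%_prefix            /opt/slurm
%_slurm_sysconfdir  %{_prefix}/etc/slurm
EOF
grep -c '^%' rpmmacros.example   # prints: 2
```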
* Changes in Slurm 2.5.7
========================
 -- Fix for linking to the select/cray plugin to not give warning about undefined variable.
 -- Add missing symbols to the xlator.h.
 -- Avoid placing pending jobs in AdminHold state due to backfill scheduler interactions with advanced reservation.
 -- Accounting - make average by task not cpu.
 -- CRAY - Change logging of transient ALPS errors from error() to debug().
 -- POE - Correct logic to support poe option "-euidevice sn_all" and "-euidevice sn_single".
 -- Accounting - Fix minor initialization error.
 -- POE - Correct logic to support srun network instances count with POE.
 -- POE - With the srun --launch-cmd option, report proper task count when the --cpus-per-task option is used without the --ntasks option.
 -- POE - Fix logic binding tasks to CPUs.
 -- sview - Fix race condition where new information could have slipped past the node tab unnoticed.
 -- Accounting - Fix an invalid memory read when slurmctld sends data about a job start to slurmdbd.
 -- If a prolog or epilog failure occurs, drain the node rather than setting it down and killing all of its jobs.
 -- Priority/multifactor - Avoid underflow in half-life calculation.
 -- POE - pack missing variable to allow fanout (more than 32 nodes).
 -- Prevent clearing reason field for pending jobs. This bug was introduced in v2.5.5 (see "Reject job at submit time ...").
 -- BGQ - Fix issue with preemption on sub-block jobs where a job would kill all preemptable jobs on the midplane instead of just the ones it needed to.
 -- switch/nrt - Validate dynamic window allocation size.
 -- BGQ - When --geo is requested do not impose the default conn_types.
 -- CRAY - Support CLE 4.2.0.
 -- RebootNode logic - Defers (rather than forgets) reboot request with job running on the node within a reservation.
 -- switch/nrt - Correct network_id use logic. Correct support for user sn_all and sn_single options.
 -- sched/backfill - Modify logic to reduce overhead under heavy load.
 -- Fix job step allocation with --exclusive and --hostlist option.
 -- Select/cons_res - Fix bug resulting in error of "cons_res: sync loop not progressing, holding job #"
 -- checkpoint/blcr - Reset max_nodes from zero to NO_VAL on job restart.
 -- launch/poe - Fix for hostlist file support with repeated host names.
 -- priority/multifactor2 - Prevent possible divide by zero.
 -- srun - Don't check for executable if --test-only flag is used.
 -- energy - On a single node only use the last task for gathering energy, since we don't currently track energy usage per task (only per step); otherwise we would get double the energy.
* Changes in Slurm 2.5.6
========================
 -- Gres fix for requeued jobs.
 -- Gres accounting - Fix regression in 2.5.5 for keeping track of gres requested and allocated.
* Changes in Slurm 2.5.5
========================
 -- Fix for sacctmgr add qos to handle the 'flags' option.
 -- Export SLURM_ environment variables from sbatch, even if the "--export" option does not explicitly list them.
 -- If node is in more than one partition, correct counting of allocated CPUs.
 -- If step requests more CPUs than possible in specified node count of job allocation then return ESLURM_TOO_MANY_REQUESTED_CPUS rather than ESLURM_NODES_BUSY and retrying.
 -- CRAY - Fix SLURM_TASKS_PER_NODE to be set correctly.
 -- Accounting - more checks for strings with a possible `'` in them.
 -- sreport - Fix by adding planned down time to utilization reports.
 -- Do not report an error when sstat identifies job steps that terminated during its execution; log a debug-type message instead.
 -- Select/cons_res - Permit node removed from job by going down to be returned to service and re-used by another job.
 -- Select/cons_res - Tighter packing of job allocations on sockets.
 -- SlurmDBD - fix to allow user root along with the slurm user to register a cluster.
 -- Select/cons_res - Fix for support of consecutive node option.
 -- Select/cray - Modify build to enable direct use of libslurm library.
 -- Bug fixes related to job step allocation logic.
 -- Cray - Disable enforcement of MaxTasksPerNode, which is not applicable with launch/aprun.
 -- Accounting - When rolling up data from past usage, ignore "idle" time from a reservation when it has the "Ignore_Jobs" flag set. Since jobs could run outside of the reservation on its nodes, without this you could have double time.
 -- Accounting - Minor fix to avoid erroneous reuse of a variable.
 -- Reject job at submit time if the node count is invalid. Previously such a job submitted to a DOWN partition would be queued.
 -- Purge vestigial job scripts when the slurmd cold starts or slurmstepd terminates abnormally.
 -- Add support for FreeBSD.
 -- Add sanity check for NULL cluster names trying to register.
 -- BGQ - Push action 'D' info to scontrol for admins.
 -- Reset a job's reason from PartitionDown when the partition is set up.
 -- BGQ - Handle issue where blocks would have a pending job on them and, while being freed, cnodes would go into software error and kill the job.
 -- BGQ - Fix issue where if for some reason we are freeing a block with a pending job on it we don't kill the job.
 -- BGQ - Fix race condition where a job could have been removed from a block without it still existing there. This is extremely rare.
 -- BGQ - Fix for when a step completes in Slurm before the runjob_mux notifies the slurmctld there were software errors on some nodes.
 -- BGQ - Fix issue on state recovery if block states are not around and when reading in state from DB2 we find a block that can't be created. You can now do a clean start to get rid of the bad block.
 -- Modify slurmdbd to retransmit to slurmctld daemon if it is not responding.
 -- BLUEGENE - Fix issue where when doing backfill preemptable jobs were never looked at to determine eligibility of backfillable job.
 -- Cray/BlueGene - Disable srun --pty option unless LaunchType=launch/slurm.
 -- CRAY - Fix sanity check for systems with more than 32 cores per node.
 -- CRAY - Remove other objects from MySQL query that are available from the XML.
 -- BLUEGENE - Set the geometry of a job when a block is picked and the job isn't a sub-block job.
 -- Cray - avoid check of macro versions of CLE for version 5.0.
 -- CRAY - Fix memory issue with reading in the cray.conf file.
 -- CRAY - If hostlist is given with srun make sure the node count is the same as the hosts given.
 -- CRAY - If task count specified, but no tasks-per-node, then set the tasks per node in the BASIL reservation request.
 -- CRAY - fix issue with --mem option not giving correct amount of memory per cpu.
 -- CRAY - Fix if srun --mem is given outside an allocation to set the APRUN_DEFAULT_MEMORY env var for aprun. This scenario will not display the option when used with --launch-cmd.
 -- Change sview to use GMutex instead of GStaticMutex.
 -- CRAY - set APRUN_DEFAULT_MEMORY instead of CRAY_AUTO_APRUN_OPTIONS.
 -- sview - fix issue where if a partition was completely in one state the cpu count would not be reflected correctly.
 -- BGQ - fix for handling half rack system in STATIC or OVERLAP mode to implicitly create full system block.
 -- CRAY - Dynamically create BASIL XML buffer to resize as needed.
 -- Fix checking if QOS limit MaxCPUMinsPJ is set along with DenyOnLimit to deny the job instead of holding it.
 -- Make sure on systems that use a different launcher than launch/slurm not to attempt to signal tasks on the frontend node.
 -- Cray - when a step is requested count other steps running on nodes in the allocation as taking up the entire node instead of just the part of the node allocated. And always enforce exclusive on a step request.
 -- Cray - display correct nodelist, node/cpu count on steps.
* Changes in Slurm 2.5.4
========================
 -- Fix bug in PrologSlurmctld use that would block job steps until node responds.
 -- CRAY - If a partition has MinNodes=0 and a batch job doesn't request nodes, set the allocation to 1 node instead of 0, which previously prevented the allocation from happening.
 -- Better debug when the database is down and using the --cluster option in the user commands.
 -- When asking for job states with sacct, default to 'now' instead of midnight of the current day.
 -- Fix for handling a test-only job or immediate job that fails while being built.
 -- Comment out all of the logic in the job_submit/defaults plugin. The logic is only an example and not meant for actual use.
 -- Eliminate configuration file 4096 character line limitation.
 -- More robust logic for tree message forward.
 -- BGQ - When cnodes fail in a timeout fashion correctly look up the parent midplane.
 -- Correct sinfo "%c" (node's CPU count) output value for Bluegene systems.
 -- Backfill - Responsiveness improvements for systems with large numbers of jobs (>5000) and using the SchedulerParameters option bf_max_job_user.
 -- slurmstepd: ensure that IO redirection openings from/to files correctly handle interruption.
 -- BGQ - Able to handle when midplanes go into Hardware::SoftwareFailure.
 -- GRES - Correct tracking of specific resources used after slurmctld restart. Counts would previously go negative as jobs terminate and decrement from a base value of zero.
 -- Fix for priority/multifactor2 plugin to not assert when configured with --enable-debug.
 -- Select/cons_res - If the job request specified --ntasks-per-socket and the allocation is using cores, then pack the tasks onto the sockets up to the specified value.
 -- BGQ - If a cnode goes into an 'error' state and the block containing the cnode does not have a job running on it, do not resume the block.
 -- BGQ - Better handle blocks that don't free themselves in a reasonable time.
 -- BGQ - Fix for signaling steps when allocation ends before step.
 -- Fix for backfill scheduling logic with job preemption; starts more jobs.
 -- xcgroup - remove bugs with EINTR management in write calls.
 -- jobacct_gather - fix total values to not always == the max values.
 -- Fix for handling node registration messages from older versions without energy data.
 -- BGQ - Allow user to request full dimensional mesh.
 -- sdiag command - Correction to jobs started value reported.
 -- Prevent slurmctld assert when an invalid change to a reservation with running jobs is made.
 -- BGQ - If signal is NODE_FAIL allow forward even if job is completing, and timeout in the runjob_mux trying to send in this situation.
 -- BGQ - More robust checking for correct node, task, and ntasks-per-node options in srun, and push that logic to salloc and sbatch.
 -- GRES topology bug in core selection logic fixed.
 -- Fix init.d script to not return 1 on success when querying status.
* Changes in SLURM 2.5.3
========================
 -- Gres/gpu plugin - If no GPUs requested, set CUDA_VISIBLE_DEVICES=NoDevFiles. This bug was introduced in 2.5.2 for the case where a GPU count was configured, but without device files.
 -- task/affinity plugin - Fix bug in CPU masks for some processors.
 -- Modify sacct command to get format from SACCT_FORMAT environment variable.
 -- BGQ - Changed order of library inclusions and fixed incorrect declaration to compile correctly on newer compilers.
 -- Fix for not building sview if glib exists on a system but not the gtk libs.
 -- BGQ - Fix for handling a job cleanup on a small block if the job has long since left the system.
 -- Fix race condition in job dependency logic which can result in invalid memory reference.
* Changes in SLURM 2.5.2
========================
 -- Fix advanced reservation recovery logic when upgrading from version 2.4.
 -- BLUEGENE - fix for QOS/Association node limits.
 -- Add missing "safe" flag to print of AccountStorageEnforce option.
 -- Fix logic to optimize GRES topology with respect to allocated CPUs.
 -- Add job_submit/all_partitions plugin to set a job's default partition to ALL available partitions in the cluster.
 -- Modify switch/nrt logic to permit build without libnrt.so library.
 -- Handle srun task launch failure without duplicate error messages or abort.
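The 2.5.3 entry above makes sacct honor a SACCT_FORMAT environment variable for its default field list. A hedged sketch; the field names are common sacct fields chosen for illustration, and the sacct invocation itself is commented out since it needs a running cluster:

```shell
# Hedged sketch: give sacct a default field list via the environment.
export SACCT_FORMAT="JobID,JobName%20,State,Elapsed"
echo "$SACCT_FORMAT"
# sacct   # would now print these fields without an explicit -o/--format flag
```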
 -- Fix bug in QoS limits enforcement when slurmctld restarts and user not yet added to the QOS list.
 -- Fix issue where sjstat and sjobexitmod were installed in 2 different RPMs.
 -- Fix for job request of multiple partitions in which some partitions lack nodes with required features.
 -- Permit a job to use a QOS its owner does not have access to if an administrator manually set the job's QOS (previously the job would be rejected).
 -- Make more variables available to job_submit/lua plugin: slurm.MEM_PER_CPU, slurm.NO_VAL, etc.
 -- Fix topology/tree logic when nodes defined in slurm.conf get re-ordered.
 -- In select/cons_res, correct logic to allocate whole sockets to jobs. Work by Magnus Jonsson, Umea University.
 -- In select/cons_res, correct logic when job removed from only some nodes.
 -- Avoid apparent kernel bug in 2.6.32 which apparently is solved in at least 3.5.0. This avoids a stack overflow when running jobs on more than 120k nodes.
 -- BLUEGENE - If we made a block that isn't runnable because of an overlapping block, destroy it correctly.
 -- Switch/nrt - Dynamically load libnrt.so from within the plugin as needed. This eliminates the need for libnrt.so on the head node.
 -- BLUEGENE - Fix in reservation logic that could cause abort.
* Changes in SLURM 2.5.1
========================
 -- Correction to hostlist sorting for hostnames that contain two numeric components and the first numeric component has various sizes (e.g. "rack9blade1" should come before "rack10blade1").
 -- BGQ - Only poll on initialized blocks instead of calling getBlocks on each block independently.
 -- Fix of task/affinity plugin logic for Power7 processors having hyperthreading disabled (cpu mask has gaps).
 -- Fix of job priority ordering with sched/builtin and priority/multifactor. Patch from Chris Read.
 -- CRAY - Fix for setting up the aprun for a large job (+2000 nodes).
 -- Fix for race condition related to compute node boot resulting in node being set down with reason of "Node unexpectedly rebooted".
 -- RAPL - Fix for handling errors when opening msr files.
 -- BGQ - Fix for salloc/sbatch to do the correct allocation when asking for -N1 -n#.
 -- BGQ - in emulation make it so we can pretend to run large jobs (>64k nodes).
 -- BLUEGENE - Correct method to update conn_type of a job.
 -- BLUEGENE - Fix issue with preemption when needing to preempt multiple jobs to make one job run.
 -- Fixed issue where if an srun dies abnormally inside of an allocation it would have also killed the allocation.
 -- FRONTEND - fixed issue where if a system's nodes weren't defined in the slurm.conf with NodeAddr's, signals going to a step could be handled incorrectly.
 -- If sched/backfill starts a job with a QOS having NO_RESERVE and no job time limit, start it with the partition time limit (or one year if the partition has no time limit) rather than NO_VAL (140 year time limit).
 -- Alter hostlist logic to allocate large grid dynamically instead of on stack.
 -- Change RPC version checks to support version 2.5 slurmctld with version 2.4 slurmd daemons.
 -- Correct core reservation logic for use with select/serial plugin.
 -- Exit scontrol command on stdin EOF.
 -- Disable job --exclusive option with select/serial plugin.
* Changes in SLURM 2.5.0
========================
 -- Add DenyOnLimit flag for QOS to deny jobs at submission time if they request resources that reach a 'Max' limit.
 -- Permit SlurmUser or operator to change QOS of non-pending jobs (e.g. running jobs).
 -- BGQ - move initial poll to beginning of realtime interaction, which will also cause it to run if the realtime server ever goes away.
* Changes in SLURM 2.5.0-rc2
============================
 -- Modify sbcast logic to survive slurmd daemon restart while a file transmission is in progress.
 -- Add retry logic to munge encode/decode calls. This is needed if the munge daemon is under very heavy load (e.g. with 1000 slurmd daemons per compute node).
 -- Add launch and acct_gather_energy plugins to RPMs.
 -- Restore support for srun "--mpi=list" option.
 -- CRAY - Introduce step accounting for a Cray.
 -- Modify srun to abandon I/O 60 seconds after the last task ends. Otherwise an aborted slurmstepd can cause the srun process to hang indefinitely.
 -- ENERGY - RAPL - alter code to close open files (and only open them once where needed).
 -- If the PrologSlurmctld fails, then requeue the job an indefinite number of times instead of only one time.
* Changes in SLURM 2.5.0-rc1
============================
 -- Added Prolog and Epilog Guide (web page). Based upon work by Jason Sollom, Cray Inc. and used by permission.
 -- Restore gang scheduling functionality. Preemptor was not being scheduled. Fix for bugzilla #3.
 -- Add "cpu_load" to node information. Populate CPULOAD in node information reported to Moab cluster manager.
 -- Preempt jobs only when insufficient idle resources exist to start job, regardless of the node weight.
 -- Added priority/multifactor2 plugin based upon ticket distribution system. Work by Janne Blomqvist, Aalto University.
 -- Add SLURM_NODELIST to environment variables available to Prolog and Epilog.
 -- Permit reservations to allow or deny access by account and/or user.
 -- Add ReconfigFlags value of KeepPartState. See "man slurm.conf" for details.
 -- Modify the task/cgroup plugin adding a task_pre_launch_priv function and move slurmstepd outside of the step's cgroup. Work by Matthieu Hautreux.
 -- Intel MIC processor support added using gres/mic plugin. BIG thanks to Olli-Pekka Lehto, CSC-IT Center for Science Ltd.
 -- Accounting - Change empty jobacctinfo structs to not actually be used; instead of putting 0's into the database we put NO_VALs and have sacct figure out jobacct_gather wasn't used.
 -- Cray - Prevent calling basil_confirm more than once per job using a flag.
 -- Fix bug with topology/tree and job with min-max node count. Now try to get max node count rather than minimizing leaf switches used.
 -- Add AccountingStorageEnforce=safe option to provide a method to avoid launching jobs that wouldn't be able to run to completion because of a GrpCPUMins limit.
 -- Add support for RFC 5424 timestamps in logfiles. Disable with configuration option of "--disable-rfc5424time". By Janne Blomqvist, Aalto University.
 -- CRAY - Replace srun.pl with launch/aprun plugin to use srun to wrap the aprun process instead of a perl script.
 -- srun - Rename --runjob-opts to --launcher-opts to be used on systems other than BGQ.
 -- Added new DebugFlags - Energy for AcctGatherEnergy plugins.
 -- Start deprecation of sacct --dump --fdump.
 -- BGQ - added --verbose=OFF when srun --quiet is used.
 -- Added acct_gather_energy/rapl plugin to record power consumption by job. Work by Yiannis Georgiou, Martin Perry, et al., Bull.
* Changes in SLURM 2.5.0.pre3
=============================
 -- Add Google search to all web pages.
 -- Add sinfo -T option to print reservation information. Work by Bill Brophy, Bull.
 -- Force slurmd exit after 2 minute wait, even if threads are hung.
 -- Change node_req field in struct job_resources from 8 to 32 bits so we can run more than 256 jobs per node.
 -- sched/backfill: Improve accuracy of expected job start with respect to reservations.
 -- sinfo partition field size will be set to the length of the longest partition name by default.
 -- Make parse_time return a valid 0 if given the epoch time, and instead set errno == ESLURM_INVALID_TIME_VALUE on error.
 -- Correct srun --no-alloc logic when node count exceeds node list or task count is not a multiple of the node count. Work by Hongjia Cao, NUDT.
 -- Completed integration with IBM Parallel Environment including POE and IBM's NRT switch library.
* Changes in SLURM 2.5.0.pre2
=============================
 -- When running with multiple slurmd daemons per node, enable specifying a range of ports on a single line of the node configuration in slurm.conf.
 -- Add reservation flag of Part_Nodes to allocate all nodes in a partition to a reservation and automatically change the reservation when nodes are added to or removed from the partition. Based upon work by Bill Brophy, Bull.
 -- Add support for advanced reservation for specific cores rather than whole nodes. Current limitations: homogeneous cluster, nodes idle when reservation created, and no more than one reservation per node. Code is still under development. Work by Alejandro Lucero Palau, et al., BSC.
 -- Add DebugFlag of Switch to log switch plugin details.
 -- Correct job node_cnt value in job completion plugin when job fails due to down node. Previously was too low by one.
 -- Add new srun option --cpu-freq to enable user control over the job's CPU frequency and thus its power consumption. NOTE: cpu frequency is not currently preserved for jobs being suspended and later resumed. Work by Don Albert, Bull.
 -- Add node configuration information about "boards" and optimize task placement on minimum number of boards. Work by Rod Schultz, Bull.
* Changes in SLURM 2.5.0.pre1
=============================
 -- Add new output to "scontrol show configuration" of LicensesUsed. Output is "name:used/total".
 -- Changed jobacct_gather plugin infrastructure to be cleaner and easier to maintain.
 -- Change license option count separator from "*" to ":" for consistency with the gres option (e.g. "--licenses=foo:2 --gres=gpu:2"). The "*" will still be accepted, but is no longer documented.
 -- Permit more than 100 jobs to be scheduled per node (new limit is 250 jobs).
 -- Restructure of srun code to allow outside programs to utilize existing logic.
* Changes in SLURM 2.4.6
========================
 -- Correct WillRun authentication logic when issued for non-job owner.
 -- BGQ - fix memory leak.
 -- BGQ - Fix to check block for action 'D' if it also has nodes in error.
* Changes in SLURM 2.4.5
========================
 -- Cray - On job kill request, send SIGCONT, SIGTERM, wait KillWait and send SIGKILL. Previously just sent SIGKILL to tasks.
 -- BGQ - Fix issue when running srun outside of an allocation and only specifying the number of tasks and not the number of nodes.
 -- BGQ - validate correct ntasks_per_node.
 -- BGQ - when srun -Q is given make runjob be quiet.
 -- Modify use of OOM (out of memory protection) for Linux 2.6.36 kernel or later. NOTE: If you were setting the environment variable SLURMSTEPD_OOM_ADJ=-17, it should be set to -1000 for Linux 2.6.36 kernel or later.
 -- BGQ - Fix job step timeout to actually happen when done from within an allocation.
 -- Reset node MAINT state flag when a reservation's nodes or flags change.
 -- Accounting - Fix issue where QOS usage was being zeroed out on a slurmctld restart.
 -- BGQ - Add 64 tasks per node as a valid option for srun when used with overcommit.
 -- BLUEGENE - With Dynamic layout mode - Fix issue where if a larger block was already in error and isn't deallocating and underlying hardware goes bad, one could get overlapping blocks in error, making the code assert when a new job request comes in.
 -- BGQ - handle pending actions on a block better when trying to deallocate it.
 -- Accounting - Fixed issue where if nodenames have changed on a system and you query against that with -N and -E you will get all jobs during that time instead of only the ones running on -N.
 -- BGP - Fix for HTC mode.
 -- Accounting - If a job start message fails to the SlurmDBD reset the db_inx so it gets sent again. This isn't a major problem since the start will happen when the job ends, but this does make things cleaner.
 -- If an salloc is waiting for an allocation to happen and is canceled by the user, mark the state canceled instead of completed.
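The 2.4.5 OOM note above changes the magic value of SLURMSTEPD_OOM_ADJ with the kernel version. A hedged sketch of setting it for a modern kernel; the variable name and values come from the entry, the comments restate its guidance:

```shell
# Hedged sketch: protect slurmstepd from the kernel OOM killer.
# Pre-2.6.36 kernels used the oom_adj scale:
#   export SLURMSTEPD_OOM_ADJ=-17
# Linux 2.6.36 and later use the oom_score_adj scale:
export SLURMSTEPD_OOM_ADJ=-1000
echo "$SLURMSTEPD_OOM_ADJ"
```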
 -- Fix issue in accounting if a user puts a '\' in their job name.
 -- Accounting - Fix so that asking for users or accounts that were deleted with associations gets the deleted associations as well.
 -- BGQ - Handle shared blocks that need to be removed and have jobs running on them. This should only happen in extreme conditions.
 -- Fix inconsistency for hostlists that have more than 1 range.
 -- BGQ - Add mutex around recovery for the Real Time server to avoid hitting DB2 so hard.
 -- BGQ - If an allocation exists on a block that has a 'D' action on it, fail the job on future step creation attempts.

* Changes in SLURM 2.4.4
========================
 -- BGQ - minor fix to make build work in emulated mode.
 -- BGQ - Fix if large block goes into error and the next highest priority jobs are planning on using the block. Previously it would fail those jobs erroneously.
 -- BGQ - Fix issue when a cnode goes into an error (not SoftwareError) state with a job running or trying to run on it.
 -- Execute slurm_spank_job_epilog when there is no system Epilog configured.
 -- Fix for srun --test-only to work correctly with time limits.
 -- BGQ - If a job goes away while still trying to free it up in the database, and the job is running on a small block, make sure we free up the correct node count.
 -- BGQ - Logic added to make sure a job has finished on a block before it is purged from the system if its front-end node goes down.
 -- Modify strigger so that a filter option of "--user=0" is supported.
 -- Correct --mem-per-cpu logic for core or socket allocations with multiple threads per core.
 -- Fix for older (< glibc 2.4) systems to use euidaccess() instead of eaccess().
 -- BLUEGENE - Do not alter a pending job's node count when changing its partition.
 -- BGQ - Add functionality to track the actions on a block. This is needed for when a free request is added to a block but there are jobs finishing up, so we don't start new jobs on the block since they would fail on start.
 -- BGQ - Fixed InactiveLimit to work correctly to avoid scenarios where a user's pending allocation was started with srun and then for some reason the slurmctld was brought down, and while it was down the srun was removed.
 -- Fixed InactiveLimit math to work correctly.
 -- BGQ - Add logic so blocks can't use a midplane with a nodeboard in error for passthrough.
 -- BGQ - Make it so if a nodeboard goes in error, any block using that midplane for passthrough gets removed on a dynamic system.
 -- BGQ - Fix for printing realtime server debug correctly.
 -- BGQ - Cleaner handling of cnode failures when reported through the runjob interface instead of through the normal method.
 -- smap - spread node information across multiple lines for larger systems.
 -- Cray - Defer salloc until after PrologSlurmctld completes.
 -- Correction to slurmdbd communications failure handling logic; incorrect error codes were returned in some cases.

* Changes in SLURM 2.4.3
========================
 -- Accounting - Fix so complete 32 bit numbers can be put in for a priority.
 -- cgroups - fix so that if the initial directory is non-existent SLURM creates it correctly. Before, the errno wasn't being checked correctly.
 -- BGQ - fixed srun when only requesting a task count and not a node count to operate the same way salloc or sbatch did and assign a task per cpu by default instead of a task per node.
 -- Fix salloc --gid to work correctly. Reported by Brian Gilmer.
 -- BGQ - fix smap to set the correct default MloaderImage.
 -- BLUEGENE - updated documentation.
 -- Close the batch job's environment file when it contains no data to avoid leaking file descriptors.
 -- Fix sbcast's credential to last until the end of a job instead of the previous 20 minute time limit. The previous behavior would fail for large files 20 minutes into the transfer.
 -- Return ESLURM_NODES_BUSY rather than ESLURM_NODE_NOT_AVAIL error on job submit when required nodes are up, but completing a job or in exclusive job allocation.
 -- Add HWLOC_FLAGS so linking to libslurm works correctly.
 -- BGQ - If using backfill and a shared block is running at least one job, and a job comes through backfill and can fit on the block without ending jobs, don't set an end_time for the running jobs since they don't need to end to start the job.
 -- Initialize bind_verbose when using task/cgroup.
 -- BGQ - Fix for handling backfill much better when sharing blocks.
 -- BGQ - Fix for making small blocks on first pass if not sharing blocks.
 -- BLUEGENE - Remove force of default conn_type instead of leaving NAV when none are requested. The block allocator sets it up temporarily, so this isn't needed.
 -- BLUEGENE - Fix deadlock issue when dealing with bad hardware if using static blocks.
 -- Fix to mysql plugin during rollup to only query the suspended table when jobs reported some suspended time.
 -- Fix compile with glibc 2.16 (Kacper Kowalik).
 -- BGQ - fix for deadlock where a block has an error on it and all jobs running on it are preemptable by a scheduling job.
 -- proctrack/cgroup: Exclude internal threads from "scontrol list pids". Patch from Matthieu Hautreux, CEA.
 -- Memory leak fixed for select/linear when preempting jobs.
 -- Fix so that updating the begin time of a job updates the eligible time in accounting as well.
 -- BGQ - make it so you can signal steps when signaling the job allocation.
 -- BGQ - Remove extra overhead if a large block has many cnode failures.
 -- Priority/Multifactor - Fix issue with age factor when a job is estimated to start in the future but is able to run now.
 -- CRAY - update to work with ALPS 5.1.
 -- BGQ - Handle issue of speed and mutexes when polling instead of using the realtime server.
 -- BGQ - Fix minor sorting issue with sview when sorting by midplanes.
 -- Accounting - Fix for handling per user max node/cpus limits on a QOS correctly for the current job.
 -- Update documentation for -/+= when updating a reservation's users/accounts/flags.
 -- Update pam module to work if using aliases on nodes instead of actual host names.
 -- Correction to task layout logic in select/cons_res for jobs with minimum and maximum node counts.
 -- BGQ - Put a final poll after the realtime server comes back into service to avoid having the realtime server go down over and over again while waiting for the poll to finish.
 -- task/cgroup/memory - ensure that ConstrainSwapSpace=no is correctly handled. Work by Matthieu Hautreux, CEA.
 -- CRAY - Fix for sacct -N option to work correctly.
 -- CRAY - Update documentation to describe installation from rpm instead of the previous piecemeal method.
 -- Fix sacct to work with QOS' that have previously been deleted.
 -- Added all available limits to the output of sacctmgr list qos.

* Changes in SLURM 2.4.2
========================
 -- BLUEGENE - Correct potential deadlock issue when hardware goes bad and there are jobs running on that hardware.
 -- If a job is submitted to more than one partition, its partition pointer can be set to an invalid value. This can result in the count of CPUs allocated on a node being bad, resulting in over- or under-allocation of its CPUs. Patch by Carles Fenoy, BSC.
 -- Fix bug in task layout with select/cons_res plugin and --ntasks-per-node option. Patch by Martin Perry, Bull.
 -- BLUEGENE - remove race condition where if a block is removed while waiting for a job to finish on it, the number of unused cpus wasn't updated correctly.
 -- BGQ - make sure we have a valid block when creating or finishing a step allocation.
 -- BLUEGENE - If a large block (> 1 midplane) is in error and underlying hardware is marked bad, remove the larger block and create a block over just the bad hardware, making the other hardware available to run on.
 -- BLUEGENE - Handle job completion correctly if an admin removes a block where other blocks on an overlapping midplane are running jobs.
 -- BLUEGENE - correctly remove running jobs when freeing a block.
 -- BGQ - correct logic to place multiple (< 1 midplane) steps inside a multi-midplane block allocation.
 -- BGQ - Make it possible for a multi-midplane allocation to run on more than 1 midplane but not the entire allocation.
 -- BGL - Fix for syncing users on block from Tim Wickberg.
 -- Fix initialization of protocol_version for some messages to make sure it is always set when sending or receiving a message.
 -- Reset backfilled job counter only when explicitly cleared using scontrol. Patch from Alejandro Lucero Palau, BSC.
 -- BLUEGENE - Fix for handling blocks when a larger block will not free, and while it is attempting to free, underlying hardware is marked in error, making small blocks overlapping with the freeing block. This only applies to dynamic layout mode.
 -- Cray and BlueGene - Do not treat lack of usable front-end nodes when the slurmctld daemon starts as a fatal error. Also preserve the correct front-end node for jobs when there is more than one front-end node and the slurmctld daemon restarts.
 -- Correct parsing of srun/sbatch input/output/error file names so that only the name "none" is mapped to /dev/null and not any file name starting with "none" (e.g. "none.o").
 -- BGQ - added version string to the load of the runjob_mux plugin to verify the current plugin has been loaded when using runjob_mux_refresh_config.
 -- CGROUPS - Use system mount/umount function calls instead of doing fork/exec of mount/umount, from Janne Blomqvist.
 -- BLUEGENE - correct start time setup when no jobs are blocking the way, from Mark Nelson.
 -- Fixed sacct --state=S query to return information about suspended jobs, current or in the past.
 -- FRONTEND - Made error warning more apparent if a frontend node isn't configured correctly.
 -- BGQ - update documentation about runjob_mux_refresh_config, which works correctly as of IBM driver V1R1M1 efix 008.
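The I/O file name fix in 2.4.2 above is easy to mis-read, so here is a minimal sketch of the rule (a hypothetical helper, not SLURM's actual parsing code):

```python
def map_io_filename(name):
    # Only the exact name "none" is redirected to /dev/null; names that
    # merely start with "none" (e.g. "none.o") stay ordinary files.
    return "/dev/null" if name == "none" else name
```

So `srun --output=none` discards stdout, while `srun --output=none.o` writes to a file literally named "none.o".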
* Changes in SLURM 2.4.1
========================
 -- Fix bug for job state change from 2.3 -> 2.4; job state can now be preserved correctly when transitioning. This also applies for 2.4.0 -> 2.4.1; no state will be lost. (Thanks to Carles Fenoy)

* Changes in SLURM 2.4.0
========================
 -- Cray - Improve support for zero compute node resource allocations. A partition can now be configured with no nodes.
 -- BGQ - make it so srun -i works correctly.
 -- Fix parse_uint32/16 to complain if a non-digit is given.
 -- Add SUBMITHOST to job state passed to Moab via sched/wiki2. Patch by Jon Bringhurst (LANL).
 -- BGQ - Fix issue when running with AllowSubBlockAllocations=Yes without compiling with --enable-debug.
 -- Modify scontrol to require "-dd" option to report batch job's script. Patch from Don Albert, Bull.
 -- Modify SchedulerParameters option to match documentation: "bf_res=" changed to "bf_resolution=". Patch from Rod Schultz, Bull.
 -- Fix bug that clears job pending reason field. Patch from Don Lipari, LLNL.
 -- In etc/init.d/slurm move check for scontrol after sourcing /etc/sysconfig/slurm. Patch from Andy Wettstein, University of Chicago.
 -- Fix in scheduling logic that can delay jobs with min/max node counts.
 -- BGQ - fix issue where if a step uses the entire allocation and then the next step in the allocation only uses part of the allocation, it gets the correct cnodes.
 -- BGQ - Fix checking for IO on a block with new IBM driver V1R1M1; the previous function didn't always work correctly.
 -- BGQ - Fix issue when a nodeboard goes down and you want to combine blocks to make a larger small block and are running with sub-blocks.
 -- BLUEGENE - Better logic for making small blocks around a bad nodeboard/card.
 -- BGQ - When using an old IBM driver, cnodes that go into error because of a job kill timeout aren't always reported to the system. This is now handled by the runjob_mux plugin.
 -- BGQ - Added information on how to set up the runjob_mux to run as SlurmUser.
 -- Improve memory consumption on step layouts with high task count.
 -- BGQ - quieter debug when the real time server comes back but there are still messages we find when we poll that haven't been given back to the real time server yet.
 -- BGQ - fix for if a request comes in smaller than the smallest block and we must use a small block instead of a shared midplane block.
 -- Fix issues on large jobs (>64k tasks) to have the correct counter type when packing the step layout structure.
 -- BGQ - fix issue so that if a user was asking for tasks and ntasks-per-node but not node count, the node count is correctly figured out.
 -- Move logic to always use the 1st alphanumeric node as the batch host for batch jobs.
 -- BLUEGENE - fix race condition where a nodeboard/card goes down at the same time a block is destroyed and that block just happens to be the smallest overlapping block over the bad hardware.
 -- Fix bug when querying accounting looking for a job node size.
 -- BLUEGENE - fix possible race condition if cleaning up a block and the removal of the job on the block failed.
 -- BLUEGENE - fix issue if a cable was in an error state: make it so we can check if a block is still makable if the cable wasn't in error.
 -- Put node names in alphabetic order in the node table.
 -- If a preempted job should have a grace time and preempt mode is not cancel, but the job is going to be canceled because it is interactive or for another reason, it now receives the grace time.
 -- BGQ - Modified documents to explain new plugin_flags needed in bg.properties in order for the runjob_mux to run correctly.
 -- BGQ - change linking from libslurm.o to libslurmhelper.la to avoid warning.

* Changes in SLURM 2.4.0.rc1
=============================
 -- Improve task binding logic by making fuller use of the HWLOC library, especially with respect to Opteron 6000 series processors. Work contributed by Komoto Masahiro.
 -- Add new configuration parameter PriorityFlags, based upon work by Carles Fenoy (Barcelona Supercomputer Center).
 -- Modify the step completion RPC between slurmd and slurmstepd in order to eliminate a possible deadlock. Based on work by Matthieu Hautreux, CEA.
 -- Change the owner of slurmctld and slurmdbd log files to the appropriate user. Without this change the files will be created by and owned by the user starting the daemons (likely user root).
 -- Reorganize the slurmstepd logic in order to better support NFS and Kerberos credentials via the AUKS plugin. Work by Matthieu Hautreux, CEA.
 -- Fix bug in allocating GRES that are associated with specific CPUs. In some cases the code allocated the first available GRES to the job instead of allocating GRES accessible to the specific CPUs allocated to the job.
 -- spank: Add callbacks in slurmd: slurm_spank_slurmd_{init,exit} and job epilog/prolog: slurm_spank_job_{prolog,epilog}.
 -- spank: Add spank_option_getopt() function to the api.
 -- Change resolution of switch wait time from minutes to seconds.
 -- Added CrpCPUMins to the output of sshare -l for those using hard limit accounting. Work contributed by Mark Nelson.
 -- Added mpi/pmi2 plugin for complete support of pmi2, including acquiring additional resources for newly launched tasks. Contributed by Hongjia Cao, NUDT.
 -- BGQ - fixed issue where if a user asked for a specific node count and more tasks than possible without overcommit, the request would be allowed on more nodes than requested.
 -- Add support for new SchedulerParameters of bf_max_job_user, maximum number of jobs to attempt backfilling per user. Work by Bjørn-Helge Mevik, University of Oslo.
 -- BLUEGENE - fixed issue where the MaxNodes limit on a partition only limited larger than midplane jobs.
 -- Added cpu_run_min to the output of sshare --long. Work contributed by Mark Nelson.
 -- BGQ - allow regular users to resolve Rack-Midplane to AXYZ coords.
 -- Add sinfo output format option of "%R" for partition name without "*" appended for the default partition.
 -- Cray - Add support for zero compute node resource allocation to run a batch script on a front-end node with no ALPS reservation. Useful for pre- or post-processing.
 -- Support for cyclic distribution of cpus in task/cgroup plugin from Martin Perry, Bull.
 -- GrpMEM limit for QOSes and associations added. Patch from Bjørn-Helge Mevik, University of Oslo.
 -- Various performance improvements for up to 500% higher throughput depending upon configuration. Work supported by the Oak Ridge National Laboratory Extreme Scale Systems Center.
 -- Added jobacct_gather/cgroup plugin. It is not advised to use this in production as it isn't currently complete and doesn't provide an equivalent substitution for jobacct_gather/linux yet. Work by Martin Perry, Bull.

* Changes in SLURM 2.4.0.pre4
=============================
 -- Add logic to cache GPU file information (bitmap index mapping to device file number) in the slurmd daemon and transfer that information to the slurmstepd whenever a job step is initiated. This is needed to set the appropriate CUDA_VISIBLE_DEVICES environment variable value when the devices are not in strict numeric order (e.g. some GPUs are skipped). Based upon work by Nicolas Bigaouette.
 -- BGQ - Remove ability to make a sub-block with a geometry with one or more of its dimensions of length 3. There is a limitation in the IBM I/O subsystem that is problematic with multiple sub-blocks with a dimension of length 3, so we disallow them from being created. This means that if you ask the system for an allocation of 12 c-nodes you will be given 16. If this is ever fixed in BGQ this patch can be removed.
 -- BLUEGENE - Better handling of blocks that go into error state or deallocate while jobs are running on them.
 -- BGQ - fix for handling a mix of steps running at the same time, some of which are full allocation jobs and others that are smaller.
 -- BGQ - fix for core dump after running multiple sub-block jobs on static blocks.
 -- BGQ - fixed sync issue where if a job finishes in SLURM but not in mmcs for a long time after the SLURM job has been flushed from the system, we don't have to worry about rebooting the block to sync the system.
 -- BGQ - In scontrol/sview node counts are now displayed as CnodeCount/CnodeErrCount to point out that there are cnodes in an error state on the block. Draining the block and having it reboot when all jobs are gone will clear up the cnodes in Software Failure.
 -- Change default SchedulerParameters max_switch_wait field value from 60 to 300 seconds.
 -- BGQ - catch errors from the kill option of the runjob client.
 -- BLUEGENE - make it so the epilog runs until slurmctld tells it the job is gone. Previously it had a time limit, which has proven to not be the right thing.
 -- FRONTEND - fix issue where if a compute node was in a down state and an admin updates the node to idle/resume, the compute nodes would go instantly to idle instead of idle*, which means no response.
 -- Fix regression in 2.4.0.pre3 where the number of submitted jobs limit wasn't being honored for QOS.
 -- Cray - Enable logging of BASIL communications with environment variables. Set XML_LOG to enable logging. Set XML_LOG_LOC to specify the path to the log file, or "SLURM" to write to SlurmctldLogFile, or unset for "slurm_basil_xml.log". Patch from Steve Trofinoff, CSCS.
 -- FRONTEND - if a front end unexpectedly reboots, kill all jobs but don't mark the front end node down.
 -- FRONTEND - don't down a front end node if you have an epilog error.
 -- BLUEGENE - if a job has an epilog error, don't down the midplane it was running on.
 -- BGQ - added new DebugFlag (NoRealTime) for only printing debug from state change while the realtime server is running.
 -- Fix multi-cluster mode with sview starting on a non-bluegene cluster going to a bluegene cluster.
 -- BLUEGENE - ability to show Rack Midplane name of midplanes in sview and scontrol.
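The CUDA_VISIBLE_DEVICES caching entry in 2.4.0.pre4 above amounts to a mapping from allocated GRES bitmap indices to device file numbers; a minimal illustrative sketch (hypothetical helper and data layout, not SLURM's internals):

```python
def cuda_visible_devices(allocated_bits, dev_file_by_bit):
    # dev_file_by_bit[i] is the device file number behind bitmap index i
    # (e.g. /dev/nvidia2 -> 2); the numbers need not be consecutive when
    # some GPUs are skipped, which is the case the slurmd cache handles.
    return ",".join(str(dev_file_by_bit[i]) for i in sorted(allocated_bits))
```

With device files [0, 2, 3] (GPU 1 absent), allocating bitmap indices {1, 2} yields "2,3" rather than the naive "1,2".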
* Changes in SLURM 2.4.0.pre3
=============================
 -- Let a job be submitted even if it exceeds a QOS limit. The job will be left in a pending state until the QOS limit or job parameters change. Patch by Phil Eckert, LLNL.
 -- Add sacct support for the option "--name". Work by Yuri D'Elia, Center for Biomedicine, EURAC Research, Italy.
 -- BGQ - handle preemption.
 -- Add an srun shepherd process to cancel a job and/or step if the srun process is killed abnormally (e.g. SIGKILL).
 -- BGQ - handle deadlock issue when a nodeboard goes into an error state.
 -- BGQ - more thorough handling of blocks with multiple jobs running on them.
 -- Fix man2html process to compile in the build directory instead of the source dir.
 -- Behavior of srun --multi-prog modified so that any program arguments specified on the command line will be appended to the program arguments specified in the program configuration file.
 -- Add new command, sdiag, which reports a variety of job scheduling statistics. Based upon work by Alejandro Lucero Palau, BSC.
 -- BLUEGENE - Added DefaultConnType to the bluegene.conf file. This makes it so you can specify any connection type you would like (TORUS or MESH) as the default in dynamic mode. Previously it always defaulted to TORUS.
 -- Made squeue -n and -w options more consistent with salloc, sbatch, srun, and scancel. Patch by Don Lipari, LLNL.
 -- Have sacctmgr remove user records when no associations exist for that user.
 -- Several header file changes for clean build with NetBSD. Patches from Aleksej Saushev.
 -- Fix for possible deadlock in accounting logic: Avoid calling jobacct_gather_g_getinfo() until there is data to read from the socket.
 -- Fix race condition that could generate "job_cnt_comp underflow" errors on front-end architectures.
 -- BGQ - Fix issue where a system with missing cables could cause a core dump.
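The --multi-prog change in 2.4.0.pre3 above can be sketched as follows; the config-line layout shown ("&lt;taskids&gt; &lt;program&gt; [args...]") follows the documented format, but the helper itself is hypothetical:

```python
def build_task_cmd(conf_line, cli_args):
    # Per the 2.4.0.pre3 change, arguments given on the srun command
    # line are appended after the arguments from the --multi-prog
    # configuration file, rather than replacing them.
    fields = conf_line.split()
    taskids, program, conf_args = fields[0], fields[1], fields[2:]
    return taskids, [program] + conf_args + list(cli_args)
```

For a config line "0-3 ./worker --mode=a" and an invocation like `srun --multi-prog conf extra`, tasks 0-3 would each run `./worker --mode=a extra`.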
* Changes in SLURM 2.4.0.pre2
=============================
 -- CRAY - Add support for GPU memory allocation using SLURM GRES (Generic RESource) support. Work by Steve Trofinoff, CSCS.
 -- Add support for job allocations with multiple job constraint counts. For example: salloc -C "[rack1*2&rack2*4]" ... will allocate the job 2 nodes from rack1 and 4 nodes from rack2. Support for only a single constraint name has been added to job step support.
 -- BGQ - Remove old method for marking cnodes down.
 -- BGQ - Remove BGP images from view in sview.
 -- BGQ - print out failed cnodes in scontrol show nodes.
 -- BGQ - Add srun option of "--runjob-opts" to pass options to the runjob command.
 -- FRONTEND - handle step launch failure better.
 -- BGQ - Added a mutex to protect the now changing ba_system pointers.
 -- BGQ - added new functionality for sub-block allocations - no preemption for this yet though.
 -- Add --name option to squeue to filter output by job name. Patch from Yuri D'Elia.
 -- BGQ - Added linking to the runjob client library, which gives support for TotalView to use srun instead of runjob.
 -- Add numeric range checks to scontrol update options. Patch from Phil Eckert, LLNL.
 -- Add ReconfigFlags configuration option to control actions of "scontrol reconfig". Patch from Don Albert, Bull.
 -- BGQ - handle reboots with multiple jobs running on a block.
 -- BGQ - Add message handler thread to forward signals to runjob process.

* Changes in SLURM 2.4.0.pre1
=============================
 -- BGQ - use the ba_geo_tables to figure out the blocks instead of the old algorithm. This improves timing in the worst cases and simplifies the code greatly.
 -- BLUEGENE - Change to output tools labels from BP to Midplane (i.e. BP List -> MidplaneList).
 -- BLUEGENE - read MPs and BPs from the bluegene.conf.
 -- Modify srun's SIGINT handling logic timer (two SIGINTs within one second) to be based on a microsecond rather than second timer.
 -- Modify advance reservation to accept multiple specific block sizes rather than a single node count.
 -- Permit an administrator to change a job's QOS to any value without validating that the job's owner has permission to use that QOS. Based upon patch by Phil Eckert (LLNL).
 -- Add trigger flag for a permanent trigger. The trigger will NOT be purged after an event occurs, but only when explicitly deleted.
 -- Interpret a reservation with Nodes=ALL and a Partition specification as reserving all nodes within the specified partition rather than all nodes on the system. Based upon patch by Phil Eckert (LLNL).
 -- Add the ability to reboot all compute nodes after they become idle. The RebootProgram configuration parameter must be set and an authorized user must execute the command "scontrol reboot_nodes". Patch from Andriy Grytsenko (Massive Solutions Limited).
 -- Modify slurmdbd.conf parsing to accept DebugLevel strings (quiet, fatal, info, etc.) in addition to numeric values. The parsing of slurm.conf was modified in the same fashion for SlurmctldDebug and SlurmdDebug values. The output of sview and "scontrol show config" was also modified to report those values as strings rather than numeric values.
 -- Changed default value of StateSaveLocation configuration parameter from /tmp to /var/spool.
 -- Prevent associations from being deleted if they have any jobs in running, pending or suspended state. Previous code prevented this only for running jobs.
 -- If a job can not run due to QOS or association limits, then do not cancel the job, but leave it pending in a system held state (priority = 1). The job will run when its limits or the QOS/association limits change. Based upon a patch by Phil Eckert (LLNL).
 -- BGQ - Added logic to keep track of cnodes in an error state inside of a booted block.
 -- Added the ability to update a node's NodeAddr and NodeHostName with scontrol. Also enable setting a node's state to "future" using scontrol.
 -- Add a node state flag of CLOUD and save/restore NodeAddr and NodeHostName information for nodes with a flag of CLOUD.
 -- Cray: Add support for job reservations with node IDs that are not in numeric order. Fix for Bugzilla #5.
 -- BGQ - Fix issue with smap -R.
 -- Fix association limit support for jobs queued for multiple partitions.
 -- BLUEGENE - fix issue for sub-midplane systems to create a full system block correctly.
 -- BLUEGENE - Added option to the bluegene.conf to tell that you are running on a sub-midplane system.
 -- Added the UserID of the user issuing the RPC to the job_submit/lua functions.
 -- Fixed issue where if a job ended with ESLURMD_UID_NOT_FOUND or ESLURMD_GID_NOT_FOUND, slurm would be a little over zealous in treating a missing GID or UID as a fatal error.
 -- If the job time limit exceeds the partition maximum, but the job's minimum time limit does not, set the job's time limit to the partition maximum at allocation time.

* Changes in SLURM 2.3.6
========================
 -- Fix DefMemPerCPU for partition definitions.
 -- Fix to create a reservation with licenses and no nodes.
 -- Fix issue with assoc_mgr if a bad state file is given and the database isn't up at the time the slurmctld starts, not running the priority/multifactor plugin, and then the database is started up later.
 -- Gres: If a gres has a count of one and an associated file, then when doing a reconfiguration the node's bitmap was not cleared, resulting in an underflow upon job termination or removal from the scheduling matrix by the backfill scheduler.
 -- Fix race condition in job dependency logic which can result in an invalid memory reference.

* Changes in SLURM 2.3.5
========================
 -- Improve support for overlapping advanced reservations. Patch from Bill Brophy, Bull.
 -- Modify Makefiles for support of Debian hardening flags. Patch from Simon Ruderich.
 -- CRAY: Fix support for configuration with SlurmdTimeout=0 (never mark node that is DOWN in ALPS as DOWN in SLURM).
 -- Fixed the setting of SLURM_SUBMIT_DIR for jobs submitted by Moab (BZ#1467). Patch by Don Lipari, LLNL.
 -- Correction to init.d/slurmdbd exit code for status option. Patch by Bill Brophy, Bull.
 -- When the optional max_time is not specified for --switches=count, the site max (SchedulerParameters=max_switch_wait=seconds) is used for the job. Based on patch from Rod Schultz.
 -- Fix bug in select/cons_res plugin when used with topology/tree and a node range count in job allocation request.
 -- Fixed moab_2_slurmdb.pl script to correctly work for end records.
 -- Add support for new SchedulerParameters of max_depend_depth defining the maximum number of jobs to test for circular dependencies (i.e. job A waits for job B to start and job B waits for job A to start). Default value is 10 jobs.
 -- Fix potential race condition if MinJobAge is very low (i.e. 1) and using slurmdbd accounting and running large amounts of jobs (>50 per second). Job information could be corrupted before it had a chance to reach the DBD.
 -- Fix state restore of job limit set from admin value for min_cpus.
 -- Fix clearing of limit values if an admin removes the limit for max cpus and time limit where it was previously set by an admin.
 -- Fix issue where a log message is more than 256 chars and then has a format.
 -- Fix sched/wiki2 to support job account name, gres, partition name, wckey, or working directory that contains "#" (a job record separator). Also fix for wckey or working directory that contains a double quote '\"'.
 -- CRAY - fix for handling memory requests from user for an allocation.
 -- Add support for switches parameter to the job_submit/lua plugin. Work by Par Andersson, NSC.
 -- Fix to job preemption logic to preempt multiple jobs at the same time.
 -- Fix minor issue where uid and gid were switched in sview for submitting batch jobs.
 -- Fix possible illegal memory reference in slurmctld for job step with relative option. Work by Matthieu Hautreux (CEA).
 -- Reset priority of system held jobs when dependency is satisfied. Work by Don Lipari, LLNL.

* Changes in SLURM 2.3.4
========================
 -- Set DEFAULT flag in partition structure when slurmctld reads the configuration file. Patch from Rémi Palancher.
 -- Fix for possible deadlock in accounting logic: Avoid calling jobacct_gather_g_getinfo() until there is data to read from the socket.
 -- Fix typo in accounting when using reservations. Patch from Alejandro Lucero Palau.
 -- Fix to the multifactor priority plugin to calculate effective usage earlier to give a correct priority on the first decay cycle after a restart of the slurmctld. Patch from Martin Perry, Bull.
 -- Permit user root to run a job step for any job as any user. Patch from Didier Gazen, Laboratoire d'Aerologie.
 -- BLUEGENE - fix for not allowing jobs if all midplanes are drained and all blocks are in an error state.
 -- Avoid slurmctld abort due to bad pointer when setting an advanced reservation MAINT flag if it contains no nodes (only licenses).
 -- Fix bug when a requeued batch job is scheduled to run on a different node zero, but attempts job launch on the old node zero.
 -- Fix bug in step task distribution when nodes are not configured in numeric order. Patch from Hongjia Cao, NUDT.
 -- Fix for srun running within an existing allocation with --exclude option and --nnodes count small enough to remove more nodes. Patch from Phil Eckert, LLNL.
 -- Work around to handle certain combinations of glibc/kernel (i.e. glibc-2.14/Linux-3.1) to correctly open the pty of the slurmstepd as the job user. Patch from Mark Grondona, LLNL.
 -- Modify linking to include "-ldl" only when needed. Patch from Aleksej Saushev.
 -- Fix smap regression to display nodes that are drained or down correctly.
 -- Several bug fixes and performance improvements related to batch scripts containing very large numbers of arguments. Patches from Par Andersson, NSC.
 -- Fixed extremely hard to reproduce threading issue in assoc_mgr.
 -- Correct "scontrol show daemons" output if there is more than one ControlMachine configured.
 -- Add node read lock where needed in slurmctld/agent code.
 -- Added test for LUA library named "liblua5.1.so.0" in addition to "liblua5.1.so" as needed by Debian. Patch by Remi Palancher.
 -- Added partition default_time field to job_submit LUA plugin. Patch by Remi Palancher.
 -- Fix bug in cray/srun wrapper stdin/out/err file handling.
 -- In cray/srun wrapper, only include aprun "-q" option when srun "--quiet" option is used.
 -- BLUEGENE - fix issue where if a small block was in error it could hold up the queue when trying to place a larger than midplane job.
 -- CRAY - ignore all interactive nodes and jobs on interactive nodes.
 -- Add new job state reason of "FrontEndDown" which applies only to Cray and IBM BlueGene systems.
 -- Cray - Enable configure option of "--enable-salloc-background" to permit the srun and salloc commands to be executed in the background. This does NOT remove the ALPS limitation that only one job reservation can be created for each Linux session ID.
 -- Cray - For srun wrapper when creating a job allocation, set the default job name to the executable file's name.
 -- Add support for Cray ALPS 5.0.0.
 -- FRONTEND - if a front end unexpectedly reboots, kill all jobs but don't mark the front end node down.
 -- FRONTEND - don't down a front end node if you have an epilog error.
 -- Cray - fix so that if a frontend slurmd was started after the slurmctld had already pinged it on startup, the unresponding flag gets removed from the frontend node.
 -- Cray - Fix issue of smap not displaying grid correctly.
 -- Fixed minor memory leak in sview.

* Changes in SLURM 2.3.3
========================
 -- Fix task/cgroup plugin error when used with GRES. Patch by Alexander Bersenev (Institute of Mathematics and Mechanics, Russia).
 -- Permit a pending job exceeding a partition limit to run if its QOS flag is modified to permit the partition limit to be exceeded.
    Patch from Bill Brophy, Bull.
 -- BLUEGENE - Fixed preemption issue.
 -- sacct search for jobs using filtering was ignoring the wckey filter.
 -- Fixed issue with QOS preemption when adding a new QOS.
 -- Fixed issue with the comment field being used in a job finishing before
    it starts in accounting.
 -- Add slashes in front of derived exit code when modifying a job.
 -- Handle numeric suffix of "T" for terabyte units. Patch from John
    Thiltges, University of Nebraska-Lincoln.
 -- Prevent resetting a held job's priority when updating other job
    parameters. Patch from Alejandro Lucero Palau, BSC.
 -- Improve logic to import a user's environment. Needed with
    --get-user-env option used with Moab. Patch from Mark Grondona, LLNL.
 -- Fix bug in sview layout if node count is less than the configured
    grid_x_width.
 -- Modify PAM module to prefer to use the SLURM library with the same
    major release number that it was built with.
 -- Permit gres count configuration of zero.
 -- Fix race condition where the sbcast command can result in deadlock of
    the slurmd daemon. Patch by Don Albert, Bull.
 -- Fix bug in srun --multi-prog configuration file to avoid printing a
    duplicate record error when "*" is used at the end of the file for the
    task ID.
 -- Let operators see reservation data even if "PrivateData=reservations"
    flag is set in slurm.conf. Patch from Don Albert, Bull.
 -- Added new sbatch option "--export-file" as needed for latest version of
    Moab. Patch from Phil Eckert, LLNL.
 -- Fix for sacct printing CPUTime(RAW) where the value is greater than a
    32 bit number.
 -- Fix bug in --switch option with topology resulting in bad switch count
    use. Patch from Alejandro Lucero Palau (Barcelona Supercomputer Center).
 -- Fix PrivateFlags bug when using the Priority Multifactor plugin. If
    using sprio, all jobs would be returned even if the flag was set. Patch
    from Bill Brophy, Bull.
 -- Fix for possible invalid memory reference in slurmctld in job
    dependency logic. Patch from Carles Fenoy (Barcelona Supercomputer
    Center).
* Changes in SLURM 2.3.2
========================
 -- Add configure option of "--without-rpath" which builds SLURM tools
    without the rpath option, which will work if Munge and BlueGene
    libraries are in the default library search path and make system
    updates easier.
 -- Fixed issue where, if a job ended with ESLURMD_UID_NOT_FOUND or
    ESLURMD_GID_NOT_FOUND, slurm would be a little overzealous in treating
    a missing GID or UID as a fatal error.
 -- Backfill scheduling - Add SchedulerParameters configuration parameter
    of "bf_res" to control the resolution in the backfill scheduler's data
    about when jobs begin and end. Default value is 60 seconds (used to be
    1 second).
 -- Cray - Remove the "family" specification from the GPU reservation
    request.
 -- Updated set_oomadj.c, replacing deprecated oom_adj reference with
    oom_score_adj.
 -- Fix resource allocation bug: generic resources allocation was ignoring
    the job's ntasks_per_node and cpus_per_task parameters. Patch from
    Carles Fenoy, BSC.
 -- Avoid orphan job step if slurmctld is down when a job step completes.
 -- Fix Lua link order. Patch from Pär Andersson, NSC.
 -- Set SLURM_CPUS_PER_TASK=1 when user specifies --cpus-per-task=1.
 -- Fix for fatal error managing GRES. Patch by Carles Fenoy, BSC.
 -- Fixed race condition when using the DBD in accounting where, if a job
    wasn't started at the time the eligible message was sent but started
    before the db_index was returned, information like start time would be
    lost.
 -- Fix issue in accounting where normalized shares could be updated
    incorrectly when getting fairshare from the parent.
 -- Fixed the case where, if not enforcing associations but wanting QOS
    support, a default qos on the cluster is now filled in correctly.
 -- Fix in select/cons_res for "fatal: cons_res: sync loop not progressing"
    with some configurations and job option combinations.
 -- BLUEGENE - Fixed issue with handling HTC modes and rebooting.
* Changes in SLURM 2.3.1
========================
 -- Do not remove the backup slurmctld's pid file when it assumes control,
    only when it actually shuts down. Patch from Andriy Grytsenko (Massive
    Solutions Limited).
 -- Avoid clearing a job's reason from JobHeldAdmin or JobHeldUser when it
    is otherwise updated using scontrol or sview commands. Patch based upon
    work by Phil Eckert (LLNL).
 -- BLUEGENE - Fix for changing the defined blocks in the bluegene.conf
    when jobs happen to be running on blocks not in the new config.
 -- Many cosmetic modifications to eliminate warning messages from the GCC
    version 4.6 compiler.
 -- Fix for sview reservation tab when finding the correct reservation.
 -- Fix for handling QOS limits per user on a reconfig of the slurmctld.
 -- Do not treat the absence of a gres.conf file as a fatal error on
    systems configured with GRES, but set GRES counts to zero.
 -- BLUEGENE - Update correctly the state in the reason of a block if an
    admin sets the state to error.
 -- BLUEGENE - handle reason of blocks in error more correctly between
    restarts of the slurmctld.
 -- BLUEGENE - Fix minor potential memory leak when setting block error
    reason.
 -- BLUEGENE - Fix so that if running in Static/Overlap mode and the full
    system block is in an error state, jobs won't be denied.
 -- Fix for accounting where your cluster isn't numbered in counting order
    (i.e. 1-9,0 instead of 0-9). The bug would cause 'sacct -N nodename' to
    not give correct results on these systems.
 -- Fix to GRES allocation logic when resources are associated with
    specific CPUs on a node. Patch from Steve Trofinoff, CSCS.
 -- Fix bugs in sched/backfill with respect to QOS reservation support and
    job time limits. Patch from Alejandro Lucero Palau (Barcelona
    Supercomputer Center).
 -- BGQ - fix to set up corner correctly for sub block jobs.
 -- Major re-write of the CPU Management User and Administrator Guide (web
    page) by Martin Perry, Bull.
 -- BLUEGENE - If removing blocks from the system that once existed,
    cleanup of the old block now happens correctly.
 -- Prevent slurmctld crashing with configuration of MaxMemPerCPU=0.
 -- Prevent job hold by operator or account coordinator of his own job from
    being an Administrator Hold rather than User Hold by default.
 -- Cray - Fix for srun.pl parsing to avoid adding spaces between option
    and argument (e.g. "-N2" parsed properly without changing to "-N 2").
 -- Major updates to cgroup support by Mark Grondona (LLNL), Matthieu
    Hautreux (CEA) and Sam Lang. Fixes timing problems with respect to the
    task_epilog. Allows cgroup mount point to be configurable. Added new
    configuration parameters MaxRAMPercent and MaxSwapPercent. Allow cgroup
    configuration parameters that are percentages to be floating point.
 -- Fixed issue where sview wasn't displaying correct nice value for jobs.
 -- Fixed issue where sview wasn't displaying correct min memory per
    node/cpu value for jobs.
 -- Disable some SelectTypeParameters for select/linear that aren't
    compatible.
 -- Move slurm_select_init to the proper place to avoid loading multiple
    select plugins in the slurmd.
 -- BGQ - Include runjob_plugin.so in the bluegene rpm.
 -- Report correct job "Reason" if needed nodes are DOWN, DRAINED, or
    NOT_RESPONDING: "Resources" rather than "PartitionNodeLimit".
 -- BLUEGENE - Fixed issues with running on a sub-midplane system.
 -- Added some missing calls to allow older versions of SLURM to talk to
    newer.
 -- BGQ - allow steps to be run.
 -- Do not attempt to run HealthCheckProgram on powered down nodes. Patch
    from Ramiro Alba, Centre Tecnològic de Tranferència de Calor, Spain.

* Changes in SLURM 2.3.0-2
==========================
 -- Fix for memory issue inside sview.
 -- Fix issue where, if a job was pending and the slurmctld was restarted,
    a variable wasn't initialized in the job structure making it so that
    job wouldn't run.
* Changes in SLURM 2.3.0
========================
 -- BLUEGENE - make sure we only set the jobinfo_select start_loc on a job
    when we are on a small block, not a regular one.
 -- BGQ - fix issue where not copying the correct amount of memory.
 -- BLUEGENE - fix clean start if jobs were running when the slurmctld was
    shut down and then the system size changed. This would probably only
    happen if you were emulating a system.
 -- Fix sview for calling a cray system from a non-cray system to get the
    correct geometry of the system.
 -- BLUEGENE - fix to correctly import previous version of block state
    file.
 -- BLUEGENE - handle loading better when doing a clean start with static
    blocks.
 -- Add sinfo format and sort option "%n" for NodeHostName and "%o" for
    NodeAddr.
 -- If a job is deferred due to partition limits, then re-test those limits
    after a partition is modified. Patch from Don Lipari.
 -- Fix bug which would crash slurmctld if a job's owner (not root) tries
    to clear a job's licenses by setting the value to "".
 -- Cosmetic fix for printing out debug info in the priority plugin.
 -- In sview, when switching from a bluegene machine to a regular linux
    cluster and vice versa, the node->base partition lists will be
    displayed if set up in your .slurm/sviewrc file.
 -- BLUEGENE - Fix for creating full system static block on a BGQ system.
 -- BLUEGENE - Fix deadlock issue if toggling between Dynamic and Static
    block allocation with jobs running on blocks that don't exist in the
    static setup.
 -- BLUEGENE - Modify code to only give HTC states to BGP systems and not
    allow them on Q systems.
 -- BLUEGENE - Make it possible for an admin to define multiple dimension
    conn_types in a block definition.
 -- BGQ - Alter tools to output multiple dimensional conn_type.

* Changes in SLURM 2.3.0.rc2
============================
 -- With sched/wiki or sched/wiki2 (Maui or Moab scheduler), insure that a
    requeued job's priority is reset to zero.
 -- BLUEGENE - fix to run steps correctly in a BGL/P emulated system.
 -- Fixed issue where, if there was a network issue between the slurmctld
    and the DBD where both remained up but were disconnected, the slurmctld
    would get registered again with the DBD.
 -- Fixed issue where, if the DBD connection from the ctld goes away
    because of a POLLERR, the dbd_fail callback is called.
 -- BLUEGENE - Fix to smap command-line mode display.
 -- Change in GRES behavior for job steps: A job step's default generic
    resource allocation will be set to that of the job. If a job step's
    --gres value is set to "none" then none of the generic resources which
    have been allocated to the job will be allocated to the job step.
 -- Add srun environment value of SLURM_STEP_GRES to set default --gres
    value for a job step.
 -- Require SchedulerTimeSlice configuration parameter to be at least 5
    seconds to avoid thrashing the slurmd daemon.
 -- Cray - Fix to make node state in accounting consistent with the state
    set by ALPS.
 -- Cray - A node DOWN to ALPS will be marked DOWN to SLURM only after
    reaching SlurmdTimeout. In the interim, the node state will be
    NO_RESPOND. This change makes SLURM's handling of the node DOWN state
    more consistent with ALPS, and affects only Cray systems.
 -- Cray - Fix to work with 4.0.* instead of just 4.0.0.
 -- Cray - Modify srun/aprun wrapper to map --exclusive to -F exclusive
    and --share to -F share. Note this does not consider the partition's
    Shared configuration, so it is an imperfect mapping of options.
 -- BLUEGENE - Added notice in the print config to tell if you are emulated
    or not.
 -- BLUEGENE - Fix job step scalability issue with large task count.
 -- BGQ - Improved c-node selection when asked for a sub-block job that
    cannot fit into the available shape.
 -- BLUEGENE - Modify "scontrol show step" to show I/O nodes (BGL and BGP)
    or c-nodes (BGQ) allocated to each step. Change field name from
    "Nodes=" to "BP_List=".
 -- Code cleanup on step request to get the correct select_jobinfo.
 -- Memory leak fixed for rolling up accounting with down clusters.
 -- BGQ - fix issue where, if the first job step is the entire block and
    the next parallel step is run on a sub block, SLURM won't over
    subscribe cnodes.
 -- Treat duplicate switch name in topology.conf as a fatal error. Patch
    from Rod Schultz, Bull.
 -- Minor update to documentation describing the AllowGroups option for a
    partition in the slurm.conf.
 -- Fix problem with _job_create() when not using qos's. It makes
    _job_create() consistent with similar logic in select_nodes().
 -- GrpCPURunMins in a QOS flushed out.
 -- Fix for squeue -t "CONFIGURING" to actually work.
 -- CRAY - Add cray.conf parameter of SyncTimeout, maximum time to defer
    job scheduling if SLURM node or job state are out of synchronization
    with ALPS.
 -- If salloc was run as interactive, with job control, reset the
    foreground process group of the terminal to the process group of the
    parent pid before exiting. Patch from Don Albert, Bull.
 -- BGQ - set up the corner of a sub block correctly based on a relative
    position in the block instead of absolute.
 -- BGQ - make sure the recently added select_jobinfo of a step launch
    request isn't sent to the slurmd where environment variables would be
    overwritten incorrectly.

* Changes in SLURM 2.3.0.rc1
============================
 -- NOTE THERE HAVE BEEN NEW FIELDS ADDED TO THE JOB AND PARTITION STATE
    SAVE FILES AND RPCS. PENDING AND RUNNING JOBS WILL BE LOST WHEN
    UPGRADING FROM EARLIER VERSION 2.3 PRE-RELEASES AND RPCS WILL NOT WORK
    WITH EARLIER VERSIONS.
 -- select/cray: Add support for Accelerator information including model
    and memory options.
 -- Cray systems: Add support to suspend/resume the salloc command to
    insure that aprun does not get initiated when the job is suspended.
    Processes suspended and resumed are determined by using process group
    ID and parent process ID, so some processes may be missed.
    Since salloc runs as a normal user, its ability to identify processes
    associated with a job is limited.
 -- Cray systems: Modify smap and sview to display all nodes even if
    multiple nodes exist at each coordinate.
 -- Improve efficiency of select/linear plugin with topology/tree plugin
    configured. Patch by Andriy Grytsenko (Massive Solutions Limited).
 -- For front-end architectures on which job steps are run (emulated Cray
    and BlueGene systems only), fix bug that would free memory still in
    use.
 -- Add squeue support to display a job's license information. Patch by
    Andy Roosen (University of Delaware).
 -- Add flag to the select APIs for job suspend/resume indicating if the
    action is for gang scheduling or an explicit job suspend/resume by the
    user. Only an explicit job suspend/resume will reset the job's priority
    and make resources exclusively held by the job available to other jobs.
 -- Fix possible invalid memory reference in sched/backfill. Patch by
    Andriy Grytsenko (Massive Solutions Limited).
 -- Add select_jobinfo to the task launch RPC. Based upon patch by Andriy
    Grytsenko (Massive Solutions Limited).
 -- Add DefMemPerCPU/Node and MaxMemPerCPU/Node to partition configuration.
    This improves flexibility when gang scheduling only specific
    partitions.
 -- Added new enums to print out when a job is held by a QOS instead of an
    association limit.
 -- Enhancements to sched/backfill performance with select/cons_res plugin.
    Patch from Bjørn-Helge Mevik, University of Oslo.
 -- Correct job run time reported by smap for suspended jobs.
 -- Improve job preemption logic to avoid preempting more jobs than needed.
 -- Add contribs/arrayrun tool providing support for job arrays.
    Contributed by Bjørn-Helge Mevik, University of Oslo. NOTE: Not
    currently packaged as RPM and manual file editing is required.
 -- When suspending a job, wait 2 seconds instead of 1 second between
    sending SIGTSTP and SIGSTOP. Some MPI implementations were not stopping
    within the 1 second delay.
 -- Add support for managing devices based upon Linux cgroup container.
    Based upon patch by Yiannis Georgiou, Bull.
 -- Fix memory buffering bug if an AllowGroups parameter of a partition has
    100 or more users. Patch by Andriy Grytsenko (Massive Solutions
    Limited).
 -- Fix bug in generic resource tracking of gres associated with specific
    CPUs. Resources were being over-allocated.
 -- On systems with front-end nodes (IBM BlueGene and Cray) limit batch
    jobs to only one CPU of these shared resources.
 -- Set SLURM_MEM_PER_CPU or SLURM_MEM_PER_NODE environment variables for
    both interactive (salloc) and batch jobs if the job has a memory limit.
    For Cray systems also set CRAY_AUTO_APRUN_OPTIONS environment variable
    with the memory limit.
 -- Fix bug in select/cons_res task distribution logic when
    tasks-per-node=0. Patch from Rod Schultz, Bull.
 -- Restore node configuration information (CPUs, memory, etc.) for powered
    down nodes when the slurmctld daemon restarts rather than waiting for
    the node to be restored to service and getting the information from the
    node (NOTE: Only relevant if FastSchedule=0).
 -- For Cray systems with the srun2aprun wrapper, rebuild the srun man page
    identifying the srun options which are valid on that system.
 -- BlueGene: Permit users to specify a separate connection type for each
    dimension (e.g. "--conn-type=torus,mesh,torus").
 -- Add the ability for a user to limit the number of leaf switches in a
    job's allocation using the --switch option of salloc, sbatch and srun.
    There is also a new SchedulerParameters value of max_switch_wait, which
    a SLURM administrator can use to set a maximum job delay and prevent a
    user job from blocking lower priority jobs for too long. Based on work
    by Rod Schultz, Bull.

* Changes in SLURM 2.3.0.pre6
=============================
 -- NOTE: THERE HAS BEEN A NEW FIELD ADDED TO THE CONFIGURATION RESPONSE
    RPC AS SHOWN BY "SCONTROL SHOW CONFIG".
    THIS FUNCTION WILL ONLY WORK WHEN THE SERVER AND CLIENT ARE BOTH
    RUNNING SLURM VERSION 2.3.0.pre6
 -- Modify job expansion logic to support licenses, generic resources, and
    currently running job steps.
 -- Added an rpath if using the --with-munge option of configure.
 -- Add support for multiple sets of DEFAULT node, partition, and frontend
    specifications in slurm.conf so that default values can be changed
    multiple times as the configuration file is read.
 -- BLUEGENE - Improved logic to place small blocks in free space before
    freeing larger blocks.
 -- Add optional argument to srun's --kill-on-bad-exit so that a user can
    set its value to zero and override a SLURM configuration parameter of
    KillOnBadExit.
 -- Fix bug in GraceTime support for preempted jobs that prevented proper
    operation when more than one job was being preempted. Based on patch
    from Bill Brophy, Bull.
 -- Fix for running sview from a non-bluegene cluster to a bluegene
    cluster. Regression from pre5.
 -- If a job's TMPDIR environment is not set or is not usable, reset to
    "/tmp". Patch from Andriy Grytsenko (Massive Solutions Limited).
 -- Remove logic for defunct RPC: DBD_GET_JOBS.
 -- Propagate DebugFlag changes by scontrol to the plugins.
 -- Improve accuracy of REQUEST_JOB_WILL_RUN start time with respect to
    higher priority pending jobs.
 -- Add -R/--reservation option to squeue command as a job filter.
 -- Add scancel support for --clusters option.
 -- Note that scontrol and sprio can only support a single cluster at one
    time.
 -- Add support to salloc for a new environment variable SALLOC_KILL_CMD.
 -- Add scontrol ability to increment or decrement a job or step time
    limit.
 -- Add support for SLURM_TIME_FORMAT environment variable to control time
    stamp output format. Work by Gerrit Renker, CSCS.
 -- Fix error handling in mvapich plugin that could cause srun to enter an
    infinite loop under rare circumstances.
 -- Add support for multiple task plugins. Patch from Andriy Grytsenko
    (Massive Solutions Limited).
 -- Addition of per-user node/cpu limits for QOS's. Patch from Aaron
    Knister, UMBC.
 -- Fix logic for multiple job resize operations.
 -- BLUEGENE - many fixes to make things work correctly on an L/P system.
 -- Fix bug in layout of job step with --nodelist option plus node count.
    Old code could allocate too few nodes.

* Changes in SLURM 2.3.0.pre5
=============================
 -- NOTE: THERE HAS BEEN A NEW FIELD ADDED TO THE JOB STATE FILE. UPGRADES
    FROM VERSION 2.3.0-PRE4 WILL RESULT IN LOST JOBS UNLESS THE
    "orig_dependency" FIELD IS REMOVED FROM JOB STATE SAVE/RESTORE LOGIC.
    ON CRAY SYSTEMS A NEW "confirm_cookie" FIELD WAS ADDED AND HAS THE SAME
    EFFECT OF DISABLING JOB STATE RESTORE.
 -- BLUEGENE - Improve speed of start up when removing blocks at the
    beginning.
 -- Correct init.d/slurm status to have non-zero exit code if ANY Slurm
    daemon that should be running on the node is not running. Patch from
    Rod Schulz, Bull.
 -- Improve accuracy of response to "srun --test-only jobid=#".
 -- Fix bug in front-end configurations which reports job_cnt_comp
    underflow errors after slurmctld restarts.
 -- Eliminate "error from _trigger_slurmctld_event in backup.c" due to lack
    of event triggers.
 -- Fix logic in BackupController to properly recover front-end node state
    and avoid purging active jobs.
 -- Added man pages to html pages and the new cpu_management.html page.
    Submitted by Martin Perry / Rod Schultz, Bull.
 -- Job dependency information will only show the currently active
    dependencies rather than the original dependencies. From Dan Rusak,
    Bull.
 -- Add RPCs to get the SPANK environment variables from the slurmctld
    daemon. Patch from Andrej N. Gritsenko.
 -- Updated plugins/task/cgroup/task_cgroup_cpuset.c to support newer
    HWLOC_API_VERSION.
 -- Do not build select/bluegene plugin if C++ compiler is not installed.
 -- Add new configure option --with-srun2aprun to build an srun command
    which is a wrapper over Cray's aprun command and supports many srun
    options.
    Without this option, the srun command will advise the user to use the
    aprun command.
 -- Change container ID supported by proctrack plugin from 32-bit to
    64-bit.
 -- Added contribs/cray/libalps_test_programs.tar.gz with tools to validate
    SLURM's logic used to support Cray systems.
 -- Create RPM for srun command that is a wrapper for the Cray/ALPS aprun
    command. Dependent upon .rpmmacros parameter of "%_with_srun2aprun".
 -- Add configuration parameter MaxStepCount to limit effect of bad batch
    scripts.
 -- Moving to github.
 -- Fix for handling a 2.3 system talking to a 2.2 slurmctld.
 -- Add contribs/lua/job_submit.license.lua script. Update job_submit and
    Lua related documentation.
 -- Test if _make_batch_script() is called with a NULL script.
 -- Increase hostlist support from 24k to 64k nodes.
 -- Renamed the Accounting Storage database's "DerivedExitString" job field
    to "Comment". Provided backward compatible support for
    "DerivedExitString" in the sacctmgr tool.
 -- Added the ability to save the job's comment field to the Accounting
    Storage db (to the formerly named, "DerivedExitString" job field). This
    behavior is enabled by a new slurm.conf parameter:
    AccountingStoreJobComment.
 -- Fix srun to handle signals correctly when waiting for a step creation.
 -- Preserve the last job ID across slurmctld daemon restarts even if the
    job state file can not be fully recovered.
 -- Made the hostlist functions able to arbitrarily handle any size
    dimension no matter what the size of the cluster is in dimensions.

* Changes in SLURM 2.3.0.pre4
=============================
 -- Add GraceTime to Partition and QOS data structures. Preempted jobs will
    be given this time interval before termination. Work by Bill Brophy,
    Bull.
 -- Add the ability for scontrol and sview to modify slurmctld DebugFlags
    values.
 -- Various Cray-specific patches:
    - Fix a bug in distinguishing XT from XE.
    - Avoids problems with empty nodenames on Cray.
    - Check whether ALPS is hanging on to nodes, which happens if ALPS has
      not yet cleaned up the node partition.
    - Stops select/cray from clobbering node_ptr->reason.
    - Perform 'safe' release of ALPS reservations using inventory and
      apkill.
    - Compile-time sanity check for the apbasil and apkill files.
    - Changes error handling in do_basil_release() (called by
      select_g_job_fini()).
    - Warn that salloc --no-shell option is not supported on Cray systems.
 -- Add a reservation flag of "License_Only". If set, then jobs using the
    reservation may use the licenses associated with it plus any compute
    nodes. Otherwise the job is limited to the compute nodes associated
    with the reservation.
 -- Change slurm.conf node configuration parameter from "Procs" to "CPUs".
    Both parameters will be supported for now.
 -- BLUEGENE - fix for when a user requests only midplane names with no
    count at job submission time to process the node count correctly.
 -- Fix job step resource allocation problem when both node and task counts
    are specified. New logic selects nodes with larger CPU counts as
    needed.
 -- BGQ - make it so srun wraps runjob (still under construction, but works
    for most cases).
 -- Permit a job's QOS and Comment field to both change in a single RPC.
    This was previously disabled since Moab stored the QOS within the
    Comment field.
 -- Add support for jobs to expand in size. Submit an additional batch job
    with the option "--dependency=expand:<jobid>". See web page
    "faq.html#job_size" for details. Restrictions to be removed in the
    future.
 -- Added --with-alps-emulation to configure, and also an optional
    cray.conf to set up alps location and database information.
 -- Modify PMI data types from 16-bits to 32-bits in order to support
    MPICH2 jobs with more than 65,536 tasks. Patch from Hongjia Cao, NUDT.
 -- Set slurmd's soft process CPU limit equal to its hard limit and notify
    the user if the limit is not infinite.
 -- Added proctrack/cgroup and task/cgroup plugins from Matthieu Hautreux,
    CEA.
 -- Fix slurmctld restart logic that could leave nodes in UNKNOWN state for
    a longer time than necessary after restart.

* Changes in SLURM 2.3.0.pre3
=============================
 -- BGQ - Appears to work correctly in emulation mode, no sub blocks just
    yet.
 -- Minor typos fixed.
 -- Various bug fixes for Cray systems.
 -- Fix bug where setting a compute node to idle state was failing to set
    the system's up_node_bitmap.
 -- BLUEGENE - code reorder.
 -- BLUEGENE - Now only one select plugin for all Bluegene systems.
 -- Modify srun to set the SLURM_JOB_NAME environment variable when srun is
    used to create a new job allocation. Not set when srun is used to
    create a job step within an existing job allocation.
 -- Modify init.d/slurm script to start multiple slurmd daemons per compute
    node if so configured. Patch from Matthieu Hautreux, CEA.
 -- Change license data structure counters from uint16_t to uint32_t to
    support larger license counts.

* Changes in SLURM 2.3.0.pre2
=============================
 -- Log a job's requeue or cancellation due to preemption to that job's
    stderr: "*** JOB 65547 CANCELLED AT 2011-01-21T12:59:33 DUE TO
    PREEMPTION ***".
 -- Added new job termination state of JOB_PREEMPTED, "PR" or "PREEMPTED"
    to indicate job termination was due to preemption.
 -- Optimize advanced reservations resource selection for computer
    topology. The logic has been added to select/linear and
    select/cons_res, but will not be enabled until the other select
    plugins are modified.
 -- Remove checkpoint/xlch plugin.
 -- Disable deletion of partitions that have unfinished jobs (pending,
    running or suspended states). Patch from Martin Perry, BULL.
 -- In sview, disable the sorting of node records by name at startup for
    clusters over 1000 nodes. Users can enable this by selecting the "Name"
    tab. This change dramatically improves the scalability of sview.
 -- Report error when trying to change a node's state from scontrol for
    Cray systems.
 -- Do not attempt to read the batch script for non-batch jobs. This patch
    eliminates some inappropriate error messages.
 -- Preserve NodeHostName when reordering nodes due to system topology.
 -- On Cray/ALPS systems do node inventory before scheduling jobs.
 -- Disable some salloc options on Cray systems.
 -- Disable scontrol's wait_job command on Cray systems.
 -- Disable srun command on native Cray/ALPS systems.
 -- Updated configure option "--enable-cray-emulation" (still under
    development) to emulate a cray XT/XE system, and auto-detect real Cray
    XT/XE systems (removed the no longer needed --enable-cray configure
    option). Building on native Cray systems requires the
    cray-MySQL-devel-enterprise rpm and expat XML parser library/headers.

* Changes in SLURM 2.3.0.pre1
=============================
 -- Added that when a slurmctld closes the connection to the database its
    registered host and port are removed.
 -- Added flag to slurmdbd.conf TrackSlurmctldDown which, if set, will mark
    idle resources as down on a cluster when a slurmctld disconnects or is
    no longer reachable.
 -- Added support for more than one front-end node to run slurmd on
    architectures where the slurmd does not execute on the compute nodes
    (e.g. BlueGene). New configuration parameters FrontendNode and
    FrontendAddr added. See "man slurm.conf" for more information.
 -- With the scontrol show job command, when using the --details option,
    show a batch job's script.
 -- Add ability to create reservations or partitions and submit batch jobs
    using sview. Also add the ability to delete reservations and
    partitions.
 -- Added new configuration parameter MaxJobId. Once reached, restart job
    ID values at FirstJobId.
 -- When restarting slurmctld with priority/basic, increment all job
    priorities so the highest job priority becomes TOP_PRIORITY.
* Changes in SLURM 2.2.8
========================
 -- Prevent background salloc from disconnecting the terminal at
    termination. Patch by Don Albert, Bull.
 -- Fixed issue where preempt mode was skipped when creating a QOS. Patch
    by Bill Brophy, Bull.
 -- Fixed documentation (html) for PriorityUsageResetPeriod to match that
    in the man pages. Patch by Nancy Kritkausky, Bull.

* Changes in SLURM 2.2.7
========================
 -- Eliminate zombie process created if salloc exits with a stopped child
    process. Patch from Gerrit Renker, CSCS.
 -- With default configuration on non-Cray systems, enable salloc to be
    spawned as a background process. Based upon work by Don Albert (Bull)
    and Gerrit Renker (CSCS).
 -- Fixed regression from 2.2.4 in accounting where an inherited limit
    would not be set correctly in the added child association.
 -- Fixed issue with accounting when asking for jobs with a hostlist.
 -- Avoid clearing a node's Arch, OS, BootTime and SlurmdStartTime when
    "scontrol reconfig" is run. Patch from Martin Perry, Bull.

* Changes in SLURM 2.2.6
========================
 -- Fix displaying of account coordinators with sacctmgr. Possibility to
    show deleted accounts. Only a cosmetic issue, since the accounts are
    already deleted, and have no associations.
 -- Prevent opaque ncurses WINDOW struct on OS X 10.6.
 -- Fix issue with accounting when using PrivateData=jobs...: users would
    not be able to view their own jobs unless they were admins or
    coordinators, which is obviously wrong.
 -- Fix bug in node state if slurmctld is restarted while nodes are in the
    process of being powered up. Patch from Andriy Grytsenko.
 -- Change maximum batch script size from 128k to 4M.
 -- Get slurmd -f option working. Patch from Andriy Grytsenko.
 -- Fix for linking problem on OSX. Patches from Jon Bringhurst (LANL) and
    Tyler Strickland.
 -- Reset a job's priority to zero (suspended) when Moab requeues the job.
    Patch from Par Andersson, NSC.
 -- When enforcing accounting, fix polling for unknown uids for users after
    the slurmctld started. Previously one would have to issue a reconfigure
    to the slurmctld to have it look for new uids.
 -- BLUEGENE - if a block goes into an error state, fix issue where
    accounting wasn't updated correctly when the block was resumed.
 -- Synchronize power-save module better with scheduler. Patch from Andriy
    Grytsenko (Massive Solutions Limited).
 -- Avoid SEGV in association logic with user=NULL. Patch from Andriy
    Grytsenko (Massive Solutions Limited).
 -- Fixed issue in accounting where it was possible for a new
    association/wckey to be set incorrectly as a default when the new
    object was added after an original default object already existed.
    Before, the slurmctld would need to be restarted to fix the issue.
 -- Updated the Normalized Usage section in priority_multifactor.shtml.
 -- Disable use of SQUEUE_FORMAT env var if squeue -l, -o, or -s option is
    used. Patch from Aaron Knister (UMBC).

* Changes in SLURM 2.2.5
========================
 -- Correct init.d/slurm status to have non-zero exit code if ANY Slurm
    daemon that should be running on the node is not running. Patch from
    Rod Schulz, Bull.
 -- Improve accuracy of response to "srun --test-only jobid=#".
 -- Correct logic to properly support --ntasks-per-node option in the
    select/cons_res plugin. Patch from Rod Schulz, Bull.
 -- Fix bug in select/cons_res with respect to generic resource (gres)
    scheduling which prevented some jobs from starting as soon as possible.
 -- Fix memory leak in select/cons_res when backfill scheduling generic
    resources (gres).
 -- Fix for when configuring a node with more resources than in real life
    and using task/affinity.
 -- Fix so slurmctld will correctly pack 2.1 step information. (Only needed
    if a 2.1 client is talking to a 2.2 slurmctld.)
 -- Set powered down node's state to IDLE+POWER after slurmctld restart
    instead of leaving it in UNKNOWN+POWER. Patch from Andrej Gritsenko.
 -- Fix bug where srun's executable is not in its current search path, but
    can be found in the user's default search path. Modify slurmstepd to
    find the executable. Patch from Andrej Gritsenko.
 -- Make sview display correct cpu count for steps.
 -- BLUEGENE - When running in overlap mode, make sure to check the
    connection type so you can create overlapping blocks on the exact same
    nodes with different connection types (i.e. one torus, one mesh).
 -- Fix memory leak if MPI ports are reserved (for OpenMPI) and srun's
    --resv-ports option is used.
 -- Fix some anomalies in select/cons_res task layout when using the
    --cpus-per-task option. Patch from Martin Perry, Bull.
 -- Improve backfill scheduling logic when job specifies --ntasks-per-node
    and --mem-per-cpu options on a heterogeneous cluster. Patch from
    Bjorn-Helge Mevik, University of Oslo.
 -- Print warning message if srun specifies --cpus-per-task larger than used
    to create job allocation.
 -- Fix issue when changing a user's name in accounting: if using wckeys,
    the change would execute correctly, but a bad memcpy would core the
    DBD. No information would be lost or corrupted, but you would need to
    restart the DBD.

* Changes in SLURM 2.2.4
========================
 -- For batch jobs for which the Prolog fails, substitute the job ID for any
    "%j" in the job's output or error file specification.
 -- Add licenses field to the sview reservation information.
 -- BLUEGENE - Fix for handling an extremely overloaded dynamic system when
    starting jobs on overlapping blocks. Previous fallout was that the job
    would be requeued. (Happens very rarely.)
 -- In accounting_storage/filetxt plugin, substitute spaces within job
    names, step names, and account names with an underscore to ensure
    proper parsing.
 -- When building contribs/perlapi ignore both INSTALL_BASE and PERL_MM_OPT.
    Use PREFIX instead to avoid build errors from multiple installation
    specifications.
 -- Add job_submit/cnode plugin to support resource reservations of less
    than a full midplane on BlueGene computers. Treat cnodes as licenses
    which can be reserved and are consumed by jobs. This reservation
    mechanism for less than an entire midplane is still under development.
 -- Clear a job's "reason" field when a held job is released.
 -- When releasing a held job, calculate a new priority for it rather than
    just setting the priority to 1.
 -- Fix for sview started on a non-BlueGene system to pick colors correctly
    when talking to a real BlueGene system.
 -- Improve sched/backfill's expected start time calculation.
 -- Prevent abort of sacctmgr for dump command with invalid (or no)
    filename.
 -- Improve handling of job updates when using limits in accounting, and
    updating jobs as a non-admin user.
 -- Fix for "squeue --states=all" option. Bug would show no jobs.
 -- Schedule jobs with reservations before those without reservations.
 -- Fix squeue/scancel to query correctly against accounts of different
    case.
 -- Abort an srun command when its associated job gets aborted due to a
    dependency that can not be satisfied.
 -- In jobcomp plugins, report start time of zero if a pending job is
    cancelled. Previously the expected start time may have been reported.
 -- Fixed sacctmgr man page to state correct variables.
 -- Select nodes based upon their Weight when job allocation requests
    include a constraint field with a count (e.g. "srun --constraint=gpu*2
    -N4 a.out").
 -- Add support for user names that are entirely numeric and do not treat
    them as UID values. Patch from Dennis Leepow.
 -- Patch to pack/unpack double values properly if the value is negative.
    Patch from Dennis Leepow.
 -- Do not reset a job's priority when requeued or suspended.
 -- Fix problem that could let new jobs start on a node in DRAINED state.
 -- Fix cosmetic sacctmgr issue where if the user you are trying to add
    doesn't exist in the /etc/passwd file and the account you are trying to
    add them to doesn't exist, it would print (null) instead of the bad
    account name.
 -- Fix associations/QOS so that when adding back a previously deleted
    object, the object will be cleared of all old limits.
 -- BLUEGENE - Added back a lock when creating dynamic blocks to be more
    thread safe on larger systems with heavy load.

* Changes in SLURM 2.2.3
========================
 -- Update srun, salloc, and sbatch man page description of --distribution
    option. Patches from Rod Schulz, Bull.
 -- Applied patch from Martin Perry to fix "Incorrect results for
    task/affinity block second distribution and cpus-per-task > 1" bug.
 -- Avoid setting a job's eligible time while held (priority == 0).
 -- Substantial performance improvement to backfill scheduling. Patch from
    Bjorn-Helge Mevik, University of Oslo.
 -- Make timeout for communications to the slurmctld be based upon the
    MessageTimeout configuration parameter rather than always 3 seconds.
    Patch from Matthieu Hautreux, CEA.
 -- Add new scontrol option of "show aliases" to report every NodeName that
    is associated with a given NodeHostName when running multiple slurmd
    daemons per compute node (typically used for testing purposes). Patch
    from Matthieu Hautreux, CEA.
 -- Fix for handling job names with a "'" in the name within MySQL
    accounting. Patch from Gerrit Renker, CSCS.
 -- Modify condition under which salloc execution is delayed until moved to
    the foreground. Patch from Gerrit Renker, CSCS. Job control for
    interactive salloc sessions applies only if:
    a) input is from a terminal (stdin has valid termios attributes),
    b) a controlling terminal exists (non-negative tpgid),
    c) salloc is not run in allocation-only (--no-shell) mode,
    d) salloc runs in its own process group (true in interactive shells
       that support job control),
    e) salloc has been configured at compile-time to support background
       execution and is not currently in the background process group.
 -- Abort salloc if there is no controlling terminal and the --no-shell
    option is not used ("setsid salloc ..." is disabled). Patch from Gerrit
    Renker, CSCS.
 -- Fix to gang scheduling logic which could cause jobs to not be suspended
    or resumed when appropriate.
 -- Applied patch from Martin Perry to fix "Slurmd abort when using task
    affinity with plane distribution" bug.
 -- Applied patch from Yiannis Georgiou to fix "Problem with cpu binding to
    sockets option" behaviour. This change causes "--cpu_bind=sockets" to
    bind tasks only to the CPUs on each socket allocated to the job rather
    than all CPUs on each socket.
 -- Advance daily or weekly reservations immediately after termination to
    avoid having a job start that runs into the reservation when it is
    later advanced.
 -- Fix for enabling users to change their own default account, wckey, or
    QOS.
 -- BLUEGENE - If using OVERLAP mode, fixed issue with multiple overlapping
    blocks in error mode.
 -- Fix for sacctmgr to display default accounts correctly.
 -- scancel -s SIGKILL will always send the RPC to the slurmctld rather
    than the slurmd daemon(s). This ensures that tasks in the process of
    getting spawned are killed.
 -- BLUEGENE - If using OVERLAP mode, fixed issue with jobs getting denied
    at submit if the only option for their job was overlapping a block in
    an error state.

* Changes in SLURM 2.2.2
========================
 -- Correct logic to set correct job hold state (admin or user) when
    setting the job's priority using scontrol's "update jobid=..." rather
    than its "hold" or "holdu" commands.
 -- Modify squeue to report unset --mincores, --minthreads or
    --extra-node-info values as "*" rather than 65534. Patch from Rod
    Schulz, BULL.
 -- Report the StartTime of a job as "Unknown" rather than the year 2106 if
    its expected start time was too far in the future for the backfill
    scheduler to compute.
 -- Prevent a pending job's reason field from inappropriately being set to
    "Priority".
 -- In sched/backfill with jobs having QOS_FLAG_NO_RESERVE set, don't
    consider the job's time limit when attempting to backfill schedule. The
    job will just be preempted as needed at any time.
 -- Eliminated a bug in sbatch when no valid target clusters are specified.
 -- When explicitly sending a signal to a job with the scancel command and
    that job is in a pending state, send the request directly to the
    slurmctld daemon and do not attempt to send the request to slurmd
    daemons, which are not running the job anyway.
 -- In slurmctld, properly set the up_node_bitmap when setting a node's
    state to IDLE (in case the previous node state was DOWN).
 -- Fix smap to process block midplane names correctly when on a BlueGene
    system.
 -- Fix smap to once again print out the letter 'ID' for each line of a
    block/partition view.
 -- Corrected the NOTES section of the scancel man page.
 -- Fix for accounting_storage/mysql plugin to correctly query cluster
    based transactions.
 -- Fix issue when updating database for clusters that were previously
    deleted before upgrade to the 2.2 database.
 -- BLUEGENE - Handle mesh/torus check better in dynamic mode.
 -- BLUEGENE - Fixed race condition when freeing a block, which would most
    likely only happen in emulation.
 -- Fix for calculating used QOS limits correctly on a slurmctld reconfig.
 -- BLUEGENE - Fix for bad conn-type set when running small blocks in HTC
    mode.
 -- If salloc's --no-shell option is used, then do not attempt to preserve
    the terminal's state.
 -- Add new SLURM configure time parameter of --disable-salloc-background.
    If set, then salloc can only execute in the foreground. If started in
    the background, a message will be printed and the job allocation halted
    until brought into the foreground. NOTE: THIS IS A CHANGE IN DEFAULT
    SALLOC BEHAVIOR FROM V2.2.1, BUT IS CONSISTENT WITH V2.1 AND EARLIER.
 -- Added the Multi-Cluster Operation web page.
 -- Removed remnant code for enforcing max sockets/cores/threads in the
    cons_res plugin (see last item in 2.1.0-pre5). This was responsible for
    a bug reported by Rod Schultz.
 -- BLUEGENE - Set correct env vars for HTC mode on a P system to get the
    correct block.
 -- Correct RunTime reported by "scontrol show job" for pending jobs.

* Changes in SLURM 2.2.1
========================
 -- Fix setting derived exit code correctly for jobs that happen to have
    the same jobid.
 -- Better checking for time overflow when rolling up in accounting.
 -- Add scancel --reservation option to cancel all jobs associated with a
    specific reservation.
 -- Treat a reservation with no nodes like one that starts later (let jobs
    of any size get queued and do not block any pending jobs).
 -- Fix bug in gang scheduling logic that would temporarily resume too many
    jobs after a job completed.
 -- Change srun message about job step being deferred due to
    SlurmctldProlog running to be more clear and only print it when the
    --verbose option is used.
 -- Made it so you can remove the hold on jobs with sview by setting the
    priority to infinite.
 -- BLUEGENE - Better checking of small blocks in dynamic mode as to
    whether a full midplane job could run or not.
 -- Decrease the maximum sleep time between srun job step creation retry
    attempts from 60 seconds to 29 seconds. This should eliminate a
    possible synchronization problem with gang scheduling that could result
    in job step creation requests only occurring when a job is suspended.
 -- Fix to prevent changing a held job's state from HELD to DEPENDENCY
    until the job is released. Patch from Rod Schultz, Bull.
 -- Fixed sprio -M to reflect PriorityWeight values from the remote
    cluster.
 -- Fix bug in sview when trying to update an arbitrary field on more than
    one job. Formerly it would display information about one job, but
    update the next selected job.
 -- Made it so a QOS with UsageFactor set to 0 makes jobs running under
    that QOS not add time to fairshare or association/QOS limits.
 -- Fixed issue where QOS priority wasn't re-normalized until a slurmctld
    restart when a QOS priority was changed.
 -- Fix sprio to use calculated numbers from slurmctld instead of
    calculating its own numbers.
 -- BLUEGENE - Fixed race condition with preemption where, with the right
    timing, the slurmctld could lock up when preempting jobs to run others.
 -- BLUEGENE - Fixed epilog to wait until the MMCS job is totally complete
    before finishing.
 -- BLUEGENE - More robust checking for states when freeing blocks.
 -- Added correct files to the slurm.spec file for correct perl api rpm
    creation.
 -- Added flag "NoReserve" to a QOS to make it so all jobs are created
    equal within a QOS. So if larger, higher priority jobs are unable to
    run, they don't prevent smaller jobs from running, even if running the
    smaller jobs delays the start of the larger, higher priority jobs.
 -- BLUEGENE - Check preemptees one by one to preempt lower priority jobs
    first instead of first fit.
 -- In select/cons_res, correct handling of the option
    SelectTypeParameters=CR_ONE_TASK_PER_CORE.
 -- Fix for checking QOS to override partition limits; previously if not
    using QOS some limits would be overlooked.
 -- Fix bug which would terminate a job step if any of the nodes allocated
    to it were removed from the job's allocation. Now only the tasks on
    those nodes are terminated.
 -- Fixed issue when using an accounting_storage plugin directly without
    the SlurmDBD: updates weren't always sent correctly to the slurmctld.
    Appears to be OS dependent; reported by Fredrik Tegenfeldt.
* Changes in SLURM 2.2.0
========================
 -- Change format of Duration field in "scontrol show reservation" output
    from an integer number of minutes to "[days-]hours:minutes:seconds".
 -- Add support for changing the reservation of pending or running jobs.
 -- On Cray systems only, salloc sends SIGKILL to its spawned process group
    when the job allocation is revoked. Patch from Gerrit Renker, CSCS.
 -- Fix for sacctmgr to work correctly when modifying user associations
    where all the associations contain a partition.
 -- Minor mods to salloc signal handling logic: forwards more signals and
    releases allocation on real-time signals. Patch from Gerrit Renker,
    CSCS.
 -- Add salloc logic to preserve tty attributes after abnormal exit. Patch
    from Mark Grondona, LLNL.
 -- BLUEGENE - Fix for issue in dynamic mode when trying to create a block
    overlapping a block with no job running on it but in configuring state.
 -- BLUEGENE - Speedup by skipping blocks that are deallocating for other
    jobs when starting overlapping jobs in dynamic mode.
 -- Fix for sacct --state to work correctly when not specifying a start
    time.
 -- Fix upgrade process in accounting from 2.1 for clusters named
    "cluster".
 -- Export more jobacct_common symbols needed for the slurm api on some
    systems.

* Changes in SLURM 2.2.0.rc4
============================
 -- Correction in logic to spread out over time highly parallel messages to
    minimize lost messages. Affects slurmd epilog complete messages and PMI
    key-pair transmissions. Patch from Gerrit Renker, CSCS.
 -- Fixed issue where a system has unsent messages to the DBD in 2.1 and
    upgrades to 2.2. Messages are now processed correctly.
 -- Fixed issue where the assoc_mgr cache wasn't always loaded correctly if
    the slurmdbd wasn't running when the slurmctld was started.
 -- Make sure on a pthread create in step launch that the error code is
    checked. Improves fault-tolerance of slurmd.
 -- Fix setting up default acct/wckey when upgrading from 2.1 to 2.2.
 -- Fix issue with associations attached to a specific partition with no
    other association, when requesting a different partition.
 -- Added perlapi for slurmdb to the slurm.spec.
 -- In sched/backfill, correct handling of CompleteWait parameter to avoid
    backfill scheduling while a job is completing. Patch from Gerrit
    Renker, CSCS.
 -- Send message back to user when trying to launch a job on a compute node
    lacking that user ID. Patch from Hongjia Cao, NUDT.
 -- BLUEGENE - Fix it so 1 midplane clusters will run small block jobs.
 -- Add Command and WorkDir to the output of "scontrol show job" for job
    allocations created using srun (not just sbatch).
 -- Fixed sacctmgr to not add blank defaultqos' when doing a cluster dump.
 -- Correct processing of memory and disk space specifications in the
    salloc, sbatch, and srun commands to work properly with a suffix of
    "MB", "GB", etc. and not only with a single letter (e.g. "M", "G",
    etc.).
 -- Prevent nodes with suspended jobs from being powered down by SLURM.
 -- Normalized the way pidfiles are created by the slurm daemons.
 -- Fixed modifying the root association to not read in its last value when
    clearing a limit being set.
 -- Revert some recent signal handling logic from salloc so that SIGHUP
    sent after the job allocation will properly release the allocation and
    cause salloc to exit.
 -- BLUEGENE - Fix for recreating a block in a ready state.
 -- Fix debug flags for incorrect logic when dealing with DEBUG_FLAG_WIKI.
 -- Report a reservation's Nodes as a hostlist expression of all nodes
    rather than using "ALL".
 -- Fix reporting of nodes in BlueGene reservation (was reporting CPU count
    rather than cnode count in scontrol output for the NodeCnt field).

* Changes in SLURM 2.2.0.rc3
============================
 -- Modify sacctmgr command to accept plural versions of options (e.g.
    "Users" in addition to "User"). Patch from Don Albert, BULL.
 -- BLUEGENE - Make it so reset of the boot counter happens only on state
    change and not when a new job comes along.
 -- Modify srun and salloc signal handling so they can be interrupted while
    waiting for an allocation. This was broken in version 2.2.0.rc2.
 -- Fix NULL pointer reference in sview. Patch from Gerrit Renker, CSCS.
 -- Fix file descriptor leak in slurmstepd on spank_task_post_fork()
    failure. Patch from Gerrit Renker, CSCS.
 -- Fix bug in preserving job state information when upgrading from SLURM
    version 2.1. Bug introduced in version 2.2.0-pre10. Patch from Par
    Andersson, NSC.
 -- Fix bug where, if using the slurmdbd, some accounting information may
    be lost if a job wasn't able to start right away.
 -- BLUEGENE - When a prolog failure happens, the offending block is put in
    an error state.
 -- Changed the last column heading of the sshare output from "FS Usage" to
    "FairShare" and added more detail to the sshare man page.
 -- Fix bug in enforcement of reservation by account name. Used wrong index
    into an array. Patch from Gerrit Renker, CSCS.
 -- Modify job_submit/lua plugin to treat any non-zero return code from the
    job_submit and job_modify functions as an error, meaning the user
    request should be aborted.
 -- Fix bug which would permit a pending job to be started on a completing
    node when job preemption is configured.

* Changes in SLURM 2.2.0.rc2
============================
 -- Fix memory leak in job step allocation logic. Patch from Hongjia Cao,
    NUDT.
 -- If a preempted job was submitted with the --no-requeue option then
    cancel rather than requeue it.
 -- Fix for problems when adding a user for the first time to a new cluster
    with a 2.1 sacctmgr without specifying a default account.
 -- Resend TERMINATE_JOB message only to nodes that the job still has not
    terminated on. Patch from Hongjia Cao, NUDT.
 -- Treat time limit specification of "0:300" as a request for 300 seconds
    (5 minutes) instead of one minute.
 -- Modify sched/backfill plugin logic to continue working its way down the
    queue of jobs rather than restarting at the top if there are no changes
    in job, node, or partition state between runs. Patch from Hongjia Cao,
    NUDT.
 -- Improve scalability of select/cons_res logic. Patch from Matthieu
    Hautreux, CEA.
 -- Fix for possible deadlock in the slurmstepd when cancelling a job that
    is also writing a large amount of data to stderr.
 -- Fix in select/cons_res to eliminate "mem underflow" error when the
    slurmctld is reconfigured while a job is in completing state.
 -- Send a message to a user's job when its real or virtual memory limit is
    exceeded.
 -- Apply rlimits right before execing the user's task to lower the risk of
    the task exiting because the slurmstepd ran over a limit (log file
    size, etc.).
 -- Add scontrol command of "uhold <job_id>" so that an administrator can
    hold a job and let the job's owner release it. The scontrol command of
    "hold <job_id>" when executed by a SLURM administrator can only be
    released by a SLURM administrator and not the job owner.
 -- Change atoi to slurm_atoul in the mysql plugin, needed for running on
    32-bit systems in some cases.
 -- If a batch job is found to be missing from a node, make its termination
    state be NODE_FAIL rather than CANCELLED.
 -- Restored fatal error if running a BlueGene or Cray plugin from a
    controller not of that type.
 -- Make sure the jobacct_gather plugin is not shut down before messing
    with the process list.
 -- Modify signal handling in srun and salloc commands to avoid deadlock if
    the malloc function is interrupted and called again. The malloc
    function is thread safe, but not reentrant, which is a problem when
    signal handling if the malloc function itself holds a lock. Problem
    fixed by moving signal handling in those commands to a new pthread.
 -- In srun, set job abort flag on completion to handle the case when a
    user cancels a job while the node is not responding but slurmctld has
    not yet set the node down. Patch from Hongjia Cao, NUDT.
 -- Streamline the PMI logic if no duplicate keys are included in the
    key-pairs managed. Substantially improves performance for large numbers
    of tasks. Adds support for the SLURM_PMI_KVS_NO_DUP_KEYS environment
    variable. Patch from Hongjia Cao, NUDT.
 -- Fix issues with sview dealing with older versions of sview and saving
    defaults.
 -- Remove references to --mincores, --minsockets, and --minthreads from
    the salloc, sbatch and srun man pages. These options are defunct. Patch
    from Rod Schultz, Bull.
 -- Made openssl not be required to build RPMs; it is not required anymore
    since munge is the default crypto plugin.
 -- sacctmgr now has smarts to figure out if a QOS is a default QOS when
    modifying a user/acct or removing a QOS.
 -- For reservations on BlueGene systems, set and report c-node counts
    rather than midplane counts.

* Changes in SLURM 2.2.0.rc1
============================
 -- Add show_flags parameter to the slurm_load_block_info() function.
 -- perlapi has been brought up to speed courtesy of Hongjia Cao. (Make
    sure to run 'make clean' if building in a different dir than source.)
 -- Fixed regression in pre12 in crypto/munge when running with
    --enable-multiple-slurmd which would cause the slurmd's to core.
 -- Fixed regression where cpu count wasn't figured out correctly for
    steps.
 -- Fixed issue when using old mysql that can't handle a '.' in the table
    name.
 -- Mysql plugin works correctly without the SlurmDBD.
 -- Added ability to query the batch step with sstat. Currently no
    accounting data is stored for the batch step, but the internals are in
    place if we decide to do that in the future.
 -- Fixed some backwards compatibility issues with 2.2 talking to 2.1.
 -- Fixed regression where modifying associations didn't get sent to the
    slurmctld.
 -- Made sshare sort things the same way sacctmgr list assoc does
    (alphabetically).
 -- Fixed issue with default accounts being set up correctly.
 -- Changed sorting in the slurmctld so sshare output is similar to that of
    sacctmgr list assoc.
 -- Modify reservation logic so that daily and weekly reservations maintain
    the same time when daylight savings time starts or ends in the interim.
 -- Edit to make reservations handle updates to associations.
 -- Added the derived exit code to the slurmctld job record and the derived
    exit code and string to the job record in the SLURM db.
 -- Added slurm-sjobexit RPM for SLURM job exit code management tools.
 -- Added ability to use sstat/sacct against the batch step.
 -- Added OnlyDefaults option to sacctmgr list associations.
 -- Modified the fairshare priority formula to F = 2**(-Ue/S).
 -- Modify the PMI functions key-pair exchange function to support a 32-bit
    counter for larger job sizes. Patch from Hongjia Cao, NUDT.
 -- In sched/builtin - Make the estimated job start time logic faster
    (borrowed new logic from sched/backfill and added a pthread) and more
    accurate.
 -- In select/cons_res fix bug that could result in a job being allocated
    zero CPUs on some nodes. Patch from Hongjia Cao, NUDT.
 -- Fix bug in sched/backfill that could set the expected start time of a
    job too far in the future.
 -- Added ability to enforce new limits given to associations/QOS on
    pending jobs.
 -- Increase max message size for the slurmdbd from 1000000 to
    16*1024*1024.
 -- Increase number of active threads in the slurmdbd from 50 to 100.
 -- Fixed small bug in src/common/slurmdb_defs.c reported by Bjorn-Helge
    Mevik.
 -- Fixed sacctmgr's ability to query associations against QOS again.
 -- Fixed sview show config on non-BlueGene systems.
 -- Fixed bug in selecting jobs based on the sacct -N option.
 -- Fix bug that prevented job Epilog from running more than once on a node
    if a job was requeued and started no job steps.
 -- Fixed issue where node index wasn't stored correctly when using the
    DBD.
 -- Enable srun's use of the --nodes option with --exclusive (previously
    the --nodes option was ignored).
 -- Added UsageThreshold and Flags to the QOS object.
 -- Patch to improve threadsafeness in the mysql plugins.
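The modified fairshare formula F = 2**(-Ue/S) noted above can be
illustrated with a minimal sketch. The function and parameter names below
are assumptions for illustration only (not SLURM internals); Ue is the
user's effective normalized usage and S the normalized shares:

```python
# Minimal sketch of the fairshare factor F = 2**(-Ue/S).
# Names are illustrative only, not SLURM's actual implementation.
def fairshare_factor(usage_effective: float, shares_norm: float) -> float:
    """Return the fair-share priority factor in the range (0, 1]."""
    if shares_norm <= 0.0:
        return 0.0  # no shares assigned -> no fair-share priority
    return 2.0 ** (-usage_effective / shares_norm)

# A user with no usage gets the full factor (1.0); a user who has
# consumed exactly their share gets 0.5; heavy usage decays toward 0.
print(fairshare_factor(0.0, 0.5))   # 1.0
print(fairshare_factor(0.5, 0.5))   # 0.5
```

The base-2 exponential gives the convenient property that each doubling of
usage relative to shares halves the factor.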
 -- Add support for fair-share scheduling to be based upon resource use at
    the level of bank accounts and ignore use of individual users. Patch by
    Par Andersson, National Supercomputer Centre, Sweden.

* Changes in SLURM 2.2.0.pre12
==============================
 -- Log if Prolog or Epilog runs for longer than MessageTimeout / 2.
 -- Log the RPC number associated with messages from slurmctld that time
    out.
 -- Fix bug in select/cons_res logic when job allocation includes
    --overcommit and --ntasks-per-node options and the node has fewer CPUs
    than the count specified by --ntasks-per-node.
 -- Fix bug in gang scheduling and job preemption logic so that preempted
    jobs get resumed properly after a slurmctld hot-start.
 -- Fix bug in select/linear handling of gang scheduled jobs that could
    result in a run_job_cnt underflow error message.
 -- Fix bug in gang scheduling logic to properly support partitions added
    using the scontrol command.
 -- Fix a segmentation fault in sview where the 'excluded_partitions' field
    was set to NULL, caused by the absence of ~/.slurm/sviewrc.
 -- Rewrote some calls to is_user_any_coord() in
    src/plugins/accounting_storage modules to make use of
    is_user_any_coord()'s return value.
 -- Add configure option of --with-dimensions=#.
 -- Modify srun ping logic so that srun is only considered not responsive
    if three ping messages are not responded to. Patch from Hongjia Cao
    (NUDT).
 -- Preserve a node's ReasonTime field after the scontrol reconfig command.
    Patch from Hongjia Cao (NUDT).
 -- Added the authority for users with AdminLevel defined in the SLURM db
    (Operators and Admins) and account coordinators to invoke commands that
    affect jobs, reservations, nodes, etc.
 -- Fix for slurmd restart on a completing node with no tasks to get the
    correct state, completing. Patch from Hongjia Cao (NUDT).
 -- Prevent scontrol setting a node's Reason="". Patch from Hongjia Cao
    (NUDT).
 -- Add new functions hostlist_ranged_string_malloc,
    hostlist_ranged_string_xmalloc, hostlist_deranged_string_malloc, and
    hostlist_deranged_string_xmalloc which will allocate memory as needed.
 -- Make the slurm commands support both the --cluster and --clusters
    option. Previously, some commands supported one of those options, but
    not the other.
 -- Fix bug when resizing a job that has steps running on some of its
    nodes. Avoid killing the job step on remaining nodes. Patch from Rod
    Schultz (BULL). Also fix bug related to tracking the CPUs allocated to
    job steps on each node after releasing some nodes from the job's
    allocation.
 -- Applied patch from Rod Schultz / Matthieu Hautreux to keep the
    Node-to-Host cache from becoming corrupted when a hostname cannot be
    resolved.
 -- Export more symbols in libslurm for job and node state information
    translation (numbers to strings). Patch from Hongjia Cao, NUDT.
 -- Add logic to retry sending RESPONSE_LAUNCH_TASKS messages from slurmd
    to srun. Patch from Hongjia Cao, NUDT.
 -- Modify bit_unfmt_hexmask() and bit_unfmt_binmask() functions to clear
    the bitmap input before setting the bits indicated in the input string.
 -- Add SchedulerParameters option of bf_window to control how far into the
    future the backfill scheduler will look when considering jobs to start.
    The default value is one day. See "man slurm.conf" for details.
 -- Fix bug that can result in duplicate job termination records in
    accounting for job termination when slurmctld restarts or reconfigures.
 -- Modify plugin and library logic as needed to support use of the
    function slurm_job_step_stat() from user commands.
 -- Fix race condition in which PrologSlurmctld failure could cause
    slurmctld to abort.
 -- Fix bug preventing users in secondary user groups from being granted
    access to partitions configured with AllowGroups.
 -- Added support for a default account and wckey per cluster within
    accounting.
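As context for the bf_window entry above, a slurm.conf fragment might look
like the following. The values shown are illustrative only; consult
"man slurm.conf" for the authoritative syntax:

```
# slurm.conf excerpt (illustrative): bound how far into the future,
# in minutes, the backfill scheduler plans job start times.
SchedulerType=sched/backfill
SchedulerParameters=bf_window=1440    # one day (the stated default)
```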
 -- Modified select/cons_res plugin so that if MaxMemPerCPU is configured
    and a job specifies its memory requirement, then more CPUs than
    requested will automatically be allocated to a job to honor the
    MaxMemPerCPU parameter.
 -- Added the derived_ec (exit_code) member to job_info_t. exit_code
    captures the exit code of the job script (or salloc) while derived_ec
    contains the highest exit code of all the job steps.
 -- Added SLURM_JOB_EXIT_CODE and SLURM_JOB_DERIVED_EC variables to the
    EpilogSlurmctld environment.
 -- More work done on the accounting_storage/pgsql plugin, still beta.
    Patch from Hongjia Cao (NUDT).
 -- Major updates to sview from Dan Rusak (Bull), including:
    - Persistent option selections for each tab page
    - Clean up topology in grids
    - Leverage AllowGroups and Hidden options
    - Cascade full-info popups for ease of selection
 -- Add locks around the MySQL calls for proper operation if the non-thread
    safe version of the MySQL library is used.
 -- Remove libslurm.a, libpmi.a and libslurmdb.a from the SLURM RPM. These
    static libraries are not generally usable.
 -- Fixed bug in sacctmgr when zeroing raw usage, reported by Gerrit
    Renker.

* Changes in SLURM 2.2.0.pre11
==============================
 -- Permit a regular user to change the partition of a pending job.
 -- Major re-write of the job_submit/lua plugin to pass pointers to
    available partitions and use lua metatables to reference the job and
    partition fields.
 -- Add support for several new trigger types: SlurmDBD failure/restart,
    Database failure/restart, Slurmctld failure/restart.
 -- Add support for the SLURM_CLUSTERS environment variable in the sbatch,
    sinfo, and squeue commands.
 -- Modify the sinfo and squeue commands to report the state of multiple
    clusters if the --clusters option is used.
 -- Added printf __attribute__ qualifiers to info, debug, ... to help
    prevent bad/incorrect parameters being sent to them. Original patch
    from Eygene Ryabinkin (Russian Research Centre).
 -- Fix bug in slurmctld job completion logic when nodes allocated to a
    completing job are re-booted. Patch from Hongjia Cao (NUDT).
 -- In slurmctld's node record data structure, rename "hilbert_integer" to
    "node_rank".
 -- Add topology/node_rank plugin to sort nodes based upon rank loaded from
    BASIL on Cray computers.
 -- Fix memory leak in the auth/munge and crypto/munge plugins in the case
    of some failure modes.

* Changes in SLURM 2.2.0.pre10
==============================
 -- Fix issue when EnforcePartLimits=yes in slurm.conf: all jobs where no
    node count was specified would be seen to have maxnodes=0, which would
    not allow the jobs to run.
 -- Fix issue so that when not suspending a job, the gang scheduler uses
    the correct kill procedure.
 -- Fixed some issues when dealing with jobs from a 2.1 system so they live
    after an upgrade.
 -- In srun, log if --cpu_bind options are specified, but not supported by
    the current system configuration.
 -- Various patches from Hongjia Cao dealing with bugs found in sacctmgr
    and the slurmdbd.
 -- Fix bug when changing the nodes allocated to a running job and some
    node names specified are invalid; avoid invalid memory reference.
 -- Fixed filename substitution of %h and %n based on patch from Ralph
    Bean.
 -- Added better job sorting logic when preempting jobs with QOS.
 -- Log the IP address and port number for some communication errors.
 -- Fix bug in select/cons_res when the --cpus-per-task option is used,
    which could oversubscribe resources.
 -- In srun, do not implicitly set the job's maximum node count based upon
    a required hostlist.
 -- Avoid running the HealthCheckProgram on non-responding nodes rather
    than DOWN nodes.
 -- Fix bug in handling of poll() functions on OS X (SLURM was ignoring
    POLLIN if the POLLHUP flag was set at the same time).
 -- Pulled Cray logic out of common/node_select.c into its own select/cray
    plugin; cons_res is the default. To use linear, add 'Linear' to
    SelectTypeParameters.
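The select/cray entry above implies a slurm.conf arrangement along these
lines. This fragment is illustrative only (the exact keyword accepted by
SelectTypeParameters should be confirmed against the release's
documentation):

```
# slurm.conf excerpt (illustrative, Cray systems): the new select/cray
# plugin uses cons_res underneath by default; per the note above,
# 'Linear' can be added to SelectTypeParameters to use linear instead.
SelectType=select/cray
SelectTypeParameters=Linear   # omit for the default cons_res behavior
```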
-- Fixed bug where resizing jobs did not correctly set used limits.
-- Change sched/backfill default time interval to 30 seconds and defer attempts to backfill schedule if slurmctld has more than 5 active RPCs. General improvements in logic scalability.
-- Add SchedulerParameters option of default_sched_depth=# to control how many jobs in the queue should be tested for attempted scheduling when a job completes or other routine events occur. The default value is 100 jobs. The full job queue is tested on a less frequent basis. This option can dramatically improve performance on systems with thousands of queued jobs.
-- Gres/gpu now sets the CUDA_VISIBLE_DEVICES environment variable to control which GPU devices should be used for each job or job step when CUDA version 3.1+ is used. NOTE: SLURM's generic resource support is still under development.
-- Modify select/cons_res to pack jobs onto allocated nodes differently and minimize system fragmentation. For example, on nodes with 8 CPUs each, a job needing 10 CPUs will now ideally be allocated 8 CPUs on one node and 2 CPUs on another node. Previously the job would ideally have been allocated 5 CPUs on each node, fragmenting the unused resources more.
-- Modified the behavior of update_job() in job_mgr.c to return when the first error is encountered instead of continuing with more job updates.
-- Removed all references to the following slurm.conf parameters, all of which have been removed or replaced since version 2.0 or earlier: HashBase, HeartbeatInterval, JobAcctFrequency, JobAcctLogFile (instead use AccountingStorageLoc), JobAcctType, KillTree, MaxMemPerTask, and MpichGmDirectSupport.
-- Fix bug in slurmctld restart logic that improperly reported jobs had invalid features: "Job 65537 has invalid feature list: fat".
-- BLUEGENE - Removed thread pool for destroying blocks. It turns out the memory leak we were concerned about for creating and destroying threads in a plugin doesn't exist anymore.
   This increases throughput dramatically, allowing multiple jobs to start at the same time.
-- BLUEGENE - Removed thread pool for starting and stopping jobs, for similar reasons as noted above.
-- BLUEGENE - Handle blocks that never deallocate.

* Changes in SLURM 2.2.0.pre9
=============================
-- sbatch can now submit jobs to multiple clusters and run on the earliest available.
-- Fix bug introduced in pre8 that prevented job dependencies and job triggers from working without the --enable-debug configure option.
-- Replaced slurm_addr with slurm_addr_t.
-- Replaced slurm_fd with slurm_fd_t.
-- Skeleton code added for BlueGeneQ.
-- Jobs can now be submitted to multiple partitions (job queues) and use the one permitting the earliest start time.
-- Change slurmdb_coord_table back to acct_coord_table to keep consistent with < 2.1.
-- Introduced locking system similar to that in the slurmctld for the assoc_mgr.
-- Added ability to change a user's name in accounting.
-- Restore squeue support for "%G" format (group id) accidentally removed in 2.2.0.pre7.
-- Added preempt_mode option to QOS.
-- Added a grouping=individual option for sreport size reports.
-- Added remove_qos logic to handle jobs running under a QOS that was removed.
-- scancel now exits with a 1 if any job is non-existent when canceling.
-- Better handling of select plugins that don't exist on various systems for cross-cluster communication. Slurmctld, slurmd, and slurmstepd now only load the default select plugin as well.
-- Better error handling when loading plugins.
-- Prevent scontrol from aborting if getlogin() returns NULL.
-- Prevent scontrol segfault when there are hidden nodes.
-- Prevent srun segfault after task launch failure.
-- Added job_submit/lua plugin.
-- Fixed sinfo on a BlueGene system to print correctly the output for: sinfo -e -o "%9P %6m %.4c %.22F %f"
-- Add scontrol commands "hold" and "release" to simplify setting a job's priority to 0 or 1. Also tests that the job is in pending state.
-- Increase maximum node list size (for incoming RPC) from 1024 bytes to 64k.
-- In the backup slurmctld, purge triggers before recovering trigger state to avoid duplicate entries.
-- Fix bug in sacct processing of --fields= option.
-- Fix bug in checkpoint/blcr for jobs spanning multiple nodes, introduced when changing some variable names in version 2.2.0.pre5.
-- Removed the vestigial set_max_cluster_usage() function from the Priority Plugin API.
-- Modify the output of "scontrol show job" for the ReqS:C:T= field. Fields not specified by the user will be reported as "*" instead of 65534.
-- Added DefaultQOS option for an association.
-- BLUEGENE - Added -B option to the slurmctld to clear created blocks from the system on start.
-- BLUEGENE - Added option to scontrol & sview to recreate existing blocks.
-- Fixed flags for returning messages to use the correct munge key when going cross-cluster.
-- BLUEGENE - Added option to scontrol & sview to resume blocks in an error state instead of just freeing them.
-- sview patched to allow multiple row selection of jobs, patch from Dan Rusak.
-- Lower default slurmctld server thread count from 1024 to 256. Some systems process threads on a last-in first-out basis and the high thread count was causing unexpectedly high delays for some RPCs.
-- Added to sacctmgr the ability for admins to reset the raw usage of a user or account.
-- Improved the efficiency of a few lines in sacctmgr.

* Changes in SLURM 2.2.0.pre8
=============================
-- Add DebugFlags parameter of "Backfill" for sched/backfill detailed logging.
-- Add DebugFlags parameter of "Gang" for detailed logging of gang scheduling activities.
-- Add DebugFlags parameter of "Priority" for detailed logging of priority multifactor activities.
-- Add DebugFlags parameter of "Reservation" for detailed logging of advanced reservations.
-- Add run time to mail message upon job termination and queue time to mail message upon job begin.
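The ReqS:C:T= display change noted above can be sketched as follows; the 65534 sentinel is from the entry itself, while the helper below is hypothetical, not scontrol's actual code:

```python
NO_VAL16 = 65534  # 16-bit "not specified" sentinel mentioned in the entry

def fmt_req_sct(sockets, cores, threads):
    # Fields the user did not specify now print as "*" instead of the
    # raw sentinel value.
    star = lambda v: "*" if v == NO_VAL16 else str(v)
    return "ReqS:C:T=%s:%s:%s" % (star(sockets), star(cores), star(threads))
```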
-- Add email notification option for job requeue.
-- Generate a fatal error if the srun --relative option is used when not within an existing job allocation.
-- Modify the meaning of InactiveLimit slightly. It will now cancel the job allocation created using the salloc or srun command if those commands cease responding for the InactiveLimit, regardless of any running job steps. This parameter will no longer affect jobs spawned using sbatch.
-- Remove AccountingStoragePass and JobCompPass from configuration RPC and "scontrol show config" command output. The use of SlurmDBD is still strongly recommended as SLURM will have limited database functionality or protection otherwise.
-- Add sbatch options of --export and SBATCH_EXPORT to control which environment variables (if any) get propagated to the spawned job. This is particularly important for jobs that are submitted on one cluster and run on a different cluster.
-- Fix bug in select/linear when used with gang scheduling and there are preempted jobs at the time slurmctld restarts, which could result in over-subscribing resources.
-- Added tracking in accounting of the QOS a job runs with.
-- Fix for correctly handling jobs that resize, and for reporting correct stats on a job after it finishes.
-- Modify gang scheduler so that when SelectTypeParameter=CR_CPUS and task affinity is enabled, it keeps track of the individual CPUs allocated to jobs rather than just the count of CPUs allocated (which could overcommit specific CPUs for running jobs).
-- Modify select/linear plugin data structures to eliminate underflow errors for the exclusive_cnt and tot_job_cnt variables (previously happened when slurmctld reconfigured while the job was in completing state).
-- Change slurmd's working directory (and location of core files) to match that of the slurmctld daemon: the same directory used for log files, SlurmdLogFile (if specified with an absolute pathname), otherwise the directory used to save state, SlurmdSpoolDir.
-- Add sattach support for the --pty option.
-- Modify slurmctld communications logic to accept incoming messages on more than one port for improved scalability.
-- Add SchedulerParameters option of "defer" to avoid trying to schedule a job at submission time, but to attempt scheduling many jobs at once for improved performance under heavy load.
-- Correct logic controlling slurmctld thread limit, eliminating check of RLIMIT_STACK.
-- Make slurmctld's trigger logic more robust in the event that job records get purged before their trigger can be processed (e.g. MinJobAge=1).
-- Add support for users to hold/release their own jobs (submit the job with the srun/sbatch --hold/-H option, or use "scontrol update jobid=# priority=0" to hold and "scontrol update jobid=# priority=1" to release).
-- Added ability for sacct to query jobs by QOS and a range of timelimits.
-- Added ability for sstat to query the pids of running steps.
-- Support time specification in UTS format with a prefix of "uts" (e.g. "sbatch --begin=uts458389988 my.script").

* Changes in SLURM 2.2.0.pre7
=============================
-- Fixed issue with sacctmgr so that querying against a non-existent cluster works the same way as in 2.1.
-- Added infrastructure to support allocation of generic node resources (gres).
   - Modified select/linear and select/cons_res plugins to allocate resources at the level of a job without oversubscription.
   - Get sched/backfill operating with gres allocations.
   - Get gres configuration changes (reconfiguration) working.
   - Have job steps allocate resources.
   - Modified job step credential to include the job's and step's gres allocation details.
   - Integrate with the HWLOC library to identify GPUs and NICs configured on each node.
-- SLURM commands (squeue, sinfo, etc...) can now go cross-cluster on like Linux systems. Cross-cluster from BlueGene to Linux and such should work fine, even sview.
-- Added the ability to configure PreemptMode on a per-partition basis.
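The "uts" begin-time form noted above (e.g. --begin=uts458389988) is simply seconds since the Unix epoch. A minimal parsing sketch; the helper name is an assumption, not Slurm's parser:

```python
from datetime import datetime, timezone

def parse_uts_begin(spec):
    # "uts" prefix followed by seconds since the Unix epoch (UTC).
    if not spec.startswith("uts"):
        raise ValueError("only the uts form is handled in this sketch")
    return datetime.fromtimestamp(int(spec[3:]), tz=timezone.utc)
```

For example, "uts458389988" resolves to a date in mid-1984.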
-- Change slurmctld's default thread limit count to 1024, but adjust that down as needed based upon the process's resource limits.
-- Removed the non-functional "SystemCPU" and "TotalCPU" reporting fields from sstat and updated the man page.
-- Correct location of apbasil command on Cray XT systems.
-- Fixed bug in MinCPU and AveCPU calculations in the sstat command.
-- Send message to srun when the Prolog takes too long (MessageTimeout) to complete.
-- Change timeout for socket connect() to be half of the configured MessageTimeout.
-- Added high-throughput computing web page with configuration guidance.
-- Use more srun sockets to process incoming PMI (MPICH2) connections for better scalability.
-- Added DebugFlags for the select/bluegene plugin: DEBUG_FLAG_BG_PICK, DEBUG_FLAG_BG_WIRES, DEBUG_FLAG_BG_ALGO, and DEBUG_FLAG_BG_ALGO_DEEP.
-- Remove vestigial job record field "kill_on_step_done" (internal to the slurmctld daemon only).
-- For MPICH2 jobs: clear PMI state between job steps.

* Changes in SLURM 2.2.0.pre6
=============================
-- sview - added ability to see database configuration.
-- sview - added ability to add/remove visible tabs.
-- sview - change the way grid highlighting takes place on selected objects.
-- Added infrastructure to support allocation of generic node resources.
   - Added node configuration parameter of Gres=.
   - Added ability to view/modify a node's gres using scontrol, sinfo and sview.
   - Added salloc, sbatch and srun --gres option.
   - Added ability to view a job or job step's gres using scontrol, squeue and sview.
   - Added new configuration parameter GresPlugins to define plugins used to manage generic resources.
   - Added framework for gres plugins.
   - Added DebugFlags option of "gres" for detailed debugging of gres actions.
-- Slurmd modified to log slow slurmstepd startup and note possible file system problem.
-- sview - There is now a .slurm/sviewrc created when running sview. Defaults are put in there as to how sview looks when first launched.
   You can set these by Ctrl-S or Options->Set Default Settings.
-- Add scontrol "wait_job <job_id>" option to wait for nodes to boot as needed. Useful for batch jobs (in Prolog, PrologSlurmctld or the script) if powering down idle nodes.
-- Added salloc and sbatch option --wait-all-nodes. If set non-zero, job initiation will be delayed until all allocated nodes have booted. Salloc will log the delay with the messages "Waiting for nodes to boot" and "Nodes are ready for job".
-- The priority/multifactor plugin now takes into consideration the size of a job in CPUs as well as its size in nodes when computing the job size factor. Previously only nodes were considered.
-- When using the SlurmDBD, messages waiting to be sent will be combined and sent in one message.
-- Remove srun's --core option. Move the logic to an optional SPANK plugin (currently in the contribs directory, but we plan to distribute it through http://code.google.com/p/slurm-spank-plugins/).
-- Patch adding CR_CORE_DEFAULT_DIST_BLOCK as a select option to lay out jobs using block layout across cores within each node instead of cyclic, which was previously the default.
-- Accounting - When removing associations, if jobs are running those jobs must be killed before proceeding. Previously the jobs were killed automatically, causing user confusion over what is most likely an admin's mistake.
-- sview - color column keeps reference color when highlighting.
-- Configuration parameter MaxJobCount changed from a 16-bit to a 32-bit field. The default MaxJobCount was changed from 5,000 to 10,000.
-- SLURM commands (squeue, sinfo, etc...) can now go cross-cluster on like Linux systems. Cross-cluster from BlueGene to Linux and such does not currently work. You can submit jobs with sbatch. Salloc and srun are not cross-cluster compatible, and given their need to talk to actual compute nodes they will likely never be.
-- salloc modified to forward SIGTERM to the spawned program.
-- In sched/wiki2 (for Moab support) - Add GRES and WCKEY fields to MODIFYJOBS and GETJOBS commands. Add GRES field to GETNODES command.
-- In struct job_descriptor and struct job_info: rename min_sockets to sockets_per_node, min_cores to cores_per_socket, and min_threads to threads_per_core (the values are not minimums, but represent the target values).
-- Fixed bug in clearing a partition's DisableRootJobs value, reported by Hongjia Cao.
-- Purge (or ignore) terminated jobs in a more timely fashion based upon the MinJobAge configuration parameter. Small values for MinJobAge should improve responsiveness for high job throughput.

* Changes in SLURM 2.2.0.pre5
=============================
-- Modify commands to accept time formats with one or two digit hour values (e.g. 8:00 or 08:00 or 8:00:00 or 08:00:00).
-- Modify time parsing logic to accept "minute", "hour", "day", and "week" in addition to the currently accepted "minutes", "hours", etc.
-- Add slurmd option of "-C" to print actual hardware configuration and exit.
-- Pass the EnforcePartLimits configuration parameter from slurmctld so user commands see the correct value instead of always "NO".
-- Modify partition data structures to replace the default_part, disable_root_jobs, hidden and root_only fields with a single field called "flags", populated with the flags PART_FLAG_DEFAULT, PART_FLAG_NO_ROOT, PART_FLAG_HIDDEN and/or PART_FLAG_ROOT_ONLY. This is a more flexible solution and makes for smaller data structures.
-- Add job state flag of JOB_RESIZING. This will only exist when a job's accounting record is being written immediately before or after it changes size. This permits job accounting records to be written for a job at each size.
-- Make calls to jobcomp and accounting_storage plugins before and after a job changes size (with the job state being JOB_RESIZING). All plugins write a record for the job at each size, with intermediate job states being JOB_RESIZING.
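The relaxed time format noted above (one- or two-digit hours, optional seconds) can be sketched with a small parser; this is an illustrative sketch, not Slurm's actual parsing code:

```python
import re

def parse_walltime_seconds(s):
    # Accept H:MM, HH:MM, H:MM:SS and HH:MM:SS; returns total seconds.
    m = re.fullmatch(r"(\d{1,2}):(\d{2})(?::(\d{2}))?", s)
    if m is None:
        raise ValueError("unrecognized time format: %s" % s)
    hours = int(m.group(1))
    minutes = int(m.group(2))
    seconds = int(m.group(3) or 0)
    return hours * 3600 + minutes * 60 + seconds
```

So "8:00" and "08:00" parse identically, as do "8:00:00" and "08:00:00".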
-- When changing a job's size using scontrol, generate a script that can be executed by the user to reset SLURM environment variables.
-- Modify select/linear and select/cons_res to use resources released by job resizing.
-- Added to contribs the foundation for a Perl extension for the slurmdb library.
-- Add new configuration parameter JobSubmitPlugins which provides a mechanism to set default job parameters or perform other site-configurable actions at job submit time.
-- Better postgres support for accounting, still beta.
-- Speed up job start when using the slurmdbd.
-- Forward step failure reason back to slurmd; before, in some cases only SLURM_FAILURE would be returned.
-- Changed squeue to fail when passed invalid -o or -S specifications.

* Changes in SLURM 2.2.0.pre4
=============================
-- Add support for a PropagatePrioProcess configuration parameter value of 2 to restrict spawned task nice values to that of the slurmd daemon plus 1. This ensures that the slurmd daemon always has a higher scheduling priority than spawned tasks.
-- Add support in slurmctld, slurmd and slurmdbd for an option of "-n <value>" to reset the daemon's nice value.
-- Fixed slurm_load_slurmd_status and slurm_pid2jobid to work correctly when multiple slurmds are in use.
-- Altered srun to set max_nodes to min_nodes if not set when doing an allocation, to mimic what salloc and sbatch do. When running a step, if the max isn't set it remains unset.
-- Applied patch from David Egolf (David.Egolf@Bull.com). Added the ability to purge/archive accounting data on a day or hour basis; previously it was only available on a monthly basis.
-- Add support for maximum node count in job step request.
-- Fix bug in CPU count logic for job step allocation (used count of CPUs per node rather than CPUs allocated to the job).
-- Add new configuration parameters GroupUpdateForce and GroupUpdateTime.
   See "man slurm.conf" for details about how these control when slurmctld updates its information of which users are in the groups allowed to use partitions.
-- Added "sacctmgr list events" which will list events that have happened on clusters in accounting.
-- Permit a running job to shrink in size using a command of "scontrol update JobId=# NumNodes=#" or "scontrol update JobId=# NodeList=<names>". Subsequent job steps must explicitly specify an appropriate node count to work properly.
-- Added resize_time field to job record noting the time of the latest job size change (to be used for accounting purposes).
-- sview/smap now hide hidden partitions and their jobs by default, with an option to display them.

* Changes in SLURM 2.2.0.pre3
=============================
-- Refine support for TotalView partial attach. Add parameter to configure program of "--enable-partial-attach".
-- In select/cons_res, the count of CPUs on required nodes was formerly ignored in enforcing the maximum CPU limit. Also enforce the maximum CPU limit when the topology/tree plugin is configured (previously ignored).
-- In select/cons_res, allocate cores for a job using a best-fit approach.
-- In select/cons_res, for jobs that can run on a single node, use a best-fit packing approach.
-- Add support for new partition states of DRAIN and INACTIVE and new partition option of "Alternate" (alternate partition to use for jobs submitted to partitions that are currently in a state of DRAIN or INACTIVE).
-- Add group membership cache. This can substantially speed up slurmctld startup or reconfiguration if many partitions have AllowGroups configured.
-- Added slurmdb API for accessing SLURM DB information.
-- In select/linear: Modify data structures for better performance and to avoid underflow error messages when slurmctld restarts while jobs are in completing state.
-- Added hash of slurm.conf so when nodes check in to the controller it can verify the slurm.conf is the same as the one it is running.
   If not, an error message is displayed. To silence this message add NO_CONF_HASH to DebugFlags in your slurm.conf.
-- Added error code ESLURM_CIRCULAR_DEPENDENCY and prevent circular job dependencies (e.g. job 12 dependent upon job 11 AND job 11 dependent upon job 12).
-- Add BootTime and SlurmdStartTime to available node information.
-- Fixed moab_2_slurmdb to work correctly under the new database schema.
-- Slurmd will drain a compute node when the SlurmdSpoolDir is full.

* Changes in SLURM 2.2.0.pre2
=============================
-- Add support for spank_get_item() to get S_STEP_ALLOC_CORES and S_STEP_ALLOC_MEM. Support will remain for S_JOB_ALLOC_CORES and S_JOB_ALLOC_MEM.
-- Kill individual job steps that exceed their memory limit rather than killing an entire job if one step exceeds its memory limit.
-- Added configuration parameter VSizeFactor to enforce virtual memory limits for jobs and job steps as a percentage of their real memory allocation.
-- Add scontrol ability to update a job step's time limits.
-- Add scontrol ability to update a job's NumCPUs count.
-- Add --time-min options to salloc, sbatch and srun. The scontrol command has been modified to display and modify the new field. The sched/backfill plugin has been changed to alter time limits of jobs with the --time-min option if doing so permits earlier job initiation.
-- Add support for TotalView symbol MPIR_partial_attach_ok, with srun support to release processes which TotalView does not attach to.
-- Add new option for SelectTypeParameters of CR_ONE_TASK_PER_CORE. This option will allocate one task per core by default. Without this option, by default one task will be allocated per thread on nodes with more than one ThreadsPerCore configured.
-- Avoid separate accounting when a current pid corresponds to a Light Weight Process (POSIX thread) appearing in the /proc directory. Only account for the original process (pid==tgid) to avoid accounting for memory use more than once.
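The VSizeFactor behavior described in the pre2 entries above amounts to a simple percentage computation; a minimal sketch with assumed names and MB units, not Slurm's enforcement code:

```python
def vsize_limit_mb(real_mem_mb, vsize_factor_percent):
    # Virtual memory limit as a percentage of the real memory allocation;
    # e.g. VSizeFactor=150 permits 50% more virtual than real memory.
    return real_mem_mb * vsize_factor_percent // 100
```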
-- Add proctrack/cgroup plugin which uses Linux control groups (aka cgroups) to track processes on Linux systems having this feature enabled (kernel >= 2.6.24).
-- Add logging of license transactions including job_id.
-- Add configuration parameters SlurmSchedLogFile and SlurmSchedLogLevel to support writing scheduling events to a separate log file.
-- Added contribs/web_apps/chart_stats.cgi, a web app that invokes sreport to retrieve from the accounting storage db a user's request for job usage or machine utilization statistics and charts the results to a browser.
-- Massive change to the schema in the accounting_storage/mysql plugin. When starting the slurmdbd the conversion process may take a few minutes. You might also see some errors such as 'error: mysql_query failed: 1206 The total number of locks exceeds the lock table size'. If you get this, do not worry; it happens because the setting of innodb_buffer_pool_size in your my.cnf file is unset or set too low. A decent value there should be 64M or higher depending on the system you are running on. See RELEASE_NOTES for more information. Setting this and then restarting the mysqld and slurmdbd will put things right. After this change we have noticed a 50-75% increase in performance with sreport and sacct.
-- Fix for MaxCPUs to honor partitions of 1 node that have more than the maximum CPUs for a job.
-- Add support for "scontrol notify <jobid> <message>" to work for batch jobs.

* Changes in SLURM 2.2.0.pre1
=============================
-- Added RunTime field to scontrol show job report.
-- Added SLURM_VERSION_NUMBER and removed SLURM_API_VERSION from slurm/slurm.h.
-- Added support to handle communication with SLURM 2.1 clusters. Jobs should not be lost in the future when upgrading to higher versions of SLURM.
-- Added withdeleted options for listing clusters, users, and accounts.
-- Remove PLPA task affinity functions due to that package being deprecated.
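The innodb_buffer_pool_size advice in the schema-change entry above corresponds to a my.cnf fragment like the following; the 64M figure comes from the entry, and larger values are reasonable on machines with spare RAM:

```
[mysqld]
innodb_buffer_pool_size=64M
```

After editing my.cnf, restart mysqld and then slurmdbd as the entry describes.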
-- Preserve current partition state information and node Feature and Weight information rather than use contents of the slurm.conf file after slurmctld restart with -R option or SIGHUP. Replace information with contents of slurm.conf after slurmctld restart without -R or "scontrol reconfigure". See the RELEASE_NOTES file for more details.
-- Modify SLURM's PMI library (for MPICH2) to properly execute an executable program stand-alone (single MPI task launched without srun).
-- Made GrpCPUs and MaxCPUs limits work for select/cons_res.
-- Moved all SQL-dependent plugins into a separate rpm, slurm-sql. This should be needed only where a connection to a database is required (i.e. where the slurmdbd is running).
-- Add command line option "no_sys_info" to PAM module to suppress system logging of "access granted for user ..."; access denied and other errors will still be logged.
-- sinfo -R now has the user and timestamp in separate fields from the reason.
-- Much functionality has been added to accounting_storage/pgsql. The plugin is still in a very beta state. It is still highly advised to use the mysql plugin, but if you feel like living on the edge, or just really prefer postgres over mysql for some reason, here you go. (Work done primarily by Hongjia Cao, NUDT.)

* Changes in SLURM 2.1.17
=========================
-- Correct format of --begin reported in salloc, sbatch and srun --help message.
-- Correct logic for regular users to increase the nice value of their own jobs.

* Changes in SLURM 2.1.16
=========================
-- Fixed minor warnings from gcc-4.5.
-- Fixed initialization of accounting_storage_enforce in the slurmctld.
-- Fixed bug where if GrpNodes was lowered while pending jobs existed and were above the limit, the slurmctld would seg fault.
-- Fixed minor memory leak when an unpack error happens on an association_shares_object_t.
-- Set Lft and Rgt correctly when adding an association. Fix for regression caused in 2.1.15; cosmetic fix only.
-- Replaced optarg, which was undefined in some spots, to make sure ENV vars are set up correctly.
-- When removing an account from a cluster with sacctmgr you no longer get a list of previously deleted associations.
-- Fix to make jobcomp/(pg/my)sql work correctly when the database name is different than the default.

* Changes in SLURM 2.1.15
=========================
-- Fix bug in which backup slurmctld can purge job scripts (and kill batch jobs) when it assumes primary control, particularly when this happens multiple times in a short time interval.
-- In sched/wiki and sched/wiki2, add IWD (Initial Working Directory) to the information reported about jobs.
-- Fix bug in calculating a daily or weekly reservation start time when the reservation is updated. Patch from Per Lundqvist (National Supercomputer Centre, Linköping University, Sweden).
-- Fix bug in how job step memory limits are calculated when the --relative option is used.
-- Restore operation of srun -X option to forward SIGINT to spawned tasks without killing them.
-- Fixed a bug in calculating the root account's raw usage, reported by Par Andersson.
-- Fixed a bug in sshare displaying the account hierarchy, reported by Per Lundqvist.
-- In select/linear plugin, when a node allocated to a running job is removed from a partition, only log the event once. Fixes problem reported by Per Lundqvist.

* Changes in SLURM 2.1.14
=========================
-- Fixed coding mistakes in _slurm_rpc_resv_show() and job_alloc_info() found while reviewing the code.
-- Fix select/cons_res logic to prevent allocating resources while jobs previously allocated resources on the node are still completing.
-- Fixed typo in job_mgr.c dealing with qos instead of associations.
-- Make sure associations and qos' are initiated when added.
-- Fixed wrong initialization for wckeys in the association manager.
-- Added wiki.conf configuration parameter of HidePartitionNodes. See "man wiki.conf" for more information.
-- Add "JobAggregationTime=#" field to SchedulerParameters configuration parameter output.
-- Modify init.d/slurm and slurmdbd scripts to prevent the possible inadvertent inclusion of "." in the LD_LIBRARY_PATH environment variable. To fail, the script would need to be executed by user root or SlurmUser without the LD_LIBRARY_PATH environment variable set, and there would have to be a maliciously altered library in the working directory. Thanks to Raphael Geissert for identifying the problem. This addresses security vulnerability CVE-2010-3380.

* Changes in SLURM 2.1.13
=========================
-- Fix race condition which can set a node state to IDLE on slurmctld startup even if it has running jobs.

* Changes in SLURM 2.1.12
=========================
-- Fixes for building on OS X 10.5.
-- Fixed a few '-' without a '\' in front of them in the man pages.
-- Fixed issues in client tools where a requeued job did not get displayed correctly.
-- Update typos in doc/html/accounting.shtml, doc/html/resource_limits.shtml, doc/man/man5/slurmdbd.conf.5 and doc/man/man5/slurm.conf.5.
-- Fixed a bug in exitcode:signal display in sacct.
-- Fix bug when a request comes in for consumable resources and the -c option is used in conjunction with -O.
-- Fixed squeue -o "%h" output formatting.
-- Change select/linear message "error: job xxx: best_fit topology failure" to debug type.
-- BLUEGENE - Fix for sinfo -R to group all midplanes together in a single line for midplanes in an error state, instead of one line for each midplane.
-- Fix srun to work correctly with --uid when getting an allocation and creating a step; also fix salloc to assume identity at the correct time as well.
-- BLUEGENE - Fixed issue with jobs being refused when running dynamic mode and every job on the system happens to be the same size.
-- Removed bad #define _SLURMD_H from slurmd/get_mach_stat.h. Didn't appear to cause any problems being there, just incorrect syntax.
-- Validate the job ID when salloc or srun receive an SRUN_JOB_COMPLETE RPC, to avoid killing the wrong job if the original command exits and the port gets re-used by another command right away.
-- Fix to put node in correct state in accounting when updating it to drain from scontrol/sview.
-- BLUEGENE - Removed incorrect unlocking in error cases when starting jobs.
-- Improve logging of invalid sinfo and squeue print options.
-- BLUEGENE - Added check to libsched_if to allow root to run even outside of SLURM. This is needed when running certain blocks outside of SLURM in HTC mode.

* Changes in SLURM 2.1.11-2
===========================
-- BLUEGENE - make it so libsched_if.so is named correctly: on 'L' it is libsched_if64.so and on 'P' it is libsched_if.so.

* Changes in SLURM 2.1.11
=========================
-- BLUEGENE - fix sinfo to not get duplicate entries when running command sinfo -e -o "%9P %6m %.4c %.22F %f"
-- Fix bug that caused segv when deleting a partition with pending jobs.
-- Better error message for when trying to modify an account's name with sacctmgr.
-- Added back removal of #include "src/common/slurm_xlator.h" from select/cons_res.
-- Fix incorrect logic in global_accounting in regression tests for setting QOS.
-- BLUEGENE - Fixed issue where, when removing a small block in dynamic mode, other blocks in that midplane that also needed to be removed and were in an error state were not all removed correctly in accounting.
-- Prevent scontrol segv with "scontrol show node <nodename>" command with nodes in a hidden partition.
-- Fixed sizing of popup grids in sview.
-- Fixed sacct when querying against a jobid whose start time is not set.
-- Fix configure to get the correct version of pkg-config if both 32-bit and 64-bit libs are installed.
-- Fix issue with sshare not sorting the tree of associations correctly.
-- Update documentation for sreport.
-- BLUEGENE - fix regression in 2.1.10 on assigning multiple jobs to one block.
-- Minor memory leak fixed when a kill job error happens.
-- Fix sacctmgr list assoc when talking to a 2.2 slurmdbd.

* Changes in SLURM 2.1.10
=========================
-- Fix memory leak in sched/builtin plugin.
-- Fixed sbatch to work correctly when no nodes are specified, but --ntasks-per-node is.
-- Make sure account and wckey for a job are lower case before inserting into accounting.
-- Added note to squeue documentation about the --jobs option displaying jobs even if they are in hidden partitions.
-- Fix srun to work correctly with --uid when getting an allocation and creating a step.
-- Fix so that when removing a limit from a user's association inside the fairshare tree, the parent's limit is now inherited automatically in the slurmctld. Previously the slurmctld would have to be restarted. This problem only existed when setting a user's association limit to -1.
-- Patch from Matthieu Hautreux (CEA) dealing with possible overflows that could come up in the select/cons_res plugin with uint32_t's being treated as uint16_t's.
-- Correct logic for creating a reservation with Duration=Infinite (used to set reservation end time in the past).
-- Correct logic for creating a reservation to properly handle the OVERLAP and IGNORE_JOBS flags (flags were ignored under some conditions).
-- Fixed a fair-share calculation bug in the priority/multifactor plugin.
-- Make sure a user entry in the database that was previously deleted is restored clean when added back, i.e. remove admin privileges previously given.
-- BLUEGENE - Future start time is set correctly when the eligible time for a job is in the future but the job can physically run earlier.
-- Updated documentation for sacctmgr Wall and CPUMin options, stating that when the limit is reached running jobs will be killed.
-- Fix deadlock issue in the slurmctld when lowering limits in accounting to lower than those of pending jobs.
 -- Fix bug in salloc, sbatch and srun that could, under some conditions, process the --threads-per-core, --cores-per-socket and --sockets-per-node options improperly.
 -- Fix bug in select/cons_res with memory management plus job preemption with job removal (e.g. requeue) which under some conditions failed to preempt jobs.
 -- Fix deadlock potential when using qos and associations in the slurmctld.
 -- Update documentation to state --ntasks-per-* is a maximum value rather than an absolute one.
 -- Get ReturnToService=2 working for front-end configurations (e.g. Cray or BlueGene).
 -- Do not make a non-responding node available for use after running "scontrol update nodename= state=resume". Wait for the node to respond before use.
 -- Added slurm_xlator.h to jobacct_gather plugins so they resolve symbols correctly when linking to the slurm api.
 -- You can now update a job's QOS from scontrol. Previously this was only possible from sview.
 -- BLUEGENE - Fixed bug where, if running in non-dynamic mode, the start time returned for a job when using test-only would sometimes be incorrect.

* Changes in SLURM 2.1.9
========================
 -- In select/linear - Fix logic to prevent over-subscribing memory with shared nodes (Shared=YES or Shared=FORCE).
 -- Fix for handling -N and --ntasks-per-node without specifying -n with salloc and sbatch.
 -- Fix jobacct_gather/linux, if not polling on tasks, to give tasks time to start before doing the initial gather.
 -- When changing priority with the multifactor plugin, make sure the last_job_update variable is updated.
 -- Fixed sview for gtk < 2.10 to display the correct debug level at first.
 -- Fixed sview to not select too fast when using a mouse right click.
 -- Fixed sacct to display correct time limits for jobs from accounting.
 -- Fixed sacct, when running as root, to query all users by default as documented.
 -- In proctrack/linuxproc, skip over files in /proc that are not really user processes (e.g. "/proc/bus").
 -- Fix documentation bug for slurmdbd.conf.
 -- Fix slurmctld to update a qos preempt list without a restart.
 -- Fix bug in select/cons_res that in some cases would prevent a preempting job from using resources already allocated to a preemptable running job.
 -- Fix for sreport in interactive mode to honor parsable/2 options.
 -- Fixed minor bugs in the sacct and sstat commands.
 -- BLUEGENE - Fixed issue so that if the slurmd becomes unresponsive and you have blocks in an error state, accounting is correct when the slurmd comes back up.
 -- Corrected documentation for the -n option in srun/salloc/sbatch.
 -- BLUEGENE - when running a willrun test along with preemption, the bluegene plugin now does the correct thing.
 -- Fix possible memory corruption issue which can cause slurmctld to abort.
 -- BLUEGENE - fixed small memory leak when setting up env.
 -- Fixed deadlock when using accounting and a cluster changes size in the database. This can happen if you mistakenly have multiple primary slurmctld's running for a single cluster, which should rarely if ever happen.
 -- Fixed sacct -c option.
 -- Critical bug fix in sched/backfill plugin that caused memory corruption.

* Changes in SLURM 2.1.8
========================
 -- Update BUILD_NOTES for AIX and bgp systems on how to get sview to build correctly.
 -- Update man page for scontrol for nodes in the "MIXED" state.
 -- Better error messages for sacctmgr.
 -- Fix bug in allocation of CPUs with select/cons_res and the --cpus-per-task option.
 -- Fix bug in dependency support for the afterok and afternotok options to insure that the job's exit status gets checked for dependent jobs prior to purging completed job records.
 -- Fix bug in sched/backfill that could set an incorrect expected start time for a job.
 -- BLUEGENE - Fix for systems that have midplanes defined in the database that don't exist.
 -- Accounting - fixed bug where, when removing an object, a rollback wasn't possible.
 -- Fix possible scontrol stack corruption when listing jobs with a very long job or working directory name (over 511 characters).
 -- Insure that SPANK environment variables set by salloc or sbatch get propagated to the Prolog on all nodes by setting SLURM_SPANK_* environment variables for srun's use.
 -- In sched/wiki2 - Add support for the MODIFYJOB command to alter a job's comment field.
 -- When a cluster first registers with the SlurmDBD, only send nodes in a non-usable state. Previously all nodes were sent.
 -- Alter sacct to be able to query jobs by association id.
 -- Edit documentation for scontrol stating ExitCode is not alterable.
 -- Update documentation about ReturnToService and silently rebooting nodes.
 -- When combining --ntasks-per-node and --exclusive in an allocation request, the correct thing now happens: the allocation gets the entire node but only ntasks-per-node tasks.
 -- Fix accounting transaction logs when deleting associations to record the ids instead of the lfts, which could change over time.
 -- Fix support for salloc, sbatch and srun's --hint option to avoid allocating a job more sockets per node or more cores per socket than desired. Also, when --hint=compute_bound or --hint=memory_bound, avoid allocating more than one task per hyperthread (a change in behavior, but almost certainly a preferable mode of operation).

* Changes in SLURM 2.1.7
========================
 -- Modify srun, salloc and sbatch parsing of the --signal option to accept a signal name in addition to the previously supported signal numbers (e.g. "--signal=USR2@200").
 -- BLUEGENE - Fixed sinfo --long --Node output for cpus on a single cnode.
 -- In sched/wiki2 - Fix another logic bug in support of Moab being able to identify preemptable jobs.
 -- In sched/wiki2 - For BlueGene systems only: Fix bug preventing Moab from being able to correctly change the node count of pending jobs.
 -- In select/cons_res - Fix bug preventing job preemption with a configuration of Shared=FORCE:1 and PreemptMode=GANG,SUSPEND.
 -- In the TaskProlog, add support for an "unset" option to clear environment variables for the user application. Also add support for embedded whitespace in the environment variables exported to the user application (everything after the equal sign to the end of the line is included without alteration).
 -- Do not install /etc/init.d/slurm or /etc/init.d/slurmdbd on AIX systems.
 -- BLUEGENE - fixed check for small blocks so that if a node card of a midplane is in an error state, other jobs can still run on the midplane on other nodecards.
 -- BLUEGENE - Check to make sure the job being killed is in the active job table in DB2 when killing the job.
 -- Correct logic to support the ResvOverRun configuration parameter.
 -- Get the --acctg-freq option working for the srun and salloc commands.
 -- Fix sinfo to display drained nodes correctly with the summarize flag.
 -- Fix minor memory leaks in slurmd and slurmstepd.
 -- Better error messages for failed step launch.
 -- Modify srun to insure compatibility of the --relative option with the node count requested.

* Changes in SLURM 2.1.6-2
==========================
 -- In sched/wiki2 - Fix logic in support of Moab being able to identify preemptable jobs.
 -- Applied fixes to a debug4 message in priority_multifactor.c sent in by Per Lundqvist.
 -- BLUEGENE - Fixed issue where incorrect nodecards could be picked when looking at combining small blocks to make a larger small block.

* Changes in SLURM 2.1.6
========================
 -- For newly submitted jobs, report the expected start time in squeue --start as "N/A" rather than the current time.
 -- Correct sched/backfill logic so that it runs in a more timely fashion.
 -- Fixed issue, when running on accounting cache and priority/multifactor, to initialize the root association when the database comes back up.
 -- Emulated BLUEGENE - fixed issue where blocks weren't always created correctly when loading from state. This does not apply to a real bluegene system, only emulated.
 -- Fixed bug where a completing job's cpu_cnt could be calculated incorrectly, possibly resulting in an underflow being logged.
 -- Fixed bug where the slurmctld would dump core if there were pending jobs in a partition which was updated to have no nodes in it.
 -- Fixed smap and sview to display partitions with no nodes in them.
 -- Improve configure script's logic to detect LUA libraries.
 -- Fix bug that could cause slurmctld to abort if select/cons_res is used AND a job is submitted using the --no-kill option AND one of the job's nodes goes DOWN AND slurmctld restarts while that job is still running.
 -- In jobcomp plugins, the job time limit was sometimes recorded improperly if not set by the user (recorded a huge number rather than the partition's time limit).

* Changes in SLURM 2.1.5
========================
 -- BLUEGENE - Fixed display of draining nodes for sinfo -R.
 -- Fixes to scontrol and sview when setting a job to an impossible start time.
 -- Added -h to srun for help.
 -- Fix for sacctmgr man page to remove erroneous 'with' statements.
 -- Fix for unpacking jobs with accounting statistics; previously it appears only steps were unpacked correctly. In most cases only sacct displays this information, making this a very minor fix.
 -- Changed scontrol and sview output for jobs with unknown end times from 'NONE' to 'Unknown'.
 -- Fixed mysql plugin to reset classification when adding a previously deleted cluster.
 -- Permit a batch script to reset umask and have that propagate to tasks spawned by subsequent srun. Previously the umask in effect when sbatch was executed was propagated to tasks spawned by srun.
 -- Modify the slurm_job_cpus_allocated_on_node_id() and slurm_job_cpus_allocated_on_node() functions to not write explanations of failures to stderr. Only return -1 and set errno.
 -- Correction in configurator.html script: Prolog and Epilog were reversed.
 -- BLUEGENE - Fixed race condition on a non-dynamic system where, if a nodecard has an error on an un-booted block, a job could come to use it before the state-checking thread notices, which could cause the slurmctld to lock up.
 -- In select/cons_res with FastSchedule=0 and Procs=# defined for the node, but no specific socket/core/thread count configured, avoid a fatal error if the number of cores on a node is less than the number of Procs configured.
 -- Added ability for the perlapi to utilize opaque data types returned from the C api.
 -- BLUEGENE - made the perlapi get correct values for cpus per node. Previously it would give the number of cpus per cnode instead of per midplane.
 -- BLUEGENE - Fixed issue where, if a hardware failure happens while a block is being selected for a job, the block would previously still be allowed to be used, which would fail or requeue the job depending on the configuration.
 -- For SPANK job environment, avoid duplicate "SPANK_" prefix for environment set by sbatch jobs.
 -- Make squeue select jobs on hidden partitions when requesting more than one.
 -- Avoid automatically cancelling job steps when all of the tasks on some node have gracefully terminated.

* Changes in SLURM 2.1.4
========================
 -- Fix for purge script in accounting to use correct options.
 -- If SelectType=select/linear and SelectTypeParameters=CR_Memory, fix bug that would fail to release memory reserved for a job if "scontrol reconfigure" is executed while the job is in completing state.
 -- Fix bug in handling an event trigger for a job time limit while the job is still in pending state.
 -- Fixed display of Ave/MaxCPU in sacct for jobs. Steps were printed correctly.
 -- When a node's current features differ from slurm.conf, log the node names using a hostlist expression rather than listing individual node names.
 -- Improve ability of srun to abort a job step for some task launch failures.
 -- Fix mvapich plugin logic to release the created job allocation on initialization failure (previously the failures would cancel the job step, but retain the job allocation).
 -- Fix bug in srun for a task count so large that it overflows the int data type.
 -- Fix important bug in select/cons_res handling of the ntasks-per-core parameter that was uncovered by a bug fixed in v2.1.3. The bug produced a fatal error for slurmctld: "cons_res: cpus computation error".
 -- Fix bug in select/cons_res handling of partitions configured with Shared=YES. Prior logic failed to support running multiple jobs per node.

* Changes in SLURM 2.1.3-2
==========================
 -- Modified spec file to obsolete pam_slurm when installing the slurm-pam_slurm rpm.

* Changes in SLURM 2.1.3-1
==========================
 -- BLUEGENE - Fix issues on static/overlap systems where, if a midplane was drained, you would not be able to create new blocks on it.
 -- In sched/wiki2 (for Moab): Add excluded host list to job information using new keyword "EXCLUDE_HOSTLIST".
 -- Correct slurmd reporting of incorrect socket/core/thread counts.
 -- For sched/wiki2 (Moab): Do not extend a job's end time for suspend/resume or startup delay due to node boot time. A job's end time will always be its start time plus time limit.
 -- Added build-time option (to configure program) of --with-pam_dir to specify the directory into which PAM modules get installed, although it should pick the proper directory by default. "make install" and "rpmbuild" should now put the pam_slurm.so file in the proper directory.
 -- Modify PAM module to link against SLURM API shared library and use exported slurm_hostlist functions.
 -- Do not block new jobs with the --immediate option while another job is in the process of being requeued (which can take a long time for some node failure modes).
 -- For topology/tree, log invalid hostnames in a single hostlist expression rather than one per line.
 -- A job step's default time limit will be UNLIMITED rather than the partition's default time limit. The step will automatically be cancelled as part of the job termination logic when the job's time limit is reached.
 -- sacct - fixed bug when checking jobs against a reservation.
 -- In select/cons_res, fix support for job allocation with the --ntasks_per_node option. Previously it could allocate too few CPUs on some nodes.
 -- Adjustment made to the init message to the slurmdbd to allow backwards compatibility with the future 2.2 release. YOU NEED TO UPGRADE SLURMDBD BEFORE ANYTHING ELSE.
 -- Fix accounting when the comment of a down/drained node has double quotes in it.

* Changes in SLURM 2.1.2
========================
 -- Added nodelist to sview for jobs on non-bluegene systems.
 -- Correction in value of batch job environment variable SLURM_TASKS_PER_NODE under some conditions.
 -- When a node that is already drained/down silently fails, the reason for draining the node is not changed.
 -- Srun will ignore the SLURM_NNODES environment variable and use the count of currently allocated nodes if that count changes during the job's lifetime (e.g. if a job allocation uses the --no-kill option and a node goes DOWN, the job step would previously always fail).
 -- Made it so sacctmgr can't add a blank user or account. The MySQL plugin will also reject such requests.
 -- Revert libpmi.so version for compatibility with SLURM version 2.0 and earlier to avoid forcing applications using a specific libpmi.so version to rebuild unnecessarily (revert from libpmi.so.21.0.0 to libpmi.so.0.0.0).
 -- Restore support for a pending job's constraints (required node features) when slurmctld is restarted (an internal structure needed to be rebuilt).
 -- Removed checkpoint_blcr.so from the plugin rpm in the slurm.spec since it is also in the blcr rpm.
 -- Fixed issue in sview where you were unable to edit the count of jobs to share resources.
 -- BLUEGENE - Fixed issue where tasks on steps weren't being displayed correctly with scontrol and sview.
 -- BLUEGENE - fixed wiki2 plugin to report correct task count for pending jobs.
 -- BLUEGENE - Added /etc/ld.so.conf.d/slurm.conf to point to the directory holding libsched_if64.so when building rpms.
 -- Adjust get_wckeys call in slurmdbd to allow operators to list wckeys.

* Changes in SLURM 2.1.1
========================
 -- Fix for case-sensitive databases when a slurmctld has a mixed-case clustername: lower case the string to ease compares.
 -- Fix squeue, if a job is completing and failed, to print the remaining nodes instead of a failed message.
 -- Fix sview core dump when searching for partitions by state.
 -- Fixed setting the start time when querying in sacct to the beginning of the day if not set previously.
 -- Defined slurm_free_reservation_info_msg and slurm_free_topo_info_msg in common/slurm_protocol_defs.h.
 -- Avoid generating an error when a job step includes a memory specification and memory is not configured as a consumable resource.
 -- Patch for small memory leak in src/common/plugstack.c.
 -- Fix sview search on node state.
 -- Fix bug in which an improperly formed job dependency specification can cause slurmctld to abort.
 -- Fixed issue where slurmctld wouldn't always get a message to send cluster information when registering for the first time with the slurmdbd.
 -- Add slurm_*_trigger.3 man pages for event trigger APIs.
 -- Fix bug in job preemption logic that would free allocated memory twice.
 -- Fix spelling issues (from Gennaro Oliva).
 -- Fix issue when changing parents of an account in accounting: all children weren't always sent to their respective slurmctlds until a restart.
 -- Restore support for srun/salloc/sbatch option --hint=nomultithread to bind tasks to cores rather than threads (broken in slurm v2.1.0-pre5).
 -- Fix issue where a 2.0 sacct could not talk correctly to a 2.1 slurmdbd.
 -- BLUEGENE - Fix issue where, when no partitions have any nodes assigned to them, alert the user that no blocks can be created.
 -- BLUEGENE - Fix smap to put up BGP images when using -Dc on a Blue Gene/P system.
 -- Set SLURM_SUBMIT_DIR environment variable for srun and salloc commands to match behavior of the sbatch command.
 -- Report WorkDir from "scontrol show job" command for jobs launched using salloc and srun.
 -- Update the wckey correctly when changing it on a pending job.
 -- Set wckeyid correctly in accounting when cancelling a pending job.
 -- BLUEGENE - critical fix where jobs would be killed incorrectly.
 -- BLUEGENE - fix for sview putting multiple ionodes onto nodelists when viewing the jobs tab.

* Changes in SLURM 2.1.0
========================
 -- Improve sview layout of blocks in use.
 -- A user can now change the dimensions of the grid in sview.
 -- BLUEGENE - improved startup speed further for large numbers of defined blocks.
 -- Fix to _get_job_min_nodes() in wiki2/get_jobs.c suggested by Michal Novotny.
 -- BLUEGENE - fixed issues when updating a pending job when a node count was incorrect for the requested connection type.
 -- BLUEGENE - fixed issue when combining blocks that are in ready states to make a larger block, or making multiple smaller blocks by splitting the larger block. Previously this would only work with blocks in a free state.
 -- Fix bug in wiki(2) plugins where, if HostFormat=2 and the task list is greater than 64, we don't truncate. Previously this would mess up Moab by sending a truncated task list when doing a get jobs.
 -- Added update slurmctld debug level to sview when in admin mode.
 -- Added logic to make sure that, when enforcing a memory limit with the jobacct_gather plugin, a user can no longer turn off the logic that enforces the limit.
 -- Replaced many calls to getpwuid() with reentrant uid_to_string().
 -- The slurmstepd will now refresh its log file handle on a reconfig; previously, if a log was rolled, any output from the stepd was lost.
Slurm Workload Manager
--------------------------------------------------------

This is the Slurm Workload Manager. Slurm is an open-source cluster resource
management and job scheduling system that strives to be simple, scalable,
portable, fault-tolerant, and interconnect agnostic. Slurm currently has been
tested only under Linux.

As a cluster resource manager, Slurm provides three key functions. First, it
allocates exclusive and/or non-exclusive access to resources (compute nodes)
to users for some duration of time so they can perform work. Second, it
provides a framework for starting, executing, and monitoring work (normally a
parallel job) on the set of allocated nodes. Finally, it arbitrates
conflicting requests for resources by managing a queue of pending work.

NOTES FOR GITHUB DEVELOPERS
---------------------------

The official issue tracker for Slurm is at http://bugs.schedmd.com/

We welcome code contributions and patches, but **we do not accept Pull
Requests through Github at this time.** Please submit patches as attachments
to new bugs under the "Contributions" category.

SOURCE DISTRIBUTION HIERARCHY
-----------------------------

The top-level distribution directory contains this README as well as other
high-level documentation files, and the scripts used to configure and build
Slurm (see INSTALL). Subdirectories contain the source-code for Slurm as well
as a DejaGNU test suite and further documentation. A quick description of the
subdirectories of the Slurm distribution follows:

  src/        [ Slurm source ]
     Slurm source code is further organized into self explanatory
     subdirectories such as src/api, src/slurmctld, etc.

  doc/        [ Slurm documentation ]
     The documentation directory contains some latex, html, and ascii text
     papers, READMEs, and guides. Manual pages for the Slurm commands and
     configuration files are also under the doc/ directory.
  etc/        [ Slurm configuration ]
     The etc/ directory contains a sample config file, as well as some
     scripts useful for running Slurm.

  slurm/      [ Slurm include files ]
     This directory contains installed include files, such as slurm.h and
     slurm_errno.h, needed for compiling against the Slurm API.

  testsuite/  [ Slurm test suite ]
     The testsuite directory contains the framework for a set of DejaGNU and
     "make check" type tests for Slurm components. There is also an extensive
     collection of Expect scripts.

  auxdir/     [ autotools directory ]
     Directory for autotools scripts and files used to configure and build
     Slurm.

  contribs/   [ helpful tools outside of Slurm proper ]
     Directory for anything that is outside of slurm proper such as a
     different api or such. To have this build you need to do a
     make contrib/install-contrib.

COMPILING AND INSTALLING THE DISTRIBUTION
-----------------------------------------

Please see the instructions at
http://slurm.schedmd.com/quickstart_admin.html

Extensive documentation is available from our home page at
http://slurm.schedmd.com/slurm.html

PROBLEMS
--------

If you experience problems compiling, installing, or running Slurm, see
http://slurm.schedmd.com/help.html

LEGAL
-----

Slurm is provided "as is" and with no warranty. This software is distributed
under the GNU General Public License; please see the files COPYING,
DISCLAIMER, and LICENSE.OpenSSL for details.

RELEASE NOTES FOR SLURM VERSION 15.08
12 May 2015

IMPORTANT NOTES:
ANY JOBS WITH A JOB ID ABOVE 2,147,463,647 WILL BE PURGED WHEN SLURM IS
UPGRADED FROM AN OLDER VERSION! Reduce your configured MaxJobID value as
needed prior to upgrading in order to eliminate these jobs.

If using the slurmdbd (Slurm DataBase Daemon) you must update this first.
The 15.08 slurmdbd will work with Slurm daemons of version 14.03 and above.
You will not need to update all clusters at the same time, but it is very
important to update slurmdbd first and have it running before updating any
other clusters making use of it. No real harm will come from updating your
systems before the slurmdbd, but they will not talk to each other until you
do.

Also, at least the first time running the slurmdbd, you need to make sure
your my.cnf file has innodb_buffer_pool_size equal to at least 64M. You can
accomplish this by adding the line innodb_buffer_pool_size=64M under the
[mysqld] reference in the my.cnf file and restarting the mysqld. The buffer
pool size must be smaller than the size of the MySQL tmpdir. This is needed
when converting large tables over to the new database schema.

Slurm can be upgraded from version 14.03 or 14.11 to version 15.08 without
loss of jobs or other state information. Upgrading directly from an earlier
version of Slurm will result in loss of state information.

If using SPANK plugins that use the Slurm APIs, they should be recompiled
when upgrading Slurm to a new major release.

HIGHLIGHTS
==========
 -- Added TRES (Trackable RESources) to track utilization of memory, GRES, burst buffer, license, and any other configurable resources in the accounting database.
 -- Add configurable billing weight that takes into consideration any TRES when calculating a job's resource utilization.
 -- Add configurable prioritization factor that takes into consideration any TRES when calculating a job's resource utilization.
 -- Add burst buffer support infrastructure. Currently available plugins include burst_buffer/generic (uses administrator supplied programs to manage file staging) and burst_buffer/cray (uses Cray APIs to manage buffers).
 -- Add power capping support for Cray systems with automatic rebalancing of power allocation between nodes.
 -- Modify slurmctld outgoing RPC logic to support more parallel tasks (up to 85 RPCs and 256 pthreads; the old logic supported up to 21 RPCs and 256 threads).
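As a sketch of the my.cnf adjustment described above (fragment only; merge
with your existing [mysqld] section, and tune the 64M figure upward for a
large accounting database):

   # my.cnf -- fragment only
   [mysqld]
   innodb_buffer_pool_size=64M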
 -- Add support for job dependencies joined with the OR operator (e.g. "--depend=afterok:123?afternotok:124").
 -- Add advance reservation flag of "replace" that causes allocated resources to be replaced with idle resources. This maintains a pool of available resources of constant size (to the extent possible).
 -- Permit PreemptType=qos and PreemptMode=suspend,gang to be used together. A high-priority QOS job will now oversubscribe resources and gang schedule, but only if there are insufficient resources for the job to be started without preemption. NOTE: With PreemptType=qos, the partition's Shared=FORCE:# configuration option will permit one more job per resource to be run than specified, but only if started by preemption.
 -- A partition can now have an associated QOS. This will allow a partition to have all the limits a QOS has. If a limit is set in both QOSes, the partition QOS will override the job's QOS unless the job's QOS has the 'OverPartQOS' flag set.
 -- Expanded --cpu-freq parameters to include min-max:governor specifications. --cpu-freq is now supported on salloc and sbatch.
 -- Add support for optimized job allocations with respect to SGI Hypercube topology. NOTE: Only supported with the select/linear plugin. NOTE: The program contribs/sgi/netloc_to_topology can be used to build Slurm's topology.conf file.
 -- Add the ability for a compute node to be allocated to multiple jobs, but restricted to a single user. Added "--exclusive=user" option to salloc and to the scontrol and sview commands. Added new partition configuration parameter "ExclusiveUser=yes|no".
 -- Verify that all plugin version numbers are identical to the component attempting to load them. Without this verification, the plugin can reference Slurm functions in the caller which differ (e.g. the underlying function's arguments could have changed between Slurm versions).
    NOTE: All plugins (except SPANK) must be built against the identical version of Slurm in order to be used by any Slurm command or daemon. This should eliminate some very difficult to diagnose problems due to use of old plugins.
 -- Optimize resource allocation for systems with dragonfly networks.
 -- Added plugin to record job completion information using Elasticsearch. Libcurl is required for the build. Configure slurm.conf as follows:
      JobCompType=jobcomp/elasticsearch
      JobCompLoc=http://YOUR_ELASTICSEARCH_SERVER:9200
 -- DATABASE SCHEMA HAS CHANGED. WHEN UPDATING, THE MIGRATION PROCESS MAY TAKE SOME AMOUNT OF TIME DEPENDING ON HOW LARGE YOUR DATABASE IS. WHILE UPDATING NO RECORDS WILL BE LOST, BUT THE SLURMDBD MAY NOT BE RESPONSIVE DURING THE UPDATE. IT WILL ALSO NOT BE POSSIBLE TO AUTOMATICALLY REVERT THE DATABASE TO THE FORMAT FOR AN EARLIER VERSION OF SLURM. PLAN ACCORDINGLY.
 -- The performance of profiling with HDF5 is improved. In addition, internal structures are changed to make it easier to add new profile types, particularly energy sensors. This has introduced an operational issue. See OTHER CHANGES.
 -- MPI/MVAPICH plugin now requires Munge for authentication.
 -- In order to support inter-cluster job dependencies, the MaxJobID configuration parameter default value has been reduced from 4,294,901,760 to 2,147,418,112 and its maximum value is now 2,147,463,647. ANY JOBS WITH A JOB ID ABOVE 2,147,463,647 WILL BE PURGED WHEN SLURM IS UPGRADED FROM AN OLDER VERSION!

RPMBUILD CHANGES
================

CONFIGURATION FILE CHANGES (see the appropriate man page for details)
=====================================================================
 -- Remove DynAllocPort configuration parameter.
 -- Added new configuration parameters to support burst buffers: BurstBufferParameters and BurstBufferType.
 -- Added SchedulerParameters option of "bf_busy_nodes". When selecting resources for pending jobs to reserve for future execution (i.e.
    the job can not be started immediately), then preferentially select nodes that are in use. This will tend to leave currently idle resources available for backfilling longer running jobs, but may result in allocations having less than optimal network topology. This option is currently only supported by the select/cons_res plugin.
 -- Added "EioTimeout" parameter to slurm.conf. It is the number of seconds srun waits for slurmstepd to close the TCP/IP connection used to relay data between the user application and srun when the user application terminates.
 -- Remove the CR_ALLOCATE_FULL_SOCKET configuration option. It is now the default.
 -- Added DebugFlags values of "CpuFrequency", "Power" and "SICP".
 -- Added CpuFreqGovernors, which lists governors allowed to be set with --cpu-freq on salloc, sbatch, and srun.
 -- Interpret a partition configuration of "Nodes=ALL" in slurm.conf as including all nodes defined in the cluster.
 -- Added new configuration parameters PowerParameters and PowerPlugin.
 -- Add AuthInfo option of "cred_expire=#" to specify the lifetime of a job step credential. The default value was changed from 1200 to 120 seconds. This value also controls how long a requeued job must wait before it can be started again.
 -- Added LaunchParameters configuration parameter.
 -- Added new partition configuration parameter "ExclusiveUser=yes|no".
 -- Add TopologyParam configuration parameter. Optional value of "dragonfly" is supported.
 -- Added a slurm.conf parameter "PrologEpilogTimeout" to control how long prolog/epilog can run.
 -- Add PrologFlags option of "Contain" to create a proctrack container at job resource allocation time.

DBD CONFIGURATION FILE CHANGES (see "man slurmdbd.conf" for details)
====================================================================

COMMAND CHANGES (see man pages for details)
===========================================
 -- Added "--cpu_freq" option to salloc and sbatch.
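Several of the slurm.conf additions described in the configuration section
above could be combined into a fragment such as the following (parameter
names come from this document; the specific values and partition name are
illustrative only -- consult the slurm.conf man page before use):

   # slurm.conf fragment -- example values only
   BurstBufferType=burst_buffer/generic
   AuthInfo=cred_expire=120
   TopologyParam=dragonfly
   PrologFlags=Contain
   PartitionName=debug Nodes=ALL ExclusiveUser=yes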
 -- Add sbcast support for file transfer to resources allocated to a job step rather than a job allocation (e.g. "sbcast -j 123.4 ...").
 -- Added new job state of STOPPED indicating processes have been stopped with a SIGSTOP (using scancel or sview), but the job retains its allocated CPUs. Job state returns to RUNNING when SIGCONT is sent (also using scancel or sview).
 -- The task_dist_states variable has been split into "flags" and "base" components. Added SLURM_DIST_PACK_NODES and SLURM_DIST_NO_PACK_NODES values to give the user greater control over task distribution. The srun --dist option has been modified to accept "Pack" and "NoPack" options. These options can be used to override the CR_PACK_NODE configuration option.
 -- Added BurstBuffer specification to advanced reservation.
 -- For advanced reservations, replace flag "License_only" with flag "Any_Nodes". It can be used to indicate that the advanced reservation resources (licenses and/or burst buffers) can be used with any compute nodes.
 -- Add "--sicp" (available for inter-cluster dependencies) and "--power" (specify power management options) options to the salloc, sbatch and srun commands.
 -- Added "--mail=stage_out" option to job submission commands to notify the user when burst buffer stage out is complete.
 -- Require a "Reason" when using scontrol to set a node state to DOWN.
 -- Mail notifications on job BEGIN, END and FAIL now apply to a job array as a whole rather than generating individual email messages for each task in the job array.
 -- Remove srun --max-launch-time option. The option has not been functional or documented since Slurm version 2.0.
 -- Add "--thread-spec" option to salloc, sbatch and srun commands. This is the count of threads reserved for system use per node.
 -- Introduce new sbatch option '--kill-on-invalid-dep=yes|no' which allows users to specify the behavior they want if a job dependency is not satisfied.
 -- Add scontrol options to view and modify layouts tables.
 -- Add srun --accel-bind option to control how tasks are bound to GPUs and
    NIC Generic RESources (GRES).
 -- Add sreport -T/--tres option to identify Trackable RESources (TRES) to
    report.

OTHER CHANGES
=============
 -- SPANK naming changes: For environment variables set using the
    spank_job_control_setenv() function, the values were available in the
    slurm_spank_job_prolog() and slurm_spank_job_epilog() functions using
    getenv where the name was given a prefix of "SPANK_". That prefix has
    been removed for consistency with the environment variables available in
    the Prolog and Epilog scripts.
 -- job_submit/lua: Enable reading and writing job environment variables.
    For example: if (job_desc.environment.LANGUAGE == "en_US") then ...
 -- The format of HDF5 node-step files has changed, so the sh5util program
    that merges them into job files has changed. The command line options to
    sh5util have not changed and it will continue to service both versions
    for the next few releases of Slurm.
 -- Add environment variables SLURM_ARRAY_TASK_MAX, SLURM_ARRAY_TASK_MIN,
    SLURM_ARRAY_TASK_STEP for job arrays.

API CHANGES
===========

Changed members of the following structs
========================================
 -- Changed the following fields in struct job_descriptor:
    cpu_freq renamed cpu_freq_max.
    task_dist changed from 16 to 32 bit.
 -- Changed the following fields in struct job_info:
    cpu_freq renamed cpu_freq_max.
    job_state changed from 16 to 32 bit.
 -- Changed the following fields in struct slurm_ctl_conf:
    min_job_age, task_plugin_param changed from 16 to 32 bit.
    bf_cycle_sum changed from 32 to 64 bit.
 -- Changed the following fields in struct slurm_step_ctx_params_t:
    cpu_freq renamed cpu_freq_max.
    task_dist changed from 16 to 32 bit.
 -- Changed the following fields in struct slurm_step_launch_params_t:
    cpu_freq renamed cpu_freq_max.
    task_dist changed from 16 to 32 bit.
 -- Changed the following fields in struct slurm_step_layout_t:
    task_dist changed from 16 to 32 bit.
 -- Changed the following fields in struct job_step_info_t:
    cpu_freq renamed cpu_freq_max.
    state changed from 16 to 32 bit.
 -- Changed the following fields in struct resource_allocation_response_msg_t:
    cpu_freq renamed cpu_freq_max.
 -- Changed the following fields in struct stats_info_response_msg:
    bf_cycle_sum changed from 32 to 64 bits.
 -- Changed the following fields in struct acct_gather_energy:
    base_consumed_energy, consumed_energy, previous_consumed_energy changed
    from 32 to 64 bits.
 -- Changed the following fields in struct ext_sensors_data:
    consumed_energy changed from 32 to 64 bits.

Added members to the following struct definitions
=================================================
 -- Added the following fields to struct acct_gather_node_resp_msg:
    sensor_cnt
 -- Added the following fields to struct slurm_ctl_conf:
    accounting_storage_tres, bb_type, cpu_freq_govs, eio_timeout,
    launch_params, msg_aggr_params, power_parameters, power_plugin,
    priority_weight_tres, prolog_epilog_timeout, topology_param.
 -- Added the following fields to struct job_descriptor:
    bit_flags, burst_buffer, clusters, cpu_freq_min, cpu_freq_gov,
    power_flags, sicp_mode, tres_req_cnt.
 -- Added the following fields to struct job_info:
    bitflags, burst_buffer, cpu_freq_min, cpu_freq_gov, billable_tres,
    power_flags, sicp_mode, tres_req_str, tres_alloc_str.
 -- Added the following fields to struct slurm_step_ctx_params_t:
    cpu_freq_min, cpu_freq_gov.
 -- Added the following fields to struct slurm_step_launch_params_t:
    accel_bind_type, cpu_freq_min, cpu_freq_gov.
 -- Added the following fields to struct job_step_info_t:
    cpu_freq_min, cpu_freq_gov, task_dist, tres_alloc_str.
 -- Added the following fields to struct resource_allocation_response_msg_t:
    account, cpu_freq_min, cpu_freq_gov, env_size, environment, qos,
    resv_name.
 -- Added the following fields to struct reserve_info_t:
    burst_buffer, core_cnt, resv_watts, tres_str.
 -- Added the following fields to struct resv_desc_msg_t:
    burst_buffer, core_cnt, resv_watts, tres_str.
 -- Added the following fields to struct node_info_t:
    free_mem, power, owner, tres_fmt_str.
 -- Added the following fields to struct partition_info:
    billing_weights_str, qos_char, tres_fmt_str

Added the following struct definitions
======================================
 -- Added power_mgmt_data_t: Power management data structure
 -- Added sicp_info_t: sicp data structure
 -- Added sicp_info_msg_t: sicp data structure message
 -- Added layout_info_msg_t: layout message data structure
 -- Added update_layout_msg_t: layout update message data structure
 -- Added step_alloc_info_msg_t: Step allocation message data structure.
 -- Added powercap_info_msg_t: Powercap information data structure.
 -- Added update_powercap_msg_t: Update message for powercap info data
    structure.
 -- Added will_run_response_msg_t: Data structure to test if a job can run.
 -- Added assoc_mgr_info_msg_t: Association manager information data
    structure.
 -- Added assoc_mgr_info_request_msg_t: Association manager request message.
 -- Added network_callerid_msg_t: Network callerid data structure.
 -- Added burst_buffer_gres_t: Burst buffer gres data structure.
 -- Added burst_buffer_resv_t: Burst buffer reservation data structure.
 -- Added burst_buffer_use_t: Burst buffer user information.
 -- Added burst_buffer_info_t: Burst buffer information data structure.
 -- Added burst_buffer_info_msg_t: Burst buffer message data structure.

Changed the following enums and #defines
========================================
 -- Added INFINITE64: 64 bit infinite value.
 -- Added NO_VAL64: 64 bit no val value.
 -- Added SLURM_EXTERN_CONT: Job step id of external process container.
 -- Added DEFAULT_EIO_SHUTDOWN_WAIT: Time to wait after eio shutdown signal.
 -- Added MAIL_JOB_STAGE_OUT: Mail job stage out flag.
 -- Added CPU_FREQ_GOV_MASK: Mask for all defined cpu-frequency governors.
 -- Added JOB_LAUNCH_FAILED: Job launch failed state.
 -- Added JOB_STOPPED: Job stopped state.
 -- Added SLURM_DIST*: Slurm distribution flags.
 -- Added CPU_FREQ_GOV_MASK: Cpu frequency gov mask
 -- Added CPU_FREQ_*_OLD: Vestigial values for transition from v14.11
    systems.
 -- Added PRIORITY_FLAGS_MAX_TRES: Flag for max tres limit.
 -- Added KILL_INV_DEP and NO_KILL_INV_DEP: Invalid dependency flags.
 -- Added CORE_SPEC_THREAD: Flag for thread count
 -- Changed WAIT_QOS/ASSOC_*: changed to tres values.
 -- Added WAIT_ASSOC/QOS_GRP/MAX_*: Association tres states.
 -- Added ENERGY_DATA_*: sensor count and node energy added to
    jobacct_data_type enum
 -- Added accel_bind_type: enum for accel bind type
 -- Added SLURM_POWER_FLAGS_LEVEL: Slurm power cap flag.
 -- Added PART_FLAG_EXCLUSIVE_USER: mask for exclusive allocation of nodes.
 -- Added PART_FLAG_EXC_USER_CLR: mask to clear exclusive allocation of
    nodes.
 -- Added RESERVE_FLAG_ANY_NODES: mask to allow usage for any compute node.
 -- Added RESERVE_FLAG_NO_ANY_NODES: mask to clear any compute node flag.
 -- Added RESERVE_FLAG_REPLACE: Replace job resources flag.
 -- Added DEBUG_FLAG_*: Debug flags for burst buffer, cpu freq, power
    management, sicp, DB archive, and tres.
 -- Added PROLOG_FLAG_CONTAIN: Proctrack plugin container flag
 -- Added ASSOC_MGR_INFO_FLAG_*: Association manager info flags for
    association, user, and qos.
 -- Added BB_FLAG_DISABLE_PERSISTENT: Disable persistent burst buffers.
 -- Added BB_FLAG_ENABLE_PERSISTENT: Enable persistent burst buffers.
 -- Added BB_FLAG_EMULATE_CRAY: Using dw_wlm_cli emulator flag
 -- Added BB_FLAG_PRIVATE_DATA: Flag to allow buffer to be seen by owner.
 -- Added BB_STATE_*: Burst buffer state masks

Added the following API's
=========================
 -- Added slurm_job_will_run2 to determine if a job could execute
    immediately.
 -- Added APIs to load, print, and update layouts:
    slurm_print_layout_info, slurm_load_layout, slurm_update_layout.
 -- Added APIs to free and get association manager information:
    slurm_load_assoc_mgr_info, slurm_free_assoc_mgr_info_msg,
    slurm_free_assoc_mgr_info_request_msg
 -- Added APIs to get cpu allocation from node name or id:
    slurm_job_cpus_allocated_str_on_node_id,
    slurm_job_cpus_allocated_str_on_node
 -- Added APIs to load, free, print, and update powercap information:
    slurm_load_powercap, slurm_free_powercap_info_msg,
    slurm_print_powercap_info_msg, slurm_update_powercap
 -- Added slurm_burst_buffer_state_string to translate state number to
    string equivalent
 -- Added APIs to load, free, and print burst buffer information:
    slurm_load_burst_buffer_info, slurm_free_burst_buffer_info_msg,
    slurm_print_burst_buffer_info_msg, slurm_print_burst_buffer_record
 -- Added slurm_network_callerid to get the job id of a job based upon
    network socket information.

Changed the following API's
============================
 -- slurm_get_node_energy - Changed argument acct_gather_energy_t
    **acct_gather_energy to uint16_t *sensors_cnt and
    acct_gather_energy_t **energy
 -- slurm_sbcast_lookup - Added step_id argument

DBD API Changes
===============

Changed members of the following structs
========================================
 -- Changed slurmdb_association_cond_t to slurmdb_assoc_cond_t
 -- Changed the following fields in struct slurmdb_account_cond_t:
    slurmdb_association_cond_t *assoc_cond changed to
    slurmdb_assoc_cond_t *assoc_cond
 -- Changed the following fields in struct slurmdb_assoc_rec_t:
    slurmdb_association_rec_t *assoc_next to slurmdb_assoc_rec_t *assoc_next
    slurmdb_association_rec_t *assoc_next_id to
    slurmdb_assoc_rec_t *assoc_next_id
    assoc_mgr_association_usage_t *usage to slurmdb_assoc_usage_t *usage
 -- Changed the following fields in struct slurmdb_cluster_rec_t:
    slurmdb_association_rec_t *root_assoc to slurmdb_assoc_rec_t *root_assoc
 -- Changed the following fields in struct slurmdb_job_rec_t:
    state changed from 16 to 32 bit
 -- Changed the following fields in struct slurmdb_qos_rec_t:
    assoc_mgr_qos_usage_t *usage to slurmdb_qos_usage_t *usage
 -- Changed the following fields in struct slurmdb_step_rec_t:
    task_dist was changed from 16 to 32 bit
 -- Changed the following fields in struct slurmdb_wckey_cond_t:
    slurmdb_association_cond_t *assoc_cond to slurmdb_assoc_cond_t *assoc_cond
 -- Changed the following fields in struct slurmdb_hierarchical_rec_t:
    slurmdb_association_cond_t *assoc_cond to slurmdb_assoc_cond_t *assoc_cond

Added the following struct definitions
======================================
 -- Added slurmdb_tres_rec_t: Tres data structure for the slurmdbd.
 -- Added slurmdb_assoc_cond_t: Slurmdbd association condition.
 -- Added slurmdb_tres_cond_t: Tres condition data structure.
 -- Added slurmdb_assoc_usage_t: Slurmdbd association usage limits.
 -- Added slurmdb_qos_usage_t: slurmdbd qos usage data structure.

Added members to the following struct definitions
=================================================
 -- Added the following fields to struct slurmdb_accounting_rec_t:
    tres_rec
 -- Added the following fields to struct slurmdb_assoc_rec_t:
    accounting_list, grp_tres, grp_tres_ctld, grp_tres_mins,
    grp_tres_mins_ctld, grp_tres_run_mins, grp_tres_run_mins_ctld,
    max_tres_mins_pj, max_tres_mins_ctld, max_tres_run_mins,
    max_tres_run_mins_ctld, max_tres_pj, max_tres_ctld, max_tres_pn,
    max_tres_pn_ctld
 -- Added the following fields to struct slurmdb_cluster_rec_t:
    tres_str
 -- Added the following fields to struct slurmdb_cluster_accounting_rec_t:
    tres_rec
 -- Added the following fields to struct slurmdb_event_rec_t:
    tres_str
 -- Added the following fields to struct slurmdb_job_rec_t:
    tres_alloc_str, tres_req_str
 -- Added the following fields to struct slurmdb_qos_rec_t:
    grp_tres, grp_tres_ctld, grp_tres_mins, grp_tres_mins_ctld,
    grp_tres_run_mins, grp_tres_run_mins_ctld, max_tres_mins_pj,
    max_tres_mins_pj_ctld, max_tres_pj, max_tres_pj_ctld, max_tres_pn,
    max_tres_pn_ctld, max_tres_pu, max_tres_pu_ctld, max_tres_run_mins_pu,
    max_tres_run_mins_pu_ctld, min_tres_pj, min_tres_pj_ctld
 -- Added the following fields to struct slurmdb_reservation_rec_t:
    tres_str, tres_list
 -- Added the following fields to struct slurmdb_step_rec_t:
    req_cpufreq_min, req_cpufreq_max, req_cpufreq_gov, tres_alloc_str
 -- Added the following fields to struct slurmdb_used_limits_t:
    tres, tres_run_mins
 -- Added the following fields to struct slurmdb_report_assoc_rec_t:
    tres_list
 -- Added the following fields to struct slurmdb_report_user_rec_t:
    tres_list
 -- Added the following fields to struct slurmdb_report_cluster_rec_t:
    accounting_list, tres_list
 -- Added the following fields to struct slurmdb_report_job_grouping_t:
    tres_list
 -- Added the following fields to struct slurmdb_report_acct_grouping_t:
    tres_list
 -- Added the following fields to struct slurmdb_report_cluster_grouping_t:
    tres_list

Added the following enums and #defines
========================================
 -- Added QOS_FLAG_PART_QOS: partition qos flag

Added the following API's
=========================
 -- slurmdb_get_first_avail_cluster() - Get the first cluster that will run
    a job
 -- slurmdb_destroy_assoc_usage() - Helper function
 -- slurmdb_destroy_qos_usage() - Helper function
 -- slurmdb_free_assoc_mgr_state_msg() - Helper function
 -- slurmdb_free_assoc_rec_members() - Helper function
 -- slurmdb_destroy_assoc_rec() - Helper function
 -- slurmdb_free_qos_rec_members() - Helper function
 -- slurmdb_destroy_tres_rec_noalloc() - Helper function
 -- slurmdb_destroy_tres_rec() - Helper function
 -- slurmdb_destroy_tres_cond() - Helper function
 -- slurmdb_destroy_assoc_cond() - Helper function
 -- slurmdb_init_assoc_rec() - Helper function
 -- slurmdb_init_tres_cond() - Helper function
 -- slurmdb_tres_add() - Add tres to accounting
 -- slurmdb_tres_get() - Get tres info from accounting

Changed the following API's
============================
 -- slurmdb_associations_get() -
    Changed assoc_cond arg type to slurmdb_assoc_cond_t
 -- slurmdb_associations_modify() -
    Changed assoc_cond arg type to slurmdb_assoc_cond_t and assoc arg type
    to slurmdb_assoc_rec_t
 -- slurmdb_associations_remove() -
    Changed assoc_cond arg type to slurmdb_assoc_cond_t
 -- slurmdb_report_cluster_account_by_user(),
    slurmdb_report_cluster_user_by_account(), slurmdb_problems_get() -
    Changed assoc_cond arg type to slurmdb_assoc_cond_t
 -- slurmdb_init_qos_rec() - added init_val argument

slurm-slurm-15-08-7-1/aclocal.m4
# generated automatically by aclocal 1.14.1 -*- Autoconf -*-

# Copyright (C) 1996-2013 Free Software Foundation, Inc.

# This file is free software; the Free Software Foundation
# gives unlimited permission to copy and/or distribute it,
# with or without modifications, as long as this notice is preserved.

# This program is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY, to the extent permitted by law; without
# even the implied warranty of MERCHANTABILITY or FITNESS FOR A
# PARTICULAR PURPOSE.

m4_ifndef([AC_CONFIG_MACRO_DIRS], [m4_defun([_AM_CONFIG_MACRO_DIRS], [])m4_defun([AC_CONFIG_MACRO_DIRS], [_AM_CONFIG_MACRO_DIRS($@)])])
m4_ifndef([AC_AUTOCONF_VERSION],
  [m4_copy([m4_PACKAGE_VERSION], [AC_AUTOCONF_VERSION])])dnl
m4_if(m4_defn([AC_AUTOCONF_VERSION]), [2.69],,
[m4_warning([this file was generated for autoconf 2.69.
You have another version of autoconf.  It may work, but is not guaranteed to.
If you have problems, you may need to regenerate the build system entirely.
To do so, use the procedure documented by the package, typically 'autoreconf'.])]) # Configure paths for GLIB # Owen Taylor 1997-2001 dnl AM_PATH_GLIB_2_0([MINIMUM-VERSION, [ACTION-IF-FOUND [, ACTION-IF-NOT-FOUND [, MODULES]]]]) dnl Test for GLIB, and define GLIB_CFLAGS and GLIB_LIBS, if gmodule, gobject, dnl gthread, or gio is specified in MODULES, pass to pkg-config dnl AC_DEFUN([AM_PATH_GLIB_2_0], [dnl dnl Get the cflags and libraries from pkg-config dnl AC_ARG_ENABLE(glibtest, [ --disable-glibtest do not try to compile and run a test GLIB program], , enable_glibtest=yes) pkg_config_args=glib-2.0 for module in . $4 do case "$module" in gmodule) pkg_config_args="$pkg_config_args gmodule-2.0" ;; gmodule-no-export) pkg_config_args="$pkg_config_args gmodule-no-export-2.0" ;; gobject) pkg_config_args="$pkg_config_args gobject-2.0" ;; gthread) pkg_config_args="$pkg_config_args gthread-2.0" ;; gio*) pkg_config_args="$pkg_config_args $module-2.0" ;; esac done PKG_PROG_PKG_CONFIG([0.16]) no_glib="" if test "x$PKG_CONFIG" = x ; then no_glib=yes PKG_CONFIG=no fi min_glib_version=ifelse([$1], ,2.0.0,$1) AC_MSG_CHECKING(for GLIB - version >= $min_glib_version) if test x$PKG_CONFIG != xno ; then ## don't try to run the test against uninstalled libtool libs if $PKG_CONFIG --uninstalled $pkg_config_args; then echo "Will use uninstalled version of GLib found in PKG_CONFIG_PATH" enable_glibtest=no fi if $PKG_CONFIG --atleast-version $min_glib_version $pkg_config_args; then : else no_glib=yes fi fi if test x"$no_glib" = x ; then GLIB_GENMARSHAL=`$PKG_CONFIG --variable=glib_genmarshal glib-2.0` GOBJECT_QUERY=`$PKG_CONFIG --variable=gobject_query glib-2.0` GLIB_MKENUMS=`$PKG_CONFIG --variable=glib_mkenums glib-2.0` GLIB_COMPILE_RESOURCES=`$PKG_CONFIG --variable=glib_compile_resources gio-2.0` GLIB_CFLAGS=`$PKG_CONFIG --cflags $pkg_config_args` GLIB_LIBS=`$PKG_CONFIG --libs $pkg_config_args` glib_config_major_version=`$PKG_CONFIG --modversion glib-2.0 | \ sed 
's/\([[0-9]]*\).\([[0-9]]*\).\([[0-9]]*\)/\1/'` glib_config_minor_version=`$PKG_CONFIG --modversion glib-2.0 | \ sed 's/\([[0-9]]*\).\([[0-9]]*\).\([[0-9]]*\)/\2/'` glib_config_micro_version=`$PKG_CONFIG --modversion glib-2.0 | \ sed 's/\([[0-9]]*\).\([[0-9]]*\).\([[0-9]]*\)/\3/'` if test "x$enable_glibtest" = "xyes" ; then ac_save_CFLAGS="$CFLAGS" ac_save_LIBS="$LIBS" CFLAGS="$CFLAGS $GLIB_CFLAGS" LIBS="$GLIB_LIBS $LIBS" dnl dnl Now check if the installed GLIB is sufficiently new. (Also sanity dnl checks the results of pkg-config to some extent) dnl rm -f conf.glibtest AC_TRY_RUN([ #include #include #include int main () { unsigned int major, minor, micro; fclose (fopen ("conf.glibtest", "w")); if (sscanf("$min_glib_version", "%u.%u.%u", &major, &minor, µ) != 3) { printf("%s, bad version string\n", "$min_glib_version"); exit(1); } if ((glib_major_version != $glib_config_major_version) || (glib_minor_version != $glib_config_minor_version) || (glib_micro_version != $glib_config_micro_version)) { printf("\n*** 'pkg-config --modversion glib-2.0' returned %d.%d.%d, but GLIB (%d.%d.%d)\n", $glib_config_major_version, $glib_config_minor_version, $glib_config_micro_version, glib_major_version, glib_minor_version, glib_micro_version); printf ("*** was found! If pkg-config was correct, then it is best\n"); printf ("*** to remove the old version of GLib. You may also be able to fix the error\n"); printf("*** by modifying your LD_LIBRARY_PATH enviroment variable, or by editing\n"); printf("*** /etc/ld.so.conf. 
Make sure you have run ldconfig if that is\n"); printf("*** required on your system.\n"); printf("*** If pkg-config was wrong, set the environment variable PKG_CONFIG_PATH\n"); printf("*** to point to the correct configuration files\n"); } else if ((glib_major_version != GLIB_MAJOR_VERSION) || (glib_minor_version != GLIB_MINOR_VERSION) || (glib_micro_version != GLIB_MICRO_VERSION)) { printf("*** GLIB header files (version %d.%d.%d) do not match\n", GLIB_MAJOR_VERSION, GLIB_MINOR_VERSION, GLIB_MICRO_VERSION); printf("*** library (version %d.%d.%d)\n", glib_major_version, glib_minor_version, glib_micro_version); } else { if ((glib_major_version > major) || ((glib_major_version == major) && (glib_minor_version > minor)) || ((glib_major_version == major) && (glib_minor_version == minor) && (glib_micro_version >= micro))) { return 0; } else { printf("\n*** An old version of GLIB (%u.%u.%u) was found.\n", glib_major_version, glib_minor_version, glib_micro_version); printf("*** You need a version of GLIB newer than %u.%u.%u. The latest version of\n", major, minor, micro); printf("*** GLIB is always available from ftp://ftp.gtk.org.\n"); printf("***\n"); printf("*** If you have already installed a sufficiently new version, this error\n"); printf("*** probably means that the wrong copy of the pkg-config shell script is\n"); printf("*** being found. The easiest way to fix this is to remove the old version\n"); printf("*** of GLIB, but you can also set the PKG_CONFIG environment to point to the\n"); printf("*** correct copy of pkg-config. (In this case, you will have to\n"); printf("*** modify your LD_LIBRARY_PATH enviroment variable, or edit /etc/ld.so.conf\n"); printf("*** so that the correct libraries are found at run-time))\n"); } } return 1; } ],, no_glib=yes,[echo $ac_n "cross compiling; assumed OK... 
$ac_c"]) CFLAGS="$ac_save_CFLAGS" LIBS="$ac_save_LIBS" fi fi if test "x$no_glib" = x ; then AC_MSG_RESULT(yes (version $glib_config_major_version.$glib_config_minor_version.$glib_config_micro_version)) ifelse([$2], , :, [$2]) else AC_MSG_RESULT(no) if test "$PKG_CONFIG" = "no" ; then echo "*** A new enough version of pkg-config was not found." echo "*** See http://www.freedesktop.org/software/pkgconfig/" else if test -f conf.glibtest ; then : else echo "*** Could not run GLIB test program, checking why..." ac_save_CFLAGS="$CFLAGS" ac_save_LIBS="$LIBS" CFLAGS="$CFLAGS $GLIB_CFLAGS" LIBS="$LIBS $GLIB_LIBS" AC_TRY_LINK([ #include #include ], [ return ((glib_major_version) || (glib_minor_version) || (glib_micro_version)); ], [ echo "*** The test program compiled, but did not run. This usually means" echo "*** that the run-time linker is not finding GLIB or finding the wrong" echo "*** version of GLIB. If it is not finding GLIB, you'll need to set your" echo "*** LD_LIBRARY_PATH environment variable, or edit /etc/ld.so.conf to point" echo "*** to the installed location Also, make sure you have run ldconfig if that" echo "*** is required on your system" echo "***" echo "*** If you have an old version installed, it is best to remove it, although" echo "*** you may also be able to get things to work by modifying LD_LIBRARY_PATH" ], [ echo "*** The test program failed to compile or link. See the file config.log for the" echo "*** exact error that occured. 
This usually means GLIB is incorrectly installed."]) CFLAGS="$ac_save_CFLAGS" LIBS="$ac_save_LIBS" fi fi GLIB_CFLAGS="" GLIB_LIBS="" GLIB_GENMARSHAL="" GOBJECT_QUERY="" GLIB_MKENUMS="" GLIB_COMPILE_RESOURCES="" ifelse([$3], , :, [$3]) fi AC_SUBST(GLIB_CFLAGS) AC_SUBST(GLIB_LIBS) AC_SUBST(GLIB_GENMARSHAL) AC_SUBST(GOBJECT_QUERY) AC_SUBST(GLIB_MKENUMS) AC_SUBST(GLIB_COMPILE_RESOURCES) rm -f conf.glibtest ]) # Configure paths for GTK+ # Owen Taylor 1997-2001 dnl AM_PATH_GTK_2_0([MINIMUM-VERSION, [ACTION-IF-FOUND [, ACTION-IF-NOT-FOUND [, MODULES]]]]) dnl Test for GTK+, and define GTK_CFLAGS and GTK_LIBS, if gthread is specified in MODULES, dnl pass to pkg-config dnl AC_DEFUN([AM_PATH_GTK_2_0], [dnl dnl Get the cflags and libraries from pkg-config dnl AC_ARG_ENABLE(gtktest, [ --disable-gtktest do not try to compile and run a test GTK+ program], , enable_gtktest=yes) pkg_config_args=gtk+-2.0 for module in . $4 do case "$module" in gthread) pkg_config_args="$pkg_config_args gthread-2.0" ;; esac done no_gtk="" AC_PATH_PROG(PKG_CONFIG, pkg-config, no) if test x$PKG_CONFIG != xno ; then if pkg-config --atleast-pkgconfig-version 0.7 ; then : else echo "*** pkg-config too old; version 0.7 or better required." 
no_gtk=yes PKG_CONFIG=no fi else no_gtk=yes fi min_gtk_version=ifelse([$1], ,2.0.0,$1) AC_MSG_CHECKING(for GTK+ - version >= $min_gtk_version) if test x$PKG_CONFIG != xno ; then ## don't try to run the test against uninstalled libtool libs if $PKG_CONFIG --uninstalled $pkg_config_args; then echo "Will use uninstalled version of GTK+ found in PKG_CONFIG_PATH" enable_gtktest=no fi if $PKG_CONFIG --atleast-version $min_gtk_version $pkg_config_args; then : else no_gtk=yes fi fi if test x"$no_gtk" = x ; then GTK_CFLAGS=`$PKG_CONFIG $pkg_config_args --cflags` GTK_LIBS=`$PKG_CONFIG $pkg_config_args --libs` gtk_config_major_version=`$PKG_CONFIG --modversion gtk+-2.0 | \ sed 's/\([[0-9]]*\).\([[0-9]]*\).\([[0-9]]*\)/\1/'` gtk_config_minor_version=`$PKG_CONFIG --modversion gtk+-2.0 | \ sed 's/\([[0-9]]*\).\([[0-9]]*\).\([[0-9]]*\)/\2/'` gtk_config_micro_version=`$PKG_CONFIG --modversion gtk+-2.0 | \ sed 's/\([[0-9]]*\).\([[0-9]]*\).\([[0-9]]*\)/\3/'` if test "x$enable_gtktest" = "xyes" ; then ac_save_CFLAGS="$CFLAGS" ac_save_LIBS="$LIBS" CFLAGS="$CFLAGS $GTK_CFLAGS" LIBS="$GTK_LIBS $LIBS" dnl dnl Now check if the installed GTK+ is sufficiently new. (Also sanity dnl checks the results of pkg-config to some extent) dnl rm -f conf.gtktest AC_TRY_RUN([ #include #include #include int main () { int major, minor, micro; char *tmp_version; fclose (fopen ("conf.gtktest", "w")); /* HP/UX 9 (%@#!) 
writes to sscanf strings */ tmp_version = g_strdup("$min_gtk_version"); if (sscanf(tmp_version, "%d.%d.%d", &major, &minor, µ) != 3) { printf("%s, bad version string\n", "$min_gtk_version"); exit(1); } if ((gtk_major_version != $gtk_config_major_version) || (gtk_minor_version != $gtk_config_minor_version) || (gtk_micro_version != $gtk_config_micro_version)) { printf("\n*** 'pkg-config --modversion gtk+-2.0' returned %d.%d.%d, but GTK+ (%d.%d.%d)\n", $gtk_config_major_version, $gtk_config_minor_version, $gtk_config_micro_version, gtk_major_version, gtk_minor_version, gtk_micro_version); printf ("*** was found! If pkg-config was correct, then it is best\n"); printf ("*** to remove the old version of GTK+. You may also be able to fix the error\n"); printf("*** by modifying your LD_LIBRARY_PATH enviroment variable, or by editing\n"); printf("*** /etc/ld.so.conf. Make sure you have run ldconfig if that is\n"); printf("*** required on your system.\n"); printf("*** If pkg-config was wrong, set the environment variable PKG_CONFIG_PATH\n"); printf("*** to point to the correct configuration files\n"); } else if ((gtk_major_version != GTK_MAJOR_VERSION) || (gtk_minor_version != GTK_MINOR_VERSION) || (gtk_micro_version != GTK_MICRO_VERSION)) { printf("*** GTK+ header files (version %d.%d.%d) do not match\n", GTK_MAJOR_VERSION, GTK_MINOR_VERSION, GTK_MICRO_VERSION); printf("*** library (version %d.%d.%d)\n", gtk_major_version, gtk_minor_version, gtk_micro_version); } else { if ((gtk_major_version > major) || ((gtk_major_version == major) && (gtk_minor_version > minor)) || ((gtk_major_version == major) && (gtk_minor_version == minor) && (gtk_micro_version >= micro))) { return 0; } else { printf("\n*** An old version of GTK+ (%d.%d.%d) was found.\n", gtk_major_version, gtk_minor_version, gtk_micro_version); printf("*** You need a version of GTK+ newer than %d.%d.%d. 
The latest version of\n", major, minor, micro); printf("*** GTK+ is always available from ftp://ftp.gtk.org.\n"); printf("***\n"); printf("*** If you have already installed a sufficiently new version, this error\n"); printf("*** probably means that the wrong copy of the pkg-config shell script is\n"); printf("*** being found. The easiest way to fix this is to remove the old version\n"); printf("*** of GTK+, but you can also set the PKG_CONFIG environment to point to the\n"); printf("*** correct copy of pkg-config. (In this case, you will have to\n"); printf("*** modify your LD_LIBRARY_PATH enviroment variable, or edit /etc/ld.so.conf\n"); printf("*** so that the correct libraries are found at run-time))\n"); } } return 1; } ],, no_gtk=yes,[echo $ac_n "cross compiling; assumed OK... $ac_c"]) CFLAGS="$ac_save_CFLAGS" LIBS="$ac_save_LIBS" fi fi if test "x$no_gtk" = x ; then AC_MSG_RESULT(yes (version $gtk_config_major_version.$gtk_config_minor_version.$gtk_config_micro_version)) ifelse([$2], , :, [$2]) else AC_MSG_RESULT(no) if test "$PKG_CONFIG" = "no" ; then echo "*** A new enough version of pkg-config was not found." echo "*** See http://pkgconfig.sourceforge.net" else if test -f conf.gtktest ; then : else echo "*** Could not run GTK+ test program, checking why..." ac_save_CFLAGS="$CFLAGS" ac_save_LIBS="$LIBS" CFLAGS="$CFLAGS $GTK_CFLAGS" LIBS="$LIBS $GTK_LIBS" AC_TRY_LINK([ #include #include ], [ return ((gtk_major_version) || (gtk_minor_version) || (gtk_micro_version)); ], [ echo "*** The test program compiled, but did not run. This usually means" echo "*** that the run-time linker is not finding GTK+ or finding the wrong" echo "*** version of GTK+. 
If it is not finding GTK+, you'll need to set your" echo "*** LD_LIBRARY_PATH environment variable, or edit /etc/ld.so.conf to point" echo "*** to the installed location Also, make sure you have run ldconfig if that" echo "*** is required on your system" echo "***" echo "*** If you have an old version installed, it is best to remove it, although" echo "*** you may also be able to get things to work by modifying LD_LIBRARY_PATH" ], [ echo "*** The test program failed to compile or link. See the file config.log for the" echo "*** exact error that occured. This usually means GTK+ is incorrectly installed."]) CFLAGS="$ac_save_CFLAGS" LIBS="$ac_save_LIBS" fi fi GTK_CFLAGS="" GTK_LIBS="" ifelse([$3], , :, [$3]) fi AC_SUBST(GTK_CFLAGS) AC_SUBST(GTK_LIBS) rm -f conf.gtktest ]) # pkg.m4 - Macros to locate and utilise pkg-config. -*- Autoconf -*- # serial 1 (pkg-config-0.24) # # Copyright © 2004 Scott James Remnant . # # This program is free software; you can redistribute it and/or modify # it under the terms of the GNU General Public License as published by # the Free Software Foundation; either version 2 of the License, or # (at your option) any later version. # # This program is distributed in the hope that it will be useful, but # WITHOUT ANY WARRANTY; without even the implied warranty of # MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU # General Public License for more details. # # You should have received a copy of the GNU General Public License # along with this program; if not, write to the Free Software # Foundation, Inc., 59 Temple Place - Suite 330, Boston, MA 02111-1307, USA. # # As a special exception to the GNU General Public License, if you # distribute this file as part of a program that contains a # configuration script generated by Autoconf, you may include it under # the same distribution terms that you use for the rest of that program. 
# PKG_PROG_PKG_CONFIG([MIN-VERSION])
# ----------------------------------
AC_DEFUN([PKG_PROG_PKG_CONFIG],
[m4_pattern_forbid([^_?PKG_[A-Z_]+$])
m4_pattern_allow([^PKG_CONFIG(_(PATH|LIBDIR|SYSROOT_DIR|ALLOW_SYSTEM_(CFLAGS|LIBS)))?$])
m4_pattern_allow([^PKG_CONFIG_(DISABLE_UNINSTALLED|TOP_BUILD_DIR|DEBUG_SPEW)$])
AC_ARG_VAR([PKG_CONFIG], [path to pkg-config utility])
AC_ARG_VAR([PKG_CONFIG_PATH], [directories to add to pkg-config's search path])
AC_ARG_VAR([PKG_CONFIG_LIBDIR], [path overriding pkg-config's built-in search path])

if test "x$ac_cv_env_PKG_CONFIG_set" != "xset"; then
	AC_PATH_TOOL([PKG_CONFIG], [pkg-config])
fi
if test -n "$PKG_CONFIG"; then
	_pkg_min_version=m4_default([$1], [0.9.0])
	AC_MSG_CHECKING([pkg-config is at least version $_pkg_min_version])
	if $PKG_CONFIG --atleast-pkgconfig-version $_pkg_min_version; then
		AC_MSG_RESULT([yes])
	else
		AC_MSG_RESULT([no])
		PKG_CONFIG=""
	fi
fi[]dnl
])# PKG_PROG_PKG_CONFIG

# PKG_CHECK_EXISTS(MODULES, [ACTION-IF-FOUND], [ACTION-IF-NOT-FOUND])
#
# Check to see whether a particular set of modules exists.  Similar
# to PKG_CHECK_MODULES(), but does not set variables or print errors.
#
# Please remember that m4 expands AC_REQUIRE([PKG_PROG_PKG_CONFIG])
# only at the first occurence in configure.ac, so if the first place
# it's called might be skipped (such as if it is within an "if", you
# have to call PKG_CHECK_EXISTS manually
# --------------------------------------------------------------
AC_DEFUN([PKG_CHECK_EXISTS],
[AC_REQUIRE([PKG_PROG_PKG_CONFIG])dnl
if test -n "$PKG_CONFIG" && \
    AC_RUN_LOG([$PKG_CONFIG --exists --print-errors "$1"]); then
  m4_default([$2], [:])
m4_ifvaln([$3], [else
  $3])dnl
fi])

# _PKG_CONFIG([VARIABLE], [COMMAND], [MODULES])
# ---------------------------------------------
m4_define([_PKG_CONFIG],
[if test -n "$$1"; then
    pkg_cv_[]$1="$$1"
 elif test -n "$PKG_CONFIG"; then
    PKG_CHECK_EXISTS([$3],
                     [pkg_cv_[]$1=`$PKG_CONFIG --[]$2 "$3" 2>/dev/null`
		      test "x$?" != "x0" && pkg_failed=yes ],
		     [pkg_failed=yes])
 else
    pkg_failed=untried
fi[]dnl
])# _PKG_CONFIG

# _PKG_SHORT_ERRORS_SUPPORTED
# -----------------------------
AC_DEFUN([_PKG_SHORT_ERRORS_SUPPORTED],
[AC_REQUIRE([PKG_PROG_PKG_CONFIG])
if $PKG_CONFIG --atleast-pkgconfig-version 0.20; then
        _pkg_short_errors_supported=yes
else
        _pkg_short_errors_supported=no
fi[]dnl
])# _PKG_SHORT_ERRORS_SUPPORTED


# PKG_CHECK_MODULES(VARIABLE-PREFIX, MODULES, [ACTION-IF-FOUND],
# [ACTION-IF-NOT-FOUND])
#
# Note that if there is a possibility the first call to
# PKG_CHECK_MODULES might not happen, you should be sure to include an
# explicit call to PKG_PROG_PKG_CONFIG in your configure.ac
#
# --------------------------------------------------------------
AC_DEFUN([PKG_CHECK_MODULES],
[AC_REQUIRE([PKG_PROG_PKG_CONFIG])dnl
AC_ARG_VAR([$1][_CFLAGS], [C compiler flags for $1, overriding pkg-config])dnl
AC_ARG_VAR([$1][_LIBS], [linker flags for $1, overriding pkg-config])dnl

pkg_failed=no
AC_MSG_CHECKING([for $1])

_PKG_CONFIG([$1][_CFLAGS], [cflags], [$2])
_PKG_CONFIG([$1][_LIBS], [libs], [$2])

m4_define([_PKG_TEXT], [Alternatively, you may set the environment variables $1[]_CFLAGS
and $1[]_LIBS to avoid the need to call pkg-config.
See the pkg-config man page for more details.])

if test $pkg_failed = yes; then
        AC_MSG_RESULT([no])
        _PKG_SHORT_ERRORS_SUPPORTED
        if test $_pkg_short_errors_supported = yes; then
	        $1[]_PKG_ERRORS=`$PKG_CONFIG --short-errors --print-errors --cflags --libs "$2" 2>&1`
        else
	        $1[]_PKG_ERRORS=`$PKG_CONFIG --print-errors --cflags --libs "$2" 2>&1`
        fi
	# Put the nasty error message in config.log where it belongs
	echo "$$1[]_PKG_ERRORS" >&AS_MESSAGE_LOG_FD

	m4_default([$4], [AC_MSG_ERROR(
[Package requirements ($2) were not met:

$$1_PKG_ERRORS

Consider adjusting the PKG_CONFIG_PATH environment variable if you
installed software in a non-standard prefix.

_PKG_TEXT])[]dnl
        ])
elif test $pkg_failed = untried; then
        AC_MSG_RESULT([no])
	m4_default([$4], [AC_MSG_FAILURE(
[The pkg-config script could not be found or is too old.  Make sure it
is in your PATH or set the PKG_CONFIG environment variable to the full
path to pkg-config.

_PKG_TEXT

To get pkg-config, see <http://pkg-config.freedesktop.org/>.])[]dnl
        ])
else
	$1[]_CFLAGS=$pkg_cv_[]$1[]_CFLAGS
	$1[]_LIBS=$pkg_cv_[]$1[]_LIBS
        AC_MSG_RESULT([yes])
	$3
fi[]dnl
])# PKG_CHECK_MODULES


# PKG_INSTALLDIR(DIRECTORY)
# -------------------------
# Substitutes the variable pkgconfigdir as the location where a module
# should install pkg-config .pc files.  By default the directory is
# $libdir/pkgconfig, but the default can be changed by passing
# DIRECTORY.  The user can override through the --with-pkgconfigdir
# parameter.
AC_DEFUN([PKG_INSTALLDIR],
[m4_pushdef([pkg_default], [m4_default([$1], ['${libdir}/pkgconfig'])])
m4_pushdef([pkg_description],
    [pkg-config installation directory @<:@]pkg_default[@:>@])
AC_ARG_WITH([pkgconfigdir],
    [AS_HELP_STRING([--with-pkgconfigdir], pkg_description)],,
    [with_pkgconfigdir=]pkg_default)
AC_SUBST([pkgconfigdir], [$with_pkgconfigdir])
m4_popdef([pkg_default])
m4_popdef([pkg_description])
]) dnl PKG_INSTALLDIR


# PKG_NOARCH_INSTALLDIR(DIRECTORY)
# -------------------------
# Substitutes the variable noarch_pkgconfigdir as the location where a
# module should install arch-independent pkg-config .pc files.  By
# default the directory is $datadir/pkgconfig, but the default can be
# changed by passing DIRECTORY.  The user can override through the
# --with-noarch-pkgconfigdir parameter.
AC_DEFUN([PKG_NOARCH_INSTALLDIR], [m4_pushdef([pkg_default], [m4_default([$1], ['${datadir}/pkgconfig'])]) m4_pushdef([pkg_description], [pkg-config arch-independent installation directory @<:@]pkg_default[@:>@]) AC_ARG_WITH([noarch-pkgconfigdir], [AS_HELP_STRING([--with-noarch-pkgconfigdir], pkg_description)],, [with_noarch_pkgconfigdir=]pkg_default) AC_SUBST([noarch_pkgconfigdir], [$with_noarch_pkgconfigdir]) m4_popdef([pkg_default]) m4_popdef([pkg_description]) ]) dnl PKG_NOARCH_INSTALLDIR # PKG_CHECK_VAR(VARIABLE, MODULE, CONFIG-VARIABLE, # [ACTION-IF-FOUND], [ACTION-IF-NOT-FOUND]) # ------------------------------------------- # Retrieves the value of the pkg-config variable for the given module. AC_DEFUN([PKG_CHECK_VAR], [AC_REQUIRE([PKG_PROG_PKG_CONFIG])dnl AC_ARG_VAR([$1], [value of $3 for $2, overriding pkg-config])dnl _PKG_CONFIG([$1], [variable="][$3]["], [$2]) AS_VAR_COPY([$1], [pkg_cv_][$1]) AS_VAR_IF([$1], [""], [$5], [$4])dnl ])# PKG_CHECK_VAR # Copyright (C) 2002-2013 Free Software Foundation, Inc. # # This file is free software; the Free Software Foundation # gives unlimited permission to copy and/or distribute it, # with or without modifications, as long as this notice is preserved. # AM_AUTOMAKE_VERSION(VERSION) # ---------------------------- # Automake X.Y traces this macro to ensure aclocal.m4 has been # generated from the m4 files accompanying Automake X.Y. # (This private macro should not be called outside this file.) AC_DEFUN([AM_AUTOMAKE_VERSION], [am__api_version='1.14' dnl Some users find AM_AUTOMAKE_VERSION and mistake it for a way to dnl require some minimum version. Point them to the right macro. m4_if([$1], [1.14.1], [], [AC_FATAL([Do not call $0, use AM_INIT_AUTOMAKE([$1]).])])dnl ]) # _AM_AUTOCONF_VERSION(VERSION) # ----------------------------- # aclocal traces this macro to find the Autoconf version. # This is a private macro too. Using m4_define simplifies # the logic in aclocal, which can simply ignore this definition. 
m4_define([_AM_AUTOCONF_VERSION], [])

# AM_SET_CURRENT_AUTOMAKE_VERSION
# -------------------------------
# Call AM_AUTOMAKE_VERSION and _AM_AUTOCONF_VERSION so they can be traced.
# This function is AC_REQUIREd by AM_INIT_AUTOMAKE.
AC_DEFUN([AM_SET_CURRENT_AUTOMAKE_VERSION],
[AM_AUTOMAKE_VERSION([1.14.1])dnl
m4_ifndef([AC_AUTOCONF_VERSION],
  [m4_copy([m4_PACKAGE_VERSION], [AC_AUTOCONF_VERSION])])dnl
_AM_AUTOCONF_VERSION(m4_defn([AC_AUTOCONF_VERSION]))])

# AM_AUX_DIR_EXPAND                                         -*- Autoconf -*-

# Copyright (C) 2001-2013 Free Software Foundation, Inc.
#
# This file is free software; the Free Software Foundation
# gives unlimited permission to copy and/or distribute it,
# with or without modifications, as long as this notice is preserved.

# For projects using AC_CONFIG_AUX_DIR([foo]), Autoconf sets
# $ac_aux_dir to '$srcdir/foo'.  In other projects, it is set to
# '$srcdir', '$srcdir/..', or '$srcdir/../..'.
#
# Of course, Automake must honor this variable whenever it calls a
# tool from the auxiliary directory.  The problem is that $srcdir (and
# therefore $ac_aux_dir as well) can be either absolute or relative,
# depending on how configure is run.  This is pretty annoying, since
# it makes $ac_aux_dir quite unusable in subdirectories: in the top
# source directory, any form will work fine, but in subdirectories a
# relative path needs to be adjusted first.
#
# $ac_aux_dir/missing
#    fails when called from a subdirectory if $ac_aux_dir is relative
# $top_srcdir/$ac_aux_dir/missing
#    fails if $ac_aux_dir is absolute,
#    fails when called from a subdirectory in a VPATH build with
#          a relative $ac_aux_dir
#
# The reason for the latter failure is that $top_srcdir and $ac_aux_dir
# are both prefixed by $srcdir.  In an in-source build this is usually
# harmless because $srcdir is '.', but things will break when you
# start a VPATH build or use an absolute $srcdir.
#
# So we could use something similar to $top_srcdir/$ac_aux_dir/missing,
# iff we strip the leading $srcdir from $ac_aux_dir.  That would be:
#   am_aux_dir='\$(top_srcdir)/'`expr "$ac_aux_dir" : "$srcdir//*\(.*\)"`
# and then we would define $MISSING as
#   MISSING="\${SHELL} $am_aux_dir/missing"
# This will work as long as MISSING is not called from configure, because
# unfortunately $(top_srcdir) has no meaning in configure.
# However there are other variables, like CC, which are often used in
# configure, and could therefore not use this "fixed" $ac_aux_dir.
#
# Another solution, used here, is to always expand $ac_aux_dir to an
# absolute PATH.  The drawback is that using absolute paths prevents a
# configured tree from being moved without reconfiguration.

AC_DEFUN([AM_AUX_DIR_EXPAND],
[dnl Rely on autoconf to set up CDPATH properly.
AC_PREREQ([2.50])dnl
# expand $ac_aux_dir to an absolute path
am_aux_dir=`cd $ac_aux_dir && pwd`
])

# AM_CONDITIONAL                                            -*- Autoconf -*-

# Copyright (C) 1997-2013 Free Software Foundation, Inc.
#
# This file is free software; the Free Software Foundation
# gives unlimited permission to copy and/or distribute it,
# with or without modifications, as long as this notice is preserved.

# AM_CONDITIONAL(NAME, SHELL-CONDITION)
# -------------------------------------
# Define a conditional.
AC_DEFUN([AM_CONDITIONAL],
[AC_PREREQ([2.52])dnl
 m4_if([$1], [TRUE],  [AC_FATAL([$0: invalid condition: $1])],
       [$1], [FALSE], [AC_FATAL([$0: invalid condition: $1])])dnl
AC_SUBST([$1_TRUE])dnl
AC_SUBST([$1_FALSE])dnl
_AM_SUBST_NOTMAKE([$1_TRUE])dnl
_AM_SUBST_NOTMAKE([$1_FALSE])dnl
m4_define([_AM_COND_VALUE_$1], [$2])dnl
if $2; then
  $1_TRUE=
  $1_FALSE='#'
else
  $1_TRUE='#'
  $1_FALSE=
fi
AC_CONFIG_COMMANDS_PRE(
[if test -z "${$1_TRUE}" && test -z "${$1_FALSE}"; then
  AC_MSG_ERROR([[conditional "$1" was never defined.
Usually this means the macro was only invoked conditionally.]])
fi])])

# Copyright (C) 1999-2013 Free Software Foundation, Inc.
#
# This file is free software; the Free Software Foundation
# gives unlimited permission to copy and/or distribute it,
# with or without modifications, as long as this notice is preserved.

# There are a few dirty hacks below to avoid letting 'AC_PROG_CC' be
# written in clear, in which case automake, when reading aclocal.m4,
# will think it sees a *use*, and therefore will trigger all its
# C support machinery.  Also note that it means that autoscan, seeing
# CC etc. in the Makefile, will ask for an AC_PROG_CC use...

# _AM_DEPENDENCIES(NAME)
# ----------------------
# See how the compiler implements dependency checking.
# NAME is "CC", "CXX", "OBJC", "OBJCXX", "UPC", or "GCJ".
# We try a few techniques and use that to set a single cache variable.
#
# We don't AC_REQUIRE the corresponding AC_PROG_CC since the latter was
# modified to invoke _AM_DEPENDENCIES(CC); we would have a circular
# dependency, and given that the user is not expected to run this macro,
# just rely on AC_PROG_CC.
AC_DEFUN([_AM_DEPENDENCIES],
[AC_REQUIRE([AM_SET_DEPDIR])dnl
AC_REQUIRE([AM_OUTPUT_DEPENDENCY_COMMANDS])dnl
AC_REQUIRE([AM_MAKE_INCLUDE])dnl
AC_REQUIRE([AM_DEP_TRACK])dnl

m4_if([$1], [CC],   [depcc="$CC"   am_compiler_list=],
      [$1], [CXX],  [depcc="$CXX"  am_compiler_list=],
      [$1], [OBJC], [depcc="$OBJC" am_compiler_list='gcc3 gcc'],
      [$1], [OBJCXX], [depcc="$OBJCXX" am_compiler_list='gcc3 gcc'],
      [$1], [UPC],  [depcc="$UPC"  am_compiler_list=],
      [$1], [GCJ],  [depcc="$GCJ"  am_compiler_list='gcc3 gcc'],
                    [depcc="$$1"   am_compiler_list=])

AC_CACHE_CHECK([dependency style of $depcc],
               [am_cv_$1_dependencies_compiler_type],
[if test -z "$AMDEP_TRUE" && test -f "$am_depcomp"; then
  # We make a subdir and do the tests there.  Otherwise we can end up
  # making bogus files that we don't know about and never remove.  For
  # instance it was reported that on HP-UX the gcc test will end up
  # making a dummy file named 'D' -- because '-MD' means "put the output
  # in D".
rm -rf conftest.dir mkdir conftest.dir # Copy depcomp to subdir because otherwise we won't find it if we're # using a relative directory. cp "$am_depcomp" conftest.dir cd conftest.dir # We will build objects and dependencies in a subdirectory because # it helps to detect inapplicable dependency modes. For instance # both Tru64's cc and ICC support -MD to output dependencies as a # side effect of compilation, but ICC will put the dependencies in # the current directory while Tru64 will put them in the object # directory. mkdir sub am_cv_$1_dependencies_compiler_type=none if test "$am_compiler_list" = ""; then am_compiler_list=`sed -n ['s/^#*\([a-zA-Z0-9]*\))$/\1/p'] < ./depcomp` fi am__universal=false m4_case([$1], [CC], [case " $depcc " in #( *\ -arch\ *\ -arch\ *) am__universal=true ;; esac], [CXX], [case " $depcc " in #( *\ -arch\ *\ -arch\ *) am__universal=true ;; esac]) for depmode in $am_compiler_list; do # Setup a source with many dependencies, because some compilers # like to wrap large dependency lists on column 80 (with \), and # we should not choose a depcomp mode which is confused by this. # # We need to recreate these files for each test, as the compiler may # overwrite some of them when testing with obscure command lines. # This happens at least with the AIX C compiler. : > sub/conftest.c for i in 1 2 3 4 5 6; do echo '#include "conftst'$i'.h"' >> sub/conftest.c # Using ": > sub/conftst$i.h" creates only sub/conftst1.h with # Solaris 10 /bin/sh. echo '/* dummy */' > sub/conftst$i.h done echo "${am__include} ${am__quote}sub/conftest.Po${am__quote}" > confmf # We check with '-c' and '-o' for the sake of the "dashmstdout" # mode. It turns out that the SunPro C++ compiler does not properly # handle '-M -o', and we need to detect this. Also, some Intel # versions had trouble with output in subdirs. am__obj=sub/conftest.${OBJEXT-o} am__minus_obj="-o $am__obj" case $depmode in gcc) # This depmode causes a compiler race in universal mode. 
test "$am__universal" = false || continue ;; nosideeffect) # After this tag, mechanisms are not by side-effect, so they'll # only be used when explicitly requested. if test "x$enable_dependency_tracking" = xyes; then continue else break fi ;; msvc7 | msvc7msys | msvisualcpp | msvcmsys) # This compiler won't grok '-c -o', but also, the minuso test has # not run yet. These depmodes are late enough in the game, and # so weak that their functioning should not be impacted. am__obj=conftest.${OBJEXT-o} am__minus_obj= ;; none) break ;; esac if depmode=$depmode \ source=sub/conftest.c object=$am__obj \ depfile=sub/conftest.Po tmpdepfile=sub/conftest.TPo \ $SHELL ./depcomp $depcc -c $am__minus_obj sub/conftest.c \ >/dev/null 2>conftest.err && grep sub/conftst1.h sub/conftest.Po > /dev/null 2>&1 && grep sub/conftst6.h sub/conftest.Po > /dev/null 2>&1 && grep $am__obj sub/conftest.Po > /dev/null 2>&1 && ${MAKE-make} -s -f confmf > /dev/null 2>&1; then # icc doesn't choke on unknown options, it will just issue warnings # or remarks (even with -Werror). So we grep stderr for any message # that says an option was ignored or not supported. # When given -MP, icc 7.0 and 7.1 complain thusly: # icc: Command line warning: ignoring option '-M'; no argument required # The diagnosis changed in icc 8.0: # icc: Command line remark: option '-MP' not supported if (grep 'ignoring option' conftest.err || grep 'not supported' conftest.err) >/dev/null 2>&1; then :; else am_cv_$1_dependencies_compiler_type=$depmode break fi fi done cd .. rm -rf conftest.dir else am_cv_$1_dependencies_compiler_type=none fi ]) AC_SUBST([$1DEPMODE], [depmode=$am_cv_$1_dependencies_compiler_type]) AM_CONDITIONAL([am__fastdep$1], [ test "x$enable_dependency_tracking" != xno \ && test "$am_cv_$1_dependencies_compiler_type" = gcc3]) ]) # AM_SET_DEPDIR # ------------- # Choose a directory name for dependency files. # This macro is AC_REQUIREd in _AM_DEPENDENCIES. 
AC_DEFUN([AM_SET_DEPDIR],
[AC_REQUIRE([AM_SET_LEADING_DOT])dnl
AC_SUBST([DEPDIR], ["${am__leading_dot}deps"])dnl
])

# AM_DEP_TRACK
# ------------
AC_DEFUN([AM_DEP_TRACK],
[AC_ARG_ENABLE([dependency-tracking], [dnl
AS_HELP_STRING(
  [--enable-dependency-tracking],
  [do not reject slow dependency extractors])
AS_HELP_STRING(
  [--disable-dependency-tracking],
  [speeds up one-time build])])
if test "x$enable_dependency_tracking" != xno; then
  am_depcomp="$ac_aux_dir/depcomp"
  AMDEPBACKSLASH='\'
  am__nodep='_no'
fi
AM_CONDITIONAL([AMDEP], [test "x$enable_dependency_tracking" != xno])
AC_SUBST([AMDEPBACKSLASH])dnl
_AM_SUBST_NOTMAKE([AMDEPBACKSLASH])dnl
AC_SUBST([am__nodep])dnl
_AM_SUBST_NOTMAKE([am__nodep])dnl
])

# Generate code to set up dependency tracking.              -*- Autoconf -*-

# Copyright (C) 1999-2013 Free Software Foundation, Inc.
#
# This file is free software; the Free Software Foundation
# gives unlimited permission to copy and/or distribute it,
# with or without modifications, as long as this notice is preserved.

# _AM_OUTPUT_DEPENDENCY_COMMANDS
# ------------------------------
AC_DEFUN([_AM_OUTPUT_DEPENDENCY_COMMANDS],
[{
  # Older Autoconf quotes --file arguments for eval, but not when files
  # are listed without --file.  Let's play safe and only enable the eval
  # if we detect the quoting.
  case $CONFIG_FILES in
  *\'*) eval set x "$CONFIG_FILES" ;;
  *)   set x $CONFIG_FILES ;;
  esac
  shift
  for mf
  do
    # Strip MF so we end up with the name of the file.
    mf=`echo "$mf" | sed -e 's/:.*$//'`
    # Check whether this is an Automake generated Makefile or not.
    # We used to match only the files named 'Makefile.in', but
    # some people rename them; so instead we look at the file content.
    # Grep'ing the first line is not enough: some people post-process
    # each Makefile.in and add a new line on top of each file to say so.
    # Grep'ing the whole file is not good either: AIX grep has a line
    # limit of 2048, but all seds we know understand at least 4000.
if sed -n 's,^#.*generated by automake.*,X,p' "$mf" | grep X >/dev/null 2>&1; then dirpart=`AS_DIRNAME("$mf")` else continue fi # Extract the definition of DEPDIR, am__include, and am__quote # from the Makefile without running 'make'. DEPDIR=`sed -n 's/^DEPDIR = //p' < "$mf"` test -z "$DEPDIR" && continue am__include=`sed -n 's/^am__include = //p' < "$mf"` test -z "$am__include" && continue am__quote=`sed -n 's/^am__quote = //p' < "$mf"` # Find all dependency output files, they are included files with # $(DEPDIR) in their names. We invoke sed twice because it is the # simplest approach to changing $(DEPDIR) to its actual value in the # expansion. for file in `sed -n " s/^$am__include $am__quote\(.*(DEPDIR).*\)$am__quote"'$/\1/p' <"$mf" | \ sed -e 's/\$(DEPDIR)/'"$DEPDIR"'/g'`; do # Make sure the directory exists. test -f "$dirpart/$file" && continue fdir=`AS_DIRNAME(["$file"])` AS_MKDIR_P([$dirpart/$fdir]) # echo "creating $dirpart/$file" echo '# dummy' > "$dirpart/$file" done done } ])# _AM_OUTPUT_DEPENDENCY_COMMANDS # AM_OUTPUT_DEPENDENCY_COMMANDS # ----------------------------- # This macro should only be invoked once -- use via AC_REQUIRE. # # This code is only required when automatic dependency tracking # is enabled. FIXME. This creates each '.P' file that we will # need in order to bootstrap the dependency handling code. AC_DEFUN([AM_OUTPUT_DEPENDENCY_COMMANDS], [AC_CONFIG_COMMANDS([depfiles], [test x"$AMDEP_TRUE" != x"" || _AM_OUTPUT_DEPENDENCY_COMMANDS], [AMDEP_TRUE="$AMDEP_TRUE" ac_aux_dir="$ac_aux_dir"]) ]) # Do all the work for Automake. -*- Autoconf -*- # Copyright (C) 1996-2013 Free Software Foundation, Inc. # # This file is free software; the Free Software Foundation # gives unlimited permission to copy and/or distribute it, # with or without modifications, as long as this notice is preserved. # This macro actually does too much. Some checks are only needed if # your package does certain things. But this isn't really a big deal. 
dnl Redefine AC_PROG_CC to automatically invoke _AM_PROG_CC_C_O. m4_define([AC_PROG_CC], m4_defn([AC_PROG_CC]) [_AM_PROG_CC_C_O ]) # AM_INIT_AUTOMAKE(PACKAGE, VERSION, [NO-DEFINE]) # AM_INIT_AUTOMAKE([OPTIONS]) # ----------------------------------------------- # The call with PACKAGE and VERSION arguments is the old style # call (pre autoconf-2.50), which is being phased out. PACKAGE # and VERSION should now be passed to AC_INIT and removed from # the call to AM_INIT_AUTOMAKE. # We support both call styles for the transition. After # the next Automake release, Autoconf can make the AC_INIT # arguments mandatory, and then we can depend on a new Autoconf # release and drop the old call support. AC_DEFUN([AM_INIT_AUTOMAKE], [AC_PREREQ([2.65])dnl dnl Autoconf wants to disallow AM_ names. We explicitly allow dnl the ones we care about. m4_pattern_allow([^AM_[A-Z]+FLAGS$])dnl AC_REQUIRE([AM_SET_CURRENT_AUTOMAKE_VERSION])dnl AC_REQUIRE([AC_PROG_INSTALL])dnl if test "`cd $srcdir && pwd`" != "`pwd`"; then # Use -I$(srcdir) only when $(srcdir) != ., so that make's output # is not polluted with repeated "-I." AC_SUBST([am__isrc], [' -I$(srcdir)'])_AM_SUBST_NOTMAKE([am__isrc])dnl # test to see if srcdir already configured if test -f $srcdir/config.status; then AC_MSG_ERROR([source directory already configured; run "make distclean" there first]) fi fi # test whether we have cygpath if test -z "$CYGPATH_W"; then if (cygpath --version) >/dev/null 2>/dev/null; then CYGPATH_W='cygpath -w' else CYGPATH_W=echo fi fi AC_SUBST([CYGPATH_W]) # Define the identity of the package. dnl Distinguish between old-style and new-style calls. m4_ifval([$2], [AC_DIAGNOSE([obsolete], [$0: two- and three-arguments forms are deprecated.]) m4_ifval([$3], [_AM_SET_OPTION([no-define])])dnl AC_SUBST([PACKAGE], [$1])dnl AC_SUBST([VERSION], [$2])], [_AM_SET_OPTIONS([$1])dnl dnl Diagnose old-style AC_INIT with new-style AM_AUTOMAKE_INIT. 
m4_if( m4_ifdef([AC_PACKAGE_NAME], [ok]):m4_ifdef([AC_PACKAGE_VERSION], [ok]), [ok:ok],, [m4_fatal([AC_INIT should be called with package and version arguments])])dnl AC_SUBST([PACKAGE], ['AC_PACKAGE_TARNAME'])dnl AC_SUBST([VERSION], ['AC_PACKAGE_VERSION'])])dnl _AM_IF_OPTION([no-define],, [AC_DEFINE_UNQUOTED([PACKAGE], ["$PACKAGE"], [Name of package]) AC_DEFINE_UNQUOTED([VERSION], ["$VERSION"], [Version number of package])])dnl # Some tools Automake needs. AC_REQUIRE([AM_SANITY_CHECK])dnl AC_REQUIRE([AC_ARG_PROGRAM])dnl AM_MISSING_PROG([ACLOCAL], [aclocal-${am__api_version}]) AM_MISSING_PROG([AUTOCONF], [autoconf]) AM_MISSING_PROG([AUTOMAKE], [automake-${am__api_version}]) AM_MISSING_PROG([AUTOHEADER], [autoheader]) AM_MISSING_PROG([MAKEINFO], [makeinfo]) AC_REQUIRE([AM_PROG_INSTALL_SH])dnl AC_REQUIRE([AM_PROG_INSTALL_STRIP])dnl AC_REQUIRE([AC_PROG_MKDIR_P])dnl # For better backward compatibility. To be removed once Automake 1.9.x # dies out for good. For more background, see: # # AC_SUBST([mkdir_p], ['$(MKDIR_P)']) # We need awk for the "check" target. The system "awk" is bad on # some platforms. 
AC_REQUIRE([AC_PROG_AWK])dnl AC_REQUIRE([AC_PROG_MAKE_SET])dnl AC_REQUIRE([AM_SET_LEADING_DOT])dnl _AM_IF_OPTION([tar-ustar], [_AM_PROG_TAR([ustar])], [_AM_IF_OPTION([tar-pax], [_AM_PROG_TAR([pax])], [_AM_PROG_TAR([v7])])]) _AM_IF_OPTION([no-dependencies],, [AC_PROVIDE_IFELSE([AC_PROG_CC], [_AM_DEPENDENCIES([CC])], [m4_define([AC_PROG_CC], m4_defn([AC_PROG_CC])[_AM_DEPENDENCIES([CC])])])dnl AC_PROVIDE_IFELSE([AC_PROG_CXX], [_AM_DEPENDENCIES([CXX])], [m4_define([AC_PROG_CXX], m4_defn([AC_PROG_CXX])[_AM_DEPENDENCIES([CXX])])])dnl AC_PROVIDE_IFELSE([AC_PROG_OBJC], [_AM_DEPENDENCIES([OBJC])], [m4_define([AC_PROG_OBJC], m4_defn([AC_PROG_OBJC])[_AM_DEPENDENCIES([OBJC])])])dnl AC_PROVIDE_IFELSE([AC_PROG_OBJCXX], [_AM_DEPENDENCIES([OBJCXX])], [m4_define([AC_PROG_OBJCXX], m4_defn([AC_PROG_OBJCXX])[_AM_DEPENDENCIES([OBJCXX])])])dnl ]) AC_REQUIRE([AM_SILENT_RULES])dnl dnl The testsuite driver may need to know about EXEEXT, so add the dnl 'am__EXEEXT' conditional if _AM_COMPILER_EXEEXT was seen. This dnl macro is hooked onto _AC_COMPILER_EXEEXT early, see below. AC_CONFIG_COMMANDS_PRE(dnl [m4_provide_if([_AM_COMPILER_EXEEXT], [AM_CONDITIONAL([am__EXEEXT], [test -n "$EXEEXT"])])])dnl # POSIX will say in a future version that running "rm -f" with no argument # is OK; and we want to be able to make that assumption in our Makefile # recipes. So use an aggressive probe to check that the usage we want is # actually supported "in the wild" to an acceptable degree. # See automake bug#10828. # To make any issue more visible, cause the running configure to be aborted # by default if the 'rm' program in use doesn't match our expectations; the # user can still override this though. if rm -f && rm -fr && rm -rf; then : OK; else cat >&2 <<'END' Oops! Your 'rm' program seems unable to run without file operands specified on the command line, even when the '-f' option is present. 
This is contrary to the behaviour of most rm programs out there, and not conforming with the upcoming POSIX standard: Please tell bug-automake@gnu.org about your system, including the value of your $PATH and any error possibly output before this message. This can help us improve future automake versions. END if test x"$ACCEPT_INFERIOR_RM_PROGRAM" = x"yes"; then echo 'Configuration will proceed anyway, since you have set the' >&2 echo 'ACCEPT_INFERIOR_RM_PROGRAM variable to "yes"' >&2 echo >&2 else cat >&2 <<'END' Aborting the configuration process, to ensure you take notice of the issue. You can download and install GNU coreutils to get an 'rm' implementation that behaves properly: . If you want to complete the configuration process using your problematic 'rm' anyway, export the environment variable ACCEPT_INFERIOR_RM_PROGRAM to "yes", and re-run configure. END AC_MSG_ERROR([Your 'rm' program is bad, sorry.]) fi fi ]) dnl Hook into '_AC_COMPILER_EXEEXT' early to learn its expansion. Do not dnl add the conditional right here, as _AC_COMPILER_EXEEXT may be further dnl mangled by Autoconf and run in a shell conditional statement. m4_define([_AC_COMPILER_EXEEXT], m4_defn([_AC_COMPILER_EXEEXT])[m4_provide([_AM_COMPILER_EXEEXT])]) # When config.status generates a header, we must update the stamp-h file. # This file resides in the same directory as the config header # that is generated. The stamp files are numbered to have different names. # Autoconf calls _AC_AM_CONFIG_HEADER_HOOK (when defined) in the # loop where config.status creates the headers, so we can generate # our stamp files there. AC_DEFUN([_AC_AM_CONFIG_HEADER_HOOK], [# Compute $1's index in $config_headers. 
_am_arg=$1 _am_stamp_count=1 for _am_header in $config_headers :; do case $_am_header in $_am_arg | $_am_arg:* ) break ;; * ) _am_stamp_count=`expr $_am_stamp_count + 1` ;; esac done echo "timestamp for $_am_arg" >`AS_DIRNAME(["$_am_arg"])`/stamp-h[]$_am_stamp_count]) # Copyright (C) 2001-2013 Free Software Foundation, Inc. # # This file is free software; the Free Software Foundation # gives unlimited permission to copy and/or distribute it, # with or without modifications, as long as this notice is preserved. # AM_PROG_INSTALL_SH # ------------------ # Define $install_sh. AC_DEFUN([AM_PROG_INSTALL_SH], [AC_REQUIRE([AM_AUX_DIR_EXPAND])dnl if test x"${install_sh}" != xset; then case $am_aux_dir in *\ * | *\ *) install_sh="\${SHELL} '$am_aux_dir/install-sh'" ;; *) install_sh="\${SHELL} $am_aux_dir/install-sh" esac fi AC_SUBST([install_sh])]) # Copyright (C) 2003-2013 Free Software Foundation, Inc. # # This file is free software; the Free Software Foundation # gives unlimited permission to copy and/or distribute it, # with or without modifications, as long as this notice is preserved. # Check whether the underlying file-system supports filenames # with a leading dot. For instance MS-DOS doesn't. AC_DEFUN([AM_SET_LEADING_DOT], [rm -rf .tst 2>/dev/null mkdir .tst 2>/dev/null if test -d .tst; then am__leading_dot=. else am__leading_dot=_ fi rmdir .tst 2>/dev/null AC_SUBST([am__leading_dot])]) # Add --enable-maintainer-mode option to configure. -*- Autoconf -*- # From Jim Meyering # Copyright (C) 1996-2013 Free Software Foundation, Inc. # # This file is free software; the Free Software Foundation # gives unlimited permission to copy and/or distribute it, # with or without modifications, as long as this notice is preserved. # AM_MAINTAINER_MODE([DEFAULT-MODE]) # ---------------------------------- # Control maintainer-specific portions of Makefiles. # Default is to disable them, unless 'enable' is passed literally. # For symmetry, 'disable' may be passed as well. 
Anyway, the user # can override the default with the --enable/--disable switch. AC_DEFUN([AM_MAINTAINER_MODE], [m4_case(m4_default([$1], [disable]), [enable], [m4_define([am_maintainer_other], [disable])], [disable], [m4_define([am_maintainer_other], [enable])], [m4_define([am_maintainer_other], [enable]) m4_warn([syntax], [unexpected argument to AM@&t@_MAINTAINER_MODE: $1])]) AC_MSG_CHECKING([whether to enable maintainer-specific portions of Makefiles]) dnl maintainer-mode's default is 'disable' unless 'enable' is passed AC_ARG_ENABLE([maintainer-mode], [AS_HELP_STRING([--]am_maintainer_other[-maintainer-mode], am_maintainer_other[ make rules and dependencies not useful (and sometimes confusing) to the casual installer])], [USE_MAINTAINER_MODE=$enableval], [USE_MAINTAINER_MODE=]m4_if(am_maintainer_other, [enable], [no], [yes])) AC_MSG_RESULT([$USE_MAINTAINER_MODE]) AM_CONDITIONAL([MAINTAINER_MODE], [test $USE_MAINTAINER_MODE = yes]) MAINT=$MAINTAINER_MODE_TRUE AC_SUBST([MAINT])dnl ] ) # Check to see how 'make' treats includes. -*- Autoconf -*- # Copyright (C) 2001-2013 Free Software Foundation, Inc. # # This file is free software; the Free Software Foundation # gives unlimited permission to copy and/or distribute it, # with or without modifications, as long as this notice is preserved. # AM_MAKE_INCLUDE() # ----------------- # Check to see how make treats includes. AC_DEFUN([AM_MAKE_INCLUDE], [am_make=${MAKE-make} cat > confinc << 'END' am__doit: @echo this is the am__doit target .PHONY: am__doit END # If we don't find an include directive, just comment out the code. AC_MSG_CHECKING([for style of include used by $am_make]) am__include="#" am__quote= _am_result=none # First try GNU make style include. echo "include confinc" > confmf # Ignore all kinds of additional output from 'make'. case `$am_make -s -f confmf 2> /dev/null` in #( *the\ am__doit\ target*) am__include=include am__quote= _am_result=GNU ;; esac # Now try BSD make style include. 
if test "$am__include" = "#"; then echo '.include "confinc"' > confmf case `$am_make -s -f confmf 2> /dev/null` in #( *the\ am__doit\ target*) am__include=.include am__quote="\"" _am_result=BSD ;; esac fi AC_SUBST([am__include]) AC_SUBST([am__quote]) AC_MSG_RESULT([$_am_result]) rm -f confinc confmf ]) # Fake the existence of programs that GNU maintainers use. -*- Autoconf -*- # Copyright (C) 1997-2013 Free Software Foundation, Inc. # # This file is free software; the Free Software Foundation # gives unlimited permission to copy and/or distribute it, # with or without modifications, as long as this notice is preserved. # AM_MISSING_PROG(NAME, PROGRAM) # ------------------------------ AC_DEFUN([AM_MISSING_PROG], [AC_REQUIRE([AM_MISSING_HAS_RUN]) $1=${$1-"${am_missing_run}$2"} AC_SUBST($1)]) # AM_MISSING_HAS_RUN # ------------------ # Define MISSING if not defined so far and test if it is modern enough. # If it is, set am_missing_run to use it, otherwise, to nothing. AC_DEFUN([AM_MISSING_HAS_RUN], [AC_REQUIRE([AM_AUX_DIR_EXPAND])dnl AC_REQUIRE_AUX_FILE([missing])dnl if test x"${MISSING+set}" != xset; then case $am_aux_dir in *\ * | *\ *) MISSING="\${SHELL} \"$am_aux_dir/missing\"" ;; *) MISSING="\${SHELL} $am_aux_dir/missing" ;; esac fi # Use eval to expand $SHELL if eval "$MISSING --is-lightweight"; then am_missing_run="$MISSING " else am_missing_run= AC_MSG_WARN(['missing' script is too old or missing]) fi ]) # Helper functions for option handling. -*- Autoconf -*- # Copyright (C) 2001-2013 Free Software Foundation, Inc. # # This file is free software; the Free Software Foundation # gives unlimited permission to copy and/or distribute it, # with or without modifications, as long as this notice is preserved. # _AM_MANGLE_OPTION(NAME) # ----------------------- AC_DEFUN([_AM_MANGLE_OPTION], [[_AM_OPTION_]m4_bpatsubst($1, [[^a-zA-Z0-9_]], [_])]) # _AM_SET_OPTION(NAME) # -------------------- # Set option NAME. Presently that only means defining a flag for this option. 
AC_DEFUN([_AM_SET_OPTION], [m4_define(_AM_MANGLE_OPTION([$1]), [1])]) # _AM_SET_OPTIONS(OPTIONS) # ------------------------ # OPTIONS is a space-separated list of Automake options. AC_DEFUN([_AM_SET_OPTIONS], [m4_foreach_w([_AM_Option], [$1], [_AM_SET_OPTION(_AM_Option)])]) # _AM_IF_OPTION(OPTION, IF-SET, [IF-NOT-SET]) # ------------------------------------------- # Execute IF-SET if OPTION is set, IF-NOT-SET otherwise. AC_DEFUN([_AM_IF_OPTION], [m4_ifset(_AM_MANGLE_OPTION([$1]), [$2], [$3])]) # Copyright (C) 1999-2013 Free Software Foundation, Inc. # # This file is free software; the Free Software Foundation # gives unlimited permission to copy and/or distribute it, # with or without modifications, as long as this notice is preserved. # _AM_PROG_CC_C_O # --------------- # Like AC_PROG_CC_C_O, but changed for automake. We rewrite AC_PROG_CC # to automatically call this. AC_DEFUN([_AM_PROG_CC_C_O], [AC_REQUIRE([AM_AUX_DIR_EXPAND])dnl AC_REQUIRE_AUX_FILE([compile])dnl AC_LANG_PUSH([C])dnl AC_CACHE_CHECK( [whether $CC understands -c and -o together], [am_cv_prog_cc_c_o], [AC_LANG_CONFTEST([AC_LANG_PROGRAM([])]) # Make sure it works both with $CC and with simple cc. # Following AC_PROG_CC_C_O, we do the test twice because some # compilers refuse to overwrite an existing .o file with -o, # though they will create one. am_cv_prog_cc_c_o=yes for am_i in 1 2; do if AM_RUN_LOG([$CC -c conftest.$ac_ext -o conftest2.$ac_objext]) \ && test -f conftest2.$ac_objext; then : OK else am_cv_prog_cc_c_o=no break fi done rm -f core conftest* unset am_i]) if test "$am_cv_prog_cc_c_o" != yes; then # Losing compiler, so override with the script. # FIXME: It is wrong to rewrite CC. # But if we don't then we get into trouble of one sort or another. # A longer-term fix would be to have automake use am__CC in this case, # and then we could set am__CC="\$(top_srcdir)/compile \$(CC)" CC="$am_aux_dir/compile $CC" fi AC_LANG_POP([C])]) # For backward compatibility. 
AC_DEFUN_ONCE([AM_PROG_CC_C_O], [AC_REQUIRE([AC_PROG_CC])]) # Copyright (C) 2001-2013 Free Software Foundation, Inc. # # This file is free software; the Free Software Foundation # gives unlimited permission to copy and/or distribute it, # with or without modifications, as long as this notice is preserved. # AM_RUN_LOG(COMMAND) # ------------------- # Run COMMAND, save the exit status in ac_status, and log it. # (This has been adapted from Autoconf's _AC_RUN_LOG macro.) AC_DEFUN([AM_RUN_LOG], [{ echo "$as_me:$LINENO: $1" >&AS_MESSAGE_LOG_FD ($1) >&AS_MESSAGE_LOG_FD 2>&AS_MESSAGE_LOG_FD ac_status=$? echo "$as_me:$LINENO: \$? = $ac_status" >&AS_MESSAGE_LOG_FD (exit $ac_status); }]) # Check to make sure that the build environment is sane. -*- Autoconf -*- # Copyright (C) 1996-2013 Free Software Foundation, Inc. # # This file is free software; the Free Software Foundation # gives unlimited permission to copy and/or distribute it, # with or without modifications, as long as this notice is preserved. # AM_SANITY_CHECK # --------------- AC_DEFUN([AM_SANITY_CHECK], [AC_MSG_CHECKING([whether build environment is sane]) # Reject unsafe characters in $srcdir or the absolute working directory # name. Accept space and tab only in the latter. am_lf=' ' case `pwd` in *[[\\\"\#\$\&\'\`$am_lf]]*) AC_MSG_ERROR([unsafe absolute working directory name]);; esac case $srcdir in *[[\\\"\#\$\&\'\`$am_lf\ \ ]]*) AC_MSG_ERROR([unsafe srcdir value: '$srcdir']);; esac # Do 'set' in a subshell so we don't clobber the current shell's # arguments. Must try -L first in case configure is actually a # symlink; some systems play weird games with the mod time of symlinks # (eg FreeBSD returns the mod time of the symlink's containing # directory). if ( am_has_slept=no for am_try in 1 2; do echo "timestamp, slept: $am_has_slept" > conftest.file set X `ls -Lt "$srcdir/configure" conftest.file 2> /dev/null` if test "$[*]" = "X"; then # -L didn't work. 
set X `ls -t "$srcdir/configure" conftest.file` fi if test "$[*]" != "X $srcdir/configure conftest.file" \ && test "$[*]" != "X conftest.file $srcdir/configure"; then # If neither matched, then we have a broken ls. This can happen # if, for instance, CONFIG_SHELL is bash and it inherits a # broken ls alias from the environment. This has actually # happened. Such a system could not be considered "sane". AC_MSG_ERROR([ls -t appears to fail. Make sure there is not a broken alias in your environment]) fi if test "$[2]" = conftest.file || test $am_try -eq 2; then break fi # Just in case. sleep 1 am_has_slept=yes done test "$[2]" = conftest.file ) then # Ok. : else AC_MSG_ERROR([newly created file is older than distributed files! Check your system clock]) fi AC_MSG_RESULT([yes]) # If we didn't sleep, we still need to ensure time stamps of config.status and # generated files are strictly newer. am_sleep_pid= if grep 'slept: no' conftest.file >/dev/null 2>&1; then ( sleep 1 ) & am_sleep_pid=$! fi AC_CONFIG_COMMANDS_PRE( [AC_MSG_CHECKING([that generated files are newer than configure]) if test -n "$am_sleep_pid"; then # Hide warnings about reused PIDs. wait $am_sleep_pid 2>/dev/null fi AC_MSG_RESULT([done])]) rm -f conftest.file ]) # Copyright (C) 2009-2013 Free Software Foundation, Inc. # # This file is free software; the Free Software Foundation # gives unlimited permission to copy and/or distribute it, # with or without modifications, as long as this notice is preserved. # AM_SILENT_RULES([DEFAULT]) # -------------------------- # Enable less verbose build rules; with the default set to DEFAULT # ("yes" being less verbose, "no" or empty being verbose). 
AC_DEFUN([AM_SILENT_RULES], [AC_ARG_ENABLE([silent-rules], [dnl AS_HELP_STRING( [--enable-silent-rules], [less verbose build output (undo: "make V=1")]) AS_HELP_STRING( [--disable-silent-rules], [verbose build output (undo: "make V=0")])dnl ]) case $enable_silent_rules in @%:@ ((( yes) AM_DEFAULT_VERBOSITY=0;; no) AM_DEFAULT_VERBOSITY=1;; *) AM_DEFAULT_VERBOSITY=m4_if([$1], [yes], [0], [1]);; esac dnl dnl A few 'make' implementations (e.g., NonStop OS and NextStep) dnl do not support nested variable expansions. dnl See automake bug#9928 and bug#10237. am_make=${MAKE-make} AC_CACHE_CHECK([whether $am_make supports nested variables], [am_cv_make_support_nested_variables], [if AS_ECHO([['TRUE=$(BAR$(V)) BAR0=false BAR1=true V=1 am__doit: @$(TRUE) .PHONY: am__doit']]) | $am_make -f - >/dev/null 2>&1; then am_cv_make_support_nested_variables=yes else am_cv_make_support_nested_variables=no fi]) if test $am_cv_make_support_nested_variables = yes; then dnl Using '$V' instead of '$(V)' breaks IRIX make. AM_V='$(V)' AM_DEFAULT_V='$(AM_DEFAULT_VERBOSITY)' else AM_V=$AM_DEFAULT_VERBOSITY AM_DEFAULT_V=$AM_DEFAULT_VERBOSITY fi AC_SUBST([AM_V])dnl AM_SUBST_NOTMAKE([AM_V])dnl AC_SUBST([AM_DEFAULT_V])dnl AM_SUBST_NOTMAKE([AM_DEFAULT_V])dnl AC_SUBST([AM_DEFAULT_VERBOSITY])dnl AM_BACKSLASH='\' AC_SUBST([AM_BACKSLASH])dnl _AM_SUBST_NOTMAKE([AM_BACKSLASH])dnl ]) # Copyright (C) 2001-2013 Free Software Foundation, Inc. # # This file is free software; the Free Software Foundation # gives unlimited permission to copy and/or distribute it, # with or without modifications, as long as this notice is preserved. # AM_PROG_INSTALL_STRIP # --------------------- # One issue with vendor 'install' (even GNU) is that you can't # specify the program used to strip binaries. This is especially # annoying in cross-compiling environments, where the build's strip # is unlikely to handle the host's binaries. 
# Fortunately install-sh will honor a STRIPPROG variable, so we # always use install-sh in "make install-strip", and initialize # STRIPPROG with the value of the STRIP variable (set by the user). AC_DEFUN([AM_PROG_INSTALL_STRIP], [AC_REQUIRE([AM_PROG_INSTALL_SH])dnl # Installed binaries are usually stripped using 'strip' when the user # run "make install-strip". However 'strip' might not be the right # tool to use in cross-compilation environments, therefore Automake # will honor the 'STRIP' environment variable to overrule this program. dnl Don't test for $cross_compiling = yes, because it might be 'maybe'. if test "$cross_compiling" != no; then AC_CHECK_TOOL([STRIP], [strip], :) fi INSTALL_STRIP_PROGRAM="\$(install_sh) -c -s" AC_SUBST([INSTALL_STRIP_PROGRAM])]) # Copyright (C) 2006-2013 Free Software Foundation, Inc. # # This file is free software; the Free Software Foundation # gives unlimited permission to copy and/or distribute it, # with or without modifications, as long as this notice is preserved. # _AM_SUBST_NOTMAKE(VARIABLE) # --------------------------- # Prevent Automake from outputting VARIABLE = @VARIABLE@ in Makefile.in. # This macro is traced by Automake. AC_DEFUN([_AM_SUBST_NOTMAKE]) # AM_SUBST_NOTMAKE(VARIABLE) # -------------------------- # Public sister of _AM_SUBST_NOTMAKE. AC_DEFUN([AM_SUBST_NOTMAKE], [_AM_SUBST_NOTMAKE($@)]) # Check how to create a tarball. -*- Autoconf -*- # Copyright (C) 2004-2013 Free Software Foundation, Inc. # # This file is free software; the Free Software Foundation # gives unlimited permission to copy and/or distribute it, # with or without modifications, as long as this notice is preserved. # _AM_PROG_TAR(FORMAT) # -------------------- # Check how to create a tarball in format FORMAT. # FORMAT should be one of 'v7', 'ustar', or 'pax'. # # Substitute a variable $(am__tar) that is a command # writing to stdout a FORMAT-tarball containing the directory # $tardir. 
# tardir=directory && $(am__tar) > result.tar # # Substitute a variable $(am__untar) that extract such # a tarball read from stdin. # $(am__untar) < result.tar # AC_DEFUN([_AM_PROG_TAR], [# Always define AMTAR for backward compatibility. Yes, it's still used # in the wild :-( We should find a proper way to deprecate it ... AC_SUBST([AMTAR], ['$${TAR-tar}']) # We'll loop over all known methods to create a tar archive until one works. _am_tools='gnutar m4_if([$1], [ustar], [plaintar]) pax cpio none' m4_if([$1], [v7], [am__tar='$${TAR-tar} chof - "$$tardir"' am__untar='$${TAR-tar} xf -'], [m4_case([$1], [ustar], [# The POSIX 1988 'ustar' format is defined with fixed-size fields. # There is notably a 21 bits limit for the UID and the GID. In fact, # the 'pax' utility can hang on bigger UID/GID (see automake bug#8343 # and bug#13588). am_max_uid=2097151 # 2^21 - 1 am_max_gid=$am_max_uid # The $UID and $GID variables are not portable, so we need to resort # to the POSIX-mandated id(1) utility. Errors in the 'id' calls # below are definitely unexpected, so allow the users to see them # (that is, avoid stderr redirection). am_uid=`id -u || echo unknown` am_gid=`id -g || echo unknown` AC_MSG_CHECKING([whether UID '$am_uid' is supported by ustar format]) if test $am_uid -le $am_max_uid; then AC_MSG_RESULT([yes]) else AC_MSG_RESULT([no]) _am_tools=none fi AC_MSG_CHECKING([whether GID '$am_gid' is supported by ustar format]) if test $am_gid -le $am_max_gid; then AC_MSG_RESULT([yes]) else AC_MSG_RESULT([no]) _am_tools=none fi], [pax], [], [m4_fatal([Unknown tar format])]) AC_MSG_CHECKING([how to create a $1 tar archive]) # Go ahead even if we have the value already cached. We do so because we # need to set the values for the 'am__tar' and 'am__untar' variables. 
_am_tools=${am_cv_prog_tar_$1-$_am_tools} for _am_tool in $_am_tools; do case $_am_tool in gnutar) for _am_tar in tar gnutar gtar; do AM_RUN_LOG([$_am_tar --version]) && break done am__tar="$_am_tar --format=m4_if([$1], [pax], [posix], [$1]) -chf - "'"$$tardir"' am__tar_="$_am_tar --format=m4_if([$1], [pax], [posix], [$1]) -chf - "'"$tardir"' am__untar="$_am_tar -xf -" ;; plaintar) # Must skip GNU tar: if it does not support --format= it doesn't create # ustar tarball either. (tar --version) >/dev/null 2>&1 && continue am__tar='tar chf - "$$tardir"' am__tar_='tar chf - "$tardir"' am__untar='tar xf -' ;; pax) am__tar='pax -L -x $1 -w "$$tardir"' am__tar_='pax -L -x $1 -w "$tardir"' am__untar='pax -r' ;; cpio) am__tar='find "$$tardir" -print | cpio -o -H $1 -L' am__tar_='find "$tardir" -print | cpio -o -H $1 -L' am__untar='cpio -i -H $1 -d' ;; none) am__tar=false am__tar_=false am__untar=false ;; esac # If the value was cached, stop now. We just wanted to have am__tar # and am__untar set. test -n "${am_cv_prog_tar_$1}" && break # tar/untar a dummy directory, and stop if the command works. 
rm -rf conftest.dir mkdir conftest.dir echo GrepMe > conftest.dir/file AM_RUN_LOG([tardir=conftest.dir && eval $am__tar_ >conftest.tar]) rm -rf conftest.dir if test -s conftest.tar; then AM_RUN_LOG([$am__untar <conftest.tar]) AM_RUN_LOG([cat conftest.dir/file]) grep GrepMe conftest.dir/file >/dev/null 2>&1 && break fi done rm -rf conftest.dir AC_CACHE_VAL([am_cv_prog_tar_$1], [am_cv_prog_tar_$1=$_am_tool]) AC_MSG_RESULT([$am_cv_prog_tar_$1])]) AC_SUBST([am__tar]) AC_SUBST([am__untar]) ]) # _AM_PROG_TAR m4_include([auxdir/ax_lib_hdf5.m4]) m4_include([auxdir/ax_pthread.m4]) m4_include([auxdir/libtool.m4]) m4_include([auxdir/ltoptions.m4]) m4_include([auxdir/ltsugar.m4]) m4_include([auxdir/ltversion.m4]) m4_include([auxdir/lt~obsolete.m4]) m4_include([auxdir/slurm.m4]) m4_include([auxdir/x_ac__system_configuration.m4]) m4_include([auxdir/x_ac_affinity.m4]) m4_include([auxdir/x_ac_aix.m4]) m4_include([auxdir/x_ac_blcr.m4]) m4_include([auxdir/x_ac_bluegene.m4]) m4_include([auxdir/x_ac_cflags.m4]) m4_include([auxdir/x_ac_cray.m4]) m4_include([auxdir/x_ac_curl.m4]) m4_include([auxdir/x_ac_databases.m4]) m4_include([auxdir/x_ac_debug.m4]) m4_include([auxdir/x_ac_dlfcn.m4]) m4_include([auxdir/x_ac_env.m4]) m4_include([auxdir/x_ac_freeipmi.m4]) m4_include([auxdir/x_ac_gpl_licensed.m4]) m4_include([auxdir/x_ac_hwloc.m4]) m4_include([auxdir/x_ac_iso.m4]) m4_include([auxdir/x_ac_json.m4]) m4_include([auxdir/x_ac_lua.m4]) m4_include([auxdir/x_ac_man2html.m4]) m4_include([auxdir/x_ac_munge.m4]) m4_include([auxdir/x_ac_ncurses.m4]) m4_include([auxdir/x_ac_netloc.m4]) m4_include([auxdir/x_ac_nrt.m4]) m4_include([auxdir/x_ac_ofed.m4]) m4_include([auxdir/x_ac_pam.m4]) m4_include([auxdir/x_ac_printf_null.m4]) m4_include([auxdir/x_ac_ptrace.m4]) m4_include([auxdir/x_ac_readline.m4]) m4_include([auxdir/x_ac_rrdtool.m4]) m4_include([auxdir/x_ac_setpgrp.m4]) m4_include([auxdir/x_ac_setproctitle.m4]) m4_include([auxdir/x_ac_sgi_job.m4]) m4_include([auxdir/x_ac_slurm_ssl.m4]) m4_include([auxdir/x_ac_sun_const.m4]) 
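The `_AM_PROG_TAR` machinery above only substitutes two command strings that Makefiles later `eval` with `$tardir` set. As a standalone sketch (not part of the SLURM tree; it hard-codes the values the v7 branch would substitute and assumes GNU tar, since `chof` relies on GNU option semantics), the round-trip it verifies looks like this:

```shell
#!/bin/sh
# Sketch of the v7 branch: am__tar/am__untar as configure would substitute
# them. TAR defaults to "tar"; "chof" = create, dereference symlinks,
# old V7 format, archive to stdout. Single quotes defer expansion to eval time.
am__tar='${TAR-tar} chof - "$tardir"'
am__untar='${TAR-tar} xf -'

mkdir -p conftest.dir
echo GrepMe > conftest.dir/file
tardir=conftest.dir
eval "$am__tar" > conftest.tar      # pack the directory to a tarball
rm -rf conftest.dir
eval "$am__untar" < conftest.tar    # unpack it again from stdin
grep GrepMe conftest.dir/file       # succeeds only if the round-trip worked
```

This mirrors the conftest.dir probe in the macro: the loop above keeps the first tool for which exactly this pack/unpack/grep sequence succeeds.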
slurm-slurm-15-08-7-1/autogen.sh000077500000000000000000000057051265000126300163350ustar00rootroot00000000000000#!/bin/sh # # $Id$ # $Source$ # # Run this script to generate aclocal.m4, config.h.in, # Makefile.in's, and ./configure... # # To specify extra flags to aclocal (include dirs for example), # set ACLOCAL_FLAGS # DIE=0 # minimum required versions of autoconf/automake/libtool: ACMAJOR=2 ACMINOR=59 AMMAJOR=1 AMMINOR=10 AMPATCH=2 LTMAJOR=1 LTMINOR=5 LTPATCH=8 (autoconf --version 2>&1 | \ perl -n0e "(/(\d+)\.(\d+)/ && \$1>=$ACMAJOR && \$2>=$ACMINOR) || exit 1") || { echo echo "Error: You must have 'autoconf' version $ACMAJOR.$ACMINOR or greater" echo "installed to run $0. Get the latest version from" echo "ftp://ftp.gnu.org/pub/gnu/autoconf/" echo NO_AUTOCONF=yes DIE=1 } amtest=" if (/(\d+)\.(\d+)((-p|\.)(\d+))*/) { exit 1 if (\$1 < $AMMAJOR || \$2 < $AMMINOR); exit 0 if (\$2 > $AMMINOR); exit 1 if (\$5 < $AMPATCH); }" (automake --version 2>&1 | perl -n0e "$amtest" ) || { echo echo "Error: You must have 'automake' version $AMMAJOR.$AMMINOR.$AMPATCH or greater" echo "installed to run $0. Get the latest version from" echo "ftp://ftp.gnu.org/pub/gnu/automake/" echo NO_AUTOCONF=yes DIE=1 } lttest=" if (/(\d+)\.(\d+)((-p|\.)(\d+))*/) { exit 1 if (\$1 < $LTMAJOR); exit 1 if (\$1 == $LTMAJOR && \$2 < $LTMINOR); exit 1 if (\$1 == $LTMAJOR && \$2 == $LTMINOR && \$5 < $LTPATCH); }" (libtool --version 2>&1 | perl -n0e "$lttest" ) || { echo echo "Error: You must have 'libtool' version $LTMAJOR.$LTMINOR.$LTPATCH or greater" echo "installed to run $0. Get the latest version from" echo "ftp://ftp.gnu.org/pub/gnu/libtool/" echo DIE=1 } test -n "$NO_AUTOMAKE" || (aclocal --version) < /dev/null > /dev/null 2>&1 || { echo echo "Error: \`aclocal' appears to be missing. The installed version of" echo "\`automake' may be too old. 
Get the most recent version from" echo "ftp://ftp.gnu.org/pub/gnu/automake/" NO_ACLOCAL=yes DIE=1 } if test $DIE -eq 1; then exit 1 fi # make sure that auxdir exists mkdir auxdir 2>/dev/null # Remove config.h.in to make sure it is rebuilt rm -f config.h.in set -x rm -fr autom4te*.cache ${ACLOCAL:-aclocal} -I auxdir $ACLOCAL_FLAGS || exit 1 ${LIBTOOLIZE:-libtoolize} --automake --copy --force || exit 1 ${AUTOHEADER:-autoheader} || exit 1 ${AUTOMAKE:-automake} --add-missing --copy --force-missing || exit 1 #${AUTOCONF:-autoconf} --force --warnings=all || exit 1 ${AUTOCONF:-autoconf} --force --warnings=no-obsolete || exit 1 set +x if [ -e config.status ]; then echo "removing stale config.status." rm -f config.status fi if [ -e config.log ]; then echo "removing old config.log." rm -f config.log fi echo "now run ./configure to configure slurm for your environment." echo echo "NOTE: This script has most likely just modified files that are under" echo " version control. Make sure that you really want these changes" echo " applied to the repository before you run \"git commit\"." slurm-slurm-15-08-7-1/auxdir/000077500000000000000000000000001265000126300156215ustar00rootroot00000000000000slurm-slurm-15-08-7-1/auxdir/Makefile.am000066400000000000000000000021021265000126300176500ustar00rootroot00000000000000##**************************************************************************** ## $Id$ ##**************************************************************************** ## Process this file with automake to produce Makefile.in. 
##**************************************************************************** EXTRA_DIST = \ ax_pthread.m4 \ slurm.m4 \ test-driver \ type_socklen_t.m4 \ x_ac__system_configuration.m4 \ x_ac_affinity.m4 \ x_ac_aix.m4 \ x_ac_blcr.m4 \ x_ac_bluegene.m4 \ x_ac_cflags.m4 \ x_ac_cray.m4 \ x_ac_curl.m4 \ x_ac_databases.m4 \ x_ac_debug.m4 \ x_ac_dlfcn.m4 \ x_ac_elan.m4 \ x_ac_env.m4 \ x_ac_federation.m4 \ x_ac_gpl_licensed.m4 \ x_ac_hwloc.m4 \ x_ac_iso.m4 \ x_ac_json.m4 \ x_ac_lua.m4 \ x_ac_man2html.m4 \ x_ac_munge.m4 \ x_ac_ncurses.m4 \ x_ac_netloc.m4 \ x_ac_nrt.m4 \ x_ac_pam.m4 \ x_ac_printf_null.m4 \ x_ac_ptrace.m4 \ x_ac_readline.m4 \ x_ac_setproctitle.m4 \ x_ac_sgi_job.m4 \ x_ac_slurm_ssl.m4 \ x_ac_sun_const.m4 slurm-slurm-15-08-7-1/auxdir/Makefile.in000066400000000000000000000433561265000126300177010ustar00rootroot00000000000000# Makefile.in generated by automake 1.14.1 from Makefile.am. # @configure_input@ # Copyright (C) 1994-2013 Free Software Foundation, Inc. # This Makefile.in is free software; the Free Software Foundation # gives unlimited permission to copy and/or distribute it, # with or without modifications, as long as this notice is preserved. # This program is distributed in the hope that it will be useful, # but WITHOUT ANY WARRANTY, to the extent permitted by law; without # even the implied warranty of MERCHANTABILITY or FITNESS FOR A # PARTICULAR PURPOSE. @SET_MAKE@ VPATH = @srcdir@ am__is_gnu_make = test -n '$(MAKEFILE_LIST)' && test -n '$(MAKELEVEL)' am__make_running_with_option = \ case $${target_option-} in \ ?) 
;; \ *) echo "am__make_running_with_option: internal error: invalid" \ "target option '$${target_option-}' specified" >&2; \ exit 1;; \ esac; \ has_opt=no; \ sane_makeflags=$$MAKEFLAGS; \ if $(am__is_gnu_make); then \ sane_makeflags=$$MFLAGS; \ else \ case $$MAKEFLAGS in \ *\\[\ \ ]*) \ bs=\\; \ sane_makeflags=`printf '%s\n' "$$MAKEFLAGS" \ | sed "s/$$bs$$bs[$$bs $$bs ]*//g"`;; \ esac; \ fi; \ skip_next=no; \ strip_trailopt () \ { \ flg=`printf '%s\n' "$$flg" | sed "s/$$1.*$$//"`; \ }; \ for flg in $$sane_makeflags; do \ test $$skip_next = yes && { skip_next=no; continue; }; \ case $$flg in \ *=*|--*) continue;; \ -*I) strip_trailopt 'I'; skip_next=yes;; \ -*I?*) strip_trailopt 'I';; \ -*O) strip_trailopt 'O'; skip_next=yes;; \ -*O?*) strip_trailopt 'O';; \ -*l) strip_trailopt 'l'; skip_next=yes;; \ -*l?*) strip_trailopt 'l';; \ -[dEDm]) skip_next=yes;; \ -[JT]) skip_next=yes;; \ esac; \ case $$flg in \ *$$target_option*) has_opt=yes; break;; \ esac; \ done; \ test $$has_opt = yes am__make_dryrun = (target_option=n; $(am__make_running_with_option)) am__make_keepgoing = (target_option=k; $(am__make_running_with_option)) pkgdatadir = $(datadir)/@PACKAGE@ pkgincludedir = $(includedir)/@PACKAGE@ pkglibdir = $(libdir)/@PACKAGE@ pkglibexecdir = $(libexecdir)/@PACKAGE@ am__cd = CDPATH="$${ZSH_VERSION+.}$(PATH_SEPARATOR)" && cd install_sh_DATA = $(install_sh) -c -m 644 install_sh_PROGRAM = $(install_sh) -c install_sh_SCRIPT = $(install_sh) -c INSTALL_HEADER = $(INSTALL_DATA) transform = $(program_transform_name) NORMAL_INSTALL = : PRE_INSTALL = : POST_INSTALL = : NORMAL_UNINSTALL = : PRE_UNINSTALL = : POST_UNINSTALL = : build_triplet = @build@ host_triplet = @host@ target_triplet = @target@ subdir = auxdir DIST_COMMON = $(srcdir)/Makefile.in $(srcdir)/Makefile.am compile \ config.guess config.sub depcomp install-sh missing ltmain.sh ACLOCAL_M4 = $(top_srcdir)/aclocal.m4 am__aclocal_m4_deps = $(top_srcdir)/auxdir/ax_lib_hdf5.m4 \ $(top_srcdir)/auxdir/ax_pthread.m4 \ 
$(top_srcdir)/auxdir/libtool.m4 \ $(top_srcdir)/auxdir/ltoptions.m4 \ $(top_srcdir)/auxdir/ltsugar.m4 \ $(top_srcdir)/auxdir/ltversion.m4 \ $(top_srcdir)/auxdir/lt~obsolete.m4 \ $(top_srcdir)/auxdir/slurm.m4 \ $(top_srcdir)/auxdir/x_ac__system_configuration.m4 \ $(top_srcdir)/auxdir/x_ac_affinity.m4 \ $(top_srcdir)/auxdir/x_ac_aix.m4 \ $(top_srcdir)/auxdir/x_ac_blcr.m4 \ $(top_srcdir)/auxdir/x_ac_bluegene.m4 \ $(top_srcdir)/auxdir/x_ac_cflags.m4 \ $(top_srcdir)/auxdir/x_ac_cray.m4 \ $(top_srcdir)/auxdir/x_ac_curl.m4 \ $(top_srcdir)/auxdir/x_ac_databases.m4 \ $(top_srcdir)/auxdir/x_ac_debug.m4 \ $(top_srcdir)/auxdir/x_ac_dlfcn.m4 \ $(top_srcdir)/auxdir/x_ac_env.m4 \ $(top_srcdir)/auxdir/x_ac_freeipmi.m4 \ $(top_srcdir)/auxdir/x_ac_gpl_licensed.m4 \ $(top_srcdir)/auxdir/x_ac_hwloc.m4 \ $(top_srcdir)/auxdir/x_ac_iso.m4 \ $(top_srcdir)/auxdir/x_ac_json.m4 \ $(top_srcdir)/auxdir/x_ac_lua.m4 \ $(top_srcdir)/auxdir/x_ac_man2html.m4 \ $(top_srcdir)/auxdir/x_ac_munge.m4 \ $(top_srcdir)/auxdir/x_ac_ncurses.m4 \ $(top_srcdir)/auxdir/x_ac_netloc.m4 \ $(top_srcdir)/auxdir/x_ac_nrt.m4 \ $(top_srcdir)/auxdir/x_ac_ofed.m4 \ $(top_srcdir)/auxdir/x_ac_pam.m4 \ $(top_srcdir)/auxdir/x_ac_printf_null.m4 \ $(top_srcdir)/auxdir/x_ac_ptrace.m4 \ $(top_srcdir)/auxdir/x_ac_readline.m4 \ $(top_srcdir)/auxdir/x_ac_rrdtool.m4 \ $(top_srcdir)/auxdir/x_ac_setpgrp.m4 \ $(top_srcdir)/auxdir/x_ac_setproctitle.m4 \ $(top_srcdir)/auxdir/x_ac_sgi_job.m4 \ $(top_srcdir)/auxdir/x_ac_slurm_ssl.m4 \ $(top_srcdir)/auxdir/x_ac_sun_const.m4 \ $(top_srcdir)/configure.ac am__configure_deps = $(am__aclocal_m4_deps) $(CONFIGURE_DEPENDENCIES) \ $(ACLOCAL_M4) mkinstalldirs = $(install_sh) -d CONFIG_HEADER = $(top_builddir)/config.h $(top_builddir)/slurm/slurm.h CONFIG_CLEAN_FILES = CONFIG_CLEAN_VPATH_FILES = AM_V_P = $(am__v_P_@AM_V@) am__v_P_ = $(am__v_P_@AM_DEFAULT_V@) am__v_P_0 = false am__v_P_1 = : AM_V_GEN = $(am__v_GEN_@AM_V@) am__v_GEN_ = $(am__v_GEN_@AM_DEFAULT_V@) am__v_GEN_0 = @echo " GEN " $@; 
am__v_GEN_1 = AM_V_at = $(am__v_at_@AM_V@) am__v_at_ = $(am__v_at_@AM_DEFAULT_V@) am__v_at_0 = @ am__v_at_1 = SOURCES = DIST_SOURCES = am__can_run_installinfo = \ case $$AM_UPDATE_INFO_DIR in \ n|no|NO) false;; \ *) (install-info --version) >/dev/null 2>&1;; \ esac am__tagged_files = $(HEADERS) $(SOURCES) $(TAGS_FILES) $(LISP) DISTFILES = $(DIST_COMMON) $(DIST_SOURCES) $(TEXINFOS) $(EXTRA_DIST) ACLOCAL = @ACLOCAL@ AMTAR = @AMTAR@ AM_DEFAULT_VERBOSITY = @AM_DEFAULT_VERBOSITY@ AR = @AR@ AUTHD_CFLAGS = @AUTHD_CFLAGS@ AUTHD_LIBS = @AUTHD_LIBS@ AUTOCONF = @AUTOCONF@ AUTOHEADER = @AUTOHEADER@ AUTOMAKE = @AUTOMAKE@ AWK = @AWK@ BGL_LOADED = @BGL_LOADED@ BGQ_LOADED = @BGQ_LOADED@ BG_INCLUDES = @BG_INCLUDES@ BG_LDFLAGS = @BG_LDFLAGS@ BG_L_P_LOADED = @BG_L_P_LOADED@ BLCR_CPPFLAGS = @BLCR_CPPFLAGS@ BLCR_HOME = @BLCR_HOME@ BLCR_LDFLAGS = @BLCR_LDFLAGS@ BLCR_LIBS = @BLCR_LIBS@ BLUEGENE_LOADED = @BLUEGENE_LOADED@ CC = @CC@ CCDEPMODE = @CCDEPMODE@ CFLAGS = @CFLAGS@ CHECK_CFLAGS = @CHECK_CFLAGS@ CHECK_LIBS = @CHECK_LIBS@ CMD_LDFLAGS = @CMD_LDFLAGS@ CPP = @CPP@ CPPFLAGS = @CPPFLAGS@ CRAY_JOB_CPPFLAGS = @CRAY_JOB_CPPFLAGS@ CRAY_JOB_LDFLAGS = @CRAY_JOB_LDFLAGS@ CRAY_SELECT_CPPFLAGS = @CRAY_SELECT_CPPFLAGS@ CRAY_SELECT_LDFLAGS = @CRAY_SELECT_LDFLAGS@ CRAY_SWITCH_CPPFLAGS = @CRAY_SWITCH_CPPFLAGS@ CRAY_SWITCH_LDFLAGS = @CRAY_SWITCH_LDFLAGS@ CRAY_TASK_CPPFLAGS = @CRAY_TASK_CPPFLAGS@ CRAY_TASK_LDFLAGS = @CRAY_TASK_LDFLAGS@ CXX = @CXX@ CXXCPP = @CXXCPP@ CXXDEPMODE = @CXXDEPMODE@ CXXFLAGS = @CXXFLAGS@ CYGPATH_W = @CYGPATH_W@ DATAWARP_CPPFLAGS = @DATAWARP_CPPFLAGS@ DATAWARP_LDFLAGS = @DATAWARP_LDFLAGS@ DEFS = @DEFS@ DEPDIR = @DEPDIR@ DLLTOOL = @DLLTOOL@ DL_LIBS = @DL_LIBS@ DSYMUTIL = @DSYMUTIL@ DUMPBIN = @DUMPBIN@ ECHO_C = @ECHO_C@ ECHO_N = @ECHO_N@ ECHO_T = @ECHO_T@ EGREP = @EGREP@ EXEEXT = @EXEEXT@ FGREP = @FGREP@ FREEIPMI_CPPFLAGS = @FREEIPMI_CPPFLAGS@ FREEIPMI_LDFLAGS = @FREEIPMI_LDFLAGS@ FREEIPMI_LIBS = @FREEIPMI_LIBS@ GLIB_CFLAGS = @GLIB_CFLAGS@ GLIB_COMPILE_RESOURCES = 
@GLIB_COMPILE_RESOURCES@ GLIB_GENMARSHAL = @GLIB_GENMARSHAL@ GLIB_LIBS = @GLIB_LIBS@ GLIB_MKENUMS = @GLIB_MKENUMS@ GOBJECT_QUERY = @GOBJECT_QUERY@ GREP = @GREP@ GTK_CFLAGS = @GTK_CFLAGS@ GTK_LIBS = @GTK_LIBS@ H5CC = @H5CC@ H5FC = @H5FC@ HAVEMYSQLCONFIG = @HAVEMYSQLCONFIG@ HAVE_AIX = @HAVE_AIX@ HAVE_MAN2HTML = @HAVE_MAN2HTML@ HAVE_NRT = @HAVE_NRT@ HAVE_OPENSSL = @HAVE_OPENSSL@ HAVE_SOME_CURSES = @HAVE_SOME_CURSES@ HDF5_CC = @HDF5_CC@ HDF5_CFLAGS = @HDF5_CFLAGS@ HDF5_CPPFLAGS = @HDF5_CPPFLAGS@ HDF5_FC = @HDF5_FC@ HDF5_FFLAGS = @HDF5_FFLAGS@ HDF5_FLIBS = @HDF5_FLIBS@ HDF5_LDFLAGS = @HDF5_LDFLAGS@ HDF5_LIBS = @HDF5_LIBS@ HDF5_VERSION = @HDF5_VERSION@ HWLOC_CPPFLAGS = @HWLOC_CPPFLAGS@ HWLOC_LDFLAGS = @HWLOC_LDFLAGS@ HWLOC_LIBS = @HWLOC_LIBS@ INSTALL = @INSTALL@ INSTALL_DATA = @INSTALL_DATA@ INSTALL_PROGRAM = @INSTALL_PROGRAM@ INSTALL_SCRIPT = @INSTALL_SCRIPT@ INSTALL_STRIP_PROGRAM = @INSTALL_STRIP_PROGRAM@ JSON_CPPFLAGS = @JSON_CPPFLAGS@ JSON_LDFLAGS = @JSON_LDFLAGS@ LD = @LD@ LDFLAGS = @LDFLAGS@ LIBCURL = @LIBCURL@ LIBCURL_CPPFLAGS = @LIBCURL_CPPFLAGS@ LIBOBJS = @LIBOBJS@ LIBS = @LIBS@ LIBTOOL = @LIBTOOL@ LIB_LDFLAGS = @LIB_LDFLAGS@ LIPO = @LIPO@ LN_S = @LN_S@ LTLIBOBJS = @LTLIBOBJS@ MAINT = @MAINT@ MAKEINFO = @MAKEINFO@ MANIFEST_TOOL = @MANIFEST_TOOL@ MKDIR_P = @MKDIR_P@ MUNGE_CPPFLAGS = @MUNGE_CPPFLAGS@ MUNGE_DIR = @MUNGE_DIR@ MUNGE_LDFLAGS = @MUNGE_LDFLAGS@ MUNGE_LIBS = @MUNGE_LIBS@ MYSQL_CFLAGS = @MYSQL_CFLAGS@ MYSQL_LIBS = @MYSQL_LIBS@ NCURSES = @NCURSES@ NETLOC_CPPFLAGS = @NETLOC_CPPFLAGS@ NETLOC_LDFLAGS = @NETLOC_LDFLAGS@ NETLOC_LIBS = @NETLOC_LIBS@ NM = @NM@ NMEDIT = @NMEDIT@ NRT_CPPFLAGS = @NRT_CPPFLAGS@ NUMA_LIBS = @NUMA_LIBS@ OBJDUMP = @OBJDUMP@ OBJEXT = @OBJEXT@ OFED_CPPFLAGS = @OFED_CPPFLAGS@ OFED_LDFLAGS = @OFED_LDFLAGS@ OFED_LIBS = @OFED_LIBS@ OTOOL = @OTOOL@ OTOOL64 = @OTOOL64@ PACKAGE = @PACKAGE@ PACKAGE_BUGREPORT = @PACKAGE_BUGREPORT@ PACKAGE_NAME = @PACKAGE_NAME@ PACKAGE_STRING = @PACKAGE_STRING@ PACKAGE_TARNAME = @PACKAGE_TARNAME@ PACKAGE_URL = 
@PACKAGE_URL@ PACKAGE_VERSION = @PACKAGE_VERSION@ PAM_DIR = @PAM_DIR@ PAM_LIBS = @PAM_LIBS@ PATH_SEPARATOR = @PATH_SEPARATOR@ PKG_CONFIG = @PKG_CONFIG@ PKG_CONFIG_LIBDIR = @PKG_CONFIG_LIBDIR@ PKG_CONFIG_PATH = @PKG_CONFIG_PATH@ PROCTRACKDIR = @PROCTRACKDIR@ PROJECT = @PROJECT@ PTHREAD_CC = @PTHREAD_CC@ PTHREAD_CFLAGS = @PTHREAD_CFLAGS@ PTHREAD_LIBS = @PTHREAD_LIBS@ RANLIB = @RANLIB@ READLINE_LIBS = @READLINE_LIBS@ REAL_BGQ_LOADED = @REAL_BGQ_LOADED@ REAL_BG_L_P_LOADED = @REAL_BG_L_P_LOADED@ RELEASE = @RELEASE@ RRDTOOL_CPPFLAGS = @RRDTOOL_CPPFLAGS@ RRDTOOL_LDFLAGS = @RRDTOOL_LDFLAGS@ RRDTOOL_LIBS = @RRDTOOL_LIBS@ RUNJOB_LDFLAGS = @RUNJOB_LDFLAGS@ SED = @SED@ SEMAPHORE_LIBS = @SEMAPHORE_LIBS@ SEMAPHORE_SOURCES = @SEMAPHORE_SOURCES@ SET_MAKE = @SET_MAKE@ SHELL = @SHELL@ SLEEP_CMD = @SLEEP_CMD@ SLURMCTLD_PORT = @SLURMCTLD_PORT@ SLURMCTLD_PORT_COUNT = @SLURMCTLD_PORT_COUNT@ SLURMDBD_PORT = @SLURMDBD_PORT@ SLURMD_PORT = @SLURMD_PORT@ SLURM_API_AGE = @SLURM_API_AGE@ SLURM_API_CURRENT = @SLURM_API_CURRENT@ SLURM_API_MAJOR = @SLURM_API_MAJOR@ SLURM_API_REVISION = @SLURM_API_REVISION@ SLURM_API_VERSION = @SLURM_API_VERSION@ SLURM_MAJOR = @SLURM_MAJOR@ SLURM_MICRO = @SLURM_MICRO@ SLURM_MINOR = @SLURM_MINOR@ SLURM_PREFIX = @SLURM_PREFIX@ SLURM_VERSION_NUMBER = @SLURM_VERSION_NUMBER@ SLURM_VERSION_STRING = @SLURM_VERSION_STRING@ SO_LDFLAGS = @SO_LDFLAGS@ SSL_CPPFLAGS = @SSL_CPPFLAGS@ SSL_LDFLAGS = @SSL_LDFLAGS@ SSL_LIBS = @SSL_LIBS@ STRIP = @STRIP@ SUCMD = @SUCMD@ UTIL_LIBS = @UTIL_LIBS@ VERSION = @VERSION@ _libcurl_config = @_libcurl_config@ abs_builddir = @abs_builddir@ abs_srcdir = @abs_srcdir@ abs_top_builddir = @abs_top_builddir@ abs_top_srcdir = @abs_top_srcdir@ ac_ct_AR = @ac_ct_AR@ ac_ct_CC = @ac_ct_CC@ ac_ct_CXX = @ac_ct_CXX@ ac_ct_DUMPBIN = @ac_ct_DUMPBIN@ ac_have_man2html = @ac_have_man2html@ am__include = @am__include@ am__leading_dot = @am__leading_dot@ am__quote = @am__quote@ am__tar = @am__tar@ am__untar = @am__untar@ ax_pthread_config = @ax_pthread_config@ 
bindir = @bindir@ build = @build@ build_alias = @build_alias@ build_cpu = @build_cpu@ build_os = @build_os@ build_vendor = @build_vendor@ builddir = @builddir@ datadir = @datadir@ datarootdir = @datarootdir@ docdir = @docdir@ dvidir = @dvidir@ exec_prefix = @exec_prefix@ host = @host@ host_alias = @host_alias@ host_cpu = @host_cpu@ host_os = @host_os@ host_vendor = @host_vendor@ htmldir = @htmldir@ includedir = @includedir@ infodir = @infodir@ install_sh = @install_sh@ libdir = @libdir@ libexecdir = @libexecdir@ localedir = @localedir@ localstatedir = @localstatedir@ lua_CFLAGS = @lua_CFLAGS@ lua_LIBS = @lua_LIBS@ mandir = @mandir@ mkdir_p = @mkdir_p@ oldincludedir = @oldincludedir@ pdfdir = @pdfdir@ prefix = @prefix@ program_transform_name = @program_transform_name@ psdir = @psdir@ sbindir = @sbindir@ sharedstatedir = @sharedstatedir@ srcdir = @srcdir@ sysconfdir = @sysconfdir@ target = @target@ target_alias = @target_alias@ target_cpu = @target_cpu@ target_os = @target_os@ target_vendor = @target_vendor@ top_build_prefix = @top_build_prefix@ top_builddir = @top_builddir@ top_srcdir = @top_srcdir@ EXTRA_DIST = \ ax_pthread.m4 \ slurm.m4 \ test-driver \ type_socklen_t.m4 \ x_ac__system_configuration.m4 \ x_ac_affinity.m4 \ x_ac_aix.m4 \ x_ac_blcr.m4 \ x_ac_bluegene.m4 \ x_ac_cflags.m4 \ x_ac_cray.m4 \ x_ac_curl.m4 \ x_ac_databases.m4 \ x_ac_debug.m4 \ x_ac_dlfcn.m4 \ x_ac_elan.m4 \ x_ac_env.m4 \ x_ac_federation.m4 \ x_ac_gpl_licensed.m4 \ x_ac_hwloc.m4 \ x_ac_iso.m4 \ x_ac_json.m4 \ x_ac_lua.m4 \ x_ac_man2html.m4 \ x_ac_munge.m4 \ x_ac_ncurses.m4 \ x_ac_netloc.m4 \ x_ac_nrt.m4 \ x_ac_pam.m4 \ x_ac_printf_null.m4 \ x_ac_ptrace.m4 \ x_ac_readline.m4 \ x_ac_setproctitle.m4 \ x_ac_sgi_job.m4 \ x_ac_slurm_ssl.m4 \ x_ac_sun_const.m4 all: all-am .SUFFIXES: $(srcdir)/Makefile.in: @MAINTAINER_MODE_TRUE@ $(srcdir)/Makefile.am $(am__configure_deps) @for dep in $?; do \ case '$(am__configure_deps)' in \ *$$dep*) \ ( cd $(top_builddir) && $(MAKE) $(AM_MAKEFLAGS) am--refresh ) \ 
&& { if test -f $@; then exit 0; else break; fi; }; \ exit 1;; \ esac; \ done; \ echo ' cd $(top_srcdir) && $(AUTOMAKE) --gnu auxdir/Makefile'; \ $(am__cd) $(top_srcdir) && \ $(AUTOMAKE) --gnu auxdir/Makefile .PRECIOUS: Makefile Makefile: $(srcdir)/Makefile.in $(top_builddir)/config.status @case '$?' in \ *config.status*) \ cd $(top_builddir) && $(MAKE) $(AM_MAKEFLAGS) am--refresh;; \ *) \ echo ' cd $(top_builddir) && $(SHELL) ./config.status $(subdir)/$@ $(am__depfiles_maybe)'; \ cd $(top_builddir) && $(SHELL) ./config.status $(subdir)/$@ $(am__depfiles_maybe);; \ esac; $(top_builddir)/config.status: $(top_srcdir)/configure $(CONFIG_STATUS_DEPENDENCIES) cd $(top_builddir) && $(MAKE) $(AM_MAKEFLAGS) am--refresh $(top_srcdir)/configure: @MAINTAINER_MODE_TRUE@ $(am__configure_deps) cd $(top_builddir) && $(MAKE) $(AM_MAKEFLAGS) am--refresh $(ACLOCAL_M4): @MAINTAINER_MODE_TRUE@ $(am__aclocal_m4_deps) cd $(top_builddir) && $(MAKE) $(AM_MAKEFLAGS) am--refresh $(am__aclocal_m4_deps): mostlyclean-libtool: -rm -f *.lo clean-libtool: -rm -rf .libs _libs tags TAGS: ctags CTAGS: cscope cscopelist: distdir: $(DISTFILES) @srcdirstrip=`echo "$(srcdir)" | sed 's/[].[^$$\\*]/\\\\&/g'`; \ topsrcdirstrip=`echo "$(top_srcdir)" | sed 's/[].[^$$\\*]/\\\\&/g'`; \ list='$(DISTFILES)'; \ dist_files=`for file in $$list; do echo $$file; done | \ sed -e "s|^$$srcdirstrip/||;t" \ -e "s|^$$topsrcdirstrip/|$(top_builddir)/|;t"`; \ case $$dist_files in \ */*) $(MKDIR_P) `echo "$$dist_files" | \ sed '/\//!d;s|^|$(distdir)/|;s,/[^/]*$$,,' | \ sort -u` ;; \ esac; \ for file in $$dist_files; do \ if test -f $$file || test -d $$file; then d=.; else d=$(srcdir); fi; \ if test -d $$d/$$file; then \ dir=`echo "/$$file" | sed -e 's,/[^/]*$$,,'`; \ if test -d "$(distdir)/$$file"; then \ find "$(distdir)/$$file" -type d ! 
-perm -700 -exec chmod u+rwx {} \;; \ fi; \ if test -d $(srcdir)/$$file && test $$d != $(srcdir); then \ cp -fpR $(srcdir)/$$file "$(distdir)$$dir" || exit 1; \ find "$(distdir)/$$file" -type d ! -perm -700 -exec chmod u+rwx {} \;; \ fi; \ cp -fpR $$d/$$file "$(distdir)$$dir" || exit 1; \ else \ test -f "$(distdir)/$$file" \ || cp -p $$d/$$file "$(distdir)/$$file" \ || exit 1; \ fi; \ done check-am: all-am check: check-am all-am: Makefile installdirs: install: install-am install-exec: install-exec-am install-data: install-data-am uninstall: uninstall-am install-am: all-am @$(MAKE) $(AM_MAKEFLAGS) install-exec-am install-data-am installcheck: installcheck-am install-strip: if test -z '$(STRIP)'; then \ $(MAKE) $(AM_MAKEFLAGS) INSTALL_PROGRAM="$(INSTALL_STRIP_PROGRAM)" \ install_sh_PROGRAM="$(INSTALL_STRIP_PROGRAM)" INSTALL_STRIP_FLAG=-s \ install; \ else \ $(MAKE) $(AM_MAKEFLAGS) INSTALL_PROGRAM="$(INSTALL_STRIP_PROGRAM)" \ install_sh_PROGRAM="$(INSTALL_STRIP_PROGRAM)" INSTALL_STRIP_FLAG=-s \ "INSTALL_PROGRAM_ENV=STRIPPROG='$(STRIP)'" install; \ fi mostlyclean-generic: clean-generic: distclean-generic: -test -z "$(CONFIG_CLEAN_FILES)" || rm -f $(CONFIG_CLEAN_FILES) -test . = "$(srcdir)" || test -z "$(CONFIG_CLEAN_VPATH_FILES)" || rm -f $(CONFIG_CLEAN_VPATH_FILES) maintainer-clean-generic: @echo "This command is intended for maintainers to use" @echo "it deletes files that may require special tools to rebuild." 
clean: clean-am clean-am: clean-generic clean-libtool mostlyclean-am distclean: distclean-am -rm -f Makefile distclean-am: clean-am distclean-generic dvi: dvi-am dvi-am: html: html-am html-am: info: info-am info-am: install-data-am: install-dvi: install-dvi-am install-dvi-am: install-exec-am: install-html: install-html-am install-html-am: install-info: install-info-am install-info-am: install-man: install-pdf: install-pdf-am install-pdf-am: install-ps: install-ps-am install-ps-am: installcheck-am: maintainer-clean: maintainer-clean-am -rm -f Makefile maintainer-clean-am: distclean-am maintainer-clean-generic mostlyclean: mostlyclean-am mostlyclean-am: mostlyclean-generic mostlyclean-libtool pdf: pdf-am pdf-am: ps: ps-am ps-am: uninstall-am: .MAKE: install-am install-strip .PHONY: all all-am check check-am clean clean-generic clean-libtool \ cscopelist-am ctags-am distclean distclean-generic \ distclean-libtool distdir dvi dvi-am html html-am info info-am \ install install-am install-data install-data-am install-dvi \ install-dvi-am install-exec install-exec-am install-html \ install-html-am install-info install-info-am install-man \ install-pdf install-pdf-am install-ps install-ps-am \ install-strip installcheck installcheck-am installdirs \ maintainer-clean maintainer-clean-generic mostlyclean \ mostlyclean-generic mostlyclean-libtool pdf pdf-am ps ps-am \ tags-am uninstall uninstall-am # Tell versions [3.59,3.63) of GNU make to not export all variables. # Otherwise a system limit (for SysV at least) may be exceeded. 
.NOEXPORT: slurm-slurm-15-08-7-1/auxdir/ax_lib_hdf5.m4000066400000000000000000000245751265000126300202400ustar00rootroot00000000000000# =========================================================================== # http://www.gnu.org/software/autoconf-archive/ax_lib_hdf5.html # =========================================================================== # # SYNOPSIS # # AX_LIB_HDF5([serial/parallel]) # # DESCRIPTION # # This macro provides tests of the availability of the HDF5 library. # # The optional macro argument should be either 'serial' or 'parallel'. The # former only looks for serial HDF5 installations via h5cc. The latter # only looks for parallel HDF5 installations via h5pcc. If the optional # argument is omitted, serial installations will be preferred over # parallel ones. # # The macro adds a --with-hdf5 option accepting one of three values: # # no - do not check for the HDF5 library. # yes - do check for the HDF5 library in standard locations. # path - complete path to the HDF5 helper script h5cc or h5pcc. # # If HDF5 is successfully found, this macro calls # # AC_SUBST(HDF5_VERSION) # AC_SUBST(HDF5_CC) # AC_SUBST(HDF5_CFLAGS) # AC_SUBST(HDF5_CPPFLAGS) # AC_SUBST(HDF5_LDFLAGS) # AC_SUBST(HDF5_LIBS) # AC_SUBST(HDF5_FC) # AC_SUBST(HDF5_FFLAGS) # AC_SUBST(HDF5_FLIBS) # AC_DEFINE(HAVE_HDF5) # # and sets with_hdf5="yes". Additionally, the macro sets # with_hdf5_fortran="yes" if a matching Fortran wrapper script is found. # Note that Autoconf's Fortran support is not used to perform this check. # H5CC and H5FC will contain the appropriate serial or parallel HDF5 # wrapper script locations. # # If HDF5 is disabled or not found, this macro sets with_hdf5="no" and # with_hdf5_fortran="no". # # Your configuration script can test $with_hdf5 to take any further # actions. HDF5_{C,CPP,LD}FLAGS may be used when building with C or C++. # HDF5_F{FLAGS,LIBS} should be used when building Fortran applications. 
# # To use the macro, one would code one of the following in "configure.ac" # before AC_OUTPUT: # # 1) dnl Check for HDF5 support # AX_LIB_HDF5() # # 2) dnl Check for serial HDF5 support # AX_LIB_HDF5([serial]) # # 3) dnl Check for parallel HDF5 support # AX_LIB_HDF5([parallel]) # # One could test $with_hdf5 for the outcome or display it as follows # # echo "HDF5 support: $with_hdf5" # # You could also, for example, override the default CC in "configure.ac" to # enforce compilation with the compiler that HDF5 uses: # # AX_LIB_HDF5([parallel]) # if test "$with_hdf5" = "yes"; then # CC="$HDF5_CC" # else # AC_MSG_ERROR([Unable to find HDF5, we need parallel HDF5.]) # fi # # LICENSE # # Copyright (c) 2009 Timothy Brown # Copyright (c) 2010 Rhys Ulerich # # Copying and distribution of this file, with or without modification, are # permitted in any medium without royalty provided the copyright notice # and this notice are preserved. This file is offered as-is, without any # warranty. #serial 8 AC_DEFUN([AX_LIB_HDF5], [ AC_REQUIRE([AC_PROG_SED]) AC_REQUIRE([AC_PROG_AWK]) AC_REQUIRE([AC_PROG_GREP]) dnl Check first argument is one of the recognized values. dnl Fail eagerly if it is incorrect as this simplifies case statements below. if test "m4_normalize(m4_default([$1],[]))" = "" ; then : # Recognized value elif test "m4_normalize(m4_default([$1],[]))" = "serial" ; then : # Recognized value elif test "m4_normalize(m4_default([$1],[]))" = "parallel"; then : # Recognized value else AC_MSG_ERROR([ Unrecognized value for AX[]_LIB_HDF5 within configure.ac. If supplied, argument 1 must be either 'serial' or 'parallel'. ]) fi dnl Add a default --with-hdf5 configuration option. 
AC_ARG_WITH([hdf5], AS_HELP_STRING( [--with-hdf5=[yes/no/PATH]], m4_case(m4_normalize([$1]), [serial], [location of h5cc for serial HDF5 configuration], [parallel], [location of h5pcc for parallel HDF5 configuration], [location of h5cc or h5pcc for HDF5 configuration]) ), [if test "$withval" = "no"; then with_hdf5="no" elif test "$withval" = "yes"; then with_hdf5="yes" else with_hdf5="yes" H5CC="$withval" fi], [with_hdf5="yes"] ) dnl Set defaults to blank HDF5_CC="" HDF5_VERSION="" HDF5_CFLAGS="" HDF5_CPPFLAGS="" HDF5_LDFLAGS="" HDF5_LIBS="" HDF5_FC="" HDF5_FFLAGS="" HDF5_FLIBS="" dnl Try and find hdf5 compiler tools and options. if test "$with_hdf5" = "yes"; then if test -z "$H5CC"; then dnl Check to see if H5CC is in the path. AC_PATH_PROGS( [H5CC], m4_case(m4_normalize([$1]), [serial], [h5cc], [parallel], [h5pcc], [h5cc h5pcc]), []) else AC_MSG_CHECKING([Using provided HDF5 C wrapper]) AC_MSG_RESULT([$H5CC]) fi AC_MSG_CHECKING([for HDF5 libraries]) if test ! -f "$H5CC" || test ! -x "$H5CC"; then AC_MSG_RESULT([no]) AC_MSG_WARN(m4_case(m4_normalize([$1]), [serial], [ Unable to locate serial HDF5 compilation helper script 'h5cc'. Please specify --with-hdf5= as the full path to h5cc. HDF5 support is being disabled (equivalent to --with-hdf5=no). ], [parallel],[ Unable to locate parallel HDF5 compilation helper script 'h5pcc'. Please specify --with-hdf5= as the full path to h5pcc. HDF5 support is being disabled (equivalent to --with-hdf5=no). ], [ Unable to locate HDF5 compilation helper scripts 'h5cc' or 'h5pcc'. Please specify --with-hdf5= as the full path to h5cc or h5pcc. HDF5 support is being disabled (equivalent to --with-hdf5=no). ])) with_hdf5="no" with_hdf5_fortran="no" else dnl Get the h5cc output HDF5_SHOW=$(eval $H5CC -show) dnl Get the actual compiler used HDF5_CC=$(eval $H5CC -show | $AWK '{print $[]1}') dnl h5cc provides both AM_ and non-AM_ options dnl depending on how it was compiled either one of dnl these are empty. Lets roll them both into one. 
dnl Look for "HDF5 Version: X.Y.Z" HDF5_VERSION=$(eval $H5CC -showconfig | $GREP 'HDF5 Version:' \ | $AWK '{print $[]3}') dnl An ideal situation would be where everything we needed was dnl in the AM_* variables. However most systems are not like this dnl and seem to have the values in the non-AM variables. dnl dnl We try the following to find the flags: dnl (1) Look for "NAME:" tags dnl (2) Look for "H5_NAME:" tags dnl (3) Look for "AM_NAME:" tags dnl HDF5_tmp_flags=$(eval $H5CC -showconfig \ | $GREP 'FLAGS\|Extra libraries:' \ | $AWK -F: '{printf("%s "), $[]2}' ) dnl Find the installation directory and append include/ HDF5_tmp_inst=$(eval $H5CC -showconfig \ | $GREP 'Installation point:' \ | $AWK -F: '{print $[]2}' ) dnl Add this to the CPPFLAGS HDF5_CPPFLAGS="-I${HDF5_tmp_inst}/include" dnl Now sort the flags out based upon their prefixes for arg in $HDF5_SHOW $HDF5_tmp_flags ; do case "$arg" in -I*) echo $HDF5_CPPFLAGS | $GREP -e "$arg" 2>&1 >/dev/null \ || HDF5_CPPFLAGS="$arg $HDF5_CPPFLAGS" ;; -L*) echo $HDF5_LDFLAGS | $GREP -e "$arg" 2>&1 >/dev/null \ || HDF5_LDFLAGS="$arg $HDF5_LDFLAGS" ;; -l*) echo $HDF5_LIBS | $GREP -e "$arg" 2>&1 >/dev/null \ || HDF5_LIBS="$arg $HDF5_LIBS" ;; esac done HDF5_LIBS="$HDF5_LIBS -lhdf5" AC_MSG_RESULT([yes (version $[HDF5_VERSION])]) dnl See if we can compile ax_lib_hdf5_save_CC=$CC ax_lib_hdf5_save_CPPFLAGS=$CPPFLAGS ax_lib_hdf5_save_LIBS=$LIBS ax_lib_hdf5_save_LDFLAGS=$LDFLAGS CC=$HDF5_CC CPPFLAGS=$HDF5_CPPFLAGS LIBS=$HDF5_LIBS LDFLAGS=$HDF5_LDFLAGS AC_CHECK_HEADER([hdf5.h], [ac_cv_hadf5_h=yes], [ac_cv_hadf5_h=no]) AC_CHECK_LIB([hdf5], [H5Fcreate], [ac_cv_libhdf5=yes], [ac_cv_libhdf5=no]) if test "$ac_cv_hadf5_h" = "no" && test "$ac_cv_libhdf5" = "no" ; then AC_MSG_WARN([Unable to compile HDF5 test program]) fi dnl Look for HDF5's high level library AC_HAVE_LIBRARY([hdf5_hl], [HDF5_LIBS="$HDF5_LIBS -lhdf5_hl"], [], []) CC=$ax_lib_hdf5_save_CC LIBS=$ax_lib_hdf5_save_LIBS LDFLAGS=$ax_lib_hdf5_save_LDFLAGS AC_MSG_CHECKING([for 
matching HDF5 Fortran wrapper]) dnl Presume HDF5 Fortran wrapper is just a name variant from H5CC H5FC=$(eval echo -n $H5CC | $SED -n 's/cc$/fc/p') if test -x "$H5FC"; then AC_MSG_RESULT([$H5FC]) with_hdf5_fortran="yes" AC_SUBST([H5FC]) dnl Again, pry any remaining -Idir/-Ldir from compiler wrapper for arg in `$H5FC -show` do case "$arg" in #( -I*) echo $HDF5_FFLAGS | $GREP -e "$arg" >/dev/null \ || HDF5_FFLAGS="$arg $HDF5_FFLAGS" ;;#( -L*) echo $HDF5_FFLAGS | $GREP -e "$arg" >/dev/null \ || HDF5_FFLAGS="$arg $HDF5_FFLAGS" dnl HDF5 installs .mod files in with libraries, dnl but some compilers need to find them with -I echo $HDF5_FFLAGS | $GREP -e "-I${arg#-L}" >/dev/null \ || HDF5_FFLAGS="-I${arg#-L} $HDF5_FFLAGS" ;; esac done dnl Make Fortran link line by inserting Fortran libraries for arg in $HDF5_LIBS do case "$arg" in #( -lhdf5_hl) HDF5_FLIBS="$HDF5_FLIBS -lhdf5hl_fortran $arg" ;; #( -lhdf5) HDF5_FLIBS="$HDF5_FLIBS -lhdf5_fortran $arg" ;; #( *) HDF5_FLIBS="$HDF5_FLIBS $arg" ;; esac done else AC_MSG_RESULT([no]) with_hdf5_fortran="no" fi AC_SUBST([HDF5_VERSION]) AC_SUBST([HDF5_CC]) AC_SUBST([HDF5_CFLAGS]) AC_SUBST([HDF5_CPPFLAGS]) AC_SUBST([HDF5_LDFLAGS]) AC_SUBST([HDF5_LIBS]) AC_SUBST([HDF5_FC]) AC_SUBST([HDF5_FFLAGS]) AC_SUBST([HDF5_FLIBS]) AC_DEFINE([HAVE_HDF5], [1], [Defined if you have HDF5 support]) fi fi ]) slurm-slurm-15-08-7-1/auxdir/ax_pthread.m4000066400000000000000000000312671265000126300202130ustar00rootroot00000000000000# =========================================================================== # http://www.gnu.org/software/autoconf-archive/ax_pthread.html # =========================================================================== # # SYNOPSIS # # AX_PTHREAD([ACTION-IF-FOUND[, ACTION-IF-NOT-FOUND]]) # # DESCRIPTION # # This macro figures out how to build C programs using POSIX threads. 
It # sets the PTHREAD_LIBS output variable to the threads library and linker # flags, and the PTHREAD_CFLAGS output variable to any special C compiler # flags that are needed. (The user can also force certain compiler # flags/libs to be tested by setting these environment variables.) # # Also sets PTHREAD_CC to any special C compiler that is needed for # multi-threaded programs (defaults to the value of CC otherwise). (This # is necessary on AIX to use the special cc_r compiler alias.) # # NOTE: You are assumed to not only compile your program with these flags, # but also link it with them as well. e.g. you should link with # $PTHREAD_CC $CFLAGS $PTHREAD_CFLAGS $LDFLAGS ... $PTHREAD_LIBS $LIBS # # If you are only building threads programs, you may wish to use these # variables in your default LIBS, CFLAGS, and CC: # # LIBS="$PTHREAD_LIBS $LIBS" # CFLAGS="$CFLAGS $PTHREAD_CFLAGS" # CC="$PTHREAD_CC" # # In addition, if the PTHREAD_CREATE_JOINABLE thread-attribute constant # has a nonstandard name, defines PTHREAD_CREATE_JOINABLE to that name # (e.g. PTHREAD_CREATE_UNDETACHED on AIX). # # Also HAVE_PTHREAD_PRIO_INHERIT is defined if pthread is found and the # PTHREAD_PRIO_INHERIT symbol is defined when compiling with # PTHREAD_CFLAGS. # # ACTION-IF-FOUND is a list of shell commands to run if a threads library # is found, and ACTION-IF-NOT-FOUND is a list of commands to run it if it # is not found. If ACTION-IF-FOUND is not specified, the default action # will define HAVE_PTHREAD. # # Please let the authors know if this macro fails on any platform, or if # you have any other suggestions or comments. This macro was based on work # by SGJ on autoconf scripts for FFTW (http://www.fftw.org/) (with help # from M. Frigo), as well as ac_pthread and hb_pthread macros posted by # Alejandro Forero Cuervo to the autoconf macro repository. We are also # grateful for the helpful feedback of numerous users. # # Updated for Autoconf 2.68 by Daniel Richard G. 
# # LICENSE # # Copyright (c) 2008 Steven G. Johnson # Copyright (c) 2011 Daniel Richard G. # # This program is free software: you can redistribute it and/or modify it # under the terms of the GNU General Public License as published by the # Free Software Foundation, either version 3 of the License, or (at your # option) any later version. # # This program is distributed in the hope that it will be useful, but # WITHOUT ANY WARRANTY; without even the implied warranty of # MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU General # Public License for more details. # # You should have received a copy of the GNU General Public License along # with this program. If not, see . # # As a special exception, the respective Autoconf Macro's copyright owner # gives unlimited permission to copy, distribute and modify the configure # scripts that are the output of Autoconf when processing the Macro. You # need not follow the terms of the GNU General Public License when using # or distributing such scripts, even though portions of the text of the # Macro appear in them. The GNU General Public License (GPL) does govern # all other use of the material that constitutes the Autoconf Macro. # # This special exception to the GPL applies to versions of the Autoconf # Macro released by the Autoconf Archive. When you make and distribute a # modified version of the Autoconf Macro, you may extend this special # exception to the GPL to apply to your modified version as well. #serial 20 AU_ALIAS([ACX_PTHREAD], [AX_PTHREAD]) AC_DEFUN([AX_PTHREAD], [ AC_REQUIRE([AC_CANONICAL_HOST]) AC_LANG_PUSH([C]) ax_pthread_ok=no # We used to check for pthread.h first, but this fails if pthread.h # requires special compiler flags (e.g. on True64 or Sequent). # It gets checked for in the link test anyway. 
# First of all, check if the user has set any of the PTHREAD_LIBS, # etcetera environment variables, and if threads linking works using # them: if test x"$PTHREAD_LIBS$PTHREAD_CFLAGS" != x; then save_CFLAGS="$CFLAGS" CFLAGS="$CFLAGS $PTHREAD_CFLAGS" save_LIBS="$LIBS" LIBS="$PTHREAD_LIBS $LIBS" AC_MSG_CHECKING([for pthread_join in LIBS=$PTHREAD_LIBS with CFLAGS=$PTHREAD_CFLAGS]) AC_TRY_LINK_FUNC(pthread_join, ax_pthread_ok=yes) AC_MSG_RESULT($ax_pthread_ok) if test x"$ax_pthread_ok" = xno; then PTHREAD_LIBS="" PTHREAD_CFLAGS="" fi LIBS="$save_LIBS" CFLAGS="$save_CFLAGS" fi # We must check for the threads library under a number of different # names; the ordering is very important because some systems # (e.g. DEC) have both -lpthread and -lpthreads, where one of the # libraries is broken (non-POSIX). # Create a list of thread flags to try. Items starting with a "-" are # C compiler flags, and other items are library names, except for "none" # which indicates that we try without any flags at all, and "pthread-config" # which is a program returning the flags for the Pth emulation library. ax_pthread_flags="pthreads none -Kthread -kthread lthread -pthread -pthreads -mthreads pthread --thread-safe -mt pthread-config" # The ordering *is* (sometimes) important. 
Some notes on the # individual items follow: # pthreads: AIX (must check this before -lpthread) # none: in case threads are in libc; should be tried before -Kthread and # other compiler flags to prevent continual compiler warnings # -Kthread: Sequent (threads in libc, but -Kthread needed for pthread.h) # -kthread: FreeBSD kernel threads (preferred to -pthread since SMP-able) # lthread: LinuxThreads port on FreeBSD (also preferred to -pthread) # -pthread: Linux/gcc (kernel threads), BSD/gcc (userland threads) # -pthreads: Solaris/gcc # -mthreads: Mingw32/gcc, Lynx/gcc # -mt: Sun Workshop C (may only link SunOS threads [-lthread], but it # doesn't hurt to check since this sometimes defines pthreads too; # also defines -D_REENTRANT) # ... -mt is also the pthreads flag for HP/aCC # pthread: Linux, etcetera # --thread-safe: KAI C++ # pthread-config: use pthread-config program (for GNU Pth library) case ${host_os} in solaris*) # On Solaris (at least, for some versions), libc contains stubbed # (non-functional) versions of the pthreads routines, so link-based # tests will erroneously succeed. (We need to link with -pthreads/-mt/ # -lpthread.) (The stubs are missing pthread_cleanup_push, or rather # a function called by this macro, so we could check for that, but # who knows whether they'll stub that too in a future libc.) 
So, # we'll just look for -pthreads and -lpthread first: ax_pthread_flags="-pthreads pthread -mt -pthread $ax_pthread_flags" ;; darwin*) ax_pthread_flags="-pthread $ax_pthread_flags" ;; esac if test x"$ax_pthread_ok" = xno; then for flag in $ax_pthread_flags; do case $flag in none) AC_MSG_CHECKING([whether pthreads work without any flags]) ;; -*) AC_MSG_CHECKING([whether pthreads work with $flag]) PTHREAD_CFLAGS="$flag" ;; pthread-config) AC_CHECK_PROG(ax_pthread_config, pthread-config, yes, no) if test x"$ax_pthread_config" = xno; then continue; fi PTHREAD_CFLAGS="`pthread-config --cflags`" PTHREAD_LIBS="`pthread-config --ldflags` `pthread-config --libs`" ;; *) AC_MSG_CHECKING([for the pthreads library -l$flag]) PTHREAD_LIBS="-l$flag" ;; esac save_LIBS="$LIBS" save_CFLAGS="$CFLAGS" LIBS="$PTHREAD_LIBS $LIBS" CFLAGS="$CFLAGS $PTHREAD_CFLAGS" # Check for various functions. We must include pthread.h, # since some functions may be macros. (On the Sequent, we # need a special flag -Kthread to make this header compile.) # We check for pthread_join because it is in -lpthread on IRIX # while pthread_create is in libc. We check for pthread_attr_init # due to DEC craziness with -lpthreads. We check for # pthread_cleanup_push because it is one of the few pthread # functions on Solaris that doesn't have a non-functional libc stub. # We try pthread_create on general principles. 
AC_LINK_IFELSE([AC_LANG_PROGRAM([#include static void routine(void *a) { a = 0; } static void *start_routine(void *a) { return a; }], [pthread_t th; pthread_attr_t attr; pthread_create(&th, 0, start_routine, 0); pthread_join(th, 0); pthread_attr_init(&attr); pthread_cleanup_push(routine, 0); pthread_cleanup_pop(0) /* ; */])], [ax_pthread_ok=yes], []) LIBS="$save_LIBS" CFLAGS="$save_CFLAGS" AC_MSG_RESULT($ax_pthread_ok) if test "x$ax_pthread_ok" = xyes; then break; fi PTHREAD_LIBS="" PTHREAD_CFLAGS="" done fi # Various other checks: if test "x$ax_pthread_ok" = xyes; then save_LIBS="$LIBS" LIBS="$PTHREAD_LIBS $LIBS" save_CFLAGS="$CFLAGS" CFLAGS="$CFLAGS $PTHREAD_CFLAGS" # Detect AIX lossage: JOINABLE attribute is called UNDETACHED. AC_MSG_CHECKING([for joinable pthread attribute]) attr_name=unknown for attr in PTHREAD_CREATE_JOINABLE PTHREAD_CREATE_UNDETACHED; do AC_LINK_IFELSE([AC_LANG_PROGRAM([#include ], [int attr = $attr; return attr /* ; */])], [attr_name=$attr; break], []) done AC_MSG_RESULT($attr_name) if test "$attr_name" != PTHREAD_CREATE_JOINABLE; then AC_DEFINE_UNQUOTED(PTHREAD_CREATE_JOINABLE, $attr_name, [Define to necessary symbol if this constant uses a non-standard name on your system.]) fi AC_MSG_CHECKING([if more special flags are required for pthreads]) flag=no case ${host_os} in aix* | freebsd* | darwin*) flag="-D_THREAD_SAFE";; osf* | hpux*) flag="-D_REENTRANT";; solaris*) if test "$GCC" = "yes"; then flag="-D_REENTRANT" else flag="-mt -D_REENTRANT" fi ;; esac AC_MSG_RESULT(${flag}) if test "x$flag" != xno; then PTHREAD_CFLAGS="$flag $PTHREAD_CFLAGS" fi AC_CACHE_CHECK([for PTHREAD_PRIO_INHERIT], ax_cv_PTHREAD_PRIO_INHERIT, [ AC_LINK_IFELSE([ AC_LANG_PROGRAM([[#include ]], [[int i = PTHREAD_PRIO_INHERIT;]])], [ax_cv_PTHREAD_PRIO_INHERIT=yes], [ax_cv_PTHREAD_PRIO_INHERIT=no]) ]) AS_IF([test "x$ax_cv_PTHREAD_PRIO_INHERIT" = "xyes"], AC_DEFINE([HAVE_PTHREAD_PRIO_INHERIT], 1, [Have PTHREAD_PRIO_INHERIT.])) LIBS="$save_LIBS" CFLAGS="$save_CFLAGS" # 
More AIX lossage: compile with *_r variant if test "x$GCC" != xyes; then case $host_os in aix*) AS_CASE(["x/$CC"], [x*/c89|x*/c89_128|x*/c99|x*/c99_128|x*/cc|x*/cc128|x*/xlc|x*/xlc_v6|x*/xlc128|x*/xlc128_v6], [#handle absolute path differently from PATH based program lookup AS_CASE(["x$CC"], [x/*], [AS_IF([AS_EXECUTABLE_P([${CC}_r])],[PTHREAD_CC="${CC}_r"])], [AC_CHECK_PROGS([PTHREAD_CC],[${CC}_r],[$CC])])]) ;; esac fi fi test -n "$PTHREAD_CC" || PTHREAD_CC="$CC" AC_SUBST(PTHREAD_LIBS) AC_SUBST(PTHREAD_CFLAGS) AC_SUBST(PTHREAD_CC) # Finally, execute ACTION-IF-FOUND/ACTION-IF-NOT-FOUND: if test x"$ax_pthread_ok" = xyes; then ifelse([$1],,AC_DEFINE(HAVE_PTHREAD,1,[Define if you have POSIX threads libraries and header files.]),[$1]) : else ax_pthread_ok=no $2 fi AC_LANG_POP ])dnl AX_PTHREAD slurm-slurm-15-08-7-1/auxdir/compile000077500000000000000000000162451265000126300172070ustar00rootroot00000000000000#! /bin/sh # Wrapper for compilers which do not understand '-c -o'. scriptversion=2012-10-14.11; # UTC # Copyright (C) 1999-2013 Free Software Foundation, Inc. # Written by Tom Tromey . # # This program is free software; you can redistribute it and/or modify # it under the terms of the GNU General Public License as published by # the Free Software Foundation; either version 2, or (at your option) # any later version. # # This program is distributed in the hope that it will be useful, # but WITHOUT ANY WARRANTY; without even the implied warranty of # MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the # GNU General Public License for more details. # # You should have received a copy of the GNU General Public License # along with this program. If not, see . # As a special exception to the GNU General Public License, if you # distribute this file as part of a program that contains a # configuration script generated by Autoconf, you may include it under # the same distribution terms that you use for the rest of that program. 
# This file is maintained in Automake, please report # bugs to or send patches to # . nl=' ' # We need space, tab and new line, in precisely that order. Quoting is # there to prevent tools from complaining about whitespace usage. IFS=" "" $nl" file_conv= # func_file_conv build_file lazy # Convert a $build file to $host form and store it in $file # Currently only supports Windows hosts. If the determined conversion # type is listed in (the comma separated) LAZY, no conversion will # take place. func_file_conv () { file=$1 case $file in / | /[!/]*) # absolute file, and not a UNC file if test -z "$file_conv"; then # lazily determine how to convert abs files case `uname -s` in MINGW*) file_conv=mingw ;; CYGWIN*) file_conv=cygwin ;; *) file_conv=wine ;; esac fi case $file_conv/,$2, in *,$file_conv,*) ;; mingw/*) file=`cmd //C echo "$file " | sed -e 's/"\(.*\) " *$/\1/'` ;; cygwin/*) file=`cygpath -m "$file" || echo "$file"` ;; wine/*) file=`winepath -w "$file" || echo "$file"` ;; esac ;; esac } # func_cl_dashL linkdir # Make cl look for libraries in LINKDIR func_cl_dashL () { func_file_conv "$1" if test -z "$lib_path"; then lib_path=$file else lib_path="$lib_path;$file" fi linker_opts="$linker_opts -LIBPATH:$file" } # func_cl_dashl library # Do a library search-path lookup for cl func_cl_dashl () { lib=$1 found=no save_IFS=$IFS IFS=';' for dir in $lib_path $LIB do IFS=$save_IFS if $shared && test -f "$dir/$lib.dll.lib"; then found=yes lib=$dir/$lib.dll.lib break fi if test -f "$dir/$lib.lib"; then found=yes lib=$dir/$lib.lib break fi if test -f "$dir/lib$lib.a"; then found=yes lib=$dir/lib$lib.a break fi done IFS=$save_IFS if test "$found" != yes; then lib=$lib.lib fi } # func_cl_wrapper cl arg... # Adjust compile command to suit cl func_cl_wrapper () { # Assume a capable shell lib_path= shared=: linker_opts= for arg do if test -n "$eat"; then eat= else case $1 in -o) # configure might choose to run compile as 'compile cc -o foo foo.c'. 
eat=1 case $2 in *.o | *.[oO][bB][jJ]) func_file_conv "$2" set x "$@" -Fo"$file" shift ;; *) func_file_conv "$2" set x "$@" -Fe"$file" shift ;; esac ;; -I) eat=1 func_file_conv "$2" mingw set x "$@" -I"$file" shift ;; -I*) func_file_conv "${1#-I}" mingw set x "$@" -I"$file" shift ;; -l) eat=1 func_cl_dashl "$2" set x "$@" "$lib" shift ;; -l*) func_cl_dashl "${1#-l}" set x "$@" "$lib" shift ;; -L) eat=1 func_cl_dashL "$2" ;; -L*) func_cl_dashL "${1#-L}" ;; -static) shared=false ;; -Wl,*) arg=${1#-Wl,} save_ifs="$IFS"; IFS=',' for flag in $arg; do IFS="$save_ifs" linker_opts="$linker_opts $flag" done IFS="$save_ifs" ;; -Xlinker) eat=1 linker_opts="$linker_opts $2" ;; -*) set x "$@" "$1" shift ;; *.cc | *.CC | *.cxx | *.CXX | *.[cC]++) func_file_conv "$1" set x "$@" -Tp"$file" shift ;; *.c | *.cpp | *.CPP | *.lib | *.LIB | *.Lib | *.OBJ | *.obj | *.[oO]) func_file_conv "$1" mingw set x "$@" "$file" shift ;; *) set x "$@" "$1" shift ;; esac fi shift done if test -n "$linker_opts"; then linker_opts="-link$linker_opts" fi exec "$@" $linker_opts exit 1 } eat= case $1 in '') echo "$0: No command. Try '$0 --help' for more information." 1>&2 exit 1; ;; -h | --h*) cat <<\EOF Usage: compile [--help] [--version] PROGRAM [ARGS] Wrapper for compilers which do not understand '-c -o'. Remove '-o dest.o' from ARGS, run PROGRAM with the remaining arguments, and rename the output as expected. If you are trying to build a whole package this is not the right script to run: please start by reading the file 'INSTALL'. Report bugs to . EOF exit $? ;; -v | --v*) echo "compile $scriptversion" exit $? ;; cl | *[/\\]cl | cl.exe | *[/\\]cl.exe ) func_cl_wrapper "$@" # Doesn't return... ;; esac ofile= cfile= for arg do if test -n "$eat"; then eat= else case $1 in -o) # configure might choose to run compile as 'compile cc -o foo foo.c'. # So we strip '-o arg' only if arg is an object. 
eat=1 case $2 in *.o | *.obj) ofile=$2 ;; *) set x "$@" -o "$2" shift ;; esac ;; *.c) cfile=$1 set x "$@" "$1" shift ;; *) set x "$@" "$1" shift ;; esac fi shift done if test -z "$ofile" || test -z "$cfile"; then # If no '-o' option was seen then we might have been invoked from a # pattern rule where we don't need one. That is ok -- this is a # normal compilation that the losing compiler can handle. If no # '.c' file was seen then we are probably linking. That is also # ok. exec "$@" fi # Name of file we expect compiler to create. cofile=`echo "$cfile" | sed 's|^.*[\\/]||; s|^[a-zA-Z]:||; s/\.c$/.o/'` # Create the lock directory. # Note: use '[/\\:.-]' here to ensure that we don't use the same name # that we are using for the .o file. Also, base the name on the expected # object file name, since that is what matters with a parallel build. lockdir=`echo "$cofile" | sed -e 's|[/\\:.-]|_|g'`.d while true; do if mkdir "$lockdir" >/dev/null 2>&1; then break fi sleep 1 done # FIXME: race condition here if user kills between mkdir and trap. trap "rmdir '$lockdir'; exit 1" 1 2 15 # Run the compile. "$@" ret=$? if test -f "$cofile"; then test "$cofile" = "$ofile" || mv "$cofile" "$ofile" elif test -f "${cofile}bj"; then test "${cofile}bj" = "$ofile" || mv "${cofile}bj" "$ofile" fi rmdir "$lockdir" exit $ret # Local Variables: # mode: shell-script # sh-indentation: 2 # eval: (add-hook 'write-file-hooks 'time-stamp) # time-stamp-start: "scriptversion=" # time-stamp-format: "%:y-%02m-%02d.%02H" # time-stamp-time-zone: "UTC" # time-stamp-end: "; # UTC" # End: slurm-slurm-15-08-7-1/auxdir/config.guess000077500000000000000000001235501265000126300201470ustar00rootroot00000000000000#! /bin/sh # Attempt to guess a canonical system name. # Copyright 1992-2014 Free Software Foundation, Inc. 
timestamp='2014-03-23' # This file is free software; you can redistribute it and/or modify it # under the terms of the GNU General Public License as published by # the Free Software Foundation; either version 3 of the License, or # (at your option) any later version. # # This program is distributed in the hope that it will be useful, but # WITHOUT ANY WARRANTY; without even the implied warranty of # MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU # General Public License for more details. # # You should have received a copy of the GNU General Public License # along with this program; if not, see . # # As a special exception to the GNU General Public License, if you # distribute this file as part of a program that contains a # configuration script generated by Autoconf, you may include it under # the same distribution terms that you use for the rest of that # program. This Exception is an additional permission under section 7 # of the GNU General Public License, version 3 ("GPLv3"). # # Originally written by Per Bothner. # # You can get the latest version of this script from: # http://git.savannah.gnu.org/gitweb/?p=config.git;a=blob_plain;f=config.guess;hb=HEAD # # Please send patches with a ChangeLog entry to config-patches@gnu.org. me=`echo "$0" | sed -e 's,.*/,,'` usage="\ Usage: $0 [OPTION] Output the configuration name of the system \`$me' is run on. Operation modes: -h, --help print this help, then exit -t, --time-stamp print date of last modification, then exit -v, --version print version number, then exit Report bugs and patches to ." version="\ GNU config.guess ($timestamp) Originally written by Per Bothner. Copyright 1992-2014 Free Software Foundation, Inc. This is free software; see the source for copying conditions. There is NO warranty; not even for MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE." help=" Try \`$me --help' for more information." 
# Parse command line while test $# -gt 0 ; do case $1 in --time-stamp | --time* | -t ) echo "$timestamp" ; exit ;; --version | -v ) echo "$version" ; exit ;; --help | --h* | -h ) echo "$usage"; exit ;; -- ) # Stop option processing shift; break ;; - ) # Use stdin as input. break ;; -* ) echo "$me: invalid option $1$help" >&2 exit 1 ;; * ) break ;; esac done if test $# != 0; then echo "$me: too many arguments$help" >&2 exit 1 fi trap 'exit 1' 1 2 15 # CC_FOR_BUILD -- compiler used by this script. Note that the use of a # compiler to aid in system detection is discouraged as it requires # temporary files to be created and, as you can see below, it is a # headache to deal with in a portable fashion. # Historically, `CC_FOR_BUILD' used to be named `HOST_CC'. We still # use `HOST_CC' if defined, but it is deprecated. # Portable tmp directory creation inspired by the Autoconf team. set_cc_for_build=' trap "exitcode=\$?; (rm -f \$tmpfiles 2>/dev/null; rmdir \$tmp 2>/dev/null) && exit \$exitcode" 0 ; trap "rm -f \$tmpfiles 2>/dev/null; rmdir \$tmp 2>/dev/null; exit 1" 1 2 13 15 ; : ${TMPDIR=/tmp} ; { tmp=`(umask 077 && mktemp -d "$TMPDIR/cgXXXXXX") 2>/dev/null` && test -n "$tmp" && test -d "$tmp" ; } || { test -n "$RANDOM" && tmp=$TMPDIR/cg$$-$RANDOM && (umask 077 && mkdir $tmp) ; } || { tmp=$TMPDIR/cg-$$ && (umask 077 && mkdir $tmp) && echo "Warning: creating insecure temp directory" >&2 ; } || { echo "$me: cannot create a temporary directory in $TMPDIR" >&2 ; exit 1 ; } ; dummy=$tmp/dummy ; tmpfiles="$dummy.c $dummy.o $dummy.rel $dummy" ; case $CC_FOR_BUILD,$HOST_CC,$CC in ,,) echo "int x;" > $dummy.c ; for c in cc gcc c89 c99 ; do if ($c -c -o $dummy.o $dummy.c) >/dev/null 2>&1 ; then CC_FOR_BUILD="$c"; break ; fi ; done ; if test x"$CC_FOR_BUILD" = x ; then CC_FOR_BUILD=no_compiler_found ; fi ;; ,,*) CC_FOR_BUILD=$CC ;; ,*,*) CC_FOR_BUILD=$HOST_CC ;; esac ; set_cc_for_build= ;' # This is needed to find uname on a Pyramid OSx when run in the BSD universe. 
# (ghazi@noc.rutgers.edu 1994-08-24)
if (test -f /.attbin/uname) >/dev/null 2>&1 ; then
	PATH=$PATH:/.attbin ; export PATH
fi

UNAME_MACHINE=`(uname -m) 2>/dev/null` || UNAME_MACHINE=unknown
UNAME_RELEASE=`(uname -r) 2>/dev/null` || UNAME_RELEASE=unknown
UNAME_SYSTEM=`(uname -s) 2>/dev/null`  || UNAME_SYSTEM=unknown
UNAME_VERSION=`(uname -v) 2>/dev/null` || UNAME_VERSION=unknown

case "${UNAME_SYSTEM}" in
Linux|GNU|GNU/*)
	# If the system lacks a compiler, then just pick glibc.
	# We could probably try harder.
	LIBC=gnu

	eval $set_cc_for_build
	cat <<-EOF > $dummy.c
	#include <features.h>
	#if defined(__UCLIBC__)
	LIBC=uclibc
	#elif defined(__dietlibc__)
	LIBC=dietlibc
	#else
	LIBC=gnu
	#endif
	EOF
	eval `$CC_FOR_BUILD -E $dummy.c 2>/dev/null | grep '^LIBC' | sed 's, ,,g'`
	;;
esac

# Note: order is significant - the case branches are not exclusive.

case "${UNAME_MACHINE}:${UNAME_SYSTEM}:${UNAME_RELEASE}:${UNAME_VERSION}" in
    *:NetBSD:*:*)
	# NetBSD (nbsd) targets should (where applicable) match one or
	# more of the tuples: *-*-netbsdelf*, *-*-netbsdaout*,
	# *-*-netbsdecoff* and *-*-netbsd*.  For targets that recently
	# switched to ELF, *-*-netbsd* would select the old
	# object file format.  This provides both forward
	# compatibility and a consistent mechanism for selecting the
	# object file format.
	#
	# Note: NetBSD doesn't particularly care about the vendor
	# portion of the name.  We always set it to "unknown".
	sysctl="sysctl -n hw.machine_arch"
	UNAME_MACHINE_ARCH=`(/sbin/$sysctl 2>/dev/null || \
	    /usr/sbin/$sysctl 2>/dev/null || echo unknown)`
	case "${UNAME_MACHINE_ARCH}" in
	    armeb) machine=armeb-unknown ;;
	    arm*) machine=arm-unknown ;;
	    sh3el) machine=shl-unknown ;;
	    sh3eb) machine=sh-unknown ;;
	    sh5el) machine=sh5le-unknown ;;
	    *) machine=${UNAME_MACHINE_ARCH}-unknown ;;
	esac
	# The Operating System including object format, if it has switched
	# to ELF recently, or will in the future.
	case "${UNAME_MACHINE_ARCH}" in
	    arm*|i386|m68k|ns32k|sh3*|sparc|vax)
		eval $set_cc_for_build
		if echo __ELF__ | $CC_FOR_BUILD -E - 2>/dev/null \
			| grep -q __ELF__
		then
		    # Once all utilities can be ECOFF (netbsdecoff) or a.out (netbsdaout).
		    # Return netbsd for either.  FIX?
		    os=netbsd
		else
		    os=netbsdelf
		fi
		;;
	    *)
		os=netbsd
		;;
	esac
	# The OS release
	# Debian GNU/NetBSD machines have a different userland, and
	# thus, need a distinct triplet. However, they do not need
	# kernel version information, so it can be replaced with a
	# suitable tag, in the style of linux-gnu.
	case "${UNAME_VERSION}" in
	    Debian*)
		release='-gnu'
		;;
	    *)
		release=`echo ${UNAME_RELEASE}|sed -e 's/[-_].*/\./'`
		;;
	esac
	# Since CPU_TYPE-MANUFACTURER-KERNEL-OPERATING_SYSTEM:
	# contains redundant information, the shorter form:
	# CPU_TYPE-MANUFACTURER-OPERATING_SYSTEM is used.
	echo "${machine}-${os}${release}"
	exit ;;
    *:Bitrig:*:*)
	UNAME_MACHINE_ARCH=`arch | sed 's/Bitrig.//'`
	echo ${UNAME_MACHINE_ARCH}-unknown-bitrig${UNAME_RELEASE}
	exit ;;
    *:OpenBSD:*:*)
	UNAME_MACHINE_ARCH=`arch | sed 's/OpenBSD.//'`
	echo ${UNAME_MACHINE_ARCH}-unknown-openbsd${UNAME_RELEASE}
	exit ;;
    *:ekkoBSD:*:*)
	echo ${UNAME_MACHINE}-unknown-ekkobsd${UNAME_RELEASE}
	exit ;;
    *:SolidBSD:*:*)
	echo ${UNAME_MACHINE}-unknown-solidbsd${UNAME_RELEASE}
	exit ;;
    macppc:MirBSD:*:*)
	echo powerpc-unknown-mirbsd${UNAME_RELEASE}
	exit ;;
    *:MirBSD:*:*)
	echo ${UNAME_MACHINE}-unknown-mirbsd${UNAME_RELEASE}
	exit ;;
    alpha:OSF1:*:*)
	case $UNAME_RELEASE in
	*4.0)
		UNAME_RELEASE=`/usr/sbin/sizer -v | awk '{print $3}'`
		;;
	*5.*)
		UNAME_RELEASE=`/usr/sbin/sizer -v | awk '{print $4}'`
		;;
	esac
	# According to Compaq, /usr/sbin/psrinfo has been available on
	# OSF/1 and Tru64 systems produced since 1995.  I hope that
	# covers most systems running today.  This code pipes the CPU
	# types through head -n 1, so we only detect the type of CPU 0.
	ALPHA_CPU_TYPE=`/usr/sbin/psrinfo -v | sed -n -e 's/^ The alpha \(.*\) processor.*$/\1/p' | head -n 1`
	case "$ALPHA_CPU_TYPE" in
	    "EV4 (21064)")
		UNAME_MACHINE="alpha" ;;
	    "EV4.5 (21064)")
		UNAME_MACHINE="alpha" ;;
	    "LCA4 (21066/21068)")
		UNAME_MACHINE="alpha" ;;
	    "EV5 (21164)")
		UNAME_MACHINE="alphaev5" ;;
	    "EV5.6 (21164A)")
		UNAME_MACHINE="alphaev56" ;;
	    "EV5.6 (21164PC)")
		UNAME_MACHINE="alphapca56" ;;
	    "EV5.7 (21164PC)")
		UNAME_MACHINE="alphapca57" ;;
	    "EV6 (21264)")
		UNAME_MACHINE="alphaev6" ;;
	    "EV6.7 (21264A)")
		UNAME_MACHINE="alphaev67" ;;
	    "EV6.8CB (21264C)")
		UNAME_MACHINE="alphaev68" ;;
	    "EV6.8AL (21264B)")
		UNAME_MACHINE="alphaev68" ;;
	    "EV6.8CX (21264D)")
		UNAME_MACHINE="alphaev68" ;;
	    "EV6.9A (21264/EV69A)")
		UNAME_MACHINE="alphaev69" ;;
	    "EV7 (21364)")
		UNAME_MACHINE="alphaev7" ;;
	    "EV7.9 (21364A)")
		UNAME_MACHINE="alphaev79" ;;
	esac
	# A Pn.n version is a patched version.
	# A Vn.n version is a released version.
	# A Tn.n version is a released field test version.
	# A Xn.n version is an unreleased experimental baselevel.
	# 1.2 uses "1.2" for uname -r.
	echo ${UNAME_MACHINE}-dec-osf`echo ${UNAME_RELEASE} | sed -e 's/^[PVTX]//' | tr 'ABCDEFGHIJKLMNOPQRSTUVWXYZ' 'abcdefghijklmnopqrstuvwxyz'`
	# Reset EXIT trap before exiting to avoid spurious non-zero exit code.
	exitcode=$?
	trap '' 0
	exit $exitcode ;;
    Alpha\ *:Windows_NT*:*)
	# How do we know it's Interix rather than the generic POSIX subsystem?
	# Should we change UNAME_MACHINE based on the output of uname instead
	# of the specific Alpha model?
	echo alpha-pc-interix
	exit ;;
    21064:Windows_NT:50:3)
	echo alpha-dec-winnt3.5
	exit ;;
    Amiga*:UNIX_System_V:4.0:*)
	echo m68k-unknown-sysv4
	exit ;;
    *:[Aa]miga[Oo][Ss]:*:*)
	echo ${UNAME_MACHINE}-unknown-amigaos
	exit ;;
    *:[Mm]orph[Oo][Ss]:*:*)
	echo ${UNAME_MACHINE}-unknown-morphos
	exit ;;
    *:OS/390:*:*)
	echo i370-ibm-openedition
	exit ;;
    *:z/VM:*:*)
	echo s390-ibm-zvmoe
	exit ;;
    *:OS400:*:*)
	echo powerpc-ibm-os400
	exit ;;
    arm:RISC*:1.[012]*:*|arm:riscix:1.[012]*:*)
	echo arm-acorn-riscix${UNAME_RELEASE}
	exit ;;
    arm*:riscos:*:*|arm*:RISCOS:*:*)
	echo arm-unknown-riscos
	exit ;;
    SR2?01:HI-UX/MPP:*:* | SR8000:HI-UX/MPP:*:*)
	echo hppa1.1-hitachi-hiuxmpp
	exit ;;
    Pyramid*:OSx*:*:* | MIS*:OSx*:*:* | MIS*:SMP_DC-OSx*:*:*)
	# akee@wpdis03.wpafb.af.mil (Earle F. Ake) contributed MIS and NILE.
	if test "`(/bin/universe) 2>/dev/null`" = att ; then
		echo pyramid-pyramid-sysv3
	else
		echo pyramid-pyramid-bsd
	fi
	exit ;;
    NILE*:*:*:dcosx)
	echo pyramid-pyramid-svr4
	exit ;;
    DRS?6000:unix:4.0:6*)
	echo sparc-icl-nx6
	exit ;;
    DRS?6000:UNIX_SV:4.2*:7* | DRS?6000:isis:4.2*:7*)
	case `/usr/bin/uname -p` in
	    sparc) echo sparc-icl-nx7; exit ;;
	esac ;;
    s390x:SunOS:*:*)
	echo ${UNAME_MACHINE}-ibm-solaris2`echo ${UNAME_RELEASE}|sed -e 's/[^.]*//'`
	exit ;;
    sun4H:SunOS:5.*:*)
	echo sparc-hal-solaris2`echo ${UNAME_RELEASE}|sed -e 's/[^.]*//'`
	exit ;;
    sun4*:SunOS:5.*:* | tadpole*:SunOS:5.*:*)
	echo sparc-sun-solaris2`echo ${UNAME_RELEASE}|sed -e 's/[^.]*//'`
	exit ;;
    i86pc:AuroraUX:5.*:* | i86xen:AuroraUX:5.*:*)
	echo i386-pc-auroraux${UNAME_RELEASE}
	exit ;;
    i86pc:SunOS:5.*:* | i86xen:SunOS:5.*:*)
	eval $set_cc_for_build
	SUN_ARCH="i386"
	# If there is a compiler, see if it is configured for 64-bit objects.
	# Note that the Sun cc does not turn __LP64__ into 1 like gcc does.
	# This test works for both compilers.
	if [ "$CC_FOR_BUILD" != 'no_compiler_found' ]; then
	    if (echo '#ifdef __amd64'; echo IS_64BIT_ARCH; echo '#endif') | \
		(CCOPTS= $CC_FOR_BUILD -E - 2>/dev/null) | \
		grep IS_64BIT_ARCH >/dev/null
	    then
		SUN_ARCH="x86_64"
	    fi
	fi
	echo ${SUN_ARCH}-pc-solaris2`echo ${UNAME_RELEASE}|sed -e 's/[^.]*//'`
	exit ;;
    sun4*:SunOS:6*:*)
	# According to config.sub, this is the proper way to canonicalize
	# SunOS6.  Hard to guess exactly what SunOS6 will be like, but
	# it's likely to be more like Solaris than SunOS4.
	echo sparc-sun-solaris3`echo ${UNAME_RELEASE}|sed -e 's/[^.]*//'`
	exit ;;
    sun4*:SunOS:*:*)
	case "`/usr/bin/arch -k`" in
	    Series*|S4*)
		UNAME_RELEASE=`uname -v`
		;;
	esac
	# Japanese Language versions have a version number like `4.1.3-JL'.
	echo sparc-sun-sunos`echo ${UNAME_RELEASE}|sed -e 's/-/_/'`
	exit ;;
    sun3*:SunOS:*:*)
	echo m68k-sun-sunos${UNAME_RELEASE}
	exit ;;
    sun*:*:4.2BSD:*)
	UNAME_RELEASE=`(sed 1q /etc/motd | awk '{print substr($5,1,3)}') 2>/dev/null`
	test "x${UNAME_RELEASE}" = "x" && UNAME_RELEASE=3
	case "`/bin/arch`" in
	    sun3)
		echo m68k-sun-sunos${UNAME_RELEASE}
		;;
	    sun4)
		echo sparc-sun-sunos${UNAME_RELEASE}
		;;
	esac
	exit ;;
    aushp:SunOS:*:*)
	echo sparc-auspex-sunos${UNAME_RELEASE}
	exit ;;
    # The situation for MiNT is a little confusing.  The machine name
    # can be virtually everything (everything which is not
    # "atarist" or "atariste" at least should have a processor
    # > m68000).  The system name ranges from "MiNT" over "FreeMiNT"
    # to the lowercase version "mint" (or "freemint").  Finally
    # the system name "TOS" denotes a system which is actually not
    # MiNT.  But MiNT is downward compatible to TOS, so this should
    # be no problem.
    atarist[e]:*MiNT:*:* | atarist[e]:*mint:*:* | atarist[e]:*TOS:*:*)
	echo m68k-atari-mint${UNAME_RELEASE}
	exit ;;
    atari*:*MiNT:*:* | atari*:*mint:*:* | atarist[e]:*TOS:*:*)
	echo m68k-atari-mint${UNAME_RELEASE}
	exit ;;
    *falcon*:*MiNT:*:* | *falcon*:*mint:*:* | *falcon*:*TOS:*:*)
	echo m68k-atari-mint${UNAME_RELEASE}
	exit ;;
    milan*:*MiNT:*:* | milan*:*mint:*:* | *milan*:*TOS:*:*)
	echo m68k-milan-mint${UNAME_RELEASE}
	exit ;;
    hades*:*MiNT:*:* | hades*:*mint:*:* | *hades*:*TOS:*:*)
	echo m68k-hades-mint${UNAME_RELEASE}
	exit ;;
    *:*MiNT:*:* | *:*mint:*:* | *:*TOS:*:*)
	echo m68k-unknown-mint${UNAME_RELEASE}
	exit ;;
    m68k:machten:*:*)
	echo m68k-apple-machten${UNAME_RELEASE}
	exit ;;
    powerpc:machten:*:*)
	echo powerpc-apple-machten${UNAME_RELEASE}
	exit ;;
    RISC*:Mach:*:*)
	echo mips-dec-mach_bsd4.3
	exit ;;
    RISC*:ULTRIX:*:*)
	echo mips-dec-ultrix${UNAME_RELEASE}
	exit ;;
    VAX*:ULTRIX*:*:*)
	echo vax-dec-ultrix${UNAME_RELEASE}
	exit ;;
    2020:CLIX:*:* | 2430:CLIX:*:*)
	echo clipper-intergraph-clix${UNAME_RELEASE}
	exit ;;
    mips:*:*:UMIPS | mips:*:*:RISCos)
	eval $set_cc_for_build
	sed 's/^ //' << EOF >$dummy.c
#ifdef __cplusplus
#include <stdio.h>  /* for printf() prototype */
int main (int argc, char *argv[]) {
#else
int main (argc, argv) int argc; char *argv[]; {
#endif
#if defined (host_mips) && defined (MIPSEB)
#if defined (SYSTYPE_SYSV)
printf ("mips-mips-riscos%ssysv\n", argv[1]); exit (0);
#endif
#if defined (SYSTYPE_SVR4)
printf ("mips-mips-riscos%ssvr4\n", argv[1]); exit (0);
#endif
#if defined (SYSTYPE_BSD43) || defined(SYSTYPE_BSD)
printf ("mips-mips-riscos%sbsd\n", argv[1]); exit (0);
#endif
#endif
exit (-1);
}
EOF
	$CC_FOR_BUILD -o $dummy $dummy.c &&
	  dummyarg=`echo "${UNAME_RELEASE}" | sed -n 's/\([0-9]*\).*/\1/p'` &&
	  SYSTEM_NAME=`$dummy $dummyarg` &&
	    { echo "$SYSTEM_NAME"; exit; }
	echo mips-mips-riscos${UNAME_RELEASE}
	exit ;;
    Motorola:PowerMAX_OS:*:*)
	echo powerpc-motorola-powermax
	exit ;;
    Motorola:*:4.3:PL8-*)
	echo powerpc-harris-powermax
	exit ;;
    Night_Hawk:*:*:PowerMAX_OS | Synergy:PowerMAX_OS:*:*)
	echo powerpc-harris-powermax
	exit ;;
    Night_Hawk:Power_UNIX:*:*)
	echo powerpc-harris-powerunix
	exit ;;
    m88k:CX/UX:7*:*)
	echo m88k-harris-cxux7
	exit ;;
    m88k:*:4*:R4*)
	echo m88k-motorola-sysv4
	exit ;;
    m88k:*:3*:R3*)
	echo m88k-motorola-sysv3
	exit ;;
    AViiON:dgux:*:*)
	# DG/UX returns AViiON for all architectures
	UNAME_PROCESSOR=`/usr/bin/uname -p`
	if [ $UNAME_PROCESSOR = mc88100 ] || [ $UNAME_PROCESSOR = mc88110 ]
	then
	    if [ ${TARGET_BINARY_INTERFACE}x = m88kdguxelfx ] || \
	       [ ${TARGET_BINARY_INTERFACE}x = x ]
	    then
		echo m88k-dg-dgux${UNAME_RELEASE}
	    else
		echo m88k-dg-dguxbcs${UNAME_RELEASE}
	    fi
	else
	    echo i586-dg-dgux${UNAME_RELEASE}
	fi
	exit ;;
    M88*:DolphinOS:*:*)	# DolphinOS (SVR3)
	echo m88k-dolphin-sysv3
	exit ;;
    M88*:*:R3*:*)
	# Delta 88k system running SVR3
	echo m88k-motorola-sysv3
	exit ;;
    XD88*:*:*:*) # Tektronix XD88 system running UTekV (SVR3)
	echo m88k-tektronix-sysv3
	exit ;;
    Tek43[0-9][0-9]:UTek:*:*) # Tektronix 4300 system running UTek (BSD)
	echo m68k-tektronix-bsd
	exit ;;
    *:IRIX*:*:*)
	echo mips-sgi-irix`echo ${UNAME_RELEASE}|sed -e 's/-/_/g'`
	exit ;;
    ????????:AIX?:[12].1:2)	# AIX 2.2.1 or AIX 2.1.1 is RT/PC AIX.
	echo romp-ibm-aix	# uname -m gives an 8 hex-code CPU id
	exit ;;			# Note that: echo "'`uname -s`'" gives 'AIX '
    i*86:AIX:*:*)
	echo i386-ibm-aix
	exit ;;
    ia64:AIX:*:*)
	if [ -x /usr/bin/oslevel ] ; then
		IBM_REV=`/usr/bin/oslevel`
	else
		IBM_REV=${UNAME_VERSION}.${UNAME_RELEASE}
	fi
	echo ${UNAME_MACHINE}-ibm-aix${IBM_REV}
	exit ;;
    *:AIX:2:3)
	if grep bos325 /usr/include/stdio.h >/dev/null 2>&1; then
		eval $set_cc_for_build
		sed 's/^ //' << EOF >$dummy.c
#include <sys/systemcfg.h>

main()
	{
	if (!__power_pc())
		exit(1);
	puts("powerpc-ibm-aix3.2.5");
	exit(0);
	}
EOF
		if $CC_FOR_BUILD -o $dummy $dummy.c && SYSTEM_NAME=`$dummy`
		then
			echo "$SYSTEM_NAME"
		else
			echo rs6000-ibm-aix3.2.5
		fi
	elif grep bos324 /usr/include/stdio.h >/dev/null 2>&1; then
		echo rs6000-ibm-aix3.2.4
	else
		echo rs6000-ibm-aix3.2
	fi
	exit ;;
    *:AIX:*:[4567])
	IBM_CPU_ID=`/usr/sbin/lsdev -C -c processor -S available | sed 1q | awk '{ print $1 }'`
	if /usr/sbin/lsattr -El ${IBM_CPU_ID} | grep ' POWER' >/dev/null 2>&1; then
		IBM_ARCH=rs6000
	else
		IBM_ARCH=powerpc
	fi
	if [ -x /usr/bin/oslevel ] ; then
		IBM_REV=`/usr/bin/oslevel`
	else
		IBM_REV=${UNAME_VERSION}.${UNAME_RELEASE}
	fi
	echo ${IBM_ARCH}-ibm-aix${IBM_REV}
	exit ;;
    *:AIX:*:*)
	echo rs6000-ibm-aix
	exit ;;
    ibmrt:4.4BSD:*|romp-ibm:BSD:*)
	echo romp-ibm-bsd4.4
	exit ;;
    ibmrt:*BSD:*|romp-ibm:BSD:*)	# covers RT/PC BSD and
	echo romp-ibm-bsd${UNAME_RELEASE}	# 4.3 with uname added to
	exit ;;				# report: romp-ibm BSD 4.3
    *:BOSX:*:*)
	echo rs6000-bull-bosx
	exit ;;
    DPX/2?00:B.O.S.:*:*)
	echo m68k-bull-sysv3
	exit ;;
    9000/[34]??:4.3bsd:1.*:*)
	echo m68k-hp-bsd
	exit ;;
    hp300:4.4BSD:*:* | 9000/[34]??:4.3bsd:2.*:*)
	echo m68k-hp-bsd4.4
	exit ;;
    9000/[34678]??:HP-UX:*:*)
	HPUX_REV=`echo ${UNAME_RELEASE}|sed -e 's/[^.]*.[0B]*//'`
	case "${UNAME_MACHINE}" in
	    9000/31? )            HP_ARCH=m68000 ;;
	    9000/[34]?? )         HP_ARCH=m68k ;;
	    9000/[678][0-9][0-9])
		if [ -x /usr/bin/getconf ]; then
		    sc_cpu_version=`/usr/bin/getconf SC_CPU_VERSION 2>/dev/null`
		    sc_kernel_bits=`/usr/bin/getconf SC_KERNEL_BITS 2>/dev/null`
		    case "${sc_cpu_version}" in
		      523) HP_ARCH="hppa1.0" ;; # CPU_PA_RISC1_0
		      528) HP_ARCH="hppa1.1" ;; # CPU_PA_RISC1_1
		      532)                      # CPU_PA_RISC2_0
			case "${sc_kernel_bits}" in
			  32) HP_ARCH="hppa2.0n" ;;
			  64) HP_ARCH="hppa2.0w" ;;
			  '') HP_ARCH="hppa2.0" ;;   # HP-UX 10.20
			esac ;;
		    esac
		fi
		if [ "${HP_ARCH}" = "" ]; then
		    eval $set_cc_for_build
		    sed 's/^ //' << EOF >$dummy.c

#define _HPUX_SOURCE
#include <stdlib.h>
#include <unistd.h>

int main ()
{
#if defined(_SC_KERNEL_BITS)
    long bits = sysconf(_SC_KERNEL_BITS);
#endif
    long cpu  = sysconf (_SC_CPU_VERSION);

    switch (cpu)
	{
	case CPU_PA_RISC1_0: puts ("hppa1.0"); break;
	case CPU_PA_RISC1_1: puts ("hppa1.1"); break;
	case CPU_PA_RISC2_0:
#if defined(_SC_KERNEL_BITS)
	    switch (bits)
		{
		case 64: puts ("hppa2.0w"); break;
		case 32: puts ("hppa2.0n"); break;
		default: puts ("hppa2.0"); break;
		} break;
#else  /* !defined(_SC_KERNEL_BITS) */
	    puts ("hppa2.0"); break;
#endif
	default: puts ("hppa1.0"); break;
	}
    exit (0);
}
EOF
		    (CCOPTS= $CC_FOR_BUILD -o $dummy $dummy.c 2>/dev/null) && HP_ARCH=`$dummy`
		    test -z "$HP_ARCH" && HP_ARCH=hppa
		fi ;;
	esac
	if [ ${HP_ARCH} = "hppa2.0w" ]
	then
	    eval $set_cc_for_build

	    # hppa2.0w-hp-hpux* has a 64-bit kernel and a compiler generating
	    # 32-bit code.  hppa64-hp-hpux* has the same kernel and a compiler
	    # generating 64-bit code.  GNU and HP use different nomenclature:
	    #
	    # $ CC_FOR_BUILD=cc ./config.guess
	    #   => hppa2.0w-hp-hpux11.23
	    # $ CC_FOR_BUILD="cc +DA2.0w" ./config.guess
	    #   => hppa64-hp-hpux11.23

	    if echo __LP64__ | (CCOPTS= $CC_FOR_BUILD -E - 2>/dev/null) |
		grep -q __LP64__
	    then
		HP_ARCH="hppa2.0w"
	    else
		HP_ARCH="hppa64"
	    fi
	fi
	echo ${HP_ARCH}-hp-hpux${HPUX_REV}
	exit ;;
    ia64:HP-UX:*:*)
	HPUX_REV=`echo ${UNAME_RELEASE}|sed -e 's/[^.]*.[0B]*//'`
	echo ia64-hp-hpux${HPUX_REV}
	exit ;;
    3050*:HI-UX:*:*)
	eval $set_cc_for_build
	sed 's/^ //' << EOF >$dummy.c
#include <unistd.h>
int
main ()
{
  long cpu = sysconf (_SC_CPU_VERSION);
  /* The order matters, because CPU_IS_HP_MC68K erroneously returns
     true for CPU_PA_RISC1_0.  CPU_IS_PA_RISC returns correct
     results, however.  */
  if (CPU_IS_PA_RISC (cpu))
    {
      switch (cpu)
	{
	  case CPU_PA_RISC1_0: puts ("hppa1.0-hitachi-hiuxwe2"); break;
	  case CPU_PA_RISC1_1: puts ("hppa1.1-hitachi-hiuxwe2"); break;
	  case CPU_PA_RISC2_0: puts ("hppa2.0-hitachi-hiuxwe2"); break;
	  default: puts ("hppa-hitachi-hiuxwe2"); break;
	}
    }
  else if (CPU_IS_HP_MC68K (cpu))
    puts ("m68k-hitachi-hiuxwe2");
  else puts ("unknown-hitachi-hiuxwe2");
  exit (0);
}
EOF
	$CC_FOR_BUILD -o $dummy $dummy.c && SYSTEM_NAME=`$dummy` &&
		{ echo "$SYSTEM_NAME"; exit; }
	echo unknown-hitachi-hiuxwe2
	exit ;;
    9000/7??:4.3bsd:*:* | 9000/8?[79]:4.3bsd:*:* )
	echo hppa1.1-hp-bsd
	exit ;;
    9000/8??:4.3bsd:*:*)
	echo hppa1.0-hp-bsd
	exit ;;
    *9??*:MPE/iX:*:* | *3000*:MPE/iX:*:*)
	echo hppa1.0-hp-mpeix
	exit ;;
    hp7??:OSF1:*:* | hp8?[79]:OSF1:*:* )
	echo hppa1.1-hp-osf
	exit ;;
    hp8??:OSF1:*:*)
	echo hppa1.0-hp-osf
	exit ;;
    i*86:OSF1:*:*)
	if [ -x /usr/sbin/sysversion ] ; then
	    echo ${UNAME_MACHINE}-unknown-osf1mk
	else
	    echo ${UNAME_MACHINE}-unknown-osf1
	fi
	exit ;;
    parisc*:Lites*:*:*)
	echo hppa1.1-hp-lites
	exit ;;
    C1*:ConvexOS:*:* | convex:ConvexOS:C1*:*)
	echo c1-convex-bsd
	exit ;;
    C2*:ConvexOS:*:* | convex:ConvexOS:C2*:*)
	if getsysinfo -f scalar_acc
	then echo c32-convex-bsd
	else echo c2-convex-bsd
	fi
	exit ;;
    C34*:ConvexOS:*:* | convex:ConvexOS:C34*:*)
	echo c34-convex-bsd
	exit ;;
    C38*:ConvexOS:*:* | convex:ConvexOS:C38*:*)
	echo c38-convex-bsd
	exit ;;
    C4*:ConvexOS:*:* | convex:ConvexOS:C4*:*)
	echo c4-convex-bsd
	exit ;;
    CRAY*Y-MP:*:*:*)
	echo ymp-cray-unicos${UNAME_RELEASE} | sed -e 's/\.[^.]*$/.X/'
	exit ;;
    CRAY*[A-Z]90:*:*:*)
	echo ${UNAME_MACHINE}-cray-unicos${UNAME_RELEASE} \
	| sed -e 's/CRAY.*\([A-Z]90\)/\1/' \
	      -e y/ABCDEFGHIJKLMNOPQRSTUVWXYZ/abcdefghijklmnopqrstuvwxyz/ \
	      -e 's/\.[^.]*$/.X/'
	exit ;;
    CRAY*TS:*:*:*)
	echo t90-cray-unicos${UNAME_RELEASE} | sed -e 's/\.[^.]*$/.X/'
	exit ;;
    CRAY*T3E:*:*:*)
	echo alphaev5-cray-unicosmk${UNAME_RELEASE} | sed -e 's/\.[^.]*$/.X/'
	exit ;;
    CRAY*SV1:*:*:*)
	echo sv1-cray-unicos${UNAME_RELEASE} | sed -e 's/\.[^.]*$/.X/'
	exit ;;
    *:UNICOS/mp:*:*)
	echo craynv-cray-unicosmp${UNAME_RELEASE} | sed -e 's/\.[^.]*$/.X/'
	exit ;;
    F30[01]:UNIX_System_V:*:* | F700:UNIX_System_V:*:*)
	FUJITSU_PROC=`uname -m | tr 'ABCDEFGHIJKLMNOPQRSTUVWXYZ' 'abcdefghijklmnopqrstuvwxyz'`
	FUJITSU_SYS=`uname -p | tr 'ABCDEFGHIJKLMNOPQRSTUVWXYZ' 'abcdefghijklmnopqrstuvwxyz' | sed -e 's/\///'`
	FUJITSU_REL=`echo ${UNAME_RELEASE} | sed -e 's/ /_/'`
	echo "${FUJITSU_PROC}-fujitsu-${FUJITSU_SYS}${FUJITSU_REL}"
	exit ;;
    5000:UNIX_System_V:4.*:*)
	FUJITSU_SYS=`uname -p | tr 'ABCDEFGHIJKLMNOPQRSTUVWXYZ' 'abcdefghijklmnopqrstuvwxyz' | sed -e 's/\///'`
	FUJITSU_REL=`echo ${UNAME_RELEASE} | tr 'ABCDEFGHIJKLMNOPQRSTUVWXYZ' 'abcdefghijklmnopqrstuvwxyz' | sed -e 's/ /_/'`
	echo "sparc-fujitsu-${FUJITSU_SYS}${FUJITSU_REL}"
	exit ;;
    i*86:BSD/386:*:* | i*86:BSD/OS:*:* | *:Ascend\ Embedded/OS:*:*)
	echo ${UNAME_MACHINE}-pc-bsdi${UNAME_RELEASE}
	exit ;;
    sparc*:BSD/OS:*:*)
	echo sparc-unknown-bsdi${UNAME_RELEASE}
	exit ;;
    *:BSD/OS:*:*)
	echo ${UNAME_MACHINE}-unknown-bsdi${UNAME_RELEASE}
	exit ;;
    *:FreeBSD:*:*)
	UNAME_PROCESSOR=`/usr/bin/uname -p`
	case ${UNAME_PROCESSOR} in
	    amd64)
		echo x86_64-unknown-freebsd`echo ${UNAME_RELEASE}|sed -e 's/[-(].*//'` ;;
	    *)
		echo ${UNAME_PROCESSOR}-unknown-freebsd`echo ${UNAME_RELEASE}|sed -e 's/[-(].*//'` ;;
	esac
	exit ;;
    i*:CYGWIN*:*)
	echo ${UNAME_MACHINE}-pc-cygwin
	exit ;;
    *:MINGW64*:*)
	echo ${UNAME_MACHINE}-pc-mingw64
	exit ;;
    *:MINGW*:*)
	echo ${UNAME_MACHINE}-pc-mingw32
	exit ;;
    *:MSYS*:*)
	echo ${UNAME_MACHINE}-pc-msys
	exit ;;
    i*:windows32*:*)
	# uname -m includes "-pc" on this system.
	echo ${UNAME_MACHINE}-mingw32
	exit ;;
    i*:PW*:*)
	echo ${UNAME_MACHINE}-pc-pw32
	exit ;;
    *:Interix*:*)
	case ${UNAME_MACHINE} in
	    x86)
		echo i586-pc-interix${UNAME_RELEASE}
		exit ;;
	    authenticamd | genuineintel | EM64T)
		echo x86_64-unknown-interix${UNAME_RELEASE}
		exit ;;
	    IA64)
		echo ia64-unknown-interix${UNAME_RELEASE}
		exit ;;
	esac ;;
    [345]86:Windows_95:* | [345]86:Windows_98:* | [345]86:Windows_NT:*)
	echo i${UNAME_MACHINE}-pc-mks
	exit ;;
    8664:Windows_NT:*)
	echo x86_64-pc-mks
	exit ;;
    i*:Windows_NT*:* | Pentium*:Windows_NT*:*)
	# How do we know it's Interix rather than the generic POSIX subsystem?
	# It also conflicts with pre-2.0 versions of AT&T UWIN. Should we
	# UNAME_MACHINE based on the output of uname instead of i386?
	echo i586-pc-interix
	exit ;;
    i*:UWIN*:*)
	echo ${UNAME_MACHINE}-pc-uwin
	exit ;;
    amd64:CYGWIN*:*:* | x86_64:CYGWIN*:*:*)
	echo x86_64-unknown-cygwin
	exit ;;
    p*:CYGWIN*:*)
	echo powerpcle-unknown-cygwin
	exit ;;
    prep*:SunOS:5.*:*)
	echo powerpcle-unknown-solaris2`echo ${UNAME_RELEASE}|sed -e 's/[^.]*//'`
	exit ;;
    *:GNU:*:*)
	# the GNU system
	echo `echo ${UNAME_MACHINE}|sed -e 's,[-/].*$,,'`-unknown-${LIBC}`echo ${UNAME_RELEASE}|sed -e 's,/.*$,,'`
	exit ;;
    *:GNU/*:*:*)
	# other systems with GNU libc and userland
	echo ${UNAME_MACHINE}-unknown-`echo ${UNAME_SYSTEM} | sed 's,^[^/]*/,,' | tr '[A-Z]' '[a-z]'``echo ${UNAME_RELEASE}|sed -e 's/[-(].*//'`-${LIBC}
	exit ;;
    i*86:Minix:*:*)
	echo ${UNAME_MACHINE}-pc-minix
	exit ;;
    aarch64:Linux:*:*)
	echo ${UNAME_MACHINE}-unknown-linux-${LIBC}
	exit ;;
    aarch64_be:Linux:*:*)
	UNAME_MACHINE=aarch64_be
	echo ${UNAME_MACHINE}-unknown-linux-${LIBC}
	exit ;;
    alpha:Linux:*:*)
	case `sed -n '/^cpu model/s/^.*: \(.*\)/\1/p' < /proc/cpuinfo` in
	  EV5)   UNAME_MACHINE=alphaev5 ;;
	  EV56)
	  UNAME_MACHINE=alphaev56 ;;
	  PCA56) UNAME_MACHINE=alphapca56 ;;
	  PCA57) UNAME_MACHINE=alphapca56 ;;
	  EV6)   UNAME_MACHINE=alphaev6 ;;
	  EV67)  UNAME_MACHINE=alphaev67 ;;
	  EV68*) UNAME_MACHINE=alphaev68 ;;
	esac
	objdump --private-headers /bin/sh | grep -q ld.so.1
	if test "$?" = 0 ; then LIBC="gnulibc1" ; fi
	echo ${UNAME_MACHINE}-unknown-linux-${LIBC}
	exit ;;
    arc:Linux:*:* | arceb:Linux:*:*)
	echo ${UNAME_MACHINE}-unknown-linux-${LIBC}
	exit ;;
    arm*:Linux:*:*)
	eval $set_cc_for_build
	if echo __ARM_EABI__ | $CC_FOR_BUILD -E - 2>/dev/null \
	    | grep -q __ARM_EABI__
	then
	    echo ${UNAME_MACHINE}-unknown-linux-${LIBC}
	else
	    if echo __ARM_PCS_VFP | $CC_FOR_BUILD -E - 2>/dev/null \
		| grep -q __ARM_PCS_VFP
	    then
		echo ${UNAME_MACHINE}-unknown-linux-${LIBC}eabi
	    else
		echo ${UNAME_MACHINE}-unknown-linux-${LIBC}eabihf
	    fi
	fi
	exit ;;
    avr32*:Linux:*:*)
	echo ${UNAME_MACHINE}-unknown-linux-${LIBC}
	exit ;;
    cris:Linux:*:*)
	echo ${UNAME_MACHINE}-axis-linux-${LIBC}
	exit ;;
    crisv32:Linux:*:*)
	echo ${UNAME_MACHINE}-axis-linux-${LIBC}
	exit ;;
    frv:Linux:*:*)
	echo ${UNAME_MACHINE}-unknown-linux-${LIBC}
	exit ;;
    hexagon:Linux:*:*)
	echo ${UNAME_MACHINE}-unknown-linux-${LIBC}
	exit ;;
    i*86:Linux:*:*)
	echo ${UNAME_MACHINE}-pc-linux-${LIBC}
	exit ;;
    ia64:Linux:*:*)
	echo ${UNAME_MACHINE}-unknown-linux-${LIBC}
	exit ;;
    m32r*:Linux:*:*)
	echo ${UNAME_MACHINE}-unknown-linux-${LIBC}
	exit ;;
    m68*:Linux:*:*)
	echo ${UNAME_MACHINE}-unknown-linux-${LIBC}
	exit ;;
    mips:Linux:*:* | mips64:Linux:*:*)
	eval $set_cc_for_build
	sed 's/^ //' << EOF >$dummy.c
#undef CPU
#undef ${UNAME_MACHINE}
#undef ${UNAME_MACHINE}el
#if defined(__MIPSEL__) || defined(__MIPSEL) || defined(_MIPSEL) || defined(MIPSEL)
CPU=${UNAME_MACHINE}el
#else
#if defined(__MIPSEB__) || defined(__MIPSEB) || defined(_MIPSEB) || defined(MIPSEB)
CPU=${UNAME_MACHINE}
#else
CPU=
#endif
#endif
EOF
	eval `$CC_FOR_BUILD -E $dummy.c 2>/dev/null | grep '^CPU'`
	test x"${CPU}" != x && { echo "${CPU}-unknown-linux-${LIBC}"; exit; }
	;;
    openrisc*:Linux:*:*)
	echo or1k-unknown-linux-${LIBC}
	exit ;;
    or32:Linux:*:* | or1k*:Linux:*:*)
	echo ${UNAME_MACHINE}-unknown-linux-${LIBC}
	exit ;;
    padre:Linux:*:*)
	echo sparc-unknown-linux-${LIBC}
	exit ;;
    parisc64:Linux:*:* | hppa64:Linux:*:*)
	echo hppa64-unknown-linux-${LIBC}
	exit ;;
    parisc:Linux:*:* | hppa:Linux:*:*)
	# Look for CPU level
	case `grep '^cpu[^a-z]*:' /proc/cpuinfo 2>/dev/null | cut -d' ' -f2` in
	  PA7*) echo hppa1.1-unknown-linux-${LIBC} ;;
	  PA8*) echo hppa2.0-unknown-linux-${LIBC} ;;
	  *)    echo hppa-unknown-linux-${LIBC} ;;
	esac
	exit ;;
    ppc64:Linux:*:*)
	echo powerpc64-unknown-linux-${LIBC}
	exit ;;
    ppc:Linux:*:*)
	echo powerpc-unknown-linux-${LIBC}
	exit ;;
    ppc64le:Linux:*:*)
	echo powerpc64le-unknown-linux-${LIBC}
	exit ;;
    ppcle:Linux:*:*)
	echo powerpcle-unknown-linux-${LIBC}
	exit ;;
    s390:Linux:*:* | s390x:Linux:*:*)
	echo ${UNAME_MACHINE}-ibm-linux-${LIBC}
	exit ;;
    sh64*:Linux:*:*)
	echo ${UNAME_MACHINE}-unknown-linux-${LIBC}
	exit ;;
    sh*:Linux:*:*)
	echo ${UNAME_MACHINE}-unknown-linux-${LIBC}
	exit ;;
    sparc:Linux:*:* | sparc64:Linux:*:*)
	echo ${UNAME_MACHINE}-unknown-linux-${LIBC}
	exit ;;
    tile*:Linux:*:*)
	echo ${UNAME_MACHINE}-unknown-linux-${LIBC}
	exit ;;
    vax:Linux:*:*)
	echo ${UNAME_MACHINE}-dec-linux-${LIBC}
	exit ;;
    x86_64:Linux:*:*)
	echo ${UNAME_MACHINE}-unknown-linux-${LIBC}
	exit ;;
    xtensa*:Linux:*:*)
	echo ${UNAME_MACHINE}-unknown-linux-${LIBC}
	exit ;;
    i*86:DYNIX/ptx:4*:*)
	# ptx 4.0 does uname -s correctly, with DYNIX/ptx in there.
	# earlier versions are messed up and put the nodename in both
	# sysname and nodename.
	echo i386-sequent-sysv4
	exit ;;
    i*86:UNIX_SV:4.2MP:2.*)
	# Unixware is an offshoot of SVR4, but it has its own version
	# number series starting with 2...
	# I am not positive that other SVR4 systems won't match this,
	# I just have to hope.  -- rms.
	# Use sysv4.2uw... so that sysv4* matches it.
	echo ${UNAME_MACHINE}-pc-sysv4.2uw${UNAME_VERSION}
	exit ;;
    i*86:OS/2:*:*)
	# If we were able to find `uname', then EMX Unix compatibility
	# is probably installed.
	echo ${UNAME_MACHINE}-pc-os2-emx
	exit ;;
    i*86:XTS-300:*:STOP)
	echo ${UNAME_MACHINE}-unknown-stop
	exit ;;
    i*86:atheos:*:*)
	echo ${UNAME_MACHINE}-unknown-atheos
	exit ;;
    i*86:syllable:*:*)
	echo ${UNAME_MACHINE}-pc-syllable
	exit ;;
    i*86:LynxOS:2.*:* | i*86:LynxOS:3.[01]*:* | i*86:LynxOS:4.[02]*:*)
	echo i386-unknown-lynxos${UNAME_RELEASE}
	exit ;;
    i*86:*DOS:*:*)
	echo ${UNAME_MACHINE}-pc-msdosdjgpp
	exit ;;
    i*86:*:4.*:* | i*86:SYSTEM_V:4.*:*)
	UNAME_REL=`echo ${UNAME_RELEASE} | sed 's/\/MP$//'`
	if grep Novell /usr/include/link.h >/dev/null 2>/dev/null; then
		echo ${UNAME_MACHINE}-univel-sysv${UNAME_REL}
	else
		echo ${UNAME_MACHINE}-pc-sysv${UNAME_REL}
	fi
	exit ;;
    i*86:*:5:[678]*)
	# UnixWare 7.x, OpenUNIX and OpenServer 6.
	case `/bin/uname -X | grep "^Machine"` in
	    *486*)	     UNAME_MACHINE=i486 ;;
	    *Pentium)	     UNAME_MACHINE=i586 ;;
	    *Pent*|*Celeron) UNAME_MACHINE=i686 ;;
	esac
	echo ${UNAME_MACHINE}-unknown-sysv${UNAME_RELEASE}${UNAME_SYSTEM}${UNAME_VERSION}
	exit ;;
    i*86:*:3.2:*)
	if test -f /usr/options/cb.name; then
		UNAME_REL=`sed -n 's/.*Version //p' </usr/options/cb.name`
		echo ${UNAME_MACHINE}-pc-isc$UNAME_REL
	elif /bin/uname -X 2>/dev/null >/dev/null ; then
		UNAME_REL=`(/bin/uname -X|grep Release|sed -e 's/.*= //')`
		(/bin/uname -X|grep i80486 >/dev/null) && UNAME_MACHINE=i486
		(/bin/uname -X|grep '^Machine.*Pentium' >/dev/null) \
			&& UNAME_MACHINE=i586
		(/bin/uname -X|grep '^Machine.*Pent *II' >/dev/null) \
			&& UNAME_MACHINE=i686
		(/bin/uname -X|grep '^Machine.*Pentium Pro' >/dev/null) \
			&& UNAME_MACHINE=i686
		echo ${UNAME_MACHINE}-pc-sco$UNAME_REL
	else
		echo ${UNAME_MACHINE}-pc-sysv32
	fi
	exit ;;
    pc:*:*:*)
	# Left here for compatibility:
	# uname -m prints for DJGPP always 'pc', but it prints nothing about
	# the processor, so we play safe by assuming i586.
	# Note: whatever this is, it MUST be the same as what config.sub
	# prints for the "djgpp" host, or else GDB configury will decide that
	# this is a cross-build.
	echo i586-pc-msdosdjgpp
	exit ;;
    Intel:Mach:3*:*)
	echo i386-pc-mach3
	exit ;;
    paragon:*:*:*)
	echo i860-intel-osf1
	exit ;;
    i860:*:4.*:*) # i860-SVR4
	if grep Stardent /usr/include/sys/uadmin.h >/dev/null 2>&1 ; then
	  echo i860-stardent-sysv${UNAME_RELEASE} # Stardent Vistra i860-SVR4
	else # Add other i860-SVR4 vendors below as they are discovered.
	  echo i860-unknown-sysv${UNAME_RELEASE}  # Unknown i860-SVR4
	fi
	exit ;;
    mini*:CTIX:SYS*5:*)
	# "miniframe"
	echo m68010-convergent-sysv
	exit ;;
    mc68k:UNIX:SYSTEM5:3.51m)
	echo m68k-convergent-sysv
	exit ;;
    M680?0:D-NIX:5.3:*)
	echo m68k-diab-dnix
	exit ;;
    M68*:*:R3V[5678]*:*)
	test -r /sysV68 && { echo 'm68k-motorola-sysv'; exit; } ;;
    3[345]??:*:4.0:3.0 | 3[34]??A:*:4.0:3.0 | 3[34]??,*:*:4.0:3.0 | 3[34]??/*:*:4.0:3.0 | 4400:*:4.0:3.0 | 4850:*:4.0:3.0 | SKA40:*:4.0:3.0 | SDS2:*:4.0:3.0 | SHG2:*:4.0:3.0 | S7501*:*:4.0:3.0)
	OS_REL=''
	test -r /etc/.relid \
	&& OS_REL=.`sed -n 's/[^ ]* [^ ]* \([0-9][0-9]\).*/\1/p' < /etc/.relid`
	/bin/uname -p 2>/dev/null | grep 86 >/dev/null \
	  && { echo i486-ncr-sysv4.3${OS_REL}; exit; }
	/bin/uname -p 2>/dev/null | /bin/grep entium >/dev/null \
	  && { echo i586-ncr-sysv4.3${OS_REL}; exit; } ;;
    3[34]??:*:4.0:* | 3[34]??,*:*:4.0:*)
	/bin/uname -p 2>/dev/null | grep 86 >/dev/null \
	  && { echo i486-ncr-sysv4; exit; } ;;
    NCR*:*:4.2:* | MPRAS*:*:4.2:*)
	OS_REL='.3'
	test -r /etc/.relid \
	    && OS_REL=.`sed -n 's/[^ ]* [^ ]* \([0-9][0-9]\).*/\1/p' < /etc/.relid`
	/bin/uname -p 2>/dev/null | grep 86 >/dev/null \
	    && { echo i486-ncr-sysv4.3${OS_REL}; exit; }
	/bin/uname -p 2>/dev/null | /bin/grep entium >/dev/null \
	    && { echo i586-ncr-sysv4.3${OS_REL}; exit; }
	/bin/uname -p 2>/dev/null | /bin/grep pteron >/dev/null \
	    && { echo i586-ncr-sysv4.3${OS_REL}; exit; } ;;
    m68*:LynxOS:2.*:* | m68*:LynxOS:3.0*:*)
	echo m68k-unknown-lynxos${UNAME_RELEASE}
	exit ;;
    mc68030:UNIX_System_V:4.*:*)
	echo m68k-atari-sysv4
	exit ;;
    TSUNAMI:LynxOS:2.*:*)
	echo sparc-unknown-lynxos${UNAME_RELEASE}
	exit ;;
    rs6000:LynxOS:2.*:*)
	echo rs6000-unknown-lynxos${UNAME_RELEASE}
	exit ;;
    PowerPC:LynxOS:2.*:* | PowerPC:LynxOS:3.[01]*:* | PowerPC:LynxOS:4.[02]*:*)
	echo powerpc-unknown-lynxos${UNAME_RELEASE}
	exit ;;
    SM[BE]S:UNIX_SV:*:*)
	echo mips-dde-sysv${UNAME_RELEASE}
	exit ;;
    RM*:ReliantUNIX-*:*:*)
	echo mips-sni-sysv4
	exit ;;
    RM*:SINIX-*:*:*)
	echo mips-sni-sysv4
	exit ;;
    *:SINIX-*:*:*)
	if uname -p 2>/dev/null >/dev/null ; then
		UNAME_MACHINE=`(uname -p) 2>/dev/null`
		echo ${UNAME_MACHINE}-sni-sysv4
	else
		echo ns32k-sni-sysv
	fi
	exit ;;
    PENTIUM:*:4.0*:*)	# Unisys `ClearPath HMP IX 4000' SVR4/MP effort
			# says
	echo i586-unisys-sysv4
	exit ;;
    *:UNIX_System_V:4*:FTX*)
	# From Gerald Hewes .
	# How about differentiating between stratus architectures? -djm
	echo hppa1.1-stratus-sysv4
	exit ;;
    *:*:*:FTX*)
	# From seanf@swdc.stratus.com.
	echo i860-stratus-sysv4
	exit ;;
    i*86:VOS:*:*)
	# From Paul.Green@stratus.com.
	echo ${UNAME_MACHINE}-stratus-vos
	exit ;;
    *:VOS:*:*)
	# From Paul.Green@stratus.com.
	echo hppa1.1-stratus-vos
	exit ;;
    mc68*:A/UX:*:*)
	echo m68k-apple-aux${UNAME_RELEASE}
	exit ;;
    news*:NEWS-OS:6*:*)
	echo mips-sony-newsos6
	exit ;;
    R[34]000:*System_V*:*:* | R4000:UNIX_SYSV:*:* | R*000:UNIX_SV:*:*)
	if [ -d /usr/nec ]; then
		echo mips-nec-sysv${UNAME_RELEASE}
	else
		echo mips-unknown-sysv${UNAME_RELEASE}
	fi
	exit ;;
    BeBox:BeOS:*:*)	# BeOS running on hardware made by Be, PPC only.
	echo powerpc-be-beos
	exit ;;
    BeMac:BeOS:*:*)	# BeOS running on Mac or Mac clone, PPC only.
	echo powerpc-apple-beos
	exit ;;
    BePC:BeOS:*:*)	# BeOS running on Intel PC compatible.
	echo i586-pc-beos
	exit ;;
    BePC:Haiku:*:*)	# Haiku running on Intel PC compatible.
	echo i586-pc-haiku
	exit ;;
    x86_64:Haiku:*:*)
	echo x86_64-unknown-haiku
	exit ;;
    SX-4:SUPER-UX:*:*)
	echo sx4-nec-superux${UNAME_RELEASE}
	exit ;;
    SX-5:SUPER-UX:*:*)
	echo sx5-nec-superux${UNAME_RELEASE}
	exit ;;
    SX-6:SUPER-UX:*:*)
	echo sx6-nec-superux${UNAME_RELEASE}
	exit ;;
    SX-7:SUPER-UX:*:*)
	echo sx7-nec-superux${UNAME_RELEASE}
	exit ;;
    SX-8:SUPER-UX:*:*)
	echo sx8-nec-superux${UNAME_RELEASE}
	exit ;;
    SX-8R:SUPER-UX:*:*)
	echo sx8r-nec-superux${UNAME_RELEASE}
	exit ;;
    Power*:Rhapsody:*:*)
	echo powerpc-apple-rhapsody${UNAME_RELEASE}
	exit ;;
    *:Rhapsody:*:*)
	echo ${UNAME_MACHINE}-apple-rhapsody${UNAME_RELEASE}
	exit ;;
    *:Darwin:*:*)
	UNAME_PROCESSOR=`uname -p` || UNAME_PROCESSOR=unknown
	eval $set_cc_for_build
	if test "$UNAME_PROCESSOR" = unknown ; then
	    UNAME_PROCESSOR=powerpc
	fi
	if test `echo "$UNAME_RELEASE" | sed -e 's/\..*//'` -le 10 ; then
	    if [ "$CC_FOR_BUILD" != 'no_compiler_found' ]; then
		if (echo '#ifdef __LP64__'; echo IS_64BIT_ARCH; echo '#endif') | \
		    (CCOPTS= $CC_FOR_BUILD -E - 2>/dev/null) | \
		    grep IS_64BIT_ARCH >/dev/null
		then
		    case $UNAME_PROCESSOR in
			i386) UNAME_PROCESSOR=x86_64 ;;
			powerpc) UNAME_PROCESSOR=powerpc64 ;;
		    esac
		fi
	    fi
	elif test "$UNAME_PROCESSOR" = i386 ; then
	    # Avoid executing cc on OS X 10.9, as it ships with a stub
	    # that puts up a graphical alert prompting to install
	    # developer tools.  Any system running Mac OS X 10.7 or
	    # later (Darwin 11 and later) is required to have a 64-bit
	    # processor. This is not true of the ARM version of Darwin
	    # that Apple uses in portable devices.
	    UNAME_PROCESSOR=x86_64
	fi
	echo ${UNAME_PROCESSOR}-apple-darwin${UNAME_RELEASE}
	exit ;;
    *:procnto*:*:* | *:QNX:[0123456789]*:*)
	UNAME_PROCESSOR=`uname -p`
	if test "$UNAME_PROCESSOR" = "x86"; then
		UNAME_PROCESSOR=i386
		UNAME_MACHINE=pc
	fi
	echo ${UNAME_PROCESSOR}-${UNAME_MACHINE}-nto-qnx${UNAME_RELEASE}
	exit ;;
    *:QNX:*:4*)
	echo i386-pc-qnx
	exit ;;
    NEO-?:NONSTOP_KERNEL:*:*)
	echo neo-tandem-nsk${UNAME_RELEASE}
	exit ;;
    NSE-*:NONSTOP_KERNEL:*:*)
	echo nse-tandem-nsk${UNAME_RELEASE}
	exit ;;
    NSR-?:NONSTOP_KERNEL:*:*)
	echo nsr-tandem-nsk${UNAME_RELEASE}
	exit ;;
    *:NonStop-UX:*:*)
	echo mips-compaq-nonstopux
	exit ;;
    BS2000:POSIX*:*:*)
	echo bs2000-siemens-sysv
	exit ;;
    DS/*:UNIX_System_V:*:*)
	echo ${UNAME_MACHINE}-${UNAME_SYSTEM}-${UNAME_RELEASE}
	exit ;;
    *:Plan9:*:*)
	# "uname -m" is not consistent, so use $cputype instead. 386
	# is converted to i386 for consistency with other x86
	# operating systems.
	if test "$cputype" = "386"; then
	    UNAME_MACHINE=i386
	else
	    UNAME_MACHINE="$cputype"
	fi
	echo ${UNAME_MACHINE}-unknown-plan9
	exit ;;
    *:TOPS-10:*:*)
	echo pdp10-unknown-tops10
	exit ;;
    *:TENEX:*:*)
	echo pdp10-unknown-tenex
	exit ;;
    KS10:TOPS-20:*:* | KL10:TOPS-20:*:* | TYPE4:TOPS-20:*:*)
	echo pdp10-dec-tops20
	exit ;;
    XKL-1:TOPS-20:*:* | TYPE5:TOPS-20:*:*)
	echo pdp10-xkl-tops20
	exit ;;
    *:TOPS-20:*:*)
	echo pdp10-unknown-tops20
	exit ;;
    *:ITS:*:*)
	echo pdp10-unknown-its
	exit ;;
    SEI:*:*:SEIUX)
	echo mips-sei-seiux${UNAME_RELEASE}
	exit ;;
    *:DragonFly:*:*)
	echo ${UNAME_MACHINE}-unknown-dragonfly`echo ${UNAME_RELEASE}|sed -e 's/[-(].*//'`
	exit ;;
    *:*VMS:*:*)
	UNAME_MACHINE=`(uname -p) 2>/dev/null`
	case "${UNAME_MACHINE}" in
	    A*) echo alpha-dec-vms ; exit ;;
	    I*) echo ia64-dec-vms ; exit ;;
	    V*) echo vax-dec-vms ; exit ;;
	esac ;;
    *:XENIX:*:SysV)
	echo i386-pc-xenix
	exit ;;
    i*86:skyos:*:*)
	echo ${UNAME_MACHINE}-pc-skyos`echo ${UNAME_RELEASE}` | sed -e 's/ .*$//'
	exit ;;
    i*86:rdos:*:*)
	echo ${UNAME_MACHINE}-pc-rdos
	exit ;;
    i*86:AROS:*:*)
	echo ${UNAME_MACHINE}-pc-aros
	exit ;;
    x86_64:VMkernel:*:*)
	echo ${UNAME_MACHINE}-unknown-esx
	exit ;;
esac

cat >&2 <<EOF
$0: unable to guess system type

This script, last modified $timestamp, has failed to recognize
the operating system you are using. It is advised that you
download the most up to date version of the config scripts from

  http://git.savannah.gnu.org/gitweb/?p=config.git;a=blob_plain;f=config.guess;hb=HEAD
and
  http://git.savannah.gnu.org/gitweb/?p=config.git;a=blob_plain;f=config.sub;hb=HEAD

If the version you run ($0) is already up to date, please
send the following data and any information you think might be
pertinent to <config-patches@gnu.org> in order to provide the needed
information to handle your system.

config.guess timestamp = $timestamp

uname -m = `(uname -m) 2>/dev/null || echo unknown`
uname -r = `(uname -r) 2>/dev/null || echo unknown`
uname -s = `(uname -s) 2>/dev/null || echo unknown`
uname -v = `(uname -v) 2>/dev/null || echo unknown`

/usr/bin/uname -p = `(/usr/bin/uname -p) 2>/dev/null`
/bin/uname -X     = `(/bin/uname -X) 2>/dev/null`

hostinfo               = `(hostinfo) 2>/dev/null`
/bin/universe          = `(/bin/universe) 2>/dev/null`
/usr/bin/arch -k       = `(/usr/bin/arch -k) 2>/dev/null`
/bin/arch              = `(/bin/arch) 2>/dev/null`
/usr/bin/oslevel       = `(/usr/bin/oslevel) 2>/dev/null`
/usr/convex/getsysinfo = `(/usr/convex/getsysinfo) 2>/dev/null`

UNAME_MACHINE = ${UNAME_MACHINE}
UNAME_RELEASE = ${UNAME_RELEASE}
UNAME_SYSTEM  = ${UNAME_SYSTEM}
UNAME_VERSION = ${UNAME_VERSION}
EOF

exit 1

# Local variables:
# eval: (add-hook 'write-file-hooks 'time-stamp)
# time-stamp-start: "timestamp='"
# time-stamp-format: "%:y-%02m-%02d"
# time-stamp-end: "'"
# End:

#! /bin/sh
# Configuration validation subroutine script.
#   Copyright 1992-2014 Free Software Foundation, Inc.

timestamp='2014-09-11'

# This file is free software; you can redistribute it and/or modify it
# under the terms of the GNU General Public License as published by
# the Free Software Foundation; either version 3 of the License, or
# (at your option) any later version.
#
# This program is distributed in the hope that it will be useful, but
# WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the GNU
# General Public License for more details.
#
# You should have received a copy of the GNU General Public License
# along with this program; if not, see <http://www.gnu.org/licenses/>.
# # As a special exception to the GNU General Public License, if you # distribute this file as part of a program that contains a # configuration script generated by Autoconf, you may include it under # the same distribution terms that you use for the rest of that # program. This Exception is an additional permission under section 7 # of the GNU General Public License, version 3 ("GPLv3"). # Please send patches with a ChangeLog entry to config-patches@gnu.org. # # Configuration subroutine to validate and canonicalize a configuration type. # Supply the specified configuration type as an argument. # If it is invalid, we print an error message on stderr and exit with code 1. # Otherwise, we print the canonical config type on stdout and succeed. # You can get the latest version of this script from: # http://git.savannah.gnu.org/gitweb/?p=config.git;a=blob_plain;f=config.sub;hb=HEAD # This file is supposed to be the same for all GNU packages # and recognize all the CPU types, system types and aliases # that are meaningful with *any* GNU software. # Each package is responsible for reporting which valid configurations # it does not support. The user should be able to distinguish # a failure to support a valid configuration from a meaningless # configuration. # The goal of this file is to map all the various variations of a given # machine specification into a single specification in the form: # CPU_TYPE-MANUFACTURER-OPERATING_SYSTEM # or in some cases, the newer four-part form: # CPU_TYPE-MANUFACTURER-KERNEL-OPERATING_SYSTEM # It is wrong to echo any other type of specification. me=`echo "$0" | sed -e 's,.*/,,'` usage="\ Usage: $0 [OPTION] CPU-MFR-OPSYS $0 [OPTION] ALIAS Canonicalize a configuration name. Operation modes: -h, --help print this help, then exit -t, --time-stamp print date of last modification, then exit -v, --version print version number, then exit Report bugs and patches to ." 
version="\ GNU config.sub ($timestamp) Copyright 1992-2014 Free Software Foundation, Inc. This is free software; see the source for copying conditions. There is NO warranty; not even for MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE." help=" Try \`$me --help' for more information." # Parse command line while test $# -gt 0 ; do case $1 in --time-stamp | --time* | -t ) echo "$timestamp" ; exit ;; --version | -v ) echo "$version" ; exit ;; --help | --h* | -h ) echo "$usage"; exit ;; -- ) # Stop option processing shift; break ;; - ) # Use stdin as input. break ;; -* ) echo "$me: invalid option $1$help" exit 1 ;; *local*) # First pass through any local machine types. echo $1 exit ;; * ) break ;; esac done case $# in 0) echo "$me: missing argument$help" >&2 exit 1;; 1) ;; *) echo "$me: too many arguments$help" >&2 exit 1;; esac # Separate what the user gave into CPU-COMPANY and OS or KERNEL-OS (if any). # Here we must recognize all the valid KERNEL-OS combinations. maybe_os=`echo $1 | sed 's/^\(.*\)-\([^-]*-[^-]*\)$/\2/'` case $maybe_os in nto-qnx* | linux-gnu* | linux-android* | linux-dietlibc | linux-newlib* | \ linux-musl* | linux-uclibc* | uclinux-uclibc* | uclinux-gnu* | kfreebsd*-gnu* | \ knetbsd*-gnu* | netbsd*-gnu* | \ kopensolaris*-gnu* | \ storm-chaos* | os2-emx* | rtmk-nova*) os=-$maybe_os basic_machine=`echo $1 | sed 's/^\(.*\)-\([^-]*-[^-]*\)$/\1/'` ;; android-linux) os=-linux-android basic_machine=`echo $1 | sed 's/^\(.*\)-\([^-]*-[^-]*\)$/\1/'`-unknown ;; *) basic_machine=`echo $1 | sed 's/-[^-]*$//'` if [ $basic_machine != $1 ] then os=`echo $1 | sed 's/.*-/-/'` else os=; fi ;; esac ### Let's recognize common machines as not being operating systems so ### that things like config.sub decstation-3100 work. We also ### recognize some manufacturers as not being operating systems, so we ### can provide default operating systems below. case $os in -sun*os*) # Prevent following clause from handling this invalid input. 
;; -dec* | -mips* | -sequent* | -encore* | -pc532* | -sgi* | -sony* | \ -att* | -7300* | -3300* | -delta* | -motorola* | -sun[234]* | \ -unicom* | -ibm* | -next | -hp | -isi* | -apollo | -altos* | \ -convergent* | -ncr* | -news | -32* | -3600* | -3100* | -hitachi* |\ -c[123]* | -convex* | -sun | -crds | -omron* | -dg | -ultra | -tti* | \ -harris | -dolphin | -highlevel | -gould | -cbm | -ns | -masscomp | \ -apple | -axis | -knuth | -cray | -microblaze*) os= basic_machine=$1 ;; -bluegene*) os=-cnk ;; -sim | -cisco | -oki | -wec | -winbond) os= basic_machine=$1 ;; -scout) ;; -wrs) os=-vxworks basic_machine=$1 ;; -chorusos*) os=-chorusos basic_machine=$1 ;; -chorusrdb) os=-chorusrdb basic_machine=$1 ;; -hiux*) os=-hiuxwe2 ;; -sco6) os=-sco5v6 basic_machine=`echo $1 | sed -e 's/86-.*/86-pc/'` ;; -sco5) os=-sco3.2v5 basic_machine=`echo $1 | sed -e 's/86-.*/86-pc/'` ;; -sco4) os=-sco3.2v4 basic_machine=`echo $1 | sed -e 's/86-.*/86-pc/'` ;; -sco3.2.[4-9]*) os=`echo $os | sed -e 's/sco3.2./sco3.2v/'` basic_machine=`echo $1 | sed -e 's/86-.*/86-pc/'` ;; -sco3.2v[4-9]*) # Don't forget version if it is 3.2v4 or newer. basic_machine=`echo $1 | sed -e 's/86-.*/86-pc/'` ;; -sco5v6*) # Don't forget version if it is 3.2v4 or newer. basic_machine=`echo $1 | sed -e 's/86-.*/86-pc/'` ;; -sco*) os=-sco3.2v2 basic_machine=`echo $1 | sed -e 's/86-.*/86-pc/'` ;; -udk*) basic_machine=`echo $1 | sed -e 's/86-.*/86-pc/'` ;; -isc) os=-isc2.2 basic_machine=`echo $1 | sed -e 's/86-.*/86-pc/'` ;; -clix*) basic_machine=clipper-intergraph ;; -isc*) basic_machine=`echo $1 | sed -e 's/86-.*/86-pc/'` ;; -lynx*178) os=-lynxos178 ;; -lynx*5) os=-lynxos5 ;; -lynx*) os=-lynxos ;; -ptx*) basic_machine=`echo $1 | sed -e 's/86-.*/86-sequent/'` ;; -windowsnt*) os=`echo $os | sed -e 's/windowsnt/winnt/'` ;; -psos*) os=-psos ;; -mint | -mint[0-9]*) basic_machine=m68k-atari os=-mint ;; esac # Decode aliases for certain CPU-COMPANY combinations. 
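The CPU-COMPANY / KERNEL-OS separation performed earlier by the two `sed` expressions can be sketched standalone. This is an illustrative demo, not part of `config.sub` itself; the sample triplet `x86_64-unknown-linux-gnu` is an arbitrary example input:

```shell
#!/bin/sh
# Standalone sketch of config.sub's split: the greedy \(.*\) group takes
# everything up to the last two hyphen-separated fields, so \2 captures a
# possible KERNEL-OS pair and \1 the CPU-COMPANY prefix.
input=x86_64-unknown-linux-gnu
maybe_os=`echo $input | sed 's/^\(.*\)-\([^-]*-[^-]*\)$/\2/'`
basic_machine=`echo $input | sed 's/^\(.*\)-\([^-]*-[^-]*\)$/\1/'`
echo "$basic_machine $maybe_os"   # prints: x86_64-unknown linux-gnu
```

Because `linux-gnu*` appears in the recognized KERNEL-OS list above, `config.sub` would then set `os=-linux-gnu` and canonicalize `basic_machine` separately.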
case $basic_machine in # Recognize the basic CPU types without company name. # Some are omitted here because they have special meanings below. 1750a | 580 \ | a29k \ | aarch64 | aarch64_be \ | alpha | alphaev[4-8] | alphaev56 | alphaev6[78] | alphapca5[67] \ | alpha64 | alpha64ev[4-8] | alpha64ev56 | alpha64ev6[78] | alpha64pca5[67] \ | am33_2.0 \ | arc | arceb \ | arm | arm[bl]e | arme[lb] | armv[2-8] | armv[3-8][lb] | armv7[arm] \ | avr | avr32 \ | be32 | be64 \ | bfin \ | c4x | c8051 | clipper \ | d10v | d30v | dlx | dsp16xx \ | epiphany \ | fido | fr30 | frv \ | h8300 | h8500 | hppa | hppa1.[01] | hppa2.0 | hppa2.0[nw] | hppa64 \ | hexagon \ | i370 | i860 | i960 | ia64 \ | ip2k | iq2000 \ | k1om \ | le32 | le64 \ | lm32 \ | m32c | m32r | m32rle | m68000 | m68k | m88k \ | maxq | mb | microblaze | microblazeel | mcore | mep | metag \ | mips | mipsbe | mipseb | mipsel | mipsle \ | mips16 \ | mips64 | mips64el \ | mips64octeon | mips64octeonel \ | mips64orion | mips64orionel \ | mips64r5900 | mips64r5900el \ | mips64vr | mips64vrel \ | mips64vr4100 | mips64vr4100el \ | mips64vr4300 | mips64vr4300el \ | mips64vr5000 | mips64vr5000el \ | mips64vr5900 | mips64vr5900el \ | mipsisa32 | mipsisa32el \ | mipsisa32r2 | mipsisa32r2el \ | mipsisa32r6 | mipsisa32r6el \ | mipsisa64 | mipsisa64el \ | mipsisa64r2 | mipsisa64r2el \ | mipsisa64r6 | mipsisa64r6el \ | mipsisa64sb1 | mipsisa64sb1el \ | mipsisa64sr71k | mipsisa64sr71kel \ | mipsr5900 | mipsr5900el \ | mipstx39 | mipstx39el \ | mn10200 | mn10300 \ | moxie \ | mt \ | msp430 \ | nds32 | nds32le | nds32be \ | nios | nios2 | nios2eb | nios2el \ | ns16k | ns32k \ | open8 | or1k | or1knd | or32 \ | pdp10 | pdp11 | pj | pjl \ | powerpc | powerpc64 | powerpc64le | powerpcle \ | pyramid \ | riscv32 | riscv64 \ | rl78 | rx \ | score \ | sh | sh[1234] | sh[24]a | sh[24]aeb | sh[23]e | sh[34]eb | sheb | shbe | shle | sh[1234]le | sh3ele \ | sh64 | sh64le \ | sparc | sparc64 | sparc64b | sparc64v | sparc86x | sparclet | sparclite \ 
| sparcv8 | sparcv9 | sparcv9b | sparcv9v \ | spu \ | tahoe | tic4x | tic54x | tic55x | tic6x | tic80 | tron \ | ubicom32 \ | v850 | v850e | v850e1 | v850e2 | v850es | v850e2v3 \ | we32k \ | x86 | xc16x | xstormy16 | xtensa \ | z8k | z80) basic_machine=$basic_machine-unknown ;; c54x) basic_machine=tic54x-unknown ;; c55x) basic_machine=tic55x-unknown ;; c6x) basic_machine=tic6x-unknown ;; m6811 | m68hc11 | m6812 | m68hc12 | m68hcs12x | nvptx | picochip) basic_machine=$basic_machine-unknown os=-none ;; m88110 | m680[12346]0 | m683?2 | m68360 | m5200 | v70 | w65 | z8k) ;; ms1) basic_machine=mt-unknown ;; strongarm | thumb | xscale) basic_machine=arm-unknown ;; xgate) basic_machine=$basic_machine-unknown os=-none ;; xscaleeb) basic_machine=armeb-unknown ;; xscaleel) basic_machine=armel-unknown ;; # We use `pc' rather than `unknown' # because (1) that's what they normally are, and # (2) the word "unknown" tends to confuse beginning users. i*86 | x86_64) basic_machine=$basic_machine-pc ;; # Object if more than one company name word. *-*-*) echo Invalid configuration \`$1\': machine \`$basic_machine\' not recognized 1>&2 exit 1 ;; # Recognize the basic CPU types with company name. 
580-* \ | a29k-* \ | aarch64-* | aarch64_be-* \ | alpha-* | alphaev[4-8]-* | alphaev56-* | alphaev6[78]-* \ | alpha64-* | alpha64ev[4-8]-* | alpha64ev56-* | alpha64ev6[78]-* \ | alphapca5[67]-* | alpha64pca5[67]-* | arc-* | arceb-* \ | arm-* | armbe-* | armle-* | armeb-* | armv*-* \ | avr-* | avr32-* \ | be32-* | be64-* \ | bfin-* | bs2000-* \ | c[123]* | c30-* | [cjt]90-* | c4x-* \ | c8051-* | clipper-* | craynv-* | cydra-* \ | d10v-* | d30v-* | dlx-* \ | elxsi-* \ | f30[01]-* | f700-* | fido-* | fr30-* | frv-* | fx80-* \ | h8300-* | h8500-* \ | hppa-* | hppa1.[01]-* | hppa2.0-* | hppa2.0[nw]-* | hppa64-* \ | hexagon-* \ | i*86-* | i860-* | i960-* | ia64-* \ | ip2k-* | iq2000-* \ | k1om-* \ | le32-* | le64-* \ | lm32-* \ | m32c-* | m32r-* | m32rle-* \ | m68000-* | m680[012346]0-* | m68360-* | m683?2-* | m68k-* \ | m88110-* | m88k-* | maxq-* | mcore-* | metag-* \ | microblaze-* | microblazeel-* \ | mips-* | mipsbe-* | mipseb-* | mipsel-* | mipsle-* \ | mips16-* \ | mips64-* | mips64el-* \ | mips64octeon-* | mips64octeonel-* \ | mips64orion-* | mips64orionel-* \ | mips64r5900-* | mips64r5900el-* \ | mips64vr-* | mips64vrel-* \ | mips64vr4100-* | mips64vr4100el-* \ | mips64vr4300-* | mips64vr4300el-* \ | mips64vr5000-* | mips64vr5000el-* \ | mips64vr5900-* | mips64vr5900el-* \ | mipsisa32-* | mipsisa32el-* \ | mipsisa32r2-* | mipsisa32r2el-* \ | mipsisa32r6-* | mipsisa32r6el-* \ | mipsisa64-* | mipsisa64el-* \ | mipsisa64r2-* | mipsisa64r2el-* \ | mipsisa64r6-* | mipsisa64r6el-* \ | mipsisa64sb1-* | mipsisa64sb1el-* \ | mipsisa64sr71k-* | mipsisa64sr71kel-* \ | mipsr5900-* | mipsr5900el-* \ | mipstx39-* | mipstx39el-* \ | mmix-* \ | mt-* \ | msp430-* \ | nds32-* | nds32le-* | nds32be-* \ | nios-* | nios2-* | nios2eb-* | nios2el-* \ | none-* | np1-* | ns16k-* | ns32k-* \ | open8-* \ | or1k*-* \ | orion-* \ | pdp10-* | pdp11-* | pj-* | pjl-* | pn-* | power-* \ | powerpc-* | powerpc64-* | powerpc64le-* | powerpcle-* \ | pyramid-* \ | rl78-* | romp-* | rs6000-* | rx-* \ 
| sh-* | sh[1234]-* | sh[24]a-* | sh[24]aeb-* | sh[23]e-* | sh[34]eb-* | sheb-* | shbe-* \ | shle-* | sh[1234]le-* | sh3ele-* | sh64-* | sh64le-* \ | sparc-* | sparc64-* | sparc64b-* | sparc64v-* | sparc86x-* | sparclet-* \ | sparclite-* \ | sparcv8-* | sparcv9-* | sparcv9b-* | sparcv9v-* | sv1-* | sx?-* \ | tahoe-* \ | tic30-* | tic4x-* | tic54x-* | tic55x-* | tic6x-* | tic80-* \ | tile*-* \ | tron-* \ | ubicom32-* \ | v850-* | v850e-* | v850e1-* | v850es-* | v850e2-* | v850e2v3-* \ | vax-* \ | we32k-* \ | x86-* | x86_64-* | xc16x-* | xps100-* \ | xstormy16-* | xtensa*-* \ | ymp-* \ | z8k-* | z80-*) ;; # Recognize the basic CPU types without company name, with glob match. xtensa*) basic_machine=$basic_machine-unknown ;; # Recognize the various machine names and aliases which stand # for a CPU type and a company and sometimes even an OS. 386bsd) basic_machine=i386-unknown os=-bsd ;; 3b1 | 7300 | 7300-att | att-7300 | pc7300 | safari | unixpc) basic_machine=m68000-att ;; 3b*) basic_machine=we32k-att ;; a29khif) basic_machine=a29k-amd os=-udi ;; abacus) basic_machine=abacus-unknown ;; adobe68k) basic_machine=m68010-adobe os=-scout ;; alliant | fx80) basic_machine=fx80-alliant ;; altos | altos3068) basic_machine=m68k-altos ;; am29k) basic_machine=a29k-none os=-bsd ;; amd64) basic_machine=x86_64-pc ;; amd64-*) basic_machine=x86_64-`echo $basic_machine | sed 's/^[^-]*-//'` ;; amdahl) basic_machine=580-amdahl os=-sysv ;; amiga | amiga-*) basic_machine=m68k-unknown ;; amigaos | amigados) basic_machine=m68k-unknown os=-amigaos ;; amigaunix | amix) basic_machine=m68k-unknown os=-sysv4 ;; apollo68) basic_machine=m68k-apollo os=-sysv ;; apollo68bsd) basic_machine=m68k-apollo os=-bsd ;; aros) basic_machine=i386-pc os=-aros ;; aux) basic_machine=m68k-apple os=-aux ;; balance) basic_machine=ns32k-sequent os=-dynix ;; blackfin) basic_machine=bfin-unknown os=-linux ;; blackfin-*) basic_machine=bfin-`echo $basic_machine | sed 's/^[^-]*-//'` os=-linux ;; bluegene*) 
basic_machine=powerpc-ibm os=-cnk ;; c54x-*) basic_machine=tic54x-`echo $basic_machine | sed 's/^[^-]*-//'` ;; c55x-*) basic_machine=tic55x-`echo $basic_machine | sed 's/^[^-]*-//'` ;; c6x-*) basic_machine=tic6x-`echo $basic_machine | sed 's/^[^-]*-//'` ;; c90) basic_machine=c90-cray os=-unicos ;; cegcc) basic_machine=arm-unknown os=-cegcc ;; convex-c1) basic_machine=c1-convex os=-bsd ;; convex-c2) basic_machine=c2-convex os=-bsd ;; convex-c32) basic_machine=c32-convex os=-bsd ;; convex-c34) basic_machine=c34-convex os=-bsd ;; convex-c38) basic_machine=c38-convex os=-bsd ;; cray | j90) basic_machine=j90-cray os=-unicos ;; craynv) basic_machine=craynv-cray os=-unicosmp ;; cr16 | cr16-*) basic_machine=cr16-unknown os=-elf ;; crds | unos) basic_machine=m68k-crds ;; crisv32 | crisv32-* | etraxfs*) basic_machine=crisv32-axis ;; cris | cris-* | etrax*) basic_machine=cris-axis ;; crx) basic_machine=crx-unknown os=-elf ;; da30 | da30-*) basic_machine=m68k-da30 ;; decstation | decstation-3100 | pmax | pmax-* | pmin | dec3100 | decstatn) basic_machine=mips-dec ;; decsystem10* | dec10*) basic_machine=pdp10-dec os=-tops10 ;; decsystem20* | dec20*) basic_machine=pdp10-dec os=-tops20 ;; delta | 3300 | motorola-3300 | motorola-delta \ | 3300-motorola | delta-motorola) basic_machine=m68k-motorola ;; delta88) basic_machine=m88k-motorola os=-sysv3 ;; dicos) basic_machine=i686-pc os=-dicos ;; djgpp) basic_machine=i586-pc os=-msdosdjgpp ;; dpx20 | dpx20-*) basic_machine=rs6000-bull os=-bosx ;; dpx2* | dpx2*-bull) basic_machine=m68k-bull os=-sysv3 ;; ebmon29k) basic_machine=a29k-amd os=-ebmon ;; elxsi) basic_machine=elxsi-elxsi os=-bsd ;; encore | umax | mmax) basic_machine=ns32k-encore ;; es1800 | OSE68k | ose68k | ose | OSE) basic_machine=m68k-ericsson os=-ose ;; fx2800) basic_machine=i860-alliant ;; genix) basic_machine=ns32k-ns ;; gmicro) basic_machine=tron-gmicro os=-sysv ;; go32) basic_machine=i386-pc os=-go32 ;; h3050r* | hiux*) basic_machine=hppa1.1-hitachi os=-hiuxwe2 ;; 
h8300hms) basic_machine=h8300-hitachi os=-hms ;; h8300xray) basic_machine=h8300-hitachi os=-xray ;; h8500hms) basic_machine=h8500-hitachi os=-hms ;; harris) basic_machine=m88k-harris os=-sysv3 ;; hp300-*) basic_machine=m68k-hp ;; hp300bsd) basic_machine=m68k-hp os=-bsd ;; hp300hpux) basic_machine=m68k-hp os=-hpux ;; hp3k9[0-9][0-9] | hp9[0-9][0-9]) basic_machine=hppa1.0-hp ;; hp9k2[0-9][0-9] | hp9k31[0-9]) basic_machine=m68000-hp ;; hp9k3[2-9][0-9]) basic_machine=m68k-hp ;; hp9k6[0-9][0-9] | hp6[0-9][0-9]) basic_machine=hppa1.0-hp ;; hp9k7[0-79][0-9] | hp7[0-79][0-9]) basic_machine=hppa1.1-hp ;; hp9k78[0-9] | hp78[0-9]) # FIXME: really hppa2.0-hp basic_machine=hppa1.1-hp ;; hp9k8[67]1 | hp8[67]1 | hp9k80[24] | hp80[24] | hp9k8[78]9 | hp8[78]9 | hp9k893 | hp893) # FIXME: really hppa2.0-hp basic_machine=hppa1.1-hp ;; hp9k8[0-9][13679] | hp8[0-9][13679]) basic_machine=hppa1.1-hp ;; hp9k8[0-9][0-9] | hp8[0-9][0-9]) basic_machine=hppa1.0-hp ;; hppa-next) os=-nextstep3 ;; hppaosf) basic_machine=hppa1.1-hp os=-osf ;; hppro) basic_machine=hppa1.1-hp os=-proelf ;; i370-ibm* | ibm*) basic_machine=i370-ibm ;; i*86v32) basic_machine=`echo $1 | sed -e 's/86.*/86-pc/'` os=-sysv32 ;; i*86v4*) basic_machine=`echo $1 | sed -e 's/86.*/86-pc/'` os=-sysv4 ;; i*86v) basic_machine=`echo $1 | sed -e 's/86.*/86-pc/'` os=-sysv ;; i*86sol2) basic_machine=`echo $1 | sed -e 's/86.*/86-pc/'` os=-solaris2 ;; i386mach) basic_machine=i386-mach os=-mach ;; i386-vsta | vsta) basic_machine=i386-unknown os=-vsta ;; iris | iris4d) basic_machine=mips-sgi case $os in -irix*) ;; *) os=-irix4 ;; esac ;; isi68 | isi) basic_machine=m68k-isi os=-sysv ;; m68knommu) basic_machine=m68k-unknown os=-linux ;; m68knommu-*) basic_machine=m68k-`echo $basic_machine | sed 's/^[^-]*-//'` os=-linux ;; m88k-omron*) basic_machine=m88k-omron ;; magnum | m3230) basic_machine=mips-mips os=-sysv ;; merlin) basic_machine=ns32k-utek os=-sysv ;; microblaze*) basic_machine=microblaze-xilinx ;; mingw64) basic_machine=x86_64-pc 
os=-mingw64 ;; mingw32) basic_machine=i686-pc os=-mingw32 ;; mingw32ce) basic_machine=arm-unknown os=-mingw32ce ;; miniframe) basic_machine=m68000-convergent ;; *mint | -mint[0-9]* | *MiNT | *MiNT[0-9]*) basic_machine=m68k-atari os=-mint ;; mips3*-*) basic_machine=`echo $basic_machine | sed -e 's/mips3/mips64/'` ;; mips3*) basic_machine=`echo $basic_machine | sed -e 's/mips3/mips64/'`-unknown ;; monitor) basic_machine=m68k-rom68k os=-coff ;; morphos) basic_machine=powerpc-unknown os=-morphos ;; moxiebox) basic_machine=moxie-unknown os=-moxiebox ;; msdos) basic_machine=i386-pc os=-msdos ;; ms1-*) basic_machine=`echo $basic_machine | sed -e 's/ms1-/mt-/'` ;; msys) basic_machine=i686-pc os=-msys ;; mvs) basic_machine=i370-ibm os=-mvs ;; nacl) basic_machine=le32-unknown os=-nacl ;; ncr3000) basic_machine=i486-ncr os=-sysv4 ;; netbsd386) basic_machine=i386-unknown os=-netbsd ;; netwinder) basic_machine=armv4l-rebel os=-linux ;; news | news700 | news800 | news900) basic_machine=m68k-sony os=-newsos ;; news1000) basic_machine=m68030-sony os=-newsos ;; news-3600 | risc-news) basic_machine=mips-sony os=-newsos ;; necv70) basic_machine=v70-nec os=-sysv ;; next | m*-next ) basic_machine=m68k-next case $os in -nextstep* ) ;; -ns2*) os=-nextstep2 ;; *) os=-nextstep3 ;; esac ;; nh3000) basic_machine=m68k-harris os=-cxux ;; nh[45]000) basic_machine=m88k-harris os=-cxux ;; nindy960) basic_machine=i960-intel os=-nindy ;; mon960) basic_machine=i960-intel os=-mon960 ;; nonstopux) basic_machine=mips-compaq os=-nonstopux ;; np1) basic_machine=np1-gould ;; neo-tandem) basic_machine=neo-tandem ;; nse-tandem) basic_machine=nse-tandem ;; nsr-tandem) basic_machine=nsr-tandem ;; op50n-* | op60c-*) basic_machine=hppa1.1-oki os=-proelf ;; openrisc | openrisc-*) basic_machine=or32-unknown ;; os400) basic_machine=powerpc-ibm os=-os400 ;; OSE68000 | ose68000) basic_machine=m68000-ericsson os=-ose ;; os68k) basic_machine=m68k-none os=-os68k ;; pa-hitachi) basic_machine=hppa1.1-hitachi os=-hiuxwe2 
;; paragon) basic_machine=i860-intel os=-osf ;; parisc) basic_machine=hppa-unknown os=-linux ;; parisc-*) basic_machine=hppa-`echo $basic_machine | sed 's/^[^-]*-//'` os=-linux ;; pbd) basic_machine=sparc-tti ;; pbb) basic_machine=m68k-tti ;; pc532 | pc532-*) basic_machine=ns32k-pc532 ;; pc98) basic_machine=i386-pc ;; pc98-*) basic_machine=i386-`echo $basic_machine | sed 's/^[^-]*-//'` ;; pentium | p5 | k5 | k6 | nexgen | viac3) basic_machine=i586-pc ;; pentiumpro | p6 | 6x86 | athlon | athlon_*) basic_machine=i686-pc ;; pentiumii | pentium2 | pentiumiii | pentium3) basic_machine=i686-pc ;; pentium4) basic_machine=i786-pc ;; pentium-* | p5-* | k5-* | k6-* | nexgen-* | viac3-*) basic_machine=i586-`echo $basic_machine | sed 's/^[^-]*-//'` ;; pentiumpro-* | p6-* | 6x86-* | athlon-*) basic_machine=i686-`echo $basic_machine | sed 's/^[^-]*-//'` ;; pentiumii-* | pentium2-* | pentiumiii-* | pentium3-*) basic_machine=i686-`echo $basic_machine | sed 's/^[^-]*-//'` ;; pentium4-*) basic_machine=i786-`echo $basic_machine | sed 's/^[^-]*-//'` ;; pn) basic_machine=pn-gould ;; power) basic_machine=power-ibm ;; ppc | ppcbe) basic_machine=powerpc-unknown ;; ppc-* | ppcbe-*) basic_machine=powerpc-`echo $basic_machine | sed 's/^[^-]*-//'` ;; ppcle | powerpclittle | ppc-le | powerpc-little) basic_machine=powerpcle-unknown ;; ppcle-* | powerpclittle-*) basic_machine=powerpcle-`echo $basic_machine | sed 's/^[^-]*-//'` ;; ppc64) basic_machine=powerpc64-unknown ;; ppc64-*) basic_machine=powerpc64-`echo $basic_machine | sed 's/^[^-]*-//'` ;; ppc64le | powerpc64little | ppc64-le | powerpc64-little) basic_machine=powerpc64le-unknown ;; ppc64le-* | powerpc64little-*) basic_machine=powerpc64le-`echo $basic_machine | sed 's/^[^-]*-//'` ;; ps2) basic_machine=i386-ibm ;; pw32) basic_machine=i586-unknown os=-pw32 ;; rdos | rdos64) basic_machine=x86_64-pc os=-rdos ;; rdos32) basic_machine=i386-pc os=-rdos ;; rom68k) basic_machine=m68k-rom68k os=-coff ;; rm[46]00) basic_machine=mips-siemens ;; rtpc 
| rtpc-*) basic_machine=romp-ibm ;; s390 | s390-*) basic_machine=s390-ibm ;; s390x | s390x-*) basic_machine=s390x-ibm ;; sa29200) basic_machine=a29k-amd os=-udi ;; sb1) basic_machine=mipsisa64sb1-unknown ;; sb1el) basic_machine=mipsisa64sb1el-unknown ;; sde) basic_machine=mipsisa32-sde os=-elf ;; sei) basic_machine=mips-sei os=-seiux ;; sequent) basic_machine=i386-sequent ;; sh) basic_machine=sh-hitachi os=-hms ;; sh5el) basic_machine=sh5le-unknown ;; sh64) basic_machine=sh64-unknown ;; sparclite-wrs | simso-wrs) basic_machine=sparclite-wrs os=-vxworks ;; sps7) basic_machine=m68k-bull os=-sysv2 ;; spur) basic_machine=spur-unknown ;; st2000) basic_machine=m68k-tandem ;; stratus) basic_machine=i860-stratus os=-sysv4 ;; strongarm-* | thumb-*) basic_machine=arm-`echo $basic_machine | sed 's/^[^-]*-//'` ;; sun2) basic_machine=m68000-sun ;; sun2os3) basic_machine=m68000-sun os=-sunos3 ;; sun2os4) basic_machine=m68000-sun os=-sunos4 ;; sun3os3) basic_machine=m68k-sun os=-sunos3 ;; sun3os4) basic_machine=m68k-sun os=-sunos4 ;; sun4os3) basic_machine=sparc-sun os=-sunos3 ;; sun4os4) basic_machine=sparc-sun os=-sunos4 ;; sun4sol2) basic_machine=sparc-sun os=-solaris2 ;; sun3 | sun3-*) basic_machine=m68k-sun ;; sun4) basic_machine=sparc-sun ;; sun386 | sun386i | roadrunner) basic_machine=i386-sun ;; sv1) basic_machine=sv1-cray os=-unicos ;; symmetry) basic_machine=i386-sequent os=-dynix ;; t3e) basic_machine=alphaev5-cray os=-unicos ;; t90) basic_machine=t90-cray os=-unicos ;; tile*) basic_machine=$basic_machine-unknown os=-linux-gnu ;; tx39) basic_machine=mipstx39-unknown ;; tx39el) basic_machine=mipstx39el-unknown ;; toad1) basic_machine=pdp10-xkl os=-tops20 ;; tower | tower-32) basic_machine=m68k-ncr ;; tpf) basic_machine=s390x-ibm os=-tpf ;; udi29k) basic_machine=a29k-amd os=-udi ;; ultra3) basic_machine=a29k-nyu os=-sym1 ;; v810 | necv810) basic_machine=v810-nec os=-none ;; vaxv) basic_machine=vax-dec os=-sysv ;; vms) basic_machine=vax-dec os=-vms ;; vpp*|vx|vx-*) 
basic_machine=f301-fujitsu ;; vxworks960) basic_machine=i960-wrs os=-vxworks ;; vxworks68) basic_machine=m68k-wrs os=-vxworks ;; vxworks29k) basic_machine=a29k-wrs os=-vxworks ;; w65*) basic_machine=w65-wdc os=-none ;; w89k-*) basic_machine=hppa1.1-winbond os=-proelf ;; xbox) basic_machine=i686-pc os=-mingw32 ;; xps | xps100) basic_machine=xps100-honeywell ;; xscale-* | xscalee[bl]-*) basic_machine=`echo $basic_machine | sed 's/^xscale/arm/'` ;; ymp) basic_machine=ymp-cray os=-unicos ;; z8k-*-coff) basic_machine=z8k-unknown os=-sim ;; z80-*-coff) basic_machine=z80-unknown os=-sim ;; none) basic_machine=none-none os=-none ;; # Here we handle the default manufacturer of certain CPU types. It is in # some cases the only manufacturer, in others, it is the most popular. w89k) basic_machine=hppa1.1-winbond ;; op50n) basic_machine=hppa1.1-oki ;; op60c) basic_machine=hppa1.1-oki ;; romp) basic_machine=romp-ibm ;; mmix) basic_machine=mmix-knuth ;; rs6000) basic_machine=rs6000-ibm ;; vax) basic_machine=vax-dec ;; pdp10) # there are many clones, so DEC is not a safe bet basic_machine=pdp10-unknown ;; pdp11) basic_machine=pdp11-dec ;; we32k) basic_machine=we32k-att ;; sh[1234] | sh[24]a | sh[24]aeb | sh[34]eb | sh[1234]le | sh[23]ele) basic_machine=sh-unknown ;; sparc | sparcv8 | sparcv9 | sparcv9b | sparcv9v) basic_machine=sparc-sun ;; cydra) basic_machine=cydra-cydrome ;; orion) basic_machine=orion-highlevel ;; orion105) basic_machine=clipper-highlevel ;; mac | mpw | mac-mpw) basic_machine=m68k-apple ;; pmac | pmac-mpw) basic_machine=powerpc-apple ;; *-unknown) # Make sure to match an already-canonicalized machine name. ;; *) echo Invalid configuration \`$1\': machine \`$basic_machine\' not recognized 1>&2 exit 1 ;; esac # Here we canonicalize certain aliases for manufacturers. 
case $basic_machine in *-digital*) basic_machine=`echo $basic_machine | sed 's/digital.*/dec/'` ;; *-commodore*) basic_machine=`echo $basic_machine | sed 's/commodore.*/cbm/'` ;; *) ;; esac # Decode manufacturer-specific aliases for certain operating systems. if [ x"$os" != x"" ] then case $os in # First match some system type aliases # that might get confused with valid system types. # -solaris* is a basic system type, with this one exception. -auroraux) os=-auroraux ;; -solaris1 | -solaris1.*) os=`echo $os | sed -e 's|solaris1|sunos4|'` ;; -solaris) os=-solaris2 ;; -svr4*) os=-sysv4 ;; -unixware*) os=-sysv4.2uw ;; -gnu/linux*) os=`echo $os | sed -e 's|gnu/linux|linux-gnu|'` ;; # First accept the basic system types. # The portable systems comes first. # Each alternative MUST END IN A *, to match a version number. # -sysv* is not here because it comes later, after sysvr4. -gnu* | -bsd* | -mach* | -minix* | -genix* | -ultrix* | -irix* \ | -*vms* | -sco* | -esix* | -isc* | -aix* | -cnk* | -sunos | -sunos[34]*\ | -hpux* | -unos* | -osf* | -luna* | -dgux* | -auroraux* | -solaris* \ | -sym* | -kopensolaris* | -plan9* \ | -amigaos* | -amigados* | -msdos* | -newsos* | -unicos* | -aof* \ | -aos* | -aros* \ | -nindy* | -vxsim* | -vxworks* | -ebmon* | -hms* | -mvs* \ | -clix* | -riscos* | -uniplus* | -iris* | -rtu* | -xenix* \ | -hiux* | -386bsd* | -knetbsd* | -mirbsd* | -netbsd* \ | -bitrig* | -openbsd* | -solidbsd* \ | -ekkobsd* | -kfreebsd* | -freebsd* | -riscix* | -lynxos* \ | -bosx* | -nextstep* | -cxux* | -aout* | -elf* | -oabi* \ | -ptx* | -coff* | -ecoff* | -winnt* | -domain* | -vsta* \ | -udi* | -eabi* | -lites* | -ieee* | -go32* | -aux* \ | -chorusos* | -chorusrdb* | -cegcc* \ | -cygwin* | -msys* | -pe* | -psos* | -moss* | -proelf* | -rtems* \ | -mingw32* | -mingw64* | -linux-gnu* | -linux-android* \ | -linux-newlib* | -linux-musl* | -linux-uclibc* \ | -uxpv* | -beos* | -mpeix* | -udk* | -moxiebox* \ | -interix* | -uwin* | -mks* | -rhapsody* | -darwin* | -opened* \ 
| -openstep* | -oskit* | -conix* | -pw32* | -nonstopux* \ | -storm-chaos* | -tops10* | -tenex* | -tops20* | -its* \ | -os2* | -vos* | -palmos* | -uclinux* | -nucleus* \ | -morphos* | -superux* | -rtmk* | -rtmk-nova* | -windiss* \ | -powermax* | -dnix* | -nx6 | -nx7 | -sei* | -dragonfly* \ | -skyos* | -haiku* | -rdos* | -toppers* | -drops* | -es* | -tirtos*) # Remember, each alternative MUST END IN *, to match a version number. ;; -qnx*) case $basic_machine in x86-* | i*86-*) ;; *) os=-nto$os ;; esac ;; -nto-qnx*) ;; -nto*) os=`echo $os | sed -e 's|nto|nto-qnx|'` ;; -sim | -es1800* | -hms* | -xray | -os68k* | -none* | -v88r* \ | -windows* | -osx | -abug | -netware* | -os9* | -beos* | -haiku* \ | -macos* | -mpw* | -magic* | -mmixware* | -mon960* | -lnews*) ;; -mac*) os=`echo $os | sed -e 's|mac|macos|'` ;; -linux-dietlibc) os=-linux-dietlibc ;; -linux*) os=`echo $os | sed -e 's|linux|linux-gnu|'` ;; -sunos5*) os=`echo $os | sed -e 's|sunos5|solaris2|'` ;; -sunos6*) os=`echo $os | sed -e 's|sunos6|solaris3|'` ;; -opened*) os=-openedition ;; -os400*) os=-os400 ;; -wince*) os=-wince ;; -osfrose*) os=-osfrose ;; -osf*) os=-osf ;; -utek*) os=-bsd ;; -dynix*) os=-bsd ;; -acis*) os=-aos ;; -atheos*) os=-atheos ;; -syllable*) os=-syllable ;; -386bsd) os=-bsd ;; -ctix* | -uts*) os=-sysv ;; -nova*) os=-rtmk-nova ;; -ns2 ) os=-nextstep2 ;; -nsk*) os=-nsk ;; # Preserve the version number of sinix5. -sinix5.*) os=`echo $os | sed -e 's|sinix|sysv|'` ;; -sinix*) os=-sysv4 ;; -tpf*) os=-tpf ;; -triton*) os=-sysv3 ;; -oss*) os=-sysv3 ;; -svr4) os=-sysv4 ;; -svr3) os=-sysv3 ;; -sysvr4) os=-sysv4 ;; # This must come after -sysvr4. -sysv*) ;; -ose*) os=-ose ;; -es1800*) os=-ose ;; -xenix) os=-xenix ;; -*mint | -mint[0-9]* | -*MiNT | -MiNT[0-9]*) os=-mint ;; -aros*) os=-aros ;; -zvmoe) os=-zvmoe ;; -dicos*) os=-dicos ;; -nacl*) ;; -none) ;; *) # Get rid of the `-' at the beginning of $os. 
os=`echo $os | sed 's/[^-]*-//'` echo Invalid configuration \`$1\': system \`$os\' not recognized 1>&2 exit 1 ;; esac else # Here we handle the default operating systems that come with various machines. # The value should be what the vendor currently ships out the door with their # machine or put another way, the most popular os provided with the machine. # Note that if you're going to try to match "-MANUFACTURER" here (say, # "-sun"), then you have to tell the case statement up towards the top # that MANUFACTURER isn't an operating system. Otherwise, code above # will signal an error saying that MANUFACTURER isn't an operating # system, and we'll never get to this point. case $basic_machine in score-*) os=-elf ;; spu-*) os=-elf ;; *-acorn) os=-riscix1.2 ;; arm*-rebel) os=-linux ;; arm*-semi) os=-aout ;; c4x-* | tic4x-*) os=-coff ;; c8051-*) os=-elf ;; hexagon-*) os=-elf ;; tic54x-*) os=-coff ;; tic55x-*) os=-coff ;; tic6x-*) os=-coff ;; # This must come before the *-dec entry. pdp10-*) os=-tops20 ;; pdp11-*) os=-none ;; *-dec | vax-*) os=-ultrix4.2 ;; m68*-apollo) os=-domain ;; i386-sun) os=-sunos4.0.2 ;; m68000-sun) os=-sunos3 ;; m68*-cisco) os=-aout ;; mep-*) os=-elf ;; mips*-cisco) os=-elf ;; mips*-*) os=-elf ;; or32-*) os=-coff ;; *-tti) # must be before sparc entry or we get the wrong os. 
os=-sysv3 ;; sparc-* | *-sun) os=-sunos4.1.1 ;; *-be) os=-beos ;; *-haiku) os=-haiku ;; *-ibm) os=-aix ;; *-knuth) os=-mmixware ;; *-wec) os=-proelf ;; *-winbond) os=-proelf ;; *-oki) os=-proelf ;; *-hp) os=-hpux ;; *-hitachi) os=-hiux ;; i860-* | *-att | *-ncr | *-altos | *-motorola | *-convergent) os=-sysv ;; *-cbm) os=-amigaos ;; *-dg) os=-dgux ;; *-dolphin) os=-sysv3 ;; m68k-ccur) os=-rtu ;; m88k-omron*) os=-luna ;; *-next ) os=-nextstep ;; *-sequent) os=-ptx ;; *-crds) os=-unos ;; *-ns) os=-genix ;; i370-*) os=-mvs ;; *-next) os=-nextstep3 ;; *-gould) os=-sysv ;; *-highlevel) os=-bsd ;; *-encore) os=-bsd ;; *-sgi) os=-irix ;; *-siemens) os=-sysv4 ;; *-masscomp) os=-rtu ;; f30[01]-fujitsu | f700-fujitsu) os=-uxpv ;; *-rom68k) os=-coff ;; *-*bug) os=-coff ;; *-apple) os=-macos ;; *-atari*) os=-mint ;; *) os=-none ;; esac fi # Here we handle the case where we know the os, and the CPU type, but not the # manufacturer. We pick the logical manufacturer. vendor=unknown case $basic_machine in *-unknown) case $os in -riscix*) vendor=acorn ;; -sunos*) vendor=sun ;; -cnk*|-aix*) vendor=ibm ;; -beos*) vendor=be ;; -hpux*) vendor=hp ;; -mpeix*) vendor=hp ;; -hiux*) vendor=hitachi ;; -unos*) vendor=crds ;; -dgux*) vendor=dg ;; -luna*) vendor=omron ;; -genix*) vendor=ns ;; -mvs* | -opened*) vendor=ibm ;; -os400*) vendor=ibm ;; -ptx*) vendor=sequent ;; -tpf*) vendor=ibm ;; -vxsim* | -vxworks* | -windiss*) vendor=wrs ;; -aux*) vendor=apple ;; -hms*) vendor=hitachi ;; -mpw* | -macos*) vendor=apple ;; -*mint | -mint[0-9]* | -*MiNT | -MiNT[0-9]*) vendor=atari ;; -vos*) vendor=stratus ;; esac basic_machine=`echo $basic_machine | sed "s/unknown/$vendor/"` ;; esac echo $basic_machine$os exit # Local variables: # eval: (add-hook 'write-file-hooks 'time-stamp) # time-stamp-start: "timestamp='" # time-stamp-format: "%:y-%02m-%02d" # time-stamp-end: "'" # End: slurm-slurm-15-08-7-1/auxdir/depcomp000077500000000000000000000560161265000126300172060ustar00rootroot00000000000000#! 
/bin/sh # depcomp - compile a program generating dependencies as side-effects scriptversion=2013-05-30.07; # UTC # Copyright (C) 1999-2013 Free Software Foundation, Inc. # This program is free software; you can redistribute it and/or modify # it under the terms of the GNU General Public License as published by # the Free Software Foundation; either version 2, or (at your option) # any later version. # This program is distributed in the hope that it will be useful, # but WITHOUT ANY WARRANTY; without even the implied warranty of # MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the # GNU General Public License for more details. # You should have received a copy of the GNU General Public License # along with this program. If not, see <http://www.gnu.org/licenses/>. # As a special exception to the GNU General Public License, if you # distribute this file as part of a program that contains a # configuration script generated by Autoconf, you may include it under # the same distribution terms that you use for the rest of that program. # Originally written by Alexandre Oliva <oliva@dcc.unicamp.br>. case $1 in '') echo "$0: No command. Try '$0 --help' for more information." 1>&2 exit 1; ;; -h | --h*) cat <<\EOF Usage: depcomp [--help] [--version] PROGRAM [ARGS] Run PROGRAMS ARGS to compile a file, generating dependencies as side-effects. Environment variables: depmode Dependency tracking mode. source Source file read by 'PROGRAMS ARGS'. object Object file output by 'PROGRAMS ARGS'. DEPDIR directory where to store dependencies. depfile Dependency file to output. tmpdepfile Temporary file to use when outputting dependencies. libtool Whether libtool is used (yes/no). Report bugs to <bug-automake@gnu.org>. EOF exit $? ;; -v | --v*) echo "depcomp $scriptversion" exit $? ;; esac # Get the directory component of the given path, and save it in the # global variable '$dir'. Note that this directory component will # be either empty or ending with a '/' character. This is deliberate.
set_dir_from () { case $1 in */*) dir=`echo "$1" | sed -e 's|/[^/]*$|/|'`;; *) dir=;; esac } # Get the suffix-stripped basename of the given path, and save it in the # global variable '$base'. set_base_from () { base=`echo "$1" | sed -e 's|^.*/||' -e 's/\.[^.]*$//'` } # If no dependency file was actually created by the compiler invocation, # we still have to create a dummy depfile, to avoid errors with the # Makefile "include basename.Plo" scheme. make_dummy_depfile () { echo "#dummy" > "$depfile" } # Factor out some common post-processing of the generated depfile. # Requires the auxiliary global variable '$tmpdepfile' to be set. aix_post_process_depfile () { # If the compiler actually managed to produce a dependency file, # post-process it. if test -f "$tmpdepfile"; then # Each line is of the form 'foo.o: dependency.h'. # Do two passes, one to just change these to # $object: dependency.h # and one to simply output # dependency.h: # which is needed to avoid the deleted-header problem. { sed -e "s,^.*\.[$lower]*:,$object:," < "$tmpdepfile" sed -e "s,^.*\.[$lower]*:[$tab ]*,," -e 's,$,:,' < "$tmpdepfile" } > "$depfile" rm -f "$tmpdepfile" else make_dummy_depfile fi } # A tabulation character. tab=' ' # A newline character. nl=' ' # Character ranges might be problematic outside the C locale. # These definitions help. upper=ABCDEFGHIJKLMNOPQRSTUVWXYZ lower=abcdefghijklmnopqrstuvwxyz digits=0123456789 alpha=${upper}${lower} if test -z "$depmode" || test -z "$source" || test -z "$object"; then echo "depcomp: Variables source, object and depmode must be set" 1>&2 exit 1 fi # Dependencies for sub/bar.o or sub/bar.obj go into sub/.deps/bar.Po. depfile=${depfile-`echo "$object" | sed 's|[^\\/]*$|'${DEPDIR-.deps}'/&|;s|\.\([^.]*\)$|.P\1|;s|Pobj$|Po|'`} tmpdepfile=${tmpdepfile-`echo "$depfile" | sed 's/\.\([^.]*\)$/.T\1/'`} rm -f "$tmpdepfile" # Avoid interferences from the environment. gccflag= dashmflag= # Some modes work just like other modes, but use different flags.
We # parameterize here, but still list the modes in the big case below, # to make depend.m4 easier to write. Note that we *cannot* use a case # here, because this file can only contain one case statement. if test "$depmode" = hp; then # HP compiler uses -M and no extra arg. gccflag=-M depmode=gcc fi if test "$depmode" = dashXmstdout; then # This is just like dashmstdout with a different argument. dashmflag=-xM depmode=dashmstdout fi cygpath_u="cygpath -u -f -" if test "$depmode" = msvcmsys; then # This is just like msvisualcpp but w/o cygpath translation. # Just convert the backslash-escaped backslashes to single forward # slashes to satisfy depend.m4 cygpath_u='sed s,\\\\,/,g' depmode=msvisualcpp fi if test "$depmode" = msvc7msys; then # This is just like msvc7 but w/o cygpath translation. # Just convert the backslash-escaped backslashes to single forward # slashes to satisfy depend.m4 cygpath_u='sed s,\\\\,/,g' depmode=msvc7 fi if test "$depmode" = xlc; then # IBM C/C++ Compilers xlc/xlC can output gcc-like dependency information. gccflag=-qmakedep=gcc,-MF depmode=gcc fi case "$depmode" in gcc3) ## gcc 3 implements dependency tracking that does exactly what ## we want. Yay! Note: for some reason libtool 1.4 doesn't like ## it if -MD -MP comes after the -MF stuff. Hmm. ## Unfortunately, FreeBSD c89 acceptance of flags depends upon ## the command line argument order; so add the flags where they ## appear in depend2.am. Note that the slowdown incurred here ## affects only configure: in makefiles, %FASTDEP% shortcuts this. for arg do case $arg in -c) set fnord "$@" -MT "$object" -MD -MP -MF "$tmpdepfile" "$arg" ;; *) set fnord "$@" "$arg" ;; esac shift # fnord shift # $arg done "$@" stat=$? if test $stat -ne 0; then rm -f "$tmpdepfile" exit $stat fi mv "$tmpdepfile" "$depfile" ;; gcc) ## Note that this doesn't just cater to obsolete pre-3.x GCC compilers, ## but also to in-use compilers like IBM xlc/xlC and the HP C compiler.
## (see the conditional assignment to $gccflag above). ## There are various ways to get dependency output from gcc. Here's ## why we pick this rather obscure method: ## - Don't want to use -MD because we'd like the dependencies to end ## up in a subdir. Having to rename by hand is ugly. ## (We might end up doing this anyway to support other compilers.) ## - The DEPENDENCIES_OUTPUT environment variable makes gcc act like ## -MM, not -M (despite what the docs say). Also, it might not be ## supported by the other compilers which use the 'gcc' depmode. ## - Using -M directly means running the compiler twice (even worse ## than renaming). if test -z "$gccflag"; then gccflag=-MD, fi "$@" -Wp,"$gccflag$tmpdepfile" stat=$? if test $stat -ne 0; then rm -f "$tmpdepfile" exit $stat fi rm -f "$depfile" echo "$object : \\" > "$depfile" # The second -e expression handles DOS-style file names with drive # letters. sed -e 's/^[^:]*: / /' \ -e 's/^['$alpha']:\/[^:]*: / /' < "$tmpdepfile" >> "$depfile" ## This next piece of magic avoids the "deleted header file" problem. ## The problem is that when a header file which appears in a .P file ## is deleted, the dependency causes make to die (because there is ## typically no way to rebuild the header). We avoid this by adding ## dummy dependencies for each header file. Too bad gcc doesn't do ## this for us directly. ## Some versions of gcc put a space before the ':'. On the theory ## that the space means something, we add a space to the output as ## well. hp depmode also adds that space, but also prefixes the VPATH ## to the object. Take care to not repeat it in the output. ## Some versions of the HPUX 10.20 sed can't process this invocation ## correctly. Breaking it into two sed invocations is a workaround. tr ' ' "$nl" < "$tmpdepfile" \ | sed -e 's/^\\$//' -e '/^$/d' -e "s|.*$object$||" -e '/:$/d' \ | sed -e 's/$/ :/' >> "$depfile" rm -f "$tmpdepfile" ;; hp) # This case exists only to let depend.m4 do its work. 
It works by # looking at the text of this script. This case will never be run, # since it is checked for above. exit 1 ;; sgi) if test "$libtool" = yes; then "$@" "-Wp,-MDupdate,$tmpdepfile" else "$@" -MDupdate "$tmpdepfile" fi stat=$? if test $stat -ne 0; then rm -f "$tmpdepfile" exit $stat fi rm -f "$depfile" if test -f "$tmpdepfile"; then # yes, the source file depends on other files echo "$object : \\" > "$depfile" # Clip off the initial element (the dependent). Don't try to be # clever and replace this with sed code, as IRIX sed won't handle # lines with more than a fixed number of characters (4096 in # IRIX 6.2 sed, 8192 in IRIX 6.5). We also remove comment lines; # the IRIX cc adds comments like '#:fec' to the end of the # dependency line. tr ' ' "$nl" < "$tmpdepfile" \ | sed -e 's/^.*\.o://' -e 's/#.*$//' -e '/^$/ d' \ | tr "$nl" ' ' >> "$depfile" echo >> "$depfile" # The second pass generates a dummy entry for each header file. tr ' ' "$nl" < "$tmpdepfile" \ | sed -e 's/^.*\.o://' -e 's/#.*$//' -e '/^$/ d' -e 's/$/:/' \ >> "$depfile" else make_dummy_depfile fi rm -f "$tmpdepfile" ;; xlc) # This case exists only to let depend.m4 do its work. It works by # looking at the text of this script. This case will never be run, # since it is checked for above. exit 1 ;; aix) # The C for AIX Compiler uses -M and outputs the dependencies # in a .u file. In older versions, this file always lives in the # current directory. Also, the AIX compiler puts '$object:' at the # start of each line; $object doesn't have directory information. # Version 6 uses the directory in both cases. set_dir_from "$object" set_base_from "$object" if test "$libtool" = yes; then tmpdepfile1=$dir$base.u tmpdepfile2=$base.u tmpdepfile3=$dir.libs/$base.u "$@" -Wc,-M else tmpdepfile1=$dir$base.u tmpdepfile2=$dir$base.u tmpdepfile3=$dir$base.u "$@" -M fi stat=$?
if test $stat -ne 0; then rm -f "$tmpdepfile1" "$tmpdepfile2" "$tmpdepfile3" exit $stat fi for tmpdepfile in "$tmpdepfile1" "$tmpdepfile2" "$tmpdepfile3" do test -f "$tmpdepfile" && break done aix_post_process_depfile ;; tcc) # tcc (Tiny C Compiler) understands '-MD -MF file' since version 0.9.26 # FIXME: That version still under development at the moment of writing. # Make sure that this statement remains true also for stable, released # versions. # It will wrap lines (doesn't matter whether long or short) with a # trailing '\', as in: # # foo.o : \ # foo.c \ # foo.h \ # # It will put a trailing '\' even on the last line, and will use leading # spaces rather than leading tabs (at least since its commit 0394caf7 # "Emit spaces for -MD"). "$@" -MD -MF "$tmpdepfile" stat=$? if test $stat -ne 0; then rm -f "$tmpdepfile" exit $stat fi rm -f "$depfile" # Each non-empty line is of the form 'foo.o : \' or ' dep.h \'. # We have to change lines of the first kind to '$object: \'. sed -e "s|.*:|$object :|" < "$tmpdepfile" > "$depfile" # And for each line of the second kind, we have to emit a 'dep.h:' # dummy dependency, to avoid the deleted-header problem. sed -n -e 's|^ *\(.*\) *\\$|\1:|p' < "$tmpdepfile" >> "$depfile" rm -f "$tmpdepfile" ;; ## The order of this option in the case statement is important, since the ## shell code in configure will try each of these formats in the order ## listed in this file. A plain '-MD' option would be understood by many ## compilers, so we must ensure this comes after the gcc and icc options. pgcc) # Portland's C compiler understands '-MD'. # Will always output deps to 'file.d' where file is the root name of the # source file under compilation, even if file resides in a subdirectory. # The object file name does not affect the name of the '.d' file. # pgcc 10.2 will output # foo.o: sub/foo.c sub/foo.h # and will wrap long lines using '\' : # foo.o: sub/foo.c ... \ # sub/foo.h ... \ # ...
set_dir_from "$object" # Use the source, not the object, to determine the base name, since # that's sadly what pgcc will do too. set_base_from "$source" tmpdepfile=$base.d # For projects that build the same source file twice into different object # files, the pgcc approach of using the *source* file root name can cause # problems in parallel builds. Use a locking strategy to avoid stomping on # the same $tmpdepfile. lockdir=$base.d-lock trap " echo '$0: caught signal, cleaning up...' >&2 rmdir '$lockdir' exit 1 " 1 2 13 15 numtries=100 i=$numtries while test $i -gt 0; do # mkdir is a portable test-and-set. if mkdir "$lockdir" 2>/dev/null; then # This process acquired the lock. "$@" -MD stat=$? # Release the lock. rmdir "$lockdir" break else # If the lock is being held by a different process, wait # until the winning process is done or we timeout. while test -d "$lockdir" && test $i -gt 0; do sleep 1 i=`expr $i - 1` done fi i=`expr $i - 1` done trap - 1 2 13 15 if test $i -le 0; then echo "$0: failed to acquire lock after $numtries attempts" >&2 echo "$0: check lockdir '$lockdir'" >&2 exit 1 fi if test $stat -ne 0; then rm -f "$tmpdepfile" exit $stat fi rm -f "$depfile" # Each line is of the form `foo.o: dependent.h', # or `foo.o: dep1.h dep2.h \', or ` dep3.h dep4.h \'. # Do two passes, one to just change these to # `$object: dependent.h' and one to simply `dependent.h:'. sed "s,^[^:]*:,$object :," < "$tmpdepfile" > "$depfile" # Some versions of the HPUX 10.20 sed can't process this invocation # correctly. Breaking it into two sed invocations is a workaround. sed 's,^[^:]*: \(.*\)$,\1,;s/^\\$//;/^$/d;/:$/d' < "$tmpdepfile" \ | sed -e 's/$/ :/' >> "$depfile" rm -f "$tmpdepfile" ;; hp2) # The "hp" stanza above does not work with aCC (C++) and HP's ia64 # compilers, which have integrated preprocessors. The correct option # to use with these is +Maked; it writes dependencies to a file named # 'foo.d', which lands next to the object file, wherever that # happens to be. 
# Much of this is similar to the tru64 case; see comments there. set_dir_from "$object" set_base_from "$object" if test "$libtool" = yes; then tmpdepfile1=$dir$base.d tmpdepfile2=$dir.libs/$base.d "$@" -Wc,+Maked else tmpdepfile1=$dir$base.d tmpdepfile2=$dir$base.d "$@" +Maked fi stat=$? if test $stat -ne 0; then rm -f "$tmpdepfile1" "$tmpdepfile2" exit $stat fi for tmpdepfile in "$tmpdepfile1" "$tmpdepfile2" do test -f "$tmpdepfile" && break done if test -f "$tmpdepfile"; then sed -e "s,^.*\.[$lower]*:,$object:," "$tmpdepfile" > "$depfile" # Add 'dependent.h:' lines. sed -ne '2,${ s/^ *// s/ \\*$// s/$/:/ p }' "$tmpdepfile" >> "$depfile" else make_dummy_depfile fi rm -f "$tmpdepfile" "$tmpdepfile2" ;; tru64) # The Tru64 compiler uses -MD to generate dependencies as a side # effect. 'cc -MD -o foo.o ...' puts the dependencies into 'foo.o.d'. # At least on Alpha/Redhat 6.1, Compaq CCC V6.2-504 seems to put # dependencies in 'foo.d' instead, so we check for that too. # Subdirectories are respected. set_dir_from "$object" set_base_from "$object" if test "$libtool" = yes; then # Libtool generates 2 separate objects for the 2 libraries. These # two compilations output dependencies in $dir.libs/$base.o.d and # in $dir$base.o.d. We have to check for both files, because # one of the two compilations can be disabled. We should prefer # $dir$base.o.d over $dir.libs/$base.o.d because the latter is # automatically cleaned when .libs/ is deleted, while ignoring # the former would cause a distcleancheck panic. tmpdepfile1=$dir$base.o.d # libtool 1.5 tmpdepfile2=$dir.libs/$base.o.d # Likewise. tmpdepfile3=$dir.libs/$base.d # Compaq CCC V6.2-504 "$@" -Wc,-MD else tmpdepfile1=$dir$base.d tmpdepfile2=$dir$base.d tmpdepfile3=$dir$base.d "$@" -MD fi stat=$? 
if test $stat -ne 0; then rm -f "$tmpdepfile1" "$tmpdepfile2" "$tmpdepfile3" exit $stat fi for tmpdepfile in "$tmpdepfile1" "$tmpdepfile2" "$tmpdepfile3" do test -f "$tmpdepfile" && break done # Same post-processing that is required for AIX mode. aix_post_process_depfile ;; msvc7) if test "$libtool" = yes; then showIncludes=-Wc,-showIncludes else showIncludes=-showIncludes fi "$@" $showIncludes > "$tmpdepfile" stat=$? grep -v '^Note: including file: ' "$tmpdepfile" if test $stat -ne 0; then rm -f "$tmpdepfile" exit $stat fi rm -f "$depfile" echo "$object : \\" > "$depfile" # The first sed program below extracts the file names and escapes # backslashes for cygpath. The second sed program outputs the file # name when reading, but also accumulates all include files in the # hold buffer in order to output them again at the end. This only # works with sed implementations that can handle large buffers. sed < "$tmpdepfile" -n ' /^Note: including file: *\(.*\)/ { s//\1/ s/\\/\\\\/g p }' | $cygpath_u | sort -u | sed -n ' s/ /\\ /g s/\(.*\)/'"$tab"'\1 \\/p s/.\(.*\) \\/\1:/ H $ { s/.*/'"$tab"'/ G p }' >> "$depfile" echo >> "$depfile" # make sure the fragment doesn't end with a backslash rm -f "$tmpdepfile" ;; msvc7msys) # This case exists only to let depend.m4 do its work. It works by # looking at the text of this script. This case will never be run, # since it is checked for above. exit 1 ;; #nosideeffect) # This comment above is used by automake to tell side-effect # dependency tracking mechanisms from slower ones. dashmstdout) # Important note: in order to support this mode, a compiler *must* # always write the preprocessed file to stdout, regardless of -o. "$@" || exit $? # Remove the call to Libtool. if test "$libtool" = yes; then while test "X$1" != 'X--mode=compile'; do shift done shift fi # Remove '-o $object'. 
IFS=" " for arg do case $arg in -o) shift ;; $object) shift ;; *) set fnord "$@" "$arg" shift # fnord shift # $arg ;; esac done test -z "$dashmflag" && dashmflag=-M # Require at least two characters before searching for ':' # in the target name. This is to cope with DOS-style filenames: # a dependency such as 'c:/foo/bar' could be seen as target 'c' otherwise. "$@" $dashmflag | sed "s|^[$tab ]*[^:$tab ][^:][^:]*:[$tab ]*|$object: |" > "$tmpdepfile" rm -f "$depfile" cat < "$tmpdepfile" > "$depfile" # Some versions of the HPUX 10.20 sed can't process this sed invocation # correctly. Breaking it into two sed invocations is a workaround. tr ' ' "$nl" < "$tmpdepfile" \ | sed -e 's/^\\$//' -e '/^$/d' -e '/:$/d' \ | sed -e 's/$/ :/' >> "$depfile" rm -f "$tmpdepfile" ;; dashXmstdout) # This case only exists to satisfy depend.m4. It is never actually # run, as this mode is specially recognized in the preamble. exit 1 ;; makedepend) "$@" || exit $? # Remove any Libtool call if test "$libtool" = yes; then while test "X$1" != 'X--mode=compile'; do shift done shift fi # X makedepend shift cleared=no eat=no for arg do case $cleared in no) set ""; shift cleared=yes ;; esac if test $eat = yes; then eat=no continue fi case "$arg" in -D*|-I*) set fnord "$@" "$arg"; shift ;; # Strip any option that makedepend may not understand. Remove # the object too, otherwise makedepend will parse it as a source file. -arch) eat=yes ;; -*|$object) ;; *) set fnord "$@" "$arg"; shift ;; esac done obj_suffix=`echo "$object" | sed 's/^.*\././'` touch "$tmpdepfile" ${MAKEDEPEND-makedepend} -o"$obj_suffix" -f"$tmpdepfile" "$@" rm -f "$depfile" # makedepend may prepend the VPATH from the source file name to the object. # No need to regex-escape $object, excess matching of '.' is harmless. sed "s|^.*\($object *:\)|\1|" "$tmpdepfile" > "$depfile" # Some versions of the HPUX 10.20 sed can't process the last invocation # correctly. Breaking it into two sed invocations is a workaround. 
sed '1,2d' "$tmpdepfile" \ | tr ' ' "$nl" \ | sed -e 's/^\\$//' -e '/^$/d' -e '/:$/d' \ | sed -e 's/$/ :/' >> "$depfile" rm -f "$tmpdepfile" "$tmpdepfile".bak ;; cpp) # Important note: in order to support this mode, a compiler *must* # always write the preprocessed file to stdout. "$@" || exit $? # Remove the call to Libtool. if test "$libtool" = yes; then while test "X$1" != 'X--mode=compile'; do shift done shift fi # Remove '-o $object'. IFS=" " for arg do case $arg in -o) shift ;; $object) shift ;; *) set fnord "$@" "$arg" shift # fnord shift # $arg ;; esac done "$@" -E \ | sed -n -e '/^# [0-9][0-9]* "\([^"]*\)".*/ s:: \1 \\:p' \ -e '/^#line [0-9][0-9]* "\([^"]*\)".*/ s:: \1 \\:p' \ | sed '$ s: \\$::' > "$tmpdepfile" rm -f "$depfile" echo "$object : \\" > "$depfile" cat < "$tmpdepfile" >> "$depfile" sed < "$tmpdepfile" '/^$/d;s/^ //;s/ \\$//;s/$/ :/' >> "$depfile" rm -f "$tmpdepfile" ;; msvisualcpp) # Important note: in order to support this mode, a compiler *must* # always write the preprocessed file to stdout. "$@" || exit $? # Remove the call to Libtool. if test "$libtool" = yes; then while test "X$1" != 'X--mode=compile'; do shift done shift fi IFS=" " for arg do case "$arg" in -o) shift ;; $object) shift ;; "-Gm"|"/Gm"|"-Gi"|"/Gi"|"-ZI"|"/ZI") set fnord "$@" shift shift ;; *) set fnord "$@" "$arg" shift shift ;; esac done "$@" -E 2>/dev/null | sed -n '/^#line [0-9][0-9]* "\([^"]*\)"/ s::\1:p' | $cygpath_u | sort -u > "$tmpdepfile" rm -f "$depfile" echo "$object : \\" > "$depfile" sed < "$tmpdepfile" -n -e 's% %\\ %g' -e '/^\(.*\)$/ s::'"$tab"'\1 \\:p' >> "$depfile" echo "$tab" >> "$depfile" sed < "$tmpdepfile" -n -e 's% %\\ %g' -e '/^\(.*\)$/ s::\1\::p' >> "$depfile" rm -f "$tmpdepfile" ;; msvcmsys) # This case exists only to let depend.m4 do its work. It works by # looking at the text of this script. This case will never be run, # since it is checked for above. 
exit 1 ;; none) exec "$@" ;; *) echo "Unknown depmode $depmode" 1>&2 exit 1 ;; esac exit 0 # Local Variables: # mode: shell-script # sh-indentation: 2 # eval: (add-hook 'write-file-hooks 'time-stamp) # time-stamp-start: "scriptversion=" # time-stamp-format: "%:y-%02m-%02d.%02H" # time-stamp-time-zone: "UTC" # time-stamp-end: "; # UTC" # End: slurm-slurm-15-08-7-1/auxdir/install-sh000077500000000000000000000332551265000126300176350ustar00rootroot00000000000000#!/bin/sh # install - install a program, script, or datafile scriptversion=2011-11-20.07; # UTC # This originates from X11R5 (mit/util/scripts/install.sh), which was # later released in X11R6 (xc/config/util/install.sh) with the # following copyright and license. # # Copyright (C) 1994 X Consortium # # Permission is hereby granted, free of charge, to any person obtaining a copy # of this software and associated documentation files (the "Software"), to # deal in the Software without restriction, including without limitation the # rights to use, copy, modify, merge, publish, distribute, sublicense, and/or # sell copies of the Software, and to permit persons to whom the Software is # furnished to do so, subject to the following conditions: # # The above copyright notice and this permission notice shall be included in # all copies or substantial portions of the Software. # # THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR # IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, # FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE # X CONSORTIUM BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN # AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNEC- # TION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE. 
# # Except as contained in this notice, the name of the X Consortium shall not # be used in advertising or otherwise to promote the sale, use or other deal- # ings in this Software without prior written authorization from the X Consor- # tium. # # # FSF changes to this file are in the public domain. # # Calling this script install-sh is preferred over install.sh, to prevent # 'make' implicit rules from creating a file called install from it # when there is no Makefile. # # This script is compatible with the BSD install script, but was written # from scratch. nl=' ' IFS=" "" $nl" # set DOITPROG to echo to test this script # Don't use :- since 4.3BSD and earlier shells don't like it. doit=${DOITPROG-} if test -z "$doit"; then doit_exec=exec else doit_exec=$doit fi # Put in absolute file names if you don't have them in your path; # or use environment vars. chgrpprog=${CHGRPPROG-chgrp} chmodprog=${CHMODPROG-chmod} chownprog=${CHOWNPROG-chown} cmpprog=${CMPPROG-cmp} cpprog=${CPPROG-cp} mkdirprog=${MKDIRPROG-mkdir} mvprog=${MVPROG-mv} rmprog=${RMPROG-rm} stripprog=${STRIPPROG-strip} posix_glob='?' initialize_posix_glob=' test "$posix_glob" != "?" || { if (set -f) 2>/dev/null; then posix_glob= else posix_glob=: fi } ' posix_mkdir= # Desired mode of installed file. mode=0755 chgrpcmd= chmodcmd=$chmodprog chowncmd= mvcmd=$mvprog rmcmd="$rmprog -f" stripcmd= src= dst= dir_arg= dst_arg= copy_on_change=false no_target_directory= usage="\ Usage: $0 [OPTION]... [-T] SRCFILE DSTFILE or: $0 [OPTION]... SRCFILES... DIRECTORY or: $0 [OPTION]... -t DIRECTORY SRCFILES... or: $0 [OPTION]... -d DIRECTORIES... In the 1st form, copy SRCFILE to DSTFILE. In the 2nd and 3rd, copy all SRCFILES to DIRECTORY. In the 4th, create DIRECTORIES. Options: --help display this help and exit. --version display version info and exit. -c (ignored) -C install only if different (preserve the last data modification time) -d create directories instead of installing files. 
-g GROUP $chgrpprog installed files to GROUP. -m MODE $chmodprog installed files to MODE. -o USER $chownprog installed files to USER. -s $stripprog installed files. -t DIRECTORY install into DIRECTORY. -T report an error if DSTFILE is a directory. Environment variables override the default commands: CHGRPPROG CHMODPROG CHOWNPROG CMPPROG CPPROG MKDIRPROG MVPROG RMPROG STRIPPROG " while test $# -ne 0; do case $1 in -c) ;; -C) copy_on_change=true;; -d) dir_arg=true;; -g) chgrpcmd="$chgrpprog $2" shift;; --help) echo "$usage"; exit $?;; -m) mode=$2 case $mode in *' '* | *' '* | *' '* | *'*'* | *'?'* | *'['*) echo "$0: invalid mode: $mode" >&2 exit 1;; esac shift;; -o) chowncmd="$chownprog $2" shift;; -s) stripcmd=$stripprog;; -t) dst_arg=$2 # Protect names problematic for 'test' and other utilities. case $dst_arg in -* | [=\(\)!]) dst_arg=./$dst_arg;; esac shift;; -T) no_target_directory=true;; --version) echo "$0 $scriptversion"; exit $?;; --) shift break;; -*) echo "$0: invalid option: $1" >&2 exit 1;; *) break;; esac shift done if test $# -ne 0 && test -z "$dir_arg$dst_arg"; then # When -d is used, all remaining arguments are directories to create. # When -t is used, the destination is already specified. # Otherwise, the last argument is the destination. Remove it from $@. for arg do if test -n "$dst_arg"; then # $@ is not empty: it contains at least $arg. set fnord "$@" "$dst_arg" shift # fnord fi shift # arg dst_arg=$arg # Protect names problematic for 'test' and other utilities. case $dst_arg in -* | [=\(\)!]) dst_arg=./$dst_arg;; esac done fi if test $# -eq 0; then if test -z "$dir_arg"; then echo "$0: no input file specified." >&2 exit 1 fi # It's OK to call 'install-sh -d' without argument. # This can happen when creating conditional directories. 
exit 0 fi if test -z "$dir_arg"; then do_exit='(exit $ret); exit $ret' trap "ret=129; $do_exit" 1 trap "ret=130; $do_exit" 2 trap "ret=141; $do_exit" 13 trap "ret=143; $do_exit" 15 # Set umask so as not to create temps with too-generous modes. # However, 'strip' requires both read and write access to temps. case $mode in # Optimize common cases. *644) cp_umask=133;; *755) cp_umask=22;; *[0-7]) if test -z "$stripcmd"; then u_plus_rw= else u_plus_rw='% 200' fi cp_umask=`expr '(' 777 - $mode % 1000 ')' $u_plus_rw`;; *) if test -z "$stripcmd"; then u_plus_rw= else u_plus_rw=,u+rw fi cp_umask=$mode$u_plus_rw;; esac fi for src do # Protect names problematic for 'test' and other utilities. case $src in -* | [=\(\)!]) src=./$src;; esac if test -n "$dir_arg"; then dst=$src dstdir=$dst test -d "$dstdir" dstdir_status=$? else # Waiting for this to be detected by the "$cpprog $src $dsttmp" command # might cause directories to be created, which would be especially bad # if $src (and thus $dsttmp) contains '*'. if test ! -f "$src" && test ! -d "$src"; then echo "$0: $src does not exist." >&2 exit 1 fi if test -z "$dst_arg"; then echo "$0: no destination specified." >&2 exit 1 fi dst=$dst_arg # If destination is a directory, append the input filename; won't work # if double slashes aren't ignored. if test -d "$dst"; then if test -n "$no_target_directory"; then echo "$0: $dst_arg: Is a directory" >&2 exit 1 fi dstdir=$dst dst=$dstdir/`basename "$src"` dstdir_status=0 else # Prefer dirname, but fall back on a substitute if dirname fails. dstdir=` (dirname "$dst") 2>/dev/null || expr X"$dst" : 'X\(.*[^/]\)//*[^/][^/]*/*$' \| \ X"$dst" : 'X\(//\)[^/]' \| \ X"$dst" : 'X\(//\)$' \| \ X"$dst" : 'X\(/\)' \| . 2>/dev/null || echo X"$dst" | sed '/^X\(.*[^/]\)\/\/*[^/][^/]*\/*$/{ s//\1/ q } /^X\(\/\/\)[^/].*/{ s//\1/ q } /^X\(\/\/\)$/{ s//\1/ q } /^X\(\/\).*/{ s//\1/ q } s/.*/./; q' ` test -d "$dstdir" dstdir_status=$? 
fi fi obsolete_mkdir_used=false if test $dstdir_status != 0; then case $posix_mkdir in '') # Create intermediate dirs using mode 755 as modified by the umask. # This is like FreeBSD 'install' as of 1997-10-28. umask=`umask` case $stripcmd.$umask in # Optimize common cases. *[2367][2367]) mkdir_umask=$umask;; .*0[02][02] | .[02][02] | .[02]) mkdir_umask=22;; *[0-7]) mkdir_umask=`expr $umask + 22 \ - $umask % 100 % 40 + $umask % 20 \ - $umask % 10 % 4 + $umask % 2 `;; *) mkdir_umask=$umask,go-w;; esac # With -d, create the new directory with the user-specified mode. # Otherwise, rely on $mkdir_umask. if test -n "$dir_arg"; then mkdir_mode=-m$mode else mkdir_mode= fi posix_mkdir=false case $umask in *[123567][0-7][0-7]) # POSIX mkdir -p sets u+wx bits regardless of umask, which # is incompatible with FreeBSD 'install' when (umask & 300) != 0. ;; *) tmpdir=${TMPDIR-/tmp}/ins$RANDOM-$$ trap 'ret=$?; rmdir "$tmpdir/d" "$tmpdir" 2>/dev/null; exit $ret' 0 if (umask $mkdir_umask && exec $mkdirprog $mkdir_mode -p -- "$tmpdir/d") >/dev/null 2>&1 then if test -z "$dir_arg" || { # Check for POSIX incompatibilities with -m. # HP-UX 11.23 and IRIX 6.5 mkdir -m -p sets group- or # other-writable bit of parent directory when it shouldn't. # FreeBSD 6.1 mkdir -m -p sets mode of existing directory. ls_ld_tmpdir=`ls -ld "$tmpdir"` case $ls_ld_tmpdir in d????-?r-*) different_mode=700;; d????-?--*) different_mode=755;; *) false;; esac && $mkdirprog -m$different_mode -p -- "$tmpdir" && { ls_ld_tmpdir_1=`ls -ld "$tmpdir"` test "$ls_ld_tmpdir" = "$ls_ld_tmpdir_1" } } then posix_mkdir=: fi rmdir "$tmpdir/d" "$tmpdir" else # Remove any dirs left behind by ancient mkdir implementations. rmdir ./$mkdir_mode ./-p ./-- 2>/dev/null fi trap '' 0;; esac;; esac if $posix_mkdir && ( umask $mkdir_umask && $doit_exec $mkdirprog $mkdir_mode -p -- "$dstdir" ) then : else # The umask is ridiculous, or mkdir does not conform to POSIX, # or it failed possibly due to a race condition. 
Create the # directory the slow way, step by step, checking for races as we go. case $dstdir in /*) prefix='/';; [-=\(\)!]*) prefix='./';; *) prefix='';; esac eval "$initialize_posix_glob" oIFS=$IFS IFS=/ $posix_glob set -f set fnord $dstdir shift $posix_glob set +f IFS=$oIFS prefixes= for d do test X"$d" = X && continue prefix=$prefix$d if test -d "$prefix"; then prefixes= else if $posix_mkdir; then (umask=$mkdir_umask && $doit_exec $mkdirprog $mkdir_mode -p -- "$dstdir") && break # Don't fail if two instances are running concurrently. test -d "$prefix" || exit 1 else case $prefix in *\'*) qprefix=`echo "$prefix" | sed "s/'/'\\\\\\\\''/g"`;; *) qprefix=$prefix;; esac prefixes="$prefixes '$qprefix'" fi fi prefix=$prefix/ done if test -n "$prefixes"; then # Don't fail if two instances are running concurrently. (umask $mkdir_umask && eval "\$doit_exec \$mkdirprog $prefixes") || test -d "$dstdir" || exit 1 obsolete_mkdir_used=true fi fi fi if test -n "$dir_arg"; then { test -z "$chowncmd" || $doit $chowncmd "$dst"; } && { test -z "$chgrpcmd" || $doit $chgrpcmd "$dst"; } && { test "$obsolete_mkdir_used$chowncmd$chgrpcmd" = false || test -z "$chmodcmd" || $doit $chmodcmd $mode "$dst"; } || exit 1 else # Make a couple of temp file names in the proper directory. dsttmp=$dstdir/_inst.$$_ rmtmp=$dstdir/_rm.$$_ # Trap to clean up those temp files at exit. trap 'ret=$?; rm -f "$dsttmp" "$rmtmp" && exit $ret' 0 # Copy the file name to the temp name. (umask $cp_umask && $doit_exec $cpprog "$src" "$dsttmp") && # and set any options; do chmod last to preserve setuid bits. # # If any of these fail, we abort the whole thing. If we want to # ignore errors from any of these, just make sure not to ignore # errors from the above "$doit $cpprog $src $dsttmp" command. 
# { test -z "$chowncmd" || $doit $chowncmd "$dsttmp"; } && { test -z "$chgrpcmd" || $doit $chgrpcmd "$dsttmp"; } && { test -z "$stripcmd" || $doit $stripcmd "$dsttmp"; } && { test -z "$chmodcmd" || $doit $chmodcmd $mode "$dsttmp"; } && # If -C, don't bother to copy if it wouldn't change the file. if $copy_on_change && old=`LC_ALL=C ls -dlL "$dst" 2>/dev/null` && new=`LC_ALL=C ls -dlL "$dsttmp" 2>/dev/null` && eval "$initialize_posix_glob" && $posix_glob set -f && set X $old && old=:$2:$4:$5:$6 && set X $new && new=:$2:$4:$5:$6 && $posix_glob set +f && test "$old" = "$new" && $cmpprog "$dst" "$dsttmp" >/dev/null 2>&1 then rm -f "$dsttmp" else # Rename the file to the real destination. $doit $mvcmd -f "$dsttmp" "$dst" 2>/dev/null || # The rename failed, perhaps because mv can't rename something else # to itself, or perhaps because mv is so ancient that it does not # support -f. { # Now remove or move aside any old file at destination location. # We try this two ways since rm can't unlink itself on some # systems and the destination file might be busy for other # reasons. In this case, the final cleanup might fail but the new # file should still install successfully. { test ! -f "$dst" || $doit $rmcmd -f "$dst" 2>/dev/null || { $doit $mvcmd -f "$dst" "$rmtmp" 2>/dev/null && { $doit $rmcmd -f "$rmtmp" 2>/dev/null; :; } } || { echo "$0: cannot unlink or rename $dst" >&2 (exit 1); exit 1 } } && # Now rename the file to the real destination. $doit $mvcmd "$dsttmp" "$dst" } fi || exit 1 trap '' 0 fi done # Local variables: # eval: (add-hook 'write-file-hooks 'time-stamp) # time-stamp-start: "scriptversion=" # time-stamp-format: "%:y-%02m-%02d.%02H" # time-stamp-time-zone: "UTC" # time-stamp-end: "; # UTC" # End: slurm-slurm-15-08-7-1/auxdir/libtool.m4 # libtool.m4 - Configure libtool for the host system.
-*-Autoconf-*- # # Copyright (C) 1996, 1997, 1998, 1999, 2000, 2001, 2003, 2004, 2005, # 2006, 2007, 2008, 2009, 2010, 2011 Free Software # Foundation, Inc. # Written by Gordon Matzigkeit, 1996 # # This file is free software; the Free Software Foundation gives # unlimited permission to copy and/or distribute it, with or without # modifications, as long as this notice is preserved. m4_define([_LT_COPYING], [dnl # Copyright (C) 1996, 1997, 1998, 1999, 2000, 2001, 2003, 2004, 2005, # 2006, 2007, 2008, 2009, 2010, 2011 Free Software # Foundation, Inc. # Written by Gordon Matzigkeit, 1996 # # This file is part of GNU Libtool. # # GNU Libtool is free software; you can redistribute it and/or # modify it under the terms of the GNU General Public License as # published by the Free Software Foundation; either version 2 of # the License, or (at your option) any later version. # # As a special exception to the GNU General Public License, # if you distribute this file as part of a program or library that # is built using GNU Libtool, you may include this file under the # same distribution terms that you use for the rest of that program. # # GNU Libtool is distributed in the hope that it will be useful, # but WITHOUT ANY WARRANTY; without even the implied warranty of # MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the # GNU General Public License for more details. # # You should have received a copy of the GNU General Public License # along with GNU Libtool; see the file COPYING. If not, a copy # can be downloaded from http://www.gnu.org/licenses/gpl.html, or # obtained by writing to the Free Software Foundation, Inc., # 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301, USA. ]) # serial 57 LT_INIT # LT_PREREQ(VERSION) # ------------------ # Complain and exit if this libtool version is less that VERSION. 
m4_defun([LT_PREREQ], [m4_if(m4_version_compare(m4_defn([LT_PACKAGE_VERSION]), [$1]), -1, [m4_default([$3], [m4_fatal([Libtool version $1 or higher is required], 63)])], [$2])]) # _LT_CHECK_BUILDDIR # ------------------ # Complain if the absolute build directory name contains unusual characters m4_defun([_LT_CHECK_BUILDDIR], [case `pwd` in *\ * | *\ *) AC_MSG_WARN([Libtool does not cope well with whitespace in `pwd`]) ;; esac ]) # LT_INIT([OPTIONS]) # ------------------ AC_DEFUN([LT_INIT], [AC_PREREQ([2.58])dnl We use AC_INCLUDES_DEFAULT AC_REQUIRE([AC_CONFIG_AUX_DIR_DEFAULT])dnl AC_BEFORE([$0], [LT_LANG])dnl AC_BEFORE([$0], [LT_OUTPUT])dnl AC_BEFORE([$0], [LTDL_INIT])dnl m4_require([_LT_CHECK_BUILDDIR])dnl dnl Autoconf doesn't catch unexpanded LT_ macros by default: m4_pattern_forbid([^_?LT_[A-Z_]+$])dnl m4_pattern_allow([^(_LT_EOF|LT_DLGLOBAL|LT_DLLAZY_OR_NOW|LT_MULTI_MODULE)$])dnl dnl aclocal doesn't pull ltoptions.m4, ltsugar.m4, or ltversion.m4 dnl unless we require an AC_DEFUNed macro: AC_REQUIRE([LTOPTIONS_VERSION])dnl AC_REQUIRE([LTSUGAR_VERSION])dnl AC_REQUIRE([LTVERSION_VERSION])dnl AC_REQUIRE([LTOBSOLETE_VERSION])dnl m4_require([_LT_PROG_LTMAIN])dnl _LT_SHELL_INIT([SHELL=${CONFIG_SHELL-/bin/sh}]) dnl Parse OPTIONS _LT_SET_OPTIONS([$0], [$1]) # This can be used to rebuild libtool when needed LIBTOOL_DEPS="$ltmain" # Always use our own libtool. LIBTOOL='$(SHELL) $(top_builddir)/libtool' AC_SUBST(LIBTOOL)dnl _LT_SETUP # Only expand once: m4_define([LT_INIT]) ])# LT_INIT # Old names: AU_ALIAS([AC_PROG_LIBTOOL], [LT_INIT]) AU_ALIAS([AM_PROG_LIBTOOL], [LT_INIT]) dnl aclocal-1.4 backwards compatibility: dnl AC_DEFUN([AC_PROG_LIBTOOL], []) dnl AC_DEFUN([AM_PROG_LIBTOOL], []) # _LT_CC_BASENAME(CC) # ------------------- # Calculate cc_basename. Skip known compiler wrappers and cross-prefix. 
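The `_LT_CC_BASENAME` computation that follows reduces a compiler command to its bare name with two sed substitutions: drop any leading directory, then drop a cross-compile prefix matching `$host_alias-`. A quick standalone check (the values of `host_alias` and `cc_temp` are illustrative):

```shell
#!/bin/sh
# Reproduce the cc_basename sed pipeline outside of libtool.m4:
# first substitution strips everything up to the last '/', the
# second strips a cross prefix such as "x86_64-linux-gnu-".
host_alias=x86_64-linux-gnu
cc_temp=/usr/bin/x86_64-linux-gnu-gcc
cc_basename=`echo "$cc_temp" | sed "s%.*/%%; s%^$host_alias-%%"`
echo "$cc_basename"    # -> gcc
```

Using `%` as the sed delimiter avoids escaping the slashes in the path.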
m4_defun([_LT_CC_BASENAME], [for cc_temp in $1""; do case $cc_temp in compile | *[[\\/]]compile | ccache | *[[\\/]]ccache ) ;; distcc | *[[\\/]]distcc | purify | *[[\\/]]purify ) ;; \-*) ;; *) break;; esac done cc_basename=`$ECHO "$cc_temp" | $SED "s%.*/%%; s%^$host_alias-%%"` ]) # _LT_FILEUTILS_DEFAULTS # ---------------------- # It is okay to use these file commands and assume they have been set # sensibly after `m4_require([_LT_FILEUTILS_DEFAULTS])'. m4_defun([_LT_FILEUTILS_DEFAULTS], [: ${CP="cp -f"} : ${MV="mv -f"} : ${RM="rm -f"} ])# _LT_FILEUTILS_DEFAULTS # _LT_SETUP # --------- m4_defun([_LT_SETUP], [AC_REQUIRE([AC_CANONICAL_HOST])dnl AC_REQUIRE([AC_CANONICAL_BUILD])dnl AC_REQUIRE([_LT_PREPARE_SED_QUOTE_VARS])dnl AC_REQUIRE([_LT_PROG_ECHO_BACKSLASH])dnl _LT_DECL([], [PATH_SEPARATOR], [1], [The PATH separator for the build system])dnl dnl _LT_DECL([], [host_alias], [0], [The host system])dnl _LT_DECL([], [host], [0])dnl _LT_DECL([], [host_os], [0])dnl dnl _LT_DECL([], [build_alias], [0], [The build system])dnl _LT_DECL([], [build], [0])dnl _LT_DECL([], [build_os], [0])dnl dnl AC_REQUIRE([AC_PROG_CC])dnl AC_REQUIRE([LT_PATH_LD])dnl AC_REQUIRE([LT_PATH_NM])dnl dnl AC_REQUIRE([AC_PROG_LN_S])dnl test -z "$LN_S" && LN_S="ln -s" _LT_DECL([], [LN_S], [1], [Whether we need soft or hard links])dnl dnl AC_REQUIRE([LT_CMD_MAX_LEN])dnl _LT_DECL([objext], [ac_objext], [0], [Object file suffix (normally "o")])dnl _LT_DECL([], [exeext], [0], [Executable file suffix (normally "")])dnl dnl m4_require([_LT_FILEUTILS_DEFAULTS])dnl m4_require([_LT_CHECK_SHELL_FEATURES])dnl m4_require([_LT_PATH_CONVERSION_FUNCTIONS])dnl m4_require([_LT_CMD_RELOAD])dnl m4_require([_LT_CHECK_MAGIC_METHOD])dnl m4_require([_LT_CHECK_SHAREDLIB_FROM_LINKLIB])dnl m4_require([_LT_CMD_OLD_ARCHIVE])dnl m4_require([_LT_CMD_GLOBAL_SYMBOLS])dnl m4_require([_LT_WITH_SYSROOT])dnl _LT_CONFIG_LIBTOOL_INIT([ # See if we are running on zsh, and set the options which allow our # commands through without removal of 
\ escapes INIT. if test -n "\${ZSH_VERSION+set}" ; then setopt NO_GLOB_SUBST fi ]) if test -n "${ZSH_VERSION+set}" ; then setopt NO_GLOB_SUBST fi _LT_CHECK_OBJDIR m4_require([_LT_TAG_COMPILER])dnl case $host_os in aix3*) # AIX sometimes has problems with the GCC collect2 program. For some # reason, if we set the COLLECT_NAMES environment variable, the problems # vanish in a puff of smoke. if test "X${COLLECT_NAMES+set}" != Xset; then COLLECT_NAMES= export COLLECT_NAMES fi ;; esac # Global variables: ofile=libtool can_build_shared=yes # All known linkers require a `.a' archive for static linking (except MSVC, # which needs '.lib'). libext=a with_gnu_ld="$lt_cv_prog_gnu_ld" old_CC="$CC" old_CFLAGS="$CFLAGS" # Set sane defaults for various variables test -z "$CC" && CC=cc test -z "$LTCC" && LTCC=$CC test -z "$LTCFLAGS" && LTCFLAGS=$CFLAGS test -z "$LD" && LD=ld test -z "$ac_objext" && ac_objext=o _LT_CC_BASENAME([$compiler]) # Only perform the check for file, if the check method requires it test -z "$MAGIC_CMD" && MAGIC_CMD=file case $deplibs_check_method in file_magic*) if test "$file_magic_cmd" = '$MAGIC_CMD'; then _LT_PATH_MAGIC fi ;; esac # Use C for the default configuration in the libtool script LT_SUPPORTED_TAG([CC]) _LT_LANG_C_CONFIG _LT_LANG_DEFAULT_CONFIG _LT_CONFIG_COMMANDS ])# _LT_SETUP # _LT_PREPARE_SED_QUOTE_VARS # -------------------------- # Define a few sed substitution that help us do robust quoting. m4_defun([_LT_PREPARE_SED_QUOTE_VARS], [# Backslashify metacharacters that are still active within # double-quoted strings. sed_quote_subst='s/\([["`$\\]]\)/\\\1/g' # Same as above, but do not quote variable references. double_quote_subst='s/\([["`\\]]\)/\\\1/g' # Sed substitution to delay expansion of an escaped shell variable in a # double_quote_subst'ed string. delay_variable_subst='s/\\\\\\\\\\\$/\\\\\\$/g' # Sed substitution to delay expansion of an escaped single quote. 
delay_single_quote_subst='s/'\''/'\'\\\\\\\'\''/g' # Sed substitution to avoid accidental globbing in evaled expressions no_glob_subst='s/\*/\\\*/g' ]) # _LT_PROG_LTMAIN # --------------- # Note that this code is called both from `configure', and `config.status' # now that we use AC_CONFIG_COMMANDS to generate libtool. Notably, # `config.status' has no value for ac_aux_dir unless we are using Automake, # so we pass a copy along to make sure it has a sensible value anyway. m4_defun([_LT_PROG_LTMAIN], [m4_ifdef([AC_REQUIRE_AUX_FILE], [AC_REQUIRE_AUX_FILE([ltmain.sh])])dnl _LT_CONFIG_LIBTOOL_INIT([ac_aux_dir='$ac_aux_dir']) ltmain="$ac_aux_dir/ltmain.sh" ])# _LT_PROG_LTMAIN ## ------------------------------------- ## ## Accumulate code for creating libtool. ## ## ------------------------------------- ## # So that we can recreate a full libtool script including additional # tags, we accumulate the chunks of code to send to AC_CONFIG_COMMANDS # in macros and then make a single call at the end using the `libtool' # label. # _LT_CONFIG_LIBTOOL_INIT([INIT-COMMANDS]) # ---------------------------------------- # Register INIT-COMMANDS to be passed to AC_CONFIG_COMMANDS later. m4_define([_LT_CONFIG_LIBTOOL_INIT], [m4_ifval([$1], [m4_append([_LT_OUTPUT_LIBTOOL_INIT], [$1 ])])]) # Initialize. m4_define([_LT_OUTPUT_LIBTOOL_INIT]) # _LT_CONFIG_LIBTOOL([COMMANDS]) # ------------------------------ # Register COMMANDS to be passed to AC_CONFIG_COMMANDS later. m4_define([_LT_CONFIG_LIBTOOL], [m4_ifval([$1], [m4_append([_LT_OUTPUT_LIBTOOL_COMMANDS], [$1 ])])]) # Initialize. 
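The quoting variables defined above are plain sed programs; `sed_quote_subst` backslash-escapes the four characters that stay active inside a double-quoted shell string. A minimal demonstration of that substitution on its own (the sample string is illustrative; the m4 `[[ ]]` quoting is removed):

```shell
#!/bin/sh
# sed_quote_subst as it appears after m4 expansion: escape
# the characters " ` $ \ so the result survives inside "...".
sed_quote_subst='s/\(["`$\\]\)/\\\1/g'
printf '%s\n' 'echo "$HOME" `pwd`' | sed "$sed_quote_subst"
# -> echo \"\$HOME\" \`pwd\`
```

This is why libtool can safely re-emit configure-time values into the generated `libtool` script inside double quotes.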
m4_define([_LT_OUTPUT_LIBTOOL_COMMANDS]) # _LT_CONFIG_SAVE_COMMANDS([COMMANDS], [INIT_COMMANDS]) # ----------------------------------------------------- m4_defun([_LT_CONFIG_SAVE_COMMANDS], [_LT_CONFIG_LIBTOOL([$1]) _LT_CONFIG_LIBTOOL_INIT([$2]) ]) # _LT_FORMAT_COMMENT([COMMENT]) # ----------------------------- # Add leading comment marks to the start of each line, and a trailing # full-stop to the whole comment if one is not present already. m4_define([_LT_FORMAT_COMMENT], [m4_ifval([$1], [ m4_bpatsubst([m4_bpatsubst([$1], [^ *], [# ])], [['`$\]], [\\\&])]m4_bmatch([$1], [[!?.]$], [], [.]) )]) ## ------------------------ ## ## FIXME: Eliminate VARNAME ## ## ------------------------ ## # _LT_DECL([CONFIGNAME], VARNAME, VALUE, [DESCRIPTION], [IS-TAGGED?]) # ------------------------------------------------------------------- # CONFIGNAME is the name given to the value in the libtool script. # VARNAME is the (base) name used in the configure script. # VALUE may be 0, 1 or 2 for a computed quote escaped value based on # VARNAME. Any other value will be used directly. 
m4_define([_LT_DECL], [lt_if_append_uniq([lt_decl_varnames], [$2], [, ], [lt_dict_add_subkey([lt_decl_dict], [$2], [libtool_name], [m4_ifval([$1], [$1], [$2])]) lt_dict_add_subkey([lt_decl_dict], [$2], [value], [$3]) m4_ifval([$4], [lt_dict_add_subkey([lt_decl_dict], [$2], [description], [$4])]) lt_dict_add_subkey([lt_decl_dict], [$2], [tagged?], [m4_ifval([$5], [yes], [no])])]) ]) # _LT_TAGDECL([CONFIGNAME], VARNAME, VALUE, [DESCRIPTION]) # -------------------------------------------------------- m4_define([_LT_TAGDECL], [_LT_DECL([$1], [$2], [$3], [$4], [yes])]) # lt_decl_tag_varnames([SEPARATOR], [VARNAME1...]) # ------------------------------------------------ m4_define([lt_decl_tag_varnames], [_lt_decl_filter([tagged?], [yes], $@)]) # _lt_decl_filter(SUBKEY, VALUE, [SEPARATOR], [VARNAME1..]) # --------------------------------------------------------- m4_define([_lt_decl_filter], [m4_case([$#], [0], [m4_fatal([$0: too few arguments: $#])], [1], [m4_fatal([$0: too few arguments: $#: $1])], [2], [lt_dict_filter([lt_decl_dict], [$1], [$2], [], lt_decl_varnames)], [3], [lt_dict_filter([lt_decl_dict], [$1], [$2], [$3], lt_decl_varnames)], [lt_dict_filter([lt_decl_dict], $@)])[]dnl ]) # lt_decl_quote_varnames([SEPARATOR], [VARNAME1...]) # -------------------------------------------------- m4_define([lt_decl_quote_varnames], [_lt_decl_filter([value], [1], $@)]) # lt_decl_dquote_varnames([SEPARATOR], [VARNAME1...]) # --------------------------------------------------- m4_define([lt_decl_dquote_varnames], [_lt_decl_filter([value], [2], $@)]) # lt_decl_varnames_tagged([SEPARATOR], [VARNAME1...]) # --------------------------------------------------- m4_define([lt_decl_varnames_tagged], [m4_assert([$# <= 2])dnl _$0(m4_quote(m4_default([$1], [[, ]])), m4_ifval([$2], [[$2]], [m4_dquote(lt_decl_tag_varnames)]), m4_split(m4_normalize(m4_quote(_LT_TAGS)), [ ]))]) m4_define([_lt_decl_varnames_tagged], [m4_ifval([$3], [lt_combine([$1], [$2], [_], $3)])]) # 
lt_decl_all_varnames([SEPARATOR], [VARNAME1...]) # ------------------------------------------------ m4_define([lt_decl_all_varnames], [_$0(m4_quote(m4_default([$1], [[, ]])), m4_if([$2], [], m4_quote(lt_decl_varnames), m4_quote(m4_shift($@))))[]dnl ]) m4_define([_lt_decl_all_varnames], [lt_join($@, lt_decl_varnames_tagged([$1], lt_decl_tag_varnames([[, ]], m4_shift($@))))dnl ]) # _LT_CONFIG_STATUS_DECLARE([VARNAME]) # ------------------------------------ # Quote a variable value, and forward it to `config.status' so that its # declaration there will have the same value as in `configure'. VARNAME # must have a single quote delimited value for this to work. m4_define([_LT_CONFIG_STATUS_DECLARE], [$1='`$ECHO "$][$1" | $SED "$delay_single_quote_subst"`']) # _LT_CONFIG_STATUS_DECLARATIONS # ------------------------------ # We delimit libtool config variables with single quotes, so when # we write them to config.status, we have to be sure to quote all # embedded single quotes properly. In configure, this macro expands # each variable declared with _LT_DECL (and _LT_TAGDECL) into: # # ='`$ECHO "$" | $SED "$delay_single_quote_subst"`' m4_defun([_LT_CONFIG_STATUS_DECLARATIONS], [m4_foreach([_lt_var], m4_quote(lt_decl_all_varnames), [m4_n([_LT_CONFIG_STATUS_DECLARE(_lt_var)])])]) # _LT_LIBTOOL_TAGS # ---------------- # Output comment and list of tags supported by the script m4_defun([_LT_LIBTOOL_TAGS], [_LT_FORMAT_COMMENT([The names of the tagged configurations supported by this script])dnl available_tags="_LT_TAGS"dnl ]) # _LT_LIBTOOL_DECLARE(VARNAME, [TAG]) # ----------------------------------- # Extract the dictionary values for VARNAME (optionally with TAG) and # expand to a commented shell variable setting: # # # Some comment about what VAR is for. 
# visible_name=$lt_internal_name m4_define([_LT_LIBTOOL_DECLARE], [_LT_FORMAT_COMMENT(m4_quote(lt_dict_fetch([lt_decl_dict], [$1], [description])))[]dnl m4_pushdef([_libtool_name], m4_quote(lt_dict_fetch([lt_decl_dict], [$1], [libtool_name])))[]dnl m4_case(m4_quote(lt_dict_fetch([lt_decl_dict], [$1], [value])), [0], [_libtool_name=[$]$1], [1], [_libtool_name=$lt_[]$1], [2], [_libtool_name=$lt_[]$1], [_libtool_name=lt_dict_fetch([lt_decl_dict], [$1], [value])])[]dnl m4_ifval([$2], [_$2])[]m4_popdef([_libtool_name])[]dnl ]) # _LT_LIBTOOL_CONFIG_VARS # ----------------------- # Produce commented declarations of non-tagged libtool config variables # suitable for insertion in the LIBTOOL CONFIG section of the `libtool' # script. Tagged libtool config variables (even for the LIBTOOL CONFIG # section) are produced by _LT_LIBTOOL_TAG_VARS. m4_defun([_LT_LIBTOOL_CONFIG_VARS], [m4_foreach([_lt_var], m4_quote(_lt_decl_filter([tagged?], [no], [], lt_decl_varnames)), [m4_n([_LT_LIBTOOL_DECLARE(_lt_var)])])]) # _LT_LIBTOOL_TAG_VARS(TAG) # ------------------------- m4_define([_LT_LIBTOOL_TAG_VARS], [m4_foreach([_lt_var], m4_quote(lt_decl_tag_varnames), [m4_n([_LT_LIBTOOL_DECLARE(_lt_var, [$1])])])]) # _LT_TAGVAR(VARNAME, [TAGNAME]) # ------------------------------ m4_define([_LT_TAGVAR], [m4_ifval([$2], [$1_$2], [$1])]) # _LT_CONFIG_COMMANDS # ------------------- # Send accumulated output to $CONFIG_STATUS. Thanks to the lists of # variables for single and double quote escaping we saved from calls # to _LT_DECL, we can put quote escaped variables declarations # into `config.status', and then the shell code to quote escape them in # for loops in `config.status'. Finally, any additional code accumulated # from calls to _LT_CONFIG_LIBTOOL_INIT is expanded. 
m4_defun([_LT_CONFIG_COMMANDS], [AC_PROVIDE_IFELSE([LT_OUTPUT], dnl If the libtool generation code has been placed in $CONFIG_LT, dnl instead of duplicating it all over again into config.status, dnl then we will have config.status run $CONFIG_LT later, so it dnl needs to know what name is stored there: [AC_CONFIG_COMMANDS([libtool], [$SHELL $CONFIG_LT || AS_EXIT(1)], [CONFIG_LT='$CONFIG_LT'])], dnl If the libtool generation code is destined for config.status, dnl expand the accumulated commands and init code now: [AC_CONFIG_COMMANDS([libtool], [_LT_OUTPUT_LIBTOOL_COMMANDS], [_LT_OUTPUT_LIBTOOL_COMMANDS_INIT])]) ])#_LT_CONFIG_COMMANDS # Initialize. m4_define([_LT_OUTPUT_LIBTOOL_COMMANDS_INIT], [ # The HP-UX ksh and POSIX shell print the target directory to stdout # if CDPATH is set. (unset CDPATH) >/dev/null 2>&1 && unset CDPATH sed_quote_subst='$sed_quote_subst' double_quote_subst='$double_quote_subst' delay_variable_subst='$delay_variable_subst' _LT_CONFIG_STATUS_DECLARATIONS LTCC='$LTCC' LTCFLAGS='$LTCFLAGS' compiler='$compiler_DEFAULT' # A function that is used when there is no print builtin or printf. func_fallback_echo () { eval 'cat <<_LTECHO_EOF \$[]1 _LTECHO_EOF' } # Quote evaled strings. for var in lt_decl_all_varnames([[ \ ]], lt_decl_quote_varnames); do case \`eval \\\\\$ECHO \\\\""\\\\\$\$var"\\\\"\` in *[[\\\\\\\`\\"\\\$]]*) eval "lt_\$var=\\\\\\"\\\`\\\$ECHO \\"\\\$\$var\\" | \\\$SED \\"\\\$sed_quote_subst\\"\\\`\\\\\\"" ;; *) eval "lt_\$var=\\\\\\"\\\$\$var\\\\\\"" ;; esac done # Double-quote double-evaled strings. 
for var in lt_decl_all_varnames([[ \ ]], lt_decl_dquote_varnames); do case \`eval \\\\\$ECHO \\\\""\\\\\$\$var"\\\\"\` in *[[\\\\\\\`\\"\\\$]]*) eval "lt_\$var=\\\\\\"\\\`\\\$ECHO \\"\\\$\$var\\" | \\\$SED -e \\"\\\$double_quote_subst\\" -e \\"\\\$sed_quote_subst\\" -e \\"\\\$delay_variable_subst\\"\\\`\\\\\\"" ;; *) eval "lt_\$var=\\\\\\"\\\$\$var\\\\\\"" ;; esac done _LT_OUTPUT_LIBTOOL_INIT ]) # _LT_GENERATED_FILE_INIT(FILE, [COMMENT]) # ------------------------------------ # Generate a child script FILE with all initialization necessary to # reuse the environment learned by the parent script, and make the # file executable. If COMMENT is supplied, it is inserted after the # `#!' sequence but before initialization text begins. After this # macro, additional text can be appended to FILE to form the body of # the child script. The macro ends with non-zero status if the # file could not be fully written (such as if the disk is full). m4_ifdef([AS_INIT_GENERATED], [m4_defun([_LT_GENERATED_FILE_INIT],[AS_INIT_GENERATED($@)])], [m4_defun([_LT_GENERATED_FILE_INIT], [m4_require([AS_PREPARE])]dnl [m4_pushdef([AS_MESSAGE_LOG_FD])]dnl [lt_write_fail=0 cat >$1 <<_ASEOF || lt_write_fail=1 #! $SHELL # Generated by $as_me. $2 SHELL=\${CONFIG_SHELL-$SHELL} export SHELL _ASEOF cat >>$1 <<\_ASEOF || lt_write_fail=1 AS_SHELL_SANITIZE _AS_PREPARE exec AS_MESSAGE_FD>&1 _ASEOF test $lt_write_fail = 0 && chmod +x $1[]dnl m4_popdef([AS_MESSAGE_LOG_FD])])])# _LT_GENERATED_FILE_INIT # LT_OUTPUT # --------- # This macro allows early generation of the libtool script (before # AC_OUTPUT is called), incase it is used in configure for compilation # tests. 
AC_DEFUN([LT_OUTPUT], [: ${CONFIG_LT=./config.lt} AC_MSG_NOTICE([creating $CONFIG_LT]) _LT_GENERATED_FILE_INIT(["$CONFIG_LT"], [# Run this file to recreate a libtool stub with the current configuration.]) cat >>"$CONFIG_LT" <<\_LTEOF lt_cl_silent=false exec AS_MESSAGE_LOG_FD>>config.log { echo AS_BOX([Running $as_me.]) } >&AS_MESSAGE_LOG_FD lt_cl_help="\ \`$as_me' creates a local libtool stub from the current configuration, for use in further configure time tests before the real libtool is generated. Usage: $[0] [[OPTIONS]] -h, --help print this help, then exit -V, --version print version number, then exit -q, --quiet do not print progress messages -d, --debug don't remove temporary files Report bugs to ." lt_cl_version="\ m4_ifset([AC_PACKAGE_NAME], [AC_PACKAGE_NAME ])config.lt[]dnl m4_ifset([AC_PACKAGE_VERSION], [ AC_PACKAGE_VERSION]) configured by $[0], generated by m4_PACKAGE_STRING. Copyright (C) 2011 Free Software Foundation, Inc. This config.lt script is free software; the Free Software Foundation gives unlimited permision to copy, distribute and modify it." while test $[#] != 0 do case $[1] in --version | --v* | -V ) echo "$lt_cl_version"; exit 0 ;; --help | --h* | -h ) echo "$lt_cl_help"; exit 0 ;; --debug | --d* | -d ) debug=: ;; --quiet | --q* | --silent | --s* | -q ) lt_cl_silent=: ;; -*) AC_MSG_ERROR([unrecognized option: $[1] Try \`$[0] --help' for more information.]) ;; *) AC_MSG_ERROR([unrecognized argument: $[1] Try \`$[0] --help' for more information.]) ;; esac shift done if $lt_cl_silent; then exec AS_MESSAGE_FD>/dev/null fi _LTEOF cat >>"$CONFIG_LT" <<_LTEOF _LT_OUTPUT_LIBTOOL_COMMANDS_INIT _LTEOF cat >>"$CONFIG_LT" <<\_LTEOF AC_MSG_NOTICE([creating $ofile]) _LT_OUTPUT_LIBTOOL_COMMANDS AS_EXIT(0) _LTEOF chmod +x "$CONFIG_LT" # configure is writing to config.log, but config.lt does its own redirection, # appending to config.log, which fails on DOS, as config.log is still kept # open by configure. 
Here we exec the FD to /dev/null, effectively closing # config.log, so it can be properly (re)opened and appended to by config.lt. lt_cl_success=: test "$silent" = yes && lt_config_lt_args="$lt_config_lt_args --quiet" exec AS_MESSAGE_LOG_FD>/dev/null $SHELL "$CONFIG_LT" $lt_config_lt_args || lt_cl_success=false exec AS_MESSAGE_LOG_FD>>config.log $lt_cl_success || AS_EXIT(1) ])# LT_OUTPUT # _LT_CONFIG(TAG) # --------------- # If TAG is the built-in tag, create an initial libtool script with a # default configuration from the untagged config vars. Otherwise add code # to config.status for appending the configuration named by TAG from the # matching tagged config vars. m4_defun([_LT_CONFIG], [m4_require([_LT_FILEUTILS_DEFAULTS])dnl _LT_CONFIG_SAVE_COMMANDS([ m4_define([_LT_TAG], m4_if([$1], [], [C], [$1]))dnl m4_if(_LT_TAG, [C], [ # See if we are running on zsh, and set the options which allow our # commands through without removal of \ escapes. if test -n "${ZSH_VERSION+set}" ; then setopt NO_GLOB_SUBST fi cfgfile="${ofile}T" trap "$RM \"$cfgfile\"; exit 1" 1 2 15 $RM "$cfgfile" cat <<_LT_EOF >> "$cfgfile" #! $SHELL # `$ECHO "$ofile" | sed 's%^.*/%%'` - Provide generalized library-building support services. # Generated automatically by $as_me ($PACKAGE$TIMESTAMP) $VERSION # Libtool was configured on host `(hostname || uname -n) 2>/dev/null | sed 1q`: # NOTE: Changes made to this file will be lost: look at ltmain.sh. # _LT_COPYING _LT_LIBTOOL_TAGS # ### BEGIN LIBTOOL CONFIG _LT_LIBTOOL_CONFIG_VARS _LT_LIBTOOL_TAG_VARS # ### END LIBTOOL CONFIG _LT_EOF case $host_os in aix3*) cat <<\_LT_EOF >> "$cfgfile" # AIX sometimes has problems with the GCC collect2 program. For some # reason, if we set the COLLECT_NAMES environment variable, the problems # vanish in a puff of smoke. 
if test "X${COLLECT_NAMES+set}" != Xset; then COLLECT_NAMES= export COLLECT_NAMES fi _LT_EOF ;; esac _LT_PROG_LTMAIN # We use sed instead of cat because bash on DJGPP gets confused if # it finds mixed CR/LF and LF-only lines. Since sed operates in # text mode, it properly converts lines to CR/LF. This bash problem # is reportedly fixed, but why not run on old versions too? sed '$q' "$ltmain" >> "$cfgfile" \ || (rm -f "$cfgfile"; exit 1) _LT_PROG_REPLACE_SHELLFNS mv -f "$cfgfile" "$ofile" || (rm -f "$ofile" && cp "$cfgfile" "$ofile" && rm -f "$cfgfile") chmod +x "$ofile" ], [cat <<_LT_EOF >> "$ofile" dnl Unfortunately we have to use $1 here, since _LT_TAG is not expanded dnl in a comment (i.e. after a #). # ### BEGIN LIBTOOL TAG CONFIG: $1 _LT_LIBTOOL_TAG_VARS(_LT_TAG) # ### END LIBTOOL TAG CONFIG: $1 _LT_EOF ])dnl /m4_if ], [m4_if([$1], [], [ PACKAGE='$PACKAGE' VERSION='$VERSION' TIMESTAMP='$TIMESTAMP' RM='$RM' ofile='$ofile'], []) ])dnl /_LT_CONFIG_SAVE_COMMANDS ])# _LT_CONFIG # LT_SUPPORTED_TAG(TAG) # --------------------- # Trace this macro to discover what tags are supported by the libtool # --tag option, using: # autoconf --trace 'LT_SUPPORTED_TAG:$1' AC_DEFUN([LT_SUPPORTED_TAG], []) # C support is built-in for now m4_define([_LT_LANG_C_enabled], []) m4_define([_LT_TAGS], []) # LT_LANG(LANG) # ------------- # Enable libtool support for the given language if not already enabled.
AC_DEFUN([LT_LANG], [AC_BEFORE([$0], [LT_OUTPUT])dnl m4_case([$1], [C], [_LT_LANG(C)], [C++], [_LT_LANG(CXX)], [Go], [_LT_LANG(GO)], [Java], [_LT_LANG(GCJ)], [Fortran 77], [_LT_LANG(F77)], [Fortran], [_LT_LANG(FC)], [Windows Resource], [_LT_LANG(RC)], [m4_ifdef([_LT_LANG_]$1[_CONFIG], [_LT_LANG($1)], [m4_fatal([$0: unsupported language: "$1"])])])dnl ])# LT_LANG # _LT_LANG(LANGNAME) # ------------------ m4_defun([_LT_LANG], [m4_ifdef([_LT_LANG_]$1[_enabled], [], [LT_SUPPORTED_TAG([$1])dnl m4_append([_LT_TAGS], [$1 ])dnl m4_define([_LT_LANG_]$1[_enabled], [])dnl _LT_LANG_$1_CONFIG($1)])dnl ])# _LT_LANG m4_ifndef([AC_PROG_GO], [ ############################################################ # NOTE: This macro has been submitted for inclusion into # # GNU Autoconf as AC_PROG_GO. When it is available in # # a released version of Autoconf we should remove this # # macro and use it instead. # ############################################################ m4_defun([AC_PROG_GO], [AC_LANG_PUSH(Go)dnl AC_ARG_VAR([GOC], [Go compiler command])dnl AC_ARG_VAR([GOFLAGS], [Go compiler flags])dnl _AC_ARG_VAR_LDFLAGS()dnl AC_CHECK_TOOL(GOC, gccgo) if test -z "$GOC"; then if test -n "$ac_tool_prefix"; then AC_CHECK_PROG(GOC, [${ac_tool_prefix}gccgo], [${ac_tool_prefix}gccgo]) fi fi if test -z "$GOC"; then AC_CHECK_PROG(GOC, gccgo, gccgo, false) fi ])#m4_defun ])#m4_ifndef # _LT_LANG_DEFAULT_CONFIG # ----------------------- m4_defun([_LT_LANG_DEFAULT_CONFIG], [AC_PROVIDE_IFELSE([AC_PROG_CXX], [LT_LANG(CXX)], [m4_define([AC_PROG_CXX], defn([AC_PROG_CXX])[LT_LANG(CXX)])]) AC_PROVIDE_IFELSE([AC_PROG_F77], [LT_LANG(F77)], [m4_define([AC_PROG_F77], defn([AC_PROG_F77])[LT_LANG(F77)])]) AC_PROVIDE_IFELSE([AC_PROG_FC], [LT_LANG(FC)], [m4_define([AC_PROG_FC], defn([AC_PROG_FC])[LT_LANG(FC)])]) dnl The call to [A][M_PROG_GCJ] is quoted like that to stop aclocal dnl pulling things in needlessly. 
AC_PROVIDE_IFELSE([AC_PROG_GCJ], [LT_LANG(GCJ)], [AC_PROVIDE_IFELSE([A][M_PROG_GCJ], [LT_LANG(GCJ)], [AC_PROVIDE_IFELSE([LT_PROG_GCJ], [LT_LANG(GCJ)], [m4_ifdef([AC_PROG_GCJ], [m4_define([AC_PROG_GCJ], defn([AC_PROG_GCJ])[LT_LANG(GCJ)])]) m4_ifdef([A][M_PROG_GCJ], [m4_define([A][M_PROG_GCJ], defn([A][M_PROG_GCJ])[LT_LANG(GCJ)])]) m4_ifdef([LT_PROG_GCJ], [m4_define([LT_PROG_GCJ], defn([LT_PROG_GCJ])[LT_LANG(GCJ)])])])])]) AC_PROVIDE_IFELSE([AC_PROG_GO], [LT_LANG(GO)], [m4_define([AC_PROG_GO], defn([AC_PROG_GO])[LT_LANG(GO)])]) AC_PROVIDE_IFELSE([LT_PROG_RC], [LT_LANG(RC)], [m4_define([LT_PROG_RC], defn([LT_PROG_RC])[LT_LANG(RC)])]) ])# _LT_LANG_DEFAULT_CONFIG # Obsolete macros: AU_DEFUN([AC_LIBTOOL_CXX], [LT_LANG(C++)]) AU_DEFUN([AC_LIBTOOL_F77], [LT_LANG(Fortran 77)]) AU_DEFUN([AC_LIBTOOL_FC], [LT_LANG(Fortran)]) AU_DEFUN([AC_LIBTOOL_GCJ], [LT_LANG(Java)]) AU_DEFUN([AC_LIBTOOL_RC], [LT_LANG(Windows Resource)]) dnl aclocal-1.4 backwards compatibility: dnl AC_DEFUN([AC_LIBTOOL_CXX], []) dnl AC_DEFUN([AC_LIBTOOL_F77], []) dnl AC_DEFUN([AC_LIBTOOL_FC], []) dnl AC_DEFUN([AC_LIBTOOL_GCJ], []) dnl AC_DEFUN([AC_LIBTOOL_RC], []) # _LT_TAG_COMPILER # ---------------- m4_defun([_LT_TAG_COMPILER], [AC_REQUIRE([AC_PROG_CC])dnl _LT_DECL([LTCC], [CC], [1], [A C compiler])dnl _LT_DECL([LTCFLAGS], [CFLAGS], [1], [LTCC compiler flags])dnl _LT_TAGDECL([CC], [compiler], [1], [A language specific compiler])dnl _LT_TAGDECL([with_gcc], [GCC], [0], [Is the compiler the GNU compiler?])dnl # If no C compiler was specified, use CC. LTCC=${LTCC-"$CC"} # If no C compiler flags were specified, use CFLAGS. LTCFLAGS=${LTCFLAGS-"$CFLAGS"} # Allow CC to be a program name with arguments. compiler=$CC ])# _LT_TAG_COMPILER # _LT_COMPILER_BOILERPLATE # ------------------------ # Check for compiler boilerplate output or warnings with # the simple compiler test code. 
m4_defun([_LT_COMPILER_BOILERPLATE], [m4_require([_LT_DECL_SED])dnl ac_outfile=conftest.$ac_objext echo "$lt_simple_compile_test_code" >conftest.$ac_ext eval "$ac_compile" 2>&1 >/dev/null | $SED '/^$/d; /^ *+/d' >conftest.err _lt_compiler_boilerplate=`cat conftest.err` $RM conftest* ])# _LT_COMPILER_BOILERPLATE # _LT_LINKER_BOILERPLATE # ---------------------- # Check for linker boilerplate output or warnings with # the simple link test code. m4_defun([_LT_LINKER_BOILERPLATE], [m4_require([_LT_DECL_SED])dnl ac_outfile=conftest.$ac_objext echo "$lt_simple_link_test_code" >conftest.$ac_ext eval "$ac_link" 2>&1 >/dev/null | $SED '/^$/d; /^ *+/d' >conftest.err _lt_linker_boilerplate=`cat conftest.err` $RM -r conftest* ])# _LT_LINKER_BOILERPLATE # _LT_REQUIRED_DARWIN_CHECKS # ------------------------- m4_defun_once([_LT_REQUIRED_DARWIN_CHECKS],[ case $host_os in rhapsody* | darwin*) AC_CHECK_TOOL([DSYMUTIL], [dsymutil], [:]) AC_CHECK_TOOL([NMEDIT], [nmedit], [:]) AC_CHECK_TOOL([LIPO], [lipo], [:]) AC_CHECK_TOOL([OTOOL], [otool], [:]) AC_CHECK_TOOL([OTOOL64], [otool64], [:]) _LT_DECL([], [DSYMUTIL], [1], [Tool to manipulate archived DWARF debug symbol files on Mac OS X]) _LT_DECL([], [NMEDIT], [1], [Tool to change global to local symbols on Mac OS X]) _LT_DECL([], [LIPO], [1], [Tool to manipulate fat objects and archives on Mac OS X]) _LT_DECL([], [OTOOL], [1], [ldd/readelf like tool for Mach-O binaries on Mac OS X]) _LT_DECL([], [OTOOL64], [1], [ldd/readelf like tool for 64 bit Mach-O binaries on Mac OS X 10.4]) AC_CACHE_CHECK([for -single_module linker flag],[lt_cv_apple_cc_single_mod], [lt_cv_apple_cc_single_mod=no if test -z "${LT_MULTI_MODULE}"; then # By default we will add the -single_module flag. You can override # by either setting the environment variable LT_MULTI_MODULE # non-empty at configure time, or by adding -multi_module to the # link flags. 
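The Darwin checks below (`-single_module`, `-force_load`) all use the same probe pattern: run the compile or link, capture stderr, and treat the flag as unsupported if its name shows up in the log, even when the exit status is zero, because some linkers merely warn about unknown flags. A compiler-free stand-in for that decision logic (`probe_flag` and the fake log file are hypothetical):

```shell
#!/bin/sh
# Sketch of the stderr-grep probe used by the Darwin flag checks:
# a zero exit status alone is not trusted; a warning naming the
# flag means the flag did not really take effect.
probe_flag() {
  flag=$1 errfile=$2
  if test -s "$errfile" && grep "$flag" "$errfile" >/dev/null; then
    echo no      # the tool complained about the flag by name
  else
    echo yes     # empty/silent log: assume the flag worked
  fi
}

printf 'ld: warning: -fake_flag ignored\n' > /tmp/probe_demo.err
probe_flag fake_flag /tmp/probe_demo.err   # -> no
: > /tmp/probe_demo.err                    # truncate the log
probe_flag fake_flag /tmp/probe_demo.err   # -> yes
```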
rm -rf libconftest.dylib* echo "int foo(void){return 1;}" > conftest.c echo "$LTCC $LTCFLAGS $LDFLAGS -o libconftest.dylib \ -dynamiclib -Wl,-single_module conftest.c" >&AS_MESSAGE_LOG_FD $LTCC $LTCFLAGS $LDFLAGS -o libconftest.dylib \ -dynamiclib -Wl,-single_module conftest.c 2>conftest.err _lt_result=$? # If there is a non-empty error log, and "single_module" # appears in it, assume the flag caused a linker warning if test -s conftest.err && $GREP single_module conftest.err; then cat conftest.err >&AS_MESSAGE_LOG_FD # Otherwise, if the output was created with a 0 exit code from # the compiler, it worked. elif test -f libconftest.dylib && test $_lt_result -eq 0; then lt_cv_apple_cc_single_mod=yes else cat conftest.err >&AS_MESSAGE_LOG_FD fi rm -rf libconftest.dylib* rm -f conftest.* fi]) AC_CACHE_CHECK([for -exported_symbols_list linker flag], [lt_cv_ld_exported_symbols_list], [lt_cv_ld_exported_symbols_list=no save_LDFLAGS=$LDFLAGS echo "_main" > conftest.sym LDFLAGS="$LDFLAGS -Wl,-exported_symbols_list,conftest.sym" AC_LINK_IFELSE([AC_LANG_PROGRAM([],[])], [lt_cv_ld_exported_symbols_list=yes], [lt_cv_ld_exported_symbols_list=no]) LDFLAGS="$save_LDFLAGS" ]) AC_CACHE_CHECK([for -force_load linker flag],[lt_cv_ld_force_load], [lt_cv_ld_force_load=no cat > conftest.c << _LT_EOF int forced_loaded() { return 2;} _LT_EOF echo "$LTCC $LTCFLAGS -c -o conftest.o conftest.c" >&AS_MESSAGE_LOG_FD $LTCC $LTCFLAGS -c -o conftest.o conftest.c 2>&AS_MESSAGE_LOG_FD echo "$AR cru libconftest.a conftest.o" >&AS_MESSAGE_LOG_FD $AR cru libconftest.a conftest.o 2>&AS_MESSAGE_LOG_FD echo "$RANLIB libconftest.a" >&AS_MESSAGE_LOG_FD $RANLIB libconftest.a 2>&AS_MESSAGE_LOG_FD cat > conftest.c << _LT_EOF int main() { return 0;} _LT_EOF echo "$LTCC $LTCFLAGS $LDFLAGS -o conftest conftest.c -Wl,-force_load,./libconftest.a" >&AS_MESSAGE_LOG_FD $LTCC $LTCFLAGS $LDFLAGS -o conftest conftest.c -Wl,-force_load,./libconftest.a 2>conftest.err _lt_result=$? 
      if test -s conftest.err && $GREP force_load conftest.err; then
	cat conftest.err >&AS_MESSAGE_LOG_FD
      elif test -f conftest && test $_lt_result -eq 0 && $GREP forced_load conftest >/dev/null 2>&1 ; then
	lt_cv_ld_force_load=yes
      else
	cat conftest.err >&AS_MESSAGE_LOG_FD
      fi
        rm -f conftest.err libconftest.a conftest conftest.c
        rm -rf conftest.dSYM
    ])

    case $host_os in
    rhapsody* | darwin1.[[012]])
      _lt_dar_allow_undefined='${wl}-undefined ${wl}suppress' ;;
    darwin1.*)
      _lt_dar_allow_undefined='${wl}-flat_namespace ${wl}-undefined ${wl}suppress' ;;
    darwin*) # darwin 5.x on
      # if running on 10.5 or later, the deployment target defaults
      # to the OS version, if on x86, and 10.4, the deployment
      # target defaults to 10.4. Don't you love it?
      case ${MACOSX_DEPLOYMENT_TARGET-10.0},$host in
	10.0,*86*-darwin8*|10.0,*-darwin[[91]]*)
	  _lt_dar_allow_undefined='${wl}-undefined ${wl}dynamic_lookup' ;;
	10.[[012]]*)
	  _lt_dar_allow_undefined='${wl}-flat_namespace ${wl}-undefined ${wl}suppress' ;;
	10.*)
	  _lt_dar_allow_undefined='${wl}-undefined ${wl}dynamic_lookup' ;;
      esac
    ;;
  esac
    if test "$lt_cv_apple_cc_single_mod" = "yes"; then
      _lt_dar_single_mod='$single_module'
    fi
    if test "$lt_cv_ld_exported_symbols_list" = "yes"; then
      _lt_dar_export_syms=' ${wl}-exported_symbols_list,$output_objdir/${libname}-symbols.expsym'
    else
      _lt_dar_export_syms='~$NMEDIT -s $output_objdir/${libname}-symbols.expsym ${lib}'
    fi
    if test "$DSYMUTIL" != ":" && test "$lt_cv_ld_force_load" = "no"; then
      _lt_dsymutil='~$DSYMUTIL $lib || :'
    else
      _lt_dsymutil=
    fi
    ;;
  esac
])


# _LT_DARWIN_LINKER_FEATURES([TAG])
# ---------------------------------
# Checks for linker and compiler features on darwin
m4_defun([_LT_DARWIN_LINKER_FEATURES],
[
  m4_require([_LT_REQUIRED_DARWIN_CHECKS])
  _LT_TAGVAR(archive_cmds_need_lc, $1)=no
  _LT_TAGVAR(hardcode_direct, $1)=no
  _LT_TAGVAR(hardcode_automatic, $1)=yes
  _LT_TAGVAR(hardcode_shlibpath_var, $1)=unsupported
  if test "$lt_cv_ld_force_load" = "yes"; then
    _LT_TAGVAR(whole_archive_flag_spec, $1)='`for conv in $convenience\"\"; do test -n \"$conv\" && new_convenience=\"$new_convenience ${wl}-force_load,$conv\"; done; func_echo_all \"$new_convenience\"`'
    m4_case([$1], [F77], [_LT_TAGVAR(compiler_needs_object, $1)=yes],
                  [FC],  [_LT_TAGVAR(compiler_needs_object, $1)=yes])
  else
    _LT_TAGVAR(whole_archive_flag_spec, $1)=''
  fi
  _LT_TAGVAR(link_all_deplibs, $1)=yes
  _LT_TAGVAR(allow_undefined_flag, $1)="$_lt_dar_allow_undefined"
  case $cc_basename in
     ifort*) _lt_dar_can_shared=yes ;;
     *) _lt_dar_can_shared=$GCC ;;
  esac
  if test "$_lt_dar_can_shared" = "yes"; then
    output_verbose_link_cmd=func_echo_all
    _LT_TAGVAR(archive_cmds, $1)="\$CC -dynamiclib \$allow_undefined_flag -o \$lib \$libobjs \$deplibs \$compiler_flags -install_name \$rpath/\$soname \$verstring $_lt_dar_single_mod${_lt_dsymutil}"
    _LT_TAGVAR(module_cmds, $1)="\$CC \$allow_undefined_flag -o \$lib -bundle \$libobjs \$deplibs \$compiler_flags${_lt_dsymutil}"
    _LT_TAGVAR(archive_expsym_cmds, $1)="sed 's,^,_,' < \$export_symbols > \$output_objdir/\${libname}-symbols.expsym~\$CC -dynamiclib \$allow_undefined_flag -o \$lib \$libobjs \$deplibs \$compiler_flags -install_name \$rpath/\$soname \$verstring ${_lt_dar_single_mod}${_lt_dar_export_syms}${_lt_dsymutil}"
    _LT_TAGVAR(module_expsym_cmds, $1)="sed -e 's,^,_,' < \$export_symbols > \$output_objdir/\${libname}-symbols.expsym~\$CC \$allow_undefined_flag -o \$lib -bundle \$libobjs \$deplibs \$compiler_flags${_lt_dar_export_syms}${_lt_dsymutil}"
    m4_if([$1], [CXX],
[   if test "$lt_cv_apple_cc_single_mod" != "yes"; then
      _LT_TAGVAR(archive_cmds, $1)="\$CC -r -keep_private_externs -nostdlib -o \${lib}-master.o \$libobjs~\$CC -dynamiclib \$allow_undefined_flag -o \$lib \${lib}-master.o \$deplibs \$compiler_flags -install_name \$rpath/\$soname \$verstring${_lt_dsymutil}"
      _LT_TAGVAR(archive_expsym_cmds, $1)="sed 's,^,_,' < \$export_symbols > \$output_objdir/\${libname}-symbols.expsym~\$CC -r -keep_private_externs -nostdlib -o \${lib}-master.o \$libobjs~\$CC -dynamiclib \$allow_undefined_flag -o \$lib \${lib}-master.o \$deplibs \$compiler_flags -install_name \$rpath/\$soname \$verstring${_lt_dar_export_syms}${_lt_dsymutil}"
    fi
],[])
  else
  _LT_TAGVAR(ld_shlibs, $1)=no
  fi
])

# _LT_SYS_MODULE_PATH_AIX([TAGNAME])
# ----------------------------------
# Links a minimal program and checks the executable
# for the system default hardcoded library path.  In most cases,
# this is /usr/lib:/lib, but when the MPI compilers are used
# the location of the communication and MPI libs are included too.
# If we don't find anything, use the default library path according
# to the aix ld manual.
# Store the results from the different compilers for each TAGNAME.
# Allow to override them for all tags through lt_cv_aix_libpath.
m4_defun([_LT_SYS_MODULE_PATH_AIX],
[m4_require([_LT_DECL_SED])dnl
if test "${lt_cv_aix_libpath+set}" = set; then
  aix_libpath=$lt_cv_aix_libpath
else
  AC_CACHE_VAL([_LT_TAGVAR([lt_cv_aix_libpath_], [$1])],
  [AC_LINK_IFELSE([AC_LANG_PROGRAM],[
  lt_aix_libpath_sed='[
      /Import File Strings/,/^$/ {
	  /^0/ {
	      s/^0 *\([^ ]*\) *$/\1/
	      p
	  }
      }]'
  _LT_TAGVAR([lt_cv_aix_libpath_], [$1])=`dump -H conftest$ac_exeext 2>/dev/null | $SED -n -e "$lt_aix_libpath_sed"`
  # Check for a 64-bit object if we didn't find anything.
  if test -z "$_LT_TAGVAR([lt_cv_aix_libpath_], [$1])"; then
    _LT_TAGVAR([lt_cv_aix_libpath_], [$1])=`dump -HX64 conftest$ac_exeext 2>/dev/null | $SED -n -e "$lt_aix_libpath_sed"`
  fi],[])
  if test -z "$_LT_TAGVAR([lt_cv_aix_libpath_], [$1])"; then
    _LT_TAGVAR([lt_cv_aix_libpath_], [$1])="/usr/lib:/lib"
  fi
  ])
  aix_libpath=$_LT_TAGVAR([lt_cv_aix_libpath_], [$1])
fi
])# _LT_SYS_MODULE_PATH_AIX


# _LT_SHELL_INIT(ARG)
# -------------------
m4_define([_LT_SHELL_INIT],
[m4_divert_text([M4SH-INIT], [$1
])])# _LT_SHELL_INIT


# _LT_PROG_ECHO_BACKSLASH
# -----------------------
# Find how we can fake an echo command that does not interpret backslash.
# In particular, with Autoconf 2.60 or later we add some code to the start
# of the generated configure script which will find a shell with a builtin
# printf (which we can use as an echo command).
m4_defun([_LT_PROG_ECHO_BACKSLASH],
[ECHO='\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\'
ECHO=$ECHO$ECHO$ECHO$ECHO$ECHO
ECHO=$ECHO$ECHO$ECHO$ECHO$ECHO$ECHO

AC_MSG_CHECKING([how to print strings])
# Test print first, because it will be a builtin if present.
if test "X`( print -r -- -n ) 2>/dev/null`" = X-n && \
   test "X`print -r -- $ECHO 2>/dev/null`" = "X$ECHO"; then
  ECHO='print -r --'
elif test "X`printf %s $ECHO 2>/dev/null`" = "X$ECHO"; then
  ECHO='printf %s\n'
else
  # Use this function as a fallback that always works.
  func_fallback_echo ()
  {
    eval 'cat <<_LTECHO_EOF
$[]1
_LTECHO_EOF'
  }
  ECHO='func_fallback_echo'
fi

# func_echo_all arg...
# Invoke $ECHO with all args, space-separated.
func_echo_all ()
{
    $ECHO "$*"
}

case "$ECHO" in
  printf*) AC_MSG_RESULT([printf]) ;;
  print*) AC_MSG_RESULT([print -r]) ;;
  *) AC_MSG_RESULT([cat]) ;;
esac

m4_ifdef([_AS_DETECT_SUGGESTED],
[_AS_DETECT_SUGGESTED([
  test -n "${ZSH_VERSION+set}${BASH_VERSION+set}" || (
    ECHO='\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\'
    ECHO=$ECHO$ECHO$ECHO$ECHO$ECHO
    ECHO=$ECHO$ECHO$ECHO$ECHO$ECHO$ECHO
    PATH=/empty FPATH=/empty; export PATH FPATH
    test "X`printf %s $ECHO`" = "X$ECHO" \
      || test "X`print -r -- $ECHO`" = "X$ECHO" )])])

_LT_DECL([], [SHELL], [1], [Shell to use when invoking shell scripts])
_LT_DECL([], [ECHO], [1], [An echo program that protects backslashes])
])# _LT_PROG_ECHO_BACKSLASH


# _LT_WITH_SYSROOT
# ----------------
AC_DEFUN([_LT_WITH_SYSROOT],
[AC_MSG_CHECKING([for sysroot])
AC_ARG_WITH([sysroot],
[  --with-sysroot[=DIR] Search for dependent libraries within DIR
                        (or the compiler's sysroot if not specified).],
[], [with_sysroot=no])

dnl lt_sysroot will always be passed unquoted.  We quote it here
dnl in case the user passed a directory name.
lt_sysroot=
case ${with_sysroot} in #(
 yes)
   if test "$GCC" = yes; then
     lt_sysroot=`$CC --print-sysroot 2>/dev/null`
   fi
   ;; #(
 /*)
   lt_sysroot=`echo "$with_sysroot" | sed -e "$sed_quote_subst"`
   ;; #(
 no|'')
   ;; #(
 *)
   AC_MSG_RESULT([${with_sysroot}])
   AC_MSG_ERROR([The sysroot must be an absolute path.])
   ;;
esac

 AC_MSG_RESULT([${lt_sysroot:-no}])
_LT_DECL([], [lt_sysroot], [0], [The root where to search for ]dnl
[dependent libraries, and in which our libraries should be installed.])])

# _LT_ENABLE_LOCK
# ---------------
m4_defun([_LT_ENABLE_LOCK],
[AC_ARG_ENABLE([libtool-lock],
  [AS_HELP_STRING([--disable-libtool-lock],
    [avoid locking (might break parallel builds)])])
test "x$enable_libtool_lock" != xno && enable_libtool_lock=yes

# Some flags need to be propagated to the compiler or linker for good
# libtool support.
case $host in
ia64-*-hpux*)
  # Find out which ABI we are using.
  echo 'int i;' > conftest.$ac_ext
  if AC_TRY_EVAL(ac_compile); then
    case `/usr/bin/file conftest.$ac_objext` in
      *ELF-32*)
	HPUX_IA64_MODE="32"
	;;
      *ELF-64*)
	HPUX_IA64_MODE="64"
	;;
    esac
  fi
  rm -rf conftest*
  ;;
*-*-irix6*)
  # Find out which ABI we are using.
  echo '[#]line '$LINENO' "configure"' > conftest.$ac_ext
  if AC_TRY_EVAL(ac_compile); then
    if test "$lt_cv_prog_gnu_ld" = yes; then
      case `/usr/bin/file conftest.$ac_objext` in
	*32-bit*)
	  LD="${LD-ld} -melf32bsmip"
	  ;;
	*N32*)
	  LD="${LD-ld} -melf32bmipn32"
	  ;;
	*64-bit*)
	  LD="${LD-ld} -melf64bmip"
	  ;;
      esac
    else
      case `/usr/bin/file conftest.$ac_objext` in
	*32-bit*)
	  LD="${LD-ld} -32"
	  ;;
	*N32*)
	  LD="${LD-ld} -n32"
	  ;;
	*64-bit*)
	  LD="${LD-ld} -64"
	  ;;
      esac
    fi
  fi
  rm -rf conftest*
  ;;

x86_64-*kfreebsd*-gnu|x86_64-*linux*|powerpc*-*linux*| \
s390*-*linux*|s390*-*tpf*|sparc*-*linux*)
  # Find out which ABI we are using.
  echo 'int i;' > conftest.$ac_ext
  if AC_TRY_EVAL(ac_compile); then
    case `/usr/bin/file conftest.o` in
      *32-bit*)
	case $host in
	  x86_64-*kfreebsd*-gnu)
	    LD="${LD-ld} -m elf_i386_fbsd"
	    ;;
	  x86_64-*linux*)
	    case `/usr/bin/file conftest.o` in
	      *x86-64*)
		LD="${LD-ld} -m elf32_x86_64"
		;;
	      *)
		LD="${LD-ld} -m elf_i386"
		;;
	    esac
	    ;;
	  powerpc64le-*)
	    LD="${LD-ld} -m elf32lppclinux"
	    ;;
	  powerpc64-*)
	    LD="${LD-ld} -m elf32ppclinux"
	    ;;
	  s390x-*linux*)
	    LD="${LD-ld} -m elf_s390"
	    ;;
	  sparc64-*linux*)
	    LD="${LD-ld} -m elf32_sparc"
	    ;;
	esac
	;;
      *64-bit*)
	case $host in
	  x86_64-*kfreebsd*-gnu)
	    LD="${LD-ld} -m elf_x86_64_fbsd"
	    ;;
	  x86_64-*linux*)
	    LD="${LD-ld} -m elf_x86_64"
	    ;;
	  powerpcle-*)
	    LD="${LD-ld} -m elf64lppc"
	    ;;
	  powerpc-*)
	    LD="${LD-ld} -m elf64ppc"
	    ;;
	  s390*-*linux*|s390*-*tpf*)
	    LD="${LD-ld} -m elf64_s390"
	    ;;
	  sparc*-*linux*)
	    LD="${LD-ld} -m elf64_sparc"
	    ;;
	esac
	;;
    esac
  fi
  rm -rf conftest*
  ;;

*-*-sco3.2v5*)
  # On SCO OpenServer 5, we need -belf to get full-featured binaries.
  SAVE_CFLAGS="$CFLAGS"
  CFLAGS="$CFLAGS -belf"
  AC_CACHE_CHECK([whether the C compiler needs -belf], lt_cv_cc_needs_belf,
    [AC_LANG_PUSH(C)
     AC_LINK_IFELSE([AC_LANG_PROGRAM([[]],[[]])],[lt_cv_cc_needs_belf=yes],[lt_cv_cc_needs_belf=no])
     AC_LANG_POP])
  if test x"$lt_cv_cc_needs_belf" != x"yes"; then
    # this is probably gcc 2.8.0, egcs 1.0 or newer; no need for -belf
    CFLAGS="$SAVE_CFLAGS"
  fi
  ;;
*-*solaris*)
  # Find out which ABI we are using.
  echo 'int i;' > conftest.$ac_ext
  if AC_TRY_EVAL(ac_compile); then
    case `/usr/bin/file conftest.o` in
    *64-bit*)
      case $lt_cv_prog_gnu_ld in
      yes*)
	case $host in
	i?86-*-solaris*)
	  LD="${LD-ld} -m elf_x86_64"
	  ;;
	sparc*-*-solaris*)
	  LD="${LD-ld} -m elf64_sparc"
	  ;;
	esac
	# GNU ld 2.21 introduced _sol2 emulations.  Use them if available.
	if ${LD-ld} -V | grep _sol2 >/dev/null 2>&1; then
	  LD="${LD-ld}_sol2"
	fi
	;;
      *)
	if ${LD-ld} -64 -r -o conftest2.o conftest.o >/dev/null 2>&1; then
	  LD="${LD-ld} -64"
	fi
	;;
      esac
      ;;
    esac
  fi
  rm -rf conftest*
  ;;
esac

need_locks="$enable_libtool_lock"
])# _LT_ENABLE_LOCK


# _LT_PROG_AR
# -----------
m4_defun([_LT_PROG_AR],
[AC_CHECK_TOOLS(AR, [ar], false)
: ${AR=ar}
: ${AR_FLAGS=cru}
_LT_DECL([], [AR], [1], [The archiver])
_LT_DECL([], [AR_FLAGS], [1], [Flags to create an archive])

AC_CACHE_CHECK([for archiver @FILE support], [lt_cv_ar_at_file],
  [lt_cv_ar_at_file=no
   AC_COMPILE_IFELSE([AC_LANG_PROGRAM],
     [echo conftest.$ac_objext > conftest.lst
      lt_ar_try='$AR $AR_FLAGS libconftest.a @conftest.lst >&AS_MESSAGE_LOG_FD'
      AC_TRY_EVAL([lt_ar_try])
      if test "$ac_status" -eq 0; then
	# Ensure the archiver fails upon bogus file names.
	rm -f conftest.$ac_objext libconftest.a
	AC_TRY_EVAL([lt_ar_try])
	if test "$ac_status" -ne 0; then
          lt_cv_ar_at_file=@
        fi
      fi
      rm -f conftest.* libconftest.a
     ])
  ])

if test "x$lt_cv_ar_at_file" = xno; then
  archiver_list_spec=
else
  archiver_list_spec=$lt_cv_ar_at_file
fi
_LT_DECL([], [archiver_list_spec], [1],
  [How to feed a file listing to the archiver])
])# _LT_PROG_AR


# _LT_CMD_OLD_ARCHIVE
# -------------------
m4_defun([_LT_CMD_OLD_ARCHIVE],
[_LT_PROG_AR

AC_CHECK_TOOL(STRIP, strip, :)
test -z "$STRIP" && STRIP=:
_LT_DECL([], [STRIP], [1], [A symbol stripping program])

AC_CHECK_TOOL(RANLIB, ranlib, :)
test -z "$RANLIB" && RANLIB=:
_LT_DECL([], [RANLIB], [1],
    [Commands used to install an old-style archive])

# Determine commands to create old-style static archives.
old_archive_cmds='$AR $AR_FLAGS $oldlib$oldobjs'
old_postinstall_cmds='chmod 644 $oldlib'
old_postuninstall_cmds=

if test -n "$RANLIB"; then
  case $host_os in
  openbsd*)
    old_postinstall_cmds="$old_postinstall_cmds~\$RANLIB -t \$tool_oldlib"
    ;;
  *)
    old_postinstall_cmds="$old_postinstall_cmds~\$RANLIB \$tool_oldlib"
    ;;
  esac
  old_archive_cmds="$old_archive_cmds~\$RANLIB \$tool_oldlib"
fi

case $host_os in
  darwin*)
    lock_old_archive_extraction=yes ;;
  *)
    lock_old_archive_extraction=no ;;
esac
_LT_DECL([], [old_postinstall_cmds], [2])
_LT_DECL([], [old_postuninstall_cmds], [2])
_LT_TAGDECL([], [old_archive_cmds], [2],
    [Commands used to build an old-style archive])
_LT_DECL([], [lock_old_archive_extraction], [0],
    [Whether to use a lock for old archive extraction])
])# _LT_CMD_OLD_ARCHIVE


# _LT_COMPILER_OPTION(MESSAGE, VARIABLE-NAME, FLAGS,
#		[OUTPUT-FILE], [ACTION-SUCCESS], [ACTION-FAILURE])
# ----------------------------------------------------------------
# Check whether the given compiler option works
AC_DEFUN([_LT_COMPILER_OPTION],
[m4_require([_LT_FILEUTILS_DEFAULTS])dnl
m4_require([_LT_DECL_SED])dnl
AC_CACHE_CHECK([$1], [$2],
  [$2=no
   m4_if([$4], , [ac_outfile=conftest.$ac_objext], [ac_outfile=$4])
   echo "$lt_simple_compile_test_code" > conftest.$ac_ext
   lt_compiler_flag="$3"
   # Insert the option either (1) after the last *FLAGS variable, or
   # (2) before a word containing "conftest.", or (3) at the end.
   # Note that $ac_compile itself does not contain backslashes and begins
   # with a dollar sign (not a hyphen), so the echo should work correctly.
   # The option is referenced via a variable to avoid confusing sed.
   lt_compile=`echo "$ac_compile" | $SED \
   -e 's:.*FLAGS}\{0,1\} :&$lt_compiler_flag :; t' \
   -e 's: [[^ ]]*conftest\.: $lt_compiler_flag&:; t' \
   -e 's:$: $lt_compiler_flag:'`
   (eval echo "\"\$as_me:$LINENO: $lt_compile\"" >&AS_MESSAGE_LOG_FD)
   (eval "$lt_compile" 2>conftest.err)
   ac_status=$?
   cat conftest.err >&AS_MESSAGE_LOG_FD
   echo "$as_me:$LINENO: \$? = $ac_status" >&AS_MESSAGE_LOG_FD
   if (exit $ac_status) && test -s "$ac_outfile"; then
     # The compiler can only warn and ignore the option if not recognized
     # So say no if there are warnings other than the usual output.
     $ECHO "$_lt_compiler_boilerplate" | $SED '/^$/d' >conftest.exp
     $SED '/^$/d; /^ *+/d' conftest.err >conftest.er2
     if test ! -s conftest.er2 || diff conftest.exp conftest.er2 >/dev/null; then
       $2=yes
     fi
   fi
   $RM conftest*
])

if test x"[$]$2" = xyes; then
    m4_if([$5], , :, [$5])
else
    m4_if([$6], , :, [$6])
fi
])# _LT_COMPILER_OPTION

# Old name:
AU_ALIAS([AC_LIBTOOL_COMPILER_OPTION], [_LT_COMPILER_OPTION])
dnl aclocal-1.4 backwards compatibility:
dnl AC_DEFUN([AC_LIBTOOL_COMPILER_OPTION], [])


# _LT_LINKER_OPTION(MESSAGE, VARIABLE-NAME, FLAGS,
#                  [ACTION-SUCCESS], [ACTION-FAILURE])
# ----------------------------------------------------
# Check whether the given linker option works
AC_DEFUN([_LT_LINKER_OPTION],
[m4_require([_LT_FILEUTILS_DEFAULTS])dnl
m4_require([_LT_DECL_SED])dnl
AC_CACHE_CHECK([$1], [$2],
  [$2=no
   save_LDFLAGS="$LDFLAGS"
   LDFLAGS="$LDFLAGS $3"
   echo "$lt_simple_link_test_code" > conftest.$ac_ext
   if (eval $ac_link 2>conftest.err) && test -s conftest$ac_exeext; then
     # The linker can only warn and ignore the option if not recognized
     # So say no if there are warnings
     if test -s conftest.err; then
       # Append any errors to the config.log.
       cat conftest.err 1>&AS_MESSAGE_LOG_FD
       $ECHO "$_lt_linker_boilerplate" | $SED '/^$/d' > conftest.exp
       $SED '/^$/d; /^ *+/d' conftest.err >conftest.er2
       if diff conftest.exp conftest.er2 >/dev/null; then
         $2=yes
       fi
     else
       $2=yes
     fi
   fi
   $RM -r conftest*
   LDFLAGS="$save_LDFLAGS"
])

if test x"[$]$2" = xyes; then
    m4_if([$4], , :, [$4])
else
    m4_if([$5], , :, [$5])
fi
])# _LT_LINKER_OPTION

# Old name:
AU_ALIAS([AC_LIBTOOL_LINKER_OPTION], [_LT_LINKER_OPTION])
dnl aclocal-1.4 backwards compatibility:
dnl AC_DEFUN([AC_LIBTOOL_LINKER_OPTION], [])


# LT_CMD_MAX_LEN
#---------------
AC_DEFUN([LT_CMD_MAX_LEN],
[AC_REQUIRE([AC_CANONICAL_HOST])dnl
# find the maximum length of command line arguments
AC_MSG_CHECKING([the maximum length of command line arguments])
AC_CACHE_VAL([lt_cv_sys_max_cmd_len], [dnl
  i=0
  teststring="ABCD"

  case $build_os in
  msdosdjgpp*)
    # On DJGPP, this test can blow up pretty badly due to problems in libc
    # (any single argument exceeding 2000 bytes causes a buffer overrun
    # during glob expansion).  Even if it were fixed, the result of this
    # check would be larger than it should be.
    lt_cv_sys_max_cmd_len=12288;    # 12K is about right
    ;;

  gnu*)
    # Under GNU Hurd, this test is not required because there is
    # no limit to the length of command line arguments.
    # Libtool will interpret -1 as no limit whatsoever
    lt_cv_sys_max_cmd_len=-1;
    ;;

  cygwin* | mingw* | cegcc*)
    # On Win9x/ME, this test blows up -- it succeeds, but takes
    # about 5 minutes as the teststring grows exponentially.
    # Worse, since 9x/ME are not pre-emptively multitasking,
    # you end up with a "frozen" computer, even though with patience
    # the test eventually succeeds (with a max line length of 256k).
    # Instead, let's just punt: use the minimum linelength reported by
    # all of the supported platforms: 8192 (on NT/2K/XP).
    lt_cv_sys_max_cmd_len=8192;
    ;;

  mint*)
    # On MiNT this can take a long time and run out of memory.
    lt_cv_sys_max_cmd_len=8192;
    ;;

  amigaos*)
    # On AmigaOS with pdksh, this test takes hours, literally.
    # So we just punt and use a minimum line length of 8192.
    lt_cv_sys_max_cmd_len=8192;
    ;;

  netbsd* | freebsd* | openbsd* | darwin* | dragonfly*)
    # This has been around since 386BSD, at least.  Likely further.
    if test -x /sbin/sysctl; then
      lt_cv_sys_max_cmd_len=`/sbin/sysctl -n kern.argmax`
    elif test -x /usr/sbin/sysctl; then
      lt_cv_sys_max_cmd_len=`/usr/sbin/sysctl -n kern.argmax`
    else
      lt_cv_sys_max_cmd_len=65536   # usable default for all BSDs
    fi
    # And add a safety zone
    lt_cv_sys_max_cmd_len=`expr $lt_cv_sys_max_cmd_len \/ 4`
    lt_cv_sys_max_cmd_len=`expr $lt_cv_sys_max_cmd_len \* 3`
    ;;

  interix*)
    # We know the value 262144 and hardcode it with a safety zone (like BSD)
    lt_cv_sys_max_cmd_len=196608
    ;;

  os2*)
    # The test takes a long time on OS/2.
    lt_cv_sys_max_cmd_len=8192
    ;;

  osf*)
    # Dr. Hans Ekkehard Plesser reports seeing a kernel panic running configure
    # due to this test when exec_disable_arg_limit is 1 on Tru64. It is not
    # nice to cause kernel panics so lets avoid the loop below.
    # First set a reasonable default.
    lt_cv_sys_max_cmd_len=16384
    #
    if test -x /sbin/sysconfig; then
      case `/sbin/sysconfig -q proc exec_disable_arg_limit` in
        *1*) lt_cv_sys_max_cmd_len=-1 ;;
      esac
    fi
    ;;
  sco3.2v5*)
    lt_cv_sys_max_cmd_len=102400
    ;;
  sysv5* | sco5v6* | sysv4.2uw2*)
    kargmax=`grep ARG_MAX /etc/conf/cf.d/stune 2>/dev/null`
    if test -n "$kargmax"; then
      lt_cv_sys_max_cmd_len=`echo $kargmax | sed 's/.*[[ ]]//'`
    else
      lt_cv_sys_max_cmd_len=32768
    fi
    ;;
  *)
    lt_cv_sys_max_cmd_len=`(getconf ARG_MAX) 2> /dev/null`
    if test -n "$lt_cv_sys_max_cmd_len" && \
	test undefined != "$lt_cv_sys_max_cmd_len"; then
      lt_cv_sys_max_cmd_len=`expr $lt_cv_sys_max_cmd_len \/ 4`
      lt_cv_sys_max_cmd_len=`expr $lt_cv_sys_max_cmd_len \* 3`
    else
      # Make teststring a little bigger before we do anything with it.
      # a 1K string should be a reasonable start.
      for i in 1 2 3 4 5 6 7 8 ; do
        teststring=$teststring$teststring
      done
      SHELL=${SHELL-${CONFIG_SHELL-/bin/sh}}
      # If test is not a shell built-in, we'll probably end up computing a
      # maximum length that is only half of the actual maximum length, but
      # we can't tell.
      while { test "X"`env echo "$teststring$teststring" 2>/dev/null` \
	         = "X$teststring$teststring"; } >/dev/null 2>&1 &&
	      test $i != 17 # 1/2 MB should be enough
      do
        i=`expr $i + 1`
        teststring=$teststring$teststring
      done
      # Only check the string length outside the loop.
      lt_cv_sys_max_cmd_len=`expr "X$teststring" : ".*" 2>&1`
      teststring=
      # Add a significant safety factor because C++ compilers can tack on
      # massive amounts of additional arguments before passing them to the
      # linker.  It appears as though 1/2 is a usable value.
      lt_cv_sys_max_cmd_len=`expr $lt_cv_sys_max_cmd_len \/ 2`
    fi
    ;;
  esac
])
if test -n $lt_cv_sys_max_cmd_len ; then
  AC_MSG_RESULT($lt_cv_sys_max_cmd_len)
else
  AC_MSG_RESULT(none)
fi
max_cmd_len=$lt_cv_sys_max_cmd_len
_LT_DECL([], [max_cmd_len], [0],
    [What is the maximum length of a command?])
])# LT_CMD_MAX_LEN

# Old name:
AU_ALIAS([AC_LIBTOOL_SYS_MAX_CMD_LEN], [LT_CMD_MAX_LEN])
dnl aclocal-1.4 backwards compatibility:
dnl AC_DEFUN([AC_LIBTOOL_SYS_MAX_CMD_LEN], [])


# _LT_HEADER_DLFCN
# ----------------
m4_defun([_LT_HEADER_DLFCN],
[AC_CHECK_HEADERS([dlfcn.h], [], [], [AC_INCLUDES_DEFAULT])dnl
])# _LT_HEADER_DLFCN


# _LT_TRY_DLOPEN_SELF (ACTION-IF-TRUE, ACTION-IF-TRUE-W-USCORE,
#                      ACTION-IF-FALSE, ACTION-IF-CROSS-COMPILING)
# ----------------------------------------------------------------
m4_defun([_LT_TRY_DLOPEN_SELF],
[m4_require([_LT_HEADER_DLFCN])dnl
if test "$cross_compiling" = yes; then :
  [$4]
else
  lt_dlunknown=0; lt_dlno_uscore=1; lt_dlneed_uscore=2
  lt_status=$lt_dlunknown
  cat > conftest.$ac_ext <<_LT_EOF
[#line $LINENO "configure"
#include "confdefs.h"

#if HAVE_DLFCN_H
#include <dlfcn.h>
#endif

#include <stdio.h>

#ifdef RTLD_GLOBAL
#  define LT_DLGLOBAL		RTLD_GLOBAL
#else
#  ifdef DL_GLOBAL
#    define LT_DLGLOBAL		DL_GLOBAL
#  else
#    define LT_DLGLOBAL		0
#  endif
#endif

/* We may have to define LT_DLLAZY_OR_NOW in the command line if we
   find out it does not work in some platform. */
#ifndef LT_DLLAZY_OR_NOW
#  ifdef RTLD_LAZY
#    define LT_DLLAZY_OR_NOW		RTLD_LAZY
#  else
#    ifdef DL_LAZY
#      define LT_DLLAZY_OR_NOW		DL_LAZY
#    else
#      ifdef RTLD_NOW
#        define LT_DLLAZY_OR_NOW	RTLD_NOW
#      else
#        ifdef DL_NOW
#          define LT_DLLAZY_OR_NOW	DL_NOW
#        else
#          define LT_DLLAZY_OR_NOW	0
#        endif
#      endif
#    endif
#  endif
#endif

/* When -fvisbility=hidden is used, assume the code has been annotated
   correspondingly for the symbols needed.  */
#if defined(__GNUC__) && (((__GNUC__ == 3) && (__GNUC_MINOR__ >= 3)) || (__GNUC__ > 3))
int fnord () __attribute__((visibility("default")));
#endif

int fnord () { return 42; }
int main ()
{
  void *self = dlopen (0, LT_DLGLOBAL|LT_DLLAZY_OR_NOW);
  int status = $lt_dlunknown;

  if (self)
    {
      if (dlsym (self,"fnord"))       status = $lt_dlno_uscore;
      else
        {
	  if (dlsym( self,"_fnord"))  status = $lt_dlneed_uscore;
          else puts (dlerror ());
	}
      /* dlclose (self); */
    }
  else
    puts (dlerror ());

  return status;
}]
_LT_EOF
  if AC_TRY_EVAL(ac_link) && test -s conftest${ac_exeext} 2>/dev/null; then
    (./conftest; exit; ) >&AS_MESSAGE_LOG_FD 2>/dev/null
    lt_status=$?
    case x$lt_status in
      x$lt_dlno_uscore) $1 ;;
      x$lt_dlneed_uscore) $2 ;;
      x$lt_dlunknown|x*) $3 ;;
    esac
  else :
    # compilation failed
    $3
  fi
fi
rm -fr conftest*
])# _LT_TRY_DLOPEN_SELF


# LT_SYS_DLOPEN_SELF
# ------------------
AC_DEFUN([LT_SYS_DLOPEN_SELF],
[m4_require([_LT_HEADER_DLFCN])dnl
if test "x$enable_dlopen" != xyes; then
  enable_dlopen=unknown
  enable_dlopen_self=unknown
  enable_dlopen_self_static=unknown
else
  lt_cv_dlopen=no
  lt_cv_dlopen_libs=

  case $host_os in
  beos*)
    lt_cv_dlopen="load_add_on"
    lt_cv_dlopen_libs=
    lt_cv_dlopen_self=yes
    ;;

  mingw* | pw32* | cegcc*)
    lt_cv_dlopen="LoadLibrary"
    lt_cv_dlopen_libs=
    ;;

  cygwin*)
    lt_cv_dlopen="dlopen"
    lt_cv_dlopen_libs=
    ;;

  darwin*)
  # if libdl is installed we need to link against it
    AC_CHECK_LIB([dl], [dlopen],
		[lt_cv_dlopen="dlopen" lt_cv_dlopen_libs="-ldl"],[
    lt_cv_dlopen="dyld"
    lt_cv_dlopen_libs=
    lt_cv_dlopen_self=yes
    ])
    ;;

  *)
    AC_CHECK_FUNC([shl_load],
	  [lt_cv_dlopen="shl_load"],
      [AC_CHECK_LIB([dld], [shl_load],
	    [lt_cv_dlopen="shl_load" lt_cv_dlopen_libs="-ldld"],
	[AC_CHECK_FUNC([dlopen],
	      [lt_cv_dlopen="dlopen"],
	  [AC_CHECK_LIB([dl], [dlopen],
		[lt_cv_dlopen="dlopen" lt_cv_dlopen_libs="-ldl"],
	    [AC_CHECK_LIB([svld], [dlopen],
		  [lt_cv_dlopen="dlopen" lt_cv_dlopen_libs="-lsvld"],
	      [AC_CHECK_LIB([dld], [dld_link],
		    [lt_cv_dlopen="dld_link" lt_cv_dlopen_libs="-ldld"])
	      ])
	    ])
	  ])
	])
      ])
    ;;
  esac

  if test "x$lt_cv_dlopen" != xno; then
    enable_dlopen=yes
  else
    enable_dlopen=no
  fi

  case $lt_cv_dlopen in
  dlopen)
    save_CPPFLAGS="$CPPFLAGS"
    test "x$ac_cv_header_dlfcn_h" = xyes && CPPFLAGS="$CPPFLAGS -DHAVE_DLFCN_H"

    save_LDFLAGS="$LDFLAGS"
    wl=$lt_prog_compiler_wl eval LDFLAGS=\"\$LDFLAGS $export_dynamic_flag_spec\"

    save_LIBS="$LIBS"
    LIBS="$lt_cv_dlopen_libs $LIBS"

    AC_CACHE_CHECK([whether a program can dlopen itself],
	  lt_cv_dlopen_self, [dnl
	  _LT_TRY_DLOPEN_SELF(
	    lt_cv_dlopen_self=yes, lt_cv_dlopen_self=yes,
	    lt_cv_dlopen_self=no, lt_cv_dlopen_self=cross)
    ])

    if test "x$lt_cv_dlopen_self" = xyes; then
      wl=$lt_prog_compiler_wl eval LDFLAGS=\"\$LDFLAGS $lt_prog_compiler_static\"
      AC_CACHE_CHECK([whether a statically linked program can dlopen itself],
	  lt_cv_dlopen_self_static, [dnl
	  _LT_TRY_DLOPEN_SELF(
	    lt_cv_dlopen_self_static=yes, lt_cv_dlopen_self_static=yes,
	    lt_cv_dlopen_self_static=no,  lt_cv_dlopen_self_static=cross)
      ])
    fi

    CPPFLAGS="$save_CPPFLAGS"
    LDFLAGS="$save_LDFLAGS"
    LIBS="$save_LIBS"
    ;;
  esac

  case $lt_cv_dlopen_self in
  yes|no) enable_dlopen_self=$lt_cv_dlopen_self ;;
  *) enable_dlopen_self=unknown ;;
  esac

  case $lt_cv_dlopen_self_static in
  yes|no) enable_dlopen_self_static=$lt_cv_dlopen_self_static ;;
  *) enable_dlopen_self_static=unknown ;;
  esac
fi
_LT_DECL([dlopen_support], [enable_dlopen], [0],
	 [Whether dlopen is supported])
_LT_DECL([dlopen_self], [enable_dlopen_self], [0],
	 [Whether dlopen of programs is supported])
_LT_DECL([dlopen_self_static], [enable_dlopen_self_static], [0],
	 [Whether dlopen of statically linked programs is supported])
])# LT_SYS_DLOPEN_SELF

# Old name:
AU_ALIAS([AC_LIBTOOL_DLOPEN_SELF], [LT_SYS_DLOPEN_SELF])
dnl aclocal-1.4 backwards compatibility:
dnl AC_DEFUN([AC_LIBTOOL_DLOPEN_SELF], [])


# _LT_COMPILER_C_O([TAGNAME])
# ---------------------------
# Check to see if options -c and -o are simultaneously supported by compiler.
# This macro does not hard code the compiler like AC_PROG_CC_C_O.
m4_defun([_LT_COMPILER_C_O],
[m4_require([_LT_DECL_SED])dnl
m4_require([_LT_FILEUTILS_DEFAULTS])dnl
m4_require([_LT_TAG_COMPILER])dnl
AC_CACHE_CHECK([if $compiler supports -c -o file.$ac_objext],
  [_LT_TAGVAR(lt_cv_prog_compiler_c_o, $1)],
  [_LT_TAGVAR(lt_cv_prog_compiler_c_o, $1)=no
   $RM -r conftest 2>/dev/null
   mkdir conftest
   cd conftest
   mkdir out
   echo "$lt_simple_compile_test_code" > conftest.$ac_ext

   lt_compiler_flag="-o out/conftest2.$ac_objext"
   # Insert the option either (1) after the last *FLAGS variable, or
   # (2) before a word containing "conftest.", or (3) at the end.
   # Note that $ac_compile itself does not contain backslashes and begins
   # with a dollar sign (not a hyphen), so the echo should work correctly.
   lt_compile=`echo "$ac_compile" | $SED \
   -e 's:.*FLAGS}\{0,1\} :&$lt_compiler_flag :; t' \
   -e 's: [[^ ]]*conftest\.: $lt_compiler_flag&:; t' \
   -e 's:$: $lt_compiler_flag:'`
   (eval echo "\"\$as_me:$LINENO: $lt_compile\"" >&AS_MESSAGE_LOG_FD)
   (eval "$lt_compile" 2>out/conftest.err)
   ac_status=$?
   cat out/conftest.err >&AS_MESSAGE_LOG_FD
   echo "$as_me:$LINENO: \$? = $ac_status" >&AS_MESSAGE_LOG_FD
   if (exit $ac_status) && test -s out/conftest2.$ac_objext
   then
     # The compiler can only warn and ignore the option if not recognized
     # So say no if there are warnings
     $ECHO "$_lt_compiler_boilerplate" | $SED '/^$/d' > out/conftest.exp
     $SED '/^$/d; /^ *+/d' out/conftest.err >out/conftest.er2
     if test ! -s out/conftest.er2 || diff out/conftest.exp out/conftest.er2 >/dev/null; then
       _LT_TAGVAR(lt_cv_prog_compiler_c_o, $1)=yes
     fi
   fi
   chmod u+w . 2>&AS_MESSAGE_LOG_FD
   $RM conftest*
   # SGI C++ compiler will create directory out/ii_files/ for
   # template instantiation
   test -d out/ii_files && $RM out/ii_files/* && rmdir out/ii_files
   $RM out/* && rmdir out
   cd ..
   $RM -r conftest
   $RM conftest*
])
_LT_TAGDECL([compiler_c_o], [lt_cv_prog_compiler_c_o], [1],
	[Does compiler simultaneously support -c and -o options?])
])# _LT_COMPILER_C_O


# _LT_COMPILER_FILE_LOCKS([TAGNAME])
# ----------------------------------
# Check to see if we can do hard links to lock some files if needed
m4_defun([_LT_COMPILER_FILE_LOCKS],
[m4_require([_LT_ENABLE_LOCK])dnl
m4_require([_LT_FILEUTILS_DEFAULTS])dnl
_LT_COMPILER_C_O([$1])

hard_links="nottested"
if test "$_LT_TAGVAR(lt_cv_prog_compiler_c_o, $1)" = no && test "$need_locks" != no; then
  # do not overwrite the value of need_locks provided by the user
  AC_MSG_CHECKING([if we can lock with hard links])
  hard_links=yes
  $RM conftest*
  ln conftest.a conftest.b 2>/dev/null && hard_links=no
  touch conftest.a
  ln conftest.a conftest.b 2>&5 || hard_links=no
  ln conftest.a conftest.b 2>/dev/null && hard_links=no
  AC_MSG_RESULT([$hard_links])
  if test "$hard_links" = no; then
    AC_MSG_WARN([`$CC' does not support `-c -o', so `make -j' may be unsafe])
    need_locks=warn
  fi
else
  need_locks=no
fi
_LT_DECL([], [need_locks], [1], [Must we lock files when doing compilation?])
])# _LT_COMPILER_FILE_LOCKS


# _LT_CHECK_OBJDIR
# ----------------
m4_defun([_LT_CHECK_OBJDIR],
[AC_CACHE_CHECK([for objdir], [lt_cv_objdir],
[rm -f .libs 2>/dev/null
mkdir .libs 2>/dev/null
if test -d .libs; then
  lt_cv_objdir=.libs
else
  # MS-DOS does not allow filenames that begin with a dot.
  lt_cv_objdir=_libs
fi
rmdir .libs 2>/dev/null])
objdir=$lt_cv_objdir
_LT_DECL([], [objdir], [0],
         [The name of the directory that contains temporary libtool files])dnl
m4_pattern_allow([LT_OBJDIR])dnl
AC_DEFINE_UNQUOTED(LT_OBJDIR, "$lt_cv_objdir/",
  [Define to the sub-directory in which libtool stores uninstalled libraries.])
])# _LT_CHECK_OBJDIR


# _LT_LINKER_HARDCODE_LIBPATH([TAGNAME])
# --------------------------------------
# Check hardcoding attributes.
m4_defun([_LT_LINKER_HARDCODE_LIBPATH],
[AC_MSG_CHECKING([how to hardcode library paths into programs])
_LT_TAGVAR(hardcode_action, $1)=
if test -n "$_LT_TAGVAR(hardcode_libdir_flag_spec, $1)" ||
   test -n "$_LT_TAGVAR(runpath_var, $1)" ||
   test "X$_LT_TAGVAR(hardcode_automatic, $1)" = "Xyes" ; then

  # We can hardcode non-existent directories.
  if test "$_LT_TAGVAR(hardcode_direct, $1)" != no &&
     # If the only mechanism to avoid hardcoding is shlibpath_var, we
     # have to relink, otherwise we might link with an installed library
     # when we should be linking with a yet-to-be-installed one
     ## test "$_LT_TAGVAR(hardcode_shlibpath_var, $1)" != no &&
     test "$_LT_TAGVAR(hardcode_minus_L, $1)" != no; then
    # Linking always hardcodes the temporary library directory.
    _LT_TAGVAR(hardcode_action, $1)=relink
  else
    # We can link without hardcoding, and we can hardcode nonexisting dirs.
    _LT_TAGVAR(hardcode_action, $1)=immediate
  fi
else
  # We cannot hardcode anything, or else we can only hardcode existing
  # directories.
  _LT_TAGVAR(hardcode_action, $1)=unsupported
fi
AC_MSG_RESULT([$_LT_TAGVAR(hardcode_action, $1)])

if test "$_LT_TAGVAR(hardcode_action, $1)" = relink ||
   test "$_LT_TAGVAR(inherit_rpath, $1)" = yes; then
  # Fast installation is not supported
  enable_fast_install=no
elif test "$shlibpath_overrides_runpath" = yes ||
     test "$enable_shared" = no; then
  # Fast installation is not necessary
  enable_fast_install=needless
fi
_LT_TAGDECL([], [hardcode_action], [0],
    [How to hardcode a shared library path into an executable])
])# _LT_LINKER_HARDCODE_LIBPATH


# _LT_CMD_STRIPLIB
# ----------------
m4_defun([_LT_CMD_STRIPLIB],
[m4_require([_LT_DECL_EGREP])
striplib=
old_striplib=
AC_MSG_CHECKING([whether stripping libraries is possible])
if test -n "$STRIP" && $STRIP -V 2>&1 | $GREP "GNU strip" >/dev/null; then
  test -z "$old_striplib" && old_striplib="$STRIP --strip-debug"
  test -z "$striplib" && striplib="$STRIP --strip-unneeded"
  AC_MSG_RESULT([yes])
else
# FIXME - insert some real tests, host_os isn't really good enough
  case $host_os in
  darwin*)
    if test -n "$STRIP" ; then
      striplib="$STRIP -x"
      old_striplib="$STRIP -S"
      AC_MSG_RESULT([yes])
    else
      AC_MSG_RESULT([no])
    fi
    ;;
  *)
    AC_MSG_RESULT([no])
    ;;
  esac
fi
_LT_DECL([], [old_striplib], [1], [Commands to strip libraries])
_LT_DECL([], [striplib], [1])
])# _LT_CMD_STRIPLIB


# _LT_SYS_DYNAMIC_LINKER([TAG])
# -----------------------------
# PORTME Fill in your ld.so characteristics
m4_defun([_LT_SYS_DYNAMIC_LINKER],
[AC_REQUIRE([AC_CANONICAL_HOST])dnl
m4_require([_LT_DECL_EGREP])dnl
m4_require([_LT_FILEUTILS_DEFAULTS])dnl
m4_require([_LT_DECL_OBJDUMP])dnl
m4_require([_LT_DECL_SED])dnl
m4_require([_LT_CHECK_SHELL_FEATURES])dnl
AC_MSG_CHECKING([dynamic linker characteristics])
m4_if([$1],
	[], [
if test "$GCC" = yes; then
  case $host_os in
    darwin*) lt_awk_arg="/^libraries:/,/LR/" ;;
    *) lt_awk_arg="/^libraries:/" ;;
  esac
  case $host_os in
    mingw* | cegcc*) lt_sed_strip_eq="s,=\([[A-Za-z]]:\),\1,g" ;;
    *) lt_sed_strip_eq="s,=/,/,g" ;;
  esac
  lt_search_path_spec=`$CC -print-search-dirs | awk $lt_awk_arg | $SED -e "s/^libraries://" -e $lt_sed_strip_eq`
  case $lt_search_path_spec in
  *\;*)
    # if the path contains ";" then we assume it to be the separator
    # otherwise default to the standard path separator (i.e. ":") - it is
    # assumed that no part of a normal pathname contains ";" but that should
    # okay in the real world where ";" in dirpaths is itself problematic.
    lt_search_path_spec=`$ECHO "$lt_search_path_spec" | $SED 's/;/ /g'`
    ;;
  *)
    lt_search_path_spec=`$ECHO "$lt_search_path_spec" | $SED "s/$PATH_SEPARATOR/ /g"`
    ;;
  esac
  # Ok, now we have the path, separated by spaces, we can step through it
  # and add multilib dir if necessary.
  lt_tmp_lt_search_path_spec=
  lt_multi_os_dir=`$CC $CPPFLAGS $CFLAGS $LDFLAGS -print-multi-os-directory 2>/dev/null`
  for lt_sys_path in $lt_search_path_spec; do
    if test -d "$lt_sys_path/$lt_multi_os_dir"; then
      lt_tmp_lt_search_path_spec="$lt_tmp_lt_search_path_spec $lt_sys_path/$lt_multi_os_dir"
    else
      test -d "$lt_sys_path" && \
	lt_tmp_lt_search_path_spec="$lt_tmp_lt_search_path_spec $lt_sys_path"
    fi
  done
  lt_search_path_spec=`$ECHO "$lt_tmp_lt_search_path_spec" | awk '
BEGIN {RS=" "; FS="/|\n";} {
  lt_foo="";
  lt_count=0;
  for (lt_i = NF; lt_i > 0; lt_i--) {
    if ($lt_i != "" && $lt_i != ".") {
      if ($lt_i == "..") {
        lt_count++;
      } else {
        if (lt_count == 0) {
          lt_foo="/" $lt_i lt_foo;
        } else {
          lt_count--;
        }
      }
    }
  }
  if (lt_foo != "") { lt_freq[[lt_foo]]++; }
  if (lt_freq[[lt_foo]] == 1) { print lt_foo; }
}'`
  # AWK program above erroneously prepends '/' to C:/dos/paths
  # for these hosts.
  case $host_os in
    mingw* | cegcc*) lt_search_path_spec=`$ECHO "$lt_search_path_spec" |\
      $SED 's,/\([[A-Za-z]]:\),\1,g'` ;;
  esac
  sys_lib_search_path_spec=`$ECHO "$lt_search_path_spec" | $lt_NL2SP`
else
  sys_lib_search_path_spec="/lib /usr/lib /usr/local/lib"
fi])
library_names_spec=
libname_spec='lib$name'
soname_spec=
shrext_cmds=".so"
postinstall_cmds=
postuninstall_cmds=
finish_cmds=
finish_eval=
shlibpath_var=
shlibpath_overrides_runpath=unknown
version_type=none
dynamic_linker="$host_os ld.so"
sys_lib_dlsearch_path_spec="/lib /usr/lib"
need_lib_prefix=unknown
hardcode_into_libs=no

# when you set need_version to no, make sure it does not cause -set_version
# flags to be left without arguments
need_version=unknown

case $host_os in
aix3*)
  version_type=linux # correct to gnu/linux during the next big refactor
  library_names_spec='${libname}${release}${shared_ext}$versuffix $libname.a'
  shlibpath_var=LIBPATH

  # AIX 3 has no versioning support, so we append a major version to the name.
  soname_spec='${libname}${release}${shared_ext}$major'
  ;;

aix[[4-9]]*)
  version_type=linux # correct to gnu/linux during the next big refactor
  need_lib_prefix=no
  need_version=no
  hardcode_into_libs=yes
  if test "$host_cpu" = ia64; then
    # AIX 5 supports IA64
    library_names_spec='${libname}${release}${shared_ext}$major ${libname}${release}${shared_ext}$versuffix $libname${shared_ext}'
    shlibpath_var=LD_LIBRARY_PATH
  else
    # With GCC up to 2.95.x, collect2 would create an import file
    # for dependence libraries.  The import file would start with
    # the line `#! .'.  This would cause the generated library to
    # depend on `.', always an invalid library.  This was fixed in
    # development snapshots of GCC prior to 3.0.
    case $host_os in
      aix4 | aix4.[[01]] | aix4.[[01]].*)
      if { echo '#if __GNUC__ > 2 || (__GNUC__ == 2 && __GNUC_MINOR__ >= 97)'
	   echo ' yes '
	   echo '#endif'; } | ${CC} -E - | $GREP yes > /dev/null; then
	:
      else
	can_build_shared=no
      fi
      ;;
    esac
    # AIX (on Power*) has no versioning support, so currently we can not hardcode correct
    # soname into executable. Probably we can add versioning support to
    # collect2, so additional links can be useful in future.
    if test "$aix_use_runtimelinking" = yes; then
      # If using run time linking (on AIX 4.2 or later) use lib<l>.so
      # instead of lib<l>.a to let people know that these are not
      # typical AIX shared libraries.
      library_names_spec='${libname}${release}${shared_ext}$versuffix ${libname}${release}${shared_ext}$major $libname${shared_ext}'
    else
      # We preserve .a as extension for shared libraries through AIX4.2
      # and later when we are not doing run time linking.
      library_names_spec='${libname}${release}.a $libname.a'
      soname_spec='${libname}${release}${shared_ext}$major'
    fi
    shlibpath_var=LIBPATH
  fi
  ;;

amigaos*)
  case $host_cpu in
  powerpc)
    # Since July 2007 AmigaOS4 officially supports .so libraries.
    # When compiling the executable, add -use-dynld -Lsobjs: to the compileline.
    library_names_spec='${libname}${release}${shared_ext}$versuffix ${libname}${release}${shared_ext}$major $libname${shared_ext}'
    ;;
  m68k)
    library_names_spec='$libname.ixlibrary $libname.a'
    # Create ${libname}_ixlibrary.a entries in /sys/libs.
    finish_eval='for lib in `ls $libdir/*.ixlibrary 2>/dev/null`; do libname=`func_echo_all "$lib" | $SED '\''s%^.*/\([[^/]]*\)\.ixlibrary$%\1%'\''`; test $RM /sys/libs/${libname}_ixlibrary.a; $show "cd /sys/libs && $LN_S $lib ${libname}_ixlibrary.a"; cd /sys/libs && $LN_S $lib ${libname}_ixlibrary.a || exit 1; done'
    ;;
  esac
  ;;

beos*)
  library_names_spec='${libname}${shared_ext}'
  dynamic_linker="$host_os ld.so"
  shlibpath_var=LIBRARY_PATH
  ;;

bsdi[[45]]*)
  version_type=linux # correct to gnu/linux during the next big refactor
  need_version=no
  library_names_spec='${libname}${release}${shared_ext}$versuffix ${libname}${release}${shared_ext}$major $libname${shared_ext}'
  soname_spec='${libname}${release}${shared_ext}$major'
  finish_cmds='PATH="\$PATH:/sbin" ldconfig $libdir'
  shlibpath_var=LD_LIBRARY_PATH
  sys_lib_search_path_spec="/shlib /usr/lib /usr/X11/lib /usr/contrib/lib /lib /usr/local/lib"
  sys_lib_dlsearch_path_spec="/shlib /usr/lib /usr/local/lib"
  # the default ld.so.conf also contains /usr/contrib/lib and
  # /usr/X11R6/lib (/usr/X11 is a link to /usr/X11R6), but let us allow
  # libtool to hard-code these into programs
  ;;

cygwin* | mingw* | pw32* | cegcc*)
  version_type=windows
  shrext_cmds=".dll"
  need_version=no
  need_lib_prefix=no

  case $GCC,$cc_basename in
  yes,*)
    # gcc
    library_names_spec='$libname.dll.a'
    # DLL is installed to $(libdir)/../bin by postinstall_cmds
    postinstall_cmds='base_file=`basename \${file}`~
      dlpath=`$SHELL 2>&1 -c '\''. $dir/'\''\${base_file}'\''i; echo \$dlname'\''`~
      dldir=$destdir/`dirname \$dlpath`~
      test -d \$dldir || mkdir -p \$dldir~
      $install_prog $dir/$dlname \$dldir/$dlname~
      chmod a+x \$dldir/$dlname~
      if test -n '\''$stripme'\'' && test -n '\''$striplib'\''; then
        eval '\''$striplib \$dldir/$dlname'\'' || exit \$?;
      fi'
    postuninstall_cmds='dldll=`$SHELL 2>&1 -c '\''. $file; echo \$dlname'\''`~
      dlpath=$dir/\$dldll~
      $RM \$dlpath'
    shlibpath_overrides_runpath=yes

    case $host_os in
    cygwin*)
      # Cygwin DLLs use 'cyg' prefix rather than 'lib'
      soname_spec='`echo ${libname} | sed -e 's/^lib/cyg/'``echo ${release} | $SED -e 's/[[.]]/-/g'`${versuffix}${shared_ext}'
m4_if([$1], [],[
      sys_lib_search_path_spec="$sys_lib_search_path_spec /usr/lib/w32api"])
      ;;
    mingw* | cegcc*)
      # MinGW DLLs use traditional 'lib' prefix
      soname_spec='${libname}`echo ${release} | $SED -e 's/[[.]]/-/g'`${versuffix}${shared_ext}'
      ;;
    pw32*)
      # pw32 DLLs use 'pw' prefix rather than 'lib'
      library_names_spec='`echo ${libname} | sed -e 's/^lib/pw/'``echo ${release} | $SED -e 's/[[.]]/-/g'`${versuffix}${shared_ext}'
      ;;
    esac
    dynamic_linker='Win32 ld.exe'
    ;;

  *,cl*)
    # Native MSVC
    libname_spec='$name'
    soname_spec='${libname}`echo ${release} | $SED -e 's/[[.]]/-/g'`${versuffix}${shared_ext}'
    library_names_spec='${libname}.dll.lib'

    case $build_os in
    mingw*)
      sys_lib_search_path_spec=
      lt_save_ifs=$IFS
      IFS=';'
      for lt_path in $LIB
      do
        IFS=$lt_save_ifs
        # Let DOS variable expansion print the short 8.3 style file name.
        lt_path=`cd "$lt_path" 2>/dev/null && cmd //C "for %i in (".") do @echo %~si"`
        sys_lib_search_path_spec="$sys_lib_search_path_spec $lt_path"
      done
      IFS=$lt_save_ifs
      # Convert to MSYS style.
      sys_lib_search_path_spec=`$ECHO "$sys_lib_search_path_spec" | sed -e 's|\\\\|/|g' -e 's| \\([[a-zA-Z]]\\):| /\\1|g' -e 's|^ ||'`
      ;;
    cygwin*)
      # Convert to unix form, then to dos form, then back to unix form
      # but this time dos style (no spaces!) so that the unix form looks
      # like /cygdrive/c/PROGRA~1:/cygdr...
      sys_lib_search_path_spec=`cygpath --path --unix "$LIB"`
      sys_lib_search_path_spec=`cygpath --path --dos "$sys_lib_search_path_spec" 2>/dev/null`
      sys_lib_search_path_spec=`cygpath --path --unix "$sys_lib_search_path_spec" | $SED -e "s/$PATH_SEPARATOR/ /g"`
      ;;
    *)
      sys_lib_search_path_spec="$LIB"
      if $ECHO "$sys_lib_search_path_spec" | [$GREP ';[c-zC-Z]:/' >/dev/null]; then
        # It is most probably a Windows format PATH.
        sys_lib_search_path_spec=`$ECHO "$sys_lib_search_path_spec" | $SED -e 's/;/ /g'`
      else
        sys_lib_search_path_spec=`$ECHO "$sys_lib_search_path_spec" | $SED -e "s/$PATH_SEPARATOR/ /g"`
      fi
      # FIXME: find the short name or the path components, as spaces are
      # common. (e.g. "Program Files" -> "PROGRA~1")
      ;;
    esac

    # DLL is installed to $(libdir)/../bin by postinstall_cmds
    postinstall_cmds='base_file=`basename \${file}`~
      dlpath=`$SHELL 2>&1 -c '\''. $dir/'\''\${base_file}'\''i; echo \$dlname'\''`~
      dldir=$destdir/`dirname \$dlpath`~
      test -d \$dldir || mkdir -p \$dldir~
      $install_prog $dir/$dlname \$dldir/$dlname'
    postuninstall_cmds='dldll=`$SHELL 2>&1 -c '\''. $file; echo \$dlname'\''`~
      dlpath=$dir/\$dldll~
      $RM \$dlpath'
    shlibpath_overrides_runpath=yes
    dynamic_linker='Win32 link.exe'
    ;;

  *)
    # Assume MSVC wrapper
    library_names_spec='${libname}`echo ${release} | $SED -e 's/[[.]]/-/g'`${versuffix}${shared_ext} $libname.lib'
    dynamic_linker='Win32 ld.exe'
    ;;
  esac
  # FIXME: first we should search . and the directory the executable is in
  shlibpath_var=PATH
  ;;

darwin* | rhapsody*)
  dynamic_linker="$host_os dyld"
  version_type=darwin
  need_lib_prefix=no
  need_version=no
  library_names_spec='${libname}${release}${major}$shared_ext ${libname}$shared_ext'
  soname_spec='${libname}${release}${major}$shared_ext'
  shlibpath_overrides_runpath=yes
  shlibpath_var=DYLD_LIBRARY_PATH
  shrext_cmds='`test .$module = .yes && echo .so || echo .dylib`'
m4_if([$1], [],[
  sys_lib_search_path_spec="$sys_lib_search_path_spec /usr/local/lib"])
  sys_lib_dlsearch_path_spec='/usr/local/lib /lib /usr/lib'
  ;;

dgux*)
  version_type=linux # correct to gnu/linux during the next big refactor
  need_lib_prefix=no
  need_version=no
  library_names_spec='${libname}${release}${shared_ext}$versuffix ${libname}${release}${shared_ext}$major $libname$shared_ext'
  soname_spec='${libname}${release}${shared_ext}$major'
  shlibpath_var=LD_LIBRARY_PATH
  ;;

freebsd* | dragonfly*)
  # DragonFly does not have aout.  When/if they implement a new
  # versioning mechanism, adjust this.
  if test -x /usr/bin/objformat; then
    objformat=`/usr/bin/objformat`
  else
    case $host_os in
    freebsd[[23]].*) objformat=aout ;;
    *) objformat=elf ;;
    esac
  fi
  version_type=freebsd-$objformat
  case $version_type in
    freebsd-elf*)
      library_names_spec='${libname}${release}${shared_ext}$versuffix ${libname}${release}${shared_ext} $libname${shared_ext}'
      need_version=no
      need_lib_prefix=no
      ;;
    freebsd-*)
      library_names_spec='${libname}${release}${shared_ext}$versuffix $libname${shared_ext}$versuffix'
      need_version=yes
      ;;
  esac
  shlibpath_var=LD_LIBRARY_PATH
  case $host_os in
  freebsd2.*)
    shlibpath_overrides_runpath=yes
    ;;
  freebsd3.[[01]]* | freebsdelf3.[[01]]*)
    shlibpath_overrides_runpath=yes
    hardcode_into_libs=yes
    ;;
  freebsd3.[[2-9]]* | freebsdelf3.[[2-9]]* | \
  freebsd4.[[0-5]] | freebsdelf4.[[0-5]] | freebsd4.1.1 | freebsdelf4.1.1)
    shlibpath_overrides_runpath=no
    hardcode_into_libs=yes
    ;;
  *) # from 4.6 on, and DragonFly
    shlibpath_overrides_runpath=yes
    hardcode_into_libs=yes
    ;;
  esac
  ;;

haiku*)
  version_type=linux # correct to gnu/linux during the next big refactor
  need_lib_prefix=no
  need_version=no
  dynamic_linker="$host_os runtime_loader"
  library_names_spec='${libname}${release}${shared_ext}$versuffix ${libname}${release}${shared_ext}${major} ${libname}${shared_ext}'
  soname_spec='${libname}${release}${shared_ext}$major'
  shlibpath_var=LIBRARY_PATH
  shlibpath_overrides_runpath=yes
  sys_lib_dlsearch_path_spec='/boot/home/config/lib /boot/common/lib /boot/system/lib'
  hardcode_into_libs=yes
  ;;

hpux9* | hpux10* | hpux11*)
  # Give a soname corresponding to the major version so that dld.sl refuses to
  # link against other versions.
  version_type=sunos
  need_lib_prefix=no
  need_version=no
  case $host_cpu in
  ia64*)
    shrext_cmds='.so'
    hardcode_into_libs=yes
    dynamic_linker="$host_os dld.so"
    shlibpath_var=LD_LIBRARY_PATH
    shlibpath_overrides_runpath=yes # Unless +noenvvar is specified.
    library_names_spec='${libname}${release}${shared_ext}$versuffix ${libname}${release}${shared_ext}$major $libname${shared_ext}'
    soname_spec='${libname}${release}${shared_ext}$major'
    if test "X$HPUX_IA64_MODE" = X32; then
      sys_lib_search_path_spec="/usr/lib/hpux32 /usr/local/lib/hpux32 /usr/local/lib"
    else
      sys_lib_search_path_spec="/usr/lib/hpux64 /usr/local/lib/hpux64"
    fi
    sys_lib_dlsearch_path_spec=$sys_lib_search_path_spec
    ;;
  hppa*64*)
    shrext_cmds='.sl'
    hardcode_into_libs=yes
    dynamic_linker="$host_os dld.sl"
    shlibpath_var=LD_LIBRARY_PATH # How should we handle SHLIB_PATH
    shlibpath_overrides_runpath=yes # Unless +noenvvar is specified.
    library_names_spec='${libname}${release}${shared_ext}$versuffix ${libname}${release}${shared_ext}$major $libname${shared_ext}'
    soname_spec='${libname}${release}${shared_ext}$major'
    sys_lib_search_path_spec="/usr/lib/pa20_64 /usr/ccs/lib/pa20_64"
    sys_lib_dlsearch_path_spec=$sys_lib_search_path_spec
    ;;
  *)
    shrext_cmds='.sl'
    dynamic_linker="$host_os dld.sl"
    shlibpath_var=SHLIB_PATH
    shlibpath_overrides_runpath=no # +s is required to enable SHLIB_PATH
    library_names_spec='${libname}${release}${shared_ext}$versuffix ${libname}${release}${shared_ext}$major $libname${shared_ext}'
    soname_spec='${libname}${release}${shared_ext}$major'
    ;;
  esac
  # HP-UX runs *really* slowly unless shared libraries are mode 555, ...
  postinstall_cmds='chmod 555 $lib'
  # or fails outright, so override atomically:
  install_override_mode=555
  ;;

interix[[3-9]]*)
  version_type=linux # correct to gnu/linux during the next big refactor
  need_lib_prefix=no
  need_version=no
  library_names_spec='${libname}${release}${shared_ext}$versuffix ${libname}${release}${shared_ext}$major ${libname}${shared_ext}'
  soname_spec='${libname}${release}${shared_ext}$major'
  dynamic_linker='Interix 3.x ld.so.1 (PE, like ELF)'
  shlibpath_var=LD_LIBRARY_PATH
  shlibpath_overrides_runpath=no
  hardcode_into_libs=yes
  ;;

irix5* | irix6* | nonstopux*)
  case $host_os in
    nonstopux*) version_type=nonstopux ;;
    *)
	if test "$lt_cv_prog_gnu_ld" = yes; then
		version_type=linux # correct to gnu/linux during the next big refactor
	else
		version_type=irix
	fi ;;
  esac
  need_lib_prefix=no
  need_version=no
  soname_spec='${libname}${release}${shared_ext}$major'
  library_names_spec='${libname}${release}${shared_ext}$versuffix ${libname}${release}${shared_ext}$major ${libname}${release}${shared_ext} $libname${shared_ext}'
  case $host_os in
  irix5* | nonstopux*)
    libsuff= shlibsuff=
    ;;
  *)
    case $LD in # libtool.m4 will add one of these switches to LD
    *-32|*"-32 "|*-melf32bsmip|*"-melf32bsmip ")
      libsuff= shlibsuff= libmagic=32-bit;;
    *-n32|*"-n32 "|*-melf32bmipn32|*"-melf32bmipn32 ")
      libsuff=32 shlibsuff=N32 libmagic=N32;;
    *-64|*"-64 "|*-melf64bmip|*"-melf64bmip ")
      libsuff=64 shlibsuff=64 libmagic=64-bit;;
    *) libsuff= shlibsuff= libmagic=never-match;;
    esac
    ;;
  esac
  shlibpath_var=LD_LIBRARY${shlibsuff}_PATH
  shlibpath_overrides_runpath=no
  sys_lib_search_path_spec="/usr/lib${libsuff} /lib${libsuff} /usr/local/lib${libsuff}"
  sys_lib_dlsearch_path_spec="/usr/lib${libsuff} /lib${libsuff}"
  hardcode_into_libs=yes
  ;;

# No shared lib support for Linux oldld, aout, or coff.
linux*oldld* | linux*aout* | linux*coff*)
  dynamic_linker=no
  ;;

# This must be glibc/ELF.
linux* | k*bsd*-gnu | kopensolaris*-gnu | gnu*)
  version_type=linux # correct to gnu/linux during the next big refactor
  need_lib_prefix=no
  need_version=no
  library_names_spec='${libname}${release}${shared_ext}$versuffix ${libname}${release}${shared_ext}$major $libname${shared_ext}'
  soname_spec='${libname}${release}${shared_ext}$major'
  finish_cmds='PATH="\$PATH:/sbin" ldconfig -n $libdir'
  shlibpath_var=LD_LIBRARY_PATH
  shlibpath_overrides_runpath=no

  # Some binutils ld are patched to set DT_RUNPATH
  AC_CACHE_VAL([lt_cv_shlibpath_overrides_runpath],
    [lt_cv_shlibpath_overrides_runpath=no
    save_LDFLAGS=$LDFLAGS
    save_libdir=$libdir
    eval "libdir=/foo; wl=\"$_LT_TAGVAR(lt_prog_compiler_wl, $1)\"; \
	 LDFLAGS=\"\$LDFLAGS $_LT_TAGVAR(hardcode_libdir_flag_spec, $1)\""
    AC_LINK_IFELSE([AC_LANG_PROGRAM([],[])],
      [AS_IF([ ($OBJDUMP -p conftest$ac_exeext) 2>/dev/null | grep "RUNPATH.*$libdir" >/dev/null],
	 [lt_cv_shlibpath_overrides_runpath=yes])])
    LDFLAGS=$save_LDFLAGS
    libdir=$save_libdir
    ])
  shlibpath_overrides_runpath=$lt_cv_shlibpath_overrides_runpath

  # This implies no fast_install, which is unacceptable.
  # Some rework will be needed to allow for fast_install
  # before this can be enabled.
  hardcode_into_libs=yes

  # Append ld.so.conf contents to the search path
  if test -f /etc/ld.so.conf; then
    lt_ld_extra=`awk '/^include / { system(sprintf("cd /etc; cat %s 2>/dev/null", \[$]2)); skip = 1; } { if (!skip) print \[$]0; skip = 0; }' < /etc/ld.so.conf | $SED -e 's/#.*//;/^[ ]*hwcap[ ]/d;s/[:, ]/ /g;s/=[^=]*$//;s/=[^= ]* / /g;s/"//g;/^$/d' | tr '\n' ' '`
    sys_lib_dlsearch_path_spec="/lib /usr/lib $lt_ld_extra"
  fi

  # We used to test for /lib/ld.so.1 and disable shared libraries on
  # powerpc, because MkLinux only supported shared libraries with the
  # GNU dynamic linker.  Since this was broken with cross compilers,
  # most powerpc-linux boxes support dynamic linking these days and
  # people can always --disable-shared, the test was removed, and we
  # assume the GNU/Linux dynamic linker is in use.
  dynamic_linker='GNU/Linux ld.so'
  ;;

netbsdelf*-gnu)
  version_type=linux
  need_lib_prefix=no
  need_version=no
  library_names_spec='${libname}${release}${shared_ext}$versuffix ${libname}${release}${shared_ext}$major ${libname}${shared_ext}'
  soname_spec='${libname}${release}${shared_ext}$major'
  shlibpath_var=LD_LIBRARY_PATH
  shlibpath_overrides_runpath=no
  hardcode_into_libs=yes
  dynamic_linker='NetBSD ld.elf_so'
  ;;

netbsd*)
  version_type=sunos
  need_lib_prefix=no
  need_version=no
  if echo __ELF__ | $CC -E - | $GREP __ELF__ >/dev/null; then
    library_names_spec='${libname}${release}${shared_ext}$versuffix ${libname}${shared_ext}$versuffix'
    finish_cmds='PATH="\$PATH:/sbin" ldconfig -m $libdir'
    dynamic_linker='NetBSD (a.out) ld.so'
  else
    library_names_spec='${libname}${release}${shared_ext}$versuffix ${libname}${release}${shared_ext}$major ${libname}${shared_ext}'
    soname_spec='${libname}${release}${shared_ext}$major'
    dynamic_linker='NetBSD ld.elf_so'
  fi
  shlibpath_var=LD_LIBRARY_PATH
  shlibpath_overrides_runpath=yes
  hardcode_into_libs=yes
  ;;

newsos6)
  version_type=linux # correct to gnu/linux during the next big refactor
  library_names_spec='${libname}${release}${shared_ext}$versuffix ${libname}${release}${shared_ext}$major $libname${shared_ext}'
  shlibpath_var=LD_LIBRARY_PATH
  shlibpath_overrides_runpath=yes
  ;;

*nto* | *qnx*)
  version_type=qnx
  need_lib_prefix=no
  need_version=no
  library_names_spec='${libname}${release}${shared_ext}$versuffix ${libname}${release}${shared_ext}$major $libname${shared_ext}'
  soname_spec='${libname}${release}${shared_ext}$major'
  shlibpath_var=LD_LIBRARY_PATH
  shlibpath_overrides_runpath=no
  hardcode_into_libs=yes
  dynamic_linker='ldqnx.so'
  ;;

openbsd*)
  version_type=sunos
  sys_lib_dlsearch_path_spec="/usr/lib"
  need_lib_prefix=no
  # Some older versions of OpenBSD (3.3 at least) *do* need versioned libs.
  case $host_os in
    openbsd3.3 | openbsd3.3.*)	need_version=yes ;;
    *)				need_version=no  ;;
  esac
  library_names_spec='${libname}${release}${shared_ext}$versuffix ${libname}${shared_ext}$versuffix'
  finish_cmds='PATH="\$PATH:/sbin" ldconfig -m $libdir'
  shlibpath_var=LD_LIBRARY_PATH
  if test -z "`echo __ELF__ | $CC -E - | $GREP __ELF__`" || test "$host_os-$host_cpu" = "openbsd2.8-powerpc"; then
    case $host_os in
      openbsd2.[[89]] | openbsd2.[[89]].*)
	shlibpath_overrides_runpath=no
	;;
      *)
	shlibpath_overrides_runpath=yes
	;;
      esac
  else
    shlibpath_overrides_runpath=yes
  fi
  ;;

os2*)
  libname_spec='$name'
  shrext_cmds=".dll"
  need_lib_prefix=no
  library_names_spec='$libname${shared_ext} $libname.a'
  dynamic_linker='OS/2 ld.exe'
  shlibpath_var=LIBPATH
  ;;

osf3* | osf4* | osf5*)
  version_type=osf
  need_lib_prefix=no
  need_version=no
  soname_spec='${libname}${release}${shared_ext}$major'
  library_names_spec='${libname}${release}${shared_ext}$versuffix ${libname}${release}${shared_ext}$major $libname${shared_ext}'
  shlibpath_var=LD_LIBRARY_PATH
  sys_lib_search_path_spec="/usr/shlib /usr/ccs/lib /usr/lib/cmplrs/cc /usr/lib /usr/local/lib /var/shlib"
  sys_lib_dlsearch_path_spec="$sys_lib_search_path_spec"
  ;;

rdos*)
  dynamic_linker=no
  ;;

solaris*)
  version_type=linux # correct to gnu/linux during the next big refactor
  need_lib_prefix=no
  need_version=no
  library_names_spec='${libname}${release}${shared_ext}$versuffix ${libname}${release}${shared_ext}$major $libname${shared_ext}'
  soname_spec='${libname}${release}${shared_ext}$major'
  shlibpath_var=LD_LIBRARY_PATH
  shlibpath_overrides_runpath=yes
  hardcode_into_libs=yes
  # ldd complains unless libraries are executable
  postinstall_cmds='chmod +x $lib'
  ;;

sunos4*)
  version_type=sunos
  library_names_spec='${libname}${release}${shared_ext}$versuffix ${libname}${shared_ext}$versuffix'
  finish_cmds='PATH="\$PATH:/usr/etc" ldconfig $libdir'
  shlibpath_var=LD_LIBRARY_PATH
  shlibpath_overrides_runpath=yes
  if test "$with_gnu_ld" = yes; then
    need_lib_prefix=no
  fi
  need_version=yes
  ;;
sysv4 | sysv4.3*)
  version_type=linux # correct to gnu/linux during the next big refactor
  library_names_spec='${libname}${release}${shared_ext}$versuffix ${libname}${release}${shared_ext}$major $libname${shared_ext}'
  soname_spec='${libname}${release}${shared_ext}$major'
  shlibpath_var=LD_LIBRARY_PATH
  case $host_vendor in
    sni)
      shlibpath_overrides_runpath=no
      need_lib_prefix=no
      runpath_var=LD_RUN_PATH
      ;;
    siemens)
      need_lib_prefix=no
      ;;
    motorola)
      need_lib_prefix=no
      need_version=no
      shlibpath_overrides_runpath=no
      sys_lib_search_path_spec='/lib /usr/lib /usr/ccs/lib'
      ;;
  esac
  ;;

sysv4*MP*)
  if test -d /usr/nec ;then
    version_type=linux # correct to gnu/linux during the next big refactor
    library_names_spec='$libname${shared_ext}.$versuffix $libname${shared_ext}.$major $libname${shared_ext}'
    soname_spec='$libname${shared_ext}.$major'
    shlibpath_var=LD_LIBRARY_PATH
  fi
  ;;

sysv5* | sco3.2v5* | sco5v6* | unixware* | OpenUNIX* | sysv4*uw2*)
  version_type=freebsd-elf
  need_lib_prefix=no
  need_version=no
  library_names_spec='${libname}${release}${shared_ext}$versuffix ${libname}${release}${shared_ext} $libname${shared_ext}'
  soname_spec='${libname}${release}${shared_ext}$major'
  shlibpath_var=LD_LIBRARY_PATH
  shlibpath_overrides_runpath=yes
  hardcode_into_libs=yes
  if test "$with_gnu_ld" = yes; then
    sys_lib_search_path_spec='/usr/local/lib /usr/gnu/lib /usr/ccs/lib /usr/lib /lib'
  else
    sys_lib_search_path_spec='/usr/ccs/lib /usr/lib'
    case $host_os in
      sco3.2v5*)
        sys_lib_search_path_spec="$sys_lib_search_path_spec /lib"
	;;
    esac
  fi
  sys_lib_dlsearch_path_spec='/usr/lib'
  ;;

tpf*)
  # TPF is a cross-target only.  Preferred cross-host = GNU/Linux.
  version_type=linux # correct to gnu/linux during the next big refactor
  need_lib_prefix=no
  need_version=no
  library_names_spec='${libname}${release}${shared_ext}$versuffix ${libname}${release}${shared_ext}$major $libname${shared_ext}'
  shlibpath_var=LD_LIBRARY_PATH
  shlibpath_overrides_runpath=no
  hardcode_into_libs=yes
  ;;

uts4*)
  version_type=linux # correct to gnu/linux during the next big refactor
  library_names_spec='${libname}${release}${shared_ext}$versuffix ${libname}${release}${shared_ext}$major $libname${shared_ext}'
  soname_spec='${libname}${release}${shared_ext}$major'
  shlibpath_var=LD_LIBRARY_PATH
  ;;

*)
  dynamic_linker=no
  ;;
esac
AC_MSG_RESULT([$dynamic_linker])
test "$dynamic_linker" = no && can_build_shared=no

variables_saved_for_relink="PATH $shlibpath_var $runpath_var"
if test "$GCC" = yes; then
  variables_saved_for_relink="$variables_saved_for_relink GCC_EXEC_PREFIX COMPILER_PATH LIBRARY_PATH"
fi

if test "${lt_cv_sys_lib_search_path_spec+set}" = set; then
  sys_lib_search_path_spec="$lt_cv_sys_lib_search_path_spec"
fi
if test "${lt_cv_sys_lib_dlsearch_path_spec+set}" = set; then
  sys_lib_dlsearch_path_spec="$lt_cv_sys_lib_dlsearch_path_spec"
fi

_LT_DECL([], [variables_saved_for_relink], [1],
    [Variables whose values should be saved in libtool wrapper scripts and
    restored at link time])
_LT_DECL([], [need_lib_prefix], [0],
    [Do we need the "lib" prefix for modules?])
_LT_DECL([], [need_version], [0], [Do we need a version for libraries?])
_LT_DECL([], [version_type], [0], [Library versioning type])
_LT_DECL([], [runpath_var], [0], [Shared library runtime path variable])
_LT_DECL([], [shlibpath_var], [0],[Shared library path variable])
_LT_DECL([], [shlibpath_overrides_runpath], [0],
    [Is shlibpath searched before the hard-coded library search path?])
_LT_DECL([], [libname_spec], [1], [Format of library name prefix])
_LT_DECL([], [library_names_spec], [1],
    [[List of archive names.  First name is the real one, the rest are links.
    The last name is the one that the linker finds with -lNAME]])
_LT_DECL([], [soname_spec], [1],
    [[The coded name of the library, if different from the real name]])
_LT_DECL([], [install_override_mode], [1],
    [Permission mode override for installation of shared libraries])
_LT_DECL([], [postinstall_cmds], [2],
    [Command to use after installation of a shared archive])
_LT_DECL([], [postuninstall_cmds], [2],
    [Command to use after uninstallation of a shared archive])
_LT_DECL([], [finish_cmds], [2],
    [Commands used to finish a libtool library installation in a directory])
_LT_DECL([], [finish_eval], [1],
    [[As "finish_cmds", except a single script fragment to be evaled but
    not shown]])
_LT_DECL([], [hardcode_into_libs], [0],
    [Whether we should hardcode library paths into libraries])
_LT_DECL([], [sys_lib_search_path_spec], [2],
    [Compile-time system search path for libraries])
_LT_DECL([], [sys_lib_dlsearch_path_spec], [2],
    [Run-time system search path for libraries])
])# _LT_SYS_DYNAMIC_LINKER


# _LT_PATH_TOOL_PREFIX(TOOL)
# --------------------------
# find a file program which can recognize shared library
AC_DEFUN([_LT_PATH_TOOL_PREFIX],
[m4_require([_LT_DECL_EGREP])dnl
AC_MSG_CHECKING([for $1])
AC_CACHE_VAL(lt_cv_path_MAGIC_CMD,
[case $MAGIC_CMD in
[[\\/*] | ?:[\\/]*])
  lt_cv_path_MAGIC_CMD="$MAGIC_CMD" # Let the user override the test with a path.
  ;;
*)
  lt_save_MAGIC_CMD="$MAGIC_CMD"
  lt_save_ifs="$IFS"; IFS=$PATH_SEPARATOR
dnl $ac_dummy forces splitting on constant user-supplied paths.
dnl POSIX.2 word splitting is done only on the output of word expansions,
dnl not every word.  This closes a longstanding sh security hole.
  ac_dummy="m4_if([$2], , $PATH, [$2])"
  for ac_dir in $ac_dummy; do
    IFS="$lt_save_ifs"
    test -z "$ac_dir" && ac_dir=.
    if test -f $ac_dir/$1; then
      lt_cv_path_MAGIC_CMD="$ac_dir/$1"
      if test -n "$file_magic_test_file"; then
	case $deplibs_check_method in
	"file_magic "*)
	  file_magic_regex=`expr "$deplibs_check_method" : "file_magic \(.*\)"`
	  MAGIC_CMD="$lt_cv_path_MAGIC_CMD"
	  if eval $file_magic_cmd \$file_magic_test_file 2> /dev/null |
	    $EGREP "$file_magic_regex" > /dev/null; then
	    :
	  else
	    cat <<_LT_EOF 1>&2

*** Warning: the command libtool uses to detect shared libraries,
*** $file_magic_cmd, produces output that libtool cannot recognize.
*** The result is that libtool may fail to recognize shared libraries
*** as such.  This will affect the creation of libtool libraries that
*** depend on shared libraries, but programs linked with such libtool
*** libraries will work regardless of this problem.  Nevertheless, you
*** may want to report the problem to your system manager and/or to
*** bug-libtool@gnu.org

_LT_EOF
	  fi ;;
	esac
      fi
      break
    fi
  done
  IFS="$lt_save_ifs"
  MAGIC_CMD="$lt_save_MAGIC_CMD"
  ;;
esac])
MAGIC_CMD="$lt_cv_path_MAGIC_CMD"
if test -n "$MAGIC_CMD"; then
  AC_MSG_RESULT($MAGIC_CMD)
else
  AC_MSG_RESULT(no)
fi
_LT_DECL([], [MAGIC_CMD], [0],
	 [Used to examine libraries when file_magic_cmd begins with "file"])dnl
])# _LT_PATH_TOOL_PREFIX


# Old name:
AU_ALIAS([AC_PATH_TOOL_PREFIX], [_LT_PATH_TOOL_PREFIX])
dnl aclocal-1.4 backwards compatibility:
dnl AC_DEFUN([AC_PATH_TOOL_PREFIX], [])


# _LT_PATH_MAGIC
# --------------
# find a file program which can recognize a shared library
m4_defun([_LT_PATH_MAGIC],
[_LT_PATH_TOOL_PREFIX(${ac_tool_prefix}file, /usr/bin$PATH_SEPARATOR$PATH)
if test -z "$lt_cv_path_MAGIC_CMD"; then
  if test -n "$ac_tool_prefix"; then
    _LT_PATH_TOOL_PREFIX(file, /usr/bin$PATH_SEPARATOR$PATH)
  else
    MAGIC_CMD=:
  fi
fi
])# _LT_PATH_MAGIC


# LT_PATH_LD
# ----------
# find the pathname to the GNU or non-GNU linker
AC_DEFUN([LT_PATH_LD],
[AC_REQUIRE([AC_PROG_CC])dnl
AC_REQUIRE([AC_CANONICAL_HOST])dnl
AC_REQUIRE([AC_CANONICAL_BUILD])dnl
m4_require([_LT_DECL_SED])dnl
m4_require([_LT_DECL_EGREP])dnl
m4_require([_LT_PROG_ECHO_BACKSLASH])dnl
AC_ARG_WITH([gnu-ld],
    [AS_HELP_STRING([--with-gnu-ld],
	[assume the C compiler uses GNU ld @<:@default=no@:>@])],
    [test "$withval" = no || with_gnu_ld=yes],
    [with_gnu_ld=no])dnl

ac_prog=ld
if test "$GCC" = yes; then
  # Check if gcc -print-prog-name=ld gives a path.
  AC_MSG_CHECKING([for ld used by $CC])
  case $host in
  *-*-mingw*)
    # gcc leaves a trailing carriage return which upsets mingw
    ac_prog=`($CC -print-prog-name=ld) 2>&5 | tr -d '\015'` ;;
  *)
    ac_prog=`($CC -print-prog-name=ld) 2>&5` ;;
  esac
  case $ac_prog in
    # Accept absolute paths.
    [[\\/]]* | ?:[[\\/]]*)
      re_direlt='/[[^/]][[^/]]*/\.\./'
      # Canonicalize the pathname of ld
      ac_prog=`$ECHO "$ac_prog"| $SED 's%\\\\%/%g'`
      while $ECHO "$ac_prog" | $GREP "$re_direlt" > /dev/null 2>&1; do
	ac_prog=`$ECHO $ac_prog| $SED "s%$re_direlt%/%"`
      done
      test -z "$LD" && LD="$ac_prog"
      ;;
  "")
    # If it fails, then pretend we aren't using GCC.
    ac_prog=ld
    ;;
  *)
    # If it is relative, then search for the first ld in PATH.
    with_gnu_ld=unknown
    ;;
  esac
elif test "$with_gnu_ld" = yes; then
  AC_MSG_CHECKING([for GNU ld])
else
  AC_MSG_CHECKING([for non-GNU ld])
fi
AC_CACHE_VAL(lt_cv_path_LD,
[if test -z "$LD"; then
  lt_save_ifs="$IFS"; IFS=$PATH_SEPARATOR
  for ac_dir in $PATH; do
    IFS="$lt_save_ifs"
    test -z "$ac_dir" && ac_dir=.
    if test -f "$ac_dir/$ac_prog" || test -f "$ac_dir/$ac_prog$ac_exeext"; then
      lt_cv_path_LD="$ac_dir/$ac_prog"
      # Check to see if the program is GNU ld.  I'd rather use --version,
      # but apparently some variants of GNU ld only accept -v.
      # Break only if it was the GNU/non-GNU ld that we prefer.
      case `"$lt_cv_path_LD" -v 2>&1 </dev/null` in
      *GNU* | *'with BFD'*)
	test "$with_gnu_ld" != no && break
	;;
      *)
	test "$with_gnu_ld" != yes && break
	;;
      esac
    fi
  done
  IFS="$lt_save_ifs"
else
  lt_cv_path_LD="$LD" # Let the user override the test with a path.
fi])
LD="$lt_cv_path_LD"
if test -n "$LD"; then
  AC_MSG_RESULT($LD)
else
  AC_MSG_RESULT(no)
fi
test -z "$LD" && AC_MSG_ERROR([no acceptable ld found in \$PATH])
_LT_PATH_LD_GNU
AC_SUBST([LD])

_LT_TAGDECL([], [LD], [1], [The linker used to build libraries])
])# LT_PATH_LD


# Old names:
AU_ALIAS([AM_PROG_LD], [LT_PATH_LD])
AU_ALIAS([AC_PROG_LD], [LT_PATH_LD])
dnl aclocal-1.4 backwards compatibility:
dnl AC_DEFUN([AM_PROG_LD], [])
dnl AC_DEFUN([AC_PROG_LD], [])


# _LT_PATH_LD_GNU
#- --------------
m4_defun([_LT_PATH_LD_GNU],
[AC_CACHE_CHECK([if the linker ($LD) is GNU ld], lt_cv_prog_gnu_ld,
[# I'd rather use --version here, but apparently some GNU lds only accept -v.
case `$LD -v 2>&1 </dev/null` in
*GNU* | *'with BFD'*)
  lt_cv_prog_gnu_ld=yes
  ;;
*)
  lt_cv_prog_gnu_ld=no
  ;;
esac])
with_gnu_ld=$lt_cv_prog_gnu_ld
])# _LT_PATH_LD_GNU


# _LT_CMD_RELOAD
# --------------
# find reload flag for linker
#   -- PORTME Some linkers may need a different reload flag.
m4_defun([_LT_CMD_RELOAD],
[AC_CACHE_CHECK([for $LD option to reload object files],
  lt_cv_ld_reload_flag,
  [lt_cv_ld_reload_flag='-r'])
reload_flag=$lt_cv_ld_reload_flag
case $reload_flag in
"" | " "*) ;;
*) reload_flag=" $reload_flag" ;;
esac
reload_cmds='$LD$reload_flag -o $output$reload_objs'
case $host_os in
  cygwin* | mingw* | pw32* | cegcc*)
    if test "$GCC" != yes; then
      reload_cmds=false
    fi
    ;;
  darwin*)
    if test "$GCC" = yes; then
      reload_cmds='$LTCC $LTCFLAGS -nostdlib ${wl}-r -o $output$reload_objs'
    else
      reload_cmds='$LD$reload_flag -o $output$reload_objs'
    fi
    ;;
esac
_LT_TAGDECL([], [reload_flag], [1], [How to create reloadable object files])dnl
_LT_TAGDECL([reload_cmds], [reload_cmds], [2])dnl
])# _LT_CMD_RELOAD


# _LT_CHECK_MAGIC_METHOD
# ----------------------
# how to check for library dependencies
#  -- PORTME fill in with the dynamic library characteristics
m4_defun([_LT_CHECK_MAGIC_METHOD],
[m4_require([_LT_DECL_EGREP])
m4_require([_LT_DECL_OBJDUMP])
AC_CACHE_CHECK([how to recognize dependent libraries],
lt_cv_deplibs_check_method,
[lt_cv_file_magic_cmd='$MAGIC_CMD'
lt_cv_file_magic_test_file=
lt_cv_deplibs_check_method='unknown'
# Need to set the preceding variable on all platforms that support
# interlibrary dependencies.
# 'none' -- dependencies not supported.
# `unknown' -- same as none, but documented as 'safe!  no warning'.
# 'pass_all' -- all dependencies passed with no checks.
# 'test_compile' -- check by making test program.
# 'file_magic [[regex]]' -- check by looking for files in library path
# which responds to the $file_magic_cmd with a given extended regex.
# If you have `file' or equivalent on your system and you're not sure
# whether `pass_all' will *always* work, you probably want this one.

case $host_os in
aix[[4-9]]*)
  lt_cv_deplibs_check_method=pass_all
  ;;

beos*)
  lt_cv_deplibs_check_method=pass_all
  ;;

bsdi[[45]]*)
  lt_cv_deplibs_check_method='file_magic ELF [[0-9]][[0-9]]*-bit [[ML]]SB (shared object|dynamic lib)'
  lt_cv_file_magic_cmd='/usr/bin/file -L'
  lt_cv_file_magic_test_file=/shlib/libc.so
  ;;

cygwin*)
  # func_win32_libid is a shell function defined in ltmain.sh
  lt_cv_deplibs_check_method='file_magic ^x86 archive import|^x86 DLL'
  lt_cv_file_magic_cmd='func_win32_libid'
  ;;

mingw* | pw32*)
  # Base MSYS/MinGW do not provide the 'file' command needed by
  # func_win32_libid shell function, so use a weaker test based on 'objdump',
  # unless we find 'file', for example because we are cross-compiling.
  # func_win32_libid assumes BSD nm, so disallow it if using MS dumpbin.
  if ( test "$lt_cv_nm_interface" = "BSD nm" && file / ) >/dev/null 2>&1; then
    lt_cv_deplibs_check_method='file_magic ^x86 archive import|^x86 DLL'
    lt_cv_file_magic_cmd='func_win32_libid'
  else
    # Keep this pattern in sync with the one in func_win32_libid.
    lt_cv_deplibs_check_method='file_magic file format (pei*-i386(.*architecture: i386)?|pe-arm-wince|pe-x86-64)'
    lt_cv_file_magic_cmd='$OBJDUMP -f'
  fi
  ;;

cegcc*)
  # use the weaker test based on 'objdump'. See mingw*.
  lt_cv_deplibs_check_method='file_magic file format pe-arm-.*little(.*architecture: arm)?'
  lt_cv_file_magic_cmd='$OBJDUMP -f'
  ;;

darwin* | rhapsody*)
  lt_cv_deplibs_check_method=pass_all
  ;;

freebsd* | dragonfly*)
  if echo __ELF__ | $CC -E - | $GREP __ELF__ > /dev/null; then
    case $host_cpu in
    i*86 )
      # Not sure whether the presence of OpenBSD here was a mistake.
      # Let's accept both of them until this is cleared up.
      lt_cv_deplibs_check_method='file_magic (FreeBSD|OpenBSD|DragonFly)/i[[3-9]]86 (compact )?demand paged shared library'
      lt_cv_file_magic_cmd=/usr/bin/file
      lt_cv_file_magic_test_file=`echo /usr/lib/libc.so.*`
      ;;
    esac
  else
    lt_cv_deplibs_check_method=pass_all
  fi
  ;;

haiku*)
  lt_cv_deplibs_check_method=pass_all
  ;;

hpux10.20* | hpux11*)
  lt_cv_file_magic_cmd=/usr/bin/file
  case $host_cpu in
  ia64*)
    lt_cv_deplibs_check_method='file_magic (s[[0-9]][[0-9]][[0-9]]|ELF-[[0-9]][[0-9]]) shared object file - IA64'
    lt_cv_file_magic_test_file=/usr/lib/hpux32/libc.so
    ;;
  hppa*64*)
    [lt_cv_deplibs_check_method='file_magic (s[0-9][0-9][0-9]|ELF[ -][0-9][0-9])(-bit)?( [LM]SB)? shared object( file)?[, -]* PA-RISC [0-9]\.[0-9]']
    lt_cv_file_magic_test_file=/usr/lib/pa20_64/libc.sl
    ;;
  *)
    lt_cv_deplibs_check_method='file_magic (s[[0-9]][[0-9]][[0-9]]|PA-RISC[[0-9]]\.[[0-9]]) shared library'
    lt_cv_file_magic_test_file=/usr/lib/libc.sl
    ;;
  esac
  ;;

interix[[3-9]]*)
  # PIC code is broken on Interix 3.x, that's why |\.a not |_pic\.a here
  lt_cv_deplibs_check_method='match_pattern /lib[[^/]]+(\.so|\.a)$'
  ;;

irix5* | irix6* | nonstopux*)
  case $LD in
  *-32|*"-32 ") libmagic=32-bit;;
  *-n32|*"-n32 ") libmagic=N32;;
  *-64|*"-64 ") libmagic=64-bit;;
  *) libmagic=never-match;;
  esac
  lt_cv_deplibs_check_method=pass_all
  ;;

# This must be glibc/ELF.
linux* | k*bsd*-gnu | kopensolaris*-gnu | gnu*) lt_cv_deplibs_check_method=pass_all ;; netbsd* | netbsdelf*-gnu) if echo __ELF__ | $CC -E - | $GREP __ELF__ > /dev/null; then lt_cv_deplibs_check_method='match_pattern /lib[[^/]]+(\.so\.[[0-9]]+\.[[0-9]]+|_pic\.a)$' else lt_cv_deplibs_check_method='match_pattern /lib[[^/]]+(\.so|_pic\.a)$' fi ;; newos6*) lt_cv_deplibs_check_method='file_magic ELF [[0-9]][[0-9]]*-bit [[ML]]SB (executable|dynamic lib)' lt_cv_file_magic_cmd=/usr/bin/file lt_cv_file_magic_test_file=/usr/lib/libnls.so ;; *nto* | *qnx*) lt_cv_deplibs_check_method=pass_all ;; openbsd*) if test -z "`echo __ELF__ | $CC -E - | $GREP __ELF__`" || test "$host_os-$host_cpu" = "openbsd2.8-powerpc"; then lt_cv_deplibs_check_method='match_pattern /lib[[^/]]+(\.so\.[[0-9]]+\.[[0-9]]+|\.so|_pic\.a)$' else lt_cv_deplibs_check_method='match_pattern /lib[[^/]]+(\.so\.[[0-9]]+\.[[0-9]]+|_pic\.a)$' fi ;; osf3* | osf4* | osf5*) lt_cv_deplibs_check_method=pass_all ;; rdos*) lt_cv_deplibs_check_method=pass_all ;; solaris*) lt_cv_deplibs_check_method=pass_all ;; sysv5* | sco3.2v5* | sco5v6* | unixware* | OpenUNIX* | sysv4*uw2*) lt_cv_deplibs_check_method=pass_all ;; sysv4 | sysv4.3*) case $host_vendor in motorola) lt_cv_deplibs_check_method='file_magic ELF [[0-9]][[0-9]]*-bit [[ML]]SB (shared object|dynamic lib) M[[0-9]][[0-9]]* Version [[0-9]]' lt_cv_file_magic_test_file=`echo /usr/lib/libc.so*` ;; ncr) lt_cv_deplibs_check_method=pass_all ;; sequent) lt_cv_file_magic_cmd='/bin/file' lt_cv_deplibs_check_method='file_magic ELF [[0-9]][[0-9]]*-bit [[LM]]SB (shared object|dynamic lib )' ;; sni) lt_cv_file_magic_cmd='/bin/file' lt_cv_deplibs_check_method="file_magic ELF [[0-9]][[0-9]]*-bit [[LM]]SB dynamic lib" lt_cv_file_magic_test_file=/lib/libc.so ;; siemens) lt_cv_deplibs_check_method=pass_all ;; pc) lt_cv_deplibs_check_method=pass_all ;; esac ;; tpf*) lt_cv_deplibs_check_method=pass_all ;; esac ]) file_magic_glob= want_nocaseglob=no if test "$build" = "$host"; then case 
$host_os in mingw* | pw32*) if ( shopt | grep nocaseglob ) >/dev/null 2>&1; then want_nocaseglob=yes else file_magic_glob=`echo aAbBcCdDeEfFgGhHiIjJkKlLmMnNoOpPqQrRsStTuUvVwWxXyYzZ | $SED -e "s/\(..\)/s\/[[\1]]\/[[\1]]\/g;/g"` fi ;; esac fi file_magic_cmd=$lt_cv_file_magic_cmd deplibs_check_method=$lt_cv_deplibs_check_method test -z "$deplibs_check_method" && deplibs_check_method=unknown _LT_DECL([], [deplibs_check_method], [1], [Method to check whether dependent libraries are shared objects]) _LT_DECL([], [file_magic_cmd], [1], [Command to use when deplibs_check_method = "file_magic"]) _LT_DECL([], [file_magic_glob], [1], [How to find potential files when deplibs_check_method = "file_magic"]) _LT_DECL([], [want_nocaseglob], [1], [Find potential files using nocaseglob when deplibs_check_method = "file_magic"]) ])# _LT_CHECK_MAGIC_METHOD # LT_PATH_NM # ---------- # find the pathname to a BSD- or MS-compatible name lister AC_DEFUN([LT_PATH_NM], [AC_REQUIRE([AC_PROG_CC])dnl AC_CACHE_CHECK([for BSD- or MS-compatible name lister (nm)], lt_cv_path_NM, [if test -n "$NM"; then # Let the user override the test. lt_cv_path_NM="$NM" else lt_nm_to_check="${ac_tool_prefix}nm" if test -n "$ac_tool_prefix" && test "$build" = "$host"; then lt_nm_to_check="$lt_nm_to_check nm" fi for lt_tmp_nm in $lt_nm_to_check; do lt_save_ifs="$IFS"; IFS=$PATH_SEPARATOR for ac_dir in $PATH /usr/ccs/bin/elf /usr/ccs/bin /usr/ucb /bin; do IFS="$lt_save_ifs" test -z "$ac_dir" && ac_dir=. tmp_nm="$ac_dir/$lt_tmp_nm" if test -f "$tmp_nm" || test -f "$tmp_nm$ac_exeext" ; then # Check to see if the nm accepts a BSD-compat flag. 
# Adding the `sed 1q' prevents false positives on HP-UX, which says: # nm: unknown option "B" ignored # Tru64's nm complains that /dev/null is an invalid object file case `"$tmp_nm" -B /dev/null 2>&1 | sed '1q'` in */dev/null* | *'Invalid file or object type'*) lt_cv_path_NM="$tmp_nm -B" break ;; *) case `"$tmp_nm" -p /dev/null 2>&1 | sed '1q'` in */dev/null*) lt_cv_path_NM="$tmp_nm -p" break ;; *) lt_cv_path_NM=${lt_cv_path_NM="$tmp_nm"} # keep the first match, but continue # so that we can try to find one that supports BSD flags ;; esac ;; esac fi done IFS="$lt_save_ifs" done : ${lt_cv_path_NM=no} fi]) if test "$lt_cv_path_NM" != "no"; then NM="$lt_cv_path_NM" else # Didn't find any BSD compatible name lister, look for dumpbin. if test -n "$DUMPBIN"; then : # Let the user override the test. else AC_CHECK_TOOLS(DUMPBIN, [dumpbin "link -dump"], :) case `$DUMPBIN -symbols /dev/null 2>&1 | sed '1q'` in *COFF*) DUMPBIN="$DUMPBIN -symbols" ;; *) DUMPBIN=: ;; esac fi AC_SUBST([DUMPBIN]) if test "$DUMPBIN" != ":"; then NM="$DUMPBIN" fi fi test -z "$NM" && NM=nm AC_SUBST([NM]) _LT_DECL([], [NM], [1], [A BSD- or MS-compatible name lister])dnl AC_CACHE_CHECK([the name lister ($NM) interface], [lt_cv_nm_interface], [lt_cv_nm_interface="BSD nm" echo "int some_variable = 0;" > conftest.$ac_ext (eval echo "\"\$as_me:$LINENO: $ac_compile\"" >&AS_MESSAGE_LOG_FD) (eval "$ac_compile" 2>conftest.err) cat conftest.err >&AS_MESSAGE_LOG_FD (eval echo "\"\$as_me:$LINENO: $NM \\\"conftest.$ac_objext\\\"\"" >&AS_MESSAGE_LOG_FD) (eval "$NM \"conftest.$ac_objext\"" 2>conftest.err > conftest.out) cat conftest.err >&AS_MESSAGE_LOG_FD (eval echo "\"\$as_me:$LINENO: output\"" >&AS_MESSAGE_LOG_FD) cat conftest.out >&AS_MESSAGE_LOG_FD if $GREP 'External.*some_variable' conftest.out > /dev/null; then lt_cv_nm_interface="MS dumpbin" fi rm -f conftest*]) ])# LT_PATH_NM # Old names: AU_ALIAS([AM_PROG_NM], [LT_PATH_NM]) AU_ALIAS([AC_PROG_NM], [LT_PATH_NM]) dnl aclocal-1.4 backwards compatibility: dnl 
AC_DEFUN([AM_PROG_NM], []) dnl AC_DEFUN([AC_PROG_NM], []) # _LT_CHECK_SHAREDLIB_FROM_LINKLIB # -------------------------------- # how to determine the name of the shared library # associated with a specific link library. # -- PORTME fill in with the dynamic library characteristics m4_defun([_LT_CHECK_SHAREDLIB_FROM_LINKLIB], [m4_require([_LT_DECL_EGREP]) m4_require([_LT_DECL_OBJDUMP]) m4_require([_LT_DECL_DLLTOOL]) AC_CACHE_CHECK([how to associate runtime and link libraries], lt_cv_sharedlib_from_linklib_cmd, [lt_cv_sharedlib_from_linklib_cmd='unknown' case $host_os in cygwin* | mingw* | pw32* | cegcc*) # two different shell functions defined in ltmain.sh # decide which to use based on capabilities of $DLLTOOL case `$DLLTOOL --help 2>&1` in *--identify-strict*) lt_cv_sharedlib_from_linklib_cmd=func_cygming_dll_for_implib ;; *) lt_cv_sharedlib_from_linklib_cmd=func_cygming_dll_for_implib_fallback ;; esac ;; *) # fallback: assume linklib IS sharedlib lt_cv_sharedlib_from_linklib_cmd="$ECHO" ;; esac ]) sharedlib_from_linklib_cmd=$lt_cv_sharedlib_from_linklib_cmd test -z "$sharedlib_from_linklib_cmd" && sharedlib_from_linklib_cmd=$ECHO _LT_DECL([], [sharedlib_from_linklib_cmd], [1], [Command to associate shared and link libraries]) ])# _LT_CHECK_SHAREDLIB_FROM_LINKLIB # _LT_PATH_MANIFEST_TOOL # ---------------------- # locate the manifest tool m4_defun([_LT_PATH_MANIFEST_TOOL], [AC_CHECK_TOOL(MANIFEST_TOOL, mt, :) test -z "$MANIFEST_TOOL" && MANIFEST_TOOL=mt AC_CACHE_CHECK([if $MANIFEST_TOOL is a manifest tool], [lt_cv_path_mainfest_tool], [lt_cv_path_mainfest_tool=no echo "$as_me:$LINENO: $MANIFEST_TOOL '-?'" >&AS_MESSAGE_LOG_FD $MANIFEST_TOOL '-?' 
2>conftest.err > conftest.out cat conftest.err >&AS_MESSAGE_LOG_FD if $GREP 'Manifest Tool' conftest.out > /dev/null; then lt_cv_path_mainfest_tool=yes fi rm -f conftest*]) if test "x$lt_cv_path_mainfest_tool" != xyes; then MANIFEST_TOOL=: fi _LT_DECL([], [MANIFEST_TOOL], [1], [Manifest tool])dnl ])# _LT_PATH_MANIFEST_TOOL # LT_LIB_M # -------- # check for math library AC_DEFUN([LT_LIB_M], [AC_REQUIRE([AC_CANONICAL_HOST])dnl LIBM= case $host in *-*-beos* | *-*-cegcc* | *-*-cygwin* | *-*-haiku* | *-*-pw32* | *-*-darwin*) # These systems don't have libm, or don't need it ;; *-ncr-sysv4.3*) AC_CHECK_LIB(mw, _mwvalidcheckl, LIBM="-lmw") AC_CHECK_LIB(m, cos, LIBM="$LIBM -lm") ;; *) AC_CHECK_LIB(m, cos, LIBM="-lm") ;; esac AC_SUBST([LIBM]) ])# LT_LIB_M # Old name: AU_ALIAS([AC_CHECK_LIBM], [LT_LIB_M]) dnl aclocal-1.4 backwards compatibility: dnl AC_DEFUN([AC_CHECK_LIBM], []) # _LT_COMPILER_NO_RTTI([TAGNAME]) # ------------------------------- m4_defun([_LT_COMPILER_NO_RTTI], [m4_require([_LT_TAG_COMPILER])dnl _LT_TAGVAR(lt_prog_compiler_no_builtin_flag, $1)= if test "$GCC" = yes; then case $cc_basename in nvcc*) _LT_TAGVAR(lt_prog_compiler_no_builtin_flag, $1)=' -Xcompiler -fno-builtin' ;; *) _LT_TAGVAR(lt_prog_compiler_no_builtin_flag, $1)=' -fno-builtin' ;; esac _LT_COMPILER_OPTION([if $compiler supports -fno-rtti -fno-exceptions], lt_cv_prog_compiler_rtti_exceptions, [-fno-rtti -fno-exceptions], [], [_LT_TAGVAR(lt_prog_compiler_no_builtin_flag, $1)="$_LT_TAGVAR(lt_prog_compiler_no_builtin_flag, $1) -fno-rtti -fno-exceptions"]) fi _LT_TAGDECL([no_builtin_flag], [lt_prog_compiler_no_builtin_flag], [1], [Compiler flag to turn off builtin functions]) ])# _LT_COMPILER_NO_RTTI # _LT_CMD_GLOBAL_SYMBOLS # ---------------------- m4_defun([_LT_CMD_GLOBAL_SYMBOLS], [AC_REQUIRE([AC_CANONICAL_HOST])dnl AC_REQUIRE([AC_PROG_CC])dnl AC_REQUIRE([AC_PROG_AWK])dnl AC_REQUIRE([LT_PATH_NM])dnl AC_REQUIRE([LT_PATH_LD])dnl m4_require([_LT_DECL_SED])dnl m4_require([_LT_DECL_EGREP])dnl 
m4_require([_LT_TAG_COMPILER])dnl # Check for command to grab the raw symbol name followed by C symbol from nm. AC_MSG_CHECKING([command to parse $NM output from $compiler object]) AC_CACHE_VAL([lt_cv_sys_global_symbol_pipe], [ # These are sane defaults that work on at least a few old systems. # [They come from Ultrix. What could be older than Ultrix?!! ;)] # Character class describing NM global symbol codes. symcode='[[BCDEGRST]]' # Regexp to match symbols that can be accessed directly from C. sympat='\([[_A-Za-z]][[_A-Za-z0-9]]*\)' # Define system-specific variables. case $host_os in aix*) symcode='[[BCDT]]' ;; cygwin* | mingw* | pw32* | cegcc*) symcode='[[ABCDGISTW]]' ;; hpux*) if test "$host_cpu" = ia64; then symcode='[[ABCDEGRST]]' fi ;; irix* | nonstopux*) symcode='[[BCDEGRST]]' ;; osf*) symcode='[[BCDEGQRST]]' ;; solaris*) symcode='[[BDRT]]' ;; sco3.2v5*) symcode='[[DT]]' ;; sysv4.2uw2*) symcode='[[DT]]' ;; sysv5* | sco5v6* | unixware* | OpenUNIX*) symcode='[[ABDT]]' ;; sysv4) symcode='[[DFNSTU]]' ;; esac # If we're using GNU nm, then use its standard symbol codes. case `$NM -V 2>&1` in *GNU* | *'with BFD'*) symcode='[[ABCDGIRSTW]]' ;; esac # Transform an extracted symbol line into a proper C declaration. # Some systems (esp. on ia64) link data and code symbols differently, # so use this general approach. 
lt_cv_sys_global_symbol_to_cdecl="sed -n -e 's/^T .* \(.*\)$/extern int \1();/p' -e 's/^$symcode* .* \(.*\)$/extern char \1;/p'" # Transform an extracted symbol line into symbol name and symbol address lt_cv_sys_global_symbol_to_c_name_address="sed -n -e 's/^: \([[^ ]]*\)[[ ]]*$/ {\\\"\1\\\", (void *) 0},/p' -e 's/^$symcode* \([[^ ]]*\) \([[^ ]]*\)$/ {\"\2\", (void *) \&\2},/p'" lt_cv_sys_global_symbol_to_c_name_address_lib_prefix="sed -n -e 's/^: \([[^ ]]*\)[[ ]]*$/ {\\\"\1\\\", (void *) 0},/p' -e 's/^$symcode* \([[^ ]]*\) \(lib[[^ ]]*\)$/ {\"\2\", (void *) \&\2},/p' -e 's/^$symcode* \([[^ ]]*\) \([[^ ]]*\)$/ {\"lib\2\", (void *) \&\2},/p'" # Handle CRLF in mingw tool chain opt_cr= case $build_os in mingw*) opt_cr=`$ECHO 'x\{0,1\}' | tr x '\015'` # option cr in regexp ;; esac # Try without a prefix underscore, then with it. for ac_symprfx in "" "_"; do # Transform symcode, sympat, and symprfx into a raw symbol and a C symbol. symxfrm="\\1 $ac_symprfx\\2 \\2" # Write the raw and C identifiers. if test "$lt_cv_nm_interface" = "MS dumpbin"; then # Fake it for dumpbin and say T for any non-static function # and D for any global variable. # Also find C++ and __fastcall symbols from MSVC++, # which start with @ or ?. lt_cv_sys_global_symbol_pipe="$AWK ['"\ " {last_section=section; section=\$ 3};"\ " /^COFF SYMBOL TABLE/{for(i in hide) delete hide[i]};"\ " /Section length .*#relocs.*(pick any)/{hide[last_section]=1};"\ " \$ 0!~/External *\|/{next};"\ " / 0+ UNDEF /{next}; / UNDEF \([^|]\)*()/{next};"\ " {if(hide[section]) next};"\ " {f=0}; \$ 0~/\(\).*\|/{f=1}; {printf f ? 
\"T \" : \"D \"};"\ " {split(\$ 0, a, /\||\r/); split(a[2], s)};"\ " s[1]~/^[@?]/{print s[1], s[1]; next};"\ " s[1]~prfx {split(s[1],t,\"@\"); print t[1], substr(t[1],length(prfx))}"\ " ' prfx=^$ac_symprfx]" else lt_cv_sys_global_symbol_pipe="sed -n -e 's/^.*[[ ]]\($symcode$symcode*\)[[ ]][[ ]]*$ac_symprfx$sympat$opt_cr$/$symxfrm/p'" fi lt_cv_sys_global_symbol_pipe="$lt_cv_sys_global_symbol_pipe | sed '/ __gnu_lto/d'" # Check to see that the pipe works correctly. pipe_works=no rm -f conftest* cat > conftest.$ac_ext <<_LT_EOF #ifdef __cplusplus extern "C" { #endif char nm_test_var; void nm_test_func(void); void nm_test_func(void){} #ifdef __cplusplus } #endif int main(){nm_test_var='a';nm_test_func();return(0);} _LT_EOF if AC_TRY_EVAL(ac_compile); then # Now try to grab the symbols. nlist=conftest.nm if AC_TRY_EVAL(NM conftest.$ac_objext \| "$lt_cv_sys_global_symbol_pipe" \> $nlist) && test -s "$nlist"; then # Try sorting and uniquifying the output. if sort "$nlist" | uniq > "$nlist"T; then mv -f "$nlist"T "$nlist" else rm -f "$nlist"T fi # Make sure that we snagged all the symbols we need. if $GREP ' nm_test_var$' "$nlist" >/dev/null; then if $GREP ' nm_test_func$' "$nlist" >/dev/null; then cat <<_LT_EOF > conftest.$ac_ext /* Keep this code in sync between libtool.m4, ltmain, lt_system.h, and tests. */ #if defined(_WIN32) || defined(__CYGWIN__) || defined(_WIN32_WCE) /* DATA imports from DLLs on WIN32 can't be const, because runtime relocations are performed -- see ld's documentation on pseudo-relocs. */ # define LT@&t@_DLSYM_CONST #elif defined(__osf__) /* This system does not cope well with relocations in const data. */ # define LT@&t@_DLSYM_CONST #else # define LT@&t@_DLSYM_CONST const #endif #ifdef __cplusplus extern "C" { #endif _LT_EOF # Now generate the symbol file. eval "$lt_cv_sys_global_symbol_to_cdecl"' < "$nlist" | $GREP -v main >> conftest.$ac_ext' cat <<_LT_EOF >> conftest.$ac_ext /* The mapping between symbol names and symbols. 
*/ LT@&t@_DLSYM_CONST struct { const char *name; void *address; } lt__PROGRAM__LTX_preloaded_symbols[[]] = { { "@PROGRAM@", (void *) 0 }, _LT_EOF $SED "s/^$symcode$symcode* \(.*\) \(.*\)$/ {\"\2\", (void *) \&\2},/" < "$nlist" | $GREP -v main >> conftest.$ac_ext cat <<\_LT_EOF >> conftest.$ac_ext {0, (void *) 0} }; /* This works around a problem in FreeBSD linker */ #ifdef FREEBSD_WORKAROUND static const void *lt_preloaded_setup() { return lt__PROGRAM__LTX_preloaded_symbols; } #endif #ifdef __cplusplus } #endif _LT_EOF # Now try linking the two files. mv conftest.$ac_objext conftstm.$ac_objext lt_globsym_save_LIBS=$LIBS lt_globsym_save_CFLAGS=$CFLAGS LIBS="conftstm.$ac_objext" CFLAGS="$CFLAGS$_LT_TAGVAR(lt_prog_compiler_no_builtin_flag, $1)" if AC_TRY_EVAL(ac_link) && test -s conftest${ac_exeext}; then pipe_works=yes fi LIBS=$lt_globsym_save_LIBS CFLAGS=$lt_globsym_save_CFLAGS else echo "cannot find nm_test_func in $nlist" >&AS_MESSAGE_LOG_FD fi else echo "cannot find nm_test_var in $nlist" >&AS_MESSAGE_LOG_FD fi else echo "cannot run $lt_cv_sys_global_symbol_pipe" >&AS_MESSAGE_LOG_FD fi else echo "$progname: failed program was:" >&AS_MESSAGE_LOG_FD cat conftest.$ac_ext >&5 fi rm -rf conftest* conftst* # Do not use the global_symbol_pipe unless it works. if test "$pipe_works" = yes; then break else lt_cv_sys_global_symbol_pipe= fi done ]) if test -z "$lt_cv_sys_global_symbol_pipe"; then lt_cv_sys_global_symbol_to_cdecl= fi if test -z "$lt_cv_sys_global_symbol_pipe$lt_cv_sys_global_symbol_to_cdecl"; then AC_MSG_RESULT(failed) else AC_MSG_RESULT(ok) fi # Response file support. 
if test "$lt_cv_nm_interface" = "MS dumpbin"; then nm_file_list_spec='@' elif $NM --help 2>/dev/null | grep '[[@]]FILE' >/dev/null; then nm_file_list_spec='@' fi _LT_DECL([global_symbol_pipe], [lt_cv_sys_global_symbol_pipe], [1], [Take the output of nm and produce a listing of raw symbols and C names]) _LT_DECL([global_symbol_to_cdecl], [lt_cv_sys_global_symbol_to_cdecl], [1], [Transform the output of nm in a proper C declaration]) _LT_DECL([global_symbol_to_c_name_address], [lt_cv_sys_global_symbol_to_c_name_address], [1], [Transform the output of nm in a C name address pair]) _LT_DECL([global_symbol_to_c_name_address_lib_prefix], [lt_cv_sys_global_symbol_to_c_name_address_lib_prefix], [1], [Transform the output of nm in a C name address pair when lib prefix is needed]) _LT_DECL([], [nm_file_list_spec], [1], [Specify filename containing input files for $NM]) ]) # _LT_CMD_GLOBAL_SYMBOLS # _LT_COMPILER_PIC([TAGNAME]) # --------------------------- m4_defun([_LT_COMPILER_PIC], [m4_require([_LT_TAG_COMPILER])dnl _LT_TAGVAR(lt_prog_compiler_wl, $1)= _LT_TAGVAR(lt_prog_compiler_pic, $1)= _LT_TAGVAR(lt_prog_compiler_static, $1)= m4_if([$1], [CXX], [ # C++ specific cases for pic, static, wl, etc. if test "$GXX" = yes; then _LT_TAGVAR(lt_prog_compiler_wl, $1)='-Wl,' _LT_TAGVAR(lt_prog_compiler_static, $1)='-static' case $host_os in aix*) # All AIX code is PIC. if test "$host_cpu" = ia64; then # AIX 5 now supports IA64 processor _LT_TAGVAR(lt_prog_compiler_static, $1)='-Bstatic' fi ;; amigaos*) case $host_cpu in powerpc) # see comment about AmigaOS4 .so support _LT_TAGVAR(lt_prog_compiler_pic, $1)='-fPIC' ;; m68k) # FIXME: we need at least 68020 code to build shared libraries, but # adding the `-m68020' flag to GCC prevents building anything better, # like `-m68040'. _LT_TAGVAR(lt_prog_compiler_pic, $1)='-m68020 -resident32 -malways-restore-a4' ;; esac ;; beos* | irix5* | irix6* | nonstopux* | osf3* | osf4* | osf5*) # PIC is the default for these OSes. 
;; mingw* | cygwin* | os2* | pw32* | cegcc*) # This hack is so that the source file can tell whether it is being # built for inclusion in a dll (and should export symbols for example). # Although the cygwin gcc ignores -fPIC, still need this for old-style # (--disable-auto-import) libraries m4_if([$1], [GCJ], [], [_LT_TAGVAR(lt_prog_compiler_pic, $1)='-DDLL_EXPORT']) ;; darwin* | rhapsody*) # PIC is the default on this platform # Common symbols not allowed in MH_DYLIB files _LT_TAGVAR(lt_prog_compiler_pic, $1)='-fno-common' ;; *djgpp*) # DJGPP does not support shared libraries at all _LT_TAGVAR(lt_prog_compiler_pic, $1)= ;; haiku*) # PIC is the default for Haiku. # The "-static" flag exists, but is broken. _LT_TAGVAR(lt_prog_compiler_static, $1)= ;; interix[[3-9]]*) # Interix 3.x gcc -fpic/-fPIC options generate broken code. # Instead, we relocate shared libraries at runtime. ;; sysv4*MP*) if test -d /usr/nec; then _LT_TAGVAR(lt_prog_compiler_pic, $1)=-Kconform_pic fi ;; hpux*) # PIC is the default for 64-bit PA HP-UX, but not for 32-bit # PA HP-UX. On IA64 HP-UX, PIC is the default but the pic flag # sets the default TLS model and affects inlining. case $host_cpu in hppa*64*) ;; *) _LT_TAGVAR(lt_prog_compiler_pic, $1)='-fPIC' ;; esac ;; *qnx* | *nto*) # QNX uses GNU C++, but need to define -shared option too, otherwise # it will coredump. _LT_TAGVAR(lt_prog_compiler_pic, $1)='-fPIC -shared' ;; *) _LT_TAGVAR(lt_prog_compiler_pic, $1)='-fPIC' ;; esac else case $host_os in aix[[4-9]]*) # All AIX code is PIC. 
if test "$host_cpu" = ia64; then # AIX 5 now supports IA64 processor _LT_TAGVAR(lt_prog_compiler_static, $1)='-Bstatic' else _LT_TAGVAR(lt_prog_compiler_static, $1)='-bnso -bI:/lib/syscalls.exp' fi ;; chorus*) case $cc_basename in cxch68*) # Green Hills C++ Compiler # _LT_TAGVAR(lt_prog_compiler_static, $1)="--no_auto_instantiation -u __main -u __premain -u _abort -r $COOL_DIR/lib/libOrb.a $MVME_DIR/lib/CC/libC.a $MVME_DIR/lib/classix/libcx.s.a" ;; esac ;; mingw* | cygwin* | os2* | pw32* | cegcc*) # This hack is so that the source file can tell whether it is being # built for inclusion in a dll (and should export symbols for example). m4_if([$1], [GCJ], [], [_LT_TAGVAR(lt_prog_compiler_pic, $1)='-DDLL_EXPORT']) ;; dgux*) case $cc_basename in ec++*) _LT_TAGVAR(lt_prog_compiler_pic, $1)='-KPIC' ;; ghcx*) # Green Hills C++ Compiler _LT_TAGVAR(lt_prog_compiler_pic, $1)='-pic' ;; *) ;; esac ;; freebsd* | dragonfly*) # FreeBSD uses GNU C++ ;; hpux9* | hpux10* | hpux11*) case $cc_basename in CC*) _LT_TAGVAR(lt_prog_compiler_wl, $1)='-Wl,' _LT_TAGVAR(lt_prog_compiler_static, $1)='${wl}-a ${wl}archive' if test "$host_cpu" != ia64; then _LT_TAGVAR(lt_prog_compiler_pic, $1)='+Z' fi ;; aCC*) _LT_TAGVAR(lt_prog_compiler_wl, $1)='-Wl,' _LT_TAGVAR(lt_prog_compiler_static, $1)='${wl}-a ${wl}archive' case $host_cpu in hppa*64*|ia64*) # +Z the default ;; *) _LT_TAGVAR(lt_prog_compiler_pic, $1)='+Z' ;; esac ;; *) ;; esac ;; interix*) # This is c89, which is MS Visual C++ (no shared libs) # Anyone wants to do a port? ;; irix5* | irix6* | nonstopux*) case $cc_basename in CC*) _LT_TAGVAR(lt_prog_compiler_wl, $1)='-Wl,' _LT_TAGVAR(lt_prog_compiler_static, $1)='-non_shared' # CC pic flag -KPIC is the default. ;; *) ;; esac ;; linux* | k*bsd*-gnu | kopensolaris*-gnu | gnu*) case $cc_basename in KCC*) # KAI C++ Compiler _LT_TAGVAR(lt_prog_compiler_wl, $1)='--backend -Wl,' _LT_TAGVAR(lt_prog_compiler_pic, $1)='-fPIC' ;; ecpc* ) # old Intel C++ for x86_64 which still supported -KPIC. 
_LT_TAGVAR(lt_prog_compiler_wl, $1)='-Wl,' _LT_TAGVAR(lt_prog_compiler_pic, $1)='-KPIC' _LT_TAGVAR(lt_prog_compiler_static, $1)='-static' ;; icpc* ) # Intel C++, used to be incompatible with GCC. # ICC 10 doesn't accept -KPIC any more. _LT_TAGVAR(lt_prog_compiler_wl, $1)='-Wl,' _LT_TAGVAR(lt_prog_compiler_pic, $1)='-fPIC' _LT_TAGVAR(lt_prog_compiler_static, $1)='-static' ;; pgCC* | pgcpp*) # Portland Group C++ compiler _LT_TAGVAR(lt_prog_compiler_wl, $1)='-Wl,' _LT_TAGVAR(lt_prog_compiler_pic, $1)='-fpic' _LT_TAGVAR(lt_prog_compiler_static, $1)='-Bstatic' ;; cxx*) # Compaq C++ # Make sure the PIC flag is empty. It appears that all Alpha # Linux and Compaq Tru64 Unix objects are PIC. _LT_TAGVAR(lt_prog_compiler_pic, $1)= _LT_TAGVAR(lt_prog_compiler_static, $1)='-non_shared' ;; xlc* | xlC* | bgxl[[cC]]* | mpixl[[cC]]*) # IBM XL 8.0, 9.0 on PPC and BlueGene _LT_TAGVAR(lt_prog_compiler_wl, $1)='-Wl,' _LT_TAGVAR(lt_prog_compiler_pic, $1)='-qpic' _LT_TAGVAR(lt_prog_compiler_static, $1)='-qstaticlink' ;; *) case `$CC -V 2>&1 | sed 5q` in *Sun\ C*) # Sun C++ 5.9 _LT_TAGVAR(lt_prog_compiler_pic, $1)='-KPIC' _LT_TAGVAR(lt_prog_compiler_static, $1)='-Bstatic' _LT_TAGVAR(lt_prog_compiler_wl, $1)='-Qoption ld ' ;; esac ;; esac ;; lynxos*) ;; m88k*) ;; mvs*) case $cc_basename in cxx*) _LT_TAGVAR(lt_prog_compiler_pic, $1)='-W c,exportall' ;; *) ;; esac ;; netbsd* | netbsdelf*-gnu) ;; *qnx* | *nto*) # QNX uses GNU C++, but need to define -shared option too, otherwise # it will coredump. _LT_TAGVAR(lt_prog_compiler_pic, $1)='-fPIC -shared' ;; osf3* | osf4* | osf5*) case $cc_basename in KCC*) _LT_TAGVAR(lt_prog_compiler_wl, $1)='--backend -Wl,' ;; RCC*) # Rational C++ 2.4.1 _LT_TAGVAR(lt_prog_compiler_pic, $1)='-pic' ;; cxx*) # Digital/Compaq C++ _LT_TAGVAR(lt_prog_compiler_wl, $1)='-Wl,' # Make sure the PIC flag is empty. It appears that all Alpha # Linux and Compaq Tru64 Unix objects are PIC. 
_LT_TAGVAR(lt_prog_compiler_pic, $1)= _LT_TAGVAR(lt_prog_compiler_static, $1)='-non_shared' ;; *) ;; esac ;; psos*) ;; solaris*) case $cc_basename in CC* | sunCC*) # Sun C++ 4.2, 5.x and Centerline C++ _LT_TAGVAR(lt_prog_compiler_pic, $1)='-KPIC' _LT_TAGVAR(lt_prog_compiler_static, $1)='-Bstatic' _LT_TAGVAR(lt_prog_compiler_wl, $1)='-Qoption ld ' ;; gcx*) # Green Hills C++ Compiler _LT_TAGVAR(lt_prog_compiler_pic, $1)='-PIC' ;; *) ;; esac ;; sunos4*) case $cc_basename in CC*) # Sun C++ 4.x _LT_TAGVAR(lt_prog_compiler_pic, $1)='-pic' _LT_TAGVAR(lt_prog_compiler_static, $1)='-Bstatic' ;; lcc*) # Lucid _LT_TAGVAR(lt_prog_compiler_pic, $1)='-pic' ;; *) ;; esac ;; sysv5* | unixware* | sco3.2v5* | sco5v6* | OpenUNIX*) case $cc_basename in CC*) _LT_TAGVAR(lt_prog_compiler_wl, $1)='-Wl,' _LT_TAGVAR(lt_prog_compiler_pic, $1)='-KPIC' _LT_TAGVAR(lt_prog_compiler_static, $1)='-Bstatic' ;; esac ;; tandem*) case $cc_basename in NCC*) # NonStop-UX NCC 3.20 _LT_TAGVAR(lt_prog_compiler_pic, $1)='-KPIC' ;; *) ;; esac ;; vxworks*) ;; *) _LT_TAGVAR(lt_prog_compiler_can_build_shared, $1)=no ;; esac fi ], [ if test "$GCC" = yes; then _LT_TAGVAR(lt_prog_compiler_wl, $1)='-Wl,' _LT_TAGVAR(lt_prog_compiler_static, $1)='-static' case $host_os in aix*) # All AIX code is PIC. if test "$host_cpu" = ia64; then # AIX 5 now supports IA64 processor _LT_TAGVAR(lt_prog_compiler_static, $1)='-Bstatic' fi ;; amigaos*) case $host_cpu in powerpc) # see comment about AmigaOS4 .so support _LT_TAGVAR(lt_prog_compiler_pic, $1)='-fPIC' ;; m68k) # FIXME: we need at least 68020 code to build shared libraries, but # adding the `-m68020' flag to GCC prevents building anything better, # like `-m68040'. _LT_TAGVAR(lt_prog_compiler_pic, $1)='-m68020 -resident32 -malways-restore-a4' ;; esac ;; beos* | irix5* | irix6* | nonstopux* | osf3* | osf4* | osf5*) # PIC is the default for these OSes. 
;; mingw* | cygwin* | pw32* | os2* | cegcc*) # This hack is so that the source file can tell whether it is being # built for inclusion in a dll (and should export symbols for example). # Although the cygwin gcc ignores -fPIC, still need this for old-style # (--disable-auto-import) libraries m4_if([$1], [GCJ], [], [_LT_TAGVAR(lt_prog_compiler_pic, $1)='-DDLL_EXPORT']) ;; darwin* | rhapsody*) # PIC is the default on this platform # Common symbols not allowed in MH_DYLIB files _LT_TAGVAR(lt_prog_compiler_pic, $1)='-fno-common' ;; haiku*) # PIC is the default for Haiku. # The "-static" flag exists, but is broken. _LT_TAGVAR(lt_prog_compiler_static, $1)= ;; hpux*) # PIC is the default for 64-bit PA HP-UX, but not for 32-bit # PA HP-UX. On IA64 HP-UX, PIC is the default but the pic flag # sets the default TLS model and affects inlining. case $host_cpu in hppa*64*) # +Z the default ;; *) _LT_TAGVAR(lt_prog_compiler_pic, $1)='-fPIC' ;; esac ;; interix[[3-9]]*) # Interix 3.x gcc -fpic/-fPIC options generate broken code. # Instead, we relocate shared libraries at runtime. ;; msdosdjgpp*) # Just because we use GCC doesn't mean we suddenly get shared libraries # on systems that don't support them. _LT_TAGVAR(lt_prog_compiler_can_build_shared, $1)=no enable_shared=no ;; *nto* | *qnx*) # QNX uses GNU C++, but need to define -shared option too, otherwise # it will coredump. _LT_TAGVAR(lt_prog_compiler_pic, $1)='-fPIC -shared' ;; sysv4*MP*) if test -d /usr/nec; then _LT_TAGVAR(lt_prog_compiler_pic, $1)=-Kconform_pic fi ;; *) _LT_TAGVAR(lt_prog_compiler_pic, $1)='-fPIC' ;; esac case $cc_basename in nvcc*) # Cuda Compiler Driver 2.2 _LT_TAGVAR(lt_prog_compiler_wl, $1)='-Xlinker ' if test -n "$_LT_TAGVAR(lt_prog_compiler_pic, $1)"; then _LT_TAGVAR(lt_prog_compiler_pic, $1)="-Xcompiler $_LT_TAGVAR(lt_prog_compiler_pic, $1)" fi ;; esac else # PORTME Check for flag to pass linker flags through the system compiler. 
case $host_os in aix*) _LT_TAGVAR(lt_prog_compiler_wl, $1)='-Wl,' if test "$host_cpu" = ia64; then # AIX 5 now supports IA64 processor _LT_TAGVAR(lt_prog_compiler_static, $1)='-Bstatic' else _LT_TAGVAR(lt_prog_compiler_static, $1)='-bnso -bI:/lib/syscalls.exp' fi ;; mingw* | cygwin* | pw32* | os2* | cegcc*) # This hack is so that the source file can tell whether it is being # built for inclusion in a dll (and should export symbols for example). m4_if([$1], [GCJ], [], [_LT_TAGVAR(lt_prog_compiler_pic, $1)='-DDLL_EXPORT']) ;; hpux9* | hpux10* | hpux11*) _LT_TAGVAR(lt_prog_compiler_wl, $1)='-Wl,' # PIC is the default for IA64 HP-UX and 64-bit HP-UX, but # not for PA HP-UX. case $host_cpu in hppa*64*|ia64*) # +Z the default ;; *) _LT_TAGVAR(lt_prog_compiler_pic, $1)='+Z' ;; esac # Is there a better lt_prog_compiler_static that works with the bundled CC? _LT_TAGVAR(lt_prog_compiler_static, $1)='${wl}-a ${wl}archive' ;; irix5* | irix6* | nonstopux*) _LT_TAGVAR(lt_prog_compiler_wl, $1)='-Wl,' # PIC (with -KPIC) is the default. _LT_TAGVAR(lt_prog_compiler_static, $1)='-non_shared' ;; linux* | k*bsd*-gnu | kopensolaris*-gnu | gnu*) case $cc_basename in # old Intel for x86_64 which still supported -KPIC. ecc*) _LT_TAGVAR(lt_prog_compiler_wl, $1)='-Wl,' _LT_TAGVAR(lt_prog_compiler_pic, $1)='-KPIC' _LT_TAGVAR(lt_prog_compiler_static, $1)='-static' ;; # icc used to be incompatible with GCC. # ICC 10 doesn't accept -KPIC any more. icc* | ifort*) _LT_TAGVAR(lt_prog_compiler_wl, $1)='-Wl,' _LT_TAGVAR(lt_prog_compiler_pic, $1)='-fPIC' _LT_TAGVAR(lt_prog_compiler_static, $1)='-static' ;; # Lahey Fortran 8.1. 
lf95*) _LT_TAGVAR(lt_prog_compiler_wl, $1)='-Wl,' _LT_TAGVAR(lt_prog_compiler_pic, $1)='--shared' _LT_TAGVAR(lt_prog_compiler_static, $1)='--static' ;; nagfor*) # NAG Fortran compiler _LT_TAGVAR(lt_prog_compiler_wl, $1)='-Wl,-Wl,,' _LT_TAGVAR(lt_prog_compiler_pic, $1)='-PIC' _LT_TAGVAR(lt_prog_compiler_static, $1)='-Bstatic' ;; pgcc* | pgf77* | pgf90* | pgf95* | pgfortran*) # Portland Group compilers (*not* the Pentium gcc compiler, # which looks to be a dead project) _LT_TAGVAR(lt_prog_compiler_wl, $1)='-Wl,' _LT_TAGVAR(lt_prog_compiler_pic, $1)='-fpic' _LT_TAGVAR(lt_prog_compiler_static, $1)='-Bstatic' ;; ccc*) _LT_TAGVAR(lt_prog_compiler_wl, $1)='-Wl,' # All Alpha code is PIC. _LT_TAGVAR(lt_prog_compiler_static, $1)='-non_shared' ;; xl* | bgxl* | bgf* | mpixl*) # IBM XL C 8.0/Fortran 10.1, 11.1 on PPC and BlueGene _LT_TAGVAR(lt_prog_compiler_wl, $1)='-Wl,' _LT_TAGVAR(lt_prog_compiler_pic, $1)='-qpic' _LT_TAGVAR(lt_prog_compiler_static, $1)='-qstaticlink' ;; *) case `$CC -V 2>&1 | sed 5q` in *Sun\ Ceres\ Fortran* | *Sun*Fortran*\ [[1-7]].* | *Sun*Fortran*\ 8.[[0-3]]*) # Sun Fortran 8.3 passes all unrecognized flags to the linker _LT_TAGVAR(lt_prog_compiler_pic, $1)='-KPIC' _LT_TAGVAR(lt_prog_compiler_static, $1)='-Bstatic' _LT_TAGVAR(lt_prog_compiler_wl, $1)='' ;; *Sun\ F* | *Sun*Fortran*) _LT_TAGVAR(lt_prog_compiler_pic, $1)='-KPIC' _LT_TAGVAR(lt_prog_compiler_static, $1)='-Bstatic' _LT_TAGVAR(lt_prog_compiler_wl, $1)='-Qoption ld ' ;; *Sun\ C*) # Sun C 5.9 _LT_TAGVAR(lt_prog_compiler_pic, $1)='-KPIC' _LT_TAGVAR(lt_prog_compiler_static, $1)='-Bstatic' _LT_TAGVAR(lt_prog_compiler_wl, $1)='-Wl,' ;; *Intel*\ [[CF]]*Compiler*) _LT_TAGVAR(lt_prog_compiler_wl, $1)='-Wl,' _LT_TAGVAR(lt_prog_compiler_pic, $1)='-fPIC' _LT_TAGVAR(lt_prog_compiler_static, $1)='-static' ;; *Portland\ Group*) _LT_TAGVAR(lt_prog_compiler_wl, $1)='-Wl,' _LT_TAGVAR(lt_prog_compiler_pic, $1)='-fpic' _LT_TAGVAR(lt_prog_compiler_static, $1)='-Bstatic' ;; esac ;; esac ;; newsos6) 
_LT_TAGVAR(lt_prog_compiler_pic, $1)='-KPIC' _LT_TAGVAR(lt_prog_compiler_static, $1)='-Bstatic' ;; *nto* | *qnx*) # QNX uses GNU C++, but need to define -shared option too, otherwise # it will coredump. _LT_TAGVAR(lt_prog_compiler_pic, $1)='-fPIC -shared' ;; osf3* | osf4* | osf5*) _LT_TAGVAR(lt_prog_compiler_wl, $1)='-Wl,' # All OSF/1 code is PIC. _LT_TAGVAR(lt_prog_compiler_static, $1)='-non_shared' ;; rdos*) _LT_TAGVAR(lt_prog_compiler_static, $1)='-non_shared' ;; solaris*) _LT_TAGVAR(lt_prog_compiler_pic, $1)='-KPIC' _LT_TAGVAR(lt_prog_compiler_static, $1)='-Bstatic' case $cc_basename in f77* | f90* | f95* | sunf77* | sunf90* | sunf95*) _LT_TAGVAR(lt_prog_compiler_wl, $1)='-Qoption ld ';; *) _LT_TAGVAR(lt_prog_compiler_wl, $1)='-Wl,';; esac ;; sunos4*) _LT_TAGVAR(lt_prog_compiler_wl, $1)='-Qoption ld ' _LT_TAGVAR(lt_prog_compiler_pic, $1)='-PIC' _LT_TAGVAR(lt_prog_compiler_static, $1)='-Bstatic' ;; sysv4 | sysv4.2uw2* | sysv4.3*) _LT_TAGVAR(lt_prog_compiler_wl, $1)='-Wl,' _LT_TAGVAR(lt_prog_compiler_pic, $1)='-KPIC' _LT_TAGVAR(lt_prog_compiler_static, $1)='-Bstatic' ;; sysv4*MP*) if test -d /usr/nec ;then _LT_TAGVAR(lt_prog_compiler_pic, $1)='-Kconform_pic' _LT_TAGVAR(lt_prog_compiler_static, $1)='-Bstatic' fi ;; sysv5* | unixware* | sco3.2v5* | sco5v6* | OpenUNIX*) _LT_TAGVAR(lt_prog_compiler_wl, $1)='-Wl,' _LT_TAGVAR(lt_prog_compiler_pic, $1)='-KPIC' _LT_TAGVAR(lt_prog_compiler_static, $1)='-Bstatic' ;; unicos*) _LT_TAGVAR(lt_prog_compiler_wl, $1)='-Wl,' _LT_TAGVAR(lt_prog_compiler_can_build_shared, $1)=no ;; uts4*) _LT_TAGVAR(lt_prog_compiler_pic, $1)='-pic' _LT_TAGVAR(lt_prog_compiler_static, $1)='-Bstatic' ;; *) _LT_TAGVAR(lt_prog_compiler_can_build_shared, $1)=no ;; esac fi ]) case $host_os in # For platforms which do not support PIC, -DPIC is meaningless: *djgpp*) _LT_TAGVAR(lt_prog_compiler_pic, $1)= ;; *) _LT_TAGVAR(lt_prog_compiler_pic, $1)="$_LT_TAGVAR(lt_prog_compiler_pic, $1)@&t@m4_if([$1],[],[ -DPIC],[m4_if([$1],[CXX],[ -DPIC],[])])" ;; esac 
AC_CACHE_CHECK([for $compiler option to produce PIC],
  [_LT_TAGVAR(lt_cv_prog_compiler_pic, $1)],
  [_LT_TAGVAR(lt_cv_prog_compiler_pic, $1)=$_LT_TAGVAR(lt_prog_compiler_pic, $1)])
_LT_TAGVAR(lt_prog_compiler_pic, $1)=$_LT_TAGVAR(lt_cv_prog_compiler_pic, $1)

#
# Check to make sure the PIC flag actually works.
#
if test -n "$_LT_TAGVAR(lt_prog_compiler_pic, $1)"; then
  _LT_COMPILER_OPTION([if $compiler PIC flag $_LT_TAGVAR(lt_prog_compiler_pic, $1) works],
    [_LT_TAGVAR(lt_cv_prog_compiler_pic_works, $1)],
    [$_LT_TAGVAR(lt_prog_compiler_pic, $1)@&t@m4_if([$1],[],[ -DPIC],[m4_if([$1],[CXX],[ -DPIC],[])])], [],
    [case $_LT_TAGVAR(lt_prog_compiler_pic, $1) in
     "" | " "*) ;;
     *) _LT_TAGVAR(lt_prog_compiler_pic, $1)=" $_LT_TAGVAR(lt_prog_compiler_pic, $1)" ;;
     esac],
    [_LT_TAGVAR(lt_prog_compiler_pic, $1)=
     _LT_TAGVAR(lt_prog_compiler_can_build_shared, $1)=no])
fi
_LT_TAGDECL([pic_flag], [lt_prog_compiler_pic], [1],
	[Additional compiler flags for building library objects])

_LT_TAGDECL([wl], [lt_prog_compiler_wl], [1],
	[How to pass a linker flag through the compiler])
#
# Check to make sure the static flag actually works.
#
wl=$_LT_TAGVAR(lt_prog_compiler_wl, $1) eval lt_tmp_static_flag=\"$_LT_TAGVAR(lt_prog_compiler_static, $1)\"
_LT_LINKER_OPTION([if $compiler static flag $lt_tmp_static_flag works],
  _LT_TAGVAR(lt_cv_prog_compiler_static_works, $1),
  $lt_tmp_static_flag,
  [],
  [_LT_TAGVAR(lt_prog_compiler_static, $1)=])
_LT_TAGDECL([link_static_flag], [lt_prog_compiler_static], [1],
	[Compiler flag to prevent dynamic linking])
])# _LT_COMPILER_PIC


# _LT_LINKER_SHLIBS([TAGNAME])
# ----------------------------
# See if the linker supports building shared libraries.
m4_defun([_LT_LINKER_SHLIBS],
[AC_REQUIRE([LT_PATH_LD])dnl
AC_REQUIRE([LT_PATH_NM])dnl
m4_require([_LT_PATH_MANIFEST_TOOL])dnl
m4_require([_LT_FILEUTILS_DEFAULTS])dnl
m4_require([_LT_DECL_EGREP])dnl
m4_require([_LT_DECL_SED])dnl
m4_require([_LT_CMD_GLOBAL_SYMBOLS])dnl
m4_require([_LT_TAG_COMPILER])dnl
AC_MSG_CHECKING([whether the $compiler linker ($LD) supports shared libraries])
m4_if([$1], [CXX], [
  _LT_TAGVAR(export_symbols_cmds, $1)='$NM $libobjs $convenience | $global_symbol_pipe | $SED '\''s/.* //'\'' | sort | uniq > $export_symbols'
  _LT_TAGVAR(exclude_expsyms, $1)=['_GLOBAL_OFFSET_TABLE_|_GLOBAL__F[ID]_.*']
  case $host_os in
  aix[[4-9]]*)
    # If we're using GNU nm, then we don't want the "-C" option.
    # -C means demangle to AIX nm, but means don't demangle with GNU nm
    # Also, AIX nm treats weak defined symbols like other global defined
    # symbols, whereas GNU nm marks them as "W".
    if $NM -V 2>&1 | $GREP 'GNU' > /dev/null; then
      _LT_TAGVAR(export_symbols_cmds, $1)='$NM -Bpg $libobjs $convenience | awk '\''{ if (((\$ 2 == "T") || (\$ 2 == "D") || (\$ 2 == "B") || (\$ 2 == "W")) && ([substr](\$ 3,1,1) != ".")) { print \$ 3 } }'\'' | sort -u > $export_symbols'
    else
      _LT_TAGVAR(export_symbols_cmds, $1)='$NM -BCpg $libobjs $convenience | awk '\''{ if (((\$ 2 == "T") || (\$ 2 == "D") || (\$ 2 == "B")) && ([substr](\$ 3,1,1) != ".")) { print \$ 3 } }'\'' | sort -u > $export_symbols'
    fi
    ;;
  pw32*)
    _LT_TAGVAR(export_symbols_cmds, $1)="$ltdll_cmds"
    ;;
  cygwin* | mingw* | cegcc*)
    case $cc_basename in
    cl*)
      _LT_TAGVAR(exclude_expsyms, $1)='_NULL_IMPORT_DESCRIPTOR|_IMPORT_DESCRIPTOR_.*'
      ;;
    *)
      _LT_TAGVAR(export_symbols_cmds, $1)='$NM $libobjs $convenience | $global_symbol_pipe | $SED -e '\''/^[[BCDGRS]][[ ]]/s/.*[[ ]]\([[^ ]]*\)/\1 DATA/;s/^.*[[ ]]__nm__\([[^ ]]*\)[[ ]][[^ ]]*/\1 DATA/;/^I[[ ]]/d;/^[[AITW]][[ ]]/s/.* //'\'' | sort | uniq > $export_symbols'
      _LT_TAGVAR(exclude_expsyms, $1)=['[_]+GLOBAL_OFFSET_TABLE_|[_]+GLOBAL__[FID]_.*|[_]+head_[A-Za-z0-9_]+_dll|[A-Za-z0-9_]+_dll_iname']
      ;;
    esac
    ;;
  linux* | k*bsd*-gnu | gnu*)
    _LT_TAGVAR(link_all_deplibs, $1)=no
    ;;
  *)
    _LT_TAGVAR(export_symbols_cmds, $1)='$NM $libobjs $convenience | $global_symbol_pipe | $SED '\''s/.* //'\'' | sort | uniq > $export_symbols'
    ;;
  esac
], [
  runpath_var=
  _LT_TAGVAR(allow_undefined_flag, $1)=
  _LT_TAGVAR(always_export_symbols, $1)=no
  _LT_TAGVAR(archive_cmds, $1)=
  _LT_TAGVAR(archive_expsym_cmds, $1)=
  _LT_TAGVAR(compiler_needs_object, $1)=no
  _LT_TAGVAR(enable_shared_with_static_runtimes, $1)=no
  _LT_TAGVAR(export_dynamic_flag_spec, $1)=
  _LT_TAGVAR(export_symbols_cmds, $1)='$NM $libobjs $convenience | $global_symbol_pipe | $SED '\''s/.* //'\'' | sort | uniq > $export_symbols'
  _LT_TAGVAR(hardcode_automatic, $1)=no
  _LT_TAGVAR(hardcode_direct, $1)=no
  _LT_TAGVAR(hardcode_direct_absolute, $1)=no
  _LT_TAGVAR(hardcode_libdir_flag_spec, $1)=
  _LT_TAGVAR(hardcode_libdir_separator, $1)=
  _LT_TAGVAR(hardcode_minus_L, $1)=no
  _LT_TAGVAR(hardcode_shlibpath_var, $1)=unsupported
  _LT_TAGVAR(inherit_rpath, $1)=no
  _LT_TAGVAR(link_all_deplibs, $1)=unknown
  _LT_TAGVAR(module_cmds, $1)=
  _LT_TAGVAR(module_expsym_cmds, $1)=
  _LT_TAGVAR(old_archive_from_new_cmds, $1)=
  _LT_TAGVAR(old_archive_from_expsyms_cmds, $1)=
  _LT_TAGVAR(thread_safe_flag_spec, $1)=
  _LT_TAGVAR(whole_archive_flag_spec, $1)=
  # include_expsyms should be a list of space-separated symbols to be *always*
  # included in the symbol list
  _LT_TAGVAR(include_expsyms, $1)=
  # exclude_expsyms can be an extended regexp of symbols to exclude
  # it will be wrapped by ` (' and `)$', so one must not match beginning or
  # end of line.  Example: `a|bc|.*d.*' will exclude the symbols `a' and `bc',
  # as well as any symbol that contains `d'.
  _LT_TAGVAR(exclude_expsyms, $1)=['_GLOBAL_OFFSET_TABLE_|_GLOBAL__F[ID]_.*']
  # Although _GLOBAL_OFFSET_TABLE_ is a valid symbol C name, most a.out
  # platforms (ab)use it in PIC code, but their linkers get confused if
  # the symbol is explicitly referenced.  Since portable code cannot
  # rely on this symbol name, it's probably fine to never include it in
  # preloaded symbol tables.
  # Exclude shared library initialization/finalization symbols.
dnl Note also adjust exclude_expsyms for C++ above.
  extract_expsyms_cmds=

  case $host_os in
  cygwin* | mingw* | pw32* | cegcc*)
    # FIXME: the MSVC++ port hasn't been tested in a loooong time
    # When not using gcc, we currently assume that we are using
    # Microsoft Visual C++.
    if test "$GCC" != yes; then
      with_gnu_ld=no
    fi
    ;;
  interix*)
    # we just hope/assume this is gcc and not c89 (= MSVC++)
    with_gnu_ld=yes
    ;;
  openbsd*)
    with_gnu_ld=no
    ;;
  linux* | k*bsd*-gnu | gnu*)
    _LT_TAGVAR(link_all_deplibs, $1)=no
    ;;
  esac

  _LT_TAGVAR(ld_shlibs, $1)=yes

  # On some targets, GNU ld is compatible enough with the native linker
  # that we're better off using the native interface for both.
  lt_use_gnu_ld_interface=no
  if test "$with_gnu_ld" = yes; then
    case $host_os in
      aix*)
	# The AIX port of GNU ld has always aspired to compatibility
	# with the native linker.  However, as the warning in the GNU ld
	# block says, versions before 2.19.5* couldn't really create working
	# shared libraries, regardless of the interface used.
	case `$LD -v 2>&1` in
	  *\ \(GNU\ Binutils\)\ 2.19.5*) ;;
	  *\ \(GNU\ Binutils\)\ 2.[[2-9]]*) ;;
	  *\ \(GNU\ Binutils\)\ [[3-9]]*) ;;
	  *)
	    lt_use_gnu_ld_interface=yes
	    ;;
	esac
	;;
      *)
	lt_use_gnu_ld_interface=yes
	;;
    esac
  fi

  if test "$lt_use_gnu_ld_interface" = yes; then
    # If archive_cmds runs LD, not CC, wlarc should be empty
    wlarc='${wl}'

    # Set some defaults for GNU ld with shared library support. These
    # are reset later if shared libraries are not supported. Putting them
    # here allows them to be overridden if necessary.
    runpath_var=LD_RUN_PATH
    _LT_TAGVAR(hardcode_libdir_flag_spec, $1)='${wl}-rpath ${wl}$libdir'
    _LT_TAGVAR(export_dynamic_flag_spec, $1)='${wl}--export-dynamic'
    # ancient GNU ld didn't support --whole-archive et. al.
    if $LD --help 2>&1 | $GREP 'no-whole-archive' > /dev/null; then
      _LT_TAGVAR(whole_archive_flag_spec, $1)="$wlarc"'--whole-archive$convenience '"$wlarc"'--no-whole-archive'
    else
      _LT_TAGVAR(whole_archive_flag_spec, $1)=
    fi
    supports_anon_versioning=no
    case `$LD -v 2>&1` in
      *GNU\ gold*) supports_anon_versioning=yes ;;
      *\ [[01]].* | *\ 2.[[0-9]].* | *\ 2.10.*) ;; # catch versions < 2.11
      *\ 2.11.93.0.2\ *) supports_anon_versioning=yes ;; # RH7.3 ...
      *\ 2.11.92.0.12\ *) supports_anon_versioning=yes ;; # Mandrake 8.2 ...
      *\ 2.11.*) ;; # other 2.11 versions
      *) supports_anon_versioning=yes ;;
    esac

    # See if GNU ld supports shared libraries.
    case $host_os in
    aix[[3-9]]*)
      # On AIX/PPC, the GNU linker is very broken
      if test "$host_cpu" != ia64; then
	_LT_TAGVAR(ld_shlibs, $1)=no
	cat <<_LT_EOF 1>&2

*** Warning: the GNU linker, at least up to release 2.19, is reported
*** to be unable to reliably create shared libraries on AIX.
*** Therefore, libtool is disabling shared libraries support.  If you
*** really care for shared libraries, you may want to install binutils
*** 2.20 or above, or modify your PATH so that a non-GNU linker is found.
*** You will then need to restart the configuration process.
_LT_EOF
      fi
      ;;

    amigaos*)
      case $host_cpu in
      powerpc)
        # see comment about AmigaOS4 .so support
        _LT_TAGVAR(archive_cmds, $1)='$CC -shared $libobjs $deplibs $compiler_flags ${wl}-soname $wl$soname -o $lib'
        _LT_TAGVAR(archive_expsym_cmds, $1)=''
        ;;
      m68k)
        _LT_TAGVAR(archive_cmds, $1)='$RM $output_objdir/a2ixlibrary.data~$ECHO "#define NAME $libname" > $output_objdir/a2ixlibrary.data~$ECHO "#define LIBRARY_ID 1" >> $output_objdir/a2ixlibrary.data~$ECHO "#define VERSION $major" >> $output_objdir/a2ixlibrary.data~$ECHO "#define REVISION $revision" >> $output_objdir/a2ixlibrary.data~$AR $AR_FLAGS $lib $libobjs~$RANLIB $lib~(cd $output_objdir && a2ixlibrary -32)'
        _LT_TAGVAR(hardcode_libdir_flag_spec, $1)='-L$libdir'
        _LT_TAGVAR(hardcode_minus_L, $1)=yes
        ;;
      esac
      ;;

    beos*)
      if $LD --help 2>&1 | $GREP ': supported targets:.* elf' > /dev/null; then
	_LT_TAGVAR(allow_undefined_flag, $1)=unsupported
	# Joseph Beckenbach says some releases of gcc
	# support --undefined.  This deserves some investigation.  FIXME
	_LT_TAGVAR(archive_cmds, $1)='$CC -nostart $libobjs $deplibs $compiler_flags ${wl}-soname $wl$soname -o $lib'
      else
	_LT_TAGVAR(ld_shlibs, $1)=no
      fi
      ;;

    cygwin* | mingw* | pw32* | cegcc*)
      # _LT_TAGVAR(hardcode_libdir_flag_spec, $1) is actually meaningless,
      # as there is no search path for DLLs.
      _LT_TAGVAR(hardcode_libdir_flag_spec, $1)='-L$libdir'
      _LT_TAGVAR(export_dynamic_flag_spec, $1)='${wl}--export-all-symbols'
      _LT_TAGVAR(allow_undefined_flag, $1)=unsupported
      _LT_TAGVAR(always_export_symbols, $1)=no
      _LT_TAGVAR(enable_shared_with_static_runtimes, $1)=yes
      _LT_TAGVAR(export_symbols_cmds, $1)='$NM $libobjs $convenience | $global_symbol_pipe | $SED -e '\''/^[[BCDGRS]][[ ]]/s/.*[[ ]]\([[^ ]]*\)/\1 DATA/;s/^.*[[ ]]__nm__\([[^ ]]*\)[[ ]][[^ ]]*/\1 DATA/;/^I[[ ]]/d;/^[[AITW]][[ ]]/s/.* //'\'' | sort | uniq > $export_symbols'
      _LT_TAGVAR(exclude_expsyms, $1)=['[_]+GLOBAL_OFFSET_TABLE_|[_]+GLOBAL__[FID]_.*|[_]+head_[A-Za-z0-9_]+_dll|[A-Za-z0-9_]+_dll_iname']

      if $LD --help 2>&1 | $GREP 'auto-import' > /dev/null; then
        _LT_TAGVAR(archive_cmds, $1)='$CC -shared $libobjs $deplibs $compiler_flags -o $output_objdir/$soname ${wl}--enable-auto-image-base -Xlinker --out-implib -Xlinker $lib'
	# If the export-symbols file already is a .def file (1st line
	# is EXPORTS), use it as is; otherwise, prepend...
	_LT_TAGVAR(archive_expsym_cmds, $1)='if test "x`$SED 1q $export_symbols`" = xEXPORTS; then cp $export_symbols $output_objdir/$soname.def; else echo EXPORTS > $output_objdir/$soname.def; cat $export_symbols >> $output_objdir/$soname.def; fi~
	$CC -shared $output_objdir/$soname.def $libobjs $deplibs $compiler_flags -o $output_objdir/$soname ${wl}--enable-auto-image-base -Xlinker --out-implib -Xlinker $lib'
      else
	_LT_TAGVAR(ld_shlibs, $1)=no
      fi
      ;;

    haiku*)
      _LT_TAGVAR(archive_cmds, $1)='$CC -shared $libobjs $deplibs $compiler_flags ${wl}-soname $wl$soname -o $lib'
      _LT_TAGVAR(link_all_deplibs, $1)=yes
      ;;

    interix[[3-9]]*)
      _LT_TAGVAR(hardcode_direct, $1)=no
      _LT_TAGVAR(hardcode_shlibpath_var, $1)=no
      _LT_TAGVAR(hardcode_libdir_flag_spec, $1)='${wl}-rpath,$libdir'
      _LT_TAGVAR(export_dynamic_flag_spec, $1)='${wl}-E'
      # Hack: On Interix 3.x, we cannot compile PIC because of a broken gcc.
      # Instead, shared libraries are loaded at an image base (0x10000000 by
      # default) and relocated if they conflict, which is a slow very memory
      # consuming and fragmenting process.  To avoid this, we pick a random,
      # 256 KiB-aligned image base between 0x50000000 and 0x6FFC0000 at link
      # time.  Moving up from 0x10000000 also allows more sbrk(2) space.
      _LT_TAGVAR(archive_cmds, $1)='$CC -shared $pic_flag $libobjs $deplibs $compiler_flags ${wl}-h,$soname ${wl}--image-base,`expr ${RANDOM-$$} % 4096 / 2 \* 262144 + 1342177280` -o $lib'
      _LT_TAGVAR(archive_expsym_cmds, $1)='sed "s,^,_," $export_symbols >$output_objdir/$soname.expsym~$CC -shared $pic_flag $libobjs $deplibs $compiler_flags ${wl}-h,$soname ${wl}--retain-symbols-file,$output_objdir/$soname.expsym ${wl}--image-base,`expr ${RANDOM-$$} % 4096 / 2 \* 262144 + 1342177280` -o $lib'
      ;;

    gnu* | linux* | tpf* | k*bsd*-gnu | kopensolaris*-gnu)
      tmp_diet=no
      if test "$host_os" = linux-dietlibc; then
	case $cc_basename in
	  diet\ *) tmp_diet=yes;;	# linux-dietlibc with static linking (!diet-dyn)
	esac
      fi
      if $LD --help 2>&1 | $EGREP ': supported targets:.* elf' > /dev/null \
	 && test "$tmp_diet" = no
      then
	tmp_addflag=' $pic_flag'
	tmp_sharedflag='-shared'
	case $cc_basename,$host_cpu in
        pgcc*)				# Portland Group C compiler
	  _LT_TAGVAR(whole_archive_flag_spec, $1)='${wl}--whole-archive`for conv in $convenience\"\"; do test -n \"$conv\" && new_convenience=\"$new_convenience,$conv\"; done; func_echo_all \"$new_convenience\"` ${wl}--no-whole-archive'
	  tmp_addflag=' $pic_flag'
	  ;;
	pgf77* | pgf90* | pgf95* | pgfortran*)
					# Portland Group f77 and f90 compilers
	  _LT_TAGVAR(whole_archive_flag_spec, $1)='${wl}--whole-archive`for conv in $convenience\"\"; do test -n \"$conv\" && new_convenience=\"$new_convenience,$conv\"; done; func_echo_all \"$new_convenience\"` ${wl}--no-whole-archive'
	  tmp_addflag=' $pic_flag -Mnomain' ;;
	ecc*,ia64* | icc*,ia64*)	# Intel C compiler on ia64
	  tmp_addflag=' -i_dynamic' ;;
	efc*,ia64* | ifort*,ia64*)	# Intel Fortran compiler on ia64
	  tmp_addflag=' -i_dynamic -nofor_main' ;;
	ifc* | ifort*)			# Intel Fortran compiler
	  tmp_addflag=' -nofor_main' ;;
	lf95*)				# Lahey Fortran 8.1
	  _LT_TAGVAR(whole_archive_flag_spec, $1)=
	  tmp_sharedflag='--shared' ;;
	xl[[cC]]* | bgxl[[cC]]* | mpixl[[cC]]*) # IBM XL C 8.0 on PPC (deal with xlf below)
	  tmp_sharedflag='-qmkshrobj'
	  tmp_addflag= ;;
	nvcc*)	# Cuda Compiler Driver 2.2
	  _LT_TAGVAR(whole_archive_flag_spec, $1)='${wl}--whole-archive`for conv in $convenience\"\"; do test -n \"$conv\" && new_convenience=\"$new_convenience,$conv\"; done; func_echo_all \"$new_convenience\"` ${wl}--no-whole-archive'
	  _LT_TAGVAR(compiler_needs_object, $1)=yes
	  ;;
	esac
	case `$CC -V 2>&1 | sed 5q` in
	*Sun\ C*)			# Sun C 5.9
	  _LT_TAGVAR(whole_archive_flag_spec, $1)='${wl}--whole-archive`new_convenience=; for conv in $convenience\"\"; do test -z \"$conv\" || new_convenience=\"$new_convenience,$conv\"; done; func_echo_all \"$new_convenience\"` ${wl}--no-whole-archive'
	  _LT_TAGVAR(compiler_needs_object, $1)=yes
	  tmp_sharedflag='-G' ;;
	*Sun\ F*)			# Sun Fortran 8.3
	  tmp_sharedflag='-G' ;;
	esac
	_LT_TAGVAR(archive_cmds, $1)='$CC '"$tmp_sharedflag""$tmp_addflag"' $libobjs $deplibs $compiler_flags ${wl}-soname $wl$soname -o $lib'

        if test "x$supports_anon_versioning" = xyes; then
          _LT_TAGVAR(archive_expsym_cmds, $1)='echo "{ global:" > $output_objdir/$libname.ver~
	    cat $export_symbols | sed -e "s/\(.*\)/\1;/" >> $output_objdir/$libname.ver~
	    echo "local: *; };" >> $output_objdir/$libname.ver~
	    $CC '"$tmp_sharedflag""$tmp_addflag"' $libobjs $deplibs $compiler_flags ${wl}-soname $wl$soname ${wl}-version-script ${wl}$output_objdir/$libname.ver -o $lib'
        fi

	case $cc_basename in
	xlf* | bgf* | bgxlf* | mpixlf*)
	  # IBM XL Fortran 10.1 on PPC cannot create shared libs itself
	  _LT_TAGVAR(whole_archive_flag_spec, $1)='--whole-archive$convenience --no-whole-archive'
	  _LT_TAGVAR(hardcode_libdir_flag_spec, $1)='${wl}-rpath ${wl}$libdir'
	  _LT_TAGVAR(archive_cmds, $1)='$LD -shared $libobjs $deplibs $linker_flags -soname $soname -o $lib'
	  if test "x$supports_anon_versioning" = xyes; then
	    _LT_TAGVAR(archive_expsym_cmds, $1)='echo "{ global:" > $output_objdir/$libname.ver~
	      cat $export_symbols | sed -e "s/\(.*\)/\1;/" >> $output_objdir/$libname.ver~
	      echo "local: *; };" >> $output_objdir/$libname.ver~
	      $LD -shared $libobjs $deplibs $linker_flags -soname $soname -version-script $output_objdir/$libname.ver -o $lib'
	  fi
	  ;;
	esac
      else
        _LT_TAGVAR(ld_shlibs, $1)=no
      fi
      ;;

    netbsd* | netbsdelf*-gnu)
      if echo __ELF__ | $CC -E - | $GREP __ELF__ >/dev/null; then
	_LT_TAGVAR(archive_cmds, $1)='$LD -Bshareable $libobjs $deplibs $linker_flags -o $lib'
	wlarc=
      else
	_LT_TAGVAR(archive_cmds, $1)='$CC -shared $pic_flag $libobjs $deplibs $compiler_flags ${wl}-soname $wl$soname -o $lib'
	_LT_TAGVAR(archive_expsym_cmds, $1)='$CC -shared $pic_flag $libobjs $deplibs $compiler_flags ${wl}-soname $wl$soname ${wl}-retain-symbols-file $wl$export_symbols -o $lib'
      fi
      ;;

    solaris*)
      if $LD -v 2>&1 | $GREP 'BFD 2\.8' > /dev/null; then
	_LT_TAGVAR(ld_shlibs, $1)=no
	cat <<_LT_EOF 1>&2

*** Warning: The releases 2.8.* of the GNU linker cannot reliably
*** create shared libraries on Solaris systems.  Therefore, libtool
*** is disabling shared libraries support.  We urge you to upgrade GNU
*** binutils to release 2.9.1 or newer.  Another option is to modify
*** your PATH or compiler configuration so that the native linker is
*** used, and then restart.
_LT_EOF
      elif $LD --help 2>&1 | $GREP ': supported targets:.* elf' > /dev/null; then
	_LT_TAGVAR(archive_cmds, $1)='$CC -shared $pic_flag $libobjs $deplibs $compiler_flags ${wl}-soname $wl$soname -o $lib'
	_LT_TAGVAR(archive_expsym_cmds, $1)='$CC -shared $pic_flag $libobjs $deplibs $compiler_flags ${wl}-soname $wl$soname ${wl}-retain-symbols-file $wl$export_symbols -o $lib'
      else
	_LT_TAGVAR(ld_shlibs, $1)=no
      fi
      ;;

    sysv5* | sco3.2v5* | sco5v6* | unixware* | OpenUNIX*)
      case `$LD -v 2>&1` in
        *\ [[01]].* | *\ 2.[[0-9]].* | *\ 2.1[[0-5]].*)
	_LT_TAGVAR(ld_shlibs, $1)=no
	cat <<_LT_EOF 1>&2

*** Warning: Releases of the GNU linker prior to 2.16.91.0.3 can not
*** reliably create shared libraries on SCO systems.  Therefore, libtool
*** is disabling shared libraries support.  We urge you to upgrade GNU
*** binutils to release 2.16.91.0.3 or newer.  Another option is to modify
*** your PATH or compiler configuration so that the native linker is
*** used, and then restart.

_LT_EOF
	;;
	*)
	  # For security reasons, it is highly recommended that you always
	  # use absolute paths for naming shared libraries, and exclude the
	  # DT_RUNPATH tag from executables and libraries.  But doing so
	  # requires that you compile everything twice, which is a pain.
	  if $LD --help 2>&1 | $GREP ': supported targets:.* elf' > /dev/null; then
	    _LT_TAGVAR(hardcode_libdir_flag_spec, $1)='${wl}-rpath ${wl}$libdir'
	    _LT_TAGVAR(archive_cmds, $1)='$CC -shared $libobjs $deplibs $compiler_flags ${wl}-soname $wl$soname -o $lib'
	    _LT_TAGVAR(archive_expsym_cmds, $1)='$CC -shared $libobjs $deplibs $compiler_flags ${wl}-soname $wl$soname ${wl}-retain-symbols-file $wl$export_symbols -o $lib'
	  else
	    _LT_TAGVAR(ld_shlibs, $1)=no
	  fi
	  ;;
      esac
      ;;

    sunos4*)
      _LT_TAGVAR(archive_cmds, $1)='$LD -assert pure-text -Bshareable -o $lib $libobjs $deplibs $linker_flags'
      wlarc=
      _LT_TAGVAR(hardcode_direct, $1)=yes
      _LT_TAGVAR(hardcode_shlibpath_var, $1)=no
      ;;

    *)
      if $LD --help 2>&1 | $GREP ': supported targets:.* elf' > /dev/null; then
	_LT_TAGVAR(archive_cmds, $1)='$CC -shared $pic_flag $libobjs $deplibs $compiler_flags ${wl}-soname $wl$soname -o $lib'
	_LT_TAGVAR(archive_expsym_cmds, $1)='$CC -shared $pic_flag $libobjs $deplibs $compiler_flags ${wl}-soname $wl$soname ${wl}-retain-symbols-file $wl$export_symbols -o $lib'
      else
	_LT_TAGVAR(ld_shlibs, $1)=no
      fi
      ;;
    esac

    if test "$_LT_TAGVAR(ld_shlibs, $1)" = no; then
      runpath_var=
      _LT_TAGVAR(hardcode_libdir_flag_spec, $1)=
      _LT_TAGVAR(export_dynamic_flag_spec, $1)=
      _LT_TAGVAR(whole_archive_flag_spec, $1)=
    fi
  else
    # PORTME fill in a description of your system's linker (not GNU ld)
    case $host_os in
    aix3*)
      _LT_TAGVAR(allow_undefined_flag, $1)=unsupported
      _LT_TAGVAR(always_export_symbols, $1)=yes
      _LT_TAGVAR(archive_expsym_cmds, $1)='$LD -o $output_objdir/$soname $libobjs $deplibs $linker_flags -bE:$export_symbols -T512 -H512 -bM:SRE~$AR $AR_FLAGS $lib $output_objdir/$soname'
      # Note: this linker hardcodes the directories in LIBPATH if there
      # are no directories specified by -L.
      _LT_TAGVAR(hardcode_minus_L, $1)=yes
      if test "$GCC" = yes && test -z "$lt_prog_compiler_static"; then
	# Neither direct hardcoding nor static linking is supported with a
	# broken collect2.
	_LT_TAGVAR(hardcode_direct, $1)=unsupported
      fi
      ;;

    aix[[4-9]]*)
      if test "$host_cpu" = ia64; then
	# On IA64, the linker does run time linking by default, so we don't
	# have to do anything special.
	aix_use_runtimelinking=no
	exp_sym_flag='-Bexport'
	no_entry_flag=""
      else
	# If we're using GNU nm, then we don't want the "-C" option.
	# -C means demangle to AIX nm, but means don't demangle with GNU nm
	# Also, AIX nm treats weak defined symbols like other global
	# defined symbols, whereas GNU nm marks them as "W".
	if $NM -V 2>&1 | $GREP 'GNU' > /dev/null; then
	  _LT_TAGVAR(export_symbols_cmds, $1)='$NM -Bpg $libobjs $convenience | awk '\''{ if (((\$ 2 == "T") || (\$ 2 == "D") || (\$ 2 == "B") || (\$ 2 == "W")) && ([substr](\$ 3,1,1) != ".")) { print \$ 3 } }'\'' | sort -u > $export_symbols'
	else
	  _LT_TAGVAR(export_symbols_cmds, $1)='$NM -BCpg $libobjs $convenience | awk '\''{ if (((\$ 2 == "T") || (\$ 2 == "D") || (\$ 2 == "B")) && ([substr](\$ 3,1,1) != ".")) { print \$ 3 } }'\'' | sort -u > $export_symbols'
	fi
	aix_use_runtimelinking=no

	# Test if we are trying to use run time linking or normal
	# AIX style linking. If -brtl is somewhere in LDFLAGS, we
	# need to do runtime linking.
	case $host_os in
	aix4.[[23]]|aix4.[[23]].*|aix[[5-9]]*)
	  for ld_flag in $LDFLAGS; do
	    if (test $ld_flag = "-brtl" || test $ld_flag = "-Wl,-brtl"); then
	      aix_use_runtimelinking=yes
	      break
	    fi
	  done
	  ;;
	esac

	exp_sym_flag='-bexport'
	no_entry_flag='-bnoentry'
      fi

      # When large executables or shared objects are built, AIX ld can
      # have problems creating the table of contents.  If linking a library
      # or program results in "error TOC overflow" add -mminimal-toc to
      # CXXFLAGS/CFLAGS for g++/gcc.  In the cases where that is not
      # enough to fix the problem, add -Wl,-bbigtoc to LDFLAGS.
      _LT_TAGVAR(archive_cmds, $1)=''
      _LT_TAGVAR(hardcode_direct, $1)=yes
      _LT_TAGVAR(hardcode_direct_absolute, $1)=yes
      _LT_TAGVAR(hardcode_libdir_separator, $1)=':'
      _LT_TAGVAR(link_all_deplibs, $1)=yes
      _LT_TAGVAR(file_list_spec, $1)='${wl}-f,'

      if test "$GCC" = yes; then
	case $host_os in
	aix4.[[012]]|aix4.[[012]].*)
	  # We only want to do this on AIX 4.2 and lower, the check
	  # below for broken collect2 doesn't work under 4.3+
	  collect2name=`${CC} -print-prog-name=collect2`
	  if test -f "$collect2name" &&
	     strings "$collect2name" | $GREP resolve_lib_name >/dev/null
	  then
	    # We have reworked collect2
	    :
	  else
	    # We have old collect2
	    _LT_TAGVAR(hardcode_direct, $1)=unsupported
	    # It fails to find uninstalled libraries when the uninstalled
	    # path is not listed in the libpath.  Setting hardcode_minus_L
	    # to unsupported forces relinking
	    _LT_TAGVAR(hardcode_minus_L, $1)=yes
	    _LT_TAGVAR(hardcode_libdir_flag_spec, $1)='-L$libdir'
	    _LT_TAGVAR(hardcode_libdir_separator, $1)=
	  fi
	  ;;
	esac
	shared_flag='-shared'
	if test "$aix_use_runtimelinking" = yes; then
	  shared_flag="$shared_flag "'${wl}-G'
	fi
	_LT_TAGVAR(link_all_deplibs, $1)=no
      else
	# not using gcc
	if test "$host_cpu" = ia64; then
	  # VisualAge C++, Version 5.5 for AIX 5L for IA-64, Beta 3 Release
	  # chokes on -Wl,-G. The following line is correct:
	  shared_flag='-G'
	else
	  if test "$aix_use_runtimelinking" = yes; then
	    shared_flag='${wl}-G'
	  else
	    shared_flag='${wl}-bM:SRE'
	  fi
	fi
      fi

      _LT_TAGVAR(export_dynamic_flag_spec, $1)='${wl}-bexpall'
      # It seems that -bexpall does not export symbols beginning with
      # underscore (_), so it is better to generate a list of symbols to export.
      _LT_TAGVAR(always_export_symbols, $1)=yes
      if test "$aix_use_runtimelinking" = yes; then
	# Warning - without using the other runtime loading flags (-brtl),
	# -berok will link without error, but may produce a broken library.
	_LT_TAGVAR(allow_undefined_flag, $1)='-berok'
	# Determine the default libpath from the value encoded in an
	# empty executable.
	_LT_SYS_MODULE_PATH_AIX([$1])
	_LT_TAGVAR(hardcode_libdir_flag_spec, $1)='${wl}-blibpath:$libdir:'"$aix_libpath"
	_LT_TAGVAR(archive_expsym_cmds, $1)='$CC -o $output_objdir/$soname $libobjs $deplibs '"\${wl}$no_entry_flag"' $compiler_flags `if test "x${allow_undefined_flag}" != "x"; then func_echo_all "${wl}${allow_undefined_flag}"; else :; fi` '"\${wl}$exp_sym_flag:\$export_symbols $shared_flag"
      else
	if test "$host_cpu" = ia64; then
	  _LT_TAGVAR(hardcode_libdir_flag_spec, $1)='${wl}-R $libdir:/usr/lib:/lib'
	  _LT_TAGVAR(allow_undefined_flag, $1)="-z nodefs"
	  _LT_TAGVAR(archive_expsym_cmds, $1)="\$CC $shared_flag"' -o $output_objdir/$soname $libobjs $deplibs '"\${wl}$no_entry_flag"' $compiler_flags ${wl}${allow_undefined_flag} '"\${wl}$exp_sym_flag:\$export_symbols"
	else
	  # Determine the default libpath from the value encoded in an
	  # empty executable.
	  _LT_SYS_MODULE_PATH_AIX([$1])
	  _LT_TAGVAR(hardcode_libdir_flag_spec, $1)='${wl}-blibpath:$libdir:'"$aix_libpath"
	  # Warning - without using the other run time loading flags,
	  # -berok will link without error, but may produce a broken library.
	  _LT_TAGVAR(no_undefined_flag, $1)=' ${wl}-bernotok'
	  _LT_TAGVAR(allow_undefined_flag, $1)=' ${wl}-berok'
	  if test "$with_gnu_ld" = yes; then
	    # We only use this code for GNU lds that support --whole-archive.
	    _LT_TAGVAR(whole_archive_flag_spec, $1)='${wl}--whole-archive$convenience ${wl}--no-whole-archive'
	  else
	    # Exported symbols can be pulled into shared objects from archives
	    _LT_TAGVAR(whole_archive_flag_spec, $1)='$convenience'
	  fi
	  _LT_TAGVAR(archive_cmds_need_lc, $1)=yes
	  # This is similar to how AIX traditionally builds its shared libraries.
	  _LT_TAGVAR(archive_expsym_cmds, $1)="\$CC $shared_flag"' -o $output_objdir/$soname $libobjs $deplibs ${wl}-bnoentry $compiler_flags ${wl}-bE:$export_symbols${allow_undefined_flag}~$AR $AR_FLAGS $output_objdir/$libname$release.a $output_objdir/$soname'
	fi
      fi
      ;;

    amigaos*)
      case $host_cpu in
      powerpc)
        # see comment about AmigaOS4 .so support
        _LT_TAGVAR(archive_cmds, $1)='$CC -shared $libobjs $deplibs $compiler_flags ${wl}-soname $wl$soname -o $lib'
        _LT_TAGVAR(archive_expsym_cmds, $1)=''
        ;;
      m68k)
        _LT_TAGVAR(archive_cmds, $1)='$RM $output_objdir/a2ixlibrary.data~$ECHO "#define NAME $libname" > $output_objdir/a2ixlibrary.data~$ECHO "#define LIBRARY_ID 1" >> $output_objdir/a2ixlibrary.data~$ECHO "#define VERSION $major" >> $output_objdir/a2ixlibrary.data~$ECHO "#define REVISION $revision" >> $output_objdir/a2ixlibrary.data~$AR $AR_FLAGS $lib $libobjs~$RANLIB $lib~(cd $output_objdir && a2ixlibrary -32)'
        _LT_TAGVAR(hardcode_libdir_flag_spec, $1)='-L$libdir'
        _LT_TAGVAR(hardcode_minus_L, $1)=yes
        ;;
      esac
      ;;

    bsdi[[45]]*)
      _LT_TAGVAR(export_dynamic_flag_spec, $1)=-rdynamic
      ;;

    cygwin* | mingw* | pw32* | cegcc*)
      # When not using gcc, we currently assume that we are using
      # Microsoft Visual C++.
      # hardcode_libdir_flag_spec is actually meaningless, as there is
      # no search path for DLLs.
      case $cc_basename in
      cl*)
	# Native MSVC
	_LT_TAGVAR(hardcode_libdir_flag_spec, $1)=' '
	_LT_TAGVAR(allow_undefined_flag, $1)=unsupported
	_LT_TAGVAR(always_export_symbols, $1)=yes
	_LT_TAGVAR(file_list_spec, $1)='@'
	# Tell ltmain to make .lib files, not .a files.
	libext=lib
	# Tell ltmain to make .dll files, not .so files.
	shrext_cmds=".dll"
	# FIXME: Setting linknames here is a bad hack.
	_LT_TAGVAR(archive_cmds, $1)='$CC -o $output_objdir/$soname $libobjs $compiler_flags $deplibs -Wl,-dll~linknames='
	_LT_TAGVAR(archive_expsym_cmds, $1)='if test "x`$SED 1q $export_symbols`" = xEXPORTS; then
	    sed -n -e 's/\\\\\\\(.*\\\\\\\)/-link\\\ -EXPORT:\\\\\\\1/' -e '1\\\!p' < $export_symbols > $output_objdir/$soname.exp;
	  else
	    sed -e 's/\\\\\\\(.*\\\\\\\)/-link\\\ -EXPORT:\\\\\\\1/' < $export_symbols > $output_objdir/$soname.exp;
	  fi~
	  $CC -o $tool_output_objdir$soname $libobjs $compiler_flags $deplibs "@$tool_output_objdir$soname.exp" -Wl,-DLL,-IMPLIB:"$tool_output_objdir$libname.dll.lib"~
	  linknames='
	# The linker will not automatically build a static lib if we build a DLL.
	# _LT_TAGVAR(old_archive_from_new_cmds, $1)='true'
	_LT_TAGVAR(enable_shared_with_static_runtimes, $1)=yes
	_LT_TAGVAR(exclude_expsyms, $1)='_NULL_IMPORT_DESCRIPTOR|_IMPORT_DESCRIPTOR_.*'
	_LT_TAGVAR(export_symbols_cmds, $1)='$NM $libobjs $convenience | $global_symbol_pipe | $SED -e '\''/^[[BCDGRS]][[ ]]/s/.*[[ ]]\([[^ ]]*\)/\1,DATA/'\'' | $SED -e '\''/^[[AITW]][[ ]]/s/.*[[ ]]//'\'' | sort | uniq > $export_symbols'
	# Don't use ranlib
	_LT_TAGVAR(old_postinstall_cmds, $1)='chmod 644 $oldlib'
	_LT_TAGVAR(postlink_cmds, $1)='lt_outputfile="@OUTPUT@"~
	  lt_tool_outputfile="@TOOL_OUTPUT@"~
	  case $lt_outputfile in
	    *.exe|*.EXE) ;;
	    *)
	      lt_outputfile="$lt_outputfile.exe"
	      lt_tool_outputfile="$lt_tool_outputfile.exe"
	      ;;
	  esac~
	  if test "$MANIFEST_TOOL" != ":" && test -f "$lt_outputfile.manifest"; then
	    $MANIFEST_TOOL -manifest "$lt_tool_outputfile.manifest" -outputresource:"$lt_tool_outputfile" || exit 1;
	    $RM "$lt_outputfile.manifest";
	  fi'
	;;
      *)
	# Assume MSVC wrapper
	_LT_TAGVAR(hardcode_libdir_flag_spec, $1)=' '
	_LT_TAGVAR(allow_undefined_flag, $1)=unsupported
	# Tell ltmain to make .lib files, not .a files.
	libext=lib
	# Tell ltmain to make .dll files, not .so files.
	shrext_cmds=".dll"
	# FIXME: Setting linknames here is a bad hack.
	_LT_TAGVAR(archive_cmds, $1)='$CC -o $lib $libobjs $compiler_flags `func_echo_all "$deplibs" | $SED '\''s/ -lc$//'\''` -link -dll~linknames='
	# The linker will automatically build a .lib file if we build a DLL.
	_LT_TAGVAR(old_archive_from_new_cmds, $1)='true'
	# FIXME: Should let the user specify the lib program.
	_LT_TAGVAR(old_archive_cmds, $1)='lib -OUT:$oldlib$oldobjs$old_deplibs'
	_LT_TAGVAR(enable_shared_with_static_runtimes, $1)=yes
	;;
      esac
      ;;

    darwin* | rhapsody*)
      _LT_DARWIN_LINKER_FEATURES($1)
      ;;

    dgux*)
      _LT_TAGVAR(archive_cmds, $1)='$LD -G -h $soname -o $lib $libobjs $deplibs $linker_flags'
      _LT_TAGVAR(hardcode_libdir_flag_spec, $1)='-L$libdir'
      _LT_TAGVAR(hardcode_shlibpath_var, $1)=no
      ;;

    # FreeBSD 2.2.[012] allows us to include c++rt0.o to get C++ constructor
    # support.  Future versions do this automatically, but an explicit c++rt0.o
    # does not break anything, and helps significantly (at the cost of a little
    # extra space).
    freebsd2.2*)
      _LT_TAGVAR(archive_cmds, $1)='$LD -Bshareable -o $lib $libobjs $deplibs $linker_flags /usr/lib/c++rt0.o'
      _LT_TAGVAR(hardcode_libdir_flag_spec, $1)='-R$libdir'
      _LT_TAGVAR(hardcode_direct, $1)=yes
      _LT_TAGVAR(hardcode_shlibpath_var, $1)=no
      ;;

    # Unfortunately, older versions of FreeBSD 2 do not have this feature.
    freebsd2.*)
      _LT_TAGVAR(archive_cmds, $1)='$LD -Bshareable -o $lib $libobjs $deplibs $linker_flags'
      _LT_TAGVAR(hardcode_direct, $1)=yes
      _LT_TAGVAR(hardcode_minus_L, $1)=yes
      _LT_TAGVAR(hardcode_shlibpath_var, $1)=no
      ;;

    # FreeBSD 3 and greater uses gcc -shared to do shared libraries.
    freebsd* | dragonfly*)
      _LT_TAGVAR(archive_cmds, $1)='$CC -shared $pic_flag -o $lib $libobjs $deplibs $compiler_flags'
      _LT_TAGVAR(hardcode_libdir_flag_spec, $1)='-R$libdir'
      _LT_TAGVAR(hardcode_direct, $1)=yes
      _LT_TAGVAR(hardcode_shlibpath_var, $1)=no
      ;;

    hpux9*)
      if test "$GCC" = yes; then
	_LT_TAGVAR(archive_cmds, $1)='$RM $output_objdir/$soname~$CC -shared $pic_flag ${wl}+b ${wl}$install_libdir -o $output_objdir/$soname $libobjs $deplibs $compiler_flags~test $output_objdir/$soname = $lib || mv $output_objdir/$soname $lib'
      else
	_LT_TAGVAR(archive_cmds, $1)='$RM $output_objdir/$soname~$LD -b +b $install_libdir -o $output_objdir/$soname $libobjs $deplibs $linker_flags~test $output_objdir/$soname = $lib || mv $output_objdir/$soname $lib'
      fi
      _LT_TAGVAR(hardcode_libdir_flag_spec, $1)='${wl}+b ${wl}$libdir'
      _LT_TAGVAR(hardcode_libdir_separator, $1)=:
      _LT_TAGVAR(hardcode_direct, $1)=yes

      # hardcode_minus_L: Not really in the search PATH,
      # but as the default location of the library.
      _LT_TAGVAR(hardcode_minus_L, $1)=yes
      _LT_TAGVAR(export_dynamic_flag_spec, $1)='${wl}-E'
      ;;

    hpux10*)
      if test "$GCC" = yes && test "$with_gnu_ld" = no; then
	_LT_TAGVAR(archive_cmds, $1)='$CC -shared $pic_flag ${wl}+h ${wl}$soname ${wl}+b ${wl}$install_libdir -o $lib $libobjs $deplibs $compiler_flags'
      else
	_LT_TAGVAR(archive_cmds, $1)='$LD -b +h $soname +b $install_libdir -o $lib $libobjs $deplibs $linker_flags'
      fi
      if test "$with_gnu_ld" = no; then
	_LT_TAGVAR(hardcode_libdir_flag_spec, $1)='${wl}+b ${wl}$libdir'
	_LT_TAGVAR(hardcode_libdir_separator, $1)=:
	_LT_TAGVAR(hardcode_direct, $1)=yes
	_LT_TAGVAR(hardcode_direct_absolute, $1)=yes
	_LT_TAGVAR(export_dynamic_flag_spec, $1)='${wl}-E'
	# hardcode_minus_L: Not really in the search PATH,
	# but as the default location of the library.
        _LT_TAGVAR(hardcode_minus_L, $1)=yes
      fi
      ;;

    hpux11*)
      if test "$GCC" = yes && test "$with_gnu_ld" = no; then
        case $host_cpu in
        hppa*64*)
          _LT_TAGVAR(archive_cmds, $1)='$CC -shared ${wl}+h ${wl}$soname -o $lib $libobjs $deplibs $compiler_flags'
          ;;
        ia64*)
          _LT_TAGVAR(archive_cmds, $1)='$CC -shared $pic_flag ${wl}+h ${wl}$soname ${wl}+nodefaultrpath -o $lib $libobjs $deplibs $compiler_flags'
          ;;
        *)
          _LT_TAGVAR(archive_cmds, $1)='$CC -shared $pic_flag ${wl}+h ${wl}$soname ${wl}+b ${wl}$install_libdir -o $lib $libobjs $deplibs $compiler_flags'
          ;;
        esac
      else
        case $host_cpu in
        hppa*64*)
          _LT_TAGVAR(archive_cmds, $1)='$CC -b ${wl}+h ${wl}$soname -o $lib $libobjs $deplibs $compiler_flags'
          ;;
        ia64*)
          _LT_TAGVAR(archive_cmds, $1)='$CC -b ${wl}+h ${wl}$soname ${wl}+nodefaultrpath -o $lib $libobjs $deplibs $compiler_flags'
          ;;
        *)
        m4_if($1, [], [
          # Older versions of the 11.00 compiler do not understand -b yet
          # (HP92453-01 A.11.01.20 doesn't, HP92453-01 B.11.X.35175-35176.GP does)
          _LT_LINKER_OPTION([if $CC understands -b],
            _LT_TAGVAR(lt_cv_prog_compiler__b, $1), [-b],
            [_LT_TAGVAR(archive_cmds, $1)='$CC -b ${wl}+h ${wl}$soname ${wl}+b ${wl}$install_libdir -o $lib $libobjs $deplibs $compiler_flags'],
            [_LT_TAGVAR(archive_cmds, $1)='$LD -b +h $soname +b $install_libdir -o $lib $libobjs $deplibs $linker_flags'])],
          [_LT_TAGVAR(archive_cmds, $1)='$CC -b ${wl}+h ${wl}$soname ${wl}+b ${wl}$install_libdir -o $lib $libobjs $deplibs $compiler_flags'])
          ;;
        esac
      fi
      if test "$with_gnu_ld" = no; then
        _LT_TAGVAR(hardcode_libdir_flag_spec, $1)='${wl}+b ${wl}$libdir'
        _LT_TAGVAR(hardcode_libdir_separator, $1)=:

        case $host_cpu in
        hppa*64*|ia64*)
          _LT_TAGVAR(hardcode_direct, $1)=no
          _LT_TAGVAR(hardcode_shlibpath_var, $1)=no
          ;;
        *)
          _LT_TAGVAR(hardcode_direct, $1)=yes
          _LT_TAGVAR(hardcode_direct_absolute, $1)=yes
          _LT_TAGVAR(export_dynamic_flag_spec, $1)='${wl}-E'

          # hardcode_minus_L: Not really in the search PATH,
          # but as the default location of the library.
          _LT_TAGVAR(hardcode_minus_L, $1)=yes
          ;;
        esac
      fi
      ;;

    irix5* | irix6* | nonstopux*)
      if test "$GCC" = yes; then
        _LT_TAGVAR(archive_cmds, $1)='$CC -shared $pic_flag $libobjs $deplibs $compiler_flags ${wl}-soname ${wl}$soname `test -n "$verstring" && func_echo_all "${wl}-set_version ${wl}$verstring"` ${wl}-update_registry ${wl}${output_objdir}/so_locations -o $lib'
        # Try to use the -exported_symbol ld option, if it does not
        # work, assume that -exports_file does not work either and
        # implicitly export all symbols.
        # This should be the same for all languages, so no per-tag cache variable.
        AC_CACHE_CHECK([whether the $host_os linker accepts -exported_symbol],
          [lt_cv_irix_exported_symbol],
          [save_LDFLAGS="$LDFLAGS"
           LDFLAGS="$LDFLAGS -shared ${wl}-exported_symbol ${wl}foo ${wl}-update_registry ${wl}/dev/null"
           AC_LINK_IFELSE(
             [AC_LANG_SOURCE(
                [AC_LANG_CASE([C], [[int foo (void) { return 0; }]],
                              [C++], [[int foo (void) { return 0; }]],
                              [Fortran 77], [[
      subroutine foo
      end]],
                              [Fortran], [[
      subroutine foo
      end]])])],
              [lt_cv_irix_exported_symbol=yes],
              [lt_cv_irix_exported_symbol=no])
           LDFLAGS="$save_LDFLAGS"])
        if test "$lt_cv_irix_exported_symbol" = yes; then
          _LT_TAGVAR(archive_expsym_cmds, $1)='$CC -shared $pic_flag $libobjs $deplibs $compiler_flags ${wl}-soname ${wl}$soname `test -n "$verstring" && func_echo_all "${wl}-set_version ${wl}$verstring"` ${wl}-update_registry ${wl}${output_objdir}/so_locations ${wl}-exports_file ${wl}$export_symbols -o $lib'
        fi
      else
        _LT_TAGVAR(archive_cmds, $1)='$CC -shared $libobjs $deplibs $compiler_flags -soname $soname `test -n "$verstring" && func_echo_all "-set_version $verstring"` -update_registry ${output_objdir}/so_locations -o $lib'
        _LT_TAGVAR(archive_expsym_cmds, $1)='$CC -shared $libobjs $deplibs $compiler_flags -soname $soname `test -n "$verstring" && func_echo_all "-set_version $verstring"` -update_registry ${output_objdir}/so_locations -exports_file $export_symbols -o $lib'
      fi
      _LT_TAGVAR(archive_cmds_need_lc, $1)='no'
      _LT_TAGVAR(hardcode_libdir_flag_spec, $1)='${wl}-rpath ${wl}$libdir'
      _LT_TAGVAR(hardcode_libdir_separator, $1)=:
      _LT_TAGVAR(inherit_rpath, $1)=yes
      _LT_TAGVAR(link_all_deplibs, $1)=yes
      ;;

    netbsd* | netbsdelf*-gnu)
      if echo __ELF__ | $CC -E - | $GREP __ELF__ >/dev/null; then
        _LT_TAGVAR(archive_cmds, $1)='$LD -Bshareable -o $lib $libobjs $deplibs $linker_flags'  # a.out
      else
        _LT_TAGVAR(archive_cmds, $1)='$LD -shared -o $lib $libobjs $deplibs $linker_flags'  # ELF
      fi
      _LT_TAGVAR(hardcode_libdir_flag_spec, $1)='-R$libdir'
      _LT_TAGVAR(hardcode_direct, $1)=yes
      _LT_TAGVAR(hardcode_shlibpath_var, $1)=no
      ;;

    newsos6)
      _LT_TAGVAR(archive_cmds, $1)='$LD -G -h $soname -o $lib $libobjs $deplibs $linker_flags'
      _LT_TAGVAR(hardcode_direct, $1)=yes
      _LT_TAGVAR(hardcode_libdir_flag_spec, $1)='${wl}-rpath ${wl}$libdir'
      _LT_TAGVAR(hardcode_libdir_separator, $1)=:
      _LT_TAGVAR(hardcode_shlibpath_var, $1)=no
      ;;

    *nto* | *qnx*)
      ;;

    openbsd*)
      if test -f /usr/libexec/ld.so; then
        _LT_TAGVAR(hardcode_direct, $1)=yes
        _LT_TAGVAR(hardcode_shlibpath_var, $1)=no
        _LT_TAGVAR(hardcode_direct_absolute, $1)=yes
        if test -z "`echo __ELF__ | $CC -E - | $GREP __ELF__`" || test "$host_os-$host_cpu" = "openbsd2.8-powerpc"; then
          _LT_TAGVAR(archive_cmds, $1)='$CC -shared $pic_flag -o $lib $libobjs $deplibs $compiler_flags'
          _LT_TAGVAR(archive_expsym_cmds, $1)='$CC -shared $pic_flag -o $lib $libobjs $deplibs $compiler_flags ${wl}-retain-symbols-file,$export_symbols'
          _LT_TAGVAR(hardcode_libdir_flag_spec, $1)='${wl}-rpath,$libdir'
          _LT_TAGVAR(export_dynamic_flag_spec, $1)='${wl}-E'
        else
          case $host_os in
          openbsd[[01]].* | openbsd2.[[0-7]] | openbsd2.[[0-7]].*)
            _LT_TAGVAR(archive_cmds, $1)='$LD -Bshareable -o $lib $libobjs $deplibs $linker_flags'
            _LT_TAGVAR(hardcode_libdir_flag_spec, $1)='-R$libdir'
            ;;
          *)
            _LT_TAGVAR(archive_cmds, $1)='$CC -shared $pic_flag -o $lib $libobjs $deplibs $compiler_flags'
            _LT_TAGVAR(hardcode_libdir_flag_spec, $1)='${wl}-rpath,$libdir'
            ;;
          esac
        fi
      else
        _LT_TAGVAR(ld_shlibs, $1)=no
      fi
      ;;

    os2*)
      _LT_TAGVAR(hardcode_libdir_flag_spec, $1)='-L$libdir'
      _LT_TAGVAR(hardcode_minus_L, $1)=yes
      _LT_TAGVAR(allow_undefined_flag, $1)=unsupported
      _LT_TAGVAR(archive_cmds, $1)='$ECHO "LIBRARY $libname INITINSTANCE" > $output_objdir/$libname.def~$ECHO "DESCRIPTION \"$libname\"" >> $output_objdir/$libname.def~echo DATA >> $output_objdir/$libname.def~echo " SINGLE NONSHARED" >> $output_objdir/$libname.def~echo EXPORTS >> $output_objdir/$libname.def~emxexp $libobjs >> $output_objdir/$libname.def~$CC -Zdll -Zcrtdll -o $lib $libobjs $deplibs $compiler_flags $output_objdir/$libname.def'
      _LT_TAGVAR(old_archive_from_new_cmds, $1)='emximp -o $output_objdir/$libname.a $output_objdir/$libname.def'
      ;;

    osf3*)
      if test "$GCC" = yes; then
        _LT_TAGVAR(allow_undefined_flag, $1)=' ${wl}-expect_unresolved ${wl}\*'
        _LT_TAGVAR(archive_cmds, $1)='$CC -shared${allow_undefined_flag} $libobjs $deplibs $compiler_flags ${wl}-soname ${wl}$soname `test -n "$verstring" && func_echo_all "${wl}-set_version ${wl}$verstring"` ${wl}-update_registry ${wl}${output_objdir}/so_locations -o $lib'
      else
        _LT_TAGVAR(allow_undefined_flag, $1)=' -expect_unresolved \*'
        _LT_TAGVAR(archive_cmds, $1)='$CC -shared${allow_undefined_flag} $libobjs $deplibs $compiler_flags -soname $soname `test -n "$verstring" && func_echo_all "-set_version $verstring"` -update_registry ${output_objdir}/so_locations -o $lib'
      fi
      _LT_TAGVAR(archive_cmds_need_lc, $1)='no'
      _LT_TAGVAR(hardcode_libdir_flag_spec, $1)='${wl}-rpath ${wl}$libdir'
      _LT_TAGVAR(hardcode_libdir_separator, $1)=:
      ;;

    osf4* | osf5*)	# as osf3* with the addition of -msym flag
      if test "$GCC" = yes; then
        _LT_TAGVAR(allow_undefined_flag, $1)=' ${wl}-expect_unresolved ${wl}\*'
        _LT_TAGVAR(archive_cmds, $1)='$CC -shared${allow_undefined_flag} $pic_flag $libobjs $deplibs $compiler_flags ${wl}-msym ${wl}-soname ${wl}$soname `test -n "$verstring" && func_echo_all "${wl}-set_version ${wl}$verstring"` ${wl}-update_registry ${wl}${output_objdir}/so_locations -o $lib'
        _LT_TAGVAR(hardcode_libdir_flag_spec, $1)='${wl}-rpath ${wl}$libdir'
      else
        _LT_TAGVAR(allow_undefined_flag, $1)=' -expect_unresolved \*'
        _LT_TAGVAR(archive_cmds, $1)='$CC -shared${allow_undefined_flag} $libobjs $deplibs $compiler_flags -msym -soname $soname `test -n "$verstring" && func_echo_all "-set_version $verstring"` -update_registry ${output_objdir}/so_locations -o $lib'
        _LT_TAGVAR(archive_expsym_cmds, $1)='for i in `cat $export_symbols`; do printf "%s %s\\n" -exported_symbol "\$i" >> $lib.exp; done; printf "%s\\n" "-hidden">> $lib.exp~
        $CC -shared${allow_undefined_flag} ${wl}-input ${wl}$lib.exp $compiler_flags $libobjs $deplibs -soname $soname `test -n "$verstring" && $ECHO "-set_version $verstring"` -update_registry ${output_objdir}/so_locations -o $lib~$RM $lib.exp'

        # Both c and cxx compiler support -rpath directly
        _LT_TAGVAR(hardcode_libdir_flag_spec, $1)='-rpath $libdir'
      fi
      _LT_TAGVAR(archive_cmds_need_lc, $1)='no'
      _LT_TAGVAR(hardcode_libdir_separator, $1)=:
      ;;

    solaris*)
      _LT_TAGVAR(no_undefined_flag, $1)=' -z defs'
      if test "$GCC" = yes; then
        wlarc='${wl}'
        _LT_TAGVAR(archive_cmds, $1)='$CC -shared $pic_flag ${wl}-z ${wl}text ${wl}-h ${wl}$soname -o $lib $libobjs $deplibs $compiler_flags'
        _LT_TAGVAR(archive_expsym_cmds, $1)='echo "{ global:" > $lib.exp~cat $export_symbols | $SED -e "s/\(.*\)/\1;/" >> $lib.exp~echo "local: *; };" >> $lib.exp~
          $CC -shared $pic_flag ${wl}-z ${wl}text ${wl}-M ${wl}$lib.exp ${wl}-h ${wl}$soname -o $lib $libobjs $deplibs $compiler_flags~$RM $lib.exp'
      else
        case `$CC -V 2>&1` in
        *"Compilers 5.0"*)
          wlarc=''
          _LT_TAGVAR(archive_cmds, $1)='$LD -G${allow_undefined_flag} -h $soname -o $lib $libobjs $deplibs $linker_flags'
          _LT_TAGVAR(archive_expsym_cmds, $1)='echo "{ global:" > $lib.exp~cat $export_symbols | $SED -e "s/\(.*\)/\1;/" >> $lib.exp~echo "local: *; };" >> $lib.exp~
            $LD -G${allow_undefined_flag} -M $lib.exp -h $soname -o $lib $libobjs $deplibs $linker_flags~$RM $lib.exp'
          ;;
        *)
          wlarc='${wl}'
          _LT_TAGVAR(archive_cmds, $1)='$CC -G${allow_undefined_flag} -h $soname -o $lib $libobjs $deplibs $compiler_flags'
          _LT_TAGVAR(archive_expsym_cmds, $1)='echo "{ global:" > $lib.exp~cat $export_symbols | $SED -e "s/\(.*\)/\1;/" >> $lib.exp~echo "local: *; };" >> $lib.exp~
            $CC -G${allow_undefined_flag} -M $lib.exp -h $soname -o $lib $libobjs $deplibs $compiler_flags~$RM $lib.exp'
          ;;
        esac
      fi
      _LT_TAGVAR(hardcode_libdir_flag_spec, $1)='-R$libdir'
      _LT_TAGVAR(hardcode_shlibpath_var, $1)=no
      case $host_os in
      solaris2.[[0-5]] | solaris2.[[0-5]].*) ;;
      *)
        # The compiler driver will combine and reorder linker options,
        # but understands `-z linker_flag'.  GCC discards it without `$wl',
        # but is careful enough not to reorder.
        # Supported since Solaris 2.6 (maybe 2.5.1?)
        if test "$GCC" = yes; then
          _LT_TAGVAR(whole_archive_flag_spec, $1)='${wl}-z ${wl}allextract$convenience ${wl}-z ${wl}defaultextract'
        else
          _LT_TAGVAR(whole_archive_flag_spec, $1)='-z allextract$convenience -z defaultextract'
        fi
        ;;
      esac
      _LT_TAGVAR(link_all_deplibs, $1)=yes
      ;;

    sunos4*)
      if test "x$host_vendor" = xsequent; then
        # Use $CC to link under sequent, because it throws in some extra .o
        # files that make .init and .fini sections work.
        _LT_TAGVAR(archive_cmds, $1)='$CC -G ${wl}-h $soname -o $lib $libobjs $deplibs $compiler_flags'
      else
        _LT_TAGVAR(archive_cmds, $1)='$LD -assert pure-text -Bstatic -o $lib $libobjs $deplibs $linker_flags'
      fi
      _LT_TAGVAR(hardcode_libdir_flag_spec, $1)='-L$libdir'
      _LT_TAGVAR(hardcode_direct, $1)=yes
      _LT_TAGVAR(hardcode_minus_L, $1)=yes
      _LT_TAGVAR(hardcode_shlibpath_var, $1)=no
      ;;

    sysv4)
      case $host_vendor in
        sni)
          _LT_TAGVAR(archive_cmds, $1)='$LD -G -h $soname -o $lib $libobjs $deplibs $linker_flags'
          _LT_TAGVAR(hardcode_direct, $1)=yes # is this really true???
          ;;
        siemens)
          ## LD is ld it makes a PLAMLIB
          ## CC just makes a GrossModule.
          _LT_TAGVAR(archive_cmds, $1)='$LD -G -o $lib $libobjs $deplibs $linker_flags'
          _LT_TAGVAR(reload_cmds, $1)='$CC -r -o $output$reload_objs'
          _LT_TAGVAR(hardcode_direct, $1)=no
          ;;
        motorola)
          _LT_TAGVAR(archive_cmds, $1)='$LD -G -h $soname -o $lib $libobjs $deplibs $linker_flags'
          _LT_TAGVAR(hardcode_direct, $1)=no #Motorola manual says yes, but my tests say they lie
          ;;
      esac
      runpath_var='LD_RUN_PATH'
      _LT_TAGVAR(hardcode_shlibpath_var, $1)=no
      ;;

    sysv4.3*)
      _LT_TAGVAR(archive_cmds, $1)='$LD -G -h $soname -o $lib $libobjs $deplibs $linker_flags'
      _LT_TAGVAR(hardcode_shlibpath_var, $1)=no
      _LT_TAGVAR(export_dynamic_flag_spec, $1)='-Bexport'
      ;;

    sysv4*MP*)
      if test -d /usr/nec; then
        _LT_TAGVAR(archive_cmds, $1)='$LD -G -h $soname -o $lib $libobjs $deplibs $linker_flags'
        _LT_TAGVAR(hardcode_shlibpath_var, $1)=no
        runpath_var=LD_RUN_PATH
        hardcode_runpath_var=yes
        _LT_TAGVAR(ld_shlibs, $1)=yes
      fi
      ;;

    sysv4*uw2* | sysv5OpenUNIX* | sysv5UnixWare7.[[01]].[[10]]* | unixware7* | sco3.2v5.0.[[024]]*)
      _LT_TAGVAR(no_undefined_flag, $1)='${wl}-z,text'
      _LT_TAGVAR(archive_cmds_need_lc, $1)=no
      _LT_TAGVAR(hardcode_shlibpath_var, $1)=no
      runpath_var='LD_RUN_PATH'

      if test "$GCC" = yes; then
        _LT_TAGVAR(archive_cmds, $1)='$CC -shared ${wl}-h,$soname -o $lib $libobjs $deplibs $compiler_flags'
        _LT_TAGVAR(archive_expsym_cmds, $1)='$CC -shared ${wl}-Bexport:$export_symbols ${wl}-h,$soname -o $lib $libobjs $deplibs $compiler_flags'
      else
        _LT_TAGVAR(archive_cmds, $1)='$CC -G ${wl}-h,$soname -o $lib $libobjs $deplibs $compiler_flags'
        _LT_TAGVAR(archive_expsym_cmds, $1)='$CC -G ${wl}-Bexport:$export_symbols ${wl}-h,$soname -o $lib $libobjs $deplibs $compiler_flags'
      fi
      ;;

    sysv5* | sco3.2v5* | sco5v6*)
      # Note: We can NOT use -z defs as we might desire, because we do not
      # link with -lc, and that would cause any symbols used from libc to
      # always be unresolved, which means just about no library would
      # ever link correctly.  If we're not using GNU ld we use -z text
      # though, which does catch some bad symbols but isn't as heavy-handed
      # as -z defs.
      _LT_TAGVAR(no_undefined_flag, $1)='${wl}-z,text'
      _LT_TAGVAR(allow_undefined_flag, $1)='${wl}-z,nodefs'
      _LT_TAGVAR(archive_cmds_need_lc, $1)=no
      _LT_TAGVAR(hardcode_shlibpath_var, $1)=no
      _LT_TAGVAR(hardcode_libdir_flag_spec, $1)='${wl}-R,$libdir'
      _LT_TAGVAR(hardcode_libdir_separator, $1)=':'
      _LT_TAGVAR(link_all_deplibs, $1)=yes
      _LT_TAGVAR(export_dynamic_flag_spec, $1)='${wl}-Bexport'
      runpath_var='LD_RUN_PATH'

      if test "$GCC" = yes; then
        _LT_TAGVAR(archive_cmds, $1)='$CC -shared ${wl}-h,$soname -o $lib $libobjs $deplibs $compiler_flags'
        _LT_TAGVAR(archive_expsym_cmds, $1)='$CC -shared ${wl}-Bexport:$export_symbols ${wl}-h,$soname -o $lib $libobjs $deplibs $compiler_flags'
      else
        _LT_TAGVAR(archive_cmds, $1)='$CC -G ${wl}-h,$soname -o $lib $libobjs $deplibs $compiler_flags'
        _LT_TAGVAR(archive_expsym_cmds, $1)='$CC -G ${wl}-Bexport:$export_symbols ${wl}-h,$soname -o $lib $libobjs $deplibs $compiler_flags'
      fi
      ;;

    uts4*)
      _LT_TAGVAR(archive_cmds, $1)='$LD -G -h $soname -o $lib $libobjs $deplibs $linker_flags'
      _LT_TAGVAR(hardcode_libdir_flag_spec, $1)='-L$libdir'
      _LT_TAGVAR(hardcode_shlibpath_var, $1)=no
      ;;

    *)
      _LT_TAGVAR(ld_shlibs, $1)=no
      ;;
    esac

    if test x$host_vendor = xsni; then
      case $host in
      sysv4 | sysv4.2uw2* | sysv4.3* | sysv5*)
        _LT_TAGVAR(export_dynamic_flag_spec, $1)='${wl}-Blargedynsym'
        ;;
      esac
    fi
  fi
])
AC_MSG_RESULT([$_LT_TAGVAR(ld_shlibs, $1)])
test "$_LT_TAGVAR(ld_shlibs, $1)" = no && can_build_shared=no

_LT_TAGVAR(with_gnu_ld, $1)=$with_gnu_ld

_LT_DECL([], [libext], [0], [Old archive suffix (normally "a")])dnl
_LT_DECL([], [shrext_cmds], [1], [Shared library suffix (normally ".so")])dnl
_LT_DECL([], [extract_expsyms_cmds], [2],
    [The commands to extract the exported symbol list from a shared archive])

#
# Do we need to explicitly link libc?
#
case "x$_LT_TAGVAR(archive_cmds_need_lc, $1)" in
x|xyes)
  # Assume -lc should be added
  _LT_TAGVAR(archive_cmds_need_lc, $1)=yes

  if test "$enable_shared" = yes && test "$GCC" = yes; then
    case $_LT_TAGVAR(archive_cmds, $1) in
    *'~'*)
      # FIXME: we may have to deal with multi-command sequences.
      ;;
    '$CC '*)
      # Test whether the compiler implicitly links with -lc since on some
      # systems, -lgcc has to come before -lc. If gcc already passes -lc
      # to ld, don't add -lc before -lgcc.
      AC_CACHE_CHECK([whether -lc should be explicitly linked in],
	[lt_cv_]_LT_TAGVAR(archive_cmds_need_lc, $1),
	[$RM conftest*
	echo "$lt_simple_compile_test_code" > conftest.$ac_ext

	if AC_TRY_EVAL(ac_compile) 2>conftest.err; then
	  soname=conftest
	  lib=conftest
	  libobjs=conftest.$ac_objext
	  deplibs=
	  wl=$_LT_TAGVAR(lt_prog_compiler_wl, $1)
	  pic_flag=$_LT_TAGVAR(lt_prog_compiler_pic, $1)
	  compiler_flags=-v
	  linker_flags=-v
	  verstring=
	  output_objdir=.
	  libname=conftest
	  lt_save_allow_undefined_flag=$_LT_TAGVAR(allow_undefined_flag, $1)
	  _LT_TAGVAR(allow_undefined_flag, $1)=
	  if AC_TRY_EVAL(_LT_TAGVAR(archive_cmds, $1) 2\>\&1 \| $GREP \" -lc \" \>/dev/null 2\>\&1)
	  then
	    lt_cv_[]_LT_TAGVAR(archive_cmds_need_lc, $1)=no
	  else
	    lt_cv_[]_LT_TAGVAR(archive_cmds_need_lc, $1)=yes
	  fi
	  _LT_TAGVAR(allow_undefined_flag, $1)=$lt_save_allow_undefined_flag
	else
	  cat conftest.err 1>&5
	fi
	$RM conftest*
	])
      _LT_TAGVAR(archive_cmds_need_lc, $1)=$lt_cv_[]_LT_TAGVAR(archive_cmds_need_lc, $1)
      ;;
    esac
  fi
  ;;
esac

_LT_TAGDECL([build_libtool_need_lc], [archive_cmds_need_lc], [0],
    [Whether or not to add -lc for building shared libraries])
_LT_TAGDECL([allow_libtool_libs_with_static_runtimes],
    [enable_shared_with_static_runtimes], [0],
    [Whether or not to disallow shared libs when runtime libs are static])
_LT_TAGDECL([], [export_dynamic_flag_spec], [1],
    [Compiler flag to allow reflexive dlopens])
_LT_TAGDECL([], [whole_archive_flag_spec], [1],
    [Compiler flag to generate shared objects directly from archives])
_LT_TAGDECL([], [compiler_needs_object], [1],
    [Whether the compiler copes with passing no objects directly])
_LT_TAGDECL([], [old_archive_from_new_cmds], [2],
    [Create an old-style archive from a shared archive])
_LT_TAGDECL([], [old_archive_from_expsyms_cmds], [2],
    [Create a temporary old-style archive to link instead of a shared archive])
_LT_TAGDECL([], [archive_cmds], [2], [Commands used to build a shared archive])
_LT_TAGDECL([], [archive_expsym_cmds], [2])
_LT_TAGDECL([], [module_cmds], [2],
    [Commands used to build a loadable module if different from building
    a shared archive.])
_LT_TAGDECL([], [module_expsym_cmds], [2])
_LT_TAGDECL([], [with_gnu_ld], [1],
    [Whether we are building with GNU ld or not])
_LT_TAGDECL([], [allow_undefined_flag], [1],
    [Flag that allows shared libraries with undefined symbols to be built])
_LT_TAGDECL([], [no_undefined_flag], [1],
    [Flag that enforces no undefined symbols])
_LT_TAGDECL([], [hardcode_libdir_flag_spec], [1],
    [Flag to hardcode $libdir into a binary during linking.
    This must work even if $libdir does not exist])
_LT_TAGDECL([], [hardcode_libdir_separator], [1],
    [Whether we need a single "-rpath" flag with a separated argument])
_LT_TAGDECL([], [hardcode_direct], [0],
    [Set to "yes" if using DIR/libNAME${shared_ext} during linking hardcodes
    DIR into the resulting binary])
_LT_TAGDECL([], [hardcode_direct_absolute], [0],
    [Set to "yes" if using DIR/libNAME${shared_ext} during linking hardcodes
    DIR into the resulting binary and the resulting library dependency is
    "absolute", i.e impossible to change by setting ${shlibpath_var} if the
    library is relocated])
_LT_TAGDECL([], [hardcode_minus_L], [0],
    [Set to "yes" if using the -LDIR flag during linking hardcodes DIR
    into the resulting binary])
_LT_TAGDECL([], [hardcode_shlibpath_var], [0],
    [Set to "yes" if using SHLIBPATH_VAR=DIR during linking hardcodes DIR
    into the resulting binary])
_LT_TAGDECL([], [hardcode_automatic], [0],
    [Set to "yes" if building a shared library automatically hardcodes
    DIR into the library and all
    subsequent libraries and executables linked against it])
_LT_TAGDECL([], [inherit_rpath], [0],
    [Set to yes if linker adds runtime paths of dependent libraries
    to runtime path list])
_LT_TAGDECL([], [link_all_deplibs], [0],
    [Whether libtool must link a program against all its dependency libraries])
_LT_TAGDECL([], [always_export_symbols], [0],
    [Set to "yes" if exported symbols are required])
_LT_TAGDECL([], [export_symbols_cmds], [2],
    [The commands to list exported symbols])
_LT_TAGDECL([], [exclude_expsyms], [1],
    [Symbols that should not be listed in the preloaded symbols])
_LT_TAGDECL([], [include_expsyms], [1],
    [Symbols that must always be exported])
_LT_TAGDECL([], [prelink_cmds], [2],
    [Commands necessary for linking programs (against libraries) with templates])
_LT_TAGDECL([], [postlink_cmds], [2],
    [Commands necessary for finishing linking programs])
_LT_TAGDECL([], [file_list_spec], [1],
    [Specify filename containing input files])
dnl FIXME: Not yet implemented
dnl _LT_TAGDECL([], [thread_safe_flag_spec], [1],
dnl    [Compiler flag to generate thread safe objects])
])# _LT_LINKER_SHLIBS


# _LT_LANG_C_CONFIG([TAG])
# ------------------------
# Ensure that the configuration variables for a C compiler are suitably
# defined.  These variables are subsequently used by _LT_CONFIG to write
# the compiler configuration to `libtool'.
m4_defun([_LT_LANG_C_CONFIG],
[m4_require([_LT_DECL_EGREP])dnl
lt_save_CC="$CC"
AC_LANG_PUSH(C)

# Source file extension for C test sources.
ac_ext=c

# Object file extension for compiled C test sources.
objext=o
_LT_TAGVAR(objext, $1)=$objext

# Code to be used in simple compile tests
lt_simple_compile_test_code="int some_variable = 0;"

# Code to be used in simple link tests
lt_simple_link_test_code='int main(){return(0);}'

_LT_TAG_COMPILER
# Save the default compiler, since it gets overwritten when the other
# tags are being tested, and _LT_TAGVAR(compiler, []) is a NOP.
compiler_DEFAULT=$CC

# save warnings/boilerplate of simple test code
_LT_COMPILER_BOILERPLATE
_LT_LINKER_BOILERPLATE

## CAVEAT EMPTOR:
## There is no encapsulation within the following macros, do not change
## the running order or otherwise move them around unless you know exactly
## what you are doing...
if test -n "$compiler"; then
  _LT_COMPILER_NO_RTTI($1)
  _LT_COMPILER_PIC($1)
  _LT_COMPILER_C_O($1)
  _LT_COMPILER_FILE_LOCKS($1)
  _LT_LINKER_SHLIBS($1)
  _LT_SYS_DYNAMIC_LINKER($1)
  _LT_LINKER_HARDCODE_LIBPATH($1)
  LT_SYS_DLOPEN_SELF
  _LT_CMD_STRIPLIB

  # Report which library types will actually be built
  AC_MSG_CHECKING([if libtool supports shared libraries])
  AC_MSG_RESULT([$can_build_shared])

  AC_MSG_CHECKING([whether to build shared libraries])
  test "$can_build_shared" = "no" && enable_shared=no

  # On AIX, shared libraries and static libraries use the same namespace, and
  # are all built from PIC.
  case $host_os in
  aix3*)
    test "$enable_shared" = yes && enable_static=no
    if test -n "$RANLIB"; then
      archive_cmds="$archive_cmds~\$RANLIB \$lib"
      postinstall_cmds='$RANLIB $lib'
    fi
    ;;

  aix[[4-9]]*)
    if test "$host_cpu" != ia64 && test "$aix_use_runtimelinking" = no ; then
      test "$enable_shared" = yes && enable_static=no
    fi
    ;;
  esac
  AC_MSG_RESULT([$enable_shared])

  AC_MSG_CHECKING([whether to build static libraries])
  # Make sure either enable_shared or enable_static is yes.
  test "$enable_shared" = yes || enable_static=yes
  AC_MSG_RESULT([$enable_static])

  _LT_CONFIG($1)
fi
AC_LANG_POP
CC="$lt_save_CC"
])# _LT_LANG_C_CONFIG


# _LT_LANG_CXX_CONFIG([TAG])
# --------------------------
# Ensure that the configuration variables for a C++ compiler are suitably
# defined.  These variables are subsequently used by _LT_CONFIG to write
# the compiler configuration to `libtool'.
m4_defun([_LT_LANG_CXX_CONFIG],
[m4_require([_LT_FILEUTILS_DEFAULTS])dnl
m4_require([_LT_DECL_EGREP])dnl
m4_require([_LT_PATH_MANIFEST_TOOL])dnl
if test -n "$CXX" && ( test "X$CXX" != "Xno" &&
    ( (test "X$CXX" = "Xg++" && `g++ -v >/dev/null 2>&1` ) ||
    (test "X$CXX" != "Xg++"))) ; then
  AC_PROG_CXXCPP
else
  _lt_caught_CXX_error=yes
fi

AC_LANG_PUSH(C++)
_LT_TAGVAR(archive_cmds_need_lc, $1)=no
_LT_TAGVAR(allow_undefined_flag, $1)=
_LT_TAGVAR(always_export_symbols, $1)=no
_LT_TAGVAR(archive_expsym_cmds, $1)=
_LT_TAGVAR(compiler_needs_object, $1)=no
_LT_TAGVAR(export_dynamic_flag_spec, $1)=
_LT_TAGVAR(hardcode_direct, $1)=no
_LT_TAGVAR(hardcode_direct_absolute, $1)=no
_LT_TAGVAR(hardcode_libdir_flag_spec, $1)=
_LT_TAGVAR(hardcode_libdir_separator, $1)=
_LT_TAGVAR(hardcode_minus_L, $1)=no
_LT_TAGVAR(hardcode_shlibpath_var, $1)=unsupported
_LT_TAGVAR(hardcode_automatic, $1)=no
_LT_TAGVAR(inherit_rpath, $1)=no
_LT_TAGVAR(module_cmds, $1)=
_LT_TAGVAR(module_expsym_cmds, $1)=
_LT_TAGVAR(link_all_deplibs, $1)=unknown
_LT_TAGVAR(old_archive_cmds, $1)=$old_archive_cmds
_LT_TAGVAR(reload_flag, $1)=$reload_flag
_LT_TAGVAR(reload_cmds, $1)=$reload_cmds
_LT_TAGVAR(no_undefined_flag, $1)=
_LT_TAGVAR(whole_archive_flag_spec, $1)=
_LT_TAGVAR(enable_shared_with_static_runtimes, $1)=no

# Source file extension for C++ test sources.
ac_ext=cpp

# Object file extension for compiled C++ test sources.
objext=o
_LT_TAGVAR(objext, $1)=$objext

# No sense in running all these tests if we already determined that
# the CXX compiler isn't working.  Some variables (like enable_shared)
# are currently assumed to apply to all compilers on this platform,
# and will be corrupted by setting them based on a non-working compiler.
if test "$_lt_caught_CXX_error" != yes; then
  # Code to be used in simple compile tests
  lt_simple_compile_test_code="int some_variable = 0;"

  # Code to be used in simple link tests
  lt_simple_link_test_code='int main(int, char *[[]]) { return(0); }'

  # ltmain only uses $CC for tagged configurations so make sure $CC is set.
  _LT_TAG_COMPILER

  # save warnings/boilerplate of simple test code
  _LT_COMPILER_BOILERPLATE
  _LT_LINKER_BOILERPLATE

  # Allow CC to be a program name with arguments.
  lt_save_CC=$CC
  lt_save_CFLAGS=$CFLAGS
  lt_save_LD=$LD
  lt_save_GCC=$GCC
  GCC=$GXX
  lt_save_with_gnu_ld=$with_gnu_ld
  lt_save_path_LD=$lt_cv_path_LD
  if test -n "${lt_cv_prog_gnu_ldcxx+set}"; then
    lt_cv_prog_gnu_ld=$lt_cv_prog_gnu_ldcxx
  else
    $as_unset lt_cv_prog_gnu_ld
  fi
  if test -n "${lt_cv_path_LDCXX+set}"; then
    lt_cv_path_LD=$lt_cv_path_LDCXX
  else
    $as_unset lt_cv_path_LD
  fi
  test -z "${LDCXX+set}" || LD=$LDCXX
  CC=${CXX-"c++"}
  CFLAGS=$CXXFLAGS
  compiler=$CC
  _LT_TAGVAR(compiler, $1)=$CC
  _LT_CC_BASENAME([$compiler])

  if test -n "$compiler"; then
    # We don't want -fno-exception when compiling C++ code, so set the
    # no_builtin_flag separately
    if test "$GXX" = yes; then
      _LT_TAGVAR(lt_prog_compiler_no_builtin_flag, $1)=' -fno-builtin'
    else
      _LT_TAGVAR(lt_prog_compiler_no_builtin_flag, $1)=
    fi

    if test "$GXX" = yes; then
      # Set up default GNU C++ configuration

      LT_PATH_LD

      # Check if GNU C++ uses GNU ld as the underlying linker, since the
      # archiving commands below assume that GNU ld is being used.
      if test "$with_gnu_ld" = yes; then
        _LT_TAGVAR(archive_cmds, $1)='$CC $pic_flag -shared -nostdlib $predep_objects $libobjs $deplibs $postdep_objects $compiler_flags ${wl}-soname $wl$soname -o $lib'
        _LT_TAGVAR(archive_expsym_cmds, $1)='$CC $pic_flag -shared -nostdlib $predep_objects $libobjs $deplibs $postdep_objects $compiler_flags ${wl}-soname $wl$soname ${wl}-retain-symbols-file $wl$export_symbols -o $lib'

        _LT_TAGVAR(hardcode_libdir_flag_spec, $1)='${wl}-rpath ${wl}$libdir'
        _LT_TAGVAR(export_dynamic_flag_spec, $1)='${wl}--export-dynamic'

        # If archive_cmds runs LD, not CC, wlarc should be empty
        # XXX I think wlarc can be eliminated in ltcf-cxx, but I need to
        #     investigate it a little bit more. (MM)
        wlarc='${wl}'

        # ancient GNU ld didn't support --whole-archive et. al.
        if eval "`$CC -print-prog-name=ld` --help 2>&1" |
          $GREP 'no-whole-archive' > /dev/null; then
          _LT_TAGVAR(whole_archive_flag_spec, $1)="$wlarc"'--whole-archive$convenience '"$wlarc"'--no-whole-archive'
        else
          _LT_TAGVAR(whole_archive_flag_spec, $1)=
        fi
      else
        with_gnu_ld=no
        wlarc=

        # A generic and very simple default shared library creation
        # command for GNU C++ for the case where it uses the native
        # linker, instead of GNU ld.  If possible, this setting should
        # overridden to take advantage of the native linker features on
        # the platform it is being used on.
        _LT_TAGVAR(archive_cmds, $1)='$CC -shared -nostdlib $predep_objects $libobjs $deplibs $postdep_objects $compiler_flags -o $lib'
      fi

      # Commands to make compiler produce verbose output that lists
      # what "hidden" libraries, object files and flags are used when
      # linking a shared library.
      output_verbose_link_cmd='$CC -shared $CFLAGS -v conftest.$objext 2>&1 | $GREP -v "^Configured with:" | $GREP "\-L"'

    else
      GXX=no
      with_gnu_ld=no
      wlarc=
    fi

    # PORTME: fill in a description of your system's C++ link characteristics
    AC_MSG_CHECKING([whether the $compiler linker ($LD) supports shared libraries])
    _LT_TAGVAR(ld_shlibs, $1)=yes
    case $host_os in
      aix3*)
        # FIXME: insert proper C++ library support
        _LT_TAGVAR(ld_shlibs, $1)=no
        ;;
      aix[[4-9]]*)
        if test "$host_cpu" = ia64; then
          # On IA64, the linker does run time linking by default, so we don't
          # have to do anything special.
          aix_use_runtimelinking=no
          exp_sym_flag='-Bexport'
          no_entry_flag=""
        else
          aix_use_runtimelinking=no

          # Test if we are trying to use run time linking or normal
          # AIX style linking. If -brtl is somewhere in LDFLAGS, we
          # need to do runtime linking.
          case $host_os in
          aix4.[[23]]|aix4.[[23]].*|aix[[5-9]]*)
            for ld_flag in $LDFLAGS; do
              case $ld_flag in
              *-brtl*)
                aix_use_runtimelinking=yes
                break
                ;;
              esac
            done
            ;;
          esac

          exp_sym_flag='-bexport'
          no_entry_flag='-bnoentry'
        fi

        # When large executables or shared objects are built, AIX ld can
        # have problems creating the table of contents.  If linking a library
        # or program results in "error TOC overflow" add -mminimal-toc to
        # CXXFLAGS/CFLAGS for g++/gcc.  In the cases where that is not
        # enough to fix the problem, add -Wl,-bbigtoc to LDFLAGS.
        _LT_TAGVAR(archive_cmds, $1)=''
        _LT_TAGVAR(hardcode_direct, $1)=yes
        _LT_TAGVAR(hardcode_direct_absolute, $1)=yes
        _LT_TAGVAR(hardcode_libdir_separator, $1)=':'
        _LT_TAGVAR(link_all_deplibs, $1)=yes
        _LT_TAGVAR(file_list_spec, $1)='${wl}-f,'

        if test "$GXX" = yes; then
          case $host_os in
          aix4.[[012]]|aix4.[[012]].*)
            # We only want to do this on AIX 4.2 and lower, the check
            # below for broken collect2 doesn't work under 4.3+
            collect2name=`${CC} -print-prog-name=collect2`
            if test -f "$collect2name" &&
               strings "$collect2name" | $GREP resolve_lib_name >/dev/null
            then
              # We have reworked collect2
              :
            else
              # We have old collect2
              _LT_TAGVAR(hardcode_direct, $1)=unsupported
              # It fails to find uninstalled libraries when the uninstalled
              # path is not listed in the libpath.  Setting hardcode_minus_L
              # to unsupported forces relinking
              _LT_TAGVAR(hardcode_minus_L, $1)=yes
              _LT_TAGVAR(hardcode_libdir_flag_spec, $1)='-L$libdir'
              _LT_TAGVAR(hardcode_libdir_separator, $1)=
            fi
          esac
          shared_flag='-shared'
          if test "$aix_use_runtimelinking" = yes; then
            shared_flag="$shared_flag "'${wl}-G'
          fi
        else
          # not using gcc
          if test "$host_cpu" = ia64; then
            # VisualAge C++, Version 5.5 for AIX 5L for IA-64, Beta 3 Release
            # chokes on -Wl,-G. The following line is correct:
            shared_flag='-G'
          else
            if test "$aix_use_runtimelinking" = yes; then
              shared_flag='${wl}-G'
            else
              shared_flag='${wl}-bM:SRE'
            fi
          fi
        fi

        _LT_TAGVAR(export_dynamic_flag_spec, $1)='${wl}-bexpall'
        # It seems that -bexpall does not export symbols beginning with
        # underscore (_), so it is better to generate a list of symbols to
        # export.
        _LT_TAGVAR(always_export_symbols, $1)=yes
        if test "$aix_use_runtimelinking" = yes; then
          # Warning - without using the other runtime loading flags (-brtl),
          # -berok will link without error, but may produce a broken library.
          _LT_TAGVAR(allow_undefined_flag, $1)='-berok'
          # Determine the default libpath from the value encoded in an empty
          # executable.
          _LT_SYS_MODULE_PATH_AIX([$1])
          _LT_TAGVAR(hardcode_libdir_flag_spec, $1)='${wl}-blibpath:$libdir:'"$aix_libpath"

          _LT_TAGVAR(archive_expsym_cmds, $1)='$CC -o $output_objdir/$soname $libobjs $deplibs '"\${wl}$no_entry_flag"' $compiler_flags `if test "x${allow_undefined_flag}" != "x"; then func_echo_all "${wl}${allow_undefined_flag}"; else :; fi` '"\${wl}$exp_sym_flag:\$export_symbols $shared_flag"
        else
          if test "$host_cpu" = ia64; then
            _LT_TAGVAR(hardcode_libdir_flag_spec, $1)='${wl}-R $libdir:/usr/lib:/lib'
            _LT_TAGVAR(allow_undefined_flag, $1)="-z nodefs"
            _LT_TAGVAR(archive_expsym_cmds, $1)="\$CC $shared_flag"' -o $output_objdir/$soname $libobjs $deplibs '"\${wl}$no_entry_flag"' $compiler_flags ${wl}${allow_undefined_flag} '"\${wl}$exp_sym_flag:\$export_symbols"
          else
            # Determine the default libpath from the value encoded in an
            # empty executable.
            _LT_SYS_MODULE_PATH_AIX([$1])
            _LT_TAGVAR(hardcode_libdir_flag_spec, $1)='${wl}-blibpath:$libdir:'"$aix_libpath"
            # Warning - without using the other run time loading flags,
            # -berok will link without error, but may produce a broken library.
            _LT_TAGVAR(no_undefined_flag, $1)=' ${wl}-bernotok'
            _LT_TAGVAR(allow_undefined_flag, $1)=' ${wl}-berok'
            if test "$with_gnu_ld" = yes; then
              # We only use this code for GNU lds that support --whole-archive.
              _LT_TAGVAR(whole_archive_flag_spec, $1)='${wl}--whole-archive$convenience ${wl}--no-whole-archive'
            else
              # Exported symbols can be pulled into shared objects from archives
              _LT_TAGVAR(whole_archive_flag_spec, $1)='$convenience'
            fi
            _LT_TAGVAR(archive_cmds_need_lc, $1)=yes
            # This is similar to how AIX traditionally builds its shared
            # libraries.
_LT_TAGVAR(archive_expsym_cmds, $1)="\$CC $shared_flag"' -o $output_objdir/$soname $libobjs $deplibs ${wl}-bnoentry $compiler_flags ${wl}-bE:$export_symbols${allow_undefined_flag}~$AR $AR_FLAGS $output_objdir/$libname$release.a $output_objdir/$soname' fi fi ;; beos*) if $LD --help 2>&1 | $GREP ': supported targets:.* elf' > /dev/null; then _LT_TAGVAR(allow_undefined_flag, $1)=unsupported # Joseph Beckenbach says some releases of gcc # support --undefined. This deserves some investigation. FIXME _LT_TAGVAR(archive_cmds, $1)='$CC -nostart $libobjs $deplibs $compiler_flags ${wl}-soname $wl$soname -o $lib' else _LT_TAGVAR(ld_shlibs, $1)=no fi ;; chorus*) case $cc_basename in *) # FIXME: insert proper C++ library support _LT_TAGVAR(ld_shlibs, $1)=no ;; esac ;; cygwin* | mingw* | pw32* | cegcc*) case $GXX,$cc_basename in ,cl* | no,cl*) # Native MSVC # hardcode_libdir_flag_spec is actually meaningless, as there is # no search path for DLLs. _LT_TAGVAR(hardcode_libdir_flag_spec, $1)=' ' _LT_TAGVAR(allow_undefined_flag, $1)=unsupported _LT_TAGVAR(always_export_symbols, $1)=yes _LT_TAGVAR(file_list_spec, $1)='@' # Tell ltmain to make .lib files, not .a files. libext=lib # Tell ltmain to make .dll files, not .so files. shrext_cmds=".dll" # FIXME: Setting linknames here is a bad hack. _LT_TAGVAR(archive_cmds, $1)='$CC -o $output_objdir/$soname $libobjs $compiler_flags $deplibs -Wl,-dll~linknames=' _LT_TAGVAR(archive_expsym_cmds, $1)='if test "x`$SED 1q $export_symbols`" = xEXPORTS; then $SED -n -e 's/\\\\\\\(.*\\\\\\\)/-link\\\ -EXPORT:\\\\\\\1/' -e '1\\\!p' < $export_symbols > $output_objdir/$soname.exp; else $SED -e 's/\\\\\\\(.*\\\\\\\)/-link\\\ -EXPORT:\\\\\\\1/' < $export_symbols > $output_objdir/$soname.exp; fi~ $CC -o $tool_output_objdir$soname $libobjs $compiler_flags $deplibs "@$tool_output_objdir$soname.exp" -Wl,-DLL,-IMPLIB:"$tool_output_objdir$libname.dll.lib"~ linknames=' # The linker will not automatically build a static lib if we build a DLL. 
# _LT_TAGVAR(old_archive_from_new_cmds, $1)='true' _LT_TAGVAR(enable_shared_with_static_runtimes, $1)=yes # Don't use ranlib _LT_TAGVAR(old_postinstall_cmds, $1)='chmod 644 $oldlib' _LT_TAGVAR(postlink_cmds, $1)='lt_outputfile="@OUTPUT@"~ lt_tool_outputfile="@TOOL_OUTPUT@"~ case $lt_outputfile in *.exe|*.EXE) ;; *) lt_outputfile="$lt_outputfile.exe" lt_tool_outputfile="$lt_tool_outputfile.exe" ;; esac~ func_to_tool_file "$lt_outputfile"~ if test "$MANIFEST_TOOL" != ":" && test -f "$lt_outputfile.manifest"; then $MANIFEST_TOOL -manifest "$lt_tool_outputfile.manifest" -outputresource:"$lt_tool_outputfile" || exit 1; $RM "$lt_outputfile.manifest"; fi' ;; *) # g++ # _LT_TAGVAR(hardcode_libdir_flag_spec, $1) is actually meaningless, # as there is no search path for DLLs. _LT_TAGVAR(hardcode_libdir_flag_spec, $1)='-L$libdir' _LT_TAGVAR(export_dynamic_flag_spec, $1)='${wl}--export-all-symbols' _LT_TAGVAR(allow_undefined_flag, $1)=unsupported _LT_TAGVAR(always_export_symbols, $1)=no _LT_TAGVAR(enable_shared_with_static_runtimes, $1)=yes if $LD --help 2>&1 | $GREP 'auto-import' > /dev/null; then _LT_TAGVAR(archive_cmds, $1)='$CC -shared -nostdlib $predep_objects $libobjs $deplibs $postdep_objects $compiler_flags -o $output_objdir/$soname ${wl}--enable-auto-image-base -Xlinker --out-implib -Xlinker $lib' # If the export-symbols file already is a .def file (1st line # is EXPORTS), use it as is; otherwise, prepend... 
_LT_TAGVAR(archive_expsym_cmds, $1)='if test "x`$SED 1q $export_symbols`" = xEXPORTS; then cp $export_symbols $output_objdir/$soname.def; else echo EXPORTS > $output_objdir/$soname.def; cat $export_symbols >> $output_objdir/$soname.def; fi~ $CC -shared -nostdlib $output_objdir/$soname.def $predep_objects $libobjs $deplibs $postdep_objects $compiler_flags -o $output_objdir/$soname ${wl}--enable-auto-image-base -Xlinker --out-implib -Xlinker $lib' else _LT_TAGVAR(ld_shlibs, $1)=no fi ;; esac ;; darwin* | rhapsody*) _LT_DARWIN_LINKER_FEATURES($1) ;; dgux*) case $cc_basename in ec++*) # FIXME: insert proper C++ library support _LT_TAGVAR(ld_shlibs, $1)=no ;; ghcx*) # Green Hills C++ Compiler # FIXME: insert proper C++ library support _LT_TAGVAR(ld_shlibs, $1)=no ;; *) # FIXME: insert proper C++ library support _LT_TAGVAR(ld_shlibs, $1)=no ;; esac ;; freebsd2.*) # C++ shared libraries reported to be fairly broken before # switch to ELF _LT_TAGVAR(ld_shlibs, $1)=no ;; freebsd-elf*) _LT_TAGVAR(archive_cmds_need_lc, $1)=no ;; freebsd* | dragonfly*) # FreeBSD 3 and later use GNU C++ and GNU ld with standard ELF # conventions _LT_TAGVAR(ld_shlibs, $1)=yes ;; haiku*) _LT_TAGVAR(archive_cmds, $1)='$CC -shared $libobjs $deplibs $compiler_flags ${wl}-soname $wl$soname -o $lib' _LT_TAGVAR(link_all_deplibs, $1)=yes ;; hpux9*) _LT_TAGVAR(hardcode_libdir_flag_spec, $1)='${wl}+b ${wl}$libdir' _LT_TAGVAR(hardcode_libdir_separator, $1)=: _LT_TAGVAR(export_dynamic_flag_spec, $1)='${wl}-E' _LT_TAGVAR(hardcode_direct, $1)=yes _LT_TAGVAR(hardcode_minus_L, $1)=yes # Not in the search PATH, # but as the default # location of the library. 
case $cc_basename in CC*) # FIXME: insert proper C++ library support _LT_TAGVAR(ld_shlibs, $1)=no ;; aCC*) _LT_TAGVAR(archive_cmds, $1)='$RM $output_objdir/$soname~$CC -b ${wl}+b ${wl}$install_libdir -o $output_objdir/$soname $predep_objects $libobjs $deplibs $postdep_objects $compiler_flags~test $output_objdir/$soname = $lib || mv $output_objdir/$soname $lib' # Commands to make compiler produce verbose output that lists # what "hidden" libraries, object files and flags are used when # linking a shared library. # # There doesn't appear to be a way to prevent this compiler from # explicitly linking system object files so we need to strip them # from the output so that they don't get included in the library # dependencies. output_verbose_link_cmd='templist=`($CC -b $CFLAGS -v conftest.$objext 2>&1) | $EGREP "\-L"`; list=""; for z in $templist; do case $z in conftest.$objext) list="$list $z";; *.$objext);; *) list="$list $z";;esac; done; func_echo_all "$list"' ;; *) if test "$GXX" = yes; then _LT_TAGVAR(archive_cmds, $1)='$RM $output_objdir/$soname~$CC -shared -nostdlib $pic_flag ${wl}+b ${wl}$install_libdir -o $output_objdir/$soname $predep_objects $libobjs $deplibs $postdep_objects $compiler_flags~test $output_objdir/$soname = $lib || mv $output_objdir/$soname $lib' else # FIXME: insert proper C++ library support _LT_TAGVAR(ld_shlibs, $1)=no fi ;; esac ;; hpux10*|hpux11*) if test $with_gnu_ld = no; then _LT_TAGVAR(hardcode_libdir_flag_spec, $1)='${wl}+b ${wl}$libdir' _LT_TAGVAR(hardcode_libdir_separator, $1)=: case $host_cpu in hppa*64*|ia64*) ;; *) _LT_TAGVAR(export_dynamic_flag_spec, $1)='${wl}-E' ;; esac fi case $host_cpu in hppa*64*|ia64*) _LT_TAGVAR(hardcode_direct, $1)=no _LT_TAGVAR(hardcode_shlibpath_var, $1)=no ;; *) _LT_TAGVAR(hardcode_direct, $1)=yes _LT_TAGVAR(hardcode_direct_absolute, $1)=yes _LT_TAGVAR(hardcode_minus_L, $1)=yes # Not in the search PATH, # but as the default # location of the library. 
;; esac case $cc_basename in CC*) # FIXME: insert proper C++ library support _LT_TAGVAR(ld_shlibs, $1)=no ;; aCC*) case $host_cpu in hppa*64*) _LT_TAGVAR(archive_cmds, $1)='$CC -b ${wl}+h ${wl}$soname -o $lib $predep_objects $libobjs $deplibs $postdep_objects $compiler_flags' ;; ia64*) _LT_TAGVAR(archive_cmds, $1)='$CC -b ${wl}+h ${wl}$soname ${wl}+nodefaultrpath -o $lib $predep_objects $libobjs $deplibs $postdep_objects $compiler_flags' ;; *) _LT_TAGVAR(archive_cmds, $1)='$CC -b ${wl}+h ${wl}$soname ${wl}+b ${wl}$install_libdir -o $lib $predep_objects $libobjs $deplibs $postdep_objects $compiler_flags' ;; esac # Commands to make compiler produce verbose output that lists # what "hidden" libraries, object files and flags are used when # linking a shared library. # # There doesn't appear to be a way to prevent this compiler from # explicitly linking system object files so we need to strip them # from the output so that they don't get included in the library # dependencies. output_verbose_link_cmd='templist=`($CC -b $CFLAGS -v conftest.$objext 2>&1) | $GREP "\-L"`; list=""; for z in $templist; do case $z in conftest.$objext) list="$list $z";; *.$objext);; *) list="$list $z";;esac; done; func_echo_all "$list"' ;; *) if test "$GXX" = yes; then if test $with_gnu_ld = no; then case $host_cpu in hppa*64*) _LT_TAGVAR(archive_cmds, $1)='$CC -shared -nostdlib -fPIC ${wl}+h ${wl}$soname -o $lib $predep_objects $libobjs $deplibs $postdep_objects $compiler_flags' ;; ia64*) _LT_TAGVAR(archive_cmds, $1)='$CC -shared -nostdlib $pic_flag ${wl}+h ${wl}$soname ${wl}+nodefaultrpath -o $lib $predep_objects $libobjs $deplibs $postdep_objects $compiler_flags' ;; *) _LT_TAGVAR(archive_cmds, $1)='$CC -shared -nostdlib $pic_flag ${wl}+h ${wl}$soname ${wl}+b ${wl}$install_libdir -o $lib $predep_objects $libobjs $deplibs $postdep_objects $compiler_flags' ;; esac fi else # FIXME: insert proper C++ library support _LT_TAGVAR(ld_shlibs, $1)=no fi ;; esac ;; interix[[3-9]]*) 
_LT_TAGVAR(hardcode_direct, $1)=no _LT_TAGVAR(hardcode_shlibpath_var, $1)=no _LT_TAGVAR(hardcode_libdir_flag_spec, $1)='${wl}-rpath,$libdir' _LT_TAGVAR(export_dynamic_flag_spec, $1)='${wl}-E' # Hack: On Interix 3.x, we cannot compile PIC because of a broken gcc. # Instead, shared libraries are loaded at an image base (0x10000000 by # default) and relocated if they conflict, which is a slow very memory # consuming and fragmenting process. To avoid this, we pick a random, # 256 KiB-aligned image base between 0x50000000 and 0x6FFC0000 at link # time. Moving up from 0x10000000 also allows more sbrk(2) space. _LT_TAGVAR(archive_cmds, $1)='$CC -shared $pic_flag $libobjs $deplibs $compiler_flags ${wl}-h,$soname ${wl}--image-base,`expr ${RANDOM-$$} % 4096 / 2 \* 262144 + 1342177280` -o $lib' _LT_TAGVAR(archive_expsym_cmds, $1)='sed "s,^,_," $export_symbols >$output_objdir/$soname.expsym~$CC -shared $pic_flag $libobjs $deplibs $compiler_flags ${wl}-h,$soname ${wl}--retain-symbols-file,$output_objdir/$soname.expsym ${wl}--image-base,`expr ${RANDOM-$$} % 4096 / 2 \* 262144 + 1342177280` -o $lib' ;; irix5* | irix6*) case $cc_basename in CC*) # SGI C++ _LT_TAGVAR(archive_cmds, $1)='$CC -shared -all -multigot $predep_objects $libobjs $deplibs $postdep_objects $compiler_flags -soname $soname `test -n "$verstring" && func_echo_all "-set_version $verstring"` -update_registry ${output_objdir}/so_locations -o $lib' # Archives containing C++ object files must be created using # "CC -ar", where "CC" is the IRIX C++ compiler. This is # necessary to make sure instantiated templates are included # in the archive. 
_LT_TAGVAR(old_archive_cmds, $1)='$CC -ar -WR,-u -o $oldlib $oldobjs' ;; *) if test "$GXX" = yes; then if test "$with_gnu_ld" = no; then _LT_TAGVAR(archive_cmds, $1)='$CC -shared $pic_flag -nostdlib $predep_objects $libobjs $deplibs $postdep_objects $compiler_flags ${wl}-soname ${wl}$soname `test -n "$verstring" && func_echo_all "${wl}-set_version ${wl}$verstring"` ${wl}-update_registry ${wl}${output_objdir}/so_locations -o $lib' else _LT_TAGVAR(archive_cmds, $1)='$CC -shared $pic_flag -nostdlib $predep_objects $libobjs $deplibs $postdep_objects $compiler_flags ${wl}-soname ${wl}$soname `test -n "$verstring" && func_echo_all "${wl}-set_version ${wl}$verstring"` -o $lib' fi fi _LT_TAGVAR(link_all_deplibs, $1)=yes ;; esac _LT_TAGVAR(hardcode_libdir_flag_spec, $1)='${wl}-rpath ${wl}$libdir' _LT_TAGVAR(hardcode_libdir_separator, $1)=: _LT_TAGVAR(inherit_rpath, $1)=yes ;; linux* | k*bsd*-gnu | kopensolaris*-gnu | gnu*) case $cc_basename in KCC*) # Kuck and Associates, Inc. (KAI) C++ Compiler # KCC will only create a shared library if the output file # ends with ".so" (or ".sl" for HP-UX), so rename the library # to its proper name (with version) after linking. _LT_TAGVAR(archive_cmds, $1)='tempext=`echo $shared_ext | $SED -e '\''s/\([[^()0-9A-Za-z{}]]\)/\\\\\1/g'\''`; templib=`echo $lib | $SED -e "s/\${tempext}\..*/.so/"`; $CC $predep_objects $libobjs $deplibs $postdep_objects $compiler_flags --soname $soname -o \$templib; mv \$templib $lib' _LT_TAGVAR(archive_expsym_cmds, $1)='tempext=`echo $shared_ext | $SED -e '\''s/\([[^()0-9A-Za-z{}]]\)/\\\\\1/g'\''`; templib=`echo $lib | $SED -e "s/\${tempext}\..*/.so/"`; $CC $predep_objects $libobjs $deplibs $postdep_objects $compiler_flags --soname $soname -o \$templib ${wl}-retain-symbols-file,$export_symbols; mv \$templib $lib' # Commands to make compiler produce verbose output that lists # what "hidden" libraries, object files and flags are used when # linking a shared library. 
# # There doesn't appear to be a way to prevent this compiler from # explicitly linking system object files so we need to strip them # from the output so that they don't get included in the library # dependencies. output_verbose_link_cmd='templist=`$CC $CFLAGS -v conftest.$objext -o libconftest$shared_ext 2>&1 | $GREP "ld"`; rm -f libconftest$shared_ext; list=""; for z in $templist; do case $z in conftest.$objext) list="$list $z";; *.$objext);; *) list="$list $z";;esac; done; func_echo_all "$list"' _LT_TAGVAR(hardcode_libdir_flag_spec, $1)='${wl}-rpath,$libdir' _LT_TAGVAR(export_dynamic_flag_spec, $1)='${wl}--export-dynamic' # Archives containing C++ object files must be created using # "CC -Bstatic", where "CC" is the KAI C++ compiler. _LT_TAGVAR(old_archive_cmds, $1)='$CC -Bstatic -o $oldlib $oldobjs' ;; icpc* | ecpc* ) # Intel C++ with_gnu_ld=yes # version 8.0 and above of icpc choke on multiply defined symbols # if we add $predep_objects and $postdep_objects, however 7.1 and # earlier do not add the objects themselves. 
case `$CC -V 2>&1` in *"Version 7."*) _LT_TAGVAR(archive_cmds, $1)='$CC -shared $predep_objects $libobjs $deplibs $postdep_objects $compiler_flags ${wl}-soname $wl$soname -o $lib' _LT_TAGVAR(archive_expsym_cmds, $1)='$CC -shared $predep_objects $libobjs $deplibs $postdep_objects $compiler_flags ${wl}-soname $wl$soname ${wl}-retain-symbols-file $wl$export_symbols -o $lib' ;; *) # Version 8.0 or newer tmp_idyn= case $host_cpu in ia64*) tmp_idyn=' -i_dynamic';; esac _LT_TAGVAR(archive_cmds, $1)='$CC -shared'"$tmp_idyn"' $libobjs $deplibs $compiler_flags ${wl}-soname $wl$soname -o $lib' _LT_TAGVAR(archive_expsym_cmds, $1)='$CC -shared'"$tmp_idyn"' $libobjs $deplibs $compiler_flags ${wl}-soname $wl$soname ${wl}-retain-symbols-file $wl$export_symbols -o $lib' ;; esac _LT_TAGVAR(archive_cmds_need_lc, $1)=no _LT_TAGVAR(hardcode_libdir_flag_spec, $1)='${wl}-rpath,$libdir' _LT_TAGVAR(export_dynamic_flag_spec, $1)='${wl}--export-dynamic' _LT_TAGVAR(whole_archive_flag_spec, $1)='${wl}--whole-archive$convenience ${wl}--no-whole-archive' ;; pgCC* | pgcpp*) # Portland Group C++ compiler case `$CC -V` in *pgCC\ [[1-5]].* | *pgcpp\ [[1-5]].*) _LT_TAGVAR(prelink_cmds, $1)='tpldir=Template.dir~ rm -rf $tpldir~ $CC --prelink_objects --instantiation_dir $tpldir $objs $libobjs $compile_deplibs~ compile_command="$compile_command `find $tpldir -name \*.o | sort | $NL2SP`"' _LT_TAGVAR(old_archive_cmds, $1)='tpldir=Template.dir~ rm -rf $tpldir~ $CC --prelink_objects --instantiation_dir $tpldir $oldobjs$old_deplibs~ $AR $AR_FLAGS $oldlib$oldobjs$old_deplibs `find $tpldir -name \*.o | sort | $NL2SP`~ $RANLIB $oldlib' _LT_TAGVAR(archive_cmds, $1)='tpldir=Template.dir~ rm -rf $tpldir~ $CC --prelink_objects --instantiation_dir $tpldir $predep_objects $libobjs $deplibs $convenience $postdep_objects~ $CC -shared $pic_flag $predep_objects $libobjs $deplibs `find $tpldir -name \*.o | sort | $NL2SP` $postdep_objects $compiler_flags ${wl}-soname ${wl}$soname -o $lib' _LT_TAGVAR(archive_expsym_cmds, 
$1)='tpldir=Template.dir~ rm -rf $tpldir~ $CC --prelink_objects --instantiation_dir $tpldir $predep_objects $libobjs $deplibs $convenience $postdep_objects~ $CC -shared $pic_flag $predep_objects $libobjs $deplibs `find $tpldir -name \*.o | sort | $NL2SP` $postdep_objects $compiler_flags ${wl}-soname ${wl}$soname ${wl}-retain-symbols-file ${wl}$export_symbols -o $lib' ;; *) # Version 6 and above use weak symbols _LT_TAGVAR(archive_cmds, $1)='$CC -shared $pic_flag $predep_objects $libobjs $deplibs $postdep_objects $compiler_flags ${wl}-soname ${wl}$soname -o $lib' _LT_TAGVAR(archive_expsym_cmds, $1)='$CC -shared $pic_flag $predep_objects $libobjs $deplibs $postdep_objects $compiler_flags ${wl}-soname ${wl}$soname ${wl}-retain-symbols-file ${wl}$export_symbols -o $lib' ;; esac _LT_TAGVAR(hardcode_libdir_flag_spec, $1)='${wl}--rpath ${wl}$libdir' _LT_TAGVAR(export_dynamic_flag_spec, $1)='${wl}--export-dynamic' _LT_TAGVAR(whole_archive_flag_spec, $1)='${wl}--whole-archive`for conv in $convenience\"\"; do test -n \"$conv\" && new_convenience=\"$new_convenience,$conv\"; done; func_echo_all \"$new_convenience\"` ${wl}--no-whole-archive' ;; cxx*) # Compaq C++ _LT_TAGVAR(archive_cmds, $1)='$CC -shared $predep_objects $libobjs $deplibs $postdep_objects $compiler_flags ${wl}-soname $wl$soname -o $lib' _LT_TAGVAR(archive_expsym_cmds, $1)='$CC -shared $predep_objects $libobjs $deplibs $postdep_objects $compiler_flags ${wl}-soname $wl$soname -o $lib ${wl}-retain-symbols-file $wl$export_symbols' runpath_var=LD_RUN_PATH _LT_TAGVAR(hardcode_libdir_flag_spec, $1)='-rpath $libdir' _LT_TAGVAR(hardcode_libdir_separator, $1)=: # Commands to make compiler produce verbose output that lists # what "hidden" libraries, object files and flags are used when # linking a shared library. # # There doesn't appear to be a way to prevent this compiler from # explicitly linking system object files so we need to strip them # from the output so that they don't get included in the library # dependencies. 
output_verbose_link_cmd='templist=`$CC -shared $CFLAGS -v conftest.$objext 2>&1 | $GREP "ld"`; templist=`func_echo_all "$templist" | $SED "s/\(^.*ld.*\)\( .*ld .*$\)/\1/"`; list=""; for z in $templist; do case $z in conftest.$objext) list="$list $z";; *.$objext);; *) list="$list $z";;esac; done; func_echo_all "X$list" | $Xsed' ;; xl* | mpixl* | bgxl*) # IBM XL 8.0 on PPC, with GNU ld _LT_TAGVAR(hardcode_libdir_flag_spec, $1)='${wl}-rpath ${wl}$libdir' _LT_TAGVAR(export_dynamic_flag_spec, $1)='${wl}--export-dynamic' _LT_TAGVAR(archive_cmds, $1)='$CC -qmkshrobj $libobjs $deplibs $compiler_flags ${wl}-soname $wl$soname -o $lib' if test "x$supports_anon_versioning" = xyes; then _LT_TAGVAR(archive_expsym_cmds, $1)='echo "{ global:" > $output_objdir/$libname.ver~ cat $export_symbols | sed -e "s/\(.*\)/\1;/" >> $output_objdir/$libname.ver~ echo "local: *; };" >> $output_objdir/$libname.ver~ $CC -qmkshrobj $libobjs $deplibs $compiler_flags ${wl}-soname $wl$soname ${wl}-version-script ${wl}$output_objdir/$libname.ver -o $lib' fi ;; *) case `$CC -V 2>&1 | sed 5q` in *Sun\ C*) # Sun C++ 5.9 _LT_TAGVAR(no_undefined_flag, $1)=' -zdefs' _LT_TAGVAR(archive_cmds, $1)='$CC -G${allow_undefined_flag} -h$soname -o $lib $predep_objects $libobjs $deplibs $postdep_objects $compiler_flags' _LT_TAGVAR(archive_expsym_cmds, $1)='$CC -G${allow_undefined_flag} -h$soname -o $lib $predep_objects $libobjs $deplibs $postdep_objects $compiler_flags ${wl}-retain-symbols-file ${wl}$export_symbols' _LT_TAGVAR(hardcode_libdir_flag_spec, $1)='-R$libdir' _LT_TAGVAR(whole_archive_flag_spec, $1)='${wl}--whole-archive`new_convenience=; for conv in $convenience\"\"; do test -z \"$conv\" || new_convenience=\"$new_convenience,$conv\"; done; func_echo_all \"$new_convenience\"` ${wl}--no-whole-archive' _LT_TAGVAR(compiler_needs_object, $1)=yes # Not sure whether something based on # $CC $CFLAGS -v conftest.$objext -o libconftest$shared_ext 2>&1 # would be better. 
output_verbose_link_cmd='func_echo_all' # Archives containing C++ object files must be created using # "CC -xar", where "CC" is the Sun C++ compiler. This is # necessary to make sure instantiated templates are included # in the archive. _LT_TAGVAR(old_archive_cmds, $1)='$CC -xar -o $oldlib $oldobjs' ;; esac ;; esac ;; lynxos*) # FIXME: insert proper C++ library support _LT_TAGVAR(ld_shlibs, $1)=no ;; m88k*) # FIXME: insert proper C++ library support _LT_TAGVAR(ld_shlibs, $1)=no ;; mvs*) case $cc_basename in cxx*) # FIXME: insert proper C++ library support _LT_TAGVAR(ld_shlibs, $1)=no ;; *) # FIXME: insert proper C++ library support _LT_TAGVAR(ld_shlibs, $1)=no ;; esac ;; netbsd*) if echo __ELF__ | $CC -E - | $GREP __ELF__ >/dev/null; then _LT_TAGVAR(archive_cmds, $1)='$LD -Bshareable -o $lib $predep_objects $libobjs $deplibs $postdep_objects $linker_flags' wlarc= _LT_TAGVAR(hardcode_libdir_flag_spec, $1)='-R$libdir' _LT_TAGVAR(hardcode_direct, $1)=yes _LT_TAGVAR(hardcode_shlibpath_var, $1)=no fi # Workaround some broken pre-1.5 toolchains output_verbose_link_cmd='$CC -shared $CFLAGS -v conftest.$objext 2>&1 | $GREP conftest.$objext | $SED -e "s:-lgcc -lc -lgcc::"' ;; *nto* | *qnx*) _LT_TAGVAR(ld_shlibs, $1)=yes ;; openbsd2*) # C++ shared libraries are fairly broken _LT_TAGVAR(ld_shlibs, $1)=no ;; openbsd*) if test -f /usr/libexec/ld.so; then _LT_TAGVAR(hardcode_direct, $1)=yes _LT_TAGVAR(hardcode_shlibpath_var, $1)=no _LT_TAGVAR(hardcode_direct_absolute, $1)=yes _LT_TAGVAR(archive_cmds, $1)='$CC -shared $pic_flag $predep_objects $libobjs $deplibs $postdep_objects $compiler_flags -o $lib' _LT_TAGVAR(hardcode_libdir_flag_spec, $1)='${wl}-rpath,$libdir' if test -z "`echo __ELF__ | $CC -E - | grep __ELF__`" || test "$host_os-$host_cpu" = "openbsd2.8-powerpc"; then _LT_TAGVAR(archive_expsym_cmds, $1)='$CC -shared $pic_flag $predep_objects $libobjs $deplibs $postdep_objects $compiler_flags ${wl}-retain-symbols-file,$export_symbols -o $lib' 
_LT_TAGVAR(export_dynamic_flag_spec, $1)='${wl}-E' _LT_TAGVAR(whole_archive_flag_spec, $1)="$wlarc"'--whole-archive$convenience '"$wlarc"'--no-whole-archive' fi output_verbose_link_cmd=func_echo_all else _LT_TAGVAR(ld_shlibs, $1)=no fi ;; osf3* | osf4* | osf5*) case $cc_basename in KCC*) # Kuck and Associates, Inc. (KAI) C++ Compiler # KCC will only create a shared library if the output file # ends with ".so" (or ".sl" for HP-UX), so rename the library # to its proper name (with version) after linking. _LT_TAGVAR(archive_cmds, $1)='tempext=`echo $shared_ext | $SED -e '\''s/\([[^()0-9A-Za-z{}]]\)/\\\\\1/g'\''`; templib=`echo "$lib" | $SED -e "s/\${tempext}\..*/.so/"`; $CC $predep_objects $libobjs $deplibs $postdep_objects $compiler_flags --soname $soname -o \$templib; mv \$templib $lib' _LT_TAGVAR(hardcode_libdir_flag_spec, $1)='${wl}-rpath,$libdir' _LT_TAGVAR(hardcode_libdir_separator, $1)=: # Archives containing C++ object files must be created using # the KAI C++ compiler. case $host in osf3*) _LT_TAGVAR(old_archive_cmds, $1)='$CC -Bstatic -o $oldlib $oldobjs' ;; *) _LT_TAGVAR(old_archive_cmds, $1)='$CC -o $oldlib $oldobjs' ;; esac ;; RCC*) # Rational C++ 2.4.1 # FIXME: insert proper C++ library support _LT_TAGVAR(ld_shlibs, $1)=no ;; cxx*) case $host in osf3*) _LT_TAGVAR(allow_undefined_flag, $1)=' ${wl}-expect_unresolved ${wl}\*' _LT_TAGVAR(archive_cmds, $1)='$CC -shared${allow_undefined_flag} $predep_objects $libobjs $deplibs $postdep_objects $compiler_flags ${wl}-soname $soname `test -n "$verstring" && func_echo_all "${wl}-set_version $verstring"` -update_registry ${output_objdir}/so_locations -o $lib' _LT_TAGVAR(hardcode_libdir_flag_spec, $1)='${wl}-rpath ${wl}$libdir' ;; *) _LT_TAGVAR(allow_undefined_flag, $1)=' -expect_unresolved \*' _LT_TAGVAR(archive_cmds, $1)='$CC -shared${allow_undefined_flag} $predep_objects $libobjs $deplibs $postdep_objects $compiler_flags -msym -soname $soname `test -n "$verstring" && func_echo_all "-set_version $verstring"` 
-update_registry ${output_objdir}/so_locations -o $lib' _LT_TAGVAR(archive_expsym_cmds, $1)='for i in `cat $export_symbols`; do printf "%s %s\\n" -exported_symbol "\$i" >> $lib.exp; done~ echo "-hidden">> $lib.exp~ $CC -shared$allow_undefined_flag $predep_objects $libobjs $deplibs $postdep_objects $compiler_flags -msym -soname $soname ${wl}-input ${wl}$lib.exp `test -n "$verstring" && $ECHO "-set_version $verstring"` -update_registry ${output_objdir}/so_locations -o $lib~ $RM $lib.exp' _LT_TAGVAR(hardcode_libdir_flag_spec, $1)='-rpath $libdir' ;; esac _LT_TAGVAR(hardcode_libdir_separator, $1)=: # Commands to make compiler produce verbose output that lists # what "hidden" libraries, object files and flags are used when # linking a shared library. # # There doesn't appear to be a way to prevent this compiler from # explicitly linking system object files so we need to strip them # from the output so that they don't get included in the library # dependencies. output_verbose_link_cmd='templist=`$CC -shared $CFLAGS -v conftest.$objext 2>&1 | $GREP "ld" | $GREP -v "ld:"`; templist=`func_echo_all "$templist" | $SED "s/\(^.*ld.*\)\( .*ld.*$\)/\1/"`; list=""; for z in $templist; do case $z in conftest.$objext) list="$list $z";; *.$objext);; *) list="$list $z";;esac; done; func_echo_all "$list"' ;; *) if test "$GXX" = yes && test "$with_gnu_ld" = no; then _LT_TAGVAR(allow_undefined_flag, $1)=' ${wl}-expect_unresolved ${wl}\*' case $host in osf3*) _LT_TAGVAR(archive_cmds, $1)='$CC -shared -nostdlib ${allow_undefined_flag} $predep_objects $libobjs $deplibs $postdep_objects $compiler_flags ${wl}-soname ${wl}$soname `test -n "$verstring" && func_echo_all "${wl}-set_version ${wl}$verstring"` ${wl}-update_registry ${wl}${output_objdir}/so_locations -o $lib' ;; *) _LT_TAGVAR(archive_cmds, $1)='$CC -shared $pic_flag -nostdlib ${allow_undefined_flag} $predep_objects $libobjs $deplibs $postdep_objects $compiler_flags ${wl}-msym ${wl}-soname ${wl}$soname `test -n "$verstring" && 
func_echo_all "${wl}-set_version ${wl}$verstring"` ${wl}-update_registry ${wl}${output_objdir}/so_locations -o $lib' ;; esac _LT_TAGVAR(hardcode_libdir_flag_spec, $1)='${wl}-rpath ${wl}$libdir' _LT_TAGVAR(hardcode_libdir_separator, $1)=: # Commands to make compiler produce verbose output that lists # what "hidden" libraries, object files and flags are used when # linking a shared library. output_verbose_link_cmd='$CC -shared $CFLAGS -v conftest.$objext 2>&1 | $GREP -v "^Configured with:" | $GREP "\-L"' else # FIXME: insert proper C++ library support _LT_TAGVAR(ld_shlibs, $1)=no fi ;; esac ;; psos*) # FIXME: insert proper C++ library support _LT_TAGVAR(ld_shlibs, $1)=no ;; sunos4*) case $cc_basename in CC*) # Sun C++ 4.x # FIXME: insert proper C++ library support _LT_TAGVAR(ld_shlibs, $1)=no ;; lcc*) # Lucid # FIXME: insert proper C++ library support _LT_TAGVAR(ld_shlibs, $1)=no ;; *) # FIXME: insert proper C++ library support _LT_TAGVAR(ld_shlibs, $1)=no ;; esac ;; solaris*) case $cc_basename in CC* | sunCC*) # Sun C++ 4.2, 5.x and Centerline C++ _LT_TAGVAR(archive_cmds_need_lc,$1)=yes _LT_TAGVAR(no_undefined_flag, $1)=' -zdefs' _LT_TAGVAR(archive_cmds, $1)='$CC -G${allow_undefined_flag} -h$soname -o $lib $predep_objects $libobjs $deplibs $postdep_objects $compiler_flags' _LT_TAGVAR(archive_expsym_cmds, $1)='echo "{ global:" > $lib.exp~cat $export_symbols | $SED -e "s/\(.*\)/\1;/" >> $lib.exp~echo "local: *; };" >> $lib.exp~ $CC -G${allow_undefined_flag} ${wl}-M ${wl}$lib.exp -h$soname -o $lib $predep_objects $libobjs $deplibs $postdep_objects $compiler_flags~$RM $lib.exp' _LT_TAGVAR(hardcode_libdir_flag_spec, $1)='-R$libdir' _LT_TAGVAR(hardcode_shlibpath_var, $1)=no case $host_os in solaris2.[[0-5]] | solaris2.[[0-5]].*) ;; *) # The compiler driver will combine and reorder linker options, # but understands `-z linker_flag'. # Supported since Solaris 2.6 (maybe 2.5.1?) 
_LT_TAGVAR(whole_archive_flag_spec, $1)='-z allextract$convenience -z defaultextract' ;; esac _LT_TAGVAR(link_all_deplibs, $1)=yes output_verbose_link_cmd='func_echo_all' # Archives containing C++ object files must be created using # "CC -xar", where "CC" is the Sun C++ compiler. This is # necessary to make sure instantiated templates are included # in the archive. _LT_TAGVAR(old_archive_cmds, $1)='$CC -xar -o $oldlib $oldobjs' ;; gcx*) # Green Hills C++ Compiler _LT_TAGVAR(archive_cmds, $1)='$CC -shared $predep_objects $libobjs $deplibs $postdep_objects $compiler_flags ${wl}-h $wl$soname -o $lib' # The C++ compiler must be used to create the archive. _LT_TAGVAR(old_archive_cmds, $1)='$CC $LDFLAGS -archive -o $oldlib $oldobjs' ;; *) # GNU C++ compiler with Solaris linker if test "$GXX" = yes && test "$with_gnu_ld" = no; then _LT_TAGVAR(no_undefined_flag, $1)=' ${wl}-z ${wl}defs' if $CC --version | $GREP -v '^2\.7' > /dev/null; then _LT_TAGVAR(archive_cmds, $1)='$CC -shared $pic_flag -nostdlib $LDFLAGS $predep_objects $libobjs $deplibs $postdep_objects $compiler_flags ${wl}-h $wl$soname -o $lib' _LT_TAGVAR(archive_expsym_cmds, $1)='echo "{ global:" > $lib.exp~cat $export_symbols | $SED -e "s/\(.*\)/\1;/" >> $lib.exp~echo "local: *; };" >> $lib.exp~ $CC -shared $pic_flag -nostdlib ${wl}-M $wl$lib.exp -o $lib $predep_objects $libobjs $deplibs $postdep_objects $compiler_flags~$RM $lib.exp' # Commands to make compiler produce verbose output that lists # what "hidden" libraries, object files and flags are used when # linking a shared library. output_verbose_link_cmd='$CC -shared $CFLAGS -v conftest.$objext 2>&1 | $GREP -v "^Configured with:" | $GREP "\-L"' else # g++ 2.7 appears to require `-G' NOT `-shared' on this # platform. 
_LT_TAGVAR(archive_cmds, $1)='$CC -G -nostdlib $LDFLAGS $predep_objects $libobjs $deplibs $postdep_objects $compiler_flags ${wl}-h $wl$soname -o $lib' _LT_TAGVAR(archive_expsym_cmds, $1)='echo "{ global:" > $lib.exp~cat $export_symbols | $SED -e "s/\(.*\)/\1;/" >> $lib.exp~echo "local: *; };" >> $lib.exp~ $CC -G -nostdlib ${wl}-M $wl$lib.exp -o $lib $predep_objects $libobjs $deplibs $postdep_objects $compiler_flags~$RM $lib.exp' # Commands to make compiler produce verbose output that lists # what "hidden" libraries, object files and flags are used when # linking a shared library. output_verbose_link_cmd='$CC -G $CFLAGS -v conftest.$objext 2>&1 | $GREP -v "^Configured with:" | $GREP "\-L"' fi _LT_TAGVAR(hardcode_libdir_flag_spec, $1)='${wl}-R $wl$libdir' case $host_os in solaris2.[[0-5]] | solaris2.[[0-5]].*) ;; *) _LT_TAGVAR(whole_archive_flag_spec, $1)='${wl}-z ${wl}allextract$convenience ${wl}-z ${wl}defaultextract' ;; esac fi ;; esac ;; sysv4*uw2* | sysv5OpenUNIX* | sysv5UnixWare7.[[01]].[[10]]* | unixware7* | sco3.2v5.0.[[024]]*) _LT_TAGVAR(no_undefined_flag, $1)='${wl}-z,text' _LT_TAGVAR(archive_cmds_need_lc, $1)=no _LT_TAGVAR(hardcode_shlibpath_var, $1)=no runpath_var='LD_RUN_PATH' case $cc_basename in CC*) _LT_TAGVAR(archive_cmds, $1)='$CC -G ${wl}-h,$soname -o $lib $libobjs $deplibs $compiler_flags' _LT_TAGVAR(archive_expsym_cmds, $1)='$CC -G ${wl}-Bexport:$export_symbols ${wl}-h,$soname -o $lib $libobjs $deplibs $compiler_flags' ;; *) _LT_TAGVAR(archive_cmds, $1)='$CC -shared ${wl}-h,$soname -o $lib $libobjs $deplibs $compiler_flags' _LT_TAGVAR(archive_expsym_cmds, $1)='$CC -shared ${wl}-Bexport:$export_symbols ${wl}-h,$soname -o $lib $libobjs $deplibs $compiler_flags' ;; esac ;; sysv5* | sco3.2v5* | sco5v6*) # Note: We can NOT use -z defs as we might desire, because we do not # link with -lc, and that would cause any symbols used from libc to # always be unresolved, which means just about no library would # ever link correctly. 
# If we're not using GNU ld we use -z text though, which does catch some
# bad symbols but isn't as heavy-handed as -z defs.
      _LT_TAGVAR(no_undefined_flag, $1)='${wl}-z,text'
      _LT_TAGVAR(allow_undefined_flag, $1)='${wl}-z,nodefs'
      _LT_TAGVAR(archive_cmds_need_lc, $1)=no
      _LT_TAGVAR(hardcode_shlibpath_var, $1)=no
      _LT_TAGVAR(hardcode_libdir_flag_spec, $1)='${wl}-R,$libdir'
      _LT_TAGVAR(hardcode_libdir_separator, $1)=':'
      _LT_TAGVAR(link_all_deplibs, $1)=yes
      _LT_TAGVAR(export_dynamic_flag_spec, $1)='${wl}-Bexport'
      runpath_var='LD_RUN_PATH'

      case $cc_basename in
        CC*)
	  _LT_TAGVAR(archive_cmds, $1)='$CC -G ${wl}-h,$soname -o $lib $libobjs $deplibs $compiler_flags'
	  _LT_TAGVAR(archive_expsym_cmds, $1)='$CC -G ${wl}-Bexport:$export_symbols ${wl}-h,$soname -o $lib $libobjs $deplibs $compiler_flags'
	  _LT_TAGVAR(old_archive_cmds, $1)='$CC -Tprelink_objects $oldobjs~
	    '"$_LT_TAGVAR(old_archive_cmds, $1)"
	  _LT_TAGVAR(reload_cmds, $1)='$CC -Tprelink_objects $reload_objs~
	    '"$_LT_TAGVAR(reload_cmds, $1)"
	  ;;
	*)
	  _LT_TAGVAR(archive_cmds, $1)='$CC -shared ${wl}-h,$soname -o $lib $libobjs $deplibs $compiler_flags'
	  _LT_TAGVAR(archive_expsym_cmds, $1)='$CC -shared ${wl}-Bexport:$export_symbols ${wl}-h,$soname -o $lib $libobjs $deplibs $compiler_flags'
	  ;;
      esac
      ;;

    tandem*)
      case $cc_basename in
        NCC*)
	  # NonStop-UX NCC 3.20
	  # FIXME: insert proper C++ library support
	  _LT_TAGVAR(ld_shlibs, $1)=no
	  ;;
        *)
	  # FIXME: insert proper C++ library support
	  _LT_TAGVAR(ld_shlibs, $1)=no
	  ;;
      esac
      ;;

    vxworks*)
      # FIXME: insert proper C++ library support
      _LT_TAGVAR(ld_shlibs, $1)=no
      ;;

    *)
      # FIXME: insert proper C++ library support
      _LT_TAGVAR(ld_shlibs, $1)=no
      ;;
  esac

  AC_MSG_RESULT([$_LT_TAGVAR(ld_shlibs, $1)])
  test "$_LT_TAGVAR(ld_shlibs, $1)" = no && can_build_shared=no

  _LT_TAGVAR(GCC, $1)="$GXX"
  _LT_TAGVAR(LD, $1)="$LD"

  ## CAVEAT EMPTOR:
  ## There is no encapsulation within the following macros, do not change
  ## the running order or otherwise move them around unless you know exactly
  ## what you are doing...
_LT_SYS_HIDDEN_LIBDEPS($1) _LT_COMPILER_PIC($1) _LT_COMPILER_C_O($1) _LT_COMPILER_FILE_LOCKS($1) _LT_LINKER_SHLIBS($1) _LT_SYS_DYNAMIC_LINKER($1) _LT_LINKER_HARDCODE_LIBPATH($1) _LT_CONFIG($1) fi # test -n "$compiler" CC=$lt_save_CC CFLAGS=$lt_save_CFLAGS LDCXX=$LD LD=$lt_save_LD GCC=$lt_save_GCC with_gnu_ld=$lt_save_with_gnu_ld lt_cv_path_LDCXX=$lt_cv_path_LD lt_cv_path_LD=$lt_save_path_LD lt_cv_prog_gnu_ldcxx=$lt_cv_prog_gnu_ld lt_cv_prog_gnu_ld=$lt_save_with_gnu_ld fi # test "$_lt_caught_CXX_error" != yes AC_LANG_POP ])# _LT_LANG_CXX_CONFIG # _LT_FUNC_STRIPNAME_CNF # ---------------------- # func_stripname_cnf prefix suffix name # strip PREFIX and SUFFIX off of NAME. # PREFIX and SUFFIX must not contain globbing or regex special # characters, hashes, percent signs, but SUFFIX may contain a leading # dot (in which case that matches only a dot). # # This function is identical to the (non-XSI) version of func_stripname, # except this one can be used by m4 code that may be executed by configure, # rather than the libtool script. m4_defun([_LT_FUNC_STRIPNAME_CNF],[dnl AC_REQUIRE([_LT_DECL_SED]) AC_REQUIRE([_LT_PROG_ECHO_BACKSLASH]) func_stripname_cnf () { case ${2} in .*) func_stripname_result=`$ECHO "${3}" | $SED "s%^${1}%%; s%\\\\${2}\$%%"`;; *) func_stripname_result=`$ECHO "${3}" | $SED "s%^${1}%%; s%${2}\$%%"`;; esac } # func_stripname_cnf ])# _LT_FUNC_STRIPNAME_CNF # _LT_SYS_HIDDEN_LIBDEPS([TAGNAME]) # --------------------------------- # Figure out "hidden" library dependencies from verbose # compiler output when linking a shared library. # Parse the compiler output and extract the necessary # objects, libraries and library flags. 
m4_defun([_LT_SYS_HIDDEN_LIBDEPS], [m4_require([_LT_FILEUTILS_DEFAULTS])dnl AC_REQUIRE([_LT_FUNC_STRIPNAME_CNF])dnl # Dependencies to place before and after the object being linked: _LT_TAGVAR(predep_objects, $1)= _LT_TAGVAR(postdep_objects, $1)= _LT_TAGVAR(predeps, $1)= _LT_TAGVAR(postdeps, $1)= _LT_TAGVAR(compiler_lib_search_path, $1)= dnl we can't use the lt_simple_compile_test_code here, dnl because it contains code intended for an executable, dnl not a library. It's possible we should let each dnl tag define a new lt_????_link_test_code variable, dnl but it's only used here... m4_if([$1], [], [cat > conftest.$ac_ext <<_LT_EOF int a; void foo (void) { a = 0; } _LT_EOF ], [$1], [CXX], [cat > conftest.$ac_ext <<_LT_EOF class Foo { public: Foo (void) { a = 0; } private: int a; }; _LT_EOF ], [$1], [F77], [cat > conftest.$ac_ext <<_LT_EOF subroutine foo implicit none integer*4 a a=0 return end _LT_EOF ], [$1], [FC], [cat > conftest.$ac_ext <<_LT_EOF subroutine foo implicit none integer a a=0 return end _LT_EOF ], [$1], [GCJ], [cat > conftest.$ac_ext <<_LT_EOF public class foo { private int a; public void bar (void) { a = 0; } }; _LT_EOF ], [$1], [GO], [cat > conftest.$ac_ext <<_LT_EOF package foo func foo() { } _LT_EOF ]) _lt_libdeps_save_CFLAGS=$CFLAGS case "$CC $CFLAGS " in #( *\ -flto*\ *) CFLAGS="$CFLAGS -fno-lto" ;; *\ -fwhopr*\ *) CFLAGS="$CFLAGS -fno-whopr" ;; *\ -fuse-linker-plugin*\ *) CFLAGS="$CFLAGS -fno-use-linker-plugin" ;; esac dnl Parse the compiler output and extract the necessary dnl objects, libraries and library flags. if AC_TRY_EVAL(ac_compile); then # Parse the compiler output and extract the necessary # objects, libraries and library flags. # Sentinel used to keep track of whether or not we are before # the conftest object file. pre_test_object_deps_done=no for p in `eval "$output_verbose_link_cmd"`; do case ${prev}${p} in -L* | -R* | -l*) # Some compilers place space between "-{L,R}" and the path. # Remove the space. 
if test $p = "-L" || test $p = "-R"; then prev=$p continue fi # Expand the sysroot to ease extracting the directories later. if test -z "$prev"; then case $p in -L*) func_stripname_cnf '-L' '' "$p"; prev=-L; p=$func_stripname_result ;; -R*) func_stripname_cnf '-R' '' "$p"; prev=-R; p=$func_stripname_result ;; -l*) func_stripname_cnf '-l' '' "$p"; prev=-l; p=$func_stripname_result ;; esac fi case $p in =*) func_stripname_cnf '=' '' "$p"; p=$lt_sysroot$func_stripname_result ;; esac if test "$pre_test_object_deps_done" = no; then case ${prev} in -L | -R) # Internal compiler library paths should come after those # provided the user. The postdeps already come after the # user supplied libs so there is no need to process them. if test -z "$_LT_TAGVAR(compiler_lib_search_path, $1)"; then _LT_TAGVAR(compiler_lib_search_path, $1)="${prev}${p}" else _LT_TAGVAR(compiler_lib_search_path, $1)="${_LT_TAGVAR(compiler_lib_search_path, $1)} ${prev}${p}" fi ;; # The "-l" case would never come before the object being # linked, so don't bother handling this case. esac else if test -z "$_LT_TAGVAR(postdeps, $1)"; then _LT_TAGVAR(postdeps, $1)="${prev}${p}" else _LT_TAGVAR(postdeps, $1)="${_LT_TAGVAR(postdeps, $1)} ${prev}${p}" fi fi prev= ;; *.lto.$objext) ;; # Ignore GCC LTO objects *.$objext) # This assumes that the test object file only shows up # once in the compiler output. if test "$p" = "conftest.$objext"; then pre_test_object_deps_done=yes continue fi if test "$pre_test_object_deps_done" = no; then if test -z "$_LT_TAGVAR(predep_objects, $1)"; then _LT_TAGVAR(predep_objects, $1)="$p" else _LT_TAGVAR(predep_objects, $1)="$_LT_TAGVAR(predep_objects, $1) $p" fi else if test -z "$_LT_TAGVAR(postdep_objects, $1)"; then _LT_TAGVAR(postdep_objects, $1)="$p" else _LT_TAGVAR(postdep_objects, $1)="$_LT_TAGVAR(postdep_objects, $1) $p" fi fi ;; *) ;; # Ignore the rest. esac done # Clean up. 
rm -f a.out a.exe else echo "libtool.m4: error: problem compiling $1 test program" fi $RM -f conftest.$objext CFLAGS=$_lt_libdeps_save_CFLAGS # PORTME: override above test on systems where it is broken m4_if([$1], [CXX], [case $host_os in interix[[3-9]]*) # Interix 3.5 installs completely hosed .la files for C++, so rather than # hack all around it, let's just trust "g++" to DTRT. _LT_TAGVAR(predep_objects,$1)= _LT_TAGVAR(postdep_objects,$1)= _LT_TAGVAR(postdeps,$1)= ;; linux*) case `$CC -V 2>&1 | sed 5q` in *Sun\ C*) # Sun C++ 5.9 # The more standards-conforming stlport4 library is # incompatible with the Cstd library. Avoid specifying # it if it's in CXXFLAGS. Ignore libCrun as # -library=stlport4 depends on it. case " $CXX $CXXFLAGS " in *" -library=stlport4 "*) solaris_use_stlport4=yes ;; esac if test "$solaris_use_stlport4" != yes; then _LT_TAGVAR(postdeps,$1)='-library=Cstd -library=Crun' fi ;; esac ;; solaris*) case $cc_basename in CC* | sunCC*) # The more standards-conforming stlport4 library is # incompatible with the Cstd library. Avoid specifying # it if it's in CXXFLAGS. Ignore libCrun as # -library=stlport4 depends on it. case " $CXX $CXXFLAGS " in *" -library=stlport4 "*) solaris_use_stlport4=yes ;; esac # Adding this requires a known-good setup of shared libraries for # Sun compiler versions before 5.6, else PIC objects from an old # archive will be linked into the output, leading to subtle bugs. if test "$solaris_use_stlport4" != yes; then _LT_TAGVAR(postdeps,$1)='-library=Cstd -library=Crun' fi ;; esac ;; esac ]) case " $_LT_TAGVAR(postdeps, $1) " in *" -lc "*) _LT_TAGVAR(archive_cmds_need_lc, $1)=no ;; esac _LT_TAGVAR(compiler_lib_search_dirs, $1)= if test -n "${_LT_TAGVAR(compiler_lib_search_path, $1)}"; then _LT_TAGVAR(compiler_lib_search_dirs, $1)=`echo " ${_LT_TAGVAR(compiler_lib_search_path, $1)}" | ${SED} -e 's! -L! 
!g' -e 's!^ !!'` fi _LT_TAGDECL([], [compiler_lib_search_dirs], [1], [The directories searched by this compiler when creating a shared library]) _LT_TAGDECL([], [predep_objects], [1], [Dependencies to place before and after the objects being linked to create a shared library]) _LT_TAGDECL([], [postdep_objects], [1]) _LT_TAGDECL([], [predeps], [1]) _LT_TAGDECL([], [postdeps], [1]) _LT_TAGDECL([], [compiler_lib_search_path], [1], [The library search path used internally by the compiler when linking a shared library]) ])# _LT_SYS_HIDDEN_LIBDEPS # _LT_LANG_F77_CONFIG([TAG]) # -------------------------- # Ensure that the configuration variables for a Fortran 77 compiler are # suitably defined. These variables are subsequently used by _LT_CONFIG # to write the compiler configuration to `libtool'. m4_defun([_LT_LANG_F77_CONFIG], [AC_LANG_PUSH(Fortran 77) if test -z "$F77" || test "X$F77" = "Xno"; then _lt_disable_F77=yes fi _LT_TAGVAR(archive_cmds_need_lc, $1)=no _LT_TAGVAR(allow_undefined_flag, $1)= _LT_TAGVAR(always_export_symbols, $1)=no _LT_TAGVAR(archive_expsym_cmds, $1)= _LT_TAGVAR(export_dynamic_flag_spec, $1)= _LT_TAGVAR(hardcode_direct, $1)=no _LT_TAGVAR(hardcode_direct_absolute, $1)=no _LT_TAGVAR(hardcode_libdir_flag_spec, $1)= _LT_TAGVAR(hardcode_libdir_separator, $1)= _LT_TAGVAR(hardcode_minus_L, $1)=no _LT_TAGVAR(hardcode_automatic, $1)=no _LT_TAGVAR(inherit_rpath, $1)=no _LT_TAGVAR(module_cmds, $1)= _LT_TAGVAR(module_expsym_cmds, $1)= _LT_TAGVAR(link_all_deplibs, $1)=unknown _LT_TAGVAR(old_archive_cmds, $1)=$old_archive_cmds _LT_TAGVAR(reload_flag, $1)=$reload_flag _LT_TAGVAR(reload_cmds, $1)=$reload_cmds _LT_TAGVAR(no_undefined_flag, $1)= _LT_TAGVAR(whole_archive_flag_spec, $1)= _LT_TAGVAR(enable_shared_with_static_runtimes, $1)=no # Source file extension for f77 test sources. ac_ext=f # Object file extension for compiled f77 test sources. 
objext=o _LT_TAGVAR(objext, $1)=$objext # No sense in running all these tests if we already determined that # the F77 compiler isn't working. Some variables (like enable_shared) # are currently assumed to apply to all compilers on this platform, # and will be corrupted by setting them based on a non-working compiler. if test "$_lt_disable_F77" != yes; then # Code to be used in simple compile tests lt_simple_compile_test_code="\ subroutine t return end " # Code to be used in simple link tests lt_simple_link_test_code="\ program t end " # ltmain only uses $CC for tagged configurations so make sure $CC is set. _LT_TAG_COMPILER # save warnings/boilerplate of simple test code _LT_COMPILER_BOILERPLATE _LT_LINKER_BOILERPLATE # Allow CC to be a program name with arguments. lt_save_CC="$CC" lt_save_GCC=$GCC lt_save_CFLAGS=$CFLAGS CC=${F77-"f77"} CFLAGS=$FFLAGS compiler=$CC _LT_TAGVAR(compiler, $1)=$CC _LT_CC_BASENAME([$compiler]) GCC=$G77 if test -n "$compiler"; then AC_MSG_CHECKING([if libtool supports shared libraries]) AC_MSG_RESULT([$can_build_shared]) AC_MSG_CHECKING([whether to build shared libraries]) test "$can_build_shared" = "no" && enable_shared=no # On AIX, shared libraries and static libraries use the same namespace, and # are all built from PIC. case $host_os in aix3*) test "$enable_shared" = yes && enable_static=no if test -n "$RANLIB"; then archive_cmds="$archive_cmds~\$RANLIB \$lib" postinstall_cmds='$RANLIB $lib' fi ;; aix[[4-9]]*) if test "$host_cpu" != ia64 && test "$aix_use_runtimelinking" = no ; then test "$enable_shared" = yes && enable_static=no fi ;; esac AC_MSG_RESULT([$enable_shared]) AC_MSG_CHECKING([whether to build static libraries]) # Make sure either enable_shared or enable_static is yes. 
test "$enable_shared" = yes || enable_static=yes AC_MSG_RESULT([$enable_static]) _LT_TAGVAR(GCC, $1)="$G77" _LT_TAGVAR(LD, $1)="$LD" ## CAVEAT EMPTOR: ## There is no encapsulation within the following macros, do not change ## the running order or otherwise move them around unless you know exactly ## what you are doing... _LT_COMPILER_PIC($1) _LT_COMPILER_C_O($1) _LT_COMPILER_FILE_LOCKS($1) _LT_LINKER_SHLIBS($1) _LT_SYS_DYNAMIC_LINKER($1) _LT_LINKER_HARDCODE_LIBPATH($1) _LT_CONFIG($1) fi # test -n "$compiler" GCC=$lt_save_GCC CC="$lt_save_CC" CFLAGS="$lt_save_CFLAGS" fi # test "$_lt_disable_F77" != yes AC_LANG_POP ])# _LT_LANG_F77_CONFIG # _LT_LANG_FC_CONFIG([TAG]) # ------------------------- # Ensure that the configuration variables for a Fortran compiler are # suitably defined. These variables are subsequently used by _LT_CONFIG # to write the compiler configuration to `libtool'. m4_defun([_LT_LANG_FC_CONFIG], [AC_LANG_PUSH(Fortran) if test -z "$FC" || test "X$FC" = "Xno"; then _lt_disable_FC=yes fi _LT_TAGVAR(archive_cmds_need_lc, $1)=no _LT_TAGVAR(allow_undefined_flag, $1)= _LT_TAGVAR(always_export_symbols, $1)=no _LT_TAGVAR(archive_expsym_cmds, $1)= _LT_TAGVAR(export_dynamic_flag_spec, $1)= _LT_TAGVAR(hardcode_direct, $1)=no _LT_TAGVAR(hardcode_direct_absolute, $1)=no _LT_TAGVAR(hardcode_libdir_flag_spec, $1)= _LT_TAGVAR(hardcode_libdir_separator, $1)= _LT_TAGVAR(hardcode_minus_L, $1)=no _LT_TAGVAR(hardcode_automatic, $1)=no _LT_TAGVAR(inherit_rpath, $1)=no _LT_TAGVAR(module_cmds, $1)= _LT_TAGVAR(module_expsym_cmds, $1)= _LT_TAGVAR(link_all_deplibs, $1)=unknown _LT_TAGVAR(old_archive_cmds, $1)=$old_archive_cmds _LT_TAGVAR(reload_flag, $1)=$reload_flag _LT_TAGVAR(reload_cmds, $1)=$reload_cmds _LT_TAGVAR(no_undefined_flag, $1)= _LT_TAGVAR(whole_archive_flag_spec, $1)= _LT_TAGVAR(enable_shared_with_static_runtimes, $1)=no # Source file extension for fc test sources. ac_ext=${ac_fc_srcext-f} # Object file extension for compiled fc test sources. 
objext=o _LT_TAGVAR(objext, $1)=$objext # No sense in running all these tests if we already determined that # the FC compiler isn't working. Some variables (like enable_shared) # are currently assumed to apply to all compilers on this platform, # and will be corrupted by setting them based on a non-working compiler. if test "$_lt_disable_FC" != yes; then # Code to be used in simple compile tests lt_simple_compile_test_code="\ subroutine t return end " # Code to be used in simple link tests lt_simple_link_test_code="\ program t end " # ltmain only uses $CC for tagged configurations so make sure $CC is set. _LT_TAG_COMPILER # save warnings/boilerplate of simple test code _LT_COMPILER_BOILERPLATE _LT_LINKER_BOILERPLATE # Allow CC to be a program name with arguments. lt_save_CC="$CC" lt_save_GCC=$GCC lt_save_CFLAGS=$CFLAGS CC=${FC-"f95"} CFLAGS=$FCFLAGS compiler=$CC GCC=$ac_cv_fc_compiler_gnu _LT_TAGVAR(compiler, $1)=$CC _LT_CC_BASENAME([$compiler]) if test -n "$compiler"; then AC_MSG_CHECKING([if libtool supports shared libraries]) AC_MSG_RESULT([$can_build_shared]) AC_MSG_CHECKING([whether to build shared libraries]) test "$can_build_shared" = "no" && enable_shared=no # On AIX, shared libraries and static libraries use the same namespace, and # are all built from PIC. case $host_os in aix3*) test "$enable_shared" = yes && enable_static=no if test -n "$RANLIB"; then archive_cmds="$archive_cmds~\$RANLIB \$lib" postinstall_cmds='$RANLIB $lib' fi ;; aix[[4-9]]*) if test "$host_cpu" != ia64 && test "$aix_use_runtimelinking" = no ; then test "$enable_shared" = yes && enable_static=no fi ;; esac AC_MSG_RESULT([$enable_shared]) AC_MSG_CHECKING([whether to build static libraries]) # Make sure either enable_shared or enable_static is yes. 
test "$enable_shared" = yes || enable_static=yes AC_MSG_RESULT([$enable_static]) _LT_TAGVAR(GCC, $1)="$ac_cv_fc_compiler_gnu" _LT_TAGVAR(LD, $1)="$LD" ## CAVEAT EMPTOR: ## There is no encapsulation within the following macros, do not change ## the running order or otherwise move them around unless you know exactly ## what you are doing... _LT_SYS_HIDDEN_LIBDEPS($1) _LT_COMPILER_PIC($1) _LT_COMPILER_C_O($1) _LT_COMPILER_FILE_LOCKS($1) _LT_LINKER_SHLIBS($1) _LT_SYS_DYNAMIC_LINKER($1) _LT_LINKER_HARDCODE_LIBPATH($1) _LT_CONFIG($1) fi # test -n "$compiler" GCC=$lt_save_GCC CC=$lt_save_CC CFLAGS=$lt_save_CFLAGS fi # test "$_lt_disable_FC" != yes AC_LANG_POP ])# _LT_LANG_FC_CONFIG # _LT_LANG_GCJ_CONFIG([TAG]) # -------------------------- # Ensure that the configuration variables for the GNU Java Compiler compiler # are suitably defined. These variables are subsequently used by _LT_CONFIG # to write the compiler configuration to `libtool'. m4_defun([_LT_LANG_GCJ_CONFIG], [AC_REQUIRE([LT_PROG_GCJ])dnl AC_LANG_SAVE # Source file extension for Java test sources. ac_ext=java # Object file extension for compiled Java test sources. objext=o _LT_TAGVAR(objext, $1)=$objext # Code to be used in simple compile tests lt_simple_compile_test_code="class foo {}" # Code to be used in simple link tests lt_simple_link_test_code='public class conftest { public static void main(String[[]] argv) {}; }' # ltmain only uses $CC for tagged configurations so make sure $CC is set. _LT_TAG_COMPILER # save warnings/boilerplate of simple test code _LT_COMPILER_BOILERPLATE _LT_LINKER_BOILERPLATE # Allow CC to be a program name with arguments. lt_save_CC=$CC lt_save_CFLAGS=$CFLAGS lt_save_GCC=$GCC GCC=yes CC=${GCJ-"gcj"} CFLAGS=$GCJFLAGS compiler=$CC _LT_TAGVAR(compiler, $1)=$CC _LT_TAGVAR(LD, $1)="$LD" _LT_CC_BASENAME([$compiler]) # GCJ did not exist at the time GCC didn't implicitly link libc in. 
_LT_TAGVAR(archive_cmds_need_lc, $1)=no _LT_TAGVAR(old_archive_cmds, $1)=$old_archive_cmds _LT_TAGVAR(reload_flag, $1)=$reload_flag _LT_TAGVAR(reload_cmds, $1)=$reload_cmds ## CAVEAT EMPTOR: ## There is no encapsulation within the following macros, do not change ## the running order or otherwise move them around unless you know exactly ## what you are doing... if test -n "$compiler"; then _LT_COMPILER_NO_RTTI($1) _LT_COMPILER_PIC($1) _LT_COMPILER_C_O($1) _LT_COMPILER_FILE_LOCKS($1) _LT_LINKER_SHLIBS($1) _LT_LINKER_HARDCODE_LIBPATH($1) _LT_CONFIG($1) fi AC_LANG_RESTORE GCC=$lt_save_GCC CC=$lt_save_CC CFLAGS=$lt_save_CFLAGS ])# _LT_LANG_GCJ_CONFIG # _LT_LANG_GO_CONFIG([TAG]) # -------------------------- # Ensure that the configuration variables for the GNU Go compiler # are suitably defined. These variables are subsequently used by _LT_CONFIG # to write the compiler configuration to `libtool'. m4_defun([_LT_LANG_GO_CONFIG], [AC_REQUIRE([LT_PROG_GO])dnl AC_LANG_SAVE # Source file extension for Go test sources. ac_ext=go # Object file extension for compiled Go test sources. objext=o _LT_TAGVAR(objext, $1)=$objext # Code to be used in simple compile tests lt_simple_compile_test_code="package main; func main() { }" # Code to be used in simple link tests lt_simple_link_test_code='package main; func main() { }' # ltmain only uses $CC for tagged configurations so make sure $CC is set. _LT_TAG_COMPILER # save warnings/boilerplate of simple test code _LT_COMPILER_BOILERPLATE _LT_LINKER_BOILERPLATE # Allow CC to be a program name with arguments. lt_save_CC=$CC lt_save_CFLAGS=$CFLAGS lt_save_GCC=$GCC GCC=yes CC=${GOC-"gccgo"} CFLAGS=$GOFLAGS compiler=$CC _LT_TAGVAR(compiler, $1)=$CC _LT_TAGVAR(LD, $1)="$LD" _LT_CC_BASENAME([$compiler]) # Go did not exist at the time GCC didn't implicitly link libc in. 
_LT_TAGVAR(archive_cmds_need_lc, $1)=no _LT_TAGVAR(old_archive_cmds, $1)=$old_archive_cmds _LT_TAGVAR(reload_flag, $1)=$reload_flag _LT_TAGVAR(reload_cmds, $1)=$reload_cmds ## CAVEAT EMPTOR: ## There is no encapsulation within the following macros, do not change ## the running order or otherwise move them around unless you know exactly ## what you are doing... if test -n "$compiler"; then _LT_COMPILER_NO_RTTI($1) _LT_COMPILER_PIC($1) _LT_COMPILER_C_O($1) _LT_COMPILER_FILE_LOCKS($1) _LT_LINKER_SHLIBS($1) _LT_LINKER_HARDCODE_LIBPATH($1) _LT_CONFIG($1) fi AC_LANG_RESTORE GCC=$lt_save_GCC CC=$lt_save_CC CFLAGS=$lt_save_CFLAGS ])# _LT_LANG_GO_CONFIG # _LT_LANG_RC_CONFIG([TAG]) # ------------------------- # Ensure that the configuration variables for the Windows resource compiler # are suitably defined. These variables are subsequently used by _LT_CONFIG # to write the compiler configuration to `libtool'. m4_defun([_LT_LANG_RC_CONFIG], [AC_REQUIRE([LT_PROG_RC])dnl AC_LANG_SAVE # Source file extension for RC test sources. ac_ext=rc # Object file extension for compiled RC test sources. objext=o _LT_TAGVAR(objext, $1)=$objext # Code to be used in simple compile tests lt_simple_compile_test_code='sample MENU { MENUITEM "&Soup", 100, CHECKED }' # Code to be used in simple link tests lt_simple_link_test_code="$lt_simple_compile_test_code" # ltmain only uses $CC for tagged configurations so make sure $CC is set. _LT_TAG_COMPILER # save warnings/boilerplate of simple test code _LT_COMPILER_BOILERPLATE _LT_LINKER_BOILERPLATE # Allow CC to be a program name with arguments. 
lt_save_CC="$CC" lt_save_CFLAGS=$CFLAGS lt_save_GCC=$GCC GCC= CC=${RC-"windres"} CFLAGS= compiler=$CC _LT_TAGVAR(compiler, $1)=$CC _LT_CC_BASENAME([$compiler]) _LT_TAGVAR(lt_cv_prog_compiler_c_o, $1)=yes if test -n "$compiler"; then : _LT_CONFIG($1) fi GCC=$lt_save_GCC AC_LANG_RESTORE CC=$lt_save_CC CFLAGS=$lt_save_CFLAGS ])# _LT_LANG_RC_CONFIG # LT_PROG_GCJ # ----------- AC_DEFUN([LT_PROG_GCJ], [m4_ifdef([AC_PROG_GCJ], [AC_PROG_GCJ], [m4_ifdef([A][M_PROG_GCJ], [A][M_PROG_GCJ], [AC_CHECK_TOOL(GCJ, gcj,) test "x${GCJFLAGS+set}" = xset || GCJFLAGS="-g -O2" AC_SUBST(GCJFLAGS)])])[]dnl ]) # Old name: AU_ALIAS([LT_AC_PROG_GCJ], [LT_PROG_GCJ]) dnl aclocal-1.4 backwards compatibility: dnl AC_DEFUN([LT_AC_PROG_GCJ], []) # LT_PROG_GO # ---------- AC_DEFUN([LT_PROG_GO], [AC_CHECK_TOOL(GOC, gccgo,) ]) # LT_PROG_RC # ---------- AC_DEFUN([LT_PROG_RC], [AC_CHECK_TOOL(RC, windres,) ]) # Old name: AU_ALIAS([LT_AC_PROG_RC], [LT_PROG_RC]) dnl aclocal-1.4 backwards compatibility: dnl AC_DEFUN([LT_AC_PROG_RC], []) # _LT_DECL_EGREP # -------------- # If we don't have a new enough Autoconf to choose the best grep # available, choose the one first in the user's PATH. m4_defun([_LT_DECL_EGREP], [AC_REQUIRE([AC_PROG_EGREP])dnl AC_REQUIRE([AC_PROG_FGREP])dnl test -z "$GREP" && GREP=grep _LT_DECL([], [GREP], [1], [A grep program that handles long lines]) _LT_DECL([], [EGREP], [1], [An ERE matcher]) _LT_DECL([], [FGREP], [1], [A literal string matcher]) dnl Non-bleeding-edge autoconf doesn't subst GREP, so do it here too AC_SUBST([GREP]) ]) # _LT_DECL_OBJDUMP # -------------- # If we don't have a new enough Autoconf to choose the best objdump # available, choose the one first in the user's PATH. m4_defun([_LT_DECL_OBJDUMP], [AC_CHECK_TOOL(OBJDUMP, objdump, false) test -z "$OBJDUMP" && OBJDUMP=objdump _LT_DECL([], [OBJDUMP], [1], [An object symbol dumper]) AC_SUBST([OBJDUMP]) ]) # _LT_DECL_DLLTOOL # ---------------- # Ensure DLLTOOL variable is set. 
m4_defun([_LT_DECL_DLLTOOL], [AC_CHECK_TOOL(DLLTOOL, dlltool, false) test -z "$DLLTOOL" && DLLTOOL=dlltool _LT_DECL([], [DLLTOOL], [1], [DLL creation program]) AC_SUBST([DLLTOOL]) ]) # _LT_DECL_SED # ------------ # Check for a fully-functional sed program, that truncates # as few characters as possible. Prefer GNU sed if found. m4_defun([_LT_DECL_SED], [AC_PROG_SED test -z "$SED" && SED=sed Xsed="$SED -e 1s/^X//" _LT_DECL([], [SED], [1], [A sed program that does not truncate output]) _LT_DECL([], [Xsed], ["\$SED -e 1s/^X//"], [Sed that helps us avoid accidentally triggering echo(1) options like -n]) ])# _LT_DECL_SED m4_ifndef([AC_PROG_SED], [ ############################################################ # NOTE: This macro has been submitted for inclusion into # # GNU Autoconf as AC_PROG_SED. When it is available in # # a released version of Autoconf we should remove this # # macro and use it instead. # ############################################################ m4_defun([AC_PROG_SED], [AC_MSG_CHECKING([for a sed that does not truncate output]) AC_CACHE_VAL(lt_cv_path_SED, [# Loop through the user's path and test for sed and gsed. # Then use that list of sed's as ones to test for truncation. as_save_IFS=$IFS; IFS=$PATH_SEPARATOR for as_dir in $PATH do IFS=$as_save_IFS test -z "$as_dir" && as_dir=. for lt_ac_prog in sed gsed; do for ac_exec_ext in '' $ac_executable_extensions; do if $as_executable_p "$as_dir/$lt_ac_prog$ac_exec_ext"; then lt_ac_sed_list="$lt_ac_sed_list $as_dir/$lt_ac_prog$ac_exec_ext" fi done done done IFS=$as_save_IFS lt_ac_max=0 lt_ac_count=0 # Add /usr/xpg4/bin/sed as it is typically found on Solaris # along with /bin/sed that truncates output. for lt_ac_sed in $lt_ac_sed_list /usr/xpg4/bin/sed; do test ! -f $lt_ac_sed && continue cat /dev/null > conftest.in lt_ac_count=0 echo $ECHO_N "0123456789$ECHO_C" >conftest.in # Check for GNU sed and select it if it is found. 
if "$lt_ac_sed" --version 2>&1 < /dev/null | grep 'GNU' > /dev/null; then lt_cv_path_SED=$lt_ac_sed break fi while true; do cat conftest.in conftest.in >conftest.tmp mv conftest.tmp conftest.in cp conftest.in conftest.nl echo >>conftest.nl $lt_ac_sed -e 's/a$//' < conftest.nl >conftest.out || break cmp -s conftest.out conftest.nl || break # 10000 chars as input seems more than enough test $lt_ac_count -gt 10 && break lt_ac_count=`expr $lt_ac_count + 1` if test $lt_ac_count -gt $lt_ac_max; then lt_ac_max=$lt_ac_count lt_cv_path_SED=$lt_ac_sed fi done done ]) SED=$lt_cv_path_SED AC_SUBST([SED]) AC_MSG_RESULT([$SED]) ])#AC_PROG_SED ])#m4_ifndef # Old name: AU_ALIAS([LT_AC_PROG_SED], [AC_PROG_SED]) dnl aclocal-1.4 backwards compatibility: dnl AC_DEFUN([LT_AC_PROG_SED], []) # _LT_CHECK_SHELL_FEATURES # ------------------------ # Find out whether the shell is Bourne or XSI compatible, # or has some other useful features. m4_defun([_LT_CHECK_SHELL_FEATURES], [AC_MSG_CHECKING([whether the shell understands some XSI constructs]) # Try some XSI features xsi_shell=no ( _lt_dummy="a/b/c" test "${_lt_dummy##*/},${_lt_dummy%/*},${_lt_dummy#??}"${_lt_dummy%"$_lt_dummy"}, \ = c,a/b,b/c, \ && eval 'test $(( 1 + 1 )) -eq 2 \ && test "${#_lt_dummy}" -eq 5' ) >/dev/null 2>&1 \ && xsi_shell=yes AC_MSG_RESULT([$xsi_shell]) _LT_CONFIG_LIBTOOL_INIT([xsi_shell='$xsi_shell']) AC_MSG_CHECKING([whether the shell understands "+="]) lt_shell_append=no ( foo=bar; set foo baz; eval "$[1]+=\$[2]" && test "$foo" = barbaz ) \ >/dev/null 2>&1 \ && lt_shell_append=yes AC_MSG_RESULT([$lt_shell_append]) _LT_CONFIG_LIBTOOL_INIT([lt_shell_append='$lt_shell_append']) if ( (MAIL=60; unset MAIL) || exit) >/dev/null 2>&1; then lt_unset=unset else lt_unset=false fi _LT_DECL([], [lt_unset], [0], [whether the shell understands "unset"])dnl # test EBCDIC or ASCII case `echo X|tr X '\101'` in A) # ASCII based system # \n is not interpreted correctly by Solaris 8 /usr/ucb/tr lt_SP2NL='tr \040 \012' lt_NL2SP='tr 
\015\012 \040\040' ;; *) # EBCDIC based system lt_SP2NL='tr \100 \n' lt_NL2SP='tr \r\n \100\100' ;; esac _LT_DECL([SP2NL], [lt_SP2NL], [1], [turn spaces into newlines])dnl _LT_DECL([NL2SP], [lt_NL2SP], [1], [turn newlines into spaces])dnl ])# _LT_CHECK_SHELL_FEATURES # _LT_PROG_FUNCTION_REPLACE (FUNCNAME, REPLACEMENT-BODY) # ------------------------------------------------------ # In `$cfgfile', look for function FUNCNAME delimited by `^FUNCNAME ()$' and # '^} FUNCNAME ', and replace its body with REPLACEMENT-BODY. m4_defun([_LT_PROG_FUNCTION_REPLACE], [dnl { sed -e '/^$1 ()$/,/^} # $1 /c\ $1 ()\ {\ m4_bpatsubsts([$2], [$], [\\], [^\([ ]\)], [\\\1]) } # Extended-shell $1 implementation' "$cfgfile" > $cfgfile.tmp \ && mv -f "$cfgfile.tmp" "$cfgfile" \ || (rm -f "$cfgfile" && cp "$cfgfile.tmp" "$cfgfile" && rm -f "$cfgfile.tmp") test 0 -eq $? || _lt_function_replace_fail=: ]) # _LT_PROG_REPLACE_SHELLFNS # ------------------------- # Replace existing portable implementations of several shell functions with # equivalent extended shell implementations where those features are available. m4_defun([_LT_PROG_REPLACE_SHELLFNS], [if test x"$xsi_shell" = xyes; then _LT_PROG_FUNCTION_REPLACE([func_dirname], [dnl case ${1} in */*) func_dirname_result="${1%/*}${2}" ;; * ) func_dirname_result="${3}" ;; esac]) _LT_PROG_FUNCTION_REPLACE([func_basename], [dnl func_basename_result="${1##*/}"]) _LT_PROG_FUNCTION_REPLACE([func_dirname_and_basename], [dnl case ${1} in */*) func_dirname_result="${1%/*}${2}" ;; * ) func_dirname_result="${3}" ;; esac func_basename_result="${1##*/}"]) _LT_PROG_FUNCTION_REPLACE([func_stripname], [dnl # pdksh 5.2.14 does not do ${X%$Y} correctly if both X and Y are # positional parameters, so assign one to ordinary parameter first. 
func_stripname_result=${3} func_stripname_result=${func_stripname_result#"${1}"} func_stripname_result=${func_stripname_result%"${2}"}]) _LT_PROG_FUNCTION_REPLACE([func_split_long_opt], [dnl func_split_long_opt_name=${1%%=*} func_split_long_opt_arg=${1#*=}]) _LT_PROG_FUNCTION_REPLACE([func_split_short_opt], [dnl func_split_short_opt_arg=${1#??} func_split_short_opt_name=${1%"$func_split_short_opt_arg"}]) _LT_PROG_FUNCTION_REPLACE([func_lo2o], [dnl case ${1} in *.lo) func_lo2o_result=${1%.lo}.${objext} ;; *) func_lo2o_result=${1} ;; esac]) _LT_PROG_FUNCTION_REPLACE([func_xform], [ func_xform_result=${1%.*}.lo]) _LT_PROG_FUNCTION_REPLACE([func_arith], [ func_arith_result=$(( $[*] ))]) _LT_PROG_FUNCTION_REPLACE([func_len], [ func_len_result=${#1}]) fi if test x"$lt_shell_append" = xyes; then _LT_PROG_FUNCTION_REPLACE([func_append], [ eval "${1}+=\\${2}"]) _LT_PROG_FUNCTION_REPLACE([func_append_quoted], [dnl func_quote_for_eval "${2}" dnl m4 expansion turns \\\\ into \\, and then the shell eval turns that into \ eval "${1}+=\\\\ \\$func_quote_for_eval_result"]) # Save a `func_append' function call where possible by direct use of '+=' sed -e 's%func_append \([[a-zA-Z_]]\{1,\}\) "%\1+="%g' $cfgfile > $cfgfile.tmp \ && mv -f "$cfgfile.tmp" "$cfgfile" \ || (rm -f "$cfgfile" && cp "$cfgfile.tmp" "$cfgfile" && rm -f "$cfgfile.tmp") test 0 -eq $? || _lt_function_replace_fail=: else # Save a `func_append' function call even when '+=' is not available sed -e 's%func_append \([[a-zA-Z_]]\{1,\}\) "%\1="$\1%g' $cfgfile > $cfgfile.tmp \ && mv -f "$cfgfile.tmp" "$cfgfile" \ || (rm -f "$cfgfile" && cp "$cfgfile.tmp" "$cfgfile" && rm -f "$cfgfile.tmp") test 0 -eq $? 
|| _lt_function_replace_fail=: fi if test x"$_lt_function_replace_fail" = x":"; then AC_MSG_WARN([Unable to substitute extended shell functions in $ofile]) fi ]) # _LT_PATH_CONVERSION_FUNCTIONS # ----------------------------- # Determine which file name conversion functions should be used by # func_to_host_file (and, implicitly, by func_to_host_path). These are needed # for certain cross-compile configurations and native mingw. m4_defun([_LT_PATH_CONVERSION_FUNCTIONS], [AC_REQUIRE([AC_CANONICAL_HOST])dnl AC_REQUIRE([AC_CANONICAL_BUILD])dnl AC_MSG_CHECKING([how to convert $build file names to $host format]) AC_CACHE_VAL(lt_cv_to_host_file_cmd, [case $host in *-*-mingw* ) case $build in *-*-mingw* ) # actually msys lt_cv_to_host_file_cmd=func_convert_file_msys_to_w32 ;; *-*-cygwin* ) lt_cv_to_host_file_cmd=func_convert_file_cygwin_to_w32 ;; * ) # otherwise, assume *nix lt_cv_to_host_file_cmd=func_convert_file_nix_to_w32 ;; esac ;; *-*-cygwin* ) case $build in *-*-mingw* ) # actually msys lt_cv_to_host_file_cmd=func_convert_file_msys_to_cygwin ;; *-*-cygwin* ) lt_cv_to_host_file_cmd=func_convert_file_noop ;; * ) # otherwise, assume *nix lt_cv_to_host_file_cmd=func_convert_file_nix_to_cygwin ;; esac ;; * ) # unhandled hosts (and "normal" native builds) lt_cv_to_host_file_cmd=func_convert_file_noop ;; esac ]) to_host_file_cmd=$lt_cv_to_host_file_cmd AC_MSG_RESULT([$lt_cv_to_host_file_cmd]) _LT_DECL([to_host_file_cmd], [lt_cv_to_host_file_cmd], [0], [convert $build file names to $host format])dnl AC_MSG_CHECKING([how to convert $build file names to toolchain format]) AC_CACHE_VAL(lt_cv_to_tool_file_cmd, [#assume ordinary cross tools, or native build. 
lt_cv_to_tool_file_cmd=func_convert_file_noop case $host in *-*-mingw* ) case $build in *-*-mingw* ) # actually msys lt_cv_to_tool_file_cmd=func_convert_file_msys_to_w32 ;; esac ;; esac ]) to_tool_file_cmd=$lt_cv_to_tool_file_cmd AC_MSG_RESULT([$lt_cv_to_tool_file_cmd]) _LT_DECL([to_tool_file_cmd], [lt_cv_to_tool_file_cmd], [0], [convert $build files to toolchain format])dnl ])# _LT_PATH_CONVERSION_FUNCTIONS slurm-slurm-15-08-7-1/auxdir/ltmain.sh # libtool (GNU libtool) 2.4.2 # Written by Gordon Matzigkeit , 1996 # Copyright (C) 1996, 1997, 1998, 1999, 2000, 2001, 2003, 2004, 2005, 2006, # 2007, 2008, 2009, 2010, 2011 Free Software Foundation, Inc. # This is free software; see the source for copying conditions. There is NO # warranty; not even for MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. # GNU Libtool is free software; you can redistribute it and/or modify # it under the terms of the GNU General Public License as published by # the Free Software Foundation; either version 2 of the License, or # (at your option) any later version. # # As a special exception to the GNU General Public License, # if you distribute this file as part of a program or library that # is built using GNU Libtool, you may include this file under the # same distribution terms that you use for the rest of that program. # # GNU Libtool is distributed in the hope that it will be useful, but # WITHOUT ANY WARRANTY; without even the implied warranty of # MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU # General Public License for more details. # # You should have received a copy of the GNU General Public License # along with GNU Libtool; see the file COPYING. If not, a copy # can be downloaded from http://www.gnu.org/licenses/gpl.html, # or obtained by writing to the Free Software Foundation, Inc., # 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301, USA. # Usage: $progname [OPTION]... 
[MODE-ARG]... # # Provide generalized library-building support services. # # --config show all configuration variables # --debug enable verbose shell tracing # -n, --dry-run display commands without modifying any files # --features display basic configuration information and exit # --mode=MODE use operation mode MODE # --preserve-dup-deps don't remove duplicate dependency libraries # --quiet, --silent don't print informational messages # --no-quiet, --no-silent # print informational messages (default) # --no-warn don't display warning messages # --tag=TAG use configuration variables from tag TAG # -v, --verbose print more informational messages than default # --no-verbose don't print the extra informational messages # --version print version information # -h, --help, --help-all print short, long, or detailed help message # # MODE must be one of the following: # # clean remove files from the build directory # compile compile a source file into a libtool object # execute automatically set library path, then run a program # finish complete the installation of libtool libraries # install install libraries or executables # link create a library or an executable # uninstall remove libraries from an installed directory # # MODE-ARGS vary depending on the MODE. When passed as first option, # `--mode=MODE' may be abbreviated as `MODE' or a unique abbreviation of that. # Try `$progname --help --mode=MODE' for a more detailed description of MODE. # # When reporting a bug, please describe a test case to reproduce it and # include the following information: # # host-triplet: $host # shell: $SHELL # compiler: $LTCC # compiler flags: $LTCFLAGS # linker: $LD (gnu? $with_gnu_ld) # $progname: (GNU libtool) 2.4.2 Debian-2.4.2-1.11 # automake: $automake_version # autoconf: $autoconf_version # # Report bugs to . # GNU libtool home page: . # General help using GNU software: . 
PROGRAM=libtool
PACKAGE=libtool
VERSION="2.4.2 Debian-2.4.2-1.11"
TIMESTAMP=""
package_revision=1.3337

# Be Bourne compatible
if test -n "${ZSH_VERSION+set}" && (emulate sh) >/dev/null 2>&1; then
  emulate sh
  NULLCMD=:
  # Zsh 3.x and 4.x performs word splitting on ${1+"$@"}, which
  # is contrary to our usage.  Disable this feature.
  alias -g '${1+"$@"}'='"$@"'
  setopt NO_GLOB_SUBST
else
  case `(set -o) 2>/dev/null` in *posix*) set -o posix;; esac
fi
BIN_SH=xpg4; export BIN_SH # for Tru64
DUALCASE=1; export DUALCASE # for MKS sh

# A function that is used when there is no print builtin or printf.
func_fallback_echo ()
{
  eval 'cat <<_LTECHO_EOF
$1
_LTECHO_EOF'
}

# NLS nuisances: We save the old values to restore during execute mode.
lt_user_locale=
lt_safe_locale=
for lt_var in LANG LANGUAGE LC_ALL LC_CTYPE LC_COLLATE LC_MESSAGES
do
  eval "if test \"\${$lt_var+set}\" = set; then
          save_$lt_var=\$$lt_var
          $lt_var=C
          export $lt_var
          lt_user_locale=\"$lt_var=\\\$save_\$lt_var; \$lt_user_locale\"
          lt_safe_locale=\"$lt_var=C; \$lt_safe_locale\"
        fi"
done
LC_ALL=C
LANGUAGE=C
export LANGUAGE LC_ALL

$lt_unset CDPATH

# Work around backward compatibility issue on IRIX 6.5. On IRIX 6.4+, sh
# is ksh but when the shell is invoked as "sh" and the current value of
# the _XPG environment variable is not equal to 1 (one), the special
# positional parameter $0, within a function call, is the name of the
# function.
progpath="$0"

: ${CP="cp -f"}
test "${ECHO+set}" = set || ECHO=${as_echo-'printf %s\n'}
: ${MAKE="make"}
: ${MKDIR="mkdir"}
: ${MV="mv -f"}
: ${RM="rm -f"}
: ${SHELL="${CONFIG_SHELL-/bin/sh}"}
: ${Xsed="$SED -e 1s/^X//"}

# Global variables:
EXIT_SUCCESS=0
EXIT_FAILURE=1
EXIT_MISMATCH=63  # $? = 63 is used to indicate version mismatch to missing.
EXIT_SKIP=77      # $? = 77 is used to indicate a skipped test to automake.
exit_status=$EXIT_SUCCESS

# Make sure IFS has a sensible default
lt_nl='
'
IFS=" $lt_nl"

dirname="s,/[^/]*$,,"
basename="s,^.*/,,"

# func_dirname file append nondir_replacement
# Compute the dirname of FILE.  If nonempty, add APPEND to the result,
# otherwise set result to NONDIR_REPLACEMENT.
func_dirname ()
{
  func_dirname_result=`$ECHO "${1}" | $SED "$dirname"`
  if test "X$func_dirname_result" = "X${1}"; then
    func_dirname_result="${3}"
  else
    func_dirname_result="$func_dirname_result${2}"
  fi
} # func_dirname may be replaced by extended shell implementation

# func_basename file
func_basename ()
{
  func_basename_result=`$ECHO "${1}" | $SED "$basename"`
} # func_basename may be replaced by extended shell implementation

# func_dirname_and_basename file append nondir_replacement
# perform func_basename and func_dirname in a single function
# call:
#   dirname:  Compute the dirname of FILE.  If nonempty,
#             add APPEND to the result, otherwise set result
#             to NONDIR_REPLACEMENT.
#             value returned in "$func_dirname_result"
#   basename: Compute filename of FILE.
#             value returned in "$func_basename_result"
# Implementation must be kept synchronized with func_dirname
# and func_basename.  For efficiency, we do not delegate to
# those functions but instead duplicate the functionality here.
func_dirname_and_basename ()
{
  # Extract subdirectory from the argument.
  func_dirname_result=`$ECHO "${1}" | $SED -e "$dirname"`
  if test "X$func_dirname_result" = "X${1}"; then
    func_dirname_result="${3}"
  else
    func_dirname_result="$func_dirname_result${2}"
  fi
  func_basename_result=`$ECHO "${1}" | $SED -e "$basename"`
} # func_dirname_and_basename may be replaced by extended shell implementation

# func_stripname prefix suffix name
# strip PREFIX and SUFFIX off of NAME.
# PREFIX and SUFFIX must not contain globbing or regex special
# characters, hashes, percent signs, but SUFFIX may contain a leading
# dot (in which case that matches only a dot).
# func_strip_suffix prefix name func_stripname () { case ${2} in .*) func_stripname_result=`$ECHO "${3}" | $SED "s%^${1}%%; s%\\\\${2}\$%%"`;; *) func_stripname_result=`$ECHO "${3}" | $SED "s%^${1}%%; s%${2}\$%%"`;; esac } # func_stripname may be replaced by extended shell implementation # These SED scripts presuppose an absolute path with a trailing slash. pathcar='s,^/\([^/]*\).*$,\1,' pathcdr='s,^/[^/]*,,' removedotparts=':dotsl s@/\./@/@g t dotsl s,/\.$,/,' collapseslashes='s@/\{1,\}@/@g' finalslash='s,/*$,/,' # func_normal_abspath PATH # Remove doubled-up and trailing slashes, "." path components, # and cancel out any ".." path components in PATH after making # it an absolute path. # value returned in "$func_normal_abspath_result" func_normal_abspath () { # Start from root dir and reassemble the path. func_normal_abspath_result= func_normal_abspath_tpath=$1 func_normal_abspath_altnamespace= case $func_normal_abspath_tpath in "") # Empty path, that just means $cwd. func_stripname '' '/' "`pwd`" func_normal_abspath_result=$func_stripname_result return ;; # The next three entries are used to spot a run of precisely # two leading slashes without using negated character classes; # we take advantage of case's first-match behaviour. ///*) # Unusual form of absolute path, do nothing. ;; //*) # Not necessarily an ordinary path; POSIX reserves leading '//' # and for example Cygwin uses it to access remote file shares # over CIFS/SMB, so we conserve a leading double slash if found. func_normal_abspath_altnamespace=/ ;; /*) # Absolute path, do nothing. ;; *) # Relative path, prepend $cwd. func_normal_abspath_tpath=`pwd`/$func_normal_abspath_tpath ;; esac # Cancel out all the simple stuff to save iterations. We also want # the path to end with a slash for ease of parsing, so make sure # there is one (and only one) here. 
func_normal_abspath_tpath=`$ECHO "$func_normal_abspath_tpath" | $SED \ -e "$removedotparts" -e "$collapseslashes" -e "$finalslash"` while :; do # Processed it all yet? if test "$func_normal_abspath_tpath" = / ; then # If we ascended to the root using ".." the result may be empty now. if test -z "$func_normal_abspath_result" ; then func_normal_abspath_result=/ fi break fi func_normal_abspath_tcomponent=`$ECHO "$func_normal_abspath_tpath" | $SED \ -e "$pathcar"` func_normal_abspath_tpath=`$ECHO "$func_normal_abspath_tpath" | $SED \ -e "$pathcdr"` # Figure out what to do with it case $func_normal_abspath_tcomponent in "") # Trailing empty path component, ignore it. ;; ..) # Parent dir; strip last assembled component from result. func_dirname "$func_normal_abspath_result" func_normal_abspath_result=$func_dirname_result ;; *) # Actual path component, append it. func_normal_abspath_result=$func_normal_abspath_result/$func_normal_abspath_tcomponent ;; esac done # Restore leading double-slash if one was found on entry. func_normal_abspath_result=$func_normal_abspath_altnamespace$func_normal_abspath_result } # func_relative_path SRCDIR DSTDIR # generates a relative path from SRCDIR to DSTDIR, with a trailing # slash if non-empty, suitable for immediately appending a filename # without needing to append a separator. 
# value returned in "$func_relative_path_result" func_relative_path () { func_relative_path_result= func_normal_abspath "$1" func_relative_path_tlibdir=$func_normal_abspath_result func_normal_abspath "$2" func_relative_path_tbindir=$func_normal_abspath_result # Ascend the tree starting from libdir while :; do # check if we have found a prefix of bindir case $func_relative_path_tbindir in $func_relative_path_tlibdir) # found an exact match func_relative_path_tcancelled= break ;; $func_relative_path_tlibdir*) # found a matching prefix func_stripname "$func_relative_path_tlibdir" '' "$func_relative_path_tbindir" func_relative_path_tcancelled=$func_stripname_result if test -z "$func_relative_path_result"; then func_relative_path_result=. fi break ;; *) func_dirname $func_relative_path_tlibdir func_relative_path_tlibdir=${func_dirname_result} if test "x$func_relative_path_tlibdir" = x ; then # Have to descend all the way to the root! func_relative_path_result=../$func_relative_path_result func_relative_path_tcancelled=$func_relative_path_tbindir break fi func_relative_path_result=../$func_relative_path_result ;; esac done # Now calculate path; take care to avoid doubling-up slashes. func_stripname '' '/' "$func_relative_path_result" func_relative_path_result=$func_stripname_result func_stripname '/' '/' "$func_relative_path_tcancelled" if test "x$func_stripname_result" != x ; then func_relative_path_result=${func_relative_path_result}/${func_stripname_result} fi # Normalisation. If bindir is libdir, return empty string, # else relative path ending with a slash; either way, target # file name can be directly appended. if test ! 
-z "$func_relative_path_result"; then func_stripname './' '' "$func_relative_path_result/" func_relative_path_result=$func_stripname_result fi } # The name of this program: func_dirname_and_basename "$progpath" progname=$func_basename_result # Make sure we have an absolute path for reexecution: case $progpath in [\\/]*|[A-Za-z]:\\*) ;; *[\\/]*) progdir=$func_dirname_result progdir=`cd "$progdir" && pwd` progpath="$progdir/$progname" ;; *) save_IFS="$IFS" IFS=${PATH_SEPARATOR-:} for progdir in $PATH; do IFS="$save_IFS" test -x "$progdir/$progname" && break done IFS="$save_IFS" test -n "$progdir" || progdir=`pwd` progpath="$progdir/$progname" ;; esac # Sed substitution that helps us do robust quoting. It backslashifies # metacharacters that are still active within double-quoted strings. Xsed="${SED}"' -e 1s/^X//' sed_quote_subst='s/\([`"$\\]\)/\\\1/g' # Same as above, but do not quote variable references. double_quote_subst='s/\(["`\\]\)/\\\1/g' # Sed substitution that turns a string into a regex matching for the # string literally. sed_make_literal_regex='s,[].[^$\\*\/],\\&,g' # Sed substitution that converts a w32 file name or path # which contains forward slashes, into one that contains # (escaped) backslashes. A very naive implementation. lt_sed_naive_backslashify='s|\\\\*|\\|g;s|/|\\|g;s|\\|\\\\|g' # Re-`\' parameter expansions in output of double_quote_subst that were # `\'-ed in input to the same. If an odd number of `\' preceded a '$' # in input to double_quote_subst, that '$' was protected from expansion. # Since each input `\' is now two `\'s, look for any number of runs of # four `\'s followed by two `\'s and then a '$'. `\' that '$'. bs='\\' bs2='\\\\' bs4='\\\\\\\\' dollar='\$' sed_double_backslash="\ s/$bs4/&\\ /g s/^$bs2$dollar/$bs&/ s/\\([^$bs]\\)$bs2$dollar/\\1$bs2$bs$dollar/g s/\n//g" # Standard options: opt_dry_run=false opt_help=false opt_quiet=false opt_verbose=false opt_warning=: # func_echo arg... 
# Echo program name prefixed message, along with the current mode # name if it has been set yet. func_echo () { $ECHO "$progname: ${opt_mode+$opt_mode: }$*" } # func_verbose arg... # Echo program name prefixed message in verbose mode only. func_verbose () { $opt_verbose && func_echo ${1+"$@"} # A bug in bash halts the script if the last line of a function # fails when set -e is in force, so we need another command to # work around that: : } # func_echo_all arg... # Invoke $ECHO with all args, space-separated. func_echo_all () { $ECHO "$*" } # func_error arg... # Echo program name prefixed message to standard error. func_error () { $ECHO "$progname: ${opt_mode+$opt_mode: }"${1+"$@"} 1>&2 } # func_warning arg... # Echo program name prefixed warning message to standard error. func_warning () { $opt_warning && $ECHO "$progname: ${opt_mode+$opt_mode: }warning: "${1+"$@"} 1>&2 # bash bug again: : } # func_fatal_error arg... # Echo program name prefixed message to standard error, and exit. func_fatal_error () { func_error ${1+"$@"} exit $EXIT_FAILURE } # func_fatal_help arg... # Echo program name prefixed message to standard error, followed by # a help hint, and exit. func_fatal_help () { func_error ${1+"$@"} func_fatal_error "$help" } help="Try \`$progname --help' for more information." ## default # func_grep expression filename # Check whether EXPRESSION matches any line of FILENAME, without output. func_grep () { $GREP "$1" "$2" >/dev/null 2>&1 } # func_mkdir_p directory-path # Make sure the entire path to DIRECTORY-PATH is available. func_mkdir_p () { my_directory_path="$1" my_dir_list= if test -n "$my_directory_path" && test "$opt_dry_run" != ":"; then # Protect directory names starting with `-' case $my_directory_path in -*) my_directory_path="./$my_directory_path" ;; esac # While some portion of DIR does not yet exist... while test ! -d "$my_directory_path"; do # ...make a list in topmost first order. 
      # Use a colon delimited
      # list in case some portion of path contains whitespace.
      my_dir_list="$my_directory_path:$my_dir_list"

      # If the last portion added has no slash in it, the list is done
      case $my_directory_path in */*) ;; *) break ;; esac

      # ...otherwise throw away the child directory and loop
      my_directory_path=`$ECHO "$my_directory_path" | $SED -e "$dirname"`
    done
    my_dir_list=`$ECHO "$my_dir_list" | $SED 's,:*$,,'`

    save_mkdir_p_IFS="$IFS"; IFS=':'
    for my_dir in $my_dir_list; do
      IFS="$save_mkdir_p_IFS"
      # mkdir can fail with a `File exists' error if two processes
      # try to create one of the directories concurrently.  Don't
      # stop in that case!
      $MKDIR "$my_dir" 2>/dev/null || :
    done
    IFS="$save_mkdir_p_IFS"

    # Bail out if we (or some other process) failed to create a directory.
    test -d "$my_directory_path" || \
      func_fatal_error "Failed to create \`$1'"
  fi
}


# func_mktempdir [string]
# Make a temporary directory that won't clash with other running
# libtool processes, and avoids race conditions if possible.  If
# given, STRING is the basename for that directory.
func_mktempdir ()
{
    my_template="${TMPDIR-/tmp}/${1-$progname}"

    if test "$opt_dry_run" = ":"; then
      # Return a directory name, but don't create it in dry-run mode
      my_tmpdir="${my_template}-$$"
    else

      # If mktemp works, use that first and foremost
      my_tmpdir=`mktemp -d "${my_template}-XXXXXXXX" 2>/dev/null`

      if test ! -d "$my_tmpdir"; then
        # Failing that, at least try and use $RANDOM to avoid a race
        my_tmpdir="${my_template}-${RANDOM-0}$$"

        save_mktempdir_umask=`umask`
        umask 0077
        $MKDIR "$my_tmpdir"
        umask $save_mktempdir_umask
      fi

      # If we're not in dry-run mode, bomb out on failure
      test -d "$my_tmpdir" || \
        func_fatal_error "cannot create temporary directory \`$my_tmpdir'"
    fi

    $ECHO "$my_tmpdir"
}


# func_quote_for_eval arg
# Aesthetically quote ARG to be evaled later.
# This function returns two values: FUNC_QUOTE_FOR_EVAL_RESULT # is double-quoted, suitable for a subsequent eval, whereas # FUNC_QUOTE_FOR_EVAL_UNQUOTED_RESULT has merely all characters # which are still active within double quotes backslashified. func_quote_for_eval () { case $1 in *[\\\`\"\$]*) func_quote_for_eval_unquoted_result=`$ECHO "$1" | $SED "$sed_quote_subst"` ;; *) func_quote_for_eval_unquoted_result="$1" ;; esac case $func_quote_for_eval_unquoted_result in # Double-quote args containing shell metacharacters to delay # word splitting, command substitution and and variable # expansion for a subsequent eval. # Many Bourne shells cannot handle close brackets correctly # in scan sets, so we specify it separately. *[\[\~\#\^\&\*\(\)\{\}\|\;\<\>\?\'\ \ ]*|*]*|"") func_quote_for_eval_result="\"$func_quote_for_eval_unquoted_result\"" ;; *) func_quote_for_eval_result="$func_quote_for_eval_unquoted_result" esac } # func_quote_for_expand arg # Aesthetically quote ARG to be evaled later; same as above, # but do not quote variable references. func_quote_for_expand () { case $1 in *[\\\`\"]*) my_arg=`$ECHO "$1" | $SED \ -e "$double_quote_subst" -e "$sed_double_backslash"` ;; *) my_arg="$1" ;; esac case $my_arg in # Double-quote args containing shell metacharacters to delay # word splitting and command substitution for a subsequent eval. # Many Bourne shells cannot handle close brackets correctly # in scan sets, so we specify it separately. *[\[\~\#\^\&\*\(\)\{\}\|\;\<\>\?\'\ \ ]*|*]*|"") my_arg="\"$my_arg\"" ;; esac func_quote_for_expand_result="$my_arg" } # func_show_eval cmd [fail_exp] # Unless opt_silent is true, then output CMD. Then, if opt_dryrun is # not true, evaluate CMD. If the evaluation of CMD fails, and FAIL_EXP # is given, then evaluate it. 
func_show_eval () { my_cmd="$1" my_fail_exp="${2-:}" ${opt_silent-false} || { func_quote_for_expand "$my_cmd" eval "func_echo $func_quote_for_expand_result" } if ${opt_dry_run-false}; then :; else eval "$my_cmd" my_status=$? if test "$my_status" -eq 0; then :; else eval "(exit $my_status); $my_fail_exp" fi fi } # func_show_eval_locale cmd [fail_exp] # Unless opt_silent is true, then output CMD. Then, if opt_dryrun is # not true, evaluate CMD. If the evaluation of CMD fails, and FAIL_EXP # is given, then evaluate it. Use the saved locale for evaluation. func_show_eval_locale () { my_cmd="$1" my_fail_exp="${2-:}" ${opt_silent-false} || { func_quote_for_expand "$my_cmd" eval "func_echo $func_quote_for_expand_result" } if ${opt_dry_run-false}; then :; else eval "$lt_user_locale $my_cmd" my_status=$? eval "$lt_safe_locale" if test "$my_status" -eq 0; then :; else eval "(exit $my_status); $my_fail_exp" fi fi } # func_tr_sh # Turn $1 into a string suitable for a shell variable name. # Result is stored in $func_tr_sh_result. All characters # not in the set a-zA-Z0-9_ are replaced with '_'. Further, # if $1 begins with a digit, a '_' is prepended as well. func_tr_sh () { case $1 in [0-9]* | *[!a-zA-Z0-9_]*) func_tr_sh_result=`$ECHO "$1" | $SED 's/^\([0-9]\)/_\1/; s/[^a-zA-Z0-9_]/_/g'` ;; * ) func_tr_sh_result=$1 ;; esac } # func_version # Echo version message to standard output and exit. func_version () { $opt_debug $SED -n '/(C)/!b go :more /\./!{ N s/\n# / / b more } :go /^# '$PROGRAM' (GNU /,/# warranty; / { s/^# // s/^# *$// s/\((C)\)[ 0-9,-]*\( [1-9][0-9]*\)/\1\2/ p }' < "$progpath" exit $? } # func_usage # Echo short help message to standard output and exit. func_usage () { $opt_debug $SED -n '/^# Usage:/,/^# *.*--help/ { s/^# // s/^# *$// s/\$progname/'$progname'/ p }' < "$progpath" echo $ECHO "run \`$progname --help | more' for full usage" exit $? } # func_help [NOEXIT] # Echo long help message to standard output and exit, # unless 'noexit' is passed as argument. 
func_help () { $opt_debug $SED -n '/^# Usage:/,/# Report bugs to/ { :print s/^# // s/^# *$// s*\$progname*'$progname'* s*\$host*'"$host"'* s*\$SHELL*'"$SHELL"'* s*\$LTCC*'"$LTCC"'* s*\$LTCFLAGS*'"$LTCFLAGS"'* s*\$LD*'"$LD"'* s/\$with_gnu_ld/'"$with_gnu_ld"'/ s/\$automake_version/'"`(${AUTOMAKE-automake} --version) 2>/dev/null |$SED 1q`"'/ s/\$autoconf_version/'"`(${AUTOCONF-autoconf} --version) 2>/dev/null |$SED 1q`"'/ p d } /^# .* home page:/b print /^# General help using/b print ' < "$progpath" ret=$? if test -z "$1"; then exit $ret fi } # func_missing_arg argname # Echo program name prefixed message to standard error and set global # exit_cmd. func_missing_arg () { $opt_debug func_error "missing argument for $1." exit_cmd=exit } # func_split_short_opt shortopt # Set func_split_short_opt_name and func_split_short_opt_arg shell # variables after splitting SHORTOPT after the 2nd character. func_split_short_opt () { my_sed_short_opt='1s/^\(..\).*$/\1/;q' my_sed_short_rest='1s/^..\(.*\)$/\1/;q' func_split_short_opt_name=`$ECHO "$1" | $SED "$my_sed_short_opt"` func_split_short_opt_arg=`$ECHO "$1" | $SED "$my_sed_short_rest"` } # func_split_short_opt may be replaced by extended shell implementation # func_split_long_opt longopt # Set func_split_long_opt_name and func_split_long_opt_arg shell # variables after splitting LONGOPT at the `=' sign. func_split_long_opt () { my_sed_long_opt='1s/^\(--[^=]*\)=.*/\1/;q' my_sed_long_arg='1s/^--[^=]*=//' func_split_long_opt_name=`$ECHO "$1" | $SED "$my_sed_long_opt"` func_split_long_opt_arg=`$ECHO "$1" | $SED "$my_sed_long_arg"` } # func_split_long_opt may be replaced by extended shell implementation exit_cmd=: magic="%%%MAGIC variable%%%" magic_exe="%%%MAGIC EXE variable%%%" # Global variables. nonopt= preserve_args= lo2o="s/\\.lo\$/.${objext}/" o2lo="s/\\.${objext}\$/.lo/" extracted_archives= extracted_serial=0 # If this variable is set in any of the actions, the command in it # will be execed at the end. 
This prevents here-documents from being # left over by shells. exec_cmd= # func_append var value # Append VALUE to the end of shell variable VAR. func_append () { eval "${1}=\$${1}\${2}" } # func_append may be replaced by extended shell implementation # func_append_quoted var value # Quote VALUE and append to the end of shell variable VAR, separated # by a space. func_append_quoted () { func_quote_for_eval "${2}" eval "${1}=\$${1}\\ \$func_quote_for_eval_result" } # func_append_quoted may be replaced by extended shell implementation # func_arith arithmetic-term... func_arith () { func_arith_result=`expr "${@}"` } # func_arith may be replaced by extended shell implementation # func_len string # STRING may not start with a hyphen. func_len () { func_len_result=`expr "${1}" : ".*" 2>/dev/null || echo $max_cmd_len` } # func_len may be replaced by extended shell implementation # func_lo2o object func_lo2o () { func_lo2o_result=`$ECHO "${1}" | $SED "$lo2o"` } # func_lo2o may be replaced by extended shell implementation # func_xform libobj-or-source func_xform () { func_xform_result=`$ECHO "${1}" | $SED 's/\.[^.]*$/.lo/'` } # func_xform may be replaced by extended shell implementation # func_fatal_configuration arg... # Echo program name prefixed message to standard error, followed by # a configuration failure hint, and exit. func_fatal_configuration () { func_error ${1+"$@"} func_error "See the $PACKAGE documentation for more information." func_fatal_error "Fatal configuration error." } # func_config # Display the configuration for all the tags in this script. func_config () { re_begincf='^# ### BEGIN LIBTOOL' re_endcf='^# ### END LIBTOOL' # Default configuration. $SED "1,/$re_begincf CONFIG/d;/$re_endcf CONFIG/,\$d" < "$progpath" # Now print the configurations for the tags. for tagname in $taglist; do $SED -n "/$re_begincf TAG CONFIG: $tagname\$/,/$re_endcf TAG CONFIG: $tagname\$/p" < "$progpath" done exit $? 
} # func_features # Display the features supported by this script. func_features () { echo "host: $host" if test "$build_libtool_libs" = yes; then echo "enable shared libraries" else echo "disable shared libraries" fi if test "$build_old_libs" = yes; then echo "enable static libraries" else echo "disable static libraries" fi exit $? } # func_enable_tag tagname # Verify that TAGNAME is valid, and either flag an error and exit, or # enable the TAGNAME tag. We also add TAGNAME to the global $taglist # variable here. func_enable_tag () { # Global variable: tagname="$1" re_begincf="^# ### BEGIN LIBTOOL TAG CONFIG: $tagname\$" re_endcf="^# ### END LIBTOOL TAG CONFIG: $tagname\$" sed_extractcf="/$re_begincf/,/$re_endcf/p" # Validate tagname. case $tagname in *[!-_A-Za-z0-9,/]*) func_fatal_error "invalid tag name: $tagname" ;; esac # Don't test for the "default" C tag, as we know it's # there but not specially marked. case $tagname in CC) ;; *) if $GREP "$re_begincf" "$progpath" >/dev/null 2>&1; then taglist="$taglist $tagname" # Evaluate the configuration. Be careful to quote the path # and the sed script, to avoid splitting on whitespace, but # also don't use non-portable quotes within backquotes within # quotes we have to do it in 2 steps: extractedcf=`$SED -n -e "$sed_extractcf" < "$progpath"` eval "$extractedcf" else func_error "ignoring unknown tag $tagname" fi ;; esac } # func_check_version_match # Ensure that we are using m4 macros, and libtool script from the same # release of libtool. func_check_version_match () { if test "$package_revision" != "$macro_revision"; then if test "$VERSION" != "$macro_version"; then if test -z "$macro_version"; then cat >&2 <<_LT_EOF $progname: Version mismatch error. This is $PACKAGE $VERSION, but the $progname: definition of this LT_INIT comes from an older release. $progname: You should recreate aclocal.m4 with macros from $PACKAGE $VERSION $progname: and run autoconf again. 
_LT_EOF else cat >&2 <<_LT_EOF $progname: Version mismatch error. This is $PACKAGE $VERSION, but the $progname: definition of this LT_INIT comes from $PACKAGE $macro_version. $progname: You should recreate aclocal.m4 with macros from $PACKAGE $VERSION $progname: and run autoconf again. _LT_EOF fi else cat >&2 <<_LT_EOF $progname: Version mismatch error. This is $PACKAGE $VERSION, revision $package_revision, $progname: but the definition of this LT_INIT comes from revision $macro_revision. $progname: You should recreate aclocal.m4 with macros from revision $package_revision $progname: of $PACKAGE $VERSION and run autoconf again. _LT_EOF fi exit $EXIT_MISMATCH fi } # Shorthand for --mode=foo, only valid as the first argument case $1 in clean|clea|cle|cl) shift; set dummy --mode clean ${1+"$@"}; shift ;; compile|compil|compi|comp|com|co|c) shift; set dummy --mode compile ${1+"$@"}; shift ;; execute|execut|execu|exec|exe|ex|e) shift; set dummy --mode execute ${1+"$@"}; shift ;; finish|finis|fini|fin|fi|f) shift; set dummy --mode finish ${1+"$@"}; shift ;; install|instal|insta|inst|ins|in|i) shift; set dummy --mode install ${1+"$@"}; shift ;; link|lin|li|l) shift; set dummy --mode link ${1+"$@"}; shift ;; uninstall|uninstal|uninsta|uninst|unins|unin|uni|un|u) shift; set dummy --mode uninstall ${1+"$@"}; shift ;; esac # Option defaults: opt_debug=: opt_dry_run=false opt_config=false opt_preserve_dup_deps=false opt_features=false opt_finish=false opt_help=false opt_help_all=false opt_silent=: opt_warning=: opt_verbose=: opt_silent=false opt_verbose=false # Parse options once, thoroughly. This comes as soon as possible in the # script to make things like `--version' happen as quickly as we can. 
{ # this just eases exit handling while test $# -gt 0; do opt="$1" shift case $opt in --debug|-x) opt_debug='set -x' func_echo "enabling shell trace mode" $opt_debug ;; --dry-run|--dryrun|-n) opt_dry_run=: ;; --config) opt_config=: func_config ;; --dlopen|-dlopen) optarg="$1" opt_dlopen="${opt_dlopen+$opt_dlopen }$optarg" shift ;; --preserve-dup-deps) opt_preserve_dup_deps=: ;; --features) opt_features=: func_features ;; --finish) opt_finish=: set dummy --mode finish ${1+"$@"}; shift ;; --help) opt_help=: ;; --help-all) opt_help_all=: opt_help=': help-all' ;; --mode) test $# = 0 && func_missing_arg $opt && break optarg="$1" opt_mode="$optarg" case $optarg in # Valid mode arguments: clean|compile|execute|finish|install|link|relink|uninstall) ;; # Catch anything else as an error *) func_error "invalid argument for $opt" exit_cmd=exit break ;; esac shift ;; --no-silent|--no-quiet) opt_silent=false func_append preserve_args " $opt" ;; --no-warning|--no-warn) opt_warning=false func_append preserve_args " $opt" ;; --no-verbose) opt_verbose=false func_append preserve_args " $opt" ;; --silent|--quiet) opt_silent=: func_append preserve_args " $opt" opt_verbose=false ;; --verbose|-v) opt_verbose=: func_append preserve_args " $opt" opt_silent=false ;; --tag) test $# = 0 && func_missing_arg $opt && break optarg="$1" opt_tag="$optarg" func_append preserve_args " $opt $optarg" func_enable_tag "$optarg" shift ;; -\?|-h) func_usage ;; --help) func_help ;; --version) func_version ;; # Separate optargs to long options: --*=*) func_split_long_opt "$opt" set dummy "$func_split_long_opt_name" "$func_split_long_opt_arg" ${1+"$@"} shift ;; # Separate non-argument short options: -\?*|-h*|-n*|-v*) func_split_short_opt "$opt" set dummy "$func_split_short_opt_name" "-$func_split_short_opt_arg" ${1+"$@"} shift ;; --) break ;; -*) func_fatal_help "unrecognized option \`$opt'" ;; *) set dummy "$opt" ${1+"$@"}; shift; break ;; esac done # Validate options: # save first non-option argument if 
test "$#" -gt 0; then nonopt="$opt" shift fi # preserve --debug test "$opt_debug" = : || func_append preserve_args " --debug" case $host in *cygwin* | *mingw* | *pw32* | *cegcc*) # don't eliminate duplications in $postdeps and $predeps opt_duplicate_compiler_generated_deps=: ;; *) opt_duplicate_compiler_generated_deps=$opt_preserve_dup_deps ;; esac $opt_help || { # Sanity checks first: func_check_version_match if test "$build_libtool_libs" != yes && test "$build_old_libs" != yes; then func_fatal_configuration "not configured to build any kind of library" fi # Darwin sucks eval std_shrext=\"$shrext_cmds\" # Only execute mode is allowed to have -dlopen flags. if test -n "$opt_dlopen" && test "$opt_mode" != execute; then func_error "unrecognized option \`-dlopen'" $ECHO "$help" 1>&2 exit $EXIT_FAILURE fi # Change the help message to a mode-specific one. generic_help="$help" help="Try \`$progname --help --mode=$opt_mode' for more information." } # Bail if the options were screwed $exit_cmd $EXIT_FAILURE } ## ----------- ## ## Main. ## ## ----------- ## # func_lalib_p file # True iff FILE is a libtool `.la' library or `.lo' object file. # This function is only a basic sanity check; it will hardly flush out # determined imposters. func_lalib_p () { test -f "$1" && $SED -e 4q "$1" 2>/dev/null \ | $GREP "^# Generated by .*$PACKAGE" > /dev/null 2>&1 } # func_lalib_unsafe_p file # True iff FILE is a libtool `.la' library or `.lo' object file. # This function implements the same check as func_lalib_p without # resorting to external programs. To this end, it redirects stdin and # closes it afterwards, without saving the original file descriptor. # As a safety measure, use it only where a negative result would be # fatal anyway. Works if `file' does not exist. 
func_lalib_unsafe_p () { lalib_p=no if test -f "$1" && test -r "$1" && exec 5<&0 <"$1"; then for lalib_p_l in 1 2 3 4 do read lalib_p_line case "$lalib_p_line" in \#\ Generated\ by\ *$PACKAGE* ) lalib_p=yes; break;; esac done exec 0<&5 5<&- fi test "$lalib_p" = yes } # func_ltwrapper_script_p file # True iff FILE is a libtool wrapper script # This function is only a basic sanity check; it will hardly flush out # determined imposters. func_ltwrapper_script_p () { func_lalib_p "$1" } # func_ltwrapper_executable_p file # True iff FILE is a libtool wrapper executable # This function is only a basic sanity check; it will hardly flush out # determined imposters. func_ltwrapper_executable_p () { func_ltwrapper_exec_suffix= case $1 in *.exe) ;; *) func_ltwrapper_exec_suffix=.exe ;; esac $GREP "$magic_exe" "$1$func_ltwrapper_exec_suffix" >/dev/null 2>&1 } # func_ltwrapper_scriptname file # Assumes file is an ltwrapper_executable # uses $file to determine the appropriate filename for a # temporary ltwrapper_script. func_ltwrapper_scriptname () { func_dirname_and_basename "$1" "" "." func_stripname '' '.exe' "$func_basename_result" func_ltwrapper_scriptname_result="$func_dirname_result/$objdir/${func_stripname_result}_ltshwrapper" } # func_ltwrapper_p file # True iff FILE is a libtool wrapper script or wrapper executable # This function is only a basic sanity check; it will hardly flush out # determined imposters. func_ltwrapper_p () { func_ltwrapper_script_p "$1" || func_ltwrapper_executable_p "$1" } # func_execute_cmds commands fail_cmd # Execute tilde-delimited COMMANDS. # If FAIL_CMD is given, eval that upon failure. # FAIL_CMD may read-access the current command in variable CMD! func_execute_cmds () { $opt_debug save_ifs=$IFS; IFS='~' for cmd in $1; do IFS=$save_ifs eval cmd=\"$cmd\" func_show_eval "$cmd" "${2-:}" done IFS=$save_ifs } # func_source file # Source FILE, adding directory component if necessary. 
# Note that it is not necessary on cygwin/mingw to append a dot to
# FILE even if both FILE and FILE.exe exist: automatic-append-.exe
# behavior happens only for exec(3), not for open(2)!  Also, sourcing
# `FILE.' does not work on cygwin managed mounts.
func_source ()
{
    $opt_debug
    case $1 in
    */* | *\\*) . "$1" ;;
    *)          . "./$1" ;;
    esac
}


# func_resolve_sysroot PATH
# Replace a leading = in PATH with a sysroot.  Store the result into
# func_resolve_sysroot_result
func_resolve_sysroot ()
{
  func_resolve_sysroot_result=$1
  case $func_resolve_sysroot_result in
  =*)
    func_stripname '=' '' "$func_resolve_sysroot_result"
    func_resolve_sysroot_result=$lt_sysroot$func_stripname_result
    ;;
  esac
}

# func_replace_sysroot PATH
# If PATH begins with the sysroot, replace it with = and
# store the result into func_replace_sysroot_result.
func_replace_sysroot ()
{
  case "$lt_sysroot:$1" in
  ?*:"$lt_sysroot"*)
    func_stripname "$lt_sysroot" '' "$1"
    func_replace_sysroot_result="=$func_stripname_result"
    ;;
  *)
    # Including no sysroot.
    func_replace_sysroot_result=$1
    ;;
  esac
}

# func_infer_tag arg
# Infer tagged configuration to use if any are available and
# if one wasn't chosen via the "--tag" command line option.
# Only attempt this if the compiler in the base compile
# command doesn't match the default compiler.
# arg is usually of the form 'gcc ...'
func_infer_tag ()
{
    $opt_debug
    if test -n "$available_tags" && test -z "$tagname"; then
      CC_quoted=
      for arg in $CC; do
        func_append_quoted CC_quoted "$arg"
      done
      CC_expanded=`func_echo_all $CC`
      CC_quoted_expanded=`func_echo_all $CC_quoted`
      case $@ in
      # Blanks in the command may have been stripped by the calling shell,
      # but not from the CC environment variable when configure was run.
      " $CC "* | "$CC "* | " $CC_expanded "* | "$CC_expanded "* | \
      " $CC_quoted"* | "$CC_quoted "* | " $CC_quoted_expanded "* | "$CC_quoted_expanded "*) ;;
      # Blanks at the start of $base_compile will cause this to fail
      # if we don't check for them as well.
      *)
        for z in $available_tags; do
          if $GREP "^# ### BEGIN LIBTOOL TAG CONFIG: $z$" < "$progpath" > /dev/null; then
            # Evaluate the configuration.
            eval "`${SED} -n -e '/^# ### BEGIN LIBTOOL TAG CONFIG: '$z'$/,/^# ### END LIBTOOL TAG CONFIG: '$z'$/p' < $progpath`"
            CC_quoted=
            for arg in $CC; do
              # Double-quote args containing other shell metacharacters.
              func_append_quoted CC_quoted "$arg"
            done
            CC_expanded=`func_echo_all $CC`
            CC_quoted_expanded=`func_echo_all $CC_quoted`
            case "$@ " in
            " $CC "* | "$CC "* | " $CC_expanded "* | "$CC_expanded "* | \
            " $CC_quoted"* | "$CC_quoted "* | " $CC_quoted_expanded "* | "$CC_quoted_expanded "*)
              # The compiler in the base compile command matches
              # the one in the tagged configuration.
              # Assume this is the tagged configuration we want.
              tagname=$z
              break
              ;;
            esac
          fi
        done
        # If $tagname still isn't set, then no tagged configuration
        # was found and let the user know that the "--tag" command
        # line option must be used.
        if test -z "$tagname"; then
          func_echo "unable to infer tagged configuration"
          func_fatal_error "specify a tag with \`--tag'"
#       else
#         func_verbose "using $tagname tagged configuration"
        fi
        ;;
      esac
    fi
}


# func_write_libtool_object output_name pic_name nonpic_name
# Create a libtool object file (analogous to a ".la" file),
# but don't create it if we're doing a dry run.
func_write_libtool_object ()
{
    write_libobj=${1}
    if test "$build_libtool_libs" = yes; then
      write_lobj=\'${2}\'
    else
      write_lobj=none
    fi

    if test "$build_old_libs" = yes; then
      write_oldobj=\'${3}\'
    else
      write_oldobj=none
    fi

    $opt_dry_run || {
      cat >${write_libobj}T <<EOF
# $write_libobj - a libtool object file
# Generated by $PROGRAM (GNU $PACKAGE$TIMESTAMP) $VERSION
#
# Please DO NOT delete this file!
# It is necessary for linking the package.

# Name of the PIC object.
pic_object=$write_lobj

# Name of the non-PIC object
non_pic_object=$write_oldobj

EOF
      $MV "${write_libobj}T" "${write_libobj}"
    }
}


##################################################
# FILE NAME AND PATH CONVERSION HELPER FUNCTIONS #
##################################################

# func_convert_core_file_wine_to_w32 ARG
# Helper function used by file name conversion functions when $build is *nix,
# and $host is mingw, cygwin, or some other w32 environment.  Relies on a
# correctly configured wine environment available, with the winepath program
# in $build's $PATH.
#
# ARG is the $build file name to be converted to w32 format.
# Result is available in $func_convert_core_file_wine_to_w32_result, and will
# be empty on error (or when ARG is empty).
func_convert_core_file_wine_to_w32 ()
{
  $opt_debug
  func_convert_core_file_wine_to_w32_tmp=`winepath -w "$1" 2>/dev/null`
  if test "$?" -eq 0 && test -n "${func_convert_core_file_wine_to_w32_tmp}"; then
    func_convert_core_file_wine_to_w32_result=`$ECHO "$func_convert_core_file_wine_to_w32_tmp" |
      $SED -e "$lt_sed_naive_backslashify"`
  else
    func_convert_core_file_wine_to_w32_result=
  fi
}
# end: func_convert_core_file_wine_to_w32


# func_convert_core_path_wine_to_w32 ARG
# Helper function used by path conversion functions when $build is *nix, and
# $host is mingw, cygwin, or some other w32 environment.  Relies on a correctly
# configured wine environment available, with the winepath program in $build's
# $PATH.  Assumes ARG has no leading or trailing path separator characters.
#
# ARG is path to be converted from $build format to win32.
# Result is available in $func_convert_core_path_wine_to_w32_result.
# Unconvertible file (directory) names in ARG are skipped; if no directory names
# are convertible, then the result may be empty.
func_convert_core_path_wine_to_w32 ()
{
  $opt_debug
  # unfortunately, winepath doesn't convert paths, only file names
  func_convert_core_path_wine_to_w32_result=""
  if test -n "$1"; then
    oldIFS=$IFS
    IFS=:
    for func_convert_core_path_wine_to_w32_f in $1; do
      IFS=$oldIFS
      func_convert_core_file_wine_to_w32 "$func_convert_core_path_wine_to_w32_f"
      if test -n "$func_convert_core_file_wine_to_w32_result" ; then
        if test -z "$func_convert_core_path_wine_to_w32_result"; then
          func_convert_core_path_wine_to_w32_result="$func_convert_core_file_wine_to_w32_result"
        else
          func_append func_convert_core_path_wine_to_w32_result ";$func_convert_core_file_wine_to_w32_result"
        fi
      fi
    done
    IFS=$oldIFS
  fi
}
# end: func_convert_core_path_wine_to_w32


# func_cygpath ARGS...
# Wrapper around calling the cygpath program via LT_CYGPATH.  This is used
# when (1) $build is *nix and Cygwin is hosted via a wine environment; or (2)
# $build is MSYS and $host is Cygwin, or (3) $build is Cygwin.
# In case (1) or
# (2), returns the Cygwin file name or path in func_cygpath_result (input
# file name or path is assumed to be in w32 format, as previously converted
# from $build's *nix or MSYS format).  In case (3), returns the w32 file name
# or path in func_cygpath_result (input file name or path is assumed to be in
# Cygwin format).  Returns an empty string on error.
#
# ARGS are passed to cygpath, with the last one being the file name or path to
# be converted.
#
# Specify the absolute *nix (or w32) name to cygpath in the LT_CYGPATH
# environment variable; do not put it in $PATH.
func_cygpath ()
{
  $opt_debug
  if test -n "$LT_CYGPATH" && test -f "$LT_CYGPATH"; then
    func_cygpath_result=`$LT_CYGPATH "$@" 2>/dev/null`
    if test "$?" -ne 0; then
      # on failure, ensure result is empty
      func_cygpath_result=
    fi
  else
    func_cygpath_result=
    func_error "LT_CYGPATH is empty or specifies non-existent file: \`$LT_CYGPATH'"
  fi
}
#end: func_cygpath


# func_convert_core_msys_to_w32 ARG
# Convert file name or path ARG from MSYS format to w32 format.  Return
# result in func_convert_core_msys_to_w32_result.
func_convert_core_msys_to_w32 ()
{
  $opt_debug
  # awkward: cmd appends spaces to result
  func_convert_core_msys_to_w32_result=`( cmd //c echo "$1" ) 2>/dev/null |
    $SED -e 's/[ ]*$//' -e "$lt_sed_naive_backslashify"`
}
#end: func_convert_core_msys_to_w32


# func_convert_file_check ARG1 ARG2
# Verify that ARG1 (a file name in $build format) was converted to $host
# format in ARG2.  Otherwise, emit an error message, but continue (resetting
# func_to_host_file_result to ARG1).
func_convert_file_check ()
{
  $opt_debug
  if test -z "$2" && test -n "$1" ; then
    func_error "Could not determine host file name corresponding to"
    func_error "  \`$1'"
    func_error "Continuing, but uninstalled executables may not work."
    # Fallback:
    func_to_host_file_result="$1"
  fi
}
# end func_convert_file_check


# func_convert_path_check FROM_PATHSEP TO_PATHSEP FROM_PATH TO_PATH
# Verify that FROM_PATH (a path in $build format) was converted to $host
# format in TO_PATH.  Otherwise, emit an error message, but continue, resetting
# func_to_host_file_result to a simplistic fallback value (see below).
func_convert_path_check ()
{
  $opt_debug
  if test -z "$4" && test -n "$3"; then
    func_error "Could not determine the host path corresponding to"
    func_error "  \`$3'"
    func_error "Continuing, but uninstalled executables may not work."
    # Fallback.  This is a deliberately simplistic "conversion" and
    # should not be "improved".  See libtool.info.
    if test "x$1" != "x$2"; then
      lt_replace_pathsep_chars="s|$1|$2|g"
      func_to_host_path_result=`echo "$3" |
        $SED -e "$lt_replace_pathsep_chars"`
    else
      func_to_host_path_result="$3"
    fi
  fi
}
# end func_convert_path_check


# func_convert_path_front_back_pathsep FRONTPAT BACKPAT REPL ORIG
# Modifies func_to_host_path_result by prepending REPL if ORIG matches FRONTPAT
# and appending REPL if ORIG matches BACKPAT.
func_convert_path_front_back_pathsep ()
{
  $opt_debug
  case $4 in
  $1 ) func_to_host_path_result="$3$func_to_host_path_result"
    ;;
  esac
  case $4 in
  $2 ) func_append func_to_host_path_result "$3"
    ;;
  esac
}
# end func_convert_path_front_back_pathsep


##################################################
# $build to $host FILE NAME CONVERSION FUNCTIONS #
##################################################
# invoked via `$to_host_file_cmd ARG'
#
# In each case, ARG is the path to be converted from $build to $host format.
# Result will be available in $func_to_host_file_result.


# func_to_host_file ARG
# Converts the file name ARG from $build format to $host format.  Return result
# in func_to_host_file_result.
func_to_host_file ()
{
  $opt_debug
  $to_host_file_cmd "$1"
}
# end func_to_host_file


# func_to_tool_file ARG LAZY
# converts the file name ARG from $build format to toolchain format.
# Return
# result in func_to_tool_file_result.  If the conversion in use is listed
# in (the comma separated) LAZY, no conversion takes place.
func_to_tool_file ()
{
  $opt_debug
  case ,$2, in
    *,"$to_tool_file_cmd",*)
      func_to_tool_file_result=$1
      ;;
    *)
      $to_tool_file_cmd "$1"
      func_to_tool_file_result=$func_to_host_file_result
      ;;
  esac
}
# end func_to_tool_file


# func_convert_file_noop ARG
# Copy ARG to func_to_host_file_result.
func_convert_file_noop ()
{
  func_to_host_file_result="$1"
}
# end func_convert_file_noop


# func_convert_file_msys_to_w32 ARG
# Convert file name ARG from (mingw) MSYS to (mingw) w32 format; automatic
# conversion to w32 is not available inside the cwrapper.  Returns result in
# func_to_host_file_result.
func_convert_file_msys_to_w32 ()
{
  $opt_debug
  func_to_host_file_result="$1"
  if test -n "$1"; then
    func_convert_core_msys_to_w32 "$1"
    func_to_host_file_result="$func_convert_core_msys_to_w32_result"
  fi
  func_convert_file_check "$1" "$func_to_host_file_result"
}
# end func_convert_file_msys_to_w32


# func_convert_file_cygwin_to_w32 ARG
# Convert file name ARG from Cygwin to w32 format.  Returns result in
# func_to_host_file_result.
func_convert_file_cygwin_to_w32 ()
{
  $opt_debug
  func_to_host_file_result="$1"
  if test -n "$1"; then
    # because $build is cygwin, we call "the" cygpath in $PATH; no need to use
    # LT_CYGPATH in this case.
    func_to_host_file_result=`cygpath -m "$1"`
  fi
  func_convert_file_check "$1" "$func_to_host_file_result"
}
# end func_convert_file_cygwin_to_w32


# func_convert_file_nix_to_w32 ARG
# Convert file name ARG from *nix to w32 format.  Requires a wine environment
# and a working winepath.  Returns result in func_to_host_file_result.
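As a standalone aside (not part of ltmain.sh): the `$lt_sed_naive_backslashify` program used by the conversion helpers above is defined elsewhere in the script, but the general idea of the "naive" file-name rewrite can be sketched with a plain sed pipeline. The `/c/Users/...` input path here is hypothetical:

```shell
# Rough standalone sketch of a slash-to-backslash rewrite in the spirit of
# the naive backslashify step; the real sed program lives in
# $lt_sed_naive_backslashify and is not reproduced here.
p=/c/Users/build/lib
# Turn the leading /c/ drive prefix into c:/ and then flip the remaining
# forward slashes to backslashes.
printf '%s\n' "$p" | sed -e 's|^/\(.\)/|\1:/|' -e 's|/|\\|g'
```

Note this is a purely textual rewrite: unlike `winepath` or `cygpath`, it does not consult the filesystem, which is exactly why the surrounding helpers treat it as a fallback.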
func_convert_file_nix_to_w32 ()
{
  $opt_debug
  func_to_host_file_result="$1"
  if test -n "$1"; then
    func_convert_core_file_wine_to_w32 "$1"
    func_to_host_file_result="$func_convert_core_file_wine_to_w32_result"
  fi
  func_convert_file_check "$1" "$func_to_host_file_result"
}
# end func_convert_file_nix_to_w32


# func_convert_file_msys_to_cygwin ARG
# Convert file name ARG from MSYS to Cygwin format.  Requires LT_CYGPATH set.
# Returns result in func_to_host_file_result.
func_convert_file_msys_to_cygwin ()
{
  $opt_debug
  func_to_host_file_result="$1"
  if test -n "$1"; then
    func_convert_core_msys_to_w32 "$1"
    func_cygpath -u "$func_convert_core_msys_to_w32_result"
    func_to_host_file_result="$func_cygpath_result"
  fi
  func_convert_file_check "$1" "$func_to_host_file_result"
}
# end func_convert_file_msys_to_cygwin


# func_convert_file_nix_to_cygwin ARG
# Convert file name ARG from *nix to Cygwin format.  Requires Cygwin installed
# in a wine environment, working winepath, and LT_CYGPATH set.  Returns result
# in func_to_host_file_result.
func_convert_file_nix_to_cygwin ()
{
  $opt_debug
  func_to_host_file_result="$1"
  if test -n "$1"; then
    # convert from *nix to w32, then use cygpath to convert from w32 to cygwin.
    func_convert_core_file_wine_to_w32 "$1"
    func_cygpath -u "$func_convert_core_file_wine_to_w32_result"
    func_to_host_file_result="$func_cygpath_result"
  fi
  func_convert_file_check "$1" "$func_to_host_file_result"
}
# end func_convert_file_nix_to_cygwin


#############################################
# $build to $host PATH CONVERSION FUNCTIONS #
#############################################
# invoked via `$to_host_path_cmd ARG'
#
# In each case, ARG is the path to be converted from $build to $host format.
# The result will be available in $func_to_host_path_result.
#
# Path separators are also converted from $build format to $host format.  If
# ARG begins or ends with a path separator character, it is preserved (but
# converted to $host format) on output.
#
# All path conversion functions are named using the following convention:
#   file name conversion function    : func_convert_file_X_to_Y ()
#   path conversion function         : func_convert_path_X_to_Y ()
# where, for any given $build/$host combination the 'X_to_Y' value is the
# same.  If conversion functions are added for new $build/$host combinations,
# the two new functions must follow this pattern, or func_init_to_host_path_cmd
# will break.


# func_init_to_host_path_cmd
# Ensures that function "pointer" variable $to_host_path_cmd is set to the
# appropriate value, based on the value of $to_host_file_cmd.
to_host_path_cmd=
func_init_to_host_path_cmd ()
{
  $opt_debug
  if test -z "$to_host_path_cmd"; then
    func_stripname 'func_convert_file_' '' "$to_host_file_cmd"
    to_host_path_cmd="func_convert_path_${func_stripname_result}"
  fi
}


# func_to_host_path ARG
# Converts the path ARG from $build format to $host format.  Return result
# in func_to_host_path_result.
func_to_host_path ()
{
  $opt_debug
  func_init_to_host_path_cmd
  $to_host_path_cmd "$1"
}
# end func_to_host_path


# func_convert_path_noop ARG
# Copy ARG to func_to_host_path_result.
func_convert_path_noop ()
{
  func_to_host_path_result="$1"
}
# end func_convert_path_noop


# func_convert_path_msys_to_w32 ARG
# Convert path ARG from (mingw) MSYS to (mingw) w32 format; automatic
# conversion to w32 is not available inside the cwrapper.  Returns result in
# func_to_host_path_result.
func_convert_path_msys_to_w32 ()
{
  $opt_debug
  func_to_host_path_result="$1"
  if test -n "$1"; then
    # Remove leading and trailing path separator characters from ARG.  MSYS
    # behavior is inconsistent here; cygpath turns them into '.;' and ';.';
    # and winepath ignores them completely.
    func_stripname : : "$1"
    func_to_host_path_tmp1=$func_stripname_result
    func_convert_core_msys_to_w32 "$func_to_host_path_tmp1"
    func_to_host_path_result="$func_convert_core_msys_to_w32_result"
    func_convert_path_check : ";" \
      "$func_to_host_path_tmp1" "$func_to_host_path_result"
    func_convert_path_front_back_pathsep ":*" "*:" ";" "$1"
  fi
}
# end func_convert_path_msys_to_w32


# func_convert_path_cygwin_to_w32 ARG
# Convert path ARG from Cygwin to w32 format.  Returns result in
# func_to_host_file_result.
func_convert_path_cygwin_to_w32 ()
{
  $opt_debug
  func_to_host_path_result="$1"
  if test -n "$1"; then
    # See func_convert_path_msys_to_w32:
    func_stripname : : "$1"
    func_to_host_path_tmp1=$func_stripname_result
    func_to_host_path_result=`cygpath -m -p "$func_to_host_path_tmp1"`
    func_convert_path_check : ";" \
      "$func_to_host_path_tmp1" "$func_to_host_path_result"
    func_convert_path_front_back_pathsep ":*" "*:" ";" "$1"
  fi
}
# end func_convert_path_cygwin_to_w32


# func_convert_path_nix_to_w32 ARG
# Convert path ARG from *nix to w32 format.  Requires a wine environment and
# a working winepath.  Returns result in func_to_host_file_result.
func_convert_path_nix_to_w32 ()
{
  $opt_debug
  func_to_host_path_result="$1"
  if test -n "$1"; then
    # See func_convert_path_msys_to_w32:
    func_stripname : : "$1"
    func_to_host_path_tmp1=$func_stripname_result
    func_convert_core_path_wine_to_w32 "$func_to_host_path_tmp1"
    func_to_host_path_result="$func_convert_core_path_wine_to_w32_result"
    func_convert_path_check : ";" \
      "$func_to_host_path_tmp1" "$func_to_host_path_result"
    func_convert_path_front_back_pathsep ":*" "*:" ";" "$1"
  fi
}
# end func_convert_path_nix_to_w32


# func_convert_path_msys_to_cygwin ARG
# Convert path ARG from MSYS to Cygwin format.  Requires LT_CYGPATH set.
# Returns result in func_to_host_file_result.
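As a standalone aside (not part of ltmain.sh): the path conversion functions above all begin with `func_stripname : : "$1"`, which removes at most one leading and one trailing `:` separator before conversion (the separators are re-attached afterwards by func_convert_path_front_back_pathsep). A minimal sketch of that stripping step using POSIX parameter expansion; the `strip_pathsep` helper name is hypothetical:

```shell
# Standalone sketch of the leading/trailing path-separator stripping that
# func_stripname : : "$1" performs; strip_pathsep is a hypothetical helper.
strip_pathsep ()
{
  sp_tmp=$1
  sp_tmp=${sp_tmp#:}   # drop at most one leading ':'
  sp_tmp=${sp_tmp%:}   # drop at most one trailing ':'
  printf '%s\n' "$sp_tmp"
}

strip_pathsep ':/usr/lib:/usr/local/lib:'
```

Stripping first keeps tools such as cygpath and winepath from misinterpreting empty path components, as the comment above about '.;' and ';.' behavior explains.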
func_convert_path_msys_to_cygwin ()
{
  $opt_debug
  func_to_host_path_result="$1"
  if test -n "$1"; then
    # See func_convert_path_msys_to_w32:
    func_stripname : : "$1"
    func_to_host_path_tmp1=$func_stripname_result
    func_convert_core_msys_to_w32 "$func_to_host_path_tmp1"
    func_cygpath -u -p "$func_convert_core_msys_to_w32_result"
    func_to_host_path_result="$func_cygpath_result"
    func_convert_path_check : : \
      "$func_to_host_path_tmp1" "$func_to_host_path_result"
    func_convert_path_front_back_pathsep ":*" "*:" : "$1"
  fi
}
# end func_convert_path_msys_to_cygwin


# func_convert_path_nix_to_cygwin ARG
# Convert path ARG from *nix to Cygwin format.  Requires Cygwin installed in
# a wine environment, working winepath, and LT_CYGPATH set.  Returns result in
# func_to_host_file_result.
func_convert_path_nix_to_cygwin ()
{
  $opt_debug
  func_to_host_path_result="$1"
  if test -n "$1"; then
    # Remove leading and trailing path separator characters from
    # ARG.  msys behavior is inconsistent here, cygpath turns them
    # into '.;' and ';.', and winepath ignores them completely.
    func_stripname : : "$1"
    func_to_host_path_tmp1=$func_stripname_result
    func_convert_core_path_wine_to_w32 "$func_to_host_path_tmp1"
    func_cygpath -u -p "$func_convert_core_path_wine_to_w32_result"
    func_to_host_path_result="$func_cygpath_result"
    func_convert_path_check : : \
      "$func_to_host_path_tmp1" "$func_to_host_path_result"
    func_convert_path_front_back_pathsep ":*" "*:" : "$1"
  fi
}
# end func_convert_path_nix_to_cygwin


# func_mode_compile arg...
func_mode_compile ()
{
    $opt_debug
    # Get the compilation command and the source file.
    base_compile=
    srcfile="$nonopt"  #  always keep a non-empty value in "srcfile"
    suppress_opt=yes
    suppress_output=
    arg_mode=normal
    libobj=
    later=
    pie_flag=

    for arg
    do
      case $arg_mode in
      arg  )
        # do not "continue".  Instead, add this to base_compile
        lastarg="$arg"
        arg_mode=normal
        ;;

      target )
        libobj="$arg"
        arg_mode=normal
        continue
        ;;

      normal )
        # Accept any command-line options.
        case $arg in
        -o)
          test -n "$libobj" && \
            func_fatal_error "you cannot specify \`-o' more than once"
          arg_mode=target
          continue
          ;;

        -pie | -fpie | -fPIE)
          func_append pie_flag " $arg"
          continue
          ;;

        -shared | -static | -prefer-pic | -prefer-non-pic)
          func_append later " $arg"
          continue
          ;;

        -no-suppress)
          suppress_opt=no
          continue
          ;;

        -Xcompiler)
          arg_mode=arg  #  the next one goes into the "base_compile" arg list
          continue      #  The current "srcfile" will either be retained or
          ;;            #  replaced later.  I would guess that would be a bug.

        -Wc,*)
          func_stripname '-Wc,' '' "$arg"
          args=$func_stripname_result
          lastarg=
          save_ifs="$IFS"; IFS=','
          for arg in $args; do
            IFS="$save_ifs"
            func_append_quoted lastarg "$arg"
          done
          IFS="$save_ifs"
          func_stripname ' ' '' "$lastarg"
          lastarg=$func_stripname_result

          # Add the arguments to base_compile.
          func_append base_compile " $lastarg"
          continue
          ;;

        *)
          # Accept the current argument as the source file.
          # The previous "srcfile" becomes the current argument.
          #
          lastarg="$srcfile"
          srcfile="$arg"
          ;;
        esac  #  case $arg
        ;;
      esac    #  case $arg_mode

      # Aesthetically quote the previous argument.
      func_append_quoted base_compile "$lastarg"
    done # for arg

    case $arg_mode in
    arg)
      func_fatal_error "you must specify an argument for -Xcompile"
      ;;
    target)
      func_fatal_error "you must specify a target with \`-o'"
      ;;
    *)
      # Get the name of the library object.
      test -z "$libobj" && {
        func_basename "$srcfile"
        libobj="$func_basename_result"
      }
      ;;
    esac

    # Recognize several different file suffixes.
    # If the user specifies -o file.o, it is replaced with file.lo
    case $libobj in
    *.[cCFSifmso] | \
    *.ada | *.adb | *.ads | *.asm | \
    *.c++ | *.cc | *.ii | *.class | *.cpp | *.cxx | \
    *.[fF][09]? | *.for | *.java | *.go | *.obj | *.sx | *.cu | *.cup)
      func_xform "$libobj"
      libobj=$func_xform_result
      ;;
    esac

    case $libobj in
    *.lo) func_lo2o "$libobj"; obj=$func_lo2o_result ;;
    *)
      func_fatal_error "cannot determine name of library object from \`$libobj'"
      ;;
    esac

    func_infer_tag $base_compile

    for arg in $later; do
      case $arg in
      -shared)
        test "$build_libtool_libs" != yes && \
          func_fatal_configuration "can not build a shared library"
        build_old_libs=no
        continue
        ;;

      -static)
        build_libtool_libs=no
        build_old_libs=yes
        continue
        ;;

      -prefer-pic)
        pic_mode=yes
        continue
        ;;

      -prefer-non-pic)
        pic_mode=no
        continue
        ;;
      esac
    done

    func_quote_for_eval "$libobj"
    test "X$libobj" != "X$func_quote_for_eval_result" \
      && $ECHO "X$libobj" | $GREP '[]~#^*{};<>?"'"'"' &()|`$[]' \
      && func_warning "libobj name \`$libobj' may not contain shell special characters."
    func_dirname_and_basename "$obj" "/" ""
    objname="$func_basename_result"
    xdir="$func_dirname_result"
    lobj=${xdir}$objdir/$objname

    test -z "$base_compile" && \
      func_fatal_help "you must specify a compilation command"

    # Delete any leftover library objects.
    if test "$build_old_libs" = yes; then
      removelist="$obj $lobj $libobj ${libobj}T"
    else
      removelist="$lobj $libobj ${libobj}T"
    fi

    # On Cygwin there's no "real" PIC flag so we must build both object types
    case $host_os in
    cygwin* | mingw* | pw32* | os2* | cegcc*)
      pic_mode=default
      ;;
    esac
    if test "$pic_mode" = no && test "$deplibs_check_method" != pass_all; then
      # non-PIC code in shared libraries is not supported
      pic_mode=default
    fi

    # Calculate the filename of the output object if compiler does
    # not support -o with -c
    if test "$compiler_c_o" = no; then
      output_obj=`$ECHO "$srcfile" | $SED 's%^.*/%%; s%\.[^.]*$%%'`.${objext}
      lockfile="$output_obj.lock"
    else
      output_obj=
      need_locks=no
      lockfile=
    fi

    # Lock this critical section if it is needed
    # We use this script file to make the link, it avoids creating a new file
    if test "$need_locks" = yes; then
      until $opt_dry_run || ln "$progpath" "$lockfile" 2>/dev/null; do
        func_echo "Waiting for $lockfile to be removed"
        sleep 2
      done
    elif test "$need_locks" = warn; then
      if test -f "$lockfile"; then
        $ECHO "\
*** ERROR, $lockfile exists and contains:
`cat $lockfile 2>/dev/null`

This indicates that another process is trying to use the same
temporary object file, and libtool could not work around it because
your compiler does not support \`-c' and \`-o' together.  If you
repeat this compilation, it may succeed, by chance, but you had better
avoid parallel builds (make -j) in this platform, or get a better
compiler."

        $opt_dry_run || $RM $removelist
        exit $EXIT_FAILURE
      fi
      func_append removelist " $output_obj"
      $ECHO "$srcfile" > "$lockfile"
    fi

    $opt_dry_run || $RM $removelist
    func_append removelist " $lockfile"
    trap '$opt_dry_run || $RM $removelist; exit $EXIT_FAILURE' 1 2 15

    func_to_tool_file "$srcfile" func_convert_file_msys_to_w32
    srcfile=$func_to_tool_file_result
    func_quote_for_eval "$srcfile"
    qsrcfile=$func_quote_for_eval_result

    # Only build a PIC object if we are building libtool libraries.
    if test "$build_libtool_libs" = yes; then
      # Without this assignment, base_compile gets emptied.
      fbsd_hideous_sh_bug=$base_compile

      if test "$pic_mode" != no; then
        command="$base_compile $qsrcfile $pic_flag"
      else
        # Don't build PIC code
        command="$base_compile $qsrcfile"
      fi

      func_mkdir_p "$xdir$objdir"

      if test -z "$output_obj"; then
        # Place PIC objects in $objdir
        func_append command " -o $lobj"
      fi

      func_show_eval_locale "$command" \
          'test -n "$output_obj" && $RM $removelist; exit $EXIT_FAILURE'

      if test "$need_locks" = warn &&
         test "X`cat $lockfile 2>/dev/null`" != "X$srcfile"; then
        $ECHO "\
*** ERROR, $lockfile contains:
`cat $lockfile 2>/dev/null`

but it should contain:
$srcfile

This indicates that another process is trying to use the same
temporary object file, and libtool could not work around it because
your compiler does not support \`-c' and \`-o' together.  If you
repeat this compilation, it may succeed, by chance, but you had better
avoid parallel builds (make -j) in this platform, or get a better
compiler."

        $opt_dry_run || $RM $removelist
        exit $EXIT_FAILURE
      fi

      # Just move the object if needed, then go on to compile the next one
      if test -n "$output_obj" && test "X$output_obj" != "X$lobj"; then
        func_show_eval '$MV "$output_obj" "$lobj"' \
          'error=$?; $opt_dry_run || $RM $removelist; exit $error'
      fi

      # Allow error messages only from the first compilation.
      if test "$suppress_opt" = yes; then
        suppress_output=' >/dev/null 2>&1'
      fi
    fi

    # Only build a position-dependent object if we build old libraries.
    if test "$build_old_libs" = yes; then
      if test "$pic_mode" != yes; then
        # Don't build PIC code
        command="$base_compile $qsrcfile$pie_flag"
      else
        command="$base_compile $qsrcfile $pic_flag"
      fi
      if test "$compiler_c_o" = yes; then
        func_append command " -o $obj"
      fi

      # Suppress compiler output if we already did a PIC compilation.
      func_append command "$suppress_output"
      func_show_eval_locale "$command" \
        '$opt_dry_run || $RM $removelist; exit $EXIT_FAILURE'

      if test "$need_locks" = warn &&
         test "X`cat $lockfile 2>/dev/null`" != "X$srcfile"; then
        $ECHO "\
*** ERROR, $lockfile contains:
`cat $lockfile 2>/dev/null`

but it should contain:
$srcfile

This indicates that another process is trying to use the same
temporary object file, and libtool could not work around it because
your compiler does not support \`-c' and \`-o' together.  If you
repeat this compilation, it may succeed, by chance, but you had better
avoid parallel builds (make -j) in this platform, or get a better
compiler."

        $opt_dry_run || $RM $removelist
        exit $EXIT_FAILURE
      fi

      # Just move the object if needed
      if test -n "$output_obj" && test "X$output_obj" != "X$obj"; then
        func_show_eval '$MV "$output_obj" "$obj"' \
          'error=$?; $opt_dry_run || $RM $removelist; exit $error'
      fi
    fi

    $opt_dry_run || {
      func_write_libtool_object "$libobj" "$objdir/$objname" "$objname"

      # Unlock the critical section if it was locked
      if test "$need_locks" != no; then
        removelist=$lockfile
        $RM "$lockfile"
      fi
    }

    exit $EXIT_SUCCESS
}

$opt_help || {
  test "$opt_mode" = compile && func_mode_compile ${1+"$@"}
}

func_mode_help ()
{
    # We need to display help for each of the modes.
    case $opt_mode in
      "")
        # Generic help is extracted from the usage comments
        # at the start of this file.
        func_help
        ;;

      clean)
        $ECHO \
"Usage: $progname [OPTION]... --mode=clean RM [RM-OPTION]... FILE...

Remove files from the build directory.

RM is the name of the program to use to delete files associated with each FILE
(typically \`/bin/rm').  RM-OPTIONS are options (such as \`-f') to be passed
to RM.

If FILE is a libtool library, object or program, all the files associated
with it are deleted.  Otherwise, only FILE itself is deleted using RM."
        ;;

      compile)
        $ECHO \
"Usage: $progname [OPTION]... --mode=compile COMPILE-COMMAND... SOURCEFILE

Compile a source file into a libtool library object.
This mode accepts the following additional options:

  -o OUTPUT-FILE    set the output file name to OUTPUT-FILE
  -no-suppress      do not suppress compiler output for multiple passes
  -prefer-pic       try to build PIC objects only
  -prefer-non-pic   try to build non-PIC objects only
  -shared           do not build a \`.o' file suitable for static linking
  -static           only build a \`.o' file suitable for static linking
  -Wc,FLAG          pass FLAG directly to the compiler

COMPILE-COMMAND is a command to be used in creating a \`standard' object file
from the given SOURCEFILE.

The output file name is determined by removing the directory component from
SOURCEFILE, then substituting the C source code suffix \`.c' with the
library object suffix, \`.lo'."
        ;;

      execute)
        $ECHO \
"Usage: $progname [OPTION]... --mode=execute COMMAND [ARGS]...

Automatically set library path, then run a program.

This mode accepts the following additional options:

  -dlopen FILE      add the directory containing FILE to the library path

This mode sets the library path environment variable according to \`-dlopen'
flags.

If any of the ARGS are libtool executable wrappers, then they are translated
into their corresponding uninstalled binary, and any of their required library
directories are added to the library path.

Then, COMMAND is executed, with ARGS as arguments."
        ;;

      finish)
        $ECHO \
"Usage: $progname [OPTION]... --mode=finish [LIBDIR]...

Complete the installation of libtool libraries.

Each LIBDIR is a directory that contains libtool libraries.

The commands that this mode executes may require superuser privileges.  Use
the \`--dry-run' option if you just want to see what would be executed."
        ;;

      install)
        $ECHO \
"Usage: $progname [OPTION]... --mode=install INSTALL-COMMAND...

Install executables or libraries.

INSTALL-COMMAND is the installation command.  The first component should be
either the \`install' or \`cp' program.
The following components of INSTALL-COMMAND are treated specially:

  -inst-prefix-dir PREFIX-DIR  Use PREFIX-DIR as a staging area for installation

The rest of the components are interpreted as arguments to that command (only
BSD-compatible install options are recognized)."
        ;;

      link)
        $ECHO \
"Usage: $progname [OPTION]... --mode=link LINK-COMMAND...

Link object files or libraries together to form another library, or to
create an executable program.

LINK-COMMAND is a command using the C compiler that you would use to create
a program from several object files.

The following components of LINK-COMMAND are treated specially:

  -all-static       do not do any dynamic linking at all
  -avoid-version    do not add a version suffix if possible
  -bindir BINDIR    specify path to binaries directory (for systems where
                    libraries must be found in the PATH setting at runtime)
  -dlopen FILE      \`-dlpreopen' FILE if it cannot be dlopened at runtime
  -dlpreopen FILE   link in FILE and add its symbols to lt_preloaded_symbols
  -export-dynamic   allow symbols from OUTPUT-FILE to be resolved with dlsym(3)
  -export-symbols SYMFILE
                    try to export only the symbols listed in SYMFILE
  -export-symbols-regex REGEX
                    try to export only the symbols matching REGEX
  -LLIBDIR          search LIBDIR for required installed libraries
  -lNAME            OUTPUT-FILE requires the installed library libNAME
  -module           build a library that can dlopened
  -no-fast-install  disable the fast-install mode
  -no-install       link a not-installable executable
  -no-undefined     declare that a library does not refer to external symbols
  -o OUTPUT-FILE    create OUTPUT-FILE from the specified objects
  -objectlist FILE  Use a list of object files found in FILE to specify objects
  -precious-files-regex REGEX
                    don't remove output files matching REGEX
  -release RELEASE  specify package release information
  -rpath LIBDIR     the created library will eventually be installed in LIBDIR
  -R[ ]LIBDIR       add LIBDIR to the runtime path of programs and libraries
  -shared           only do dynamic linking of libtool libraries
  -shrext SUFFIX    override the standard shared library file extension
  -static           do not do any dynamic linking of uninstalled libtool
                    libraries
  -static-libtool-libs
                    do not do any dynamic linking of libtool libraries
  -version-info CURRENT[:REVISION[:AGE]]
                    specify library version info [each variable defaults to 0]
  -weak LIBNAME     declare that the target provides the LIBNAME interface
  -Wc,FLAG
  -Xcompiler FLAG   pass linker-specific FLAG directly to the compiler
  -Wl,FLAG
  -Xlinker FLAG     pass linker-specific FLAG directly to the linker
  -XCClinker FLAG   pass link-specific FLAG to the compiler driver (CC)

All other options (arguments beginning with \`-') are ignored.

Every other argument is treated as a filename.  Files ending in \`.la' are
treated as uninstalled libtool libraries, other files are standard or library
object files.

If the OUTPUT-FILE ends in \`.la', then a libtool library is created,
only library objects (\`.lo' files) may be specified, and \`-rpath' is
required, except when creating a convenience library.

If OUTPUT-FILE ends in \`.a' or \`.lib', then a standard library is created
using \`ar' and \`ranlib', or on Windows using \`lib'.

If OUTPUT-FILE ends in \`.lo' or \`.${objext}', then a reloadable object file
is created, otherwise an executable program is created."
        ;;

      uninstall)
        $ECHO \
"Usage: $progname [OPTION]... --mode=uninstall RM [RM-OPTION]... FILE...

Remove libraries from an installation directory.

RM is the name of the program to use to delete files associated with each FILE
(typically \`/bin/rm').  RM-OPTIONS are options (such as \`-f') to be passed
to RM.

If FILE is a libtool library, all the files associated with it are deleted.
Otherwise, only FILE itself is deleted using RM."
        ;;

      *)
        func_fatal_help "invalid operation mode \`$opt_mode'"
        ;;
    esac

    echo
    $ECHO "Try \`$progname --help' for more information about other modes."
}

# Now that we've collected a possible --mode arg, show help if necessary
if $opt_help; then
  if test "$opt_help" = :; then
    func_mode_help
  else
    {
      func_help noexit
      for opt_mode in compile link execute install finish uninstall clean; do
        func_mode_help
      done
    } | sed -n '1p; 2,$s/^Usage:/  or: /p'
    {
      func_help noexit
      for opt_mode in compile link execute install finish uninstall clean; do
        echo
        func_mode_help
      done
    } |
    sed '1d
      /^When reporting/,/^Report/{
        H
        d
      }
      $x
      /information about other modes/d
      /more detailed .*MODE/d
      s/^Usage:.*--mode=\([^ ]*\) .*/Description of \1 mode:/'
  fi

  exit $?
fi


# func_mode_execute arg...
func_mode_execute ()
{
    $opt_debug
    # The first argument is the command name.
    cmd="$nonopt"
    test -z "$cmd" && \
      func_fatal_help "you must specify a COMMAND"

    # Handle -dlopen flags immediately.
    for file in $opt_dlopen; do
      test -f "$file" \
        || func_fatal_help "\`$file' is not a file"

      dir=
      case $file in
      *.la)
        func_resolve_sysroot "$file"
        file=$func_resolve_sysroot_result

        # Check to see that this really is a libtool archive.
        func_lalib_unsafe_p "$file" \
          || func_fatal_help "\`$lib' is not a valid libtool archive"

        # Read the libtool library.
        dlname=
        library_names=
        func_source "$file"

        # Skip this library if it cannot be dlopened.
        if test -z "$dlname"; then
          # Warn if it was a shared library.
          test -n "$library_names" && \
            func_warning "\`$file' was not linked with \`-export-dynamic'"
          continue
        fi

        func_dirname "$file" "" "."
        dir="$func_dirname_result"

        if test -f "$dir/$objdir/$dlname"; then
          func_append dir "/$objdir"
        else
          if test ! -f "$dir/$dlname"; then
            func_fatal_error "cannot find \`$dlname' in \`$dir' or \`$dir/$objdir'"
          fi
        fi
        ;;

      *.lo)
        # Just add the directory containing the .lo file.
        func_dirname "$file" "" "."
        dir="$func_dirname_result"
        ;;

      *)
        func_warning "\`-dlopen' is ignored for non-libtool libraries and objects"
        continue
        ;;
      esac

      # Get the absolute pathname.
      absdir=`cd "$dir" && pwd`
      test -n "$absdir" && dir="$absdir"

      # Now add the directory to shlibpath_var.
if eval "test -z \"\$$shlibpath_var\""; then eval "$shlibpath_var=\"\$dir\"" else eval "$shlibpath_var=\"\$dir:\$$shlibpath_var\"" fi done # This variable tells wrapper scripts just to set shlibpath_var # rather than running their programs. libtool_execute_magic="$magic" # Check if any of the arguments is a wrapper script. args= for file do case $file in -* | *.la | *.lo ) ;; *) # Do a test to see if this is really a libtool program. if func_ltwrapper_script_p "$file"; then func_source "$file" # Transform arg to wrapped name. file="$progdir/$program" elif func_ltwrapper_executable_p "$file"; then func_ltwrapper_scriptname "$file" func_source "$func_ltwrapper_scriptname_result" # Transform arg to wrapped name. file="$progdir/$program" fi ;; esac # Quote arguments (to preserve shell metacharacters). func_append_quoted args "$file" done if test "X$opt_dry_run" = Xfalse; then if test -n "$shlibpath_var"; then # Export the shlibpath_var. eval "export $shlibpath_var" fi # Restore saved environment variables for lt_var in LANG LANGUAGE LC_ALL LC_CTYPE LC_COLLATE LC_MESSAGES do eval "if test \"\${save_$lt_var+set}\" = set; then $lt_var=\$save_$lt_var; export $lt_var else $lt_unset $lt_var fi" done # Now prepare to actually exec the command. exec_cmd="\$cmd$args" else # Display what would be done. if test -n "$shlibpath_var"; then eval "\$ECHO \"\$shlibpath_var=\$$shlibpath_var\"" echo "export $shlibpath_var" fi $ECHO "$cmd$args" exit $EXIT_SUCCESS fi } test "$opt_mode" = execute && func_mode_execute ${1+"$@"} # func_mode_finish arg... 
func_mode_finish () { $opt_debug libs= libdirs= admincmds= for opt in "$nonopt" ${1+"$@"} do if test -d "$opt"; then func_append libdirs " $opt" elif test -f "$opt"; then if func_lalib_unsafe_p "$opt"; then func_append libs " $opt" else func_warning "\`$opt' is not a valid libtool archive" fi else func_fatal_error "invalid argument \`$opt'" fi done if test -n "$libs"; then if test -n "$lt_sysroot"; then sysroot_regex=`$ECHO "$lt_sysroot" | $SED "$sed_make_literal_regex"` sysroot_cmd="s/\([ ']\)$sysroot_regex/\1/g;" else sysroot_cmd= fi # Remove sysroot references if $opt_dry_run; then for lib in $libs; do echo "removing references to $lt_sysroot and \`=' prefixes from $lib" done else tmpdir=`func_mktempdir` for lib in $libs; do sed -e "${sysroot_cmd} s/\([ ']-[LR]\)=/\1/g; s/\([ ']\)=/\1/g" $lib \ > $tmpdir/tmp-la mv -f $tmpdir/tmp-la $lib done ${RM}r "$tmpdir" fi fi if test -n "$finish_cmds$finish_eval" && test -n "$libdirs"; then for libdir in $libdirs; do if test -n "$finish_cmds"; then # Do each command in the finish commands. func_execute_cmds "$finish_cmds" 'admincmds="$admincmds '"$cmd"'"' fi if test -n "$finish_eval"; then # Do the single finish_eval. eval cmds=\"$finish_eval\" $opt_dry_run || eval "$cmds" || func_append admincmds " $cmds" fi done fi # Exit here if they wanted silent mode. 
$opt_silent && exit $EXIT_SUCCESS if test -n "$finish_cmds$finish_eval" && test -n "$libdirs"; then echo "----------------------------------------------------------------------" echo "Libraries have been installed in:" for libdir in $libdirs; do $ECHO " $libdir" done echo echo "If you ever happen to want to link against installed libraries" echo "in a given directory, LIBDIR, you must either use libtool, and" echo "specify the full pathname of the library, or use the \`-LLIBDIR'" echo "flag during linking and do at least one of the following:" if test -n "$shlibpath_var"; then echo " - add LIBDIR to the \`$shlibpath_var' environment variable" echo " during execution" fi if test -n "$runpath_var"; then echo " - add LIBDIR to the \`$runpath_var' environment variable" echo " during linking" fi if test -n "$hardcode_libdir_flag_spec"; then libdir=LIBDIR eval flag=\"$hardcode_libdir_flag_spec\" $ECHO " - use the \`$flag' linker flag" fi if test -n "$admincmds"; then $ECHO " - have your system administrator run these commands:$admincmds" fi if test -f /etc/ld.so.conf; then echo " - have your system administrator add LIBDIR to \`/etc/ld.so.conf'" fi echo echo "See any operating system documentation about shared libraries for" case $host in solaris2.[6789]|solaris2.1[0-9]) echo "more information, such as the ld(1), crle(1) and ld.so(8) manual" echo "pages." ;; *) echo "more information, such as the ld(1) and ld.so(8) manual pages." ;; esac echo "----------------------------------------------------------------------" fi exit $EXIT_SUCCESS } test "$opt_mode" = finish && func_mode_finish ${1+"$@"} # func_mode_install arg... func_mode_install () { $opt_debug # There may be an optional sh(1) argument at the beginning of # install_prog (especially on Windows NT). if test "$nonopt" = "$SHELL" || test "$nonopt" = /bin/sh || # Allow the use of GNU shtool's install command. case $nonopt in *shtool*) :;; *) false;; esac; then # Aesthetically quote it. 
func_quote_for_eval "$nonopt" install_prog="$func_quote_for_eval_result " arg=$1 shift else install_prog= arg=$nonopt fi # The real first argument should be the name of the installation program. # Aesthetically quote it. func_quote_for_eval "$arg" func_append install_prog "$func_quote_for_eval_result" install_shared_prog=$install_prog case " $install_prog " in *[\\\ /]cp\ *) install_cp=: ;; *) install_cp=false ;; esac # We need to accept at least all the BSD install flags. dest= files= opts= prev= install_type= isdir=no stripme= no_mode=: for arg do arg2= if test -n "$dest"; then func_append files " $dest" dest=$arg continue fi case $arg in -d) isdir=yes ;; -f) if $install_cp; then :; else prev=$arg fi ;; -g | -m | -o) prev=$arg ;; -s) stripme=" -s" continue ;; -*) ;; *) # If the previous option needed an argument, then skip it. if test -n "$prev"; then if test "x$prev" = x-m && test -n "$install_override_mode"; then arg2=$install_override_mode no_mode=false fi prev= else dest=$arg continue fi ;; esac # Aesthetically quote the argument. func_quote_for_eval "$arg" func_append install_prog " $func_quote_for_eval_result" if test -n "$arg2"; then func_quote_for_eval "$arg2" fi func_append install_shared_prog " $func_quote_for_eval_result" done test -z "$install_prog" && \ func_fatal_help "you must specify an install program" test -n "$prev" && \ func_fatal_help "the \`$prev' option requires an argument" if test -n "$install_override_mode" && $no_mode; then if $install_cp; then :; else func_quote_for_eval "$install_override_mode" func_append install_shared_prog " -m $func_quote_for_eval_result" fi fi if test -z "$files"; then if test -z "$dest"; then func_fatal_help "no file or destination specified" else func_fatal_help "you must specify a destination" fi fi # Strip any trailing slash from the destination. func_stripname '' '/' "$dest" dest=$func_stripname_result # Check to see that the destination is a directory. 
test -d "$dest" && isdir=yes if test "$isdir" = yes; then destdir="$dest" destname= else func_dirname_and_basename "$dest" "" "." destdir="$func_dirname_result" destname="$func_basename_result" # Not a directory, so check to see that there is only one file specified. set dummy $files; shift test "$#" -gt 1 && \ func_fatal_help "\`$dest' is not a directory" fi case $destdir in [\\/]* | [A-Za-z]:[\\/]*) ;; *) for file in $files; do case $file in *.lo) ;; *) func_fatal_help "\`$destdir' must be an absolute directory name" ;; esac done ;; esac # This variable tells wrapper scripts just to set variables rather # than running their programs. libtool_install_magic="$magic" staticlibs= future_libdirs= current_libdirs= for file in $files; do # Do each installation. case $file in *.$libext) # Do the static libraries later. func_append staticlibs " $file" ;; *.la) func_resolve_sysroot "$file" file=$func_resolve_sysroot_result # Check to see that this really is a libtool archive. func_lalib_unsafe_p "$file" \ || func_fatal_help "\`$file' is not a valid libtool archive" library_names= old_library= relink_command= func_source "$file" # Add the libdir to current_libdirs if it is the destination. if test "X$destdir" = "X$libdir"; then case "$current_libdirs " in *" $libdir "*) ;; *) func_append current_libdirs " $libdir" ;; esac else # Note the libdir as a future libdir. case "$future_libdirs " in *" $libdir "*) ;; *) func_append future_libdirs " $libdir" ;; esac fi func_dirname "$file" "/" "" dir="$func_dirname_result" func_append dir "$objdir" if test -n "$relink_command"; then # Determine the prefix the user has applied to our future dir. inst_prefix_dir=`$ECHO "$destdir" | $SED -e "s%$libdir\$%%"` # Don't allow the user to place us outside of our expected # location b/c this prevents finding dependent libraries that # are installed to the same prefix. 
# At present, this check doesn't affect windows .dll's that # are installed into $libdir/../bin (currently, that works fine) # but it's something to keep an eye on. test "$inst_prefix_dir" = "$destdir" && \ func_fatal_error "error: cannot install \`$file' to a directory not ending in $libdir" if test -n "$inst_prefix_dir"; then # Stick the inst_prefix_dir data into the link command. relink_command=`$ECHO "$relink_command" | $SED "s%@inst_prefix_dir@%-inst-prefix-dir $inst_prefix_dir%"` else relink_command=`$ECHO "$relink_command" | $SED "s%@inst_prefix_dir@%%"` fi func_warning "relinking \`$file'" func_show_eval "$relink_command" \ 'func_fatal_error "error: relink \`$file'\'' with the above command before installing it"' fi # See the names of the shared library. set dummy $library_names; shift if test -n "$1"; then realname="$1" shift srcname="$realname" test -n "$relink_command" && srcname="$realname"T # Install the shared library and build the symlinks. func_show_eval "$install_shared_prog $dir/$srcname $destdir/$realname" \ 'exit $?' tstripme="$stripme" case $host_os in cygwin* | mingw* | pw32* | cegcc*) case $realname in *.dll.a) tstripme="" ;; esac ;; esac if test -n "$tstripme" && test -n "$striplib"; then func_show_eval "$striplib $destdir/$realname" 'exit $?' fi if test "$#" -gt 0; then # Delete the old symlinks, and create new ones. # Try `ln -sf' first, because the `ln' binary might depend on # the symlink we replace! Solaris /bin/ln does not understand -f, # so we also need to try rm && ln -s. for linkname do test "$linkname" != "$realname" \ && func_show_eval "(cd $destdir && { $LN_S -f $realname $linkname || { $RM $linkname && $LN_S $realname $linkname; }; })" done fi # Do each command in the postinstall commands. lib="$destdir/$realname" func_execute_cmds "$postinstall_cmds" 'exit $?' fi # Install the pseudo-library for information purposes. 
func_basename "$file" name="$func_basename_result" instname="$dir/$name"i func_show_eval "$install_prog $instname $destdir/$name" 'exit $?' # Maybe install the static library, too. test -n "$old_library" && func_append staticlibs " $dir/$old_library" ;; *.lo) # Install (i.e. copy) a libtool object. # Figure out destination file name, if it wasn't already specified. if test -n "$destname"; then destfile="$destdir/$destname" else func_basename "$file" destfile="$func_basename_result" destfile="$destdir/$destfile" fi # Deduce the name of the destination old-style object file. case $destfile in *.lo) func_lo2o "$destfile" staticdest=$func_lo2o_result ;; *.$objext) staticdest="$destfile" destfile= ;; *) func_fatal_help "cannot copy a libtool object to \`$destfile'" ;; esac # Install the libtool object if requested. test -n "$destfile" && \ func_show_eval "$install_prog $file $destfile" 'exit $?' # Install the old object if enabled. if test "$build_old_libs" = yes; then # Deduce the name of the old-style object file. func_lo2o "$file" staticobj=$func_lo2o_result func_show_eval "$install_prog \$staticobj \$staticdest" 'exit $?' fi exit $EXIT_SUCCESS ;; *) # Figure out destination file name, if it wasn't already specified. if test -n "$destname"; then destfile="$destdir/$destname" else func_basename "$file" destfile="$func_basename_result" destfile="$destdir/$destfile" fi # If the file is missing, and there is a .exe on the end, strip it # because it is most likely a libtool script we actually want to # install stripped_ext="" case $file in *.exe) if test ! -f "$file"; then func_stripname '' '.exe' "$file" file=$func_stripname_result stripped_ext=".exe" fi ;; esac # Do a test to see if this is really a libtool program. 
case $host in *cygwin* | *mingw*) if func_ltwrapper_executable_p "$file"; then func_ltwrapper_scriptname "$file" wrapper=$func_ltwrapper_scriptname_result else func_stripname '' '.exe' "$file" wrapper=$func_stripname_result fi ;; *) wrapper=$file ;; esac if func_ltwrapper_script_p "$wrapper"; then notinst_deplibs= relink_command= func_source "$wrapper" # Check the variables that should have been set. test -z "$generated_by_libtool_version" && \ func_fatal_error "invalid libtool wrapper script \`$wrapper'" finalize=yes for lib in $notinst_deplibs; do # Check to see that each library is installed. libdir= if test -f "$lib"; then func_source "$lib" fi libfile="$libdir/"`$ECHO "$lib" | $SED 's%^.*/%%g'` ### testsuite: skip nested quoting test if test -n "$libdir" && test ! -f "$libfile"; then func_warning "\`$lib' has not been installed in \`$libdir'" finalize=no fi done relink_command= func_source "$wrapper" outputname= if test "$fast_install" = no && test -n "$relink_command"; then $opt_dry_run || { if test "$finalize" = yes; then tmpdir=`func_mktempdir` func_basename "$file$stripped_ext" file="$func_basename_result" outputname="$tmpdir/$file" # Replace the output file specification. relink_command=`$ECHO "$relink_command" | $SED 's%@OUTPUT@%'"$outputname"'%g'` $opt_silent || { func_quote_for_expand "$relink_command" eval "func_echo $func_quote_for_expand_result" } if eval "$relink_command"; then : else func_error "error: relink \`$file' with the above command before installing it" $opt_dry_run || ${RM}r "$tmpdir" continue fi file="$outputname" else func_warning "cannot relink \`$file'" fi } else # Install the binary that we compiled earlier. 
file=`$ECHO "$file$stripped_ext" | $SED "s%\([^/]*\)$%$objdir/\1%"` fi fi # remove .exe since cygwin /usr/bin/install will append another # one anyway case $install_prog,$host in */usr/bin/install*,*cygwin*) case $file:$destfile in *.exe:*.exe) # this is ok ;; *.exe:*) destfile=$destfile.exe ;; *:*.exe) func_stripname '' '.exe' "$destfile" destfile=$func_stripname_result ;; esac ;; esac func_show_eval "$install_prog\$stripme \$file \$destfile" 'exit $?' $opt_dry_run || if test -n "$outputname"; then ${RM}r "$tmpdir" fi ;; esac done for file in $staticlibs; do func_basename "$file" name="$func_basename_result" # Set up the ranlib parameters. oldlib="$destdir/$name" func_to_tool_file "$oldlib" func_convert_file_msys_to_w32 tool_oldlib=$func_to_tool_file_result func_show_eval "$install_prog \$file \$oldlib" 'exit $?' if test -n "$stripme" && test -n "$old_striplib"; then func_show_eval "$old_striplib $tool_oldlib" 'exit $?' fi # Do each command in the postinstall commands. func_execute_cmds "$old_postinstall_cmds" 'exit $?' done test -n "$future_libdirs" && \ func_warning "remember to run \`$progname --finish$future_libdirs'" if test -n "$current_libdirs"; then # Maybe just do a dry run. $opt_dry_run && current_libdirs=" -n$current_libdirs" exec_cmd='$SHELL $progpath $preserve_args --finish$current_libdirs' else exit $EXIT_SUCCESS fi } test "$opt_mode" = install && func_mode_install ${1+"$@"} # func_generate_dlsyms outputname originator pic_p # Extract symbols from dlprefiles and create ${outputname}S.o with # a dlpreopen symbol table. 
func_generate_dlsyms ()
{
    $opt_debug
    my_outputname="$1"
    my_originator="$2"
    my_pic_p="${3-no}"
    my_prefix=`$ECHO "$my_originator" | sed 's%[^a-zA-Z0-9]%_%g'`
    my_dlsyms=

    if test -n "$dlfiles$dlprefiles" || test "$dlself" != no; then
      if test -n "$NM" && test -n "$global_symbol_pipe"; then
        my_dlsyms="${my_outputname}S.c"
      else
        func_error "not configured to extract global symbols from dlpreopened files"
      fi
    fi

    if test -n "$my_dlsyms"; then
      case $my_dlsyms in
      "") ;;
      *.c)
        # Discover the nlist of each of the dlfiles.
        nlist="$output_objdir/${my_outputname}.nm"

        func_show_eval "$RM $nlist ${nlist}S ${nlist}T"

        # Parse the name list into a source file.
        func_verbose "creating $output_objdir/$my_dlsyms"

        $opt_dry_run || $ECHO > "$output_objdir/$my_dlsyms" "\
/* $my_dlsyms - symbol resolution table for \`$my_outputname' dlsym emulation. */
/* Generated by $PROGRAM (GNU $PACKAGE$TIMESTAMP) $VERSION */

#ifdef __cplusplus
extern \"C\" {
#endif

#if defined(__GNUC__) && (((__GNUC__ == 4) && (__GNUC_MINOR__ >= 4)) || (__GNUC__ > 4))
#pragma GCC diagnostic ignored \"-Wstrict-prototypes\"
#endif

/* Keep this code in sync between libtool.m4, ltmain, lt_system.h, and tests. */
#if defined(_WIN32) || defined(__CYGWIN__) || defined(_WIN32_WCE)
/* DATA imports from DLLs on WIN32 can't be const, because runtime
   relocations are performed -- see ld's documentation on pseudo-relocs. */
# define LT_DLSYM_CONST
#elif defined(__osf__)
/* This system does not cope well with relocations in const data. */
# define LT_DLSYM_CONST
#else
# define LT_DLSYM_CONST const
#endif

/* External symbol declarations for the compiler. */\
"

        if test "$dlself" = yes; then
          func_verbose "generating symbol list for \`$output'"

          $opt_dry_run || echo ': @PROGRAM@ ' > "$nlist"

          # Add our own program objects to the symbol list.
progfiles=`$ECHO "$objs$old_deplibs" | $SP2NL | $SED "$lo2o" | $NL2SP` for progfile in $progfiles; do func_to_tool_file "$progfile" func_convert_file_msys_to_w32 func_verbose "extracting global C symbols from \`$func_to_tool_file_result'" $opt_dry_run || eval "$NM $func_to_tool_file_result | $global_symbol_pipe >> '$nlist'" done if test -n "$exclude_expsyms"; then $opt_dry_run || { eval '$EGREP -v " ($exclude_expsyms)$" "$nlist" > "$nlist"T' eval '$MV "$nlist"T "$nlist"' } fi if test -n "$export_symbols_regex"; then $opt_dry_run || { eval '$EGREP -e "$export_symbols_regex" "$nlist" > "$nlist"T' eval '$MV "$nlist"T "$nlist"' } fi # Prepare the list of exported symbols if test -z "$export_symbols"; then export_symbols="$output_objdir/$outputname.exp" $opt_dry_run || { $RM $export_symbols eval "${SED} -n -e '/^: @PROGRAM@ $/d' -e 's/^.* \(.*\)$/\1/p' "'< "$nlist" > "$export_symbols"' case $host in *cygwin* | *mingw* | *cegcc* ) eval "echo EXPORTS "'> "$output_objdir/$outputname.def"' eval 'cat "$export_symbols" >> "$output_objdir/$outputname.def"' ;; esac } else $opt_dry_run || { eval "${SED} -e 's/\([].[*^$]\)/\\\\\1/g' -e 's/^/ /' -e 's/$/$/'"' < "$export_symbols" > "$output_objdir/$outputname.exp"' eval '$GREP -f "$output_objdir/$outputname.exp" < "$nlist" > "$nlist"T' eval '$MV "$nlist"T "$nlist"' case $host in *cygwin* | *mingw* | *cegcc* ) eval "echo EXPORTS "'> "$output_objdir/$outputname.def"' eval 'cat "$nlist" >> "$output_objdir/$outputname.def"' ;; esac } fi fi for dlprefile in $dlprefiles; do func_verbose "extracting global C symbols from \`$dlprefile'" func_basename "$dlprefile" name="$func_basename_result" case $host in *cygwin* | *mingw* | *cegcc* ) # if an import library, we need to obtain dlname if func_win32_import_lib_p "$dlprefile"; then func_tr_sh "$dlprefile" eval "curr_lafile=\$libfile_$func_tr_sh_result" dlprefile_dlbasename="" if test -n "$curr_lafile" && func_lalib_p "$curr_lafile"; then # Use subshell, to avoid clobbering current variable 
values dlprefile_dlname=`source "$curr_lafile" && echo "$dlname"` if test -n "$dlprefile_dlname" ; then func_basename "$dlprefile_dlname" dlprefile_dlbasename="$func_basename_result" else # no lafile. user explicitly requested -dlpreopen . $sharedlib_from_linklib_cmd "$dlprefile" dlprefile_dlbasename=$sharedlib_from_linklib_result fi fi $opt_dry_run || { if test -n "$dlprefile_dlbasename" ; then eval '$ECHO ": $dlprefile_dlbasename" >> "$nlist"' else func_warning "Could not compute DLL name from $name" eval '$ECHO ": $name " >> "$nlist"' fi func_to_tool_file "$dlprefile" func_convert_file_msys_to_w32 eval "$NM \"$func_to_tool_file_result\" 2>/dev/null | $global_symbol_pipe | $SED -e '/I __imp/d' -e 's/I __nm_/D /;s/_nm__//' >> '$nlist'" } else # not an import lib $opt_dry_run || { eval '$ECHO ": $name " >> "$nlist"' func_to_tool_file "$dlprefile" func_convert_file_msys_to_w32 eval "$NM \"$func_to_tool_file_result\" 2>/dev/null | $global_symbol_pipe >> '$nlist'" } fi ;; *) $opt_dry_run || { eval '$ECHO ": $name " >> "$nlist"' func_to_tool_file "$dlprefile" func_convert_file_msys_to_w32 eval "$NM \"$func_to_tool_file_result\" 2>/dev/null | $global_symbol_pipe >> '$nlist'" } ;; esac done $opt_dry_run || { # Make sure we have at least an empty file. test -f "$nlist" || : > "$nlist" if test -n "$exclude_expsyms"; then $EGREP -v " ($exclude_expsyms)$" "$nlist" > "$nlist"T $MV "$nlist"T "$nlist" fi # Try sorting and uniquifying the output. if $GREP -v "^: " < "$nlist" | if sort -k 3 /dev/null 2>&1; then sort -k 3 else sort +2 fi | uniq > "$nlist"S; then : else $GREP -v "^: " < "$nlist" > "$nlist"S fi if test -f "$nlist"S; then eval "$global_symbol_to_cdecl"' < "$nlist"S >> "$output_objdir/$my_dlsyms"' else echo '/* NONE */' >> "$output_objdir/$my_dlsyms" fi echo >> "$output_objdir/$my_dlsyms" "\ /* The mapping between symbol names and symbols. 
*/ typedef struct { const char *name; void *address; } lt_dlsymlist; extern LT_DLSYM_CONST lt_dlsymlist lt_${my_prefix}_LTX_preloaded_symbols[]; LT_DLSYM_CONST lt_dlsymlist lt_${my_prefix}_LTX_preloaded_symbols[] = {\ { \"$my_originator\", (void *) 0 }," case $need_lib_prefix in no) eval "$global_symbol_to_c_name_address" < "$nlist" >> "$output_objdir/$my_dlsyms" ;; *) eval "$global_symbol_to_c_name_address_lib_prefix" < "$nlist" >> "$output_objdir/$my_dlsyms" ;; esac echo >> "$output_objdir/$my_dlsyms" "\ {0, (void *) 0} }; /* This works around a problem in FreeBSD linker */ #ifdef FREEBSD_WORKAROUND static const void *lt_preloaded_setup() { return lt_${my_prefix}_LTX_preloaded_symbols; } #endif #ifdef __cplusplus } #endif\ " } # !$opt_dry_run pic_flag_for_symtable= case "$compile_command " in *" -static "*) ;; *) case $host in # compiling the symbol table file with pic_flag works around # a FreeBSD bug that causes programs to crash when -lm is # linked before any other PIC object. But we must not use # pic_flag when linking with -static. The problem exists in # FreeBSD 2.2.6 and is fixed in FreeBSD 3.1. *-*-freebsd2.*|*-*-freebsd3.0*|*-*-freebsdelf3.0*) pic_flag_for_symtable=" $pic_flag -DFREEBSD_WORKAROUND" ;; *-*-hpux*) pic_flag_for_symtable=" $pic_flag" ;; *) if test "X$my_pic_p" != Xno; then pic_flag_for_symtable=" $pic_flag" fi ;; esac ;; esac symtab_cflags= for arg in $LTCFLAGS; do case $arg in -pie | -fpie | -fPIE) ;; *) func_append symtab_cflags " $arg" ;; esac done # Now compile the dynamic symbol file. func_show_eval '(cd $output_objdir && $LTCC$symtab_cflags -c$no_builtin_flag$pic_flag_for_symtable "$my_dlsyms")' 'exit $?' # Clean up the generated files. func_show_eval '$RM "$output_objdir/$my_dlsyms" "$nlist" "${nlist}S" "${nlist}T"' # Transform the symbol file into the correct name. 
symfileobj="$output_objdir/${my_outputname}S.$objext" case $host in *cygwin* | *mingw* | *cegcc* ) if test -f "$output_objdir/$my_outputname.def"; then compile_command=`$ECHO "$compile_command" | $SED "s%@SYMFILE@%$output_objdir/$my_outputname.def $symfileobj%"` finalize_command=`$ECHO "$finalize_command" | $SED "s%@SYMFILE@%$output_objdir/$my_outputname.def $symfileobj%"` else compile_command=`$ECHO "$compile_command" | $SED "s%@SYMFILE@%$symfileobj%"` finalize_command=`$ECHO "$finalize_command" | $SED "s%@SYMFILE@%$symfileobj%"` fi ;; *) compile_command=`$ECHO "$compile_command" | $SED "s%@SYMFILE@%$symfileobj%"` finalize_command=`$ECHO "$finalize_command" | $SED "s%@SYMFILE@%$symfileobj%"` ;; esac ;; *) func_fatal_error "unknown suffix for \`$my_dlsyms'" ;; esac else # We keep going just in case the user didn't refer to # lt_preloaded_symbols. The linker will fail if global_symbol_pipe # really was required. # Nullify the symbol file. compile_command=`$ECHO "$compile_command" | $SED "s% @SYMFILE@%%"` finalize_command=`$ECHO "$finalize_command" | $SED "s% @SYMFILE@%%"` fi } # func_win32_libid arg # return the library type of file 'arg' # # Need a lot of goo to handle *both* DLLs and import libs # Has to be a shell function in order to 'eat' the argument # that is supplied when $file_magic_command is called. # Despite the name, also deal with 64 bit binaries. func_win32_libid () { $opt_debug win32_libid_type="unknown" win32_fileres=`file -L $1 2>/dev/null` case $win32_fileres in *ar\ archive\ import\ library*) # definitely import win32_libid_type="x86 archive import" ;; *ar\ archive*) # could be an import, or static # Keep the egrep pattern in sync with the one in _LT_CHECK_MAGIC_METHOD. 
    if eval $OBJDUMP -f $1 | $SED -e '10q' 2>/dev/null |
       $EGREP 'file format (pei*-i386(.*architecture: i386)?|pe-arm-wince|pe-x86-64)' >/dev/null; then
      func_to_tool_file "$1" func_convert_file_msys_to_w32
      win32_nmres=`eval $NM -f posix -A \"$func_to_tool_file_result\" |
        $SED -n -e '
            1,100{
                / I /{
                    s,.*,import,
                    p
                    q
                }
            }'`
      case $win32_nmres in
      import*)  win32_libid_type="x86 archive import";;
      *)        win32_libid_type="x86 archive static";;
      esac
    fi
    ;;
  *DLL*)
    win32_libid_type="x86 DLL"
    ;;
  *executable*) # but shell scripts are "executable" too...
    case $win32_fileres in
    *MS\ Windows\ PE\ Intel*)
      win32_libid_type="x86 DLL"
      ;;
    esac
    ;;
  esac
  $ECHO "$win32_libid_type"
}

# func_cygming_dll_for_implib ARG
#
# Platform-specific function to extract the
# name of the DLL associated with the specified
# import library ARG.
# Invoked by eval'ing the libtool variable
#    $sharedlib_from_linklib_cmd
# Result is available in the variable
#    $sharedlib_from_linklib_result
func_cygming_dll_for_implib ()
{
  $opt_debug
  sharedlib_from_linklib_result=`$DLLTOOL --identify-strict --identify "$1"`
}

# func_cygming_dll_for_implib_fallback_core SECTION_NAME LIBNAMEs
#
# This is the core of a fallback implementation of a
# platform-specific function to extract the name of the
# DLL associated with the specified import library LIBNAME.
#
# SECTION_NAME is either .idata$6 or .idata$7, depending
# on the platform and compiler that created the implib.
#
# Echoes the name of the DLL associated with the
# specified import library.
func_cygming_dll_for_implib_fallback_core ()
{
  $opt_debug
  match_literal=`$ECHO "$1" | $SED "$sed_make_literal_regex"`
  $OBJDUMP -s --section "$1" "$2" 2>/dev/null |
    $SED '/^Contents of section '"$match_literal"':/{
      # Place marker at beginning of archive member dllname section
      s/.*/====MARK====/
      p
      d
    }
    # These lines can sometimes be longer than 43 characters, but
    # are always uninteresting
    /:[ ]*file format pe[i]\{,1\}-/d
    /^In archive [^:]*:/d
    # Ensure marker is printed
    /^====MARK====/p
    # Remove all lines with less than 43 characters
    /^.\{43\}/!d
    # From remaining lines, remove first 43 characters
    s/^.\{43\}//' |
    $SED -n '
      # Join marker and all lines until next marker into a single line
      /^====MARK====/ b para
      H
      $ b para
      b
      :para
      x
      s/\n//g
      # Remove the marker
      s/^====MARK====//
      # Remove trailing dots and whitespace
      s/[\. \t]*$//
      # Print
      /./p' |
    # we now have a list, one entry per line, of the stringified
    # contents of the appropriate section of all members of the
    # archive which possess that section. Heuristic: eliminate
    # all those which have a first or second character that is
    # a '.' (that is, objdump's representation of an unprintable
    # character.) This should work for all archives with less than
    # 0x302f exports -- but will fail for DLLs whose name actually
    # begins with a literal '.' or a single character followed by
    # a '.'.
    #
    # Of those that remain, print the first one.
    $SED -e '/^\./d;/^.\./d;q'
}

# func_cygming_gnu_implib_p ARG
# This predicate returns with zero status (TRUE) if
# ARG is a GNU/binutils-style import library. Returns
# with nonzero status (FALSE) otherwise.
func_cygming_gnu_implib_p () { $opt_debug func_to_tool_file "$1" func_convert_file_msys_to_w32 func_cygming_gnu_implib_tmp=`$NM "$func_to_tool_file_result" | eval "$global_symbol_pipe" | $EGREP ' (_head_[A-Za-z0-9_]+_[ad]l*|[A-Za-z0-9_]+_[ad]l*_iname)$'` test -n "$func_cygming_gnu_implib_tmp" } # func_cygming_ms_implib_p ARG # This predicate returns with zero status (TRUE) if # ARG is an MS-style import library. Returns # with nonzero status (FALSE) otherwise. func_cygming_ms_implib_p () { $opt_debug func_to_tool_file "$1" func_convert_file_msys_to_w32 func_cygming_ms_implib_tmp=`$NM "$func_to_tool_file_result" | eval "$global_symbol_pipe" | $GREP '_NULL_IMPORT_DESCRIPTOR'` test -n "$func_cygming_ms_implib_tmp" } # func_cygming_dll_for_implib_fallback ARG # Platform-specific function to extract the # name of the DLL associated with the specified # import library ARG. # # This fallback implementation is for use when $DLLTOOL # does not support the --identify-strict option. # Invoked by eval'ing the libtool variable # $sharedlib_from_linklib_cmd # Result is available in the variable # $sharedlib_from_linklib_result func_cygming_dll_for_implib_fallback () { $opt_debug if func_cygming_gnu_implib_p "$1" ; then # binutils import library sharedlib_from_linklib_result=`func_cygming_dll_for_implib_fallback_core '.idata$7' "$1"` elif func_cygming_ms_implib_p "$1" ; then # ms-generated import library sharedlib_from_linklib_result=`func_cygming_dll_for_implib_fallback_core '.idata$6' "$1"` else # unknown sharedlib_from_linklib_result="" fi } # func_extract_an_archive dir oldlib func_extract_an_archive () { $opt_debug f_ex_an_ar_dir="$1"; shift f_ex_an_ar_oldlib="$1" if test "$lock_old_archive_extraction" = yes; then lockfile=$f_ex_an_ar_oldlib.lock until $opt_dry_run || ln "$progpath" "$lockfile" 2>/dev/null; do func_echo "Waiting for $lockfile to be removed" sleep 2 done fi func_show_eval "(cd \$f_ex_an_ar_dir && $AR x \"\$f_ex_an_ar_oldlib\")" \ 'stat=$?; rm -f "$lockfile"; 
exit $stat' if test "$lock_old_archive_extraction" = yes; then $opt_dry_run || rm -f "$lockfile" fi if ($AR t "$f_ex_an_ar_oldlib" | sort | sort -uc >/dev/null 2>&1); then : else func_fatal_error "object name conflicts in archive: $f_ex_an_ar_dir/$f_ex_an_ar_oldlib" fi } # func_extract_archives gentop oldlib ... func_extract_archives () { $opt_debug my_gentop="$1"; shift my_oldlibs=${1+"$@"} my_oldobjs="" my_xlib="" my_xabs="" my_xdir="" for my_xlib in $my_oldlibs; do # Extract the objects. case $my_xlib in [\\/]* | [A-Za-z]:[\\/]*) my_xabs="$my_xlib" ;; *) my_xabs=`pwd`"/$my_xlib" ;; esac func_basename "$my_xlib" my_xlib="$func_basename_result" my_xlib_u=$my_xlib while :; do case " $extracted_archives " in *" $my_xlib_u "*) func_arith $extracted_serial + 1 extracted_serial=$func_arith_result my_xlib_u=lt$extracted_serial-$my_xlib ;; *) break ;; esac done extracted_archives="$extracted_archives $my_xlib_u" my_xdir="$my_gentop/$my_xlib_u" func_mkdir_p "$my_xdir" case $host in *-darwin*) func_verbose "Extracting $my_xabs" # Do not bother doing anything if just a dry run $opt_dry_run || { darwin_orig_dir=`pwd` cd $my_xdir || exit $? 
darwin_archive=$my_xabs darwin_curdir=`pwd` darwin_base_archive=`basename "$darwin_archive"` darwin_arches=`$LIPO -info "$darwin_archive" 2>/dev/null | $GREP Architectures 2>/dev/null || true` if test -n "$darwin_arches"; then darwin_arches=`$ECHO "$darwin_arches" | $SED -e 's/.*are://'` darwin_arch= func_verbose "$darwin_base_archive has multiple architectures $darwin_arches" for darwin_arch in $darwin_arches ; do func_mkdir_p "unfat-$$/${darwin_base_archive}-${darwin_arch}" $LIPO -thin $darwin_arch -output "unfat-$$/${darwin_base_archive}-${darwin_arch}/${darwin_base_archive}" "${darwin_archive}" cd "unfat-$$/${darwin_base_archive}-${darwin_arch}" func_extract_an_archive "`pwd`" "${darwin_base_archive}" cd "$darwin_curdir" $RM "unfat-$$/${darwin_base_archive}-${darwin_arch}/${darwin_base_archive}" done # $darwin_arches ## Okay now we've a bunch of thin objects, gotta fatten them up :) darwin_filelist=`find unfat-$$ -type f -name \*.o -print -o -name \*.lo -print | $SED -e "$basename" | sort -u` darwin_file= darwin_files= for darwin_file in $darwin_filelist; do darwin_files=`find unfat-$$ -name $darwin_file -print | sort | $NL2SP` $LIPO -create -output "$darwin_file" $darwin_files done # $darwin_filelist $RM -rf unfat-$$ cd "$darwin_orig_dir" else cd $darwin_orig_dir func_extract_an_archive "$my_xdir" "$my_xabs" fi # $darwin_arches } # !$opt_dry_run ;; *) func_extract_an_archive "$my_xdir" "$my_xabs" ;; esac my_oldobjs="$my_oldobjs "`find $my_xdir -name \*.$objext -print -o -name \*.lo -print | sort | $NL2SP` done func_extract_archives_result="$my_oldobjs" } # func_emit_wrapper [arg=no] # # Emit a libtool wrapper script on stdout. # Don't directly open a file because we may want to # incorporate the script contents within a cygwin/mingw # wrapper executable. Must ONLY be called from within # func_mode_link because it depends on a number of variables # set therein. # # ARG is the value that the WRAPPER_SCRIPT_BELONGS_IN_OBJDIR # variable will take. 
# If 'yes', then the emitted script
# will assume that the directory in which it is stored is
# the $objdir directory.  This is a cygwin/mingw-specific
# behavior.
func_emit_wrapper ()
{
  func_emit_wrapper_arg1=${1-no}

  $ECHO "\
#! $SHELL

# $output - temporary wrapper script for $objdir/$outputname
# Generated by $PROGRAM (GNU $PACKAGE$TIMESTAMP) $VERSION
#
# The $output program cannot be directly executed until all the libtool
# libraries that it depends on are installed.
#
# This wrapper script should never be moved out of the build directory.
# If it is, it will not operate correctly.

# Sed substitution that helps us do robust quoting.  It backslashifies
# metacharacters that are still active within double-quoted strings.
sed_quote_subst='$sed_quote_subst'

# Be Bourne compatible
if test -n \"\${ZSH_VERSION+set}\" && (emulate sh) >/dev/null 2>&1; then
  emulate sh
  NULLCMD=:
  # Zsh 3.x and 4.x performs word splitting on \${1+\"\$@\"}, which
  # is contrary to our usage.  Disable this feature.
  alias -g '\${1+\"\$@\"}'='\"\$@\"'
  setopt NO_GLOB_SUBST
else
  case \`(set -o) 2>/dev/null\` in *posix*) set -o posix;; esac
fi
BIN_SH=xpg4; export BIN_SH # for Tru64
DUALCASE=1; export DUALCASE # for MKS sh

# The HP-UX ksh and POSIX shell print the target directory to stdout
# if CDPATH is set.
(unset CDPATH) >/dev/null 2>&1 && unset CDPATH

relink_command=\"$relink_command\"

# This environment variable determines our operation mode.
if test \"\$libtool_install_magic\" = \"$magic\"; then
  # install mode needs the following variables:
  generated_by_libtool_version='$macro_version'
  notinst_deplibs='$notinst_deplibs'
else
  # When we are sourced in execute mode, \$file and \$ECHO are already set.
  if test \"\$libtool_execute_magic\" != \"$magic\"; then
    file=\"\$0\""

  qECHO=`$ECHO "$ECHO" | $SED "$sed_quote_subst"`
  $ECHO "\

# A function that is used when there is no print builtin or printf.
func_fallback_echo () { eval 'cat <<_LTECHO_EOF \$1 _LTECHO_EOF' } ECHO=\"$qECHO\" fi # Very basic option parsing. These options are (a) specific to # the libtool wrapper, (b) are identical between the wrapper # /script/ and the wrapper /executable/ which is used only on # windows platforms, and (c) all begin with the string "--lt-" # (application programs are unlikely to have options which match # this pattern). # # There are only two supported options: --lt-debug and # --lt-dump-script. There is, deliberately, no --lt-help. # # The first argument to this parsing function should be the # script's $0 value, followed by "$@". lt_option_debug= func_parse_lt_options () { lt_script_arg0=\$0 shift for lt_opt do case \"\$lt_opt\" in --lt-debug) lt_option_debug=1 ;; --lt-dump-script) lt_dump_D=\`\$ECHO \"X\$lt_script_arg0\" | $SED -e 's/^X//' -e 's%/[^/]*$%%'\` test \"X\$lt_dump_D\" = \"X\$lt_script_arg0\" && lt_dump_D=. lt_dump_F=\`\$ECHO \"X\$lt_script_arg0\" | $SED -e 's/^X//' -e 's%^.*/%%'\` cat \"\$lt_dump_D/\$lt_dump_F\" exit 0 ;; --lt-*) \$ECHO \"Unrecognized --lt- option: '\$lt_opt'\" 1>&2 exit 1 ;; esac done # Print the debug banner immediately: if test -n \"\$lt_option_debug\"; then echo \"${outputname}:${output}:\${LINENO}: libtool wrapper (GNU $PACKAGE$TIMESTAMP) $VERSION\" 1>&2 fi } # Used when --lt-debug. 
Prints its arguments to stdout # (redirection is the responsibility of the caller) func_lt_dump_args () { lt_dump_args_N=1; for lt_arg do \$ECHO \"${outputname}:${output}:\${LINENO}: newargv[\$lt_dump_args_N]: \$lt_arg\" lt_dump_args_N=\`expr \$lt_dump_args_N + 1\` done } # Core function for launching the target application func_exec_program_core () { " case $host in # Backslashes separate directories on plain windows *-*-mingw | *-*-os2* | *-cegcc*) $ECHO "\ if test -n \"\$lt_option_debug\"; then \$ECHO \"${outputname}:${output}:\${LINENO}: newargv[0]: \$progdir\\\\\$program\" 1>&2 func_lt_dump_args \${1+\"\$@\"} 1>&2 fi exec \"\$progdir\\\\\$program\" \${1+\"\$@\"} " ;; *) $ECHO "\ if test -n \"\$lt_option_debug\"; then \$ECHO \"${outputname}:${output}:\${LINENO}: newargv[0]: \$progdir/\$program\" 1>&2 func_lt_dump_args \${1+\"\$@\"} 1>&2 fi exec \"\$progdir/\$program\" \${1+\"\$@\"} " ;; esac $ECHO "\ \$ECHO \"\$0: cannot exec \$program \$*\" 1>&2 exit 1 } # A function to encapsulate launching the target application # Strips options in the --lt-* namespace from \$@ and # launches target application with the remaining arguments. func_exec_program () { case \" \$* \" in *\\ --lt-*) for lt_wr_arg do case \$lt_wr_arg in --lt-*) ;; *) set x \"\$@\" \"\$lt_wr_arg\"; shift;; esac shift done ;; esac func_exec_program_core \${1+\"\$@\"} } # Parse options func_parse_lt_options \"\$0\" \${1+\"\$@\"} # Find the directory that this script lives in. thisdir=\`\$ECHO \"\$file\" | $SED 's%/[^/]*$%%'\` test \"x\$thisdir\" = \"x\$file\" && thisdir=. # Follow symbolic links until we get to the real thisdir. file=\`ls -ld \"\$file\" | $SED -n 's/.*-> //p'\` while test -n \"\$file\"; do destdir=\`\$ECHO \"\$file\" | $SED 's%/[^/]*\$%%'\` # If there was a directory component, then change thisdir. 
if test \"x\$destdir\" != \"x\$file\"; then case \"\$destdir\" in [\\\\/]* | [A-Za-z]:[\\\\/]*) thisdir=\"\$destdir\" ;; *) thisdir=\"\$thisdir/\$destdir\" ;; esac fi file=\`\$ECHO \"\$file\" | $SED 's%^.*/%%'\` file=\`ls -ld \"\$thisdir/\$file\" | $SED -n 's/.*-> //p'\` done # Usually 'no', except on cygwin/mingw when embedded into # the cwrapper. WRAPPER_SCRIPT_BELONGS_IN_OBJDIR=$func_emit_wrapper_arg1 if test \"\$WRAPPER_SCRIPT_BELONGS_IN_OBJDIR\" = \"yes\"; then # special case for '.' if test \"\$thisdir\" = \".\"; then thisdir=\`pwd\` fi # remove .libs from thisdir case \"\$thisdir\" in *[\\\\/]$objdir ) thisdir=\`\$ECHO \"\$thisdir\" | $SED 's%[\\\\/][^\\\\/]*$%%'\` ;; $objdir ) thisdir=. ;; esac fi # Try to get the absolute directory name. absdir=\`cd \"\$thisdir\" && pwd\` test -n \"\$absdir\" && thisdir=\"\$absdir\" " if test "$fast_install" = yes; then $ECHO "\ program=lt-'$outputname'$exeext progdir=\"\$thisdir/$objdir\" if test ! -f \"\$progdir/\$program\" || { file=\`ls -1dt \"\$progdir/\$program\" \"\$progdir/../\$program\" 2>/dev/null | ${SED} 1q\`; \\ test \"X\$file\" != \"X\$progdir/\$program\"; }; then file=\"\$\$-\$program\" if test ! -d \"\$progdir\"; then $MKDIR \"\$progdir\" else $RM \"\$progdir/\$file\" fi" $ECHO "\ # relink executable if necessary if test -n \"\$relink_command\"; then if relink_command_output=\`eval \$relink_command 2>&1\`; then : else $ECHO \"\$relink_command_output\" >&2 $RM \"\$progdir/\$file\" exit 1 fi fi $MV \"\$progdir/\$file\" \"\$progdir/\$program\" 2>/dev/null || { $RM \"\$progdir/\$program\"; $MV \"\$progdir/\$file\" \"\$progdir/\$program\"; } $RM \"\$progdir/\$file\" fi" else $ECHO "\ program='$outputname' progdir=\"\$thisdir/$objdir\" " fi $ECHO "\ if test -f \"\$progdir/\$program\"; then" # fixup the dll searchpath if we need to. # # Fix the DLL searchpath if we need to. Do this before prepending # to shlibpath, because on Windows, both are PATH and uninstalled # libraries must come first. 
  if test -n "$dllsearchpath"; then
    $ECHO "\
    # Add the dll search path components to the executable PATH
    PATH=$dllsearchpath:\$PATH
"
  fi

  # Export our shlibpath_var if we have one.
  if test "$shlibpath_overrides_runpath" = yes && test -n "$shlibpath_var" && test -n "$temp_rpath"; then
    $ECHO "\
    # Add our own library path to $shlibpath_var
    $shlibpath_var=\"$temp_rpath\$$shlibpath_var\"

    # Some systems cannot cope with colon-terminated $shlibpath_var
    # The second colon is a workaround for a bug in BeOS R4 sed
    $shlibpath_var=\`\$ECHO \"\$$shlibpath_var\" | $SED 's/::*\$//'\`

    export $shlibpath_var
"
  fi

  $ECHO "\
    if test \"\$libtool_execute_magic\" != \"$magic\"; then
      # Run the actual program with our arguments.
      func_exec_program \${1+\"\$@\"}
    fi
  else
    # The program doesn't exist.
    \$ECHO \"\$0: error: \\\`\$progdir/\$program' does not exist\" 1>&2
    \$ECHO \"This script is just a wrapper for \$program.\" 1>&2
    \$ECHO \"See the $PACKAGE documentation for more information.\" 1>&2
    exit 1
  fi
fi\
"
}


# func_emit_cwrapperexe_src
# emit the source code for a wrapper executable on stdout
# Must ONLY be called from within func_mode_link because
# it depends on a number of variables set therein.
func_emit_cwrapperexe_src ()
{
  cat <<EOF

/* $cwrappersource - temporary wrapper executable for $objdir/$outputname
   Generated by $PROGRAM (GNU $PACKAGE$TIMESTAMP) $VERSION

   The $output program cannot be directly executed until all the libtool
   libraries that it depends on are installed.

   This wrapper executable should never be moved out of the build directory.
   If it is, it will not operate correctly.
*/
EOF
  cat <<"EOF"
#ifdef _MSC_VER
# define _CRT_SECURE_NO_DEPRECATE 1
#endif
#include <stdio.h>
#include <stdlib.h>
#ifdef _MSC_VER
# include <direct.h>
# include <process.h>
# include <io.h>
#else
# include <unistd.h>
# include <stdint.h>
# ifdef __CYGWIN__
#  include <io.h>
# endif
#endif
#include <malloc.h>
#include <stdarg.h>
#include <assert.h>
#include <string.h>
#include <ctype.h>
#include <errno.h>
#include <fcntl.h>
#include <sys/stat.h>

/* declarations of non-ANSI functions */
#if defined(__MINGW32__)
# ifdef __STRICT_ANSI__
int _putenv (const char *);
# endif
#elif defined(__CYGWIN__)
# ifdef __STRICT_ANSI__
char *realpath (const char *, char *);
int putenv (char *);
int setenv (const char *, const char *, int);
# endif
/* #elif defined (other platforms) ...
*/ #endif /* portability defines, excluding path handling macros */ #if defined(_MSC_VER) # define setmode _setmode # define stat _stat # define chmod _chmod # define getcwd _getcwd # define putenv _putenv # define S_IXUSR _S_IEXEC # ifndef _INTPTR_T_DEFINED # define _INTPTR_T_DEFINED # define intptr_t int # endif #elif defined(__MINGW32__) # define setmode _setmode # define stat _stat # define chmod _chmod # define getcwd _getcwd # define putenv _putenv #elif defined(__CYGWIN__) # define HAVE_SETENV # define FOPEN_WB "wb" /* #elif defined (other platforms) ... */ #endif #if defined(PATH_MAX) # define LT_PATHMAX PATH_MAX #elif defined(MAXPATHLEN) # define LT_PATHMAX MAXPATHLEN #else # define LT_PATHMAX 1024 #endif #ifndef S_IXOTH # define S_IXOTH 0 #endif #ifndef S_IXGRP # define S_IXGRP 0 #endif /* path handling portability macros */ #ifndef DIR_SEPARATOR # define DIR_SEPARATOR '/' # define PATH_SEPARATOR ':' #endif #if defined (_WIN32) || defined (__MSDOS__) || defined (__DJGPP__) || \ defined (__OS2__) # define HAVE_DOS_BASED_FILE_SYSTEM # define FOPEN_WB "wb" # ifndef DIR_SEPARATOR_2 # define DIR_SEPARATOR_2 '\\' # endif # ifndef PATH_SEPARATOR_2 # define PATH_SEPARATOR_2 ';' # endif #endif #ifndef DIR_SEPARATOR_2 # define IS_DIR_SEPARATOR(ch) ((ch) == DIR_SEPARATOR) #else /* DIR_SEPARATOR_2 */ # define IS_DIR_SEPARATOR(ch) \ (((ch) == DIR_SEPARATOR) || ((ch) == DIR_SEPARATOR_2)) #endif /* DIR_SEPARATOR_2 */ #ifndef PATH_SEPARATOR_2 # define IS_PATH_SEPARATOR(ch) ((ch) == PATH_SEPARATOR) #else /* PATH_SEPARATOR_2 */ # define IS_PATH_SEPARATOR(ch) ((ch) == PATH_SEPARATOR_2) #endif /* PATH_SEPARATOR_2 */ #ifndef FOPEN_WB # define FOPEN_WB "w" #endif #ifndef _O_BINARY # define _O_BINARY 0 #endif #define XMALLOC(type, num) ((type *) xmalloc ((num) * sizeof(type))) #define XFREE(stale) do { \ if (stale) { free ((void *) stale); stale = 0; } \ } while (0) #if defined(LT_DEBUGWRAPPER) static int lt_debug = 1; #else static int lt_debug = 0; #endif const char 
*program_name = "libtool-wrapper"; /* in case xstrdup fails */

void *xmalloc (size_t num);
char *xstrdup (const char *string);
const char *base_name (const char *name);
char *find_executable (const char *wrapper);
char *chase_symlinks (const char *pathspec);
int make_executable (const char *path);
int check_executable (const char *path);
char *strendzap (char *str, const char *pat);
void lt_debugprintf (const char *file, int line, const char *fmt, ...);
void lt_fatal (const char *file, int line, const char *message, ...);
static const char *nonnull (const char *s);
static const char *nonempty (const char *s);
void lt_setenv (const char *name, const char *value);
char *lt_extend_str (const char *orig_value, const char *add, int to_end);
void lt_update_exe_path (const char *name, const char *value);
void lt_update_lib_path (const char *name, const char *value);
char **prepare_spawn (char **argv);
void lt_dump_script (FILE *f);
EOF

  cat <<"EOF"
int
check_executable (const char *path)
{
  struct stat st;

  lt_debugprintf (__FILE__, __LINE__, "(check_executable): %s\n",
                  nonempty (path));
  if ((!path) || (!*path))
    return 0;

  if ((stat (path, &st) >= 0)
      && (st.st_mode & (S_IXUSR | S_IXGRP | S_IXOTH)))
    return 1;
  else
    return 0;
}

int
make_executable (const char *path)
{
  int rval = 0;
  struct stat st;

  lt_debugprintf (__FILE__, __LINE__, "(make_executable): %s\n",
                  nonempty (path));
  if ((!path) || (!*path))
    return 0;

  if (stat (path, &st) >= 0)
    {
      rval = chmod (path, st.st_mode | S_IXOTH | S_IXGRP | S_IXUSR);
    }
  return rval;
}

/* Searches for the full path of the wrapper.  Returns
   newly allocated full path name if found, NULL otherwise
   Does not chase symlinks, even on platforms that support them.
*/
char *
find_executable (const char *wrapper)
{
  int has_slash = 0;
  const char *p;
  const char *p_next;
  /* static buffer for getcwd */
  char tmp[LT_PATHMAX + 1];
  int tmp_len;
  char *concat_name;

  lt_debugprintf (__FILE__, __LINE__, "(find_executable): %s\n",
                  nonempty (wrapper));

  if ((wrapper == NULL) || (*wrapper == '\0'))
    return NULL;

  /* Absolute path?
*/ #if defined (HAVE_DOS_BASED_FILE_SYSTEM) if (isalpha ((unsigned char) wrapper[0]) && wrapper[1] == ':') { concat_name = xstrdup (wrapper); if (check_executable (concat_name)) return concat_name; XFREE (concat_name); } else { #endif if (IS_DIR_SEPARATOR (wrapper[0])) { concat_name = xstrdup (wrapper); if (check_executable (concat_name)) return concat_name; XFREE (concat_name); } #if defined (HAVE_DOS_BASED_FILE_SYSTEM) } #endif for (p = wrapper; *p; p++) if (*p == '/') { has_slash = 1; break; } if (!has_slash) { /* no slashes; search PATH */ const char *path = getenv ("PATH"); if (path != NULL) { for (p = path; *p; p = p_next) { const char *q; size_t p_len; for (q = p; *q; q++) if (IS_PATH_SEPARATOR (*q)) break; p_len = q - p; p_next = (*q == '\0' ? q : q + 1); if (p_len == 0) { /* empty path: current directory */ if (getcwd (tmp, LT_PATHMAX) == NULL) lt_fatal (__FILE__, __LINE__, "getcwd failed: %s", nonnull (strerror (errno))); tmp_len = strlen (tmp); concat_name = XMALLOC (char, tmp_len + 1 + strlen (wrapper) + 1); memcpy (concat_name, tmp, tmp_len); concat_name[tmp_len] = '/'; strcpy (concat_name + tmp_len + 1, wrapper); } else { concat_name = XMALLOC (char, p_len + 1 + strlen (wrapper) + 1); memcpy (concat_name, p, p_len); concat_name[p_len] = '/'; strcpy (concat_name + p_len + 1, wrapper); } if (check_executable (concat_name)) return concat_name; XFREE (concat_name); } } /* not found in PATH; assume curdir */ } /* Relative path | not found in path: prepend cwd */ if (getcwd (tmp, LT_PATHMAX) == NULL) lt_fatal (__FILE__, __LINE__, "getcwd failed: %s", nonnull (strerror (errno))); tmp_len = strlen (tmp); concat_name = XMALLOC (char, tmp_len + 1 + strlen (wrapper) + 1); memcpy (concat_name, tmp, tmp_len); concat_name[tmp_len] = '/'; strcpy (concat_name + tmp_len + 1, wrapper); if (check_executable (concat_name)) return concat_name; XFREE (concat_name); return NULL; } char * chase_symlinks (const char *pathspec) { #ifndef S_ISLNK return xstrdup (pathspec); 
#else char buf[LT_PATHMAX]; struct stat s; char *tmp_pathspec = xstrdup (pathspec); char *p; int has_symlinks = 0; while (strlen (tmp_pathspec) && !has_symlinks) { lt_debugprintf (__FILE__, __LINE__, "checking path component for symlinks: %s\n", tmp_pathspec); if (lstat (tmp_pathspec, &s) == 0) { if (S_ISLNK (s.st_mode) != 0) { has_symlinks = 1; break; } /* search backwards for last DIR_SEPARATOR */ p = tmp_pathspec + strlen (tmp_pathspec) - 1; while ((p > tmp_pathspec) && (!IS_DIR_SEPARATOR (*p))) p--; if ((p == tmp_pathspec) && (!IS_DIR_SEPARATOR (*p))) { /* no more DIR_SEPARATORS left */ break; } *p = '\0'; } else { lt_fatal (__FILE__, __LINE__, "error accessing file \"%s\": %s", tmp_pathspec, nonnull (strerror (errno))); } } XFREE (tmp_pathspec); if (!has_symlinks) { return xstrdup (pathspec); } tmp_pathspec = realpath (pathspec, buf); if (tmp_pathspec == 0) { lt_fatal (__FILE__, __LINE__, "could not follow symlinks for %s", pathspec); } return xstrdup (tmp_pathspec); #endif } char * strendzap (char *str, const char *pat) { size_t len, patlen; assert (str != NULL); assert (pat != NULL); len = strlen (str); patlen = strlen (pat); if (patlen <= len) { str += len - patlen; if (strcmp (str, pat) == 0) *str = '\0'; } return str; } void lt_debugprintf (const char *file, int line, const char *fmt, ...) { va_list args; if (lt_debug) { (void) fprintf (stderr, "%s:%s:%d: ", program_name, file, line); va_start (args, fmt); (void) vfprintf (stderr, fmt, args); va_end (args); } } static void lt_error_core (int exit_status, const char *file, int line, const char *mode, const char *message, va_list ap) { fprintf (stderr, "%s:%s:%d: %s: ", program_name, file, line, mode); vfprintf (stderr, message, ap); fprintf (stderr, ".\n"); if (exit_status >= 0) exit (exit_status); } void lt_fatal (const char *file, int line, const char *message, ...) 
{ va_list ap; va_start (ap, message); lt_error_core (EXIT_FAILURE, file, line, "FATAL", message, ap); va_end (ap); } static const char * nonnull (const char *s) { return s ? s : "(null)"; } static const char * nonempty (const char *s) { return (s && !*s) ? "(empty)" : nonnull (s); } void lt_setenv (const char *name, const char *value) { lt_debugprintf (__FILE__, __LINE__, "(lt_setenv) setting '%s' to '%s'\n", nonnull (name), nonnull (value)); { #ifdef HAVE_SETENV /* always make a copy, for consistency with !HAVE_SETENV */ char *str = xstrdup (value); setenv (name, str, 1); #else int len = strlen (name) + 1 + strlen (value) + 1; char *str = XMALLOC (char, len); sprintf (str, "%s=%s", name, value); if (putenv (str) != EXIT_SUCCESS) { XFREE (str); } #endif } } char * lt_extend_str (const char *orig_value, const char *add, int to_end) { char *new_value; if (orig_value && *orig_value) { int orig_value_len = strlen (orig_value); int add_len = strlen (add); new_value = XMALLOC (char, add_len + orig_value_len + 1); if (to_end) { strcpy (new_value, orig_value); strcpy (new_value + orig_value_len, add); } else { strcpy (new_value, add); strcpy (new_value + add_len, orig_value); } } else { new_value = xstrdup (add); } return new_value; } void lt_update_exe_path (const char *name, const char *value) { lt_debugprintf (__FILE__, __LINE__, "(lt_update_exe_path) modifying '%s' by prepending '%s'\n", nonnull (name), nonnull (value)); if (name && *name && value && *value) { char *new_value = lt_extend_str (getenv (name), value, 0); /* some systems can't cope with a ':'-terminated path #' */ int len = strlen (new_value); while (((len = strlen (new_value)) > 0) && IS_PATH_SEPARATOR (new_value[len-1])) { new_value[len-1] = '\0'; } lt_setenv (name, new_value); XFREE (new_value); } } void lt_update_lib_path (const char *name, const char *value) { lt_debugprintf (__FILE__, __LINE__, "(lt_update_lib_path) modifying '%s' by prepending '%s'\n", nonnull (name), nonnull (value)); if (name && 
*name && value && *value) { char *new_value = lt_extend_str (getenv (name), value, 0); lt_setenv (name, new_value); XFREE (new_value); } } EOF case $host_os in mingw*) cat <<"EOF" /* Prepares an argument vector before calling spawn(). Note that spawn() does not by itself call the command interpreter (getenv ("COMSPEC") != NULL ? getenv ("COMSPEC") : ({ OSVERSIONINFO v; v.dwOSVersionInfoSize = sizeof(OSVERSIONINFO); GetVersionEx(&v); v.dwPlatformId == VER_PLATFORM_WIN32_NT; }) ? "cmd.exe" : "command.com"). Instead it simply concatenates the arguments, separated by ' ', and calls CreateProcess(). We must quote the arguments since Win32 CreateProcess() interprets characters like ' ', '\t', '\\', '"' (but not '<' and '>') in a special way: - Space and tab are interpreted as delimiters. They are not treated as delimiters if they are surrounded by double quotes: "...". - Unescaped double quotes are removed from the input. Their only effect is that within double quotes, space and tab are treated like normal characters. - Backslashes not followed by double quotes are not special. - But 2*n+1 backslashes followed by a double quote become n backslashes followed by a double quote (n >= 0): \" -> " \\\" -> \" \\\\\" -> \\" */ #define SHELL_SPECIAL_CHARS "\"\\ \001\002\003\004\005\006\007\010\011\012\013\014\015\016\017\020\021\022\023\024\025\026\027\030\031\032\033\034\035\036\037" #define SHELL_SPACE_CHARS " \001\002\003\004\005\006\007\010\011\012\013\014\015\016\017\020\021\022\023\024\025\026\027\030\031\032\033\034\035\036\037" char ** prepare_spawn (char **argv) { size_t argc; char **new_argv; size_t i; /* Count number of arguments. */ for (argc = 0; argv[argc] != NULL; argc++) ; /* Allocate new argument vector. */ new_argv = XMALLOC (char *, argc + 1); /* Put quoted arguments into the new argument vector. 
*/ for (i = 0; i < argc; i++) { const char *string = argv[i]; if (string[0] == '\0') new_argv[i] = xstrdup ("\"\""); else if (strpbrk (string, SHELL_SPECIAL_CHARS) != NULL) { int quote_around = (strpbrk (string, SHELL_SPACE_CHARS) != NULL); size_t length; unsigned int backslashes; const char *s; char *quoted_string; char *p; length = 0; backslashes = 0; if (quote_around) length++; for (s = string; *s != '\0'; s++) { char c = *s; if (c == '"') length += backslashes + 1; length++; if (c == '\\') backslashes++; else backslashes = 0; } if (quote_around) length += backslashes + 1; quoted_string = XMALLOC (char, length + 1); p = quoted_string; backslashes = 0; if (quote_around) *p++ = '"'; for (s = string; *s != '\0'; s++) { char c = *s; if (c == '"') { unsigned int j; for (j = backslashes + 1; j > 0; j--) *p++ = '\\'; } *p++ = c; if (c == '\\') backslashes++; else backslashes = 0; } if (quote_around) { unsigned int j; for (j = backslashes; j > 0; j--) *p++ = '\\'; *p++ = '"'; } *p = '\0'; new_argv[i] = quoted_string; } else new_argv[i] = (char *) string; } new_argv[argc] = NULL; return new_argv; } EOF ;; esac cat <<"EOF" void lt_dump_script (FILE* f) { EOF func_emit_wrapper yes | $SED -n -e ' s/^\(.\{79\}\)\(..*\)/\1\ \2/ h s/\([\\"]\)/\\\1/g s/$/\\n/ s/\([^\n]*\).*/ fputs ("\1", f);/p g D' cat <<"EOF" } EOF } # end: func_emit_cwrapperexe_src # func_win32_import_lib_p ARG # True if ARG is an import lib, as indicated by $file_magic_cmd func_win32_import_lib_p () { $opt_debug case `eval $file_magic_cmd \"\$1\" 2>/dev/null | $SED -e 10q` in *import*) : ;; *) false ;; esac } # func_mode_link arg... func_mode_link () { $opt_debug case $host in *-*-cygwin* | *-*-mingw* | *-*-pw32* | *-*-os2* | *-cegcc*) # It is impossible to link a dll without this setting, and # we shouldn't force the makefile maintainer to figure out # which system we are compiling for in order to pass an extra # flag for every libtool invocation. 
      # allow_undefined=no

      # FIXME: Unfortunately, there are problems with the above when trying
      # to make a dll which has undefined symbols, in which case not
      # even a static library is built.  For now, we need to specify
      # -no-undefined on the libtool link line when we can be certain
      # that all symbols are satisfied, otherwise we get a static library.
      allow_undefined=yes
      ;;
    *)
      allow_undefined=yes
      ;;
    esac
    libtool_args=$nonopt
    base_compile="$nonopt $@"
    compile_command=$nonopt
    finalize_command=$nonopt

    compile_rpath=
    finalize_rpath=
    compile_shlibpath=
    finalize_shlibpath=
    convenience=
    old_convenience=
    deplibs=
    old_deplibs=
    compiler_flags=
    linker_flags=
    dllsearchpath=
    lib_search_path=`pwd`
    inst_prefix_dir=
    new_inherited_linker_flags=

    avoid_version=no
    bindir=
    dlfiles=
    dlprefiles=
    dlself=no
    export_dynamic=no
    export_symbols=
    export_symbols_regex=
    generated=
    libobjs=
    ltlibs=
    module=no
    no_install=no
    objs=
    non_pic_objects=
    precious_files_regex=
    prefer_static_libs=no
    preload=no
    prev=
    prevarg=
    release=
    rpath=
    xrpath=
    perm_rpath=
    temp_rpath=
    thread_safe=no
    vinfo=
    vinfo_number=no
    weak_libs=
    single_module="${wl}-single_module"

    func_infer_tag $base_compile

    # We need to know -static, to get the right output filenames.
for arg do case $arg in -shared) test "$build_libtool_libs" != yes && \ func_fatal_configuration "can not build a shared library" build_old_libs=no break ;; -all-static | -static | -static-libtool-libs) case $arg in -all-static) if test "$build_libtool_libs" = yes && test -z "$link_static_flag"; then func_warning "complete static linking is impossible in this configuration" fi if test -n "$link_static_flag"; then dlopen_self=$dlopen_self_static fi prefer_static_libs=yes ;; -static) if test -z "$pic_flag" && test -n "$link_static_flag"; then dlopen_self=$dlopen_self_static fi prefer_static_libs=built ;; -static-libtool-libs) if test -z "$pic_flag" && test -n "$link_static_flag"; then dlopen_self=$dlopen_self_static fi prefer_static_libs=yes ;; esac build_libtool_libs=no build_old_libs=yes break ;; esac done # See if our shared archives depend on static archives. test -n "$old_archive_from_new_cmds" && build_old_libs=yes # Go through the arguments, transforming them on the way. while test "$#" -gt 0; do arg="$1" shift func_quote_for_eval "$arg" qarg=$func_quote_for_eval_unquoted_result func_append libtool_args " $func_quote_for_eval_result" # If the previous option needs an argument, assign it. if test -n "$prev"; then case $prev in output) func_append compile_command " @OUTPUT@" func_append finalize_command " @OUTPUT@" ;; esac case $prev in bindir) bindir="$arg" prev= continue ;; dlfiles|dlprefiles) if test "$preload" = no; then # Add the symbol object into the linking commands. func_append compile_command " @SYMFILE@" func_append finalize_command " @SYMFILE@" preload=yes fi case $arg in *.la | *.lo) ;; # We handle these cases below. 
force) if test "$dlself" = no; then dlself=needless export_dynamic=yes fi prev= continue ;; self) if test "$prev" = dlprefiles; then dlself=yes elif test "$prev" = dlfiles && test "$dlopen_self" != yes; then dlself=yes else dlself=needless export_dynamic=yes fi prev= continue ;; *) if test "$prev" = dlfiles; then func_append dlfiles " $arg" else func_append dlprefiles " $arg" fi prev= continue ;; esac ;; expsyms) export_symbols="$arg" test -f "$arg" \ || func_fatal_error "symbol file \`$arg' does not exist" prev= continue ;; expsyms_regex) export_symbols_regex="$arg" prev= continue ;; framework) case $host in *-*-darwin*) case "$deplibs " in *" $qarg.ltframework "*) ;; *) func_append deplibs " $qarg.ltframework" # this is fixed later ;; esac ;; esac prev= continue ;; inst_prefix) inst_prefix_dir="$arg" prev= continue ;; objectlist) if test -f "$arg"; then save_arg=$arg moreargs= for fil in `cat "$save_arg"` do # func_append moreargs " $fil" arg=$fil # A libtool-controlled object. # Check to see that this really is a libtool object. if func_lalib_unsafe_p "$arg"; then pic_object= non_pic_object= # Read the .lo file func_source "$arg" if test -z "$pic_object" || test -z "$non_pic_object" || test "$pic_object" = none && test "$non_pic_object" = none; then func_fatal_error "cannot find name of object for \`$arg'" fi # Extract subdirectory from the argument. func_dirname "$arg" "/" "" xdir="$func_dirname_result" if test "$pic_object" != none; then # Prepend the subdirectory the object is found in. pic_object="$xdir$pic_object" if test "$prev" = dlfiles; then if test "$build_libtool_libs" = yes && test "$dlopen_support" = yes; then func_append dlfiles " $pic_object" prev= continue else # If libtool objects are unsupported, then we need to preload. prev=dlprefiles fi fi # CHECK ME: I think I busted this. -Ossama if test "$prev" = dlprefiles; then # Preload the old-style object. func_append dlprefiles " $pic_object" prev= fi # A PIC object. 
func_append libobjs " $pic_object" arg="$pic_object" fi # Non-PIC object. if test "$non_pic_object" != none; then # Prepend the subdirectory the object is found in. non_pic_object="$xdir$non_pic_object" # A standard non-PIC object func_append non_pic_objects " $non_pic_object" if test -z "$pic_object" || test "$pic_object" = none ; then arg="$non_pic_object" fi else # If the PIC object exists, use it instead. # $xdir was prepended to $pic_object above. non_pic_object="$pic_object" func_append non_pic_objects " $non_pic_object" fi else # Only an error if not doing a dry-run. if $opt_dry_run; then # Extract subdirectory from the argument. func_dirname "$arg" "/" "" xdir="$func_dirname_result" func_lo2o "$arg" pic_object=$xdir$objdir/$func_lo2o_result non_pic_object=$xdir$func_lo2o_result func_append libobjs " $pic_object" func_append non_pic_objects " $non_pic_object" else func_fatal_error "\`$arg' is not a valid libtool object" fi fi done else func_fatal_error "link input file \`$arg' does not exist" fi arg=$save_arg prev= continue ;; precious_regex) precious_files_regex="$arg" prev= continue ;; release) release="-$arg" prev= continue ;; rpath | xrpath) # We need an absolute path. 
case $arg in [\\/]* | [A-Za-z]:[\\/]*) ;; *) func_fatal_error "only absolute run-paths are allowed" ;; esac if test "$prev" = rpath; then case "$rpath " in *" $arg "*) ;; *) func_append rpath " $arg" ;; esac else case "$xrpath " in *" $arg "*) ;; *) func_append xrpath " $arg" ;; esac fi prev= continue ;; shrext) shrext_cmds="$arg" prev= continue ;; weak) func_append weak_libs " $arg" prev= continue ;; xcclinker) func_append linker_flags " $qarg" func_append compiler_flags " $qarg" prev= func_append compile_command " $qarg" func_append finalize_command " $qarg" continue ;; xcompiler) func_append compiler_flags " $qarg" prev= func_append compile_command " $qarg" func_append finalize_command " $qarg" continue ;; xlinker) func_append linker_flags " $qarg" func_append compiler_flags " $wl$qarg" prev= func_append compile_command " $wl$qarg" func_append finalize_command " $wl$qarg" continue ;; *) eval "$prev=\"\$arg\"" prev= continue ;; esac fi # test -n "$prev" prevarg="$arg" case $arg in -all-static) if test -n "$link_static_flag"; then # See comment for -static flag below, for more details. func_append compile_command " $link_static_flag" func_append finalize_command " $link_static_flag" fi continue ;; -allow-undefined) # FIXME: remove this flag sometime in the future. 
func_fatal_error "\`-allow-undefined' must not be used because it is the default" ;; -avoid-version) avoid_version=yes continue ;; -bindir) prev=bindir continue ;; -dlopen) prev=dlfiles continue ;; -dlpreopen) prev=dlprefiles continue ;; -export-dynamic) export_dynamic=yes continue ;; -export-symbols | -export-symbols-regex) if test -n "$export_symbols" || test -n "$export_symbols_regex"; then func_fatal_error "more than one -exported-symbols argument is not allowed" fi if test "X$arg" = "X-export-symbols"; then prev=expsyms else prev=expsyms_regex fi continue ;; -framework) prev=framework continue ;; -inst-prefix-dir) prev=inst_prefix continue ;; # The native IRIX linker understands -LANG:*, -LIST:* and -LNO:* # so, if we see these flags be careful not to treat them like -L -L[A-Z][A-Z]*:*) case $with_gcc/$host in no/*-*-irix* | /*-*-irix*) func_append compile_command " $arg" func_append finalize_command " $arg" ;; esac continue ;; -L*) func_stripname "-L" '' "$arg" if test -z "$func_stripname_result"; then if test "$#" -gt 0; then func_fatal_error "require no space between \`-L' and \`$1'" else func_fatal_error "need path for \`-L' option" fi fi func_resolve_sysroot "$func_stripname_result" dir=$func_resolve_sysroot_result # We need an absolute path. 
case $dir in [\\/]* | [A-Za-z]:[\\/]*) ;; *) absdir=`cd "$dir" && pwd` test -z "$absdir" && \ func_fatal_error "cannot determine absolute directory name of \`$dir'" dir="$absdir" ;; esac case "$deplibs " in *" -L$dir "* | *" $arg "*) # Will only happen for absolute or sysroot arguments ;; *) # Preserve sysroot, but never include relative directories case $dir in [\\/]* | [A-Za-z]:[\\/]* | =*) func_append deplibs " $arg" ;; *) func_append deplibs " -L$dir" ;; esac func_append lib_search_path " $dir" ;; esac case $host in *-*-cygwin* | *-*-mingw* | *-*-pw32* | *-*-os2* | *-cegcc*) testbindir=`$ECHO "$dir" | $SED 's*/lib$*/bin*'` case :$dllsearchpath: in *":$dir:"*) ;; ::) dllsearchpath=$dir;; *) func_append dllsearchpath ":$dir";; esac case :$dllsearchpath: in *":$testbindir:"*) ;; ::) dllsearchpath=$testbindir;; *) func_append dllsearchpath ":$testbindir";; esac ;; esac continue ;; -l*) if test "X$arg" = "X-lc" || test "X$arg" = "X-lm"; then case $host in *-*-cygwin* | *-*-mingw* | *-*-pw32* | *-*-beos* | *-cegcc* | *-*-haiku*) # These systems don't actually have a C or math library (as such) continue ;; *-*-os2*) # These systems don't actually have a C library (as such) test "X$arg" = "X-lc" && continue ;; *-*-openbsd* | *-*-freebsd* | *-*-dragonfly*) # Do not include libc due to us having libc/libc_r. test "X$arg" = "X-lc" && continue ;; *-*-rhapsody* | *-*-darwin1.[012]) # Rhapsody C and math libraries are in the System framework func_append deplibs " System.ltframework" continue ;; *-*-sco3.2v5* | *-*-sco5v6*) # Causes problems with __ctype test "X$arg" = "X-lc" && continue ;; *-*-sysv4.2uw2* | *-*-sysv5* | *-*-unixware* | *-*-OpenUNIX*) # Compiler inserts libc in the correct place for threads to work test "X$arg" = "X-lc" && continue ;; esac elif test "X$arg" = "X-lc_r"; then case $host in *-*-openbsd* | *-*-freebsd* | *-*-dragonfly*) # Do not include libc_r directly, use -pthread flag. 
continue ;; esac fi func_append deplibs " $arg" continue ;; -module) module=yes continue ;; # Tru64 UNIX uses -model [arg] to determine the layout of C++ # classes, name mangling, and exception handling. # Darwin uses the -arch flag to determine output architecture. -model|-arch|-isysroot|--sysroot) func_append compiler_flags " $arg" func_append compile_command " $arg" func_append finalize_command " $arg" prev=xcompiler continue ;; -mt|-mthreads|-kthread|-Kthread|-pthread|-pthreads|--thread-safe \ |-threads|-fopenmp|-openmp|-mp|-xopenmp|-omp|-qsmp=*) func_append compiler_flags " $arg" func_append compile_command " $arg" func_append finalize_command " $arg" case "$new_inherited_linker_flags " in *" $arg "*) ;; * ) func_append new_inherited_linker_flags " $arg" ;; esac continue ;; -multi_module) single_module="${wl}-multi_module" continue ;; -no-fast-install) fast_install=no continue ;; -no-install) case $host in *-*-cygwin* | *-*-mingw* | *-*-pw32* | *-*-os2* | *-*-darwin* | *-cegcc*) # The PATH hackery in wrapper scripts is required on Windows # and Darwin in order for the loader to find any dlls it needs. func_warning "\`-no-install' is ignored for $host" func_warning "assuming \`-no-fast-install' instead" fast_install=no ;; *) no_install=yes ;; esac continue ;; -no-undefined) allow_undefined=no continue ;; -objectlist) prev=objectlist continue ;; -o) prev=output ;; -precious-files-regex) prev=precious_regex continue ;; -release) prev=release continue ;; -rpath) prev=rpath continue ;; -R) prev=xrpath continue ;; -R*) func_stripname '-R' '' "$arg" dir=$func_stripname_result # We need an absolute path. case $dir in [\\/]* | [A-Za-z]:[\\/]*) ;; =*) func_stripname '=' '' "$dir" dir=$lt_sysroot$func_stripname_result ;; *) func_fatal_error "only absolute run-paths are allowed" ;; esac case "$xrpath " in *" $dir "*) ;; *) func_append xrpath " $dir" ;; esac continue ;; -shared) # The effects of -shared are defined in a previous loop. 
continue ;; -shrext) prev=shrext continue ;; -static | -static-libtool-libs) # The effects of -static are defined in a previous loop. # We used to do the same as -all-static on platforms that # didn't have a PIC flag, but the assumption that the effects # would be equivalent was wrong. It would break on at least # Digital Unix and AIX. continue ;; -thread-safe) thread_safe=yes continue ;; -version-info) prev=vinfo continue ;; -version-number) prev=vinfo vinfo_number=yes continue ;; -weak) prev=weak continue ;; -Wc,*) func_stripname '-Wc,' '' "$arg" args=$func_stripname_result arg= save_ifs="$IFS"; IFS=',' for flag in $args; do IFS="$save_ifs" func_quote_for_eval "$flag" func_append arg " $func_quote_for_eval_result" func_append compiler_flags " $func_quote_for_eval_result" done IFS="$save_ifs" func_stripname ' ' '' "$arg" arg=$func_stripname_result ;; -Wl,*) func_stripname '-Wl,' '' "$arg" args=$func_stripname_result arg= save_ifs="$IFS"; IFS=',' for flag in $args; do IFS="$save_ifs" func_quote_for_eval "$flag" func_append arg " $wl$func_quote_for_eval_result" func_append compiler_flags " $wl$func_quote_for_eval_result" func_append linker_flags " $func_quote_for_eval_result" done IFS="$save_ifs" func_stripname ' ' '' "$arg" arg=$func_stripname_result ;; -Xcompiler) prev=xcompiler continue ;; -Xlinker) prev=xlinker continue ;; -XCClinker) prev=xcclinker continue ;; # -msg_* for osf cc -msg_*) func_quote_for_eval "$arg" arg="$func_quote_for_eval_result" ;; # Flags to be passed through unchanged, with rationale: # -64, -mips[0-9] enable 64-bit mode for the SGI compiler # -r[0-9][0-9]* specify processor for the SGI compiler # -xarch=*, -xtarget=* enable 64-bit mode for the Sun compiler # +DA*, +DD* enable 64-bit mode for the HP compiler # -q* compiler args for the IBM compiler # -m*, -t[45]*, -txscale* architecture-specific flags for GCC # -F/path path to uninstalled frameworks, gcc on darwin # -p, -pg, --coverage, -fprofile-* profiling flags for GCC # @file GCC 
response files # -tp=* Portland pgcc target processor selection # --sysroot=* for sysroot support # -O*, -flto*, -fwhopr*, -fuse-linker-plugin GCC link-time optimization -64|-mips[0-9]|-r[0-9][0-9]*|-xarch=*|-xtarget=*|+DA*|+DD*|-q*|-m*| \ -t[45]*|-txscale*|-p|-pg|--coverage|-fprofile-*|-F*|@*|-tp=*|--sysroot=*| \ -O*|-flto*|-fwhopr*|-fuse-linker-plugin) func_quote_for_eval "$arg" arg="$func_quote_for_eval_result" func_append compile_command " $arg" func_append finalize_command " $arg" func_append compiler_flags " $arg" continue ;; # Some other compiler flag. -* | +*) func_quote_for_eval "$arg" arg="$func_quote_for_eval_result" ;; *.$objext) # A standard object. func_append objs " $arg" ;; *.lo) # A libtool-controlled object. # Check to see that this really is a libtool object. if func_lalib_unsafe_p "$arg"; then pic_object= non_pic_object= # Read the .lo file func_source "$arg" if test -z "$pic_object" || test -z "$non_pic_object" || test "$pic_object" = none && test "$non_pic_object" = none; then func_fatal_error "cannot find name of object for \`$arg'" fi # Extract subdirectory from the argument. func_dirname "$arg" "/" "" xdir="$func_dirname_result" if test "$pic_object" != none; then # Prepend the subdirectory the object is found in. pic_object="$xdir$pic_object" if test "$prev" = dlfiles; then if test "$build_libtool_libs" = yes && test "$dlopen_support" = yes; then func_append dlfiles " $pic_object" prev= continue else # If libtool objects are unsupported, then we need to preload. prev=dlprefiles fi fi # CHECK ME: I think I busted this. -Ossama if test "$prev" = dlprefiles; then # Preload the old-style object. func_append dlprefiles " $pic_object" prev= fi # A PIC object. func_append libobjs " $pic_object" arg="$pic_object" fi # Non-PIC object. if test "$non_pic_object" != none; then # Prepend the subdirectory the object is found in. 
non_pic_object="$xdir$non_pic_object" # A standard non-PIC object func_append non_pic_objects " $non_pic_object" if test -z "$pic_object" || test "$pic_object" = none ; then arg="$non_pic_object" fi else # If the PIC object exists, use it instead. # $xdir was prepended to $pic_object above. non_pic_object="$pic_object" func_append non_pic_objects " $non_pic_object" fi else # Only an error if not doing a dry-run. if $opt_dry_run; then # Extract subdirectory from the argument. func_dirname "$arg" "/" "" xdir="$func_dirname_result" func_lo2o "$arg" pic_object=$xdir$objdir/$func_lo2o_result non_pic_object=$xdir$func_lo2o_result func_append libobjs " $pic_object" func_append non_pic_objects " $non_pic_object" else func_fatal_error "\`$arg' is not a valid libtool object" fi fi ;; *.$libext) # An archive. func_append deplibs " $arg" func_append old_deplibs " $arg" continue ;; *.la) # A libtool-controlled library. func_resolve_sysroot "$arg" if test "$prev" = dlfiles; then # This library was specified with -dlopen. func_append dlfiles " $func_resolve_sysroot_result" prev= elif test "$prev" = dlprefiles; then # The library was specified with -dlpreopen. func_append dlprefiles " $func_resolve_sysroot_result" prev= else func_append deplibs " $func_resolve_sysroot_result" fi continue ;; # Some other compiler argument. *) # Unknown arguments in both finalize_command and compile_command need # to be aesthetically quoted because they are evaled later. func_quote_for_eval "$arg" arg="$func_quote_for_eval_result" ;; esac # arg # Now actually substitute the argument into the commands. 
if test -n "$arg"; then func_append compile_command " $arg" func_append finalize_command " $arg" fi done # argument parsing loop test -n "$prev" && \ func_fatal_help "the \`$prevarg' option requires an argument" if test "$export_dynamic" = yes && test -n "$export_dynamic_flag_spec"; then eval arg=\"$export_dynamic_flag_spec\" func_append compile_command " $arg" func_append finalize_command " $arg" fi oldlibs= # calculate the name of the file, without its directory func_basename "$output" outputname="$func_basename_result" libobjs_save="$libobjs" if test -n "$shlibpath_var"; then # get the directories listed in $shlibpath_var eval shlib_search_path=\`\$ECHO \"\${$shlibpath_var}\" \| \$SED \'s/:/ /g\'\` else shlib_search_path= fi eval sys_lib_search_path=\"$sys_lib_search_path_spec\" eval sys_lib_dlsearch_path=\"$sys_lib_dlsearch_path_spec\" func_dirname "$output" "/" "" output_objdir="$func_dirname_result$objdir" func_to_tool_file "$output_objdir/" tool_output_objdir=$func_to_tool_file_result # Create the object directory. func_mkdir_p "$output_objdir" # Determine the type of output case $output in "") func_fatal_help "you must specify an output file" ;; *.$libext) linkmode=oldlib ;; *.lo | *.$objext) linkmode=obj ;; *.la) linkmode=lib ;; *) linkmode=prog ;; # Anything else should be a program. esac specialdeplibs= libs= # Find all interdependent deplibs by searching for libraries # that are linked more than once (e.g. -la -lb -la) for deplib in $deplibs; do if $opt_preserve_dup_deps ; then case "$libs " in *" $deplib "*) func_append specialdeplibs " $deplib" ;; esac fi func_append libs " $deplib" done if test "$linkmode" = lib; then libs="$predeps $libs $compiler_lib_search_path $postdeps" # Compute libraries that are listed more than once in $predeps # $postdeps and mark them as special (i.e., whose duplicates are # not to be eliminated). 
pre_post_deps= if $opt_duplicate_compiler_generated_deps; then for pre_post_dep in $predeps $postdeps; do case "$pre_post_deps " in *" $pre_post_dep "*) func_append specialdeplibs " $pre_post_deps" ;; esac func_append pre_post_deps " $pre_post_dep" done fi pre_post_deps= fi deplibs= newdependency_libs= newlib_search_path= need_relink=no # whether we're linking any uninstalled libtool libraries notinst_deplibs= # not-installed libtool libraries notinst_path= # paths that contain not-installed libtool libraries case $linkmode in lib) passes="conv dlpreopen link" for file in $dlfiles $dlprefiles; do case $file in *.la) ;; *) func_fatal_help "libraries can \`-dlopen' only libtool libraries: $file" ;; esac done ;; prog) compile_deplibs= finalize_deplibs= alldeplibs=no newdlfiles= newdlprefiles= passes="conv scan dlopen dlpreopen link" ;; *) passes="conv" ;; esac for pass in $passes; do # The preopen pass in lib mode reverses $deplibs; put it back here # so that -L comes before libs that need it for instance... 
if test "$linkmode,$pass" = "lib,link"; then ## FIXME: Find the place where the list is rebuilt in the wrong ## order, and fix it there properly tmp_deplibs= for deplib in $deplibs; do tmp_deplibs="$deplib $tmp_deplibs" done deplibs="$tmp_deplibs" fi if test "$linkmode,$pass" = "lib,link" || test "$linkmode,$pass" = "prog,scan"; then libs="$deplibs" deplibs= fi if test "$linkmode" = prog; then case $pass in dlopen) libs="$dlfiles" ;; dlpreopen) libs="$dlprefiles" ;; link) libs="$deplibs %DEPLIBS%" test "X$link_all_deplibs" != Xno && libs="$libs $dependency_libs" ;; esac fi if test "$linkmode,$pass" = "lib,dlpreopen"; then # Collect and forward deplibs of preopened libtool libs for lib in $dlprefiles; do # Ignore non-libtool-libs dependency_libs= func_resolve_sysroot "$lib" case $lib in *.la) func_source "$func_resolve_sysroot_result" ;; esac # Collect preopened libtool deplibs, except any this library # has declared as weak libs for deplib in $dependency_libs; do func_basename "$deplib" deplib_base=$func_basename_result case " $weak_libs " in *" $deplib_base "*) ;; *) func_append deplibs " $deplib" ;; esac done done libs="$dlprefiles" fi if test "$pass" = dlopen; then # Collect dlpreopened libraries save_deplibs="$deplibs" deplibs= fi for deplib in $libs; do lib= found=no case $deplib in -mt|-mthreads|-kthread|-Kthread|-pthread|-pthreads|--thread-safe \ |-threads|-fopenmp|-openmp|-mp|-xopenmp|-omp|-qsmp=*) if test "$linkmode,$pass" = "prog,link"; then compile_deplibs="$deplib $compile_deplibs" finalize_deplibs="$deplib $finalize_deplibs" else func_append compiler_flags " $deplib" if test "$linkmode" = lib ; then case "$new_inherited_linker_flags " in *" $deplib "*) ;; * ) func_append new_inherited_linker_flags " $deplib" ;; esac fi fi continue ;; -l*) if test "$linkmode" != lib && test "$linkmode" != prog; then func_warning "\`-l' is ignored for archives/objects" continue fi func_stripname '-l' '' "$deplib" name=$func_stripname_result if test "$linkmode" = lib; 
then searchdirs="$newlib_search_path $lib_search_path $compiler_lib_search_dirs $sys_lib_search_path $shlib_search_path" else searchdirs="$newlib_search_path $lib_search_path $sys_lib_search_path $shlib_search_path" fi for searchdir in $searchdirs; do for search_ext in .la $std_shrext .so .a; do # Search the libtool library lib="$searchdir/lib${name}${search_ext}" if test -f "$lib"; then if test "$search_ext" = ".la"; then found=yes else found=no fi break 2 fi done done if test "$found" != yes; then # deplib doesn't seem to be a libtool library if test "$linkmode,$pass" = "prog,link"; then compile_deplibs="$deplib $compile_deplibs" finalize_deplibs="$deplib $finalize_deplibs" else deplibs="$deplib $deplibs" test "$linkmode" = lib && newdependency_libs="$deplib $newdependency_libs" fi continue else # deplib is a libtool library # If $allow_libtool_libs_with_static_runtimes && $deplib is a stdlib, # We need to do some special things here, and not later. if test "X$allow_libtool_libs_with_static_runtimes" = "Xyes" ; then case " $predeps $postdeps " in *" $deplib "*) if func_lalib_p "$lib"; then library_names= old_library= func_source "$lib" for l in $old_library $library_names; do ll="$l" done if test "X$ll" = "X$old_library" ; then # only static version available found=no func_dirname "$lib" "" "." 
ladir="$func_dirname_result" lib=$ladir/$old_library if test "$linkmode,$pass" = "prog,link"; then compile_deplibs="$deplib $compile_deplibs" finalize_deplibs="$deplib $finalize_deplibs" else deplibs="$deplib $deplibs" test "$linkmode" = lib && newdependency_libs="$deplib $newdependency_libs" fi continue fi fi ;; *) ;; esac fi fi ;; # -l *.ltframework) if test "$linkmode,$pass" = "prog,link"; then compile_deplibs="$deplib $compile_deplibs" finalize_deplibs="$deplib $finalize_deplibs" else deplibs="$deplib $deplibs" if test "$linkmode" = lib ; then case "$new_inherited_linker_flags " in *" $deplib "*) ;; * ) func_append new_inherited_linker_flags " $deplib" ;; esac fi fi continue ;; -L*) case $linkmode in lib) deplibs="$deplib $deplibs" test "$pass" = conv && continue newdependency_libs="$deplib $newdependency_libs" func_stripname '-L' '' "$deplib" func_resolve_sysroot "$func_stripname_result" func_append newlib_search_path " $func_resolve_sysroot_result" ;; prog) if test "$pass" = conv; then deplibs="$deplib $deplibs" continue fi if test "$pass" = scan; then deplibs="$deplib $deplibs" else compile_deplibs="$deplib $compile_deplibs" finalize_deplibs="$deplib $finalize_deplibs" fi func_stripname '-L' '' "$deplib" func_resolve_sysroot "$func_stripname_result" func_append newlib_search_path " $func_resolve_sysroot_result" ;; *) func_warning "\`-L' is ignored for archives/objects" ;; esac # linkmode continue ;; # -L -R*) if test "$pass" = link; then func_stripname '-R' '' "$deplib" func_resolve_sysroot "$func_stripname_result" dir=$func_resolve_sysroot_result # Make sure the xrpath contains only unique directories. 
case "$xrpath " in *" $dir "*) ;; *) func_append xrpath " $dir" ;; esac fi deplibs="$deplib $deplibs" continue ;; *.la) func_resolve_sysroot "$deplib" lib=$func_resolve_sysroot_result ;; *.$libext) if test "$pass" = conv; then deplibs="$deplib $deplibs" continue fi case $linkmode in lib) # Linking convenience modules into shared libraries is allowed, # but linking other static libraries is non-portable. case " $dlpreconveniencelibs " in *" $deplib "*) ;; *) valid_a_lib=no case $deplibs_check_method in match_pattern*) set dummy $deplibs_check_method; shift match_pattern_regex=`expr "$deplibs_check_method" : "$1 \(.*\)"` if eval "\$ECHO \"$deplib\"" 2>/dev/null | $SED 10q \ | $EGREP "$match_pattern_regex" > /dev/null; then valid_a_lib=yes fi ;; pass_all) valid_a_lib=yes ;; esac if test "$valid_a_lib" != yes; then echo $ECHO "*** Warning: Trying to link with static lib archive $deplib." echo "*** I have the capability to make that library automatically link in when" echo "*** you link to this library. But I can only do this if you have a" echo "*** shared version of the library, which you do not appear to have" echo "*** because the file extensions .$libext of this argument makes me believe" echo "*** that it is just a static archive that I should not use here." else echo $ECHO "*** Warning: Linking the shared library $output against the" $ECHO "*** static library $deplib is not portable!" deplibs="$deplib $deplibs" fi ;; esac continue ;; prog) if test "$pass" != link; then deplibs="$deplib $deplibs" else compile_deplibs="$deplib $compile_deplibs" finalize_deplibs="$deplib $finalize_deplibs" fi continue ;; esac # linkmode ;; # *.$libext *.lo | *.$objext) if test "$pass" = conv; then deplibs="$deplib $deplibs" elif test "$linkmode" = prog; then if test "$pass" = dlpreopen || test "$dlopen_support" != yes || test "$build_libtool_libs" = no; then # If there is no dlopen support or we're linking statically, # we need to preload. 
func_append newdlprefiles " $deplib" compile_deplibs="$deplib $compile_deplibs" finalize_deplibs="$deplib $finalize_deplibs" else func_append newdlfiles " $deplib" fi fi continue ;; %DEPLIBS%) alldeplibs=yes continue ;; esac # case $deplib if test "$found" = yes || test -f "$lib"; then : else func_fatal_error "cannot find the library \`$lib' or unhandled argument \`$deplib'" fi # Check to see that this really is a libtool archive. func_lalib_unsafe_p "$lib" \ || func_fatal_error "\`$lib' is not a valid libtool archive" func_dirname "$lib" "" "." ladir="$func_dirname_result" dlname= dlopen= dlpreopen= libdir= library_names= old_library= inherited_linker_flags= # If the library was installed with an old release of libtool, # it will not redefine variables installed, or shouldnotlink installed=yes shouldnotlink=no avoidtemprpath= # Read the .la file func_source "$lib" # Convert "-framework foo" to "foo.ltframework" if test -n "$inherited_linker_flags"; then tmp_inherited_linker_flags=`$ECHO "$inherited_linker_flags" | $SED 's/-framework \([^ $]*\)/\1.ltframework/g'` for tmp_inherited_linker_flag in $tmp_inherited_linker_flags; do case " $new_inherited_linker_flags " in *" $tmp_inherited_linker_flag "*) ;; *) func_append new_inherited_linker_flags " $tmp_inherited_linker_flag";; esac done fi dependency_libs=`$ECHO " $dependency_libs" | $SED 's% \([^ $]*\).ltframework% -framework \1%g'` if test "$linkmode,$pass" = "lib,link" || test "$linkmode,$pass" = "prog,scan" || { test "$linkmode" != prog && test "$linkmode" != lib; }; then test -n "$dlopen" && func_append dlfiles " $dlopen" test -n "$dlpreopen" && func_append dlprefiles " $dlpreopen" fi if test "$pass" = conv; then # Only check for convenience libraries deplibs="$lib $deplibs" if test -z "$libdir"; then if test -z "$old_library"; then func_fatal_error "cannot find name of link library for \`$lib'" fi # It is a libtool convenience library, so add in its objects. 
func_append convenience " $ladir/$objdir/$old_library" func_append old_convenience " $ladir/$objdir/$old_library" tmp_libs= for deplib in $dependency_libs; do deplibs="$deplib $deplibs" if $opt_preserve_dup_deps ; then case "$tmp_libs " in *" $deplib "*) func_append specialdeplibs " $deplib" ;; esac fi func_append tmp_libs " $deplib" done elif test "$linkmode" != prog && test "$linkmode" != lib; then func_fatal_error "\`$lib' is not a convenience library" fi continue fi # $pass = conv # Get the name of the library we link against. linklib= if test -n "$old_library" && { test "$prefer_static_libs" = yes || test "$prefer_static_libs,$installed" = "built,no"; }; then linklib=$old_library else for l in $old_library $library_names; do linklib="$l" done fi if test -z "$linklib"; then func_fatal_error "cannot find name of link library for \`$lib'" fi # This library was specified with -dlopen. if test "$pass" = dlopen; then if test -z "$libdir"; then func_fatal_error "cannot -dlopen a convenience library: \`$lib'" fi if test -z "$dlname" || test "$dlopen_support" != yes || test "$build_libtool_libs" = no; then # If there is no dlname, no dlopen support or we're linking # statically, we need to preload. We also need to preload any # dependent libraries so libltdl's deplib preloader doesn't # bomb out in the load deplibs phase. func_append dlprefiles " $lib $dependency_libs" else func_append newdlfiles " $lib" fi continue fi # $pass = dlopen # We need an absolute path. case $ladir in [\\/]* | [A-Za-z]:[\\/]*) abs_ladir="$ladir" ;; *) abs_ladir=`cd "$ladir" && pwd` if test -z "$abs_ladir"; then func_warning "cannot determine absolute directory name of \`$ladir'" func_warning "passing it literally to the linker, although it might fail" abs_ladir="$ladir" fi ;; esac func_basename "$lib" laname="$func_basename_result" # Find the relevant object directory and library name. if test "X$installed" = Xyes; then if test ! 
-f "$lt_sysroot$libdir/$linklib" && test -f "$abs_ladir/$linklib"; then func_warning "library \`$lib' was moved." dir="$ladir" absdir="$abs_ladir" libdir="$abs_ladir" else dir="$lt_sysroot$libdir" absdir="$lt_sysroot$libdir" fi test "X$hardcode_automatic" = Xyes && avoidtemprpath=yes else if test ! -f "$ladir/$objdir/$linklib" && test -f "$abs_ladir/$linklib"; then dir="$ladir" absdir="$abs_ladir" # Remove this search path later func_append notinst_path " $abs_ladir" else dir="$ladir/$objdir" absdir="$abs_ladir/$objdir" # Remove this search path later func_append notinst_path " $abs_ladir" fi fi # $installed = yes func_stripname 'lib' '.la' "$laname" name=$func_stripname_result # This library was specified with -dlpreopen. if test "$pass" = dlpreopen; then if test -z "$libdir" && test "$linkmode" = prog; then func_fatal_error "only libraries may -dlpreopen a convenience library: \`$lib'" fi case "$host" in # special handling for platforms with PE-DLLs. *cygwin* | *mingw* | *cegcc* ) # Linker will automatically link against shared library if both # static and shared are present. Therefore, ensure we extract # symbols from the import library if a shared library is present # (otherwise, the dlopen module name will be incorrect). We do # this by putting the import library name into $newdlprefiles. # We recover the dlopen module name by 'saving' the la file # name in a special purpose variable, and (later) extracting the # dlname from the la file. if test -n "$dlname"; then func_tr_sh "$dir/$linklib" eval "libfile_$func_tr_sh_result=\$abs_ladir/\$laname" func_append newdlprefiles " $dir/$linklib" else func_append newdlprefiles " $dir/$old_library" # Keep a list of preopened convenience libraries to check # that they are being used correctly in the link pass. test -z "$libdir" && \ func_append dlpreconveniencelibs " $dir/$old_library" fi ;; * ) # Prefer using a static library (so that no silly _DYNAMIC symbols # are required to link). 
if test -n "$old_library"; then func_append newdlprefiles " $dir/$old_library" # Keep a list of preopened convenience libraries to check # that they are being used correctly in the link pass. test -z "$libdir" && \ func_append dlpreconveniencelibs " $dir/$old_library" # Otherwise, use the dlname, so that lt_dlopen finds it. elif test -n "$dlname"; then func_append newdlprefiles " $dir/$dlname" else func_append newdlprefiles " $dir/$linklib" fi ;; esac fi # $pass = dlpreopen if test -z "$libdir"; then # Link the convenience library if test "$linkmode" = lib; then deplibs="$dir/$old_library $deplibs" elif test "$linkmode,$pass" = "prog,link"; then compile_deplibs="$dir/$old_library $compile_deplibs" finalize_deplibs="$dir/$old_library $finalize_deplibs" else deplibs="$lib $deplibs" # used for prog,scan pass fi continue fi if test "$linkmode" = prog && test "$pass" != link; then func_append newlib_search_path " $ladir" deplibs="$lib $deplibs" linkalldeplibs=no if test "$link_all_deplibs" != no || test -z "$library_names" || test "$build_libtool_libs" = no; then linkalldeplibs=yes fi tmp_libs= for deplib in $dependency_libs; do case $deplib in -L*) func_stripname '-L' '' "$deplib" func_resolve_sysroot "$func_stripname_result" func_append newlib_search_path " $func_resolve_sysroot_result" ;; esac # Need to link against all dependency_libs? if test "$linkalldeplibs" = yes; then deplibs="$deplib $deplibs" else # Need to hardcode shared library paths # or/and link against static libraries newdependency_libs="$deplib $newdependency_libs" fi if $opt_preserve_dup_deps ; then case "$tmp_libs " in *" $deplib "*) func_append specialdeplibs " $deplib" ;; esac fi func_append tmp_libs " $deplib" done # for deplib continue fi # $linkmode = prog... 
if test "$linkmode,$pass" = "prog,link"; then if test -n "$library_names" && { { test "$prefer_static_libs" = no || test "$prefer_static_libs,$installed" = "built,yes"; } || test -z "$old_library"; }; then # We need to hardcode the library path if test -n "$shlibpath_var" && test -z "$avoidtemprpath" ; then # Make sure the rpath contains only unique directories. case "$temp_rpath:" in *"$absdir:"*) ;; *) func_append temp_rpath "$absdir:" ;; esac fi # Hardcode the library path. # Skip directories that are in the system default run-time # search path. case " $sys_lib_dlsearch_path " in *" $absdir "*) ;; *) case "$compile_rpath " in *" $absdir "*) ;; *) func_append compile_rpath " $absdir" ;; esac ;; esac case " $sys_lib_dlsearch_path " in *" $libdir "*) ;; *) case "$finalize_rpath " in *" $libdir "*) ;; *) func_append finalize_rpath " $libdir" ;; esac ;; esac fi # $linkmode,$pass = prog,link... if test "$alldeplibs" = yes && { test "$deplibs_check_method" = pass_all || { test "$build_libtool_libs" = yes && test -n "$library_names"; }; }; then # We only need to search for static libraries continue fi fi link_static=no # Whether the deplib will be linked statically use_static_libs=$prefer_static_libs if test "$use_static_libs" = built && test "$installed" = yes; then use_static_libs=no fi if test -n "$library_names" && { test "$use_static_libs" = no || test -z "$old_library"; }; then case $host in *cygwin* | *mingw* | *cegcc*) # No point in relinking DLLs because paths are not encoded func_append notinst_deplibs " $lib" need_relink=no ;; *) if test "$installed" = no; then func_append notinst_deplibs " $lib" need_relink=yes fi ;; esac # This is a shared library # Warn about portability, can't link against -module's on some # systems (darwin). Don't bleat about dlopened modules though! 
dlopenmodule="" for dlpremoduletest in $dlprefiles; do if test "X$dlpremoduletest" = "X$lib"; then dlopenmodule="$dlpremoduletest" break fi done if test -z "$dlopenmodule" && test "$shouldnotlink" = yes && test "$pass" = link; then echo if test "$linkmode" = prog; then $ECHO "*** Warning: Linking the executable $output against the loadable module" else $ECHO "*** Warning: Linking the shared library $output against the loadable module" fi $ECHO "*** $linklib is not portable!" fi if test "$linkmode" = lib && test "$hardcode_into_libs" = yes; then # Hardcode the library path. # Skip directories that are in the system default run-time # search path. case " $sys_lib_dlsearch_path " in *" $absdir "*) ;; *) case "$compile_rpath " in *" $absdir "*) ;; *) func_append compile_rpath " $absdir" ;; esac ;; esac case " $sys_lib_dlsearch_path " in *" $libdir "*) ;; *) case "$finalize_rpath " in *" $libdir "*) ;; *) func_append finalize_rpath " $libdir" ;; esac ;; esac fi if test -n "$old_archive_from_expsyms_cmds"; then # figure out the soname set dummy $library_names shift realname="$1" shift libname=`eval "\\$ECHO \"$libname_spec\""` # use dlname if we got it. it's perfectly good, no? if test -n "$dlname"; then soname="$dlname" elif test -n "$soname_spec"; then # bleh windows case $host in *cygwin* | mingw* | *cegcc*) func_arith $current - $age major=$func_arith_result versuffix="-$major" ;; esac eval soname=\"$soname_spec\" else soname="$realname" fi # Make a new name for the extract_expsyms_cmds to use soroot="$soname" func_basename "$soroot" soname="$func_basename_result" func_stripname 'lib' '.dll' "$soname" newlib=libimp-$func_stripname_result.a # If the library has no export list, then create one now if test -f "$output_objdir/$soname-def"; then : else func_verbose "extracting exported symbol list from \`$soname'" func_execute_cmds "$extract_expsyms_cmds" 'exit $?' 
fi # Create $newlib if test -f "$output_objdir/$newlib"; then :; else func_verbose "generating import library for \`$soname'" func_execute_cmds "$old_archive_from_expsyms_cmds" 'exit $?' fi # make sure the library variables are pointing to the new library dir=$output_objdir linklib=$newlib fi # test -n "$old_archive_from_expsyms_cmds" if test "$linkmode" = prog || test "$opt_mode" != relink; then add_shlibpath= add_dir= add= lib_linked=yes case $hardcode_action in immediate | unsupported) if test "$hardcode_direct" = no; then add="$dir/$linklib" case $host in *-*-sco3.2v5.0.[024]*) add_dir="-L$dir" ;; *-*-sysv4*uw2*) add_dir="-L$dir" ;; *-*-sysv5OpenUNIX* | *-*-sysv5UnixWare7.[01].[10]* | \ *-*-unixware7*) add_dir="-L$dir" ;; *-*-darwin* ) # if the lib is a (non-dlopened) module then we can not # link against it, someone is ignoring the earlier warnings if /usr/bin/file -L $add 2> /dev/null | $GREP ": [^:]* bundle" >/dev/null ; then if test "X$dlopenmodule" != "X$lib"; then $ECHO "*** Warning: lib $linklib is a module, not a shared library" if test -z "$old_library" ; then echo echo "*** And there doesn't seem to be a static archive available" echo "*** The link will probably fail, sorry" else add="$dir/$old_library" fi elif test -n "$old_library"; then add="$dir/$old_library" fi fi esac elif test "$hardcode_minus_L" = no; then case $host in *-*-sunos*) add_shlibpath="$dir" ;; esac add_dir="-L$dir" add="-l$name" elif test "$hardcode_shlibpath_var" = no; then add_shlibpath="$dir" add="-l$name" else lib_linked=no fi ;; relink) if test "$hardcode_direct" = yes && test "$hardcode_direct_absolute" = no; then add="$dir/$linklib" elif test "$hardcode_minus_L" = yes; then add_dir="-L$absdir" # Try looking first in the location we're being installed to. 
		  if test -n "$inst_prefix_dir"; then
		    case $libdir in
		      [\\/]*)
			func_append add_dir " -L$inst_prefix_dir$libdir"
			;;
		    esac
		  fi
		  add="-l$name"
		elif test "$hardcode_shlibpath_var" = yes; then
		  add_shlibpath="$dir"
		  add="-l$name"
		else
		  lib_linked=no
		fi
		;;
	      *) lib_linked=no ;;
	      esac

	      if test "$lib_linked" != yes; then
		func_fatal_configuration "unsupported hardcode properties"
	      fi

	      if test -n "$add_shlibpath"; then
		case :$compile_shlibpath: in
		*":$add_shlibpath:"*) ;;
		*) func_append compile_shlibpath "$add_shlibpath:" ;;
		esac
	      fi
	      if test "$linkmode" = prog; then
		test -n "$add_dir" && compile_deplibs="$add_dir $compile_deplibs"
		test -n "$add" && compile_deplibs="$add $compile_deplibs"
	      else
		test -n "$add_dir" && deplibs="$add_dir $deplibs"
		test -n "$add" && deplibs="$add $deplibs"
		if test "$hardcode_direct" != yes &&
		   test "$hardcode_minus_L" != yes &&
		   test "$hardcode_shlibpath_var" = yes; then
		  case :$finalize_shlibpath: in
		  *":$libdir:"*) ;;
		  *) func_append finalize_shlibpath "$libdir:" ;;
		  esac
		fi
	      fi
	    fi

	    if test "$linkmode" = prog || test "$opt_mode" = relink; then
	      add_shlibpath=
	      add_dir=
	      add=
	      # Finalize command for both is simple: just hardcode it.
	      if test "$hardcode_direct" = yes &&
		 test "$hardcode_direct_absolute" = no; then
		add="$libdir/$linklib"
	      elif test "$hardcode_minus_L" = yes; then
		add_dir="-L$libdir"
		add="-l$name"
	      elif test "$hardcode_shlibpath_var" = yes; then
		case :$finalize_shlibpath: in
		*":$libdir:"*) ;;
		*) func_append finalize_shlibpath "$libdir:" ;;
		esac
		add="-l$name"
	      elif test "$hardcode_automatic" = yes; then
		if test -n "$inst_prefix_dir" &&
		   test -f "$inst_prefix_dir$libdir/$linklib" ; then
		  add="$inst_prefix_dir$libdir/$linklib"
		else
		  add="$libdir/$linklib"
		fi
	      else
		# We cannot seem to hardcode it, guess we'll fake it.
		add_dir="-L$libdir"
		# Try looking first in the location we're being installed to.
		if test -n "$inst_prefix_dir"; then
		  case $libdir in
		    [\\/]*)
		      func_append add_dir " -L$inst_prefix_dir$libdir"
		      ;;
		  esac
		fi
		add="-l$name"
	      fi

	      if test "$linkmode" = prog; then
		test -n "$add_dir" && finalize_deplibs="$add_dir $finalize_deplibs"
		test -n "$add" && finalize_deplibs="$add $finalize_deplibs"
	      else
		test -n "$add_dir" && deplibs="$add_dir $deplibs"
		test -n "$add" && deplibs="$add $deplibs"
	      fi
	    fi
	  elif test "$linkmode" = prog; then
	    # Here we assume that one of hardcode_direct or hardcode_minus_L
	    # is not unsupported.  This is valid on all known static and
	    # shared platforms.
	    if test "$hardcode_direct" != unsupported; then
	      test -n "$old_library" && linklib="$old_library"
	      compile_deplibs="$dir/$linklib $compile_deplibs"
	      finalize_deplibs="$dir/$linklib $finalize_deplibs"
	    else
	      compile_deplibs="-l$name -L$dir $compile_deplibs"
	      finalize_deplibs="-l$name -L$dir $finalize_deplibs"
	    fi
	  elif test "$build_libtool_libs" = yes; then
	    # Not a shared library
	    if test "$deplibs_check_method" != pass_all; then
	      # We're trying link a shared library against a static one
	      # but the system doesn't support it.
	      # Just print a warning and add the library to dependency_libs so
	      # that the program can be linked against the static library.
	      echo
	      $ECHO "*** Warning: This system can not link to static lib archive $lib."
	      echo "*** I have the capability to make that library automatically link in when"
	      echo "*** you link to this library.  But I can only do this if you have a"
	      echo "*** shared version of the library, which you do not appear to have."
	      if test "$module" = yes; then
		echo "*** But as you try to build a module library, libtool will still create "
		echo "*** a static module, that should work as long as the dlopening application"
		echo "*** is linked with the -dlopen flag to resolve symbols at runtime."
		if test -z "$global_symbol_pipe"; then
		  echo
		  echo "*** However, this would only work if libtool was able to extract symbol"
		  echo "*** lists from a program, using \`nm' or equivalent, but libtool could"
		  echo "*** not find such a program.  So, this module is probably useless."
		  echo "*** \`nm' from GNU binutils and a full rebuild may help."
		fi
		if test "$build_old_libs" = no; then
		  build_libtool_libs=module
		  build_old_libs=yes
		else
		  build_libtool_libs=no
		fi
	      fi
	    else
	      deplibs="$dir/$old_library $deplibs"
	      link_static=yes
	    fi
	  fi # link shared/static library?

	  if test "$linkmode" = lib; then
	    if test -n "$dependency_libs" &&
	       { test "$hardcode_into_libs" != yes ||
		 test "$build_old_libs" = yes ||
		 test "$link_static" = yes; }; then
	      # Extract -R from dependency_libs
	      temp_deplibs=
	      for libdir in $dependency_libs; do
		case $libdir in
		-R*) func_stripname '-R' '' "$libdir"
		     temp_xrpath=$func_stripname_result
		     case " $xrpath " in
		     *" $temp_xrpath "*) ;;
		     *) func_append xrpath " $temp_xrpath";;
		     esac;;
		*) func_append temp_deplibs " $libdir";;
		esac
	      done
	      dependency_libs="$temp_deplibs"
	    fi

	    func_append newlib_search_path " $absdir"
	    # Link against this library
	    test "$link_static" = no && newdependency_libs="$abs_ladir/$laname $newdependency_libs"
	    # ... and its dependency_libs
	    tmp_libs=
	    for deplib in $dependency_libs; do
	      newdependency_libs="$deplib $newdependency_libs"
	      case $deplib in
	      -L*) func_stripname '-L' '' "$deplib"
		   func_resolve_sysroot "$func_stripname_result";;
	      *) func_resolve_sysroot "$deplib" ;;
	      esac
	      if $opt_preserve_dup_deps ; then
		case "$tmp_libs " in
		*" $func_resolve_sysroot_result "*)
		  func_append specialdeplibs " $func_resolve_sysroot_result" ;;
		esac
	      fi
	      func_append tmp_libs " $func_resolve_sysroot_result"
	    done

	    if test "$link_all_deplibs" != no; then
	      # Add the search paths of all dependency libraries
	      for deplib in $dependency_libs; do
		path=
		case $deplib in
		-L*) path="$deplib" ;;
		*.la)
		  func_resolve_sysroot "$deplib"
		  deplib=$func_resolve_sysroot_result
		  func_dirname "$deplib" "" "."
		  dir=$func_dirname_result
		  # We need an absolute path.
		  case $dir in
		  [\\/]* | [A-Za-z]:[\\/]*) absdir="$dir" ;;
		  *)
		    absdir=`cd "$dir" && pwd`
		    if test -z "$absdir"; then
		      func_warning "cannot determine absolute directory name of \`$dir'"
		      absdir="$dir"
		    fi
		    ;;
		  esac
		  if $GREP "^installed=no" $deplib > /dev/null; then
		    case $host in
		    *-*-darwin*)
		      depdepl=
		      eval deplibrary_names=`${SED} -n -e 's/^library_names=\(.*\)$/\1/p' $deplib`
		      if test -n "$deplibrary_names" ; then
			for tmp in $deplibrary_names ; do
			  depdepl=$tmp
			done
			if test -f "$absdir/$objdir/$depdepl" ; then
			  depdepl="$absdir/$objdir/$depdepl"
			  darwin_install_name=`${OTOOL} -L $depdepl | awk '{if (NR == 2) {print $1;exit}}'`
			  if test -z "$darwin_install_name"; then
			    darwin_install_name=`${OTOOL64} -L $depdepl | awk '{if (NR == 2) {print $1;exit}}'`
			  fi
			  func_append compiler_flags " ${wl}-dylib_file ${wl}${darwin_install_name}:${depdepl}"
			  func_append linker_flags " -dylib_file ${darwin_install_name}:${depdepl}"
			  path=
			fi
		      fi
		      ;;
		    *)
		      path="-L$absdir/$objdir"
		      ;;
		    esac
		  else
		    eval libdir=`${SED} -n -e 's/^libdir=\(.*\)$/\1/p' $deplib`
		    test -z "$libdir" && \
		      func_fatal_error "\`$deplib' is not a valid libtool archive"
		    test "$absdir" != "$libdir" && \
		      func_warning "\`$deplib' seems to be moved"
		    path="-L$absdir"
		  fi
		  ;;
		esac
		case " $deplibs " in
		*" $path "*) ;;
		*) deplibs="$path $deplibs" ;;
		esac
	      done
	    fi # link_all_deplibs != no
	  fi # linkmode = lib
	done # for deplib in $libs

	if test "$pass" = link; then
	  if test "$linkmode" = "prog"; then
	    compile_deplibs="$new_inherited_linker_flags $compile_deplibs"
	    finalize_deplibs="$new_inherited_linker_flags $finalize_deplibs"
	  else
	    compiler_flags="$compiler_flags "`$ECHO " $new_inherited_linker_flags" | $SED 's% \([^ $]*\).ltframework% -framework \1%g'`
	  fi
	fi
	dependency_libs="$newdependency_libs"
	if test "$pass" = dlpreopen; then
	  # Link the dlpreopened libraries before other libraries
	  for deplib in $save_deplibs; do
	    deplibs="$deplib $deplibs"
	  done
	fi

	if test "$pass" != dlopen; then
	  if test "$pass" != conv;
	  then
	    # Make sure lib_search_path contains only unique directories.
	    lib_search_path=
	    for dir in $newlib_search_path; do
	      case "$lib_search_path " in
	      *" $dir "*) ;;
	      *) func_append lib_search_path " $dir" ;;
	      esac
	    done
	    newlib_search_path=
	  fi

	  if test "$linkmode,$pass" != "prog,link"; then
	    vars="deplibs"
	  else
	    vars="compile_deplibs finalize_deplibs"
	  fi
	  for var in $vars dependency_libs; do
	    # Add libraries to $var in reverse order
	    eval tmp_libs=\"\$$var\"
	    new_libs=
	    for deplib in $tmp_libs; do
	      # FIXME: Pedantically, this is the right thing to do, so
	      #        that some nasty dependency loop isn't accidentally
	      #        broken:
	      #new_libs="$deplib $new_libs"
	      # Pragmatically, this seems to cause very few problems in
	      # practice:
	      case $deplib in
	      -L*) new_libs="$deplib $new_libs" ;;
	      -R*) ;;
	      *)
		# And here is the reason: when a library appears more
		# than once as an explicit dependence of a library, or
		# is implicitly linked in more than once by the
		# compiler, it is considered special, and multiple
		# occurrences thereof are not removed.  Compare this
		# with having the same library being listed as a
		# dependency of multiple other libraries: in this case,
		# we know (pedantically, we assume) the library does not
		# need to be listed more than once, so we keep only the
		# last copy.  This is not always right, but it is rare
		# enough that we require users that really mean to play
		# such unportable linking tricks to link the library
		# using -Wl,-lname, so that libtool does not consider it
		# for duplicate removal.
		case " $specialdeplibs " in
		*" $deplib "*) new_libs="$deplib $new_libs" ;;
		*)
		  case " $new_libs " in
		  *" $deplib "*) ;;
		  *) new_libs="$deplib $new_libs" ;;
		  esac
		  ;;
		esac
		;;
	      esac
	    done
	    tmp_libs=
	    for deplib in $new_libs; do
	      case $deplib in
	      -L*)
		case " $tmp_libs " in
		*" $deplib "*) ;;
		*) func_append tmp_libs " $deplib" ;;
		esac
		;;
	      *) func_append tmp_libs " $deplib" ;;
	      esac
	    done
	    eval $var=\"$tmp_libs\"
	  done # for var
	fi
	# Last step: remove runtime libs from dependency_libs
	# (they stay in deplibs)
	tmp_libs=
	for i in $dependency_libs ; do
	  case " $predeps $postdeps $compiler_lib_search_path " in
	  *" $i "*)
	    i=""
	    ;;
	  esac
	  if test -n "$i" ; then
	    func_append tmp_libs " $i"
	  fi
	done
	dependency_libs=$tmp_libs
      done # for pass
      if test "$linkmode" = prog; then
	dlfiles="$newdlfiles"
      fi
      if test "$linkmode" = prog || test "$linkmode" = lib; then
	dlprefiles="$newdlprefiles"
      fi

    case $linkmode in
    oldlib)
      if test -n "$dlfiles$dlprefiles" || test "$dlself" != no; then
	func_warning "\`-dlopen' is ignored for archives"
      fi

      case " $deplibs" in
      *\ -l* | *\ -L*)
	func_warning "\`-l' and \`-L' are ignored for archives" ;;
      esac

      test -n "$rpath" && \
	func_warning "\`-rpath' is ignored for archives"

      test -n "$xrpath" && \
	func_warning "\`-R' is ignored for archives"

      test -n "$vinfo" && \
	func_warning "\`-version-info/-version-number' is ignored for archives"

      test -n "$release" && \
	func_warning "\`-release' is ignored for archives"

      test -n "$export_symbols$export_symbols_regex" && \
	func_warning "\`-export-symbols' is ignored for archives"

      # Now set the variables for building old libraries.
      build_libtool_libs=no
      oldlibs="$output"
      func_append objs "$old_deplibs"
      ;;

    lib)
      # Make sure we only generate libraries of the form `libNAME.la'.
      case $outputname in
      lib*)
	func_stripname 'lib' '.la' "$outputname"
	name=$func_stripname_result
	eval shared_ext=\"$shrext_cmds\"
	eval libname=\"$libname_spec\"
	;;
      *)
	test "$module" = no && \
	  func_fatal_help "libtool library \`$output' must begin with \`lib'"

	if test "$need_lib_prefix" != no; then
	  # Add the "lib" prefix for modules if required
	  func_stripname '' '.la' "$outputname"
	  name=$func_stripname_result
	  eval shared_ext=\"$shrext_cmds\"
	  eval libname=\"$libname_spec\"
	else
	  func_stripname '' '.la' "$outputname"
	  libname=$func_stripname_result
	fi
	;;
      esac

      if test -n "$objs"; then
	if test "$deplibs_check_method" != pass_all; then
	  func_fatal_error "cannot build libtool library \`$output' from non-libtool objects on this host:$objs"
	else
	  echo
	  $ECHO "*** Warning: Linking the shared library $output against the non-libtool"
	  $ECHO "*** objects $objs is not portable!"
	  func_append libobjs " $objs"
	fi
      fi

      test "$dlself" != no && \
	func_warning "\`-dlopen self' is ignored for libtool libraries"

      set dummy $rpath
      shift
      test "$#" -gt 1 && \
	func_warning "ignoring multiple \`-rpath's for a libtool library"

      install_libdir="$1"

      oldlibs=
      if test -z "$rpath"; then
	if test "$build_libtool_libs" = yes; then
	  # Building a libtool convenience library.
	  # Some compilers have problems with a `.al' extension so
	  # convenience libraries should have the same extension an
	  # archive normally would.
	  oldlibs="$output_objdir/$libname.$libext $oldlibs"
	  build_libtool_libs=convenience
	  build_old_libs=yes
	fi

	test -n "$vinfo" && \
	  func_warning "\`-version-info/-version-number' is ignored for convenience libraries"

	test -n "$release" && \
	  func_warning "\`-release' is ignored for convenience libraries"
      else
	# Parse the version information argument.
	save_ifs="$IFS"; IFS=':'
	set dummy $vinfo 0 0 0
	shift
	IFS="$save_ifs"

	test -n "$7" && \
	  func_fatal_help "too many parameters to \`-version-info'"

	# convert absolute version numbers to libtool ages
	# this retains compatibility with .la files and attempts
	# to make the code below a bit more comprehensible

	case $vinfo_number in
	yes)
	  number_major="$1"
	  number_minor="$2"
	  number_revision="$3"
	  #
	  # There are really only two kinds -- those that
	  # use the current revision as the major version
	  # and those that subtract age and use age as
	  # a minor version.  But, then there is irix
	  # which has an extra 1 added just for fun
	  #
	  case $version_type in
	  # correct linux to gnu/linux during the next big refactor
	  darwin|linux|osf|windows|none)
	    func_arith $number_major + $number_minor
	    current=$func_arith_result
	    age="$number_minor"
	    revision="$number_revision"
	    ;;
	  freebsd-aout|freebsd-elf|qnx|sunos)
	    current="$number_major"
	    revision="$number_minor"
	    age="0"
	    ;;
	  irix|nonstopux)
	    func_arith $number_major + $number_minor
	    current=$func_arith_result
	    age="$number_minor"
	    revision="$number_minor"
	    lt_irix_increment=no
	    ;;
	  *)
	    func_fatal_configuration "$modename: unknown library version type \`$version_type'"
	    ;;
	  esac
	  ;;
	no)
	  current="$1"
	  revision="$2"
	  age="$3"
	  ;;
	esac

	# Check that each of the things are valid numbers.
	case $current in
	0|[1-9]|[1-9][0-9]|[1-9][0-9][0-9]|[1-9][0-9][0-9][0-9]|[1-9][0-9][0-9][0-9][0-9]) ;;
	*)
	  func_error "CURRENT \`$current' must be a nonnegative integer"
	  func_fatal_error "\`$vinfo' is not valid version information"
	  ;;
	esac

	case $revision in
	0|[1-9]|[1-9][0-9]|[1-9][0-9][0-9]|[1-9][0-9][0-9][0-9]|[1-9][0-9][0-9][0-9][0-9]) ;;
	*)
	  func_error "REVISION \`$revision' must be a nonnegative integer"
	  func_fatal_error "\`$vinfo' is not valid version information"
	  ;;
	esac

	case $age in
	0|[1-9]|[1-9][0-9]|[1-9][0-9][0-9]|[1-9][0-9][0-9][0-9]|[1-9][0-9][0-9][0-9][0-9]) ;;
	*)
	  func_error "AGE \`$age' must be a nonnegative integer"
	  func_fatal_error "\`$vinfo' is not valid version information"
	  ;;
	esac

	if test "$age" -gt "$current"; then
	  func_error "AGE \`$age' is greater than the current interface number \`$current'"
	  func_fatal_error "\`$vinfo' is not valid version information"
	fi

	# Calculate the version variables.
	major=
	versuffix=
	verstring=
	case $version_type in
	none) ;;

	darwin)
	  # Like Linux, but with the current version available in
	  # verstring for coding it into the library header
	  func_arith $current - $age
	  major=.$func_arith_result
	  versuffix="$major.$age.$revision"
	  # Darwin ld doesn't like 0 for these options...
	  func_arith $current + 1
	  minor_current=$func_arith_result
	  xlcverstring="${wl}-compatibility_version ${wl}$minor_current ${wl}-current_version ${wl}$minor_current.$revision"
	  verstring="-compatibility_version $minor_current -current_version $minor_current.$revision"
	  ;;

	freebsd-aout)
	  major=".$current"
	  versuffix=".$current.$revision";
	  ;;

	freebsd-elf)
	  major=".$current"
	  versuffix=".$current"
	  ;;

	irix | nonstopux)
	  if test "X$lt_irix_increment" = "Xno"; then
	    func_arith $current - $age
	  else
	    func_arith $current - $age + 1
	  fi
	  major=$func_arith_result

	  case $version_type in
	    nonstopux) verstring_prefix=nonstopux ;;
	    *)         verstring_prefix=sgi ;;
	  esac
	  verstring="$verstring_prefix$major.$revision"

	  # Add in all the interfaces that we are compatible with.
	  loop=$revision
	  while test "$loop" -ne 0; do
	    func_arith $revision - $loop
	    iface=$func_arith_result
	    func_arith $loop - 1
	    loop=$func_arith_result
	    verstring="$verstring_prefix$major.$iface:$verstring"
	  done

	  # Before this point, $major must not contain `.'.
	  major=.$major
	  versuffix="$major.$revision"
	  ;;

	linux) # correct to gnu/linux during the next big refactor
	  func_arith $current - $age
	  major=.$func_arith_result
	  versuffix="$major.$age.$revision"
	  ;;

	osf)
	  func_arith $current - $age
	  major=.$func_arith_result
	  versuffix=".$current.$age.$revision"
	  verstring="$current.$age.$revision"

	  # Add in all the interfaces that we are compatible with.
	  loop=$age
	  while test "$loop" -ne 0; do
	    func_arith $current - $loop
	    iface=$func_arith_result
	    func_arith $loop - 1
	    loop=$func_arith_result
	    verstring="$verstring:${iface}.0"
	  done

	  # Make executables depend on our current version.
	  func_append verstring ":${current}.0"
	  ;;

	qnx)
	  major=".$current"
	  versuffix=".$current"
	  ;;

	sunos)
	  major=".$current"
	  versuffix=".$current.$revision"
	  ;;

	windows)
	  # Use '-' rather than '.', since we only want one
	  # extension on DOS 8.3 filesystems.
	  func_arith $current - $age
	  major=$func_arith_result
	  versuffix="-$major"
	  ;;

	*)
	  func_fatal_configuration "unknown library version type \`$version_type'"
	  ;;
	esac

	# Clear the version info if we defaulted, and they specified a release.
	if test -z "$vinfo" && test -n "$release"; then
	  major=
	  case $version_type in
	  darwin)
	    # we can't check for "0.0" in archive_cmds due to quoting
	    # problems, so we reset it completely
	    verstring=
	    ;;
	  *)
	    verstring="0.0"
	    ;;
	  esac
	  if test "$need_version" = no; then
	    versuffix=
	  else
	    versuffix=".0.0"
	  fi
	fi

	# Remove version info from name if versioning should be avoided
	if test "$avoid_version" = yes && test "$need_version" = no; then
	  major=
	  versuffix=
	  verstring=""
	fi

	# Check to see if the archive will have undefined symbols.
	if test "$allow_undefined" = yes; then
	  if test "$allow_undefined_flag" = unsupported; then
	    func_warning "undefined symbols not allowed in $host shared libraries"
	    build_libtool_libs=no
	    build_old_libs=yes
	  fi
	else
	  # Don't allow undefined symbols.
	  allow_undefined_flag="$no_undefined_flag"
	fi

      fi

      func_generate_dlsyms "$libname" "$libname" "yes"
      func_append libobjs " $symfileobj"
      test "X$libobjs" = "X " && libobjs=

      if test "$opt_mode" != relink; then
	# Remove our outputs, but don't remove object files since they
	# may have been created when compiling PIC objects.
	removelist=
	tempremovelist=`$ECHO "$output_objdir/*"`
	for p in $tempremovelist; do
	  case $p in
	    *.$objext | *.gcno)
	       ;;
	    $output_objdir/$outputname | $output_objdir/$libname.* | $output_objdir/${libname}${release}.*)
	       if test "X$precious_files_regex" != "X"; then
		 if $ECHO "$p" | $EGREP -e "$precious_files_regex" >/dev/null 2>&1
		 then
		   continue
		 fi
	       fi
	       func_append removelist " $p"
	       ;;
	    *) ;;
	  esac
	done
	test -n "$removelist" && \
	  func_show_eval "${RM}r \$removelist"
      fi

      # Now set the variables for building old libraries.
      if test "$build_old_libs" = yes && test "$build_libtool_libs" != convenience ; then
	func_append oldlibs " $output_objdir/$libname.$libext"

	# Transform .lo files to .o files.
	oldobjs="$objs "`$ECHO "$libobjs" | $SP2NL | $SED "/\.${libext}$/d; $lo2o" | $NL2SP`
      fi

      # Eliminate all temporary directories.
      #for path in $notinst_path; do
      #	lib_search_path=`$ECHO "$lib_search_path " | $SED "s% $path % %g"`
      #	deplibs=`$ECHO "$deplibs " | $SED "s% -L$path % %g"`
      #	dependency_libs=`$ECHO "$dependency_libs " | $SED "s% -L$path % %g"`
      #done

      if test -n "$xrpath"; then
	# If the user specified any rpath flags, then add them.
	temp_xrpath=
	for libdir in $xrpath; do
	  func_replace_sysroot "$libdir"
	  func_append temp_xrpath " -R$func_replace_sysroot_result"
	  case "$finalize_rpath " in
	  *" $libdir "*) ;;
	  *) func_append finalize_rpath " $libdir" ;;
	  esac
	done
	if test "$hardcode_into_libs" != yes || test "$build_old_libs" = yes; then
	  dependency_libs="$temp_xrpath $dependency_libs"
	fi
      fi

      # Make sure dlfiles contains only unique files that won't be dlpreopened
      old_dlfiles="$dlfiles"
      dlfiles=
      for lib in $old_dlfiles; do
	case " $dlprefiles $dlfiles " in
	*" $lib "*) ;;
	*) func_append dlfiles " $lib" ;;
	esac
      done

      # Make sure dlprefiles contains only unique files
      old_dlprefiles="$dlprefiles"
      dlprefiles=
      for lib in $old_dlprefiles; do
	case "$dlprefiles " in
	*" $lib "*) ;;
	*) func_append dlprefiles " $lib" ;;
	esac
      done

      if test "$build_libtool_libs" = yes; then
	if test -n "$rpath"; then
	  case $host in
	  *-*-cygwin* | *-*-mingw* | *-*-pw32* | *-*-os2* | *-*-beos* | *-cegcc* | *-*-haiku*)
	    # these systems don't actually have a c library (as such)!
	    ;;
	  *-*-rhapsody* | *-*-darwin1.[012])
	    # Rhapsody C library is in the System framework
	    func_append deplibs " System.ltframework"
	    ;;
	  *-*-netbsd*)
	    # Don't link with libc until the a.out ld.so is fixed.
	    ;;
	  *-*-openbsd* | *-*-freebsd* | *-*-dragonfly*)
	    # Do not include libc due to us having libc/libc_r.
	    ;;
	  *-*-sco3.2v5* | *-*-sco5v6*)
	    # Causes problems with __ctype
	    ;;
	  *-*-sysv4.2uw2* | *-*-sysv5* | *-*-unixware* | *-*-OpenUNIX*)
	    # Compiler inserts libc in the correct place for threads to work
	    ;;
	  *)
	    # Add libc to deplibs on all other systems if necessary.
	    if test "$build_libtool_need_lc" = "yes"; then
	      func_append deplibs " -lc"
	    fi
	    ;;
	  esac
	fi

	# Transform deplibs into only deplibs that can be linked in shared.
	name_save=$name
	libname_save=$libname
	release_save=$release
	versuffix_save=$versuffix
	major_save=$major
	# I'm not sure if I'm treating the release correctly.  I think
	# release should show up in the -l (ie -lgmp5) so we don't want to
	# add it in twice.  Is that correct?
	release=""
	versuffix=""
	major=""
	newdeplibs=
	droppeddeps=no
	case $deplibs_check_method in
	pass_all)
	  # Don't check for shared/static.  Everything works.
	  # This might be a little naive.  We might want to check
	  # whether the library exists or not.  But this is on
	  # osf3 & osf4 and I'm not really sure... Just
	  # implementing what was already the behavior.
	  newdeplibs=$deplibs
	  ;;
	test_compile)
	  # This code stresses the "libraries are programs" paradigm to its
	  # limits. Maybe even breaks it.  We compile a program, linking it
	  # against the deplibs as a proxy for the library.  Then we can check
	  # whether they linked in statically or dynamically with ldd.
	  $opt_dry_run || $RM conftest.c
	  cat > conftest.c </dev/null` $nocaseglob
	      else
		potential_libs=`ls $i/$libnameglob[.-]* 2>/dev/null`
	      fi
	      for potent_lib in $potential_libs; do
		# Follow soft links.
		if ls -lLd "$potent_lib" 2>/dev/null |
		   $GREP " -> " >/dev/null; then
		  continue
		fi
		# The statement above tries to avoid entering an
		# endless loop below, in case of cyclic links.
		# We might still enter an endless loop, since a link
		# loop can be closed while we follow links,
		# but so what?
		potlib="$potent_lib"
		while test -h "$potlib" 2>/dev/null; do
		  potliblink=`ls -ld $potlib | ${SED} 's/.* -> //'`
		  case $potliblink in
		  [\\/]* | [A-Za-z]:[\\/]*) potlib="$potliblink";;
		  *) potlib=`$ECHO "$potlib" | $SED 's,[^/]*$,,'`"$potliblink";;
		  esac
		done
		if eval $file_magic_cmd \"\$potlib\" 2>/dev/null |
		   $SED -e 10q |
		   $EGREP "$file_magic_regex" > /dev/null; then
		  func_append newdeplibs " $a_deplib"
		  a_deplib=""
		  break 2
		fi
	      done
	    done
	  fi
	  if test -n "$a_deplib" ; then
	    droppeddeps=yes
	    echo
	    $ECHO "*** Warning: linker path does not have real file for library $a_deplib."
	    echo "*** I have the capability to make that library automatically link in when"
	    echo "*** you link to this library.  But I can only do this if you have a"
	    echo "*** shared version of the library, which you do not appear to have"
	    echo "*** because I did check the linker path looking for a file starting"
	    if test -z "$potlib" ; then
	      $ECHO "*** with $libname but no candidates were found. (...for file magic test)"
	    else
	      $ECHO "*** with $libname and none of the candidates passed a file format test"
	      $ECHO "*** using a file magic. Last file checked: $potlib"
	    fi
	  fi
	  ;;
	*)
	  # Add a -L argument.
	  func_append newdeplibs " $a_deplib"
	  ;;
	esac
	done # Gone through all deplibs.
	  ;;
	match_pattern*)
	  set dummy $deplibs_check_method; shift
	  match_pattern_regex=`expr "$deplibs_check_method" : "$1 \(.*\)"`
	  for a_deplib in $deplibs; do
	    case $a_deplib in
	    -l*)
	      func_stripname -l '' "$a_deplib"
	      name=$func_stripname_result
	      if test "X$allow_libtool_libs_with_static_runtimes" = "Xyes" ; then
		case " $predeps $postdeps " in
		*" $a_deplib "*)
		  func_append newdeplibs " $a_deplib"
		  a_deplib=""
		  ;;
		esac
	      fi
	      if test -n "$a_deplib" ; then
		libname=`eval "\\$ECHO \"$libname_spec\""`
		for i in $lib_search_path $sys_lib_search_path $shlib_search_path; do
		  potential_libs=`ls $i/$libname[.-]* 2>/dev/null`
		  for potent_lib in $potential_libs; do
		    potlib="$potent_lib" # see symlink-check above in file_magic test
		    if eval "\$ECHO \"$potent_lib\"" 2>/dev/null | $SED 10q | \
		       $EGREP "$match_pattern_regex" > /dev/null; then
		      func_append newdeplibs " $a_deplib"
		      a_deplib=""
		      break 2
		    fi
		  done
		done
	      fi
	      if test -n "$a_deplib" ; then
		droppeddeps=yes
		echo
		$ECHO "*** Warning: linker path does not have real file for library $a_deplib."
		echo "*** I have the capability to make that library automatically link in when"
		echo "*** you link to this library.  But I can only do this if you have a"
		echo "*** shared version of the library, which you do not appear to have"
		echo "*** because I did check the linker path looking for a file starting"
		if test -z "$potlib" ; then
		  $ECHO "*** with $libname but no candidates were found. (...for regex pattern test)"
		else
		  $ECHO "*** with $libname and none of the candidates passed a file format test"
		  $ECHO "*** using a regex pattern. Last file checked: $potlib"
		fi
	      fi
	      ;;
	    *)
	      # Add a -L argument.
	      func_append newdeplibs " $a_deplib"
	      ;;
	    esac
	  done # Gone through all deplibs.
	  ;;
	none | unknown | *)
	  newdeplibs=""
	  tmp_deplibs=`$ECHO " $deplibs" | $SED 's/ -lc$//; s/ -[LR][^ ]*//g'`
	  if test "X$allow_libtool_libs_with_static_runtimes" = "Xyes" ; then
	    for i in $predeps $postdeps ; do
	      # can't use Xsed below, because $i might contain '/'
	      tmp_deplibs=`$ECHO " $tmp_deplibs" | $SED "s,$i,,"`
	    done
	  fi
	  case $tmp_deplibs in
	  *[!\ \ ]*)
	    echo
	    if test "X$deplibs_check_method" = "Xnone"; then
	      echo "*** Warning: inter-library dependencies are not supported in this platform."
	    else
	      echo "*** Warning: inter-library dependencies are not known to be supported."
	    fi
	    echo "*** All declared inter-library dependencies are being dropped."
	    droppeddeps=yes
	    ;;
	  esac
	  ;;
	esac
	versuffix=$versuffix_save
	major=$major_save
	release=$release_save
	libname=$libname_save
	name=$name_save

	case $host in
	*-*-rhapsody* | *-*-darwin1.[012])
	  # On Rhapsody replace the C library with the System framework
	  newdeplibs=`$ECHO " $newdeplibs" | $SED 's/ -lc / System.ltframework /'`
	  ;;
	esac

	if test "$droppeddeps" = yes; then
	  if test "$module" = yes; then
	    echo
	    echo "*** Warning: libtool could not satisfy all declared inter-library"
	    $ECHO "*** dependencies of module $libname.  Therefore, libtool will create"
	    echo "*** a static module, that should work as long as the dlopening"
	    echo "*** application is linked with the -dlopen flag."
	    if test -z "$global_symbol_pipe"; then
	      echo
	      echo "*** However, this would only work if libtool was able to extract symbol"
	      echo "*** lists from a program, using \`nm' or equivalent, but libtool could"
	      echo "*** not find such a program.  So, this module is probably useless."
	      echo "*** \`nm' from GNU binutils and a full rebuild may help."
	    fi
	    if test "$build_old_libs" = no; then
	      oldlibs="$output_objdir/$libname.$libext"
	      build_libtool_libs=module
	      build_old_libs=yes
	    else
	      build_libtool_libs=no
	    fi
	  else
	    echo "*** The inter-library dependencies that have been dropped here will be"
	    echo "*** automatically added whenever a program is linked with this library"
	    echo "*** or is declared to -dlopen it."

	    if test "$allow_undefined" = no; then
	      echo
	      echo "*** Since this library must not contain undefined symbols,"
	      echo "*** because either the platform does not support them or"
	      echo "*** it was explicitly requested with -no-undefined,"
	      echo "*** libtool will only create a static version of it."
	      if test "$build_old_libs" = no; then
		oldlibs="$output_objdir/$libname.$libext"
		build_libtool_libs=module
		build_old_libs=yes
	      else
		build_libtool_libs=no
	      fi
	    fi
	  fi
	fi
	# Done checking deplibs!
	deplibs=$newdeplibs
      fi
      # Time to change all our "foo.ltframework" stuff back to "-framework foo"
      case $host in
	*-*-darwin*)
	  newdeplibs=`$ECHO " $newdeplibs" | $SED 's% \([^ $]*\).ltframework% -framework \1%g'`
	  new_inherited_linker_flags=`$ECHO " $new_inherited_linker_flags" | $SED 's% \([^ $]*\).ltframework% -framework \1%g'`
	  deplibs=`$ECHO " $deplibs" | $SED 's% \([^ $]*\).ltframework% -framework \1%g'`
	  ;;
      esac

      # move library search paths that coincide with paths to not yet
      # installed libraries to the beginning of the library search list
      new_libs=
      for path in $notinst_path; do
	case " $new_libs " in
	*" -L$path/$objdir "*) ;;
	*)
	  case " $deplibs " in
	  *" -L$path/$objdir "*)
	    func_append new_libs " -L$path/$objdir" ;;
	  esac
	  ;;
	esac
      done
      for deplib in $deplibs; do
	case $deplib in
	-L*)
	  case " $new_libs " in
	  *" $deplib "*) ;;
	  *) func_append new_libs " $deplib" ;;
	  esac
	  ;;
	*) func_append new_libs " $deplib" ;;
	esac
      done
      deplibs="$new_libs"

      # All the library-specific variables (install_libdir is set above).
      library_names=
      old_library=
      dlname=

      # Test again, we may have decided not to build it any more
      if test "$build_libtool_libs" = yes; then
	# Remove ${wl} instances when linking with ld.
	# FIXME: should test the right _cmds variable.
	case $archive_cmds in
	  *\$LD\ *) wl= ;;
        esac
	if test "$hardcode_into_libs" = yes; then
	  # Hardcode the library paths
	  hardcode_libdirs=
	  dep_rpath=
	  rpath="$finalize_rpath"
	  test "$opt_mode" != relink && rpath="$compile_rpath$rpath"
	  for libdir in $rpath; do
	    if test -n "$hardcode_libdir_flag_spec"; then
	      if test -n "$hardcode_libdir_separator"; then
		func_replace_sysroot "$libdir"
		libdir=$func_replace_sysroot_result
		if test -z "$hardcode_libdirs"; then
		  hardcode_libdirs="$libdir"
		else
		  # Just accumulate the unique libdirs.
		  case $hardcode_libdir_separator$hardcode_libdirs$hardcode_libdir_separator in
		  *"$hardcode_libdir_separator$libdir$hardcode_libdir_separator"*)
		    ;;
		  *)
		    func_append hardcode_libdirs "$hardcode_libdir_separator$libdir"
		    ;;
		  esac
		fi
	      else
		eval flag=\"$hardcode_libdir_flag_spec\"
		func_append dep_rpath " $flag"
	      fi
	    elif test -n "$runpath_var"; then
	      case "$perm_rpath " in
	      *" $libdir "*) ;;
	      *) func_append perm_rpath " $libdir" ;;
	      esac
	    fi
	  done
	  # Substitute the hardcoded libdirs into the rpath.
	  if test -n "$hardcode_libdir_separator" &&
	     test -n "$hardcode_libdirs"; then
	    libdir="$hardcode_libdirs"
	    eval "dep_rpath=\"$hardcode_libdir_flag_spec\""
	  fi
	  if test -n "$runpath_var" && test -n "$perm_rpath"; then
	    # We should set the runpath_var.
	    rpath=
	    for dir in $perm_rpath; do
	      func_append rpath "$dir:"
	    done
	    eval "$runpath_var='$rpath\$$runpath_var'; export $runpath_var"
	  fi
	  test -n "$dep_rpath" && deplibs="$dep_rpath $deplibs"
	fi

	shlibpath="$finalize_shlibpath"
	test "$opt_mode" != relink && shlibpath="$compile_shlibpath$shlibpath"
	if test -n "$shlibpath"; then
	  eval "$shlibpath_var='$shlibpath\$$shlibpath_var'; export $shlibpath_var"
	fi

	# Get the real and link names of the library.
	eval shared_ext=\"$shrext_cmds\"
	eval library_names=\"$library_names_spec\"
	set dummy $library_names
	shift
	realname="$1"
	shift

	if test -n "$soname_spec"; then
	  eval soname=\"$soname_spec\"
	else
	  soname="$realname"
	fi
	if test -z "$dlname"; then
	  dlname=$soname
	fi

	lib="$output_objdir/$realname"
	linknames=
	for link
	do
	  func_append linknames " $link"
	done

	# Use standard objects if they are pic
	test -z "$pic_flag" && libobjs=`$ECHO "$libobjs" | $SP2NL | $SED "$lo2o" | $NL2SP`
	test "X$libobjs" = "X " && libobjs=

	delfiles=
	if test -n "$export_symbols" && test -n "$include_expsyms"; then
	  $opt_dry_run || cp "$export_symbols" "$output_objdir/$libname.uexp"
	  export_symbols="$output_objdir/$libname.uexp"
	  func_append delfiles " $export_symbols"
	fi

	orig_export_symbols=
	case $host_os in
	cygwin* | mingw* | cegcc*)
	  if test -n "$export_symbols" && test -z "$export_symbols_regex"; then
	    # exporting using user supplied symfile
	    if test "x`$SED 1q $export_symbols`" != xEXPORTS; then
	      # and it's NOT already a .def file. Must figure out
	      # which of the given symbols are data symbols and tag
	      # them as such. So, trigger use of export_symbols_cmds.
	      # export_symbols gets reassigned inside the "prepare
	      # the list of exported symbols" if statement, so the
	      # include_expsyms logic still works.
	      orig_export_symbols="$export_symbols"
	      export_symbols=
	      always_export_symbols=yes
	    fi
	  fi
	  ;;
	esac

	# Prepare the list of exported symbols
	if test -z "$export_symbols"; then
	  if test "$always_export_symbols" = yes || test -n "$export_symbols_regex"; then
	    func_verbose "generating symbol list for \`$libname.la'"
	    export_symbols="$output_objdir/$libname.exp"
	    $opt_dry_run || $RM $export_symbols
	    cmds=$export_symbols_cmds
	    save_ifs="$IFS"; IFS='~'
	    for cmd1 in $cmds; do
	      IFS="$save_ifs"
	      # Take the normal branch if the nm_file_list_spec branch
	      # doesn't work or if tool conversion is not needed.
		case $nm_file_list_spec~$to_tool_file_cmd in
		  *~func_convert_file_noop | *~func_convert_file_msys_to_w32 | ~*)
		    try_normal_branch=yes
		    eval cmd=\"$cmd1\"
		    func_len " $cmd"
		    len=$func_len_result
		    ;;
		  *)
		    try_normal_branch=no
		    ;;
		esac
		if test "$try_normal_branch" = yes \
		   && { test "$len" -lt "$max_cmd_len" \
			|| test "$max_cmd_len" -le -1; }
		then
		  func_show_eval "$cmd" 'exit $?'
		  skipped_export=false
		elif test -n "$nm_file_list_spec"; then
		  func_basename "$output"
		  output_la=$func_basename_result
		  save_libobjs=$libobjs
		  save_output=$output
		  output=${output_objdir}/${output_la}.nm
		  func_to_tool_file "$output"
		  libobjs=$nm_file_list_spec$func_to_tool_file_result
		  func_append delfiles " $output"
		  func_verbose "creating $NM input file list: $output"
		  for obj in $save_libobjs; do
		    func_to_tool_file "$obj"
		    $ECHO "$func_to_tool_file_result"
		  done > "$output"
		  eval cmd=\"$cmd1\"
		  func_show_eval "$cmd" 'exit $?'
		  output=$save_output
		  libobjs=$save_libobjs
		  skipped_export=false
		else
		  # The command line is too long to execute in one step.
		  func_verbose "using reloadable object file for export list..."
		  skipped_export=:
		  # Break out early, otherwise skipped_export may be
		  # set to false by a later but shorter cmd.
		  break
		fi
	      done
	      IFS="$save_ifs"
	      if test -n "$export_symbols_regex" && test "X$skipped_export" != "X:"; then
		func_show_eval '$EGREP -e "$export_symbols_regex" "$export_symbols" > "${export_symbols}T"'
		func_show_eval '$MV "${export_symbols}T" "$export_symbols"'
	      fi
	    fi
	  fi

	  if test -n "$export_symbols" && test -n "$include_expsyms"; then
	    tmp_export_symbols="$export_symbols"
	    test -n "$orig_export_symbols" && tmp_export_symbols="$orig_export_symbols"
	    $opt_dry_run || eval '$ECHO "$include_expsyms" | $SP2NL >> "$tmp_export_symbols"'
	  fi

	  if test "X$skipped_export" != "X:" && test -n "$orig_export_symbols"; then
	    # The given exports_symbols file has to be filtered, so filter it.
	    func_verbose "filter symbol list for \`$libname.la' to tag DATA exports"
	    # FIXME: $output_objdir/$libname.filter potentially contains lots of
	    # 's' commands which not all seds can handle. GNU sed should be fine
	    # though. Also, the filter scales superlinearly with the number of
	    # global variables. join(1) would be nice here, but unfortunately
	    # isn't a blessed tool.
	    $opt_dry_run || $SED -e '/[ ,]DATA/!d;s,\(.*\)\([ \,].*\),s|^\1$|\1\2|,' < $export_symbols > $output_objdir/$libname.filter
	    func_append delfiles " $export_symbols $output_objdir/$libname.filter"
	    export_symbols=$output_objdir/$libname.def
	    $opt_dry_run || $SED -f $output_objdir/$libname.filter < $orig_export_symbols > $export_symbols
	  fi

	  tmp_deplibs=
	  for test_deplib in $deplibs; do
	    case " $convenience " in
	    *" $test_deplib "*) ;;
	    *)
	      func_append tmp_deplibs " $test_deplib"
	      ;;
	    esac
	  done
	  deplibs="$tmp_deplibs"

	  if test -n "$convenience"; then
	    if test -n "$whole_archive_flag_spec" &&
	      test "$compiler_needs_object" = yes &&
	      test -z "$libobjs"; then
	      # extract the archives, so we have objects to list.
	      # TODO: could optimize this to just extract one archive.
	      whole_archive_flag_spec=
	    fi
	    if test -n "$whole_archive_flag_spec"; then
	      save_libobjs=$libobjs
	      eval libobjs=\"\$libobjs $whole_archive_flag_spec\"
	      test "X$libobjs" = "X " && libobjs=
	    else
	      gentop="$output_objdir/${outputname}x"
	      func_append generated " $gentop"

	      func_extract_archives $gentop $convenience
	      func_append libobjs " $func_extract_archives_result"
	      test "X$libobjs" = "X " && libobjs=
	    fi
	  fi

	  if test "$thread_safe" = yes && test -n "$thread_safe_flag_spec"; then
	    eval flag=\"$thread_safe_flag_spec\"
	    func_append linker_flags " $flag"
	  fi

	  # Make a backup of the uninstalled library when relinking
	  if test "$opt_mode" = relink; then
	    $opt_dry_run || eval '(cd $output_objdir && $RM ${realname}U && $MV $realname ${realname}U)' || exit $?
	  fi

	  # Do each of the archive commands.
	  if test "$module" = yes && test -n "$module_cmds" ; then
	    if test -n "$export_symbols" && test -n "$module_expsym_cmds"; then
	      eval test_cmds=\"$module_expsym_cmds\"
	      cmds=$module_expsym_cmds
	    else
	      eval test_cmds=\"$module_cmds\"
	      cmds=$module_cmds
	    fi
	  else
	    if test -n "$export_symbols" && test -n "$archive_expsym_cmds"; then
	      eval test_cmds=\"$archive_expsym_cmds\"
	      cmds=$archive_expsym_cmds
	    else
	      eval test_cmds=\"$archive_cmds\"
	      cmds=$archive_cmds
	    fi
	  fi

	  if test "X$skipped_export" != "X:" &&
	     func_len " $test_cmds" &&
	     len=$func_len_result &&
	     test "$len" -lt "$max_cmd_len" ||
	     test "$max_cmd_len" -le -1; then
	    :
	  else
	    # The command line is too long to link in one step, link piecewise
	    # or, if using GNU ld and skipped_export is not :, use a linker
	    # script.

	    # Save the value of $output and $libobjs because we want to
	    # use them later. If we have whole_archive_flag_spec, we
	    # want to use save_libobjs as it was before
	    # whole_archive_flag_spec was expanded, because we can't
	    # assume the linker understands whole_archive_flag_spec.
	    # This may have to be revisited, in case too many
	    # convenience libraries get linked in and end up exceeding
	    # the spec.
	    if test -z "$convenience" || test -z "$whole_archive_flag_spec"; then
	      save_libobjs=$libobjs
	    fi
	    save_output=$output
	    func_basename "$output"
	    output_la=$func_basename_result

	    # Clear the reloadable object creation command queue and
	    # initialize k to one.
	    test_cmds=
	    concat_cmds=
	    objlist=
	    last_robj=
	    k=1

	    if test -n "$save_libobjs" && test "X$skipped_export" != "X:" && test "$with_gnu_ld" = yes; then
	      output=${output_objdir}/${output_la}.lnkscript
	      func_verbose "creating GNU ld script: $output"
	      echo 'INPUT (' > $output
	      for obj in $save_libobjs
	      do
		func_to_tool_file "$obj"
		$ECHO "$func_to_tool_file_result" >> $output
	      done
	      echo ')' >> $output
	      func_append delfiles " $output"
	      func_to_tool_file "$output"
	      output=$func_to_tool_file_result
	    elif test -n "$save_libobjs" && test "X$skipped_export" != "X:" && test "X$file_list_spec" != X; then
	      output=${output_objdir}/${output_la}.lnk
	      func_verbose "creating linker input file list: $output"
	      : > $output
	      set x $save_libobjs
	      shift
	      firstobj=
	      if test "$compiler_needs_object" = yes; then
		firstobj="$1 "
		shift
	      fi
	      for obj
	      do
		func_to_tool_file "$obj"
		$ECHO "$func_to_tool_file_result" >> $output
	      done
	      func_append delfiles " $output"
	      func_to_tool_file "$output"
	      output=$firstobj\"$file_list_spec$func_to_tool_file_result\"
	    else
	      if test -n "$save_libobjs"; then
		func_verbose "creating reloadable object files..."
		output=$output_objdir/$output_la-${k}.$objext
		eval test_cmds=\"$reload_cmds\"
		func_len " $test_cmds"
		len0=$func_len_result
		len=$len0

		# Loop over the list of objects to be linked.
		for obj in $save_libobjs
		do
		  func_len " $obj"
		  func_arith $len + $func_len_result
		  len=$func_arith_result
		  if test "X$objlist" = X ||
		     test "$len" -lt "$max_cmd_len"; then
		    func_append objlist " $obj"
		  else
		    # The command $test_cmds is almost too long, add a
		    # command to the queue.
		    if test "$k" -eq 1 ; then
		      # The first file doesn't have a previous command to add.
		      reload_objs=$objlist
		      eval concat_cmds=\"$reload_cmds\"
		    else
		      # All subsequent reloadable object files will link in
		      # the last one created.
		      reload_objs="$objlist $last_robj"
		      eval concat_cmds=\"\$concat_cmds~$reload_cmds~\$RM $last_robj\"
		    fi
		    last_robj=$output_objdir/$output_la-${k}.$objext
		    func_arith $k + 1
		    k=$func_arith_result
		    output=$output_objdir/$output_la-${k}.$objext
		    objlist=" $obj"
		    func_len " $last_robj"
		    func_arith $len0 + $func_len_result
		    len=$func_arith_result
		  fi
		done
		# Handle the remaining objects by creating one last
		# reloadable object file. All subsequent reloadable object
		# files will link in the last one created.
		test -z "$concat_cmds" || concat_cmds=$concat_cmds~
		reload_objs="$objlist $last_robj"
		eval concat_cmds=\"\${concat_cmds}$reload_cmds\"
		if test -n "$last_robj"; then
		  eval concat_cmds=\"\${concat_cmds}~\$RM $last_robj\"
		fi
		func_append delfiles " $output"
	      else
		output=
	      fi

	      if ${skipped_export-false}; then
		func_verbose "generating symbol list for \`$libname.la'"
		export_symbols="$output_objdir/$libname.exp"
		$opt_dry_run || $RM $export_symbols
		libobjs=$output
		# Append the command to create the export file.
		test -z "$concat_cmds" || concat_cmds=$concat_cmds~
		eval concat_cmds=\"\$concat_cmds$export_symbols_cmds\"
		if test -n "$last_robj"; then
		  eval concat_cmds=\"\$concat_cmds~\$RM $last_robj\"
		fi
	      fi

	      test -n "$save_libobjs" &&
		func_verbose "creating a temporary reloadable object file: $output"

	      # Loop through the commands generated above and execute them.
	      save_ifs="$IFS"; IFS='~'
	      for cmd in $concat_cmds; do
		IFS="$save_ifs"
		$opt_silent || {
		    func_quote_for_expand "$cmd"
		    eval "func_echo $func_quote_for_expand_result"
		}
		$opt_dry_run || eval "$cmd" || {
		  lt_exit=$?
		  # Restore the uninstalled library and exit
		  if test "$opt_mode" = relink; then
		    ( cd "$output_objdir" && \
		      $RM "${realname}T" && \
		      $MV "${realname}U" "$realname" )
		  fi

		  exit $lt_exit
		}
	      done
	      IFS="$save_ifs"

	      if test -n "$export_symbols_regex" && ${skipped_export-false}; then
		func_show_eval '$EGREP -e "$export_symbols_regex" "$export_symbols" > "${export_symbols}T"'
		func_show_eval '$MV "${export_symbols}T" "$export_symbols"'
	      fi
	    fi

	    if ${skipped_export-false}; then
	      if test -n "$export_symbols" && test -n "$include_expsyms"; then
		tmp_export_symbols="$export_symbols"
		test -n "$orig_export_symbols" && tmp_export_symbols="$orig_export_symbols"
		$opt_dry_run || eval '$ECHO "$include_expsyms" | $SP2NL >> "$tmp_export_symbols"'
	      fi

	      if test -n "$orig_export_symbols"; then
		# The given exports_symbols file has to be filtered, so filter it.
		func_verbose "filter symbol list for \`$libname.la' to tag DATA exports"
		# FIXME: $output_objdir/$libname.filter potentially contains lots of
		# 's' commands which not all seds can handle. GNU sed should be fine
		# though. Also, the filter scales superlinearly with the number of
		# global variables. join(1) would be nice here, but unfortunately
		# isn't a blessed tool.
		$opt_dry_run || $SED -e '/[ ,]DATA/!d;s,\(.*\)\([ \,].*\),s|^\1$|\1\2|,' < $export_symbols > $output_objdir/$libname.filter
		func_append delfiles " $export_symbols $output_objdir/$libname.filter"
		export_symbols=$output_objdir/$libname.def
		$opt_dry_run || $SED -f $output_objdir/$libname.filter < $orig_export_symbols > $export_symbols
	      fi
	    fi

	    libobjs=$output
	    # Restore the value of output.
	    output=$save_output

	    if test -n "$convenience" && test -n "$whole_archive_flag_spec"; then
	      eval libobjs=\"\$libobjs $whole_archive_flag_spec\"
	      test "X$libobjs" = "X " && libobjs=
	    fi
	    # Expand the library linking commands again to reset the
	    # value of $libobjs for piecewise linking.

	    # Do each of the archive commands.
	    if test "$module" = yes && test -n "$module_cmds" ; then
	      if test -n "$export_symbols" && test -n "$module_expsym_cmds"; then
		cmds=$module_expsym_cmds
	      else
		cmds=$module_cmds
	      fi
	    else
	      if test -n "$export_symbols" && test -n "$archive_expsym_cmds"; then
		cmds=$archive_expsym_cmds
	      else
		cmds=$archive_cmds
	      fi
	    fi
	  fi

	  if test -n "$delfiles"; then
	    # Append the command to remove temporary files to $cmds.
	    eval cmds=\"\$cmds~\$RM $delfiles\"
	  fi

	  # Add any objects from preloaded convenience libraries
	  if test -n "$dlprefiles"; then
	    gentop="$output_objdir/${outputname}x"
	    func_append generated " $gentop"

	    func_extract_archives $gentop $dlprefiles
	    func_append libobjs " $func_extract_archives_result"
	    test "X$libobjs" = "X " && libobjs=
	  fi

	  save_ifs="$IFS"; IFS='~'
	  for cmd in $cmds; do
	    IFS="$save_ifs"
	    eval cmd=\"$cmd\"
	    $opt_silent || {
	      func_quote_for_expand "$cmd"
	      eval "func_echo $func_quote_for_expand_result"
	    }
	    $opt_dry_run || eval "$cmd" || {
	      lt_exit=$?

	      # Restore the uninstalled library and exit
	      if test "$opt_mode" = relink; then
		( cd "$output_objdir" && \
		  $RM "${realname}T" && \
		  $MV "${realname}U" "$realname" )
	      fi

	      exit $lt_exit
	    }
	  done
	  IFS="$save_ifs"

	  # Restore the uninstalled library and exit
	  if test "$opt_mode" = relink; then
	    $opt_dry_run || eval '(cd $output_objdir && $RM ${realname}T && $MV $realname ${realname}T && $MV ${realname}U $realname)' || exit $?

	    if test -n "$convenience"; then
	      if test -z "$whole_archive_flag_spec"; then
		func_show_eval '${RM}r "$gentop"'
	      fi
	    fi

	    exit $EXIT_SUCCESS
	  fi

	  # Create links to the real library.
	  for linkname in $linknames; do
	    if test "$realname" != "$linkname"; then
	      func_show_eval '(cd "$output_objdir" && $RM "$linkname" && $LN_S "$realname" "$linkname")' 'exit $?'
	    fi
	  done

	  # If -module or -export-dynamic was specified, set the dlname.
	  if test "$module" = yes || test "$export_dynamic" = yes; then
	    # On all known operating systems, these are identical.
	    dlname="$soname"
	  fi
	fi
	;;

      obj)
	if test -n "$dlfiles$dlprefiles" || test "$dlself" != no; then
	  func_warning "\`-dlopen' is ignored for objects"
	fi

	case " $deplibs" in
	*\ -l* | *\ -L*)
	  func_warning "\`-l' and \`-L' are ignored for objects" ;;
	esac

	test -n "$rpath" && \
	  func_warning "\`-rpath' is ignored for objects"

	test -n "$xrpath" && \
	  func_warning "\`-R' is ignored for objects"

	test -n "$vinfo" && \
	  func_warning "\`-version-info' is ignored for objects"

	test -n "$release" && \
	  func_warning "\`-release' is ignored for objects"

	case $output in
	*.lo)
	  test -n "$objs$old_deplibs" && \
	    func_fatal_error "cannot build library object \`$output' from non-libtool objects"
	  libobj=$output
	  func_lo2o "$libobj"
	  obj=$func_lo2o_result
	  ;;
	*)
	  libobj=
	  obj="$output"
	  ;;
	esac

	# Delete the old objects.
	$opt_dry_run || $RM $obj $libobj

	# Objects from convenience libraries. This assumes
	# single-version convenience libraries. Whenever we create
	# different ones for PIC/non-PIC, this we'll have to duplicate
	# the extraction.
	reload_conv_objs=
	gentop=
	# reload_cmds runs $LD directly, so let us get rid of
	# -Wl from whole_archive_flag_spec and hope we can get by with
	# turning comma into space..
	wl=

	if test -n "$convenience"; then
	  if test -n "$whole_archive_flag_spec"; then
	    eval tmp_whole_archive_flags=\"$whole_archive_flag_spec\"
	    reload_conv_objs=$reload_objs\ `$ECHO "$tmp_whole_archive_flags" | $SED 's|,| |g'`
	  else
	    gentop="$output_objdir/${obj}x"
	    func_append generated " $gentop"
	    func_extract_archives $gentop $convenience
	    reload_conv_objs="$reload_objs $func_extract_archives_result"
	  fi
	fi

	# If we're not building shared, we need to use non_pic_objs
	test "$build_libtool_libs" != yes && libobjs="$non_pic_objects"

	# Create the old-style object.
	reload_objs="$objs$old_deplibs "`$ECHO "$libobjs" | $SP2NL | $SED "/\.${libext}$/d; /\.lib$/d; $lo2o" | $NL2SP`" $reload_conv_objs" ### testsuite: skip nested quoting test

	output="$obj"
	func_execute_cmds "$reload_cmds" 'exit $?'
	# Exit if we aren't doing a library object file.
	if test -z "$libobj"; then
	  if test -n "$gentop"; then
	    func_show_eval '${RM}r "$gentop"'
	  fi

	  exit $EXIT_SUCCESS
	fi

	if test "$build_libtool_libs" != yes; then
	  if test -n "$gentop"; then
	    func_show_eval '${RM}r "$gentop"'
	  fi

	  # Create an invalid libtool object if no PIC, so that we don't
	  # accidentally link it into a program.
	  # $show "echo timestamp > $libobj"
	  # $opt_dry_run || eval "echo timestamp > $libobj" || exit $?
	  exit $EXIT_SUCCESS
	fi

	if test -n "$pic_flag" || test "$pic_mode" != default; then
	  # Only do commands if we really have different PIC objects.
	  reload_objs="$libobjs $reload_conv_objs"
	  output="$libobj"
	  func_execute_cmds "$reload_cmds" 'exit $?'
	fi

	if test -n "$gentop"; then
	  func_show_eval '${RM}r "$gentop"'
	fi

	exit $EXIT_SUCCESS
	;;

      prog)
	case $host in
	  *cygwin*) func_stripname '' '.exe' "$output"
		    output=$func_stripname_result.exe;;
	esac
	test -n "$vinfo" && \
	  func_warning "\`-version-info' is ignored for programs"

	test -n "$release" && \
	  func_warning "\`-release' is ignored for programs"

	test "$preload" = yes \
	  && test "$dlopen_support" = unknown \
	  && test "$dlopen_self" = unknown \
	  && test "$dlopen_self_static" = unknown && \
	    func_warning "\`LT_INIT([dlopen])' not used. Assuming no dlopen support."

	case $host in
	*-*-rhapsody* | *-*-darwin1.[012])
	  # On Rhapsody replace the C library with the System framework
	  compile_deplibs=`$ECHO " $compile_deplibs" | $SED 's/ -lc / System.ltframework /'`
	  finalize_deplibs=`$ECHO " $finalize_deplibs" | $SED 's/ -lc / System.ltframework /'`
	  ;;
	esac

	case $host in
	*-*-darwin*)
	  # Don't allow lazy linking, it breaks C++ global constructors
	  # But is supposedly fixed on 10.4 or later (yay!).
	  if test "$tagname" = CXX ; then
	    case ${MACOSX_DEPLOYMENT_TARGET-10.0} in
	      10.[0123])
		func_append compile_command " ${wl}-bind_at_load"
		func_append finalize_command " ${wl}-bind_at_load"
	      ;;
	    esac
	  fi
	  # Time to change all our "foo.ltframework" stuff back to "-framework foo"
	  compile_deplibs=`$ECHO " $compile_deplibs" | $SED 's% \([^ $]*\).ltframework% -framework \1%g'`
	  finalize_deplibs=`$ECHO " $finalize_deplibs" | $SED 's% \([^ $]*\).ltframework% -framework \1%g'`
	  ;;
	esac

	# move library search paths that coincide with paths to not yet
	# installed libraries to the beginning of the library search list
	new_libs=
	for path in $notinst_path; do
	  case " $new_libs " in
	  *" -L$path/$objdir "*) ;;
	  *)
	    case " $compile_deplibs " in
	    *" -L$path/$objdir "*)
	      func_append new_libs " -L$path/$objdir" ;;
	    esac
	    ;;
	  esac
	done
	for deplib in $compile_deplibs; do
	  case $deplib in
	  -L*)
	    case " $new_libs " in
	    *" $deplib "*) ;;
	    *) func_append new_libs " $deplib" ;;
	    esac
	    ;;
	  *) func_append new_libs " $deplib" ;;
	  esac
	done
	compile_deplibs="$new_libs"

	func_append compile_command " $compile_deplibs"
	func_append finalize_command " $finalize_deplibs"

	if test -n "$rpath$xrpath"; then
	  # If the user specified any rpath flags, then add them.
	  for libdir in $rpath $xrpath; do
	    # This is the magic to use -rpath.
	    case "$finalize_rpath " in
	    *" $libdir "*) ;;
	    *) func_append finalize_rpath " $libdir" ;;
	    esac
	  done
	fi

	# Now hardcode the library paths
	rpath=
	hardcode_libdirs=
	for libdir in $compile_rpath $finalize_rpath; do
	  if test -n "$hardcode_libdir_flag_spec"; then
	    if test -n "$hardcode_libdir_separator"; then
	      if test -z "$hardcode_libdirs"; then
		hardcode_libdirs="$libdir"
	      else
		# Just accumulate the unique libdirs.
		case $hardcode_libdir_separator$hardcode_libdirs$hardcode_libdir_separator in
		*"$hardcode_libdir_separator$libdir$hardcode_libdir_separator"*)
		  ;;
		*)
		  func_append hardcode_libdirs "$hardcode_libdir_separator$libdir"
		  ;;
		esac
	      fi
	    else
	      eval flag=\"$hardcode_libdir_flag_spec\"
	      func_append rpath " $flag"
	    fi
	  elif test -n "$runpath_var"; then
	    case "$perm_rpath " in
	    *" $libdir "*) ;;
	    *) func_append perm_rpath " $libdir" ;;
	    esac
	  fi
	  case $host in
	  *-*-cygwin* | *-*-mingw* | *-*-pw32* | *-*-os2* | *-cegcc*)
	    testbindir=`${ECHO} "$libdir" | ${SED} -e 's*/lib$*/bin*'`
	    case :$dllsearchpath: in
	    *":$libdir:"*) ;;
	    ::) dllsearchpath=$libdir;;
	    *) func_append dllsearchpath ":$libdir";;
	    esac
	    case :$dllsearchpath: in
	    *":$testbindir:"*) ;;
	    ::) dllsearchpath=$testbindir;;
	    *) func_append dllsearchpath ":$testbindir";;
	    esac
	    ;;
	  esac
	done
	# Substitute the hardcoded libdirs into the rpath.
	if test -n "$hardcode_libdir_separator" &&
	   test -n "$hardcode_libdirs"; then
	  libdir="$hardcode_libdirs"
	  eval rpath=\" $hardcode_libdir_flag_spec\"
	fi
	compile_rpath="$rpath"

	rpath=
	hardcode_libdirs=
	for libdir in $finalize_rpath; do
	  if test -n "$hardcode_libdir_flag_spec"; then
	    if test -n "$hardcode_libdir_separator"; then
	      if test -z "$hardcode_libdirs"; then
		hardcode_libdirs="$libdir"
	      else
		# Just accumulate the unique libdirs.
		case $hardcode_libdir_separator$hardcode_libdirs$hardcode_libdir_separator in
		*"$hardcode_libdir_separator$libdir$hardcode_libdir_separator"*)
		  ;;
		*)
		  func_append hardcode_libdirs "$hardcode_libdir_separator$libdir"
		  ;;
		esac
	      fi
	    else
	      eval flag=\"$hardcode_libdir_flag_spec\"
	      func_append rpath " $flag"
	    fi
	  elif test -n "$runpath_var"; then
	    case "$finalize_perm_rpath " in
	    *" $libdir "*) ;;
	    *) func_append finalize_perm_rpath " $libdir" ;;
	    esac
	  fi
	done
	# Substitute the hardcoded libdirs into the rpath.
	if test -n "$hardcode_libdir_separator" &&
	   test -n "$hardcode_libdirs"; then
	  libdir="$hardcode_libdirs"
	  eval rpath=\" $hardcode_libdir_flag_spec\"
	fi
	finalize_rpath="$rpath"

	if test -n "$libobjs" && test "$build_old_libs" = yes; then
	  # Transform all the library objects into standard objects.
	  compile_command=`$ECHO "$compile_command" | $SP2NL | $SED "$lo2o" | $NL2SP`
	  finalize_command=`$ECHO "$finalize_command" | $SP2NL | $SED "$lo2o" | $NL2SP`
	fi

	func_generate_dlsyms "$outputname" "@PROGRAM@" "no"

	# template prelinking step
	if test -n "$prelink_cmds"; then
	  func_execute_cmds "$prelink_cmds" 'exit $?'
	fi

	wrappers_required=yes
	case $host in
	*cegcc* | *mingw32ce*)
	  # Disable wrappers for cegcc and mingw32ce hosts, we are cross compiling anyway.
	  wrappers_required=no
	  ;;
	*cygwin* | *mingw* )
	  if test "$build_libtool_libs" != yes; then
	    wrappers_required=no
	  fi
	  ;;
	*)
	  if test "$need_relink" = no || test "$build_libtool_libs" != yes; then
	    wrappers_required=no
	  fi
	  ;;
	esac
	if test "$wrappers_required" = no; then
	  # Replace the output file specification.
	  compile_command=`$ECHO "$compile_command" | $SED 's%@OUTPUT@%'"$output"'%g'`
	  link_command="$compile_command$compile_rpath"

	  # We have no uninstalled library dependencies, so finalize right now.
	  exit_status=0
	  func_show_eval "$link_command" 'exit_status=$?'

	  if test -n "$postlink_cmds"; then
	    func_to_tool_file "$output"
	    postlink_cmds=`func_echo_all "$postlink_cmds" | $SED -e 's%@OUTPUT@%'"$output"'%g' -e 's%@TOOL_OUTPUT@%'"$func_to_tool_file_result"'%g'`
	    func_execute_cmds "$postlink_cmds" 'exit $?'
	  fi

	  # Delete the generated files.
	  if test -f "$output_objdir/${outputname}S.${objext}"; then
	    func_show_eval '$RM "$output_objdir/${outputname}S.${objext}"'
	  fi

	  exit $exit_status
	fi

	if test -n "$compile_shlibpath$finalize_shlibpath"; then
	  compile_command="$shlibpath_var=\"$compile_shlibpath$finalize_shlibpath\$$shlibpath_var\" $compile_command"
	fi
	if test -n "$finalize_shlibpath"; then
	  finalize_command="$shlibpath_var=\"$finalize_shlibpath\$$shlibpath_var\" $finalize_command"
	fi

	compile_var=
	finalize_var=
	if test -n "$runpath_var"; then
	  if test -n "$perm_rpath"; then
	    # We should set the runpath_var.
	    rpath=
	    for dir in $perm_rpath; do
	      func_append rpath "$dir:"
	    done
	    compile_var="$runpath_var=\"$rpath\$$runpath_var\" "
	  fi
	  if test -n "$finalize_perm_rpath"; then
	    # We should set the runpath_var.
	    rpath=
	    for dir in $finalize_perm_rpath; do
	      func_append rpath "$dir:"
	    done
	    finalize_var="$runpath_var=\"$rpath\$$runpath_var\" "
	  fi
	fi

	if test "$no_install" = yes; then
	  # We don't need to create a wrapper script.
	  link_command="$compile_var$compile_command$compile_rpath"
	  # Replace the output file specification.
	  link_command=`$ECHO "$link_command" | $SED 's%@OUTPUT@%'"$output"'%g'`
	  # Delete the old output file.
	  $opt_dry_run || $RM $output
	  # Link the executable and exit
	  func_show_eval "$link_command" 'exit $?'

	  if test -n "$postlink_cmds"; then
	    func_to_tool_file "$output"
	    postlink_cmds=`func_echo_all "$postlink_cmds" | $SED -e 's%@OUTPUT@%'"$output"'%g' -e 's%@TOOL_OUTPUT@%'"$func_to_tool_file_result"'%g'`
	    func_execute_cmds "$postlink_cmds" 'exit $?'
	  fi

	  exit $EXIT_SUCCESS
	fi

	if test "$hardcode_action" = relink; then
	  # Fast installation is not supported
	  link_command="$compile_var$compile_command$compile_rpath"
	  relink_command="$finalize_var$finalize_command$finalize_rpath"

	  func_warning "this platform does not like uninstalled shared libraries"
	  func_warning "\`$output' will be relinked during installation"
	else
	  if test "$fast_install" != no; then
	    link_command="$finalize_var$compile_command$finalize_rpath"
	    if test "$fast_install" = yes; then
	      relink_command=`$ECHO "$compile_var$compile_command$compile_rpath" | $SED 's%@OUTPUT@%\$progdir/\$file%g'`
	    else
	      # fast_install is set to needless
	      relink_command=
	    fi
	  else
	    link_command="$compile_var$compile_command$compile_rpath"
	    relink_command="$finalize_var$finalize_command$finalize_rpath"
	  fi
	fi

	# Replace the output file specification.
	link_command=`$ECHO "$link_command" | $SED 's%@OUTPUT@%'"$output_objdir/$outputname"'%g'`

	# Delete the old output files.
	$opt_dry_run || $RM $output $output_objdir/$outputname $output_objdir/lt-$outputname

	func_show_eval "$link_command" 'exit $?'

	if test -n "$postlink_cmds"; then
	  func_to_tool_file "$output_objdir/$outputname"
	  postlink_cmds=`func_echo_all "$postlink_cmds" | $SED -e 's%@OUTPUT@%'"$output_objdir/$outputname"'%g' -e 's%@TOOL_OUTPUT@%'"$func_to_tool_file_result"'%g'`
	  func_execute_cmds "$postlink_cmds" 'exit $?'
	fi

	# Now create the wrapper script.
	func_verbose "creating $output"

	# Quote the relink command for shipping.
	if test -n "$relink_command"; then
	  # Preserve any variables that may affect compiler behavior
	  for var in $variables_saved_for_relink; do
	    if eval test -z \"\${$var+set}\"; then
	      relink_command="{ test -z \"\${$var+set}\" || $lt_unset $var || { $var=; export $var; }; }; $relink_command"
	    elif eval var_value=\$$var; test -z "$var_value"; then
	      relink_command="$var=; export $var; $relink_command"
	    else
	      func_quote_for_eval "$var_value"
	      relink_command="$var=$func_quote_for_eval_result; export $var; $relink_command"
	    fi
	  done
	  relink_command="(cd `pwd`; $relink_command)"
	  relink_command=`$ECHO "$relink_command" | $SED "$sed_quote_subst"`
	fi

	# Only actually do things if not in dry run mode.
	$opt_dry_run || {
	  # win32 will think the script is a binary if it has
	  # a .exe suffix, so we strip it off here.
	  case $output in
	    *.exe) func_stripname '' '.exe' "$output"
		   output=$func_stripname_result ;;
	  esac
	  # test for cygwin because mv fails w/o .exe extensions
	  case $host in
	    *cygwin*)
	      exeext=.exe
	      func_stripname '' '.exe' "$outputname"
	      outputname=$func_stripname_result ;;
	    *) exeext= ;;
	  esac
	  case $host in
	    *cygwin* | *mingw* )
	      func_dirname_and_basename "$output" "" "."
	      output_name=$func_basename_result
	      output_path=$func_dirname_result
	      cwrappersource="$output_path/$objdir/lt-$output_name.c"
	      cwrapper="$output_path/$output_name.exe"
	      $RM $cwrappersource $cwrapper
	      trap "$RM $cwrappersource $cwrapper; exit $EXIT_FAILURE" 1 2 15

	      func_emit_cwrapperexe_src > $cwrappersource

	      # The wrapper executable is built using the $host compiler,
	      # because it contains $host paths and files. If cross-
	      # compiling, it, like the target executable, must be
	      # executed on the $host or under an emulation environment.
	      $opt_dry_run || {
		$LTCC $LTCFLAGS -o $cwrapper $cwrappersource
		$STRIP $cwrapper
	      }

	      # Now, create the wrapper script for func_source use:
	      func_ltwrapper_scriptname $cwrapper
	      $RM $func_ltwrapper_scriptname_result
	      trap "$RM $func_ltwrapper_scriptname_result; exit $EXIT_FAILURE" 1 2 15
	      $opt_dry_run || {
		# note: this script will not be executed, so do not chmod.
		if test "x$build" = "x$host" ; then
		  $cwrapper --lt-dump-script > $func_ltwrapper_scriptname_result
		else
		  func_emit_wrapper no > $func_ltwrapper_scriptname_result
		fi
	      }
	    ;;
	    * )
	      $RM $output
	      trap "$RM $output; exit $EXIT_FAILURE" 1 2 15

	      func_emit_wrapper no > $output
	      chmod +x $output
	    ;;
	  esac
	}
	exit $EXIT_SUCCESS
	;;
      esac

      # See if we need to build an old-fashioned archive.
      for oldlib in $oldlibs; do

	if test "$build_libtool_libs" = convenience; then
	  oldobjs="$libobjs_save $symfileobj"
	  addlibs="$convenience"
	  build_libtool_libs=no
	else
	  if test "$build_libtool_libs" = module; then
	    oldobjs="$libobjs_save"
	    build_libtool_libs=no
	  else
	    oldobjs="$old_deplibs $non_pic_objects"
	    if test "$preload" = yes && test -f "$symfileobj"; then
	      func_append oldobjs " $symfileobj"
	    fi
	  fi
	  addlibs="$old_convenience"
	fi

	if test -n "$addlibs"; then
	  gentop="$output_objdir/${outputname}x"
	  func_append generated " $gentop"

	  func_extract_archives $gentop $addlibs
	  func_append oldobjs " $func_extract_archives_result"
	fi

	# Do each command in the archive commands.
	if test -n "$old_archive_from_new_cmds" && test "$build_libtool_libs" = yes; then
	  cmds=$old_archive_from_new_cmds
	else

	  # Add any objects from preloaded convenience libraries
	  if test -n "$dlprefiles"; then
	    gentop="$output_objdir/${outputname}x"
	    func_append generated " $gentop"

	    func_extract_archives $gentop $dlprefiles
	    func_append oldobjs " $func_extract_archives_result"
	  fi

	  # POSIX demands no paths to be encoded in archives. We have
	  # to avoid creating archives with duplicate basenames if we
	  # might have to extract them afterwards, e.g., when creating a
	  # static archive out of a convenience library, or when linking
	  # the entirety of a libtool archive into another (currently
	  # not supported by libtool).
	  if (for obj in $oldobjs
	      do
		func_basename "$obj"
		$ECHO "$func_basename_result"
	      done | sort | sort -uc >/dev/null 2>&1); then
	    :
	  else
	    echo "copying selected object files to avoid basename conflicts..."
	    gentop="$output_objdir/${outputname}x"
	    func_append generated " $gentop"
	    func_mkdir_p "$gentop"
	    save_oldobjs=$oldobjs
	    oldobjs=
	    counter=1
	    for obj in $save_oldobjs
	    do
	      func_basename "$obj"
	      objbase="$func_basename_result"
	      case " $oldobjs " in
	      " ") oldobjs=$obj ;;
	      *[\ /]"$objbase "*)
		while :; do
		  # Make sure we don't pick an alternate name that also
		  # overlaps.
		  newobj=lt$counter-$objbase
		  func_arith $counter + 1
		  counter=$func_arith_result
		  case " $oldobjs " in
		  *[\ /]"$newobj "*) ;;
		  *) if test ! -f "$gentop/$newobj"; then break; fi ;;
		  esac
		done
		func_show_eval "ln $obj $gentop/$newobj || cp $obj $gentop/$newobj"
		func_append oldobjs " $gentop/$newobj"
		;;
	      *) func_append oldobjs " $obj" ;;
	      esac
	    done
	  fi
	  func_to_tool_file "$oldlib" func_convert_file_msys_to_w32
	  tool_oldlib=$func_to_tool_file_result
	  eval cmds=\"$old_archive_cmds\"

	  func_len " $cmds"
	  len=$func_len_result
	  if test "$len" -lt "$max_cmd_len" || test "$max_cmd_len" -le -1; then
	    cmds=$old_archive_cmds
	  elif test -n "$archiver_list_spec"; then
	    func_verbose "using command file archive linking..."
	    for obj in $oldobjs
	    do
	      func_to_tool_file "$obj"
	      $ECHO "$func_to_tool_file_result"
	    done > $output_objdir/$libname.libcmd
	    func_to_tool_file "$output_objdir/$libname.libcmd"
	    oldobjs=" $archiver_list_spec$func_to_tool_file_result"
	    cmds=$old_archive_cmds
	  else
	    # the command line is too long to link in one step, link in parts
	    func_verbose "using piecewise archive linking..."
	    save_RANLIB=$RANLIB
	    RANLIB=:
	    objlist=
	    concat_cmds=
	    save_oldobjs=$oldobjs
	    oldobjs=
	    # Is there a better way of finding the last object in the list?
	    for obj in $save_oldobjs
	    do
	      last_oldobj=$obj
	    done
	    eval test_cmds=\"$old_archive_cmds\"
	    func_len " $test_cmds"
	    len0=$func_len_result
	    len=$len0
	    for obj in $save_oldobjs
	    do
	      func_len " $obj"
	      func_arith $len + $func_len_result
	      len=$func_arith_result
	      func_append objlist " $obj"
	      if test "$len" -lt "$max_cmd_len"; then
		:
	      else
		# the above command should be used before it gets too long
		oldobjs=$objlist
		if test "$obj" = "$last_oldobj" ; then
		  RANLIB=$save_RANLIB
		fi
		test -z "$concat_cmds" || concat_cmds=$concat_cmds~
		eval concat_cmds=\"\${concat_cmds}$old_archive_cmds\"
		objlist=
		len=$len0
	      fi
	    done
	    RANLIB=$save_RANLIB
	    oldobjs=$objlist
	    if test "X$oldobjs" = "X" ; then
	      eval cmds=\"\$concat_cmds\"
	    else
	      eval cmds=\"\$concat_cmds~\$old_archive_cmds\"
	    fi
	  fi
	fi
	func_execute_cmds "$cmds" 'exit $?'
      done

      test -n "$generated" && \
	func_show_eval "${RM}r$generated"

      # Now create the libtool archive.
      case $output in
      *.la)
	old_library=
	test "$build_old_libs" = yes && old_library="$libname.$libext"
	func_verbose "creating $output"

	# Preserve any variables that may affect compiler behavior
	for var in $variables_saved_for_relink; do
	  if eval test -z \"\${$var+set}\"; then
	    relink_command="{ test -z \"\${$var+set}\" || $lt_unset $var || { $var=; export $var; }; }; $relink_command"
	  elif eval var_value=\$$var; test -z "$var_value"; then
	    relink_command="$var=; export $var; $relink_command"
	  else
	    func_quote_for_eval "$var_value"
	    relink_command="$var=$func_quote_for_eval_result; export $var; $relink_command"
	  fi
	done
	# Quote the link command for shipping.
	relink_command="(cd `pwd`; $SHELL $progpath $preserve_args --mode=relink $libtool_args @inst_prefix_dir@)"
	relink_command=`$ECHO "$relink_command" | $SED "$sed_quote_subst"`
	if test "$hardcode_automatic" = yes ; then
	  relink_command=
	fi

	# Only create the output if not a dry run.
	$opt_dry_run || {
	  for installed in no yes; do
	    if test "$installed" = yes; then
	      if test -z "$install_libdir"; then
		break
	      fi
	      output="$output_objdir/$outputname"i
	      # Replace all uninstalled libtool libraries with the installed ones
	      newdependency_libs=
	      for deplib in $dependency_libs; do
		case $deplib in
		*.la)
		  func_basename "$deplib"
		  name="$func_basename_result"
		  func_resolve_sysroot "$deplib"
		  eval libdir=`${SED} -n -e 's/^libdir=\(.*\)$/\1/p' $func_resolve_sysroot_result`
		  test -z "$libdir" && \
		    func_fatal_error "\`$deplib' is not a valid libtool archive"
		  func_append newdependency_libs " ${lt_sysroot:+=}$libdir/$name"
		  ;;
		-L*)
		  func_stripname -L '' "$deplib"
		  func_replace_sysroot "$func_stripname_result"
		  func_append newdependency_libs " -L$func_replace_sysroot_result"
		  ;;
		-R*)
		  func_stripname -R '' "$deplib"
		  func_replace_sysroot "$func_stripname_result"
		  func_append newdependency_libs " -R$func_replace_sysroot_result"
		  ;;
		*) func_append newdependency_libs " $deplib" ;;
		esac
	      done
	      dependency_libs="$newdependency_libs"
	      newdlfiles=

	      for lib in $dlfiles; do
		case $lib in
		*.la)
		  func_basename "$lib"
		  name="$func_basename_result"
		  eval libdir=`${SED} -n -e 's/^libdir=\(.*\)$/\1/p' $lib`
		  test -z "$libdir" && \
		    func_fatal_error "\`$lib' is not a valid libtool archive"
		  func_append newdlfiles " ${lt_sysroot:+=}$libdir/$name"
		  ;;
		*) func_append newdlfiles " $lib" ;;
		esac
	      done
	      dlfiles="$newdlfiles"
	      newdlprefiles=
	      for lib in $dlprefiles; do
		case $lib in
		*.la)
		  # Only pass preopened files to the pseudo-archive (for
		  # eventual linking with the app. that links it) if we
		  # didn't already link the preopened objects directly into
		  # the library:
		  func_basename "$lib"
		  name="$func_basename_result"
		  eval libdir=`${SED} -n -e 's/^libdir=\(.*\)$/\1/p' $lib`
		  test -z "$libdir" && \
		    func_fatal_error "\`$lib' is not a valid libtool archive"
		  func_append newdlprefiles " ${lt_sysroot:+=}$libdir/$name"
		  ;;
		esac
	      done
	      dlprefiles="$newdlprefiles"
	    else
	      newdlfiles=
	      for lib in $dlfiles; do
		case $lib in
		  [\\/]* | [A-Za-z]:[\\/]*) abs="$lib" ;;
		  *) abs=`pwd`"/$lib" ;;
		esac
		func_append newdlfiles " $abs"
	      done
	      dlfiles="$newdlfiles"
	      newdlprefiles=
	      for lib in $dlprefiles; do
		case $lib in
		  [\\/]* | [A-Za-z]:[\\/]*) abs="$lib" ;;
		  *) abs=`pwd`"/$lib" ;;
		esac
		func_append newdlprefiles " $abs"
	      done
	      dlprefiles="$newdlprefiles"
	    fi
	    $RM $output
	    # place dlname in correct position for cygwin
	    # In fact, it would be nice if we could use this code for all target
	    # systems that can't hard-code library paths into their executables
	    # and that have no shared library path variable independent of PATH,
	    # but it turns out we can't easily determine that from inspecting
	    # libtool variables, so we have to hard-code the OSs to which it
	    # applies here; at the moment, that means platforms that use the PE
	    # object format with DLL files. See the long comment at the top of
	    # tests/bindir.at for full details.
	    tdlname=$dlname
	    case $host,$output,$installed,$module,$dlname in
	    *cygwin*,*lai,yes,no,*.dll | *mingw*,*lai,yes,no,*.dll | *cegcc*,*lai,yes,no,*.dll)
	      # If a -bindir argument was supplied, place the dll there.
	      if test "x$bindir" != x ; then
		func_relative_path "$install_libdir" "$bindir"
		tdlname=$func_relative_path_result$dlname
	      else
		# Otherwise fall back on heuristic.
		tdlname=../bin/$dlname
	      fi
	      ;;
	    esac
	    $ECHO > $output "\
# $outputname - a libtool library file
# Generated by $PROGRAM (GNU $PACKAGE$TIMESTAMP) $VERSION
#
# Please DO NOT delete this file!
# It is necessary for linking the library.

# The name that we can dlopen(3).
dlname='$tdlname'

# Names of this library.
library_names='$library_names' # The name of the static archive. old_library='$old_library' # Linker flags that can not go in dependency_libs. inherited_linker_flags='$new_inherited_linker_flags' # Libraries that this one depends upon. dependency_libs='$dependency_libs' # Names of additional weak libraries provided by this library weak_library_names='$weak_libs' # Version information for $libname. current=$current age=$age revision=$revision # Is this an already installed library? installed=$installed # Should we warn about portability when linking against -modules? shouldnotlink=$module # Files to dlopen/dlpreopen dlopen='$dlfiles' dlpreopen='$dlprefiles' # Directory that this library needs to be installed in: libdir='$install_libdir'" if test "$installed" = no && test "$need_relink" = yes; then $ECHO >> $output "\ relink_command=\"$relink_command\"" fi done } # Do a symbolic link so that the libtool archive can be found in # LD_LIBRARY_PATH before the program is installed. func_show_eval '( cd "$output_objdir" && $RM "$outputname" && $LN_S "../$outputname" "$outputname" )' 'exit $?' ;; esac exit $EXIT_SUCCESS } { test "$opt_mode" = link || test "$opt_mode" = relink; } && func_mode_link ${1+"$@"} # func_mode_uninstall arg... func_mode_uninstall () { $opt_debug RM="$nonopt" files= rmforce= exit_status=0 # This variable tells wrapper scripts just to set variables rather # than running their programs. libtool_install_magic="$magic" for arg do case $arg in -f) func_append RM " $arg"; rmforce=yes ;; -*) func_append RM " $arg" ;; *) func_append files " $arg" ;; esac done test -z "$RM" && \ func_fatal_help "you must specify an RM program" rmdirs= for file in $files; do func_dirname "$file" "" "." 
dir="$func_dirname_result" if test "X$dir" = X.; then odir="$objdir" else odir="$dir/$objdir" fi func_basename "$file" name="$func_basename_result" test "$opt_mode" = uninstall && odir="$dir" # Remember odir for removal later, being careful to avoid duplicates if test "$opt_mode" = clean; then case " $rmdirs " in *" $odir "*) ;; *) func_append rmdirs " $odir" ;; esac fi # Don't error if the file doesn't exist and rm -f was used. if { test -L "$file"; } >/dev/null 2>&1 || { test -h "$file"; } >/dev/null 2>&1 || test -f "$file"; then : elif test -d "$file"; then exit_status=1 continue elif test "$rmforce" = yes; then continue fi rmfiles="$file" case $name in *.la) # Possibly a libtool archive, so verify it. if func_lalib_p "$file"; then func_source $dir/$name # Delete the libtool libraries and symlinks. for n in $library_names; do func_append rmfiles " $odir/$n" done test -n "$old_library" && func_append rmfiles " $odir/$old_library" case "$opt_mode" in clean) case " $library_names " in *" $dlname "*) ;; *) test -n "$dlname" && func_append rmfiles " $odir/$dlname" ;; esac test -n "$libdir" && func_append rmfiles " $odir/$name $odir/${name}i" ;; uninstall) if test -n "$library_names"; then # Do each command in the postuninstall commands. func_execute_cmds "$postuninstall_cmds" 'test "$rmforce" = yes || exit_status=1' fi if test -n "$old_library"; then # Do each command in the old_postuninstall commands. func_execute_cmds "$old_postuninstall_cmds" 'test "$rmforce" = yes || exit_status=1' fi # FIXME: should reinstall the best remaining shared library. ;; esac fi ;; *.lo) # Possibly a libtool object, so verify it. if func_lalib_p "$file"; then # Read the .lo file func_source $dir/$name # Add PIC object to the list of files to remove. if test -n "$pic_object" && test "$pic_object" != none; then func_append rmfiles " $dir/$pic_object" fi # Add non-PIC object to the list of files to remove. 
if test -n "$non_pic_object" && test "$non_pic_object" != none; then func_append rmfiles " $dir/$non_pic_object" fi fi ;; *) if test "$opt_mode" = clean ; then noexename=$name case $file in *.exe) func_stripname '' '.exe' "$file" file=$func_stripname_result func_stripname '' '.exe' "$name" noexename=$func_stripname_result # $file with .exe has already been added to rmfiles, # add $file without .exe func_append rmfiles " $file" ;; esac # Do a test to see if this is a libtool program. if func_ltwrapper_p "$file"; then if func_ltwrapper_executable_p "$file"; then func_ltwrapper_scriptname "$file" relink_command= func_source $func_ltwrapper_scriptname_result func_append rmfiles " $func_ltwrapper_scriptname_result" else relink_command= func_source $dir/$noexename fi # note $name still contains .exe if it was in $file originally # as does the version of $file that was added into $rmfiles func_append rmfiles " $odir/$name $odir/${name}S.${objext}" if test "$fast_install" = yes && test -n "$relink_command"; then func_append rmfiles " $odir/lt-$name" fi if test "X$noexename" != "X$name" ; then func_append rmfiles " $odir/lt-${noexename}.c" fi fi fi ;; esac func_show_eval "$RM $rmfiles" 'exit_status=1' done # Try to remove the ${objdir}s in the directories where we deleted files for dir in $rmdirs; do if test -d "$dir"; then func_show_eval "rmdir $dir >/dev/null 2>&1" fi done exit $exit_status } { test "$opt_mode" = uninstall || test "$opt_mode" = clean; } && func_mode_uninstall ${1+"$@"} test -z "$opt_mode" && { help="$generic_help" func_fatal_help "you must specify a MODE" } test -z "$exec_cmd" && \ func_fatal_help "invalid operation mode \`$opt_mode'" if test -n "$exec_cmd"; then eval exec "$exec_cmd" exit $EXIT_FAILURE fi exit $exit_status # The TAGs below are defined such that we never get into a situation # in which we disable both kinds of libraries. 
Given conflicting # choices, we go for a static library, that is the most portable, # since we can't tell whether shared libraries were disabled because # the user asked for that or because the platform doesn't support # them. This is particularly important on AIX, because we don't # support having both static and shared libraries enabled at the same # time on that platform, so we default to a shared-only configuration. # If a disable-shared tag is given, we'll fallback to a static-only # configuration. But we'll never go from static-only to shared-only. # ### BEGIN LIBTOOL TAG CONFIG: disable-shared build_libtool_libs=no build_old_libs=yes # ### END LIBTOOL TAG CONFIG: disable-shared # ### BEGIN LIBTOOL TAG CONFIG: disable-static build_old_libs=`case $build_libtool_libs in yes) echo no;; *) echo yes;; esac` # ### END LIBTOOL TAG CONFIG: disable-static # Local Variables: # mode:shell-script # sh-indentation:2 # End: # vi:sw=2 slurm-slurm-15-08-7-1/auxdir/ltoptions.m4000066400000000000000000000300731265000126300201210ustar00rootroot00000000000000# Helper functions for option handling. -*- Autoconf -*- # # Copyright (C) 2004, 2005, 2007, 2008, 2009 Free Software Foundation, # Inc. # Written by Gary V. Vaughan, 2004 # # This file is free software; the Free Software Foundation gives # unlimited permission to copy and/or distribute it, with or without # modifications, as long as this notice is preserved. # serial 7 ltoptions.m4 # This is to help aclocal find these macros, as it can't see m4_define. AC_DEFUN([LTOPTIONS_VERSION], [m4_if([1])]) # _LT_MANGLE_OPTION(MACRO-NAME, OPTION-NAME) # ------------------------------------------ m4_define([_LT_MANGLE_OPTION], [[_LT_OPTION_]m4_bpatsubst($1__$2, [[^a-zA-Z0-9_]], [_])]) # _LT_SET_OPTION(MACRO-NAME, OPTION-NAME) # --------------------------------------- # Set option OPTION-NAME for macro MACRO-NAME, and if there is a # matching handler defined, dispatch to it. Other OPTION-NAMEs are # saved as a flag. 
m4_define([_LT_SET_OPTION], [m4_define(_LT_MANGLE_OPTION([$1], [$2]))dnl m4_ifdef(_LT_MANGLE_DEFUN([$1], [$2]), _LT_MANGLE_DEFUN([$1], [$2]), [m4_warning([Unknown $1 option `$2'])])[]dnl ]) # _LT_IF_OPTION(MACRO-NAME, OPTION-NAME, IF-SET, [IF-NOT-SET]) # ------------------------------------------------------------ # Execute IF-SET if OPTION is set, IF-NOT-SET otherwise. m4_define([_LT_IF_OPTION], [m4_ifdef(_LT_MANGLE_OPTION([$1], [$2]), [$3], [$4])]) # _LT_UNLESS_OPTIONS(MACRO-NAME, OPTION-LIST, IF-NOT-SET) # ------------------------------------------------------- # Execute IF-NOT-SET unless all options in OPTION-LIST for MACRO-NAME # are set. m4_define([_LT_UNLESS_OPTIONS], [m4_foreach([_LT_Option], m4_split(m4_normalize([$2])), [m4_ifdef(_LT_MANGLE_OPTION([$1], _LT_Option), [m4_define([$0_found])])])[]dnl m4_ifdef([$0_found], [m4_undefine([$0_found])], [$3 ])[]dnl ]) # _LT_SET_OPTIONS(MACRO-NAME, OPTION-LIST) # ---------------------------------------- # OPTION-LIST is a space-separated list of Libtool options associated # with MACRO-NAME. If any OPTION has a matching handler declared with # LT_OPTION_DEFINE, dispatch to that macro; otherwise complain about # the unknown option and exit. m4_defun([_LT_SET_OPTIONS], [# Set options m4_foreach([_LT_Option], m4_split(m4_normalize([$2])), [_LT_SET_OPTION([$1], _LT_Option)]) m4_if([$1],[LT_INIT],[ dnl dnl Simply set some default values (i.e off) if boolean options were not dnl specified: _LT_UNLESS_OPTIONS([LT_INIT], [dlopen], [enable_dlopen=no ]) _LT_UNLESS_OPTIONS([LT_INIT], [win32-dll], [enable_win32_dll=no ]) dnl dnl If no reference was made to various pairs of opposing options, then dnl we run the default mode handler for the pair. 
For example, if neither dnl `shared' nor `disable-shared' was passed, we enable building of shared dnl archives by default: _LT_UNLESS_OPTIONS([LT_INIT], [shared disable-shared], [_LT_ENABLE_SHARED]) _LT_UNLESS_OPTIONS([LT_INIT], [static disable-static], [_LT_ENABLE_STATIC]) _LT_UNLESS_OPTIONS([LT_INIT], [pic-only no-pic], [_LT_WITH_PIC]) _LT_UNLESS_OPTIONS([LT_INIT], [fast-install disable-fast-install], [_LT_ENABLE_FAST_INSTALL]) ]) ])# _LT_SET_OPTIONS ## --------------------------------- ## ## Macros to handle LT_INIT options. ## ## --------------------------------- ## # _LT_MANGLE_DEFUN(MACRO-NAME, OPTION-NAME) # ----------------------------------------- m4_define([_LT_MANGLE_DEFUN], [[_LT_OPTION_DEFUN_]m4_bpatsubst(m4_toupper([$1__$2]), [[^A-Z0-9_]], [_])]) # LT_OPTION_DEFINE(MACRO-NAME, OPTION-NAME, CODE) # ----------------------------------------------- m4_define([LT_OPTION_DEFINE], [m4_define(_LT_MANGLE_DEFUN([$1], [$2]), [$3])[]dnl ])# LT_OPTION_DEFINE # dlopen # ------ LT_OPTION_DEFINE([LT_INIT], [dlopen], [enable_dlopen=yes ]) AU_DEFUN([AC_LIBTOOL_DLOPEN], [_LT_SET_OPTION([LT_INIT], [dlopen]) AC_DIAGNOSE([obsolete], [$0: Remove this warning and the call to _LT_SET_OPTION when you put the `dlopen' option into LT_INIT's first parameter.]) ]) dnl aclocal-1.4 backwards compatibility: dnl AC_DEFUN([AC_LIBTOOL_DLOPEN], []) # win32-dll # --------- # Declare package support for building win32 dll's. 
LT_OPTION_DEFINE([LT_INIT], [win32-dll], [enable_win32_dll=yes case $host in *-*-cygwin* | *-*-mingw* | *-*-pw32* | *-*-cegcc*) AC_CHECK_TOOL(AS, as, false) AC_CHECK_TOOL(DLLTOOL, dlltool, false) AC_CHECK_TOOL(OBJDUMP, objdump, false) ;; esac test -z "$AS" && AS=as _LT_DECL([], [AS], [1], [Assembler program])dnl test -z "$DLLTOOL" && DLLTOOL=dlltool _LT_DECL([], [DLLTOOL], [1], [DLL creation program])dnl test -z "$OBJDUMP" && OBJDUMP=objdump _LT_DECL([], [OBJDUMP], [1], [Object dumper program])dnl ])# win32-dll AU_DEFUN([AC_LIBTOOL_WIN32_DLL], [AC_REQUIRE([AC_CANONICAL_HOST])dnl _LT_SET_OPTION([LT_INIT], [win32-dll]) AC_DIAGNOSE([obsolete], [$0: Remove this warning and the call to _LT_SET_OPTION when you put the `win32-dll' option into LT_INIT's first parameter.]) ]) dnl aclocal-1.4 backwards compatibility: dnl AC_DEFUN([AC_LIBTOOL_WIN32_DLL], []) # _LT_ENABLE_SHARED([DEFAULT]) # ---------------------------- # implement the --enable-shared flag, and supports the `shared' and # `disable-shared' LT_INIT options. # DEFAULT is either `yes' or `no'. If omitted, it defaults to `yes'. m4_define([_LT_ENABLE_SHARED], [m4_define([_LT_ENABLE_SHARED_DEFAULT], [m4_if($1, no, no, yes)])dnl AC_ARG_ENABLE([shared], [AS_HELP_STRING([--enable-shared@<:@=PKGS@:>@], [build shared libraries @<:@default=]_LT_ENABLE_SHARED_DEFAULT[@:>@])], [p=${PACKAGE-default} case $enableval in yes) enable_shared=yes ;; no) enable_shared=no ;; *) enable_shared=no # Look at the argument we got. We use all the common list separators. 
lt_save_ifs="$IFS"; IFS="${IFS}$PATH_SEPARATOR," for pkg in $enableval; do IFS="$lt_save_ifs" if test "X$pkg" = "X$p"; then enable_shared=yes fi done IFS="$lt_save_ifs" ;; esac], [enable_shared=]_LT_ENABLE_SHARED_DEFAULT) _LT_DECL([build_libtool_libs], [enable_shared], [0], [Whether or not to build shared libraries]) ])# _LT_ENABLE_SHARED LT_OPTION_DEFINE([LT_INIT], [shared], [_LT_ENABLE_SHARED([yes])]) LT_OPTION_DEFINE([LT_INIT], [disable-shared], [_LT_ENABLE_SHARED([no])]) # Old names: AC_DEFUN([AC_ENABLE_SHARED], [_LT_SET_OPTION([LT_INIT], m4_if([$1], [no], [disable-])[shared]) ]) AC_DEFUN([AC_DISABLE_SHARED], [_LT_SET_OPTION([LT_INIT], [disable-shared]) ]) AU_DEFUN([AM_ENABLE_SHARED], [AC_ENABLE_SHARED($@)]) AU_DEFUN([AM_DISABLE_SHARED], [AC_DISABLE_SHARED($@)]) dnl aclocal-1.4 backwards compatibility: dnl AC_DEFUN([AM_ENABLE_SHARED], []) dnl AC_DEFUN([AM_DISABLE_SHARED], []) # _LT_ENABLE_STATIC([DEFAULT]) # ---------------------------- # implement the --enable-static flag, and support the `static' and # `disable-static' LT_INIT options. # DEFAULT is either `yes' or `no'. If omitted, it defaults to `yes'. m4_define([_LT_ENABLE_STATIC], [m4_define([_LT_ENABLE_STATIC_DEFAULT], [m4_if($1, no, no, yes)])dnl AC_ARG_ENABLE([static], [AS_HELP_STRING([--enable-static@<:@=PKGS@:>@], [build static libraries @<:@default=]_LT_ENABLE_STATIC_DEFAULT[@:>@])], [p=${PACKAGE-default} case $enableval in yes) enable_static=yes ;; no) enable_static=no ;; *) enable_static=no # Look at the argument we got. We use all the common list separators. 
lt_save_ifs="$IFS"; IFS="${IFS}$PATH_SEPARATOR," for pkg in $enableval; do IFS="$lt_save_ifs" if test "X$pkg" = "X$p"; then enable_static=yes fi done IFS="$lt_save_ifs" ;; esac], [enable_static=]_LT_ENABLE_STATIC_DEFAULT) _LT_DECL([build_old_libs], [enable_static], [0], [Whether or not to build static libraries]) ])# _LT_ENABLE_STATIC LT_OPTION_DEFINE([LT_INIT], [static], [_LT_ENABLE_STATIC([yes])]) LT_OPTION_DEFINE([LT_INIT], [disable-static], [_LT_ENABLE_STATIC([no])]) # Old names: AC_DEFUN([AC_ENABLE_STATIC], [_LT_SET_OPTION([LT_INIT], m4_if([$1], [no], [disable-])[static]) ]) AC_DEFUN([AC_DISABLE_STATIC], [_LT_SET_OPTION([LT_INIT], [disable-static]) ]) AU_DEFUN([AM_ENABLE_STATIC], [AC_ENABLE_STATIC($@)]) AU_DEFUN([AM_DISABLE_STATIC], [AC_DISABLE_STATIC($@)]) dnl aclocal-1.4 backwards compatibility: dnl AC_DEFUN([AM_ENABLE_STATIC], []) dnl AC_DEFUN([AM_DISABLE_STATIC], []) # _LT_ENABLE_FAST_INSTALL([DEFAULT]) # ---------------------------------- # implement the --enable-fast-install flag, and support the `fast-install' # and `disable-fast-install' LT_INIT options. # DEFAULT is either `yes' or `no'. If omitted, it defaults to `yes'. m4_define([_LT_ENABLE_FAST_INSTALL], [m4_define([_LT_ENABLE_FAST_INSTALL_DEFAULT], [m4_if($1, no, no, yes)])dnl AC_ARG_ENABLE([fast-install], [AS_HELP_STRING([--enable-fast-install@<:@=PKGS@:>@], [optimize for fast installation @<:@default=]_LT_ENABLE_FAST_INSTALL_DEFAULT[@:>@])], [p=${PACKAGE-default} case $enableval in yes) enable_fast_install=yes ;; no) enable_fast_install=no ;; *) enable_fast_install=no # Look at the argument we got. We use all the common list separators. 
lt_save_ifs="$IFS"; IFS="${IFS}$PATH_SEPARATOR," for pkg in $enableval; do IFS="$lt_save_ifs" if test "X$pkg" = "X$p"; then enable_fast_install=yes fi done IFS="$lt_save_ifs" ;; esac], [enable_fast_install=]_LT_ENABLE_FAST_INSTALL_DEFAULT) _LT_DECL([fast_install], [enable_fast_install], [0], [Whether or not to optimize for fast installation])dnl ])# _LT_ENABLE_FAST_INSTALL LT_OPTION_DEFINE([LT_INIT], [fast-install], [_LT_ENABLE_FAST_INSTALL([yes])]) LT_OPTION_DEFINE([LT_INIT], [disable-fast-install], [_LT_ENABLE_FAST_INSTALL([no])]) # Old names: AU_DEFUN([AC_ENABLE_FAST_INSTALL], [_LT_SET_OPTION([LT_INIT], m4_if([$1], [no], [disable-])[fast-install]) AC_DIAGNOSE([obsolete], [$0: Remove this warning and the call to _LT_SET_OPTION when you put the `fast-install' option into LT_INIT's first parameter.]) ]) AU_DEFUN([AC_DISABLE_FAST_INSTALL], [_LT_SET_OPTION([LT_INIT], [disable-fast-install]) AC_DIAGNOSE([obsolete], [$0: Remove this warning and the call to _LT_SET_OPTION when you put the `disable-fast-install' option into LT_INIT's first parameter.]) ]) dnl aclocal-1.4 backwards compatibility: dnl AC_DEFUN([AC_ENABLE_FAST_INSTALL], []) dnl AC_DEFUN([AM_DISABLE_FAST_INSTALL], []) # _LT_WITH_PIC([MODE]) # -------------------- # implement the --with-pic flag, and support the `pic-only' and `no-pic' # LT_INIT options. # MODE is either `yes' or `no'. If omitted, it defaults to `both'. m4_define([_LT_WITH_PIC], [AC_ARG_WITH([pic], [AS_HELP_STRING([--with-pic@<:@=PKGS@:>@], [try to use only PIC/non-PIC objects @<:@default=use both@:>@])], [lt_p=${PACKAGE-default} case $withval in yes|no) pic_mode=$withval ;; *) pic_mode=default # Look at the argument we got. We use all the common list separators. 
lt_save_ifs="$IFS"; IFS="${IFS}$PATH_SEPARATOR," for lt_pkg in $withval; do IFS="$lt_save_ifs" if test "X$lt_pkg" = "X$lt_p"; then pic_mode=yes fi done IFS="$lt_save_ifs" ;; esac], [pic_mode=default]) test -z "$pic_mode" && pic_mode=m4_default([$1], [default]) _LT_DECL([], [pic_mode], [0], [What type of objects to build])dnl ])# _LT_WITH_PIC LT_OPTION_DEFINE([LT_INIT], [pic-only], [_LT_WITH_PIC([yes])]) LT_OPTION_DEFINE([LT_INIT], [no-pic], [_LT_WITH_PIC([no])]) # Old name: AU_DEFUN([AC_LIBTOOL_PICMODE], [_LT_SET_OPTION([LT_INIT], [pic-only]) AC_DIAGNOSE([obsolete], [$0: Remove this warning and the call to _LT_SET_OPTION when you put the `pic-only' option into LT_INIT's first parameter.]) ]) dnl aclocal-1.4 backwards compatibility: dnl AC_DEFUN([AC_LIBTOOL_PICMODE], []) ## ----------------- ## ## LTDL_INIT Options ## ## ----------------- ## m4_define([_LTDL_MODE], []) LT_OPTION_DEFINE([LTDL_INIT], [nonrecursive], [m4_define([_LTDL_MODE], [nonrecursive])]) LT_OPTION_DEFINE([LTDL_INIT], [recursive], [m4_define([_LTDL_MODE], [recursive])]) LT_OPTION_DEFINE([LTDL_INIT], [subproject], [m4_define([_LTDL_MODE], [subproject])]) m4_define([_LTDL_TYPE], []) LT_OPTION_DEFINE([LTDL_INIT], [installable], [m4_define([_LTDL_TYPE], [installable])]) LT_OPTION_DEFINE([LTDL_INIT], [convenience], [m4_define([_LTDL_TYPE], [convenience])]) slurm-slurm-15-08-7-1/auxdir/ltsugar.m4000066400000000000000000000104241265000126300175450ustar00rootroot00000000000000# ltsugar.m4 -- libtool m4 base layer. -*-Autoconf-*- # # Copyright (C) 2004, 2005, 2007, 2008 Free Software Foundation, Inc. # Written by Gary V. Vaughan, 2004 # # This file is free software; the Free Software Foundation gives # unlimited permission to copy and/or distribute it, with or without # modifications, as long as this notice is preserved. # serial 6 ltsugar.m4 # This is to help aclocal find these macros, as it can't see m4_define. 
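The option handlers defined in ltoptions.m4 above (dlopen, win32-dll, shared/disable-shared, static/disable-static, pic-only, fast-install, and the LTDL_INIT modes) are normally driven from a package's configure.ac. A minimal illustrative sketch, not taken from this tree (the package name and version are invented for the example):

```
# configure.ac fragment (illustrative only)
AC_INIT([example], [1.0])        # hypothetical package name/version
AM_INIT_AUTOMAKE
# LT_INIT's first parameter is the space-separated OPTION-LIST that
# _LT_SET_OPTIONS walks; each word is dispatched to the handler that
# LT_OPTION_DEFINE registered for it, e.g. `dlopen' sets enable_dlopen=yes
# and `disable-static' calls _LT_ENABLE_STATIC([no]).
LT_INIT([dlopen disable-static pic-only])
AC_CONFIG_FILES([Makefile])
AC_OUTPUT
```

For option pairs that configure.ac does not mention, _LT_UNLESS_OPTIONS runs the default handler for the pair, which is why omitting `shared'/`disable-shared' still enables shared libraries.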
AC_DEFUN([LTSUGAR_VERSION], [m4_if([0.1])]) # lt_join(SEP, ARG1, [ARG2...]) # ----------------------------- # Produce ARG1SEPARG2...SEPARGn, omitting [] arguments and their # associated separator. # Needed until we can rely on m4_join from Autoconf 2.62, since all earlier # versions in m4sugar had bugs. m4_define([lt_join], [m4_if([$#], [1], [], [$#], [2], [[$2]], [m4_if([$2], [], [], [[$2]_])$0([$1], m4_shift(m4_shift($@)))])]) m4_define([_lt_join], [m4_if([$#$2], [2], [], [m4_if([$2], [], [], [[$1$2]])$0([$1], m4_shift(m4_shift($@)))])]) # lt_car(LIST) # lt_cdr(LIST) # ------------ # Manipulate m4 lists. # These macros are necessary as long as will still need to support # Autoconf-2.59 which quotes differently. m4_define([lt_car], [[$1]]) m4_define([lt_cdr], [m4_if([$#], 0, [m4_fatal([$0: cannot be called without arguments])], [$#], 1, [], [m4_dquote(m4_shift($@))])]) m4_define([lt_unquote], $1) # lt_append(MACRO-NAME, STRING, [SEPARATOR]) # ------------------------------------------ # Redefine MACRO-NAME to hold its former content plus `SEPARATOR'`STRING'. # Note that neither SEPARATOR nor STRING are expanded; they are appended # to MACRO-NAME as is (leaving the expansion for when MACRO-NAME is invoked). # No SEPARATOR is output if MACRO-NAME was previously undefined (different # than defined and empty). # # This macro is needed until we can rely on Autoconf 2.62, since earlier # versions of m4sugar mistakenly expanded SEPARATOR but not STRING. m4_define([lt_append], [m4_define([$1], m4_ifdef([$1], [m4_defn([$1])[$3]])[$2])]) # lt_combine(SEP, PREFIX-LIST, INFIX, SUFFIX1, [SUFFIX2...]) # ---------------------------------------------------------- # Produce a SEP delimited list of all paired combinations of elements of # PREFIX-LIST with SUFFIX1 through SUFFIXn. Each element of the list # has the form PREFIXmINFIXSUFFIXn. # Needed until we can rely on m4_combine added in Autoconf 2.62. 
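As the lt_append comments above note, the SEPARATOR is emitted only when MACRO-NAME was previously defined. A small illustrative m4 sketch of that behavior (the macro name `my_flags' is invented for the example, and ltsugar.m4 is assumed to be loaded):

```
dnl Illustrative only -- requires the lt_append definition above.
dnl First call: my_flags is undefined, so no separator is emitted.
lt_append([my_flags], [-O2], [ ])
dnl Second call: my_flags is defined, so the ` ' separator is prepended.
lt_append([my_flags], [-g], [ ])
dnl my_flags now expands to `-O2 -g'.
```

This "no separator on first append" rule is what lets callers build delimited lists without special-casing the first element.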
m4_define([lt_combine], [m4_if(m4_eval([$# > 3]), [1], [m4_pushdef([_Lt_sep], [m4_define([_Lt_sep], m4_defn([lt_car]))])]]dnl [[m4_foreach([_Lt_prefix], [$2], [m4_foreach([_Lt_suffix], ]m4_dquote(m4_dquote(m4_shift(m4_shift(m4_shift($@)))))[, [_Lt_sep([$1])[]m4_defn([_Lt_prefix])[$3]m4_defn([_Lt_suffix])])])])]) # lt_if_append_uniq(MACRO-NAME, VARNAME, [SEPARATOR], [UNIQ], [NOT-UNIQ]) # ----------------------------------------------------------------------- # Iff MACRO-NAME does not yet contain VARNAME, then append it (delimited # by SEPARATOR if supplied) and expand UNIQ, else NOT-UNIQ. m4_define([lt_if_append_uniq], [m4_ifdef([$1], [m4_if(m4_index([$3]m4_defn([$1])[$3], [$3$2$3]), [-1], [lt_append([$1], [$2], [$3])$4], [$5])], [lt_append([$1], [$2], [$3])$4])]) # lt_dict_add(DICT, KEY, VALUE) # ----------------------------- m4_define([lt_dict_add], [m4_define([$1($2)], [$3])]) # lt_dict_add_subkey(DICT, KEY, SUBKEY, VALUE) # -------------------------------------------- m4_define([lt_dict_add_subkey], [m4_define([$1($2:$3)], [$4])]) # lt_dict_fetch(DICT, KEY, [SUBKEY]) # ---------------------------------- m4_define([lt_dict_fetch], [m4_ifval([$3], m4_ifdef([$1($2:$3)], [m4_defn([$1($2:$3)])]), m4_ifdef([$1($2)], [m4_defn([$1($2)])]))]) # lt_if_dict_fetch(DICT, KEY, [SUBKEY], VALUE, IF-TRUE, [IF-FALSE]) # ----------------------------------------------------------------- m4_define([lt_if_dict_fetch], [m4_if(lt_dict_fetch([$1], [$2], [$3]), [$4], [$5], [$6])]) # lt_dict_filter(DICT, [SUBKEY], VALUE, [SEPARATOR], KEY, [...]) # -------------------------------------------------------------- m4_define([lt_dict_filter], [m4_if([$5], [], [], [lt_join(m4_quote(m4_default([$4], [[, ]])), lt_unquote(m4_split(m4_normalize(m4_foreach(_Lt_key, lt_car([m4_shiftn(4, $@)]), [lt_if_dict_fetch([$1], _Lt_key, [$2], [$3], [_Lt_key ])])))))])[]dnl ]) slurm-slurm-15-08-7-1/auxdir/ltversion.m4000066400000000000000000000012621265000126300201110ustar00rootroot00000000000000# ltversion.m4 -- 
version numbers -*- Autoconf -*-
#
# Copyright (C) 2004 Free Software Foundation, Inc.
# Written by Scott James Remnant, 2004
#
# This file is free software; the Free Software Foundation gives
# unlimited permission to copy and/or distribute it, with or without
# modifications, as long as this notice is preserved.

# @configure_input@

# serial 3337 ltversion.m4
# This file is part of GNU Libtool

m4_define([LT_PACKAGE_VERSION], [2.4.2])
m4_define([LT_PACKAGE_REVISION], [1.3337])

AC_DEFUN([LTVERSION_VERSION],
[macro_version='2.4.2'
macro_revision='1.3337'
_LT_DECL(, macro_version, 0, [Which release of libtool.m4 was used?])
_LT_DECL(, macro_revision, 0)
])
slurm-slurm-15-08-7-1/auxdir/lt~obsolete.m4000066400000000000000000000137561265000126300204510ustar00rootroot00000000000000# lt~obsolete.m4 -- aclocal satisfying obsolete definitions. -*-Autoconf-*-
#
# Copyright (C) 2004, 2005, 2007, 2009 Free Software Foundation, Inc.
# Written by Scott James Remnant, 2004.
#
# This file is free software; the Free Software Foundation gives
# unlimited permission to copy and/or distribute it, with or without
# modifications, as long as this notice is preserved.

# serial 5 lt~obsolete.m4

# These exist entirely to fool aclocal when bootstrapping libtool.
#
# In the past libtool.m4 has provided macros via AC_DEFUN (or AU_DEFUN)
# which have later been changed to m4_define as they aren't part of the
# exported API, or moved to Autoconf or Automake where they belong.
#
# The trouble is, aclocal is a bit thick.  It'll see the old AC_DEFUN
# in /usr/share/aclocal/libtool.m4 and remember it, then when it sees us
# using a macro with the same name in our local m4/libtool.m4 it'll
# pull the old libtool.m4 in (it doesn't see our shiny new m4_define
# and doesn't know about Autoconf macros at all.)
#
# So we provide this file, which has a silly filename so it's always
# included after everything else.
This provides aclocal with the
# AC_DEFUNs it wants, but when m4 processes it, it doesn't do anything
# because those macros already exist, or will be overwritten later.
# We use AC_DEFUN over AU_DEFUN for compatibility with aclocal-1.6.
#
# Anytime we withdraw an AC_DEFUN or AU_DEFUN, remember to add it here.
# Yes, that means every name once taken will need to remain here until
# we give up compatibility with versions before 1.7, at which point
# we need to keep only those names which we still refer to.

# This is to help aclocal find these macros, as it can't see m4_define.
AC_DEFUN([LTOBSOLETE_VERSION], [m4_if([1])])

m4_ifndef([AC_LIBTOOL_LINKER_OPTION], [AC_DEFUN([AC_LIBTOOL_LINKER_OPTION])])
m4_ifndef([AC_PROG_EGREP], [AC_DEFUN([AC_PROG_EGREP])])
m4_ifndef([_LT_AC_PROG_ECHO_BACKSLASH], [AC_DEFUN([_LT_AC_PROG_ECHO_BACKSLASH])])
m4_ifndef([_LT_AC_SHELL_INIT], [AC_DEFUN([_LT_AC_SHELL_INIT])])
m4_ifndef([_LT_AC_SYS_LIBPATH_AIX], [AC_DEFUN([_LT_AC_SYS_LIBPATH_AIX])])
m4_ifndef([_LT_PROG_LTMAIN], [AC_DEFUN([_LT_PROG_LTMAIN])])
m4_ifndef([_LT_AC_TAGVAR], [AC_DEFUN([_LT_AC_TAGVAR])])
m4_ifndef([AC_LTDL_ENABLE_INSTALL], [AC_DEFUN([AC_LTDL_ENABLE_INSTALL])])
m4_ifndef([AC_LTDL_PREOPEN], [AC_DEFUN([AC_LTDL_PREOPEN])])
m4_ifndef([_LT_AC_SYS_COMPILER], [AC_DEFUN([_LT_AC_SYS_COMPILER])])
m4_ifndef([_LT_AC_LOCK], [AC_DEFUN([_LT_AC_LOCK])])
m4_ifndef([AC_LIBTOOL_SYS_OLD_ARCHIVE], [AC_DEFUN([AC_LIBTOOL_SYS_OLD_ARCHIVE])])
m4_ifndef([_LT_AC_TRY_DLOPEN_SELF], [AC_DEFUN([_LT_AC_TRY_DLOPEN_SELF])])
m4_ifndef([AC_LIBTOOL_PROG_CC_C_O], [AC_DEFUN([AC_LIBTOOL_PROG_CC_C_O])])
m4_ifndef([AC_LIBTOOL_SYS_HARD_LINK_LOCKS], [AC_DEFUN([AC_LIBTOOL_SYS_HARD_LINK_LOCKS])])
m4_ifndef([AC_LIBTOOL_OBJDIR], [AC_DEFUN([AC_LIBTOOL_OBJDIR])])
m4_ifndef([AC_LTDL_OBJDIR], [AC_DEFUN([AC_LTDL_OBJDIR])])
m4_ifndef([AC_LIBTOOL_PROG_LD_HARDCODE_LIBPATH], [AC_DEFUN([AC_LIBTOOL_PROG_LD_HARDCODE_LIBPATH])])
m4_ifndef([AC_LIBTOOL_SYS_LIB_STRIP], [AC_DEFUN([AC_LIBTOOL_SYS_LIB_STRIP])])
m4_ifndef([AC_PATH_MAGIC],
[AC_DEFUN([AC_PATH_MAGIC])])
m4_ifndef([AC_PROG_LD_GNU], [AC_DEFUN([AC_PROG_LD_GNU])])
m4_ifndef([AC_PROG_LD_RELOAD_FLAG], [AC_DEFUN([AC_PROG_LD_RELOAD_FLAG])])
m4_ifndef([AC_DEPLIBS_CHECK_METHOD], [AC_DEFUN([AC_DEPLIBS_CHECK_METHOD])])
m4_ifndef([AC_LIBTOOL_PROG_COMPILER_NO_RTTI], [AC_DEFUN([AC_LIBTOOL_PROG_COMPILER_NO_RTTI])])
m4_ifndef([AC_LIBTOOL_SYS_GLOBAL_SYMBOL_PIPE], [AC_DEFUN([AC_LIBTOOL_SYS_GLOBAL_SYMBOL_PIPE])])
m4_ifndef([AC_LIBTOOL_PROG_COMPILER_PIC], [AC_DEFUN([AC_LIBTOOL_PROG_COMPILER_PIC])])
m4_ifndef([AC_LIBTOOL_PROG_LD_SHLIBS], [AC_DEFUN([AC_LIBTOOL_PROG_LD_SHLIBS])])
m4_ifndef([AC_LIBTOOL_POSTDEP_PREDEP], [AC_DEFUN([AC_LIBTOOL_POSTDEP_PREDEP])])
m4_ifndef([LT_AC_PROG_EGREP], [AC_DEFUN([LT_AC_PROG_EGREP])])
m4_ifndef([LT_AC_PROG_SED], [AC_DEFUN([LT_AC_PROG_SED])])
m4_ifndef([_LT_CC_BASENAME], [AC_DEFUN([_LT_CC_BASENAME])])
m4_ifndef([_LT_COMPILER_BOILERPLATE], [AC_DEFUN([_LT_COMPILER_BOILERPLATE])])
m4_ifndef([_LT_LINKER_BOILERPLATE], [AC_DEFUN([_LT_LINKER_BOILERPLATE])])
m4_ifndef([_AC_PROG_LIBTOOL], [AC_DEFUN([_AC_PROG_LIBTOOL])])
m4_ifndef([AC_LIBTOOL_SETUP], [AC_DEFUN([AC_LIBTOOL_SETUP])])
m4_ifndef([_LT_AC_CHECK_DLFCN], [AC_DEFUN([_LT_AC_CHECK_DLFCN])])
m4_ifndef([AC_LIBTOOL_SYS_DYNAMIC_LINKER], [AC_DEFUN([AC_LIBTOOL_SYS_DYNAMIC_LINKER])])
m4_ifndef([_LT_AC_TAGCONFIG], [AC_DEFUN([_LT_AC_TAGCONFIG])])
m4_ifndef([AC_DISABLE_FAST_INSTALL], [AC_DEFUN([AC_DISABLE_FAST_INSTALL])])
m4_ifndef([_LT_AC_LANG_CXX], [AC_DEFUN([_LT_AC_LANG_CXX])])
m4_ifndef([_LT_AC_LANG_F77], [AC_DEFUN([_LT_AC_LANG_F77])])
m4_ifndef([_LT_AC_LANG_GCJ], [AC_DEFUN([_LT_AC_LANG_GCJ])])
m4_ifndef([AC_LIBTOOL_LANG_C_CONFIG], [AC_DEFUN([AC_LIBTOOL_LANG_C_CONFIG])])
m4_ifndef([_LT_AC_LANG_C_CONFIG], [AC_DEFUN([_LT_AC_LANG_C_CONFIG])])
m4_ifndef([AC_LIBTOOL_LANG_CXX_CONFIG], [AC_DEFUN([AC_LIBTOOL_LANG_CXX_CONFIG])])
m4_ifndef([_LT_AC_LANG_CXX_CONFIG], [AC_DEFUN([_LT_AC_LANG_CXX_CONFIG])])
m4_ifndef([AC_LIBTOOL_LANG_F77_CONFIG], [AC_DEFUN([AC_LIBTOOL_LANG_F77_CONFIG])])
m4_ifndef([_LT_AC_LANG_F77_CONFIG], [AC_DEFUN([_LT_AC_LANG_F77_CONFIG])])
m4_ifndef([AC_LIBTOOL_LANG_GCJ_CONFIG], [AC_DEFUN([AC_LIBTOOL_LANG_GCJ_CONFIG])])
m4_ifndef([_LT_AC_LANG_GCJ_CONFIG], [AC_DEFUN([_LT_AC_LANG_GCJ_CONFIG])])
m4_ifndef([AC_LIBTOOL_LANG_RC_CONFIG], [AC_DEFUN([AC_LIBTOOL_LANG_RC_CONFIG])])
m4_ifndef([_LT_AC_LANG_RC_CONFIG], [AC_DEFUN([_LT_AC_LANG_RC_CONFIG])])
m4_ifndef([AC_LIBTOOL_CONFIG], [AC_DEFUN([AC_LIBTOOL_CONFIG])])
m4_ifndef([_LT_AC_FILE_LTDLL_C], [AC_DEFUN([_LT_AC_FILE_LTDLL_C])])
m4_ifndef([_LT_REQUIRED_DARWIN_CHECKS], [AC_DEFUN([_LT_REQUIRED_DARWIN_CHECKS])])
m4_ifndef([_LT_AC_PROG_CXXCPP], [AC_DEFUN([_LT_AC_PROG_CXXCPP])])
m4_ifndef([_LT_PREPARE_SED_QUOTE_VARS], [AC_DEFUN([_LT_PREPARE_SED_QUOTE_VARS])])
m4_ifndef([_LT_PROG_ECHO_BACKSLASH], [AC_DEFUN([_LT_PROG_ECHO_BACKSLASH])])
m4_ifndef([_LT_PROG_F77], [AC_DEFUN([_LT_PROG_F77])])
m4_ifndef([_LT_PROG_FC], [AC_DEFUN([_LT_PROG_FC])])
m4_ifndef([_LT_PROG_CXX], [AC_DEFUN([_LT_PROG_CXX])])
slurm-slurm-15-08-7-1/auxdir/missing000077500000000000000000000153301265000126300172210ustar00rootroot00000000000000#! /bin/sh
# Common wrapper for a few potentially missing GNU programs.

scriptversion=2013-10-28.13; # UTC

# Copyright (C) 1996-2013 Free Software Foundation, Inc.
# Originally written by Fran,cois Pinard , 1996.

# This program is free software; you can redistribute it and/or modify
# it under the terms of the GNU General Public License as published by
# the Free Software Foundation; either version 2, or (at your option)
# any later version.

# This program is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
# GNU General Public License for more details.

# You should have received a copy of the GNU General Public License
# along with this program.  If not, see .
# As a special exception to the GNU General Public License, if you # distribute this file as part of a program that contains a # configuration script generated by Autoconf, you may include it under # the same distribution terms that you use for the rest of that program. if test $# -eq 0; then echo 1>&2 "Try '$0 --help' for more information" exit 1 fi case $1 in --is-lightweight) # Used by our autoconf macros to check whether the available missing # script is modern enough. exit 0 ;; --run) # Back-compat with the calling convention used by older automake. shift ;; -h|--h|--he|--hel|--help) echo "\ $0 [OPTION]... PROGRAM [ARGUMENT]... Run 'PROGRAM [ARGUMENT]...', returning a proper advice when this fails due to PROGRAM being missing or too old. Options: -h, --help display this help and exit -v, --version output version information and exit Supported PROGRAM values: aclocal autoconf autoheader autom4te automake makeinfo bison yacc flex lex help2man Version suffixes to PROGRAM as well as the prefixes 'gnu-', 'gnu', and 'g' are ignored when checking the name. Send bug reports to ." exit $? ;; -v|--v|--ve|--ver|--vers|--versi|--versio|--version) echo "missing $scriptversion (GNU Automake)" exit $? ;; -*) echo 1>&2 "$0: unknown '$1' option" echo 1>&2 "Try '$0 --help' for more information" exit 1 ;; esac # Run the given program, remember its exit status. "$@"; st=$? # If it succeeded, we are done. test $st -eq 0 && exit 0 # Also exit now if we it failed (or wasn't found), and '--version' was # passed; such an option is passed most likely to detect whether the # program is present and works. case $2 in --version|--help) exit $st;; esac # Exit code 63 means version mismatch. This often happens when the user # tries to use an ancient version of a tool on a file that requires a # minimum version. if test $st -eq 63; then msg="probably too old" elif test $st -eq 127; then # Program was missing. msg="missing on your system" else # Program was found and executed, but failed. 
Give up. exit $st fi perl_URL=http://www.perl.org/ flex_URL=http://flex.sourceforge.net/ gnu_software_URL=http://www.gnu.org/software program_details () { case $1 in aclocal|automake) echo "The '$1' program is part of the GNU Automake package:" echo "<$gnu_software_URL/automake>" echo "It also requires GNU Autoconf, GNU m4 and Perl in order to run:" echo "<$gnu_software_URL/autoconf>" echo "<$gnu_software_URL/m4/>" echo "<$perl_URL>" ;; autoconf|autom4te|autoheader) echo "The '$1' program is part of the GNU Autoconf package:" echo "<$gnu_software_URL/autoconf/>" echo "It also requires GNU m4 and Perl in order to run:" echo "<$gnu_software_URL/m4/>" echo "<$perl_URL>" ;; esac } give_advice () { # Normalize program name to check for. normalized_program=`echo "$1" | sed ' s/^gnu-//; t s/^gnu//; t s/^g//; t'` printf '%s\n' "'$1' is $msg." configure_deps="'configure.ac' or m4 files included by 'configure.ac'" case $normalized_program in autoconf*) echo "You should only need it if you modified 'configure.ac'," echo "or m4 files included by it." program_details 'autoconf' ;; autoheader*) echo "You should only need it if you modified 'acconfig.h' or" echo "$configure_deps." program_details 'autoheader' ;; automake*) echo "You should only need it if you modified 'Makefile.am' or" echo "$configure_deps." program_details 'automake' ;; aclocal*) echo "You should only need it if you modified 'acinclude.m4' or" echo "$configure_deps." program_details 'aclocal' ;; autom4te*) echo "You might have modified some maintainer files that require" echo "the 'autom4te' program to be rebuilt." program_details 'autom4te' ;; bison*|yacc*) echo "You should only need it if you modified a '.y' file." echo "You may want to install the GNU Bison package:" echo "<$gnu_software_URL/bison/>" ;; lex*|flex*) echo "You should only need it if you modified a '.l' file." 
echo "You may want to install the Fast Lexical Analyzer package:" echo "<$flex_URL>" ;; help2man*) echo "You should only need it if you modified a dependency" \ "of a man page." echo "You may want to install the GNU Help2man package:" echo "<$gnu_software_URL/help2man/>" ;; makeinfo*) echo "You should only need it if you modified a '.texi' file, or" echo "any other file indirectly affecting the aspect of the manual." echo "You might want to install the Texinfo package:" echo "<$gnu_software_URL/texinfo/>" echo "The spurious makeinfo call might also be the consequence of" echo "using a buggy 'make' (AIX, DU, IRIX), in which case you might" echo "want to install GNU make:" echo "<$gnu_software_URL/make/>" ;; *) echo "You might have modified some files without having the proper" echo "tools for further handling them. Check the 'README' file, it" echo "often tells you about the needed prerequisites for installing" echo "this package. You may also peek at any GNU archive site, in" echo "case some other package contains this missing '$1' program." ;; esac } give_advice "$1" | sed -e '1s/^/WARNING: /' \ -e '2,$s/^/ /' >&2 # Propagate the correct exit status (expected to be 127 for a program # not found, 63 for a program that failed due to version mismatch). exit $st # Local variables: # eval: (add-hook 'write-file-hooks 'time-stamp) # time-stamp-start: "scriptversion=" # time-stamp-format: "%:y-%02m-%02d.%02H" # time-stamp-time-zone: "UTC" # time-stamp-end: "; # UTC" # End: slurm-slurm-15-08-7-1/auxdir/slurm.m4000066400000000000000000000213501265000126300172260ustar00rootroot00000000000000##***************************************************************************** ## $Id$ ##***************************************************************************** # AUTHOR: # Mark A. 
Grondona # # SYNOPSIS: # Various X_AC_SLURM* macros for use in slurm # ##***************************************************************************** AC_DEFUN([X_AC_SLURM_PORTS], [ AC_MSG_CHECKING(for slurmctld default port) AC_ARG_WITH(slurmctld-port, AS_HELP_STRING(--with-slurmctld-port=N,set slurmctld default port [[6817]]), [ if test `expr match "$withval" '[[0-9]]*$'` -gt 0; then slurmctldport="$withval" fi ] ) AC_MSG_RESULT(${slurmctldport=$1}) AC_DEFINE_UNQUOTED(SLURMCTLD_PORT, [$slurmctldport], [Define the default port number for slurmctld]) AC_SUBST(SLURMCTLD_PORT) AC_MSG_CHECKING(for slurmd default port) AC_ARG_WITH(slurmd-port, AS_HELP_STRING(--with-slurmd-port=N,set slurmd default port [[6818]]), [ if test `expr match "$withval" '[[0-9]]*$'` -gt 0; then slurmdport="$withval" fi ] ) AC_MSG_RESULT(${slurmdport=$2}) AC_DEFINE_UNQUOTED(SLURMD_PORT, [$slurmdport], [Define the default port number for slurmd]) AC_SUBST(SLURMD_PORT) AC_MSG_CHECKING(for slurmdbd default port) AC_ARG_WITH(slurmdbd-port, AS_HELP_STRING(--with-slurmdbd-port=N,set slurmdbd default port [[6819]]), [ if test `expr match "$withval" '[[0-9]]*$'` -gt 0; then slurmdbdport="$withval" fi ] ) AC_MSG_RESULT(${slurmdbdport=$3}) AC_DEFINE_UNQUOTED(SLURMDBD_PORT, [$slurmdbdport], [Define the default port number for slurmdbd]) AC_SUBST(SLURMDBD_PORT) AC_MSG_CHECKING(for slurmctld default port count) AC_ARG_WITH(slurmctld-port-count, AS_HELP_STRING(--with-slurmctld-port-count=N,set slurmctld default port count [[1]]), [ if test `expr match "$withval" '[[0-9]]*$'` -gt 0; then slurmctldportcount="$withval" fi ] ) AC_MSG_RESULT(${slurmctldportcount=$4}) AC_DEFINE_UNQUOTED(SLURMCTLD_PORT_COUNT, [$slurmctldportcount], [Define the default port count for slurmctld]) AC_SUBST(SLURMCTLD_PORT_COUNT) ]) dnl dnl Generic option for system dimensions dnl AC_DEFUN([X_AC_DIMENSIONS], [ AC_MSG_CHECKING([System dimensions]) AC_ARG_WITH([dimensions], AS_HELP_STRING(--with-dimensions=N, set system dimension count 
for generic computer system), [ if test `expr match "$withval" '[[0-9]]*$'` -gt 0; then dimensions="$withval" x_ac_dimensions=yes fi ], [x_ac_dimensions=no] ) if test "$x_ac_dimensions" = yes; then if test $dimensions -lt 1; then AC_MSG_ERROR([Invalid dimensions value $dimensions]) fi AC_MSG_RESULT([$dimensions]); AC_DEFINE_UNQUOTED(SYSTEM_DIMENSIONS, [$dimensions], [Define system dimension count]) else AC_MSG_RESULT([not set]); fi ]) dnl dnl Check for program_invocation_name dnl AC_DEFUN([X_AC_SLURM_PROGRAM_INVOCATION_NAME], [ AC_MSG_CHECKING([for program_invocation_name]) AC_LINK_IFELSE([AC_LANG_PROGRAM([[#include <errno.h> extern char *program_invocation_name;]], [[char *p; p = program_invocation_name; printf("%s\n", p);]])],[got_program_invocation_name=yes],[ ]) AC_MSG_RESULT(${got_program_invocation_name=no}) if test "x$got_program_invocation_name" = "xyes"; then AC_DEFINE(HAVE_PROGRAM_INVOCATION_NAME, 1, [Define if libc sets program_invocation_name] ) fi ])dnl AC_PROG_INVOCATION_NAME dnl dnl Check for Bigendian arch and set SLURM_BIGENDIAN accordingly dnl AC_DEFUN([X_AC_SLURM_BIGENDIAN], [ AC_C_BIGENDIAN if test "x$ac_cv_c_bigendian" = "xyes"; then AC_DEFINE(SLURM_BIGENDIAN,1, [Define if your architecture's byteorder is big endian.]) fi ])dnl AC_SLURM_BIGENDIAN dnl dnl AC_SLURM_SEMAPHORE dnl AC_DEFUN([X_AC_SLURM_SEMAPHORE], [ SEMAPHORE_SOURCES="" SEMAPHORE_LIBS="" AC_CHECK_LIB( posix4, sem_open, [SEMAPHORE_LIBS="-lposix4"; AC_DEFINE(HAVE_POSIX_SEMS, 1, [Define if you have Posix semaphores.])], [SEMAPHORE_SOURCES="semaphore.c"] ) AC_SUBST(SEMAPHORE_SOURCES) AC_SUBST(SEMAPHORE_LIBS) ])dnl AC_SLURM_SEMAPHORE dnl dnl dnl dnl Perform SLURM Project version setup AC_DEFUN([X_AC_SLURM_VERSION], [ # # Determine project/version from META file. # These are substituted into the Makefile and config.h.
# PROJECT="`perl -ne 'print,exit if s/^\s*NAME:\s*(\S*).*/\1/i' $srcdir/META`" AC_DEFINE_UNQUOTED(PROJECT, "$PROJECT", [Define the project's name.]) AC_SUBST(PROJECT) # Automake desires "PACKAGE" variable instead of PROJECT PACKAGE=$PROJECT ## Build the API version ## NOTE: We map API_MAJOR to be (API_CURRENT - API_AGE) to match the ## behavior of libtool in setting the library version number. For more ## information see src/api/Makefile.am for name in CURRENT REVISION AGE; do API=`perl -ne "print,exit if s/^\s*API_$name:\s*(\S*).*/\1/i" $srcdir/META` eval SLURM_API_$name=$API done SLURM_API_MAJOR=`expr $SLURM_API_CURRENT - $SLURM_API_AGE` SLURM_API_VERSION=`printf "0x%02x%02x%02x" $((10#$SLURM_API_MAJOR)) $((10#$SLURM_API_AGE)) $((10#$SLURM_API_REVISION))` AC_DEFINE_UNQUOTED(SLURM_API_VERSION, $SLURM_API_VERSION, [Define the API's version]) AC_DEFINE_UNQUOTED(SLURM_API_CURRENT, $SLURM_API_CURRENT, [API current version]) AC_DEFINE_UNQUOTED(SLURM_API_MAJOR, $SLURM_API_MAJOR, [API current major]) AC_DEFINE_UNQUOTED(SLURM_API_AGE, $SLURM_API_AGE, [API current age]) AC_DEFINE_UNQUOTED(SLURM_API_REVISION, $SLURM_API_REVISION, [API current rev]) AC_SUBST(SLURM_API_VERSION) AC_SUBST(SLURM_API_CURRENT) AC_SUBST(SLURM_API_MAJOR) AC_SUBST(SLURM_API_AGE) AC_SUBST(SLURM_API_REVISION) # rpm make target needs Version in META, not major and minor version numbers VERSION="`perl -ne 'print,exit if s/^\s*VERSION:\s*(\S*).*/\1/i' $srcdir/META`" # If you ever use AM_INIT_AUTOMAKE(subdir-objects) do not define VERSION # since it will do it this automatically AC_DEFINE_UNQUOTED(VERSION, "$VERSION", [Define the project's version.]) AC_SUBST(VERSION) SLURM_MAJOR="`perl -ne 'print,exit if s/^\s*MAJOR:\s*(\S*).*/\1/i' $srcdir/META`" SLURM_MINOR="`perl -ne 'print,exit if s/^\s*MINOR:\s*(\S*).*/\1/i' $srcdir/META`" SLURM_MICRO="`perl -ne 'print,exit if s/^\s*MICRO:\s*(\S*).*/\1/i' $srcdir/META`" RELEASE="`perl -ne 'print,exit if s/^\s*RELEASE:\s*(\S*).*/\1/i' $srcdir/META`" # NOTE: 
SLURM_VERSION_NUMBER excludes any non-numeric component # (e.g. "pre1" in the MICRO), but may be suitable for the user determining # how to use the APIs or other differences. SLURM_VERSION_NUMBER="`printf "0x%02x%02x%02x" $((10#$SLURM_MAJOR)) $((10#$SLURM_MINOR)) $((10#$SLURM_MICRO))`" AC_DEFINE_UNQUOTED(SLURM_VERSION_NUMBER, $SLURM_VERSION_NUMBER, [SLURM Version Number]) AC_SUBST(SLURM_VERSION_NUMBER) if test "$SLURM_MAJOR.$SLURM_MINOR.$SLURM_MICRO" != "$VERSION"; then AC_MSG_ERROR([META information is inconsistent: $VERSION != $SLURM_MAJOR.$SLURM_MINOR.$SLURM_MICRO!]) fi # Check to see if we're on an unstable branch (no prereleases yet) if echo "$RELEASE" | grep -e "UNSTABLE"; then DATE=`date +"%Y%m%d%H%M"` SLURM_RELEASE="unstable svn build $DATE" SLURM_VERSION_STRING="$SLURM_MAJOR.$SLURM_MINOR ($SLURM_RELEASE)" else SLURM_RELEASE="`echo $RELEASE | sed 's/^0\.//'`" SLURM_VERSION_STRING="$SLURM_MAJOR.$SLURM_MINOR.$SLURM_MICRO" test $RELEASE = "1" || SLURM_VERSION_STRING="$SLURM_VERSION_STRING-$SLURM_RELEASE" fi AC_DEFINE_UNQUOTED(SLURM_MAJOR, "$SLURM_MAJOR", [Define the project's major version.]) AC_DEFINE_UNQUOTED(SLURM_MINOR, "$SLURM_MINOR", [Define the project's minor version.]) AC_DEFINE_UNQUOTED(SLURM_MICRO, "$SLURM_MICRO", [Define the project's micro version.]) AC_DEFINE_UNQUOTED(RELEASE, "$RELEASE", [Define the project's release.]) AC_DEFINE_UNQUOTED(SLURM_VERSION_STRING, "$SLURM_VERSION_STRING", [Define the project's version string.]) AC_SUBST(SLURM_MAJOR) AC_SUBST(SLURM_MINOR) AC_SUBST(SLURM_MICRO) AC_SUBST(RELEASE) AC_SUBST(SLURM_VERSION_STRING) ]) dnl AC_SLURM_VERSION dnl dnl Test if we want to include rpath in the executables (default=yes) dnl Doing so is generally discouraged due to problems this causes in upgrading dnl software and general incompatibility issues dnl AC_DEFUN([X_AC_RPATH], [ ac_with_rpath=yes AC_MSG_CHECKING([whether to include rpath in build]) AC_ARG_WITH( [rpath], AS_HELP_STRING(--without-rpath, Do not include rpath in build), [
case "$withval" in yes) ac_with_rpath=yes ;; no) ac_with_rpath=no ;; *) AC_MSG_RESULT([doh!]) AC_MSG_ERROR([bad value "$withval" for --without-rpath]) ;; esac ] ) AC_MSG_RESULT([$ac_with_rpath]) ]) slurm-slurm-15-08-7-1/auxdir/test-driver000077500000000000000000000102771265000126300200260ustar00rootroot00000000000000#! /bin/sh # test-driver - basic testsuite driver script. scriptversion=2013-07-13.22; # UTC # Copyright (C) 2011-2013 Free Software Foundation, Inc. # # This program is free software; you can redistribute it and/or modify # it under the terms of the GNU General Public License as published by # the Free Software Foundation; either version 2, or (at your option) # any later version. # # This program is distributed in the hope that it will be useful, # but WITHOUT ANY WARRANTY; without even the implied warranty of # MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the # GNU General Public License for more details. # # You should have received a copy of the GNU General Public License # along with this program. If not, see . # As a special exception to the GNU General Public License, if you # distribute this file as part of a program that contains a # configuration script generated by Autoconf, you may include it under # the same distribution terms that you use for the rest of that program. # This file is maintained in Automake, please report # bugs to or send patches to # . # Make unconditional expansion of undefined variables an error. This # helps a lot in preventing typo-related bugs. set -u usage_error () { echo "$0: $*" >&2 print_usage >&2 exit 2 } print_usage () { cat <$log_file 2>&1 estatus=$? 
if test $enable_hard_errors = no && test $estatus -eq 99; then estatus=1 fi case $estatus:$expect_failure in 0:yes) col=$red res=XPASS recheck=yes gcopy=yes;; 0:*) col=$grn res=PASS recheck=no gcopy=no;; 77:*) col=$blu res=SKIP recheck=no gcopy=yes;; 99:*) col=$mgn res=ERROR recheck=yes gcopy=yes;; *:yes) col=$lgn res=XFAIL recheck=no gcopy=yes;; *:*) col=$red res=FAIL recheck=yes gcopy=yes;; esac # Report outcome to console. echo "${col}${res}${std}: $test_name" # Register the test result, and other relevant metadata. echo ":test-result: $res" > $trs_file echo ":global-test-result: $res" >> $trs_file echo ":recheck: $recheck" >> $trs_file echo ":copy-in-global-log: $gcopy" >> $trs_file # Local Variables: # mode: shell-script # sh-indentation: 2 # eval: (add-hook 'write-file-hooks 'time-stamp) # time-stamp-start: "scriptversion=" # time-stamp-format: "%:y-%02m-%02d.%02H" # time-stamp-time-zone: "UTC" # time-stamp-end: "; # UTC" # End: slurm-slurm-15-08-7-1/auxdir/type_socklen_t.m4000066400000000000000000000020151265000126300211030ustar00rootroot00000000000000##***************************************************************************** ## $Id$ ##***************************************************************************** # AUTHOR: # Lars Brinkhoff # # SYNOPSIS: # TYPE_SOCKLEN_T # # DESCRIPTION: # Check whether sys/socket.h defines type socklen_t. # Please note that some systems require sys/types.h to be included # before sys/socket.h can be compiled. 
##***************************************************************************** AC_DEFUN([TYPE_SOCKLEN_T], [AC_CACHE_CHECK([for socklen_t], ac_cv_type_socklen_t, [ AC_COMPILE_IFELSE([AC_LANG_PROGRAM([[#include <sys/types.h> #include <sys/socket.h>]], [[socklen_t len = 42; return 0;]])],[ac_cv_type_socklen_t=yes],[ac_cv_type_socklen_t=no]) ]) if test "$ac_cv_type_socklen_t" = "yes"; then AC_DEFINE([HAVE_SOCKLEN_T], [1], [Define if you have the socklen_t type.]) fi AH_VERBATIM([HAVE_SOCKLEN_T_], [#ifndef HAVE_SOCKLEN_T # define HAVE_SOCKLEN_T typedef int socklen_t; #endif]) ]) slurm-slurm-15-08-7-1/auxdir/x_ac__system_configuration.m4000066400000000000000000000015051265000126300234700ustar00rootroot00000000000000##***************************************************************************** # $Id$ ##***************************************************************************** # AUTHOR: # Moe Jette # # SYNOPSIS: # X_AC__SYSTEM_CONFIGURATION # # DESCRIPTION: # Tests for existence of the _system_configuration structure. # # WARNINGS: # This macro must be placed after AC_PROG_CC or equivalent. ##***************************************************************************** AC_DEFUN([X_AC__SYSTEM_CONFIGURATION], [ AC_COMPILE_IFELSE([AC_LANG_PROGRAM([[#include <sys/systemcfg.h>]], [[double x = _system_configuration.physmem;]])],[AC_DEFINE(HAVE__SYSTEM_CONFIGURATION, 1, [Define to 1 if you have the external variable, _system_configuration with a member named physmem.])],[]) ]) slurm-slurm-15-08-7-1/auxdir/x_ac_affinity.m4000066400000000000000000000044261265000126300206740ustar00rootroot00000000000000##***************************************************************************** # $Id$ ##***************************************************************************** # AUTHOR: # Morris Jette # # SYNOPSIS: # X_AC_AFFINITY # # DESCRIPTION: # Test for various task affinity functions and set the definitions. # # WARNINGS: # This macro must be placed after AC_PROG_CC or equivalent.
##***************************************************************************** AC_DEFUN([X_AC_AFFINITY], [ # Test if sched_setaffinity function exists and argument count (it can vary) AC_CHECK_FUNCS(sched_setaffinity, [have_sched_setaffinity=yes]) AC_COMPILE_IFELSE([AC_LANG_PROGRAM([[#define _GNU_SOURCE #include <sched.h>]], [[cpu_set_t mask; sched_getaffinity(0, sizeof(cpu_set_t), &mask);]])],[AC_DEFINE(SCHED_GETAFFINITY_THREE_ARGS, 1, [Define to 1 if sched_getaffinity takes three arguments.])],[]) AC_COMPILE_IFELSE([AC_LANG_PROGRAM([[#define _GNU_SOURCE #include <sched.h>]], [[cpu_set_t mask; sched_getaffinity(0, &mask);]])],[AC_DEFINE(SCHED_GETAFFINITY_TWO_ARGS, 1, [Define to 1 if sched_getaffinity takes two arguments.])],[]) # # Test for NUMA memory affinity functions and set the definitions # AC_CHECK_LIB([numa], [numa_available], [ac_have_numa=yes; NUMA_LIBS="-lnuma"]) AC_SUBST(NUMA_LIBS) AM_CONDITIONAL(HAVE_NUMA, test "x$ac_have_numa" = "xyes") if test "x$ac_have_numa" = "xyes"; then AC_DEFINE(HAVE_NUMA, 1, [define if numa library installed]) CFLAGS="-DNUMA_VERSION1_COMPATIBILITY $CFLAGS" else AC_MSG_WARN([unable to locate NUMA memory affinity functions]) fi # # Test for cpuset directory # cpuset_default_dir="/dev/cpuset" AC_ARG_WITH([cpusetdir], AS_HELP_STRING(--with-cpusetdir=PATH,specify path to cpuset directory default is /dev/cpuset), [try_path=$withval]) for cpuset_dir in $try_path "" $cpuset_default_dir; do if test -d "$cpuset_dir" ; then AC_DEFINE_UNQUOTED(CPUSET_DIR, "$cpuset_dir", [Define location of cpuset directory]) have_sched_setaffinity=yes break fi done # # Set HAVE_SCHED_SETAFFINITY if any task affinity supported AM_CONDITIONAL(HAVE_SCHED_SETAFFINITY, test "x$have_sched_setaffinity" = "xyes") ]) slurm-slurm-15-08-7-1/auxdir/x_ac_aix.m4000066400000000000000000000051551265000126300176440ustar00rootroot00000000000000##***************************************************************************** ## $Id$
##***************************************************************************** # AUTHOR: # Morris Jette # # SYNOPSIS: # X_AC_AIX # # DESCRIPTION: # Check for AIX operating system and set parameters accordingly, # also define HAVE_AIX and HAVE_LARGEFILE if appropriate. # NOTE: AC_SYS_LARGEFILE may fail on AIX due to inconsistencies within # installed gcc header files. ##***************************************************************************** AC_DEFUN([X_AC_AIX], [ case "$host" in *-*-aix*) LDFLAGS="$LDFLAGS -Wl,-brtl" # permit run time linking LIB_LDFLAGS="$LDFLAGS -Wl,-G -Wl,-bnoentry -Wl,-bgcbypass:1000 -Wl,-bexpfull" SO_LDFLAGS=" $LDFLAGS -Wl,-G -Wl,-bnoentry -Wl,-bgcbypass:1000 -Wl,-bexpfull" if test "$OBJECT_MODE" = "64"; then CFLAGS="-maix64 $CFLAGS" CMD_LDFLAGS="$LDFLAGS -Wl,-bgcbypass:1000 -Wl,-bexpfull" # keep all common functions else CFLAGS="-maix32 $CFLAGS" CMD_LDFLAGS="$LDFLAGS -Wl,-bgcbypass:1000 -Wl,-bexpfull -Wl,-bmaxdata:0x70000000" # keep all common functions fi ac_have_aix="yes" ac_with_readline="no" AC_DEFINE(HAVE_AIX, 1, [Define to 1 for AIX operating system]) ;; *) ac_have_aix="no" ;; esac AC_SUBST(CMD_LDFLAGS) AC_SUBST(LIB_LDFLAGS) AC_SUBST(SO_LDFLAGS) AM_CONDITIONAL(HAVE_AIX, test "x$ac_have_aix" = "xyes") AC_SUBST(HAVE_AIX, "$ac_have_aix") if test "x$ac_have_aix" = "xyes"; then AC_ARG_WITH(proctrack, AS_HELP_STRING(--with-proctrack=PATH,Specify path to proctrack sources), [ PROCTRACKDIR="$withval" ] ) if test -f "$PROCTRACKDIR/lib/proctrackext.exp"; then PROCTRACKDIR="$PROCTRACKDIR/lib" AC_SUBST(PROCTRACKDIR) CPPFLAGS="-I$PROCTRACKDIR/include $CPPFLAGS" AC_CHECK_HEADERS(proctrack.h) ac_have_aix_proctrack="yes" elif test -f "$prefix/lib/proctrackext.exp"; then PROCTRACKDIR="$prefix/lib" AC_SUBST(PROCTRACKDIR) CPPFLAGS="$CPPFLAGS -I$prefix/include" AC_CHECK_HEADERS(proctrack.h) ac_have_aix_proctrack="yes" else AC_MSG_WARN([proctrackext.exp is required for AIX proctrack support, specify location with --with-proctrack])
ac_have_aix_proctrack="no" fi else ac_have_aix_proctrack="no" AC_SYS_LARGEFILE fi AM_CONDITIONAL(HAVE_AIX_PROCTRACK, test "x$ac_have_aix_proctrack" = "xyes") ]) slurm-slurm-15-08-7-1/auxdir/x_ac_blcr.m4000066400000000000000000000036141265000126300200030ustar00rootroot00000000000000##***************************************************************************** ## $Id: x_ac_blcr.m4 0001 2009-01-10 16:06:05Z hjcao $ ##***************************************************************************** # AUTHOR: # Copied from x_ac_munge. # # # SYNOPSIS: # X_AC_BLCR() # # DESCRIPTION: # Check the usual suspects for an BLCR installation, # updating CPPFLAGS and LDFLAGS as necessary. # # WARNINGS: # This macro must be placed after AC_PROG_CC and before AC_PROG_LIBTOOL. ##***************************************************************************** AC_DEFUN([X_AC_BLCR], [ _x_ac_blcr_dirs="/usr /usr/local /opt/freeware /opt/blcr" _x_ac_blcr_libs="lib64 lib" AC_ARG_WITH( [blcr], AS_HELP_STRING(--with-blcr=PATH,Specify path to BLCR installation), [_x_ac_blcr_dirs="$withval $_x_ac_blcr_dirs"]) AC_CACHE_CHECK( [for blcr installation], [x_ac_cv_blcr_dir], [ for d in $_x_ac_blcr_dirs; do test -d "$d" || continue test -d "$d/include" || continue test -f "$d/include/libcr.h" || continue for bit in $_x_ac_blcr_libs; do test -d "$d/$bit" || continue _x_ac_blcr_libs_save="$LIBS" LIBS="-L$d/$bit -lcr $LIBS" AC_LINK_IFELSE( [AC_LANG_CALL([], cr_get_restart_info)], AS_VAR_SET(x_ac_cv_blcr_dir, $d)) LIBS="$_x_ac_blcr_libs_save" test -n "$x_ac_cv_blcr_dir" && break done test -n "$x_ac_cv_blcr_dir" && break done ]) if test -z "$x_ac_cv_blcr_dir"; then AC_MSG_WARN([unable to locate blcr installation]) else BLCR_HOME="$x_ac_cv_blcr_dir" BLCR_LIBS="-lcr" BLCR_CPPFLAGS="-I$x_ac_cv_blcr_dir/include" BLCR_LDFLAGS="-L$x_ac_cv_blcr_dir/$bit" fi AC_DEFINE_UNQUOTED(BLCR_HOME, "$x_ac_cv_blcr_dir", [Define BLCR installation home]) AC_SUBST(BLCR_HOME) AC_SUBST(BLCR_LIBS) AC_SUBST(BLCR_CPPFLAGS) 
AC_SUBST(BLCR_LDFLAGS) AM_CONDITIONAL(WITH_BLCR, test -n "$x_ac_cv_blcr_dir") ]) slurm-slurm-15-08-7-1/auxdir/x_ac_bluegene.m4000066400000000000000000000342711265000126300206520ustar00rootroot00000000000000##***************************************************************************** ## $Id$ ##***************************************************************************** # AUTHOR: # Morris Jette , Danny Auble # # SYNOPSIS: # X_AC_BGL X_AC_BGP X_AC_BGQ # # DESCRIPTION: # Test for Blue Gene specific files. # If found define HAVE_BG and HAVE_FRONT_END and others ##***************************************************************************** AC_DEFUN([X_AC_BGL], [ ac_real_bluegene_loaded=no ac_bluegene_loaded=no AC_ARG_WITH(db2-dir, AS_HELP_STRING(--with-db2-dir=PATH,Specify path to parent directory of DB2 library), [ trydb2dir=$withval ]) # test for bluegene emulation mode AC_ARG_ENABLE(bluegene-emulation, AS_HELP_STRING(--enable-bluegene-emulation, deprecated use --enable-bgl-emulation), [ case "$enableval" in yes) bluegene_emulation=yes ;; no) bluegene_emulation=no ;; *) AC_MSG_ERROR([bad value "$enableval" for --enable-bluegene-emulation]) ;; esac ]) AC_ARG_ENABLE(bgl-emulation, AS_HELP_STRING(--enable-bgl-emulation,Run SLURM in BGL mode on a non-bluegene system), [ case "$enableval" in yes) bgl_emulation=yes ;; no) bgl_emulation=no ;; *) AC_MSG_ERROR([bad value "$enableval" for --enable-bgl-emulation]) ;; esac ]) if test "x$bluegene_emulation" = "xyes" -o "x$bgl_emulation" = "xyes"; then AC_DEFINE(HAVE_3D, 1, [Define to 1 if 3-dimensional architecture]) AC_DEFINE(SYSTEM_DIMENSIONS, 3, [3-dimensional architecture]) AC_DEFINE(HAVE_BG, 1, [Define to 1 if emulating or running on Blue Gene system]) AC_DEFINE(HAVE_BG_L_P, 1, [Define to 1 if emulating or running on Blue Gene/L or P system]) AC_DEFINE(HAVE_BGL, 1, [Define to 1 if emulating or running on Blue Gene/L system]) AC_DEFINE(HAVE_FRONT_END, 1, [Define to 1 if running slurmd on front-end only]) 
AC_MSG_NOTICE([Running in BG/L emulation mode]) bg_default_dirs="" #define ac_bluegene_loaded so we don't load another bluegene conf ac_bluegene_loaded=yes else bg_default_dirs="/bgl/BlueLight/ppcfloor/bglsys /opt/IBM/db2/V8.1 /u/bgdb2cli/sqllib /home/bgdb2cli/sqllib" fi for bg_dir in $trydb2dir "" $bg_default_dirs; do # Skip directories that don't exist if test ! -z "$bg_dir" -a ! -d "$bg_dir" ; then continue; fi # Search for required BG API libraries in the directory if test -z "$have_bg_ar" -a -f "$bg_dir/lib64/libbglbridge.so" ; then have_bg_ar=yes bg_bridge_so="$bg_dir/lib64/libbglbridge.so" bg_ldflags="$bg_ldflags -L$bg_dir/lib64 -L/usr/lib64 -Wl,--unresolved-symbols=ignore-in-shared-libs -lbglbridge -lbgldb -ltableapi -lbglmachine -lexpat -lsaymessage" fi # Search for required DB2 library in the directory if test -z "$have_db2" -a -f "$bg_dir/lib64/libdb2.so" ; then have_db2=yes bg_db2_so="$bg_dir/lib64/libdb2.so" bg_ldflags="$bg_ldflags -L$bg_dir/lib64 -ldb2" fi # Search for headers in the directory if test -z "$have_bg_hdr" -a -f "$bg_dir/include/rm_api.h" ; then have_bg_hdr=yes bg_includes="-I$bg_dir/include" fi done if test ! -z "$have_bg_ar" -a ! -z "$have_bg_hdr" -a ! -z "$have_db2" ; then # ac_with_readline="no" # Test to make sure the api is good have_bg_files=yes saved_LDFLAGS="$LDFLAGS" LDFLAGS="$saved_LDFLAGS $bg_ldflags -m64" AC_LINK_IFELSE([AC_LANG_PROGRAM([[ int rm_set_serial(char *); ]], [[ rm_set_serial(""); ]])],[have_bg_files=yes],[AC_MSG_ERROR(There is a problem linking to the BG/L api.)]) LDFLAGS="$saved_LDFLAGS" fi if test ! 
-z "$have_bg_files" ; then BG_INCLUDES="$bg_includes" CFLAGS="$CFLAGS -m64 --std=gnu99" CXXFLAGS="$CXXFLAGS $CFLAGS" AC_DEFINE(HAVE_3D, 1, [Define to 1 if 3-dimensional architecture]) AC_DEFINE(SYSTEM_DIMENSIONS, 3, [3-dimensional architecture]) AC_DEFINE(HAVE_BG, 1, [Define to 1 if emulating or running on Blue Gene system]) AC_DEFINE(HAVE_BG_L_P, 1, [Define to 1 if emulating or running on Blue Gene/L or P system]) AC_DEFINE(HAVE_BGL, 1, [Define to 1 if emulating or running on Blue Gene/L system]) AC_DEFINE(HAVE_FRONT_END, 1, [Define to 1 if running slurmd on front-end only]) AC_DEFINE(HAVE_BG_FILES, 1, [Define to 1 if have Blue Gene files]) AC_DEFINE_UNQUOTED(BG_BRIDGE_SO, "$bg_bridge_so", [Define the BG_BRIDGE_SO value]) AC_DEFINE_UNQUOTED(BG_DB2_SO, "$bg_db2_so", [Define the BG_DB2_SO value]) AC_MSG_CHECKING(for BG serial value) bg_serial="BGL" AC_ARG_WITH(bg-serial, AS_HELP_STRING(--with-bg-serial=NAME,set BG_SERIAL value), [bg_serial="$withval"]) AC_MSG_RESULT($bg_serial) AC_DEFINE_UNQUOTED(BG_SERIAL, "$bg_serial", [Define the BG_SERIAL value]) #define ac_bluegene_loaded so we don't load another bluegene conf ac_bluegene_loaded=yes ac_real_bluegene_loaded=yes fi AC_SUBST(BG_INCLUDES) ]) AC_DEFUN([X_AC_BGP], [ # test for bluegene emulation mode AC_ARG_ENABLE(bgp-emulation, AS_HELP_STRING(--enable-bgp-emulation,Run SLURM in BG/P mode on a non-bluegene system), [ case "$enableval" in yes) bgp_emulation=yes ;; no) bgp_emulation=no ;; *) AC_MSG_ERROR([bad value "$enableval" for --enable-bgp-emulation]) ;; esac ]) # Skip if already set if test "x$ac_bluegene_loaded" = "xyes" ; then bg_default_dirs="" elif test "x$bgp_emulation" = "xyes"; then AC_DEFINE(HAVE_3D, 1, [Define to 1 if 3-dimensional architecture]) AC_DEFINE(SYSTEM_DIMENSIONS, 3, [3-dimensional architecture]) AC_DEFINE(HAVE_BG, 1, [Define to 1 if emulating or running on Blue Gene system]) AC_DEFINE(HAVE_BG_L_P, 1, [Define to 1 if emulating or running on Blue Gene/L or P system]) AC_DEFINE(HAVE_BGP, 1, 
[Define to 1 if emulating or running on Blue Gene/P system]) AC_DEFINE(HAVE_FRONT_END, 1, [Define to 1 if running slurmd on front-end only]) AC_MSG_NOTICE([Running in BG/P emulation mode]) bg_default_dirs="" #define ac_bluegene_loaded so we don't load another bluegene conf ac_bluegene_loaded=yes else bg_default_dirs="/bgsys/drivers/ppcfloor" fi libname=bgpbridge for bg_dir in $trydb2dir "" $bg_default_dirs; do # Skip directories that don't exist if test ! -z "$bg_dir" -a ! -d "$bg_dir" ; then continue; fi soloc=$bg_dir/lib64/lib$libname.so # Search for required BG API libraries in the directory if test -z "$have_bg_ar" -a -f "$soloc" ; then have_bgp_ar=yes bg_ldflags="$bg_ldflags -L$bg_dir/lib64 -L/usr/lib64 -Wl,--unresolved-symbols=ignore-in-shared-libs -l$libname" fi # Search for headers in the directory if test -z "$have_bg_hdr" -a -f "$bg_dir/include/rm_api.h" ; then have_bgp_hdr=yes bg_includes="-I$bg_dir/include" fi done if test ! -z "$have_bgp_ar" -a ! -z "$have_bgp_hdr" ; then # ac_with_readline="no" # Test to make sure the api is good saved_LDFLAGS="$LDFLAGS" LDFLAGS="$saved_LDFLAGS $bg_ldflags -m64" AC_LINK_IFELSE([AC_LANG_PROGRAM([[ int rm_set_serial(char *); ]], [[ rm_set_serial(""); ]])],[have_bgp_files=yes],[AC_MSG_ERROR(There is a problem linking to the BG/P api.)]) LDFLAGS="$saved_LDFLAGS" fi if test ! 
-z "$have_bgp_files" ; then BG_INCLUDES="$bg_includes" CFLAGS="$CFLAGS -m64" CXXFLAGS="$CXXFLAGS $CFLAGS" AC_DEFINE(HAVE_3D, 1, [Define to 1 if 3-dimensional architecture]) AC_DEFINE(SYSTEM_DIMENSIONS, 3, [3-dimensional architecture]) AC_DEFINE(HAVE_BG, 1, [Define to 1 if emulating or running on Blue Gene system]) AC_DEFINE(HAVE_BG_L_P, 1, [Define to 1 if emulating or running on Blue Gene/L or P system]) AC_DEFINE(HAVE_BGP, 1, [Define to 1 if emulating or running on Blue Gene/P system]) AC_DEFINE(HAVE_FRONT_END, 1, [Define to 1 if running slurmd on front-end only]) AC_DEFINE(HAVE_BG_FILES, 1, [Define to 1 if have Blue Gene files]) AC_DEFINE_UNQUOTED(BG_BRIDGE_SO, "$soloc", [Define the BG_BRIDGE_SO value]) AC_MSG_CHECKING(for BG serial value) bg_serial="BGP" AC_ARG_WITH(bg-serial,, [bg_serial="$withval"]) AC_MSG_RESULT($bg_serial) AC_DEFINE_UNQUOTED(BG_SERIAL, "$bg_serial", [Define the BG_SERIAL value]) #define ac_bluegene_loaded so we don't load another bluegene conf ac_bluegene_loaded=yes ac_real_bluegene_loaded=yes fi AC_SUBST(BG_INCLUDES) ]) AC_DEFUN([X_AC_BGQ], [ # test for bluegene emulation mode AC_ARG_ENABLE(bgq-emulation, AS_HELP_STRING(--enable-bgq-emulation,Run SLURM in BG/Q mode on a non-bluegene system), [ case "$enableval" in yes) bgq_emulation=yes ;; no) bgq_emulation=no ;; *) AC_MSG_ERROR([bad value "$enableval" for --enable-bgq-emulation]) ;; esac ]) # Skip if already set if test "x$ac_bluegene_loaded" = "xyes" ; then bg_default_dirs="" elif test "x$bgq_emulation" = "xyes"; then AC_DEFINE(HAVE_4D, 1, [Define to 1 if 4-dimensional architecture]) AC_DEFINE(SYSTEM_DIMENSIONS, 4, [4-dimensional schedulable architecture]) AC_DEFINE(HAVE_BG, 1, [Define to 1 if emulating or running on Blue Gene system]) AC_DEFINE(HAVE_BGQ, 1, [Define to 1 if emulating or running on Blue Gene/Q system]) AC_DEFINE(HAVE_FRONT_END, 1, [Define to 1 if running slurmd on front-end only]) AC_MSG_NOTICE([Running in BG/Q emulation mode]) bg_default_dirs="" #define ac_bluegene_loaded 
so we don't load another bluegene conf ac_bluegene_loaded=yes ac_bgq_loaded=yes else bg_default_dirs="/bgsys/drivers/ppcfloor" fi libname=bgsched loglibname=log4cxx runjoblibname=runjob_client for bg_dir in $trydb2dir "" $bg_default_dirs; do # Skip directories that don't exist if test ! -z "$bg_dir" -a ! -d "$bg_dir" ; then continue; fi soloc=$bg_dir/hlcs/lib/lib$libname.so # Search for required BG API libraries in the directory if test -z "$have_bgq_ar" -a -f "$soloc" ; then have_bgq_ar=yes if test "$ac_with_rpath" = "yes"; then bg_libs="$bg_libs -Wl,-rpath -Wl,$bg_dir/hlcs/lib -L$bg_dir/hlcs/lib -l$libname" else bg_libs="$bg_libs -L$bg_dir/hlcs/lib -l$libname" fi fi soloc=$bg_dir/extlib/lib/lib$loglibname.so if test -z "$have_bgq_ar" -a -f "$soloc" ; then have_bgq_ar=yes if test "$ac_with_rpath" = "yes"; then bg_libs="$bg_libs -Wl,-rpath -Wl,$bg_dir/extlib/lib -L$bg_dir/extlib/lib -l$loglibname" else bg_libs="$bg_libs -L$bg_dir/extlib/lib -l$loglibname" fi fi soloc=$bg_dir/hlcs/lib/lib$runjoblibname.so # Search for required BG API libraries in the directory if test -z "$have_bgq_ar" -a -f "$soloc" ; then have_bgq_ar=yes if test "$ac_with_rpath" = "yes"; then runjob_ldflags="$runjob_ldflags -Wl,-rpath -Wl,$bg_dir/hlcs/lib -L$bg_dir/hlcs/lib -l$runjoblibname" else runjob_ldflags="$runjob_ldflags -L$bg_dir/hlcs/lib -l$runjoblibname" fi fi # Search for headers in the directory if test -z "$have_bgq_hdr" -a -f "$bg_dir/hlcs/include/bgsched/bgsched.h" ; then have_bgq_hdr=yes bg_includes="-I$bg_dir -I$bg_dir/hlcs/include" fi if test -z "$have_bgq_hdr" -a -f "$bg_dir/extlib/include/log4cxx/logger.h" ; then have_bgq_hdr=yes bg_includes="$bg_includes -I$bg_dir/extlib/include" fi done if test ! -z "$have_bgq_ar" -a !
-z "$have_bgq_hdr" ; then # ac_with_readline="no" # Test to make sure the api is good saved_LIBS="$LIBS" saved_CPPFLAGS="$CPPFLAGS" LIBS="$saved_LIBS $bg_libs" CPPFLAGS="$saved_CPPFLAGS -m64 $bg_includes" AC_LANG_PUSH(C++) AC_LINK_IFELSE([AC_LANG_PROGRAM( [[#include #include ]], [[ bgsched::init(""); log4cxx::LoggerPtr logger_ptr(log4cxx::Logger::getLogger( "ibm" ));]])], [have_bgq_files=yes], [AC_MSG_ERROR(There is a problem linking to the BG/Q api.)]) # In later versions of the driver IBM added a better function # to see if blocks were IO connected or not. Here is a check # to not break backwards compatibility AC_LINK_IFELSE([AC_LANG_PROGRAM( [[#include #include ]], [[ bgsched::Block::checkIO("", NULL, NULL);]])], [have_bgq_new_io_check=yes], [AC_MSG_RESULT(Using old iocheck.)]) # In later versions of the driver IBM added an "action" to a # block. Here is a check to not break backwards compatibility AC_LINK_IFELSE([AC_LANG_PROGRAM( [[#include #include ]], [[ bgsched::Block::Ptr block_ptr; block_ptr->getAction();]])], [have_bgq_get_action=yes], [AC_MSG_RESULT(Blocks do not have actions!)]) AC_LANG_POP(C++) LIBS="$saved_LIBS" CPPFLAGS="$saved_CPPFLAGS" fi if test ! -z "$have_bgq_files" ; then BG_LDFLAGS="$bg_libs" RUNJOB_LDFLAGS="$runjob_ldflags" BG_INCLUDES="$bg_includes" CFLAGS="$CFLAGS -m64" CXXFLAGS="$CXXFLAGS $CFLAGS" AC_DEFINE(HAVE_4D, 1, [Define to 1 if 4-dimensional architecture]) AC_DEFINE(SYSTEM_DIMENSIONS, 4, [4-dimensional architecture]) AC_DEFINE(HAVE_BG, 1, [Define to 1 if emulating or running on Blue Gene system]) AC_DEFINE(HAVE_BGQ, 1, [Define to 1 if emulating or running on Blue Gene/Q system]) AC_DEFINE(HAVE_FRONT_END, 1, [Define to 1 if running slurmd on front-end only]) AC_DEFINE(HAVE_BG_FILES, 1, [Define to 1 if have Blue Gene files]) #AC_DEFINE_UNQUOTED(BG_BRIDGE_SO, "$soloc", [Define the BG_BRIDGE_SO value]) if test ! 
-z "$have_bgq_new_io_check" ; then AC_DEFINE(HAVE_BG_NEW_IO_CHECK, 1, [Define to 1 if using code with new iocheck]) fi if test ! -z "$have_bgq_get_action" ; then AC_DEFINE(HAVE_BG_GET_ACTION, 1, [Define to 1 if using code where blocks have actions]) fi AC_MSG_NOTICE([Running on a legitimate BG/Q system]) # AC_MSG_CHECKING(for BG serial value) # bg_serial="BGQ" # AC_ARG_WITH(bg-serial,, [bg_serial="$withval"]) # AC_MSG_RESULT($bg_serial) # AC_DEFINE_UNQUOTED(BG_SERIAL, "$bg_serial", [Define the BG_SERIAL value]) #define ac_bluegene_loaded so we don't load another bluegene conf ac_bluegene_loaded=yes ac_real_bluegene_loaded=yes ac_bgq_loaded=yes fi AC_SUBST(BG_INCLUDES) AC_SUBST(BG_LDFLAGS) AC_SUBST(RUNJOB_LDFLAGS) ])

slurm-slurm-15-08-7-1/auxdir/x_ac_cflags.m4

##***************************************************************************** ## $Id: x_ac_cflags.m4 5401 2005-09-22 01:56:49Z morrone $ ##***************************************************************************** # AUTHOR: # Danny Auble # # SYNOPSIS: # X_AC_CFLAGS # # DESCRIPTION: # Add extra cflags ##***************************************************************************** AC_DEFUN([X_AC_CFLAGS], [ # This is here to avoid a bug in the gcc compiler 3.4.6 # Without this flag there is a bug when pointing to other functions # and then using them. It is also advised to set this flag if there # are goto statements, as you may get better performance. if test "$GCC" = yes; then CFLAGS="$CFLAGS -fno-gcse" fi ])

slurm-slurm-15-08-7-1/auxdir/x_ac_cray.m4

##***************************************************************************** # AUTHOR: # Morris Jette # # SYNOPSIS: # X_AC_CRAY # # DESCRIPTION: # Test for Cray XT and XE systems with 2-D/3-D interconnects.
# Tests for required libraries (Native Cray systems only): # * libjob # Tests for required libraries (ALPS Cray systems only): # * mySQL (relies on testing for mySQL presence earlier); # * libexpat, needed for XML-RPC calls to Cray's BASIL # (Batch Application Scheduler Interface Layer) interface. # Tests for required libraries (non-Cray systems with a Cray network): # * libalpscomm_sn # * libalpscomm_cn # Tests for DataWarp files #***************************************************************************** # # Copyright 2013 Cray Inc. All Rights Reserved. # AC_DEFUN([X_AC_CRAY], [ ac_have_native_cray="no" ac_have_alps_cray="no" ac_have_real_cray="no" ac_have_alps_emulation="no" ac_have_alps_cray_emulation="no" ac_have_cray_network="no" ac_really_no_cray="no" AC_ARG_WITH( [alps-emulation], AS_HELP_STRING(--with-alps-emulation,Run SLURM against an emulated ALPS system - requires option cray.conf @<:@default=no@:>@), [test "$withval" = no || ac_have_alps_emulation=yes], [ac_have_alps_emulation=no]) AC_ARG_ENABLE( [cray-emulation], AS_HELP_STRING(--enable-alps-cray-emulation,Run SLURM in an emulated Cray mode), [ case "$enableval" in yes) ac_have_alps_cray_emulation="yes" ;; no) ac_have_alps_cray_emulation="no" ;; *) AC_MSG_ERROR([bad value "$enableval" for --enable-alps-cray-emulation]) ;; esac ] ) AC_ARG_ENABLE( [native-cray], AS_HELP_STRING(--enable-native-cray,Run SLURM natively on a Cray without ALPS), [ case "$enableval" in yes) ac_have_native_cray="yes" ;; no) ac_have_native_cray="no" ;; *) AC_MSG_ERROR([bad value "$enableval" for --enable-native-cray]) ;; esac ] ) AC_ARG_ENABLE( [cray-network], AS_HELP_STRING(--enable-cray-network,Run SLURM on a non-Cray system with a Cray network), [ case "$enableval" in yes) ac_have_cray_network="yes" ;; no) ac_have_cray_network="no" ;; *) AC_MSG_ERROR([bad value "$enableval" for --enable-cray-network]) ;; esac ] ) AC_ARG_ENABLE( [really-no-cray], AS_HELP_STRING(--enable-really-no-cray,Disable cray support for eslogin
machines), [ case "$enableval" in yes) ac_really_no_cray="yes" ;; no) ac_really_no_cray="no" ;; *) AC_MSG_ERROR([bad value "$enableval" for --enable-really-no-cray]) ;; esac ] ) if test "$ac_have_alps_emulation" = "yes"; then ac_have_alps_cray="yes" AC_MSG_NOTICE([Running an ALPS Cray system against an ALPS emulation]) AC_DEFINE(HAVE_ALPS_EMULATION, 1, [Define to 1 if running against an ALPS emulation]) elif test "$ac_have_alps_cray_emulation" = "yes"; then ac_have_alps_cray="yes" AC_MSG_NOTICE([Running in Cray emulation mode]) AC_DEFINE(HAVE_ALPS_CRAY_EMULATION, 1, [Define to 1 for emulating a Cray XT/XE system using ALPS]) elif test "$ac_have_native_cray" = "yes" || test "$ac_have_cray_network" = "yes" ; then _x_ac_cray_job_dir="job/default" _x_ac_cray_alpscomm_dir="alpscomm/default" _x_ac_cray_dirs="/opt/cray" for d in $_x_ac_cray_dirs; do test -d "$d" || continue if test "$ac_have_native_cray" = "yes"; then _test_dir="$d/$_x_ac_cray_job_dir" test -d "$_test_dir" || continue test -d "$_test_dir/include" || continue test -f "$_test_dir/include/job.h" || continue test -d "$_test_dir/lib64" || continue test -f "$_test_dir/lib64/libjob.so" || continue CRAY_JOB_CPPFLAGS="$CRAY_JOB_CPPFLAGS -I$_test_dir/include" CRAY_JOB_LDFLAGS="$CRAY_JOB_LDFLAGS -L$_test_dir/lib64 -ljob" fi _test_dir="$d/$_x_ac_cray_alpscomm_dir" test -d "$_test_dir" || continue test -d "$_test_dir/include" || continue test -f "$_test_dir/include/alpscomm_cn.h" || continue test -f "$_test_dir/include/alpscomm_sn.h" || continue test -d "$_test_dir/lib64" || continue test -f "$_test_dir/lib64/libalpscomm_cn.so" || continue test -f "$_test_dir/lib64/libalpscomm_sn.so" || continue CRAY_ALPSC_CN_CPPFLAGS="$CRAY_ALPSC_CN_CPPFLAGS -I$_test_dir/include" CRAY_ALPSC_SN_CPPFLAGS="$CRAY_ALPSC_SN_CPPFLAGS -I$_test_dir/include" CRAY_ALPSC_CN_LDFLAGS="$CRAY_ALPSC_CN_LDFLAGS -L$_test_dir/lib64 -lalpscomm_cn" CRAY_ALPSC_SN_LDFLAGS="$CRAY_ALPSC_SN_LDFLAGS -L$_test_dir/lib64 -lalpscomm_sn"
CRAY_SWITCH_CPPFLAGS="$CRAY_SWITCH_CPPFLAGS $CRAY_JOB_CPPFLAGS $CRAY_ALPSC_CN_CPPFLAGS $CRAY_ALPSC_SN_CPPFLAGS" CRAY_SWITCH_LDFLAGS="$CRAY_SWITCH_LDFLAGS $CRAY_JOB_LDFLAGS $CRAY_ALPSC_CN_LDFLAGS $CRAY_ALPSC_SN_LDFLAGS" CRAY_SELECT_CPPFLAGS="$CRAY_SELECT_CPPFLAGS $CRAY_ALPSC_SN_CPPFLAGS" CRAY_SELECT_LDFLAGS="$CRAY_SELECT_LDFLAGS $CRAY_ALPSC_SN_LDFLAGS" if test "$ac_have_native_cray" = "yes"; then CRAY_TASK_CPPFLAGS="$CRAY_TASK_CPPFLAGS $CRAY_ALPSC_CN_CPPFLAGS" CRAY_TASK_LDFLAGS="$CRAY_TASK_LDFLAGS $CRAY_ALPSC_CN_LDFLAGS" fi saved_CPPFLAGS="$CPPFLAGS" saved_LIBS="$LIBS" CPPFLAGS="$CRAY_JOB_CPPFLAGS $CRAY_ALPSC_CN_CPPFLAGS $CRAY_ALPSC_SN_CPPFLAGS $saved_CPPFLAGS" LIBS="$CRAY_JOB_LDFLAGS $CRAY_ALPSC_CN_LDFLAGS $CRAY_ALPSC_SN_LDFLAGS $saved_LIBS" if test "$ac_have_native_cray" = "yes"; then AC_LINK_IFELSE( [AC_LANG_PROGRAM( [[#include #include #include ]], [[ job_getjidcnt(); alpsc_release_cookies((char **)0, 0, 0); alpsc_flush_lustre((char **)0); ]] )], [have_cray_files="yes"], [AC_MSG_ERROR(There is a problem linking to the Cray API)]) # See if we have 5.2UP01 alpscomm functions AC_SEARCH_LIBS([alpsc_pre_suspend], [alpscomm_cn], [AC_DEFINE(HAVE_NATIVE_CRAY_GA, 1, [Define to 1 if alpscomm functions new to CLE 5.2UP01 are defined])]) elif test "$ac_have_cray_network" = "yes"; then AC_LINK_IFELSE( [AC_LANG_PROGRAM( [[#include #include ]], [[ alpsc_release_cookies((char **)0, 0, 0); alpsc_flush_lustre((char **)0); ]] )], [have_cray_files="yes"], [AC_MSG_ERROR(There is a problem linking to the Cray API)]) fi LIBS="$saved_LIBS" CPPFLAGS="$saved_CPPFLAGS" break done if test -z "$have_cray_files"; then AC_MSG_ERROR([Unable to locate Cray APIs (usually in /opt/cray/alpscomm and /opt/cray/job)]) else if test "$ac_have_native_cray" = "yes"; then AC_MSG_NOTICE([Running on a Cray system in native mode without ALPS]) elif test "$ac_have_cray_network" = "yes"; then AC_MSG_NOTICE([Running on a system with a Cray network]) fi fi if test "$ac_have_native_cray" = "yes"; then 
ac_have_real_cray="yes" ac_have_native_cray="yes" AC_DEFINE(HAVE_NATIVE_CRAY, 1, [Define to 1 for running on a Cray in native mode without ALPS]) AC_DEFINE(HAVE_REAL_CRAY, 1, [Define to 1 for running on a real Cray system]) elif test "$ac_have_cray_network" = "yes"; then ac_have_cray_network="yes" AC_DEFINE(HAVE_3D, 1, [Define to 1 if 3-dimensional architecture]) AC_DEFINE(SYSTEM_DIMENSIONS, 3, [3-dimensional architecture]) AC_DEFINE(HAVE_CRAY_NETWORK, 1, [Define to 1 for systems with a Cray network]) fi else # Check for a Cray-specific file: # * older XT systems use an /etc/xtrelease file # * newer XT/XE systems use an /etc/opt/cray/release/xtrelease file # * both have an /etc/xthostname AC_MSG_CHECKING([whether this is a Cray XT or XE system running on ALPS or ALPS simulator]) if test -f /etc/xtrelease || test -d /etc/opt/cray/release; then ac_have_alps_cray="yes" ac_have_real_cray="yes" fi AC_MSG_RESULT([$ac_have_alps_cray]) fi if test "$ac_really_no_cray" = "yes"; then ac_have_alps_cray="no" ac_have_real_cray="no" fi if test "$ac_have_alps_cray" = "yes"; then # libexpat is always required for the XML-RPC interface, but it is only # needed in the select plugin, so set it up here instead of everywhere. AC_CHECK_HEADER(expat.h, [], AC_MSG_ERROR([Cray BASIL requires expat headers/rpm])) AC_CHECK_LIB(expat, XML_ParserCreate, [CRAY_SELECT_LDFLAGS="$CRAY_SELECT_LDFLAGS -lexpat"], AC_MSG_ERROR([Cray BASIL requires libexpat.so (i.e. libexpat1-dev)])) if test "$ac_have_real_cray" = "yes"; then # libjob is needed, but we don't want to put it on the LIBS line here. # If we are on a native system it is handled elsewhere, and on a hybrid # we only need this in libsrun. 
AC_CHECK_LIB([job], [job_getjid], [CRAY_JOB_LDFLAGS="$CRAY_JOB_LDFLAGS -ljob"], AC_MSG_ERROR([Need cray-job (usually in /opt/cray/job/default)])) AC_DEFINE(HAVE_REAL_CRAY, 1, [Define to 1 for running on a real Cray system]) fi if test -z "$MYSQL_CFLAGS" || test -z "$MYSQL_LIBS"; then AC_MSG_ERROR([Cray BASIL requires the cray-MySQL-devel-enterprise rpm]) fi # Used by X_AC_DEBUG to set default SALLOC_RUN_FOREGROUND value to 1 x_ac_salloc_background=no AC_DEFINE(HAVE_3D, 1, [Define to 1 if 3-dimensional architecture]) AC_DEFINE(SYSTEM_DIMENSIONS, 3, [3-dimensional architecture]) AC_DEFINE(HAVE_FRONT_END, 1, [Define to 1 if running slurmd on front-end only]) AC_DEFINE(HAVE_ALPS_CRAY, 1, [Define to 1 for Cray XT/XE systems using ALPS]) AC_DEFINE(SALLOC_KILL_CMD, 1, [Define to 1 for salloc to kill child processes at job termination]) fi AM_CONDITIONAL(HAVE_NATIVE_CRAY, test "$ac_have_native_cray" = "yes") AM_CONDITIONAL(HAVE_ALPS_CRAY, test "$ac_have_alps_cray" = "yes") AM_CONDITIONAL(HAVE_REAL_CRAY, test "$ac_have_real_cray" = "yes") AM_CONDITIONAL(HAVE_CRAY_NETWORK, test "$ac_have_cray_network" = "yes") AM_CONDITIONAL(HAVE_ALPS_EMULATION, test "$ac_have_alps_emulation" = "yes") AM_CONDITIONAL(HAVE_ALPS_CRAY_EMULATION, test "$ac_have_alps_cray_emulation" = "yes") AC_SUBST(CRAY_JOB_CPPFLAGS) AC_SUBST(CRAY_JOB_LDFLAGS) AC_SUBST(CRAY_SELECT_CPPFLAGS) AC_SUBST(CRAY_SELECT_LDFLAGS) AC_SUBST(CRAY_SWITCH_CPPFLAGS) AC_SUBST(CRAY_SWITCH_LDFLAGS) AC_SUBST(CRAY_TASK_CPPFLAGS) AC_SUBST(CRAY_TASK_LDFLAGS) _x_ac_datawarp_dirs="/opt/cray/dws/default" _x_ac_datawarp_libs="lib64 lib" AC_ARG_WITH( [datawarp], AS_HELP_STRING(--with-datawarp=PATH,Specify path to DataWarp installation), [_x_ac_datawarp_dirs="$withval $_x_ac_datawarp_dirs"]) AC_CACHE_CHECK( [for datawarp installation], [x_ac_cv_datawarp_dir], [ for d in $_x_ac_datawarp_dirs; do test -d "$d" || continue test -d "$d/include" || continue test -f "$d/include/dws_thin.h" || continue for bit in $_x_ac_datawarp_libs; do test -d 
"$d/$bit" || continue test -f "$d/$bit/libdws_thin.so" || continue AS_VAR_SET(x_ac_cv_datawarp_dir, $d) break done test -n "$x_ac_cv_datawarp_dir" && break done ]) if test -z "$x_ac_cv_datawarp_dir"; then AC_MSG_WARN([unable to locate DataWarp installation]) else DATAWARP_CPPFLAGS="-I$x_ac_cv_datawarp_dir/include" if test "$ac_with_rpath" = "yes"; then DATAWARP_LDFLAGS="-Wl,-rpath -Wl,$x_ac_cv_datawarp_dir/$bit -L$x_ac_cv_datawarp_dir/$bit -ldws_thin" else DATAWARP_LDFLAGS="-L$x_ac_cv_datawarp_dir/$bit -ldws_thin" fi AC_DEFINE(HAVE_DATAWARP, 1, [Define to 1 if DataWarp library found]) fi AC_SUBST(DATAWARP_CPPFLAGS) AC_SUBST(DATAWARP_LDFLAGS) ])

slurm-slurm-15-08-7-1/auxdir/x_ac_curl.m4

#*************************************************************************** # _ _ ____ _ # Project ___| | | | _ \| | # / __| | | | |_) | | # | (__| |_| | _ <| |___ # \___|\___/|_| \_\_____| # # Copyright (C) 2006, David Shaw # # This software is licensed as described in the file COPYING, which # you should have received as part of this distribution. The terms # are also available at http://curl.haxx.se/docs/copyright.html. # # You may opt to use, copy, modify, merge, publish, distribute and/or sell # copies of the Software, and permit persons to whom the Software is # furnished to do so, under the terms of the COPYING file. # # This software is distributed on an "AS IS" basis, WITHOUT WARRANTY OF ANY # KIND, either express or implied. # ########################################################################### # LIBCURL_CHECK_CONFIG ([DEFAULT-ACTION], [MINIMUM-VERSION], # [ACTION-IF-YES], [ACTION-IF-NO]) # ---------------------------------------------------------- # David Shaw May-09-2006 # # Checks for libcurl. DEFAULT-ACTION is the string yes or no to # specify whether to default to --with-libcurl or --without-libcurl. # If not supplied, DEFAULT-ACTION is yes.
MINIMUM-VERSION is the # minimum version of libcurl to accept. Pass the version as a regular # version number like 7.10.1. If not supplied, any version is # accepted. ACTION-IF-YES is a list of shell commands to run if # libcurl was successfully found and passed the various tests. # ACTION-IF-NO is a list of shell commands that are run otherwise. # Note that using --without-libcurl does run ACTION-IF-NO. # # This macro #defines HAVE_LIBCURL if a working libcurl setup is # found, and sets @LIBCURL@ and @LIBCURL_CPPFLAGS@ to the necessary # values. Other useful defines are LIBCURL_FEATURE_xxx where xxx are # the various features supported by libcurl, and LIBCURL_PROTOCOL_yyy # where yyy are the various protocols supported by libcurl. Both xxx # and yyy are capitalized. See the list of AH_TEMPLATEs at the top of # the macro for the complete list of possible defines. Shell # variables $libcurl_feature_xxx and $libcurl_protocol_yyy are also # defined to 'yes' for those features and protocols that were found. # Note that xxx and yyy keep the same capitalization as in the # curl-config list (e.g. it's "HTTP" and not "http"). # # Users may override the detected values by doing something like: # LIBCURL="-lcurl" LIBCURL_CPPFLAGS="-I/usr/myinclude" ./configure # # For the sake of sanity, this macro assumes that any libcurl that is # found is after version 7.7.2, the first version that included the # curl-config script. Note that it is very important for people # packaging binary versions of libcurl to include this script! # Without curl-config, we can only guess what protocols are available, # or use curl_version_info to figure it out at runtime. 
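The macro compares libcurl versions numerically by packing MAJOR.MINOR.MICRO into a single integer, 256*256*MAJOR + 256*MINOR + MICRO, which matches curl's own hex version codes (e.g. 0x070b00 for 7.11.0). A small standalone sketch of that encoding, assuming nothing beyond POSIX shell (the function name is illustrative; the macro itself does this with the awk one-liner stored in `_libcurl_version_parse`):

```shell
# Illustrative reimplementation of the version packing performed by
# LIBCURL_CHECK_CONFIG's _libcurl_version_parse awk one-liner.
encode_curl_version() {
    old_ifs=$IFS
    IFS=.
    # Unquoted expansion splits MAJOR.MINOR.MICRO on the dots
    # into the positional parameters $1, $2, $3.
    set -- $1
    IFS=$old_ifs
    echo $((256 * 256 * $1 + 256 * $2 + $3))
}

encode_curl_version 7.12.4   # 461828: first curl-config with --protocols
encode_curl_version 7.11.0   # 461568 (0x070b00): standards-compliant FTPS
encode_curl_version 7.20.0   # 463872 (0x071400): adds RTSP, IMAP, POP3, SMTP
```

These are the same thresholds the macro tests against with `test $_libcurl_version -ge ...` below.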
AC_DEFUN([LIBCURL_CHECK_CONFIG], [ AH_TEMPLATE([LIBCURL_FEATURE_SSL],[Defined if libcurl supports SSL]) AH_TEMPLATE([LIBCURL_FEATURE_KRB4],[Defined if libcurl supports KRB4]) AH_TEMPLATE([LIBCURL_FEATURE_IPV6],[Defined if libcurl supports IPv6]) AH_TEMPLATE([LIBCURL_FEATURE_LIBZ],[Defined if libcurl supports libz]) AH_TEMPLATE([LIBCURL_FEATURE_ASYNCHDNS],[Defined if libcurl supports AsynchDNS]) AH_TEMPLATE([LIBCURL_FEATURE_IDN],[Defined if libcurl supports IDN]) AH_TEMPLATE([LIBCURL_FEATURE_SSPI],[Defined if libcurl supports SSPI]) AH_TEMPLATE([LIBCURL_FEATURE_NTLM],[Defined if libcurl supports NTLM]) AH_TEMPLATE([LIBCURL_PROTOCOL_HTTP],[Defined if libcurl supports HTTP]) AH_TEMPLATE([LIBCURL_PROTOCOL_HTTPS],[Defined if libcurl supports HTTPS]) AH_TEMPLATE([LIBCURL_PROTOCOL_FTP],[Defined if libcurl supports FTP]) AH_TEMPLATE([LIBCURL_PROTOCOL_FTPS],[Defined if libcurl supports FTPS]) AH_TEMPLATE([LIBCURL_PROTOCOL_FILE],[Defined if libcurl supports FILE]) AH_TEMPLATE([LIBCURL_PROTOCOL_TELNET],[Defined if libcurl supports TELNET]) AH_TEMPLATE([LIBCURL_PROTOCOL_LDAP],[Defined if libcurl supports LDAP]) AH_TEMPLATE([LIBCURL_PROTOCOL_DICT],[Defined if libcurl supports DICT]) AH_TEMPLATE([LIBCURL_PROTOCOL_TFTP],[Defined if libcurl supports TFTP]) AH_TEMPLATE([LIBCURL_PROTOCOL_RTSP],[Defined if libcurl supports RTSP]) AH_TEMPLATE([LIBCURL_PROTOCOL_POP3],[Defined if libcurl supports POP3]) AH_TEMPLATE([LIBCURL_PROTOCOL_IMAP],[Defined if libcurl supports IMAP]) AH_TEMPLATE([LIBCURL_PROTOCOL_SMTP],[Defined if libcurl supports SMTP]) AC_ARG_WITH(libcurl, AC_HELP_STRING([--with-libcurl=PREFIX],[look for the curl library in PREFIX/lib and headers in PREFIX/include]), [_libcurl_with=$withval],[_libcurl_with=ifelse([$1],,[yes],[$1])]) if test "$_libcurl_with" != "no" ; then AC_PROG_AWK _libcurl_version_parse="eval $AWK '{split(\$NF,A,\".\"); X=256*256*A[[1]]+256*A[[2]]+A[[3]]; print X;}'" _libcurl_try_link=yes if test -d "$_libcurl_with" ; then 
LIBCURL_CPPFLAGS="-I$withval/include" _libcurl_ldflags="-L$withval/lib" AC_PATH_PROG([_libcurl_config],[curl-config],[], ["$withval/bin"]) else AC_PATH_PROG([_libcurl_config],[curl-config],[],[$PATH]) fi if test x$_libcurl_config != "x" ; then AC_CACHE_CHECK([for the version of libcurl], [libcurl_cv_lib_curl_version], [libcurl_cv_lib_curl_version=`$_libcurl_config --version | $AWK '{print $[]2}'`]) _libcurl_version=`echo $libcurl_cv_lib_curl_version | $_libcurl_version_parse` _libcurl_wanted=`echo ifelse([$2],,[0],[$2]) | $_libcurl_version_parse` if test $_libcurl_wanted -gt 0 ; then AC_CACHE_CHECK([for libcurl >= version $2], [libcurl_cv_lib_version_ok], [ if test $_libcurl_version -ge $_libcurl_wanted ; then libcurl_cv_lib_version_ok=yes else libcurl_cv_lib_version_ok=no fi ]) fi if test $_libcurl_wanted -eq 0 || test x$libcurl_cv_lib_version_ok = xyes ; then if test x"$LIBCURL_CPPFLAGS" = "x" ; then LIBCURL_CPPFLAGS=`$_libcurl_config --cflags` fi if test x"$LIBCURL" = "x" ; then LIBCURL=`$_libcurl_config --libs` # This is so silly, but Apple actually has a bug in their # curl-config script. Fixed in Tiger, but there are still # lots of Panther installs around. case "${host}" in powerpc-apple-darwin7*) LIBCURL=`echo $LIBCURL | sed -e 's|-arch i386||g'` ;; esac fi # All curl-config scripts support --feature _libcurl_features=`$_libcurl_config --feature` # Is it modern enough to have --protocols? (7.12.4) if test $_libcurl_version -ge 461828 ; then _libcurl_protocols=`$_libcurl_config --protocols` fi else _libcurl_try_link=no fi unset _libcurl_wanted fi if test $_libcurl_try_link = yes ; then # we didn't find curl-config, so let's see if the user-supplied # link line (or failing that, "-lcurl") is enough. 
LIBCURL=${LIBCURL-"$_libcurl_ldflags -lcurl"} AC_CACHE_CHECK([whether libcurl is usable], [libcurl_cv_lib_curl_usable], [ _libcurl_save_cppflags=$CPPFLAGS CPPFLAGS="$LIBCURL_CPPFLAGS $CPPFLAGS" _libcurl_save_libs=$LIBS LIBS="$LIBCURL $LIBS" AC_LINK_IFELSE([AC_LANG_PROGRAM([[#include ]],[[ /* Try and use a few common options to force a failure if we are missing symbols or can't link. */ int x; curl_easy_setopt(NULL,CURLOPT_URL,NULL); x=CURL_ERROR_SIZE; x=CURLOPT_WRITEFUNCTION; x=CURLOPT_WRITEDATA; x=CURLOPT_ERRORBUFFER; x=CURLOPT_STDERR; x=CURLOPT_VERBOSE; if (x) ; ]])],libcurl_cv_lib_curl_usable=yes,libcurl_cv_lib_curl_usable=no) CPPFLAGS=$_libcurl_save_cppflags LIBS=$_libcurl_save_libs unset _libcurl_save_cppflags unset _libcurl_save_libs ]) if test $libcurl_cv_lib_curl_usable = yes ; then # Does curl_free() exist in this version of libcurl? # If not, fake it with free() _libcurl_save_cppflags=$CPPFLAGS CPPFLAGS="$CPPFLAGS $LIBCURL_CPPFLAGS" _libcurl_save_libs=$LIBS LIBS="$LIBS $LIBCURL" AC_CHECK_FUNC(curl_free,, AC_DEFINE(curl_free,free, [Define curl_free() as free() if our version of curl lacks curl_free.])) CPPFLAGS=$_libcurl_save_cppflags LIBS=$_libcurl_save_libs unset _libcurl_save_cppflags unset _libcurl_save_libs AC_DEFINE(HAVE_LIBCURL,1, [Define to 1 if you have a functional curl library.]) AC_SUBST(LIBCURL_CPPFLAGS) AC_SUBST(LIBCURL) for _libcurl_feature in $_libcurl_features ; do AC_DEFINE_UNQUOTED(AS_TR_CPP(libcurl_feature_$_libcurl_feature),[1]) eval AS_TR_SH(libcurl_feature_$_libcurl_feature)=yes done if test "x$_libcurl_protocols" = "x" ; then # We don't have --protocols, so just assume that all # protocols are available _libcurl_protocols="HTTP FTP FILE TELNET LDAP DICT TFTP" if test x$libcurl_feature_SSL = xyes ; then _libcurl_protocols="$_libcurl_protocols HTTPS" # FTPS wasn't standards-compliant until version # 7.11.0 (0x070b00 == 461568) if test $_libcurl_version -ge 461568; then _libcurl_protocols="$_libcurl_protocols FTPS" fi fi # RTSP, IMAP, 
POP3 and SMTP were added in # 7.20.0 (0x071400 == 463872) if test $_libcurl_version -ge 463872; then _libcurl_protocols="$_libcurl_protocols RTSP IMAP POP3 SMTP" fi fi for _libcurl_protocol in $_libcurl_protocols ; do AC_DEFINE_UNQUOTED(AS_TR_CPP(libcurl_protocol_$_libcurl_protocol),[1]) eval AS_TR_SH(libcurl_protocol_$_libcurl_protocol)=yes done else unset LIBCURL unset LIBCURL_CPPFLAGS fi fi unset _libcurl_try_link unset _libcurl_version_parse unset _libcurl_config unset _libcurl_feature unset _libcurl_features unset _libcurl_protocol unset _libcurl_protocols unset _libcurl_version unset _libcurl_ldflags fi if test x$_libcurl_with = xno || test x$libcurl_cv_lib_curl_usable != xyes ; then # This is the IF-NO path ifelse([$4],,:,[$4]) else # This is the IF-YES path ifelse([$3],,:,[$3]) fi AM_CONDITIONAL(WITH_CURL, test x$_libcurl_with = xyes && test x$libcurl_cv_lib_curl_usable = xyes) unset _libcurl_with ])dnl

slurm-slurm-15-08-7-1/auxdir/x_ac_databases.m4

##***************************************************************************** ## $Id: x_ac_databases.m4 5401 2005-09-22 01:56:49Z da $ ##***************************************************************************** # AUTHOR: # Danny Auble # # SYNOPSIS: # X_AC_DATABASES # # DESCRIPTION: # Test for different database APIs. If found, define the appropriate ENVs. ##***************************************************************************** AC_DEFUN([X_AC_DATABASES], [ #Check for MySQL ac_have_mysql="no" _x_ac_mysql_bin="no" ### Check for mysql_config program AC_ARG_WITH( [mysql_config], AS_HELP_STRING(--with-mysql_config=PATH, Specify path to mysql_config binary), [_x_ac_mysql_bin="$withval"]) if test x$_x_ac_mysql_bin = xno; then AC_PATH_PROG(HAVEMYSQLCONFIG, mysql_config, no) else AC_PATH_PROG(HAVEMYSQLCONFIG, mysql_config, no, $_x_ac_mysql_bin) fi if test x$HAVEMYSQLCONFIG = xno; then AC_MSG_WARN([*** mysql_config not found.
Evidently no MySQL development libs installed on system.]) else # check for mysql-5.0.0+ mysql_config_major_version=`$HAVEMYSQLCONFIG --version | \ sed 's/\([[0-9]]*\).\([[0-9]]*\).\([[a-zA-Z0-9]]*\)/\1/'` mysql_config_minor_version=`$HAVEMYSQLCONFIG --version | \ sed 's/\([[0-9]]*\).\([[0-9]]*\).\([[a-zA-Z0-9]]*\)/\2/'` mysql_config_micro_version=`$HAVEMYSQLCONFIG --version | \ sed 's/\([[0-9]]*\).\([[0-9]]*\).\([[a-zA-Z0-9]]*\)/\3/'` if test $mysql_config_major_version -lt 5; then AC_MSG_WARN([*** mysql-$mysql_config_major_version.$mysql_config_minor_version.$mysql_config_micro_version available, we need >= mysql-5.0.0 installed for the mysql interface.]) ac_have_mysql="no" else # mysql_config puts -I on the front of the dir. We don't # want that so we remove it. MYSQL_CFLAGS=`$HAVEMYSQLCONFIG --include` MYSQL_LIBS=`$HAVEMYSQLCONFIG --libs_r` save_CFLAGS="$CFLAGS" save_LIBS="$LIBS" CFLAGS="$MYSQL_CFLAGS $save_CFLAGS" LIBS="$MYSQL_LIBS $save_LIBS" AC_TRY_LINK([#include <mysql.h>],[ MYSQL mysql; (void) mysql_init(&mysql); (void) mysql_close(&mysql); ], [ac_have_mysql="yes"], [ac_have_mysql="no"]) CFLAGS="$save_CFLAGS" LIBS="$save_LIBS" if test "$ac_have_mysql" = yes; then AC_MSG_RESULT([MySQL test program built properly.]) AC_SUBST(MYSQL_LIBS) AC_SUBST(MYSQL_CFLAGS) AC_DEFINE(HAVE_MYSQL, 1, [Define to 1 if using MySQL libraries]) else MYSQL_CFLAGS=`$HAVEMYSQLCONFIG --include` MYSQL_LIBS=`$HAVEMYSQLCONFIG --libs` save_CFLAGS="$CFLAGS" save_LIBS="$LIBS" CFLAGS="$MYSQL_CFLAGS $save_CFLAGS" LIBS="$MYSQL_LIBS $save_LIBS" AC_TRY_LINK([#include <mysql.h>],[ MYSQL mysql; (void) mysql_init(&mysql); (void) mysql_close(&mysql); ], [ac_have_mysql="yes"], [ac_have_mysql="no"]) CFLAGS="$save_CFLAGS" LIBS="$save_LIBS" if test "$ac_have_mysql" = yes; then AC_MSG_RESULT([MySQL (non-threaded) test program built properly.]) AC_SUBST(MYSQL_LIBS) AC_SUBST(MYSQL_CFLAGS) AC_DEFINE(MYSQL_NOT_THREAD_SAFE, 1, [Define to 1 if with non thread-safe code]) AC_DEFINE(HAVE_MYSQL, 1, [Define to 1 if using MySQL
libraries]) else MYSQL_CFLAGS="" MYSQL_LIBS="" AC_MSG_WARN([*** MySQL test program execution failed.]) fi fi fi fi AM_CONDITIONAL(WITH_MYSQL, test x"$ac_have_mysql" = x"yes") ])

slurm-slurm-15-08-7-1/auxdir/x_ac_debug.m4

##***************************************************************************** # $Id$ ##***************************************************************************** # AUTHOR: # Chris Dunlap # # SYNOPSIS: # X_AC_DEBUG # # DESCRIPTION: # Add support for the "--enable-debug", "--enable-memory-leak-debug", # "--disable-partial-attach", "--enable-front-end", "--enable-developer" and # "--enable-simulator" configure script options. # # If debugging is enabled, CFLAGS will be prepended with the debug flags. # The NDEBUG macro (used by assert) will also be set accordingly. # # WARNINGS: # This macro must be placed after AC_PROG_CC or equivalent. ##***************************************************************************** AC_DEFUN([X_AC_DEBUG], [ AC_MSG_CHECKING([whether or not developer options are enabled]) AC_ARG_ENABLE( [developer], AS_HELP_STRING(--enable-developer,enable developer options (asserts, -Werror - also sets --enable-debug as well)), [ case "$enableval" in yes) x_ac_developer=yes ;; no) x_ac_developer=no ;; *) AC_MSG_RESULT([doh!]) AC_MSG_ERROR([bad value "$enableval" for --enable-developer]) ;; esac ] ) if test "$x_ac_developer" = yes; then test "$GCC" = yes && CFLAGS="$CFLAGS -Werror" test "$GXX" = yes && CXXFLAGS="$CXXFLAGS -Werror" # automatically turn on --enable-debug if being a developer x_ac_debug=yes else AC_DEFINE([NDEBUG], [1], [Define to 1 if you are building a production release.]
) fi AC_MSG_RESULT([${x_ac_developer=no}]) AC_MSG_CHECKING([whether debugging is enabled]) AC_ARG_ENABLE( [debug], AS_HELP_STRING(--disable-debug,disable debugging symbols and compile with optimizations), [ case "$enableval" in yes) x_ac_debug=yes ;; no) x_ac_debug=no ;; *) AC_MSG_RESULT([doh!]) AC_MSG_ERROR([bad value "$enableval" for --enable-debug]) ;; esac ], [x_ac_debug=yes] ) if test "$x_ac_debug" = yes; then # you will most likely get a -O2 in you compile line, but the last option # is the only one that is looked at. test "$GCC" = yes && CFLAGS="$CFLAGS -Wall -g -O0 -fno-strict-aliasing" test "$GXX" = yes && CXXFLAGS="$CXXFLAGS -Wall -g -O0 -fno-strict-aliasing" fi AC_MSG_RESULT([${x_ac_debug=no}]) AC_MSG_CHECKING([whether memory leak debugging is enabled]) AC_ARG_ENABLE( [memory-leak-debug], AS_HELP_STRING(--enable-memory-leak-debug,enable memory leak debugging code for development), [ case "$enableval" in yes) x_ac_memory_debug=yes ;; no) x_ac_memory_debug=no ;; *) AC_MSG_RESULT([doh!]) AC_MSG_ERROR([bad value "$enableval" for --enable-memory-leak-debug]) ;; esac ] ) if test "$x_ac_memory_debug" = yes; then AC_DEFINE(MEMORY_LEAK_DEBUG, 1, [Define to 1 for memory leak debugging.]) fi AC_MSG_RESULT([${x_ac_memory_debug=no}]) AC_MSG_CHECKING([whether to enable slurmd operation on a front-end]) AC_ARG_ENABLE( [front-end], AS_HELP_STRING(--enable-front-end, enable slurmd operation on a front-end), [ case "$enableval" in yes) x_ac_front_end=yes ;; no) x_ac_front_end=no ;; *) AC_MSG_RESULT([doh!]) AC_MSG_ERROR([bad value "$enableval" for --enable-front-end]) ;; esac ] ) if test "$x_ac_front_end" = yes; then AC_DEFINE(HAVE_FRONT_END, 1, [Define to 1 if running slurmd on front-end only]) fi AC_MSG_RESULT([${x_ac_front_end=no}]) AC_MSG_CHECKING([whether debugger partial attach enabled]) AC_ARG_ENABLE( [partial-attach], AS_HELP_STRING(--disable-partial-attach,disable debugger partial task attach support), [ case "$enableval" in yes) x_ac_partial_attach=yes ;; no) 
x_ac_partial_attach=no ;; *) AC_MSG_RESULT([doh!]) AC_MSG_ERROR([bad value "$enableval" for --enable-partial-attach]) ;; esac ] ) if test "$x_ac_partial_attach" != "no"; then AC_DEFINE(DEBUGGER_PARTIAL_ATTACH, 1, [Define to 1 for debugger partial task attach support.]) fi AC_MSG_RESULT([${x_ac_partial_attach=no}]) AC_MSG_CHECKING([whether salloc should kill child processes at job termination]) AC_ARG_ENABLE( [salloc-kill-cmd], AS_HELP_STRING(--enable-salloc-kill-cmd,salloc should kill child processes at job termination), [ case "$enableval" in yes) x_ac_salloc_kill_cmd=yes ;; no) x_ac_salloc_kill_cmd=no ;; *) AC_MSG_RESULT([doh!]) AC_MSG_ERROR([bad value "$enableval" for --enable-salloc-kill-cmd]) ;; esac ] ) if test "$x_ac_salloc_kill_cmd" = yes; then AC_DEFINE(SALLOC_KILL_CMD, 1, [Define to 1 for salloc to kill child processes at job termination]) AC_MSG_RESULT([yes]) else AC_MSG_RESULT([no]) fi # NOTE: Default value of SALLOC_RUN_FOREGROUND is system dependent # x_ac_salloc_background is set to "no" for Cray systems in x_ac_cray.m4 AC_MSG_CHECKING([whether to disable salloc execution in the background]) AC_ARG_ENABLE( [salloc-background], AS_HELP_STRING(--disable-salloc-background,disable salloc execution in the background), [ case "$enableval" in yes) x_ac_salloc_background=yes ;; no) x_ac_salloc_background=no ;; *) AC_MSG_RESULT([doh!]) AC_MSG_ERROR([bad value "$enableval" for --disable-salloc-background]) ;; esac ] ) if test "$x_ac_salloc_background" = no; then AC_DEFINE(SALLOC_RUN_FOREGROUND, 1, [Define to 1 to require salloc execution in the foreground.]) AC_MSG_RESULT([yes]) else AC_MSG_RESULT([no]) fi AC_MSG_CHECKING([whether to enable slurm simulator]) AC_ARG_ENABLE( [simulator], AS_HELP_STRING(--enable-simulator, enable slurm simulator), [ case "$enableval" in yes) x_ac_simulator=yes ;; no) x_ac_simulator=no ;; *) AC_MSG_RESULT([doh!]) AC_MSG_ERROR([bad value "$enableval" for --enable-simulator]) ;; esac ] ) if test "$x_ac_simulator" = yes; then
AC_DEFINE(SLURM_SIMULATOR, 1, [Define to 1 if running slurm simulator]) fi AC_MSG_RESULT([${x_ac_simulator=no}]) ] ) slurm-slurm-15-08-7-1/auxdir/x_ac_dlfcn.m4000066400000000000000000000005051265000126300201430ustar00rootroot00000000000000# $NetBSD$ AC_DEFUN([X_AC_DLFCN], [ AC_MSG_CHECKING([library containing dlopen]) AC_CHECK_LIB([], [dlopen], [ac_have_dlopen=yes; DL_LIBS=""], [AC_CHECK_LIB([dl], [dlopen], [ac_have_dlopen=yes; DL_LIBS="-ldl"], [AC_CHECK_LIB([svdl], [dlopen], [ac_have_dlopen=yes; DL_LIBS="-lsvdl"])])]) AC_SUBST(DL_LIBS) ]) slurm-slurm-15-08-7-1/auxdir/x_ac_elan.m4000066400000000000000000000040531265000126300177760ustar00rootroot00000000000000##***************************************************************************** ## $Id$ ##***************************************************************************** # AUTHOR: # Mark A. Grondona # # SYNOPSIS: # AC_ELAN # # DESCRIPTION: # Checks for whether Elan MPI may be supported either via libelan3 # or libelanctrl. ELAN_LIBS is set to the libraries needed for # Elan modules. # # WARNINGS: # This macro must be placed after AC_PROG_CC or equivalent. 
##***************************************************************************** AC_DEFUN([X_AC_ELAN], [ AC_CHECK_LIB([rmscall], [rms_prgcreate], [ac_elan_have_rmscall=yes; ELAN_LIBS="-lrmscall"]) if test "$ac_elan_have_rmscall" != "yes" ; then AC_MSG_NOTICE([Cannot support QsNet without librmscall]) fi AC_CHECK_LIB([elan3], [elan3_create], [ac_elan_have_elan3=yes], [ac_elan_noelan3=1]) AC_CHECK_LIB([elanctrl], [elanctrl_open], [ac_elan_have_elanctrl=yes], [ac_elan_noelanctrl=1]) if test "$ac_elan_have_elan3" = "yes"; then AC_DEFINE(HAVE_LIBELAN3, 1, [define if you have libelan3.]) ELAN_LIBS="$ELAN_LIBS -lelan3" test "$ac_elan_have_rmscall" = "yes" && ac_have_elan="yes" elif test "$ac_elan_have_elanctrl" = "yes"; then AC_DEFINE(HAVE_LIBELANCTRL, 1, [define if you have libelanctrl.]) ELAN_LIBS="$ELAN_LIBS -lelanctrl" test "$ac_elan_have_rmscall" = "yes" && ac_have_elan="yes" else AC_MSG_NOTICE([Cannot support QsNet without libelan3 or libelanctrl!]) fi if test "$ac_have_elan" = yes; then AC_CHECK_LIB([elanhosts], [elanhost_config_create], [ac_elan_have_elanhosts=yes], []) if test "$ac_elan_have_elanhosts" = "yes"; then AC_DEFINE(HAVE_LIBELANHOSTS, 1, [define if you have libelanhosts.]) ELAN_LIBS="$ELAN_LIBS -lelanhosts" else ac_have_elan="no" AC_MSG_NOTICE([Cannot build QsNet modules without libelanhosts]) fi fi AC_SUBST(ELAN_LIBS) ]) slurm-slurm-15-08-7-1/auxdir/x_ac_env.m4000066400000000000000000000022421265000126300176450ustar00rootroot00000000000000##***************************************************************************** # AUTHOR: # Morris Jette # # SYNOPSIS: # X_AC_ENV_LOGIC # # DESCRIPTION: # Test for how user's environment should be loaded for sbatch's # --get-user-env option (as used by Moab) ##***************************************************************************** AC_DEFUN([X_AC_ENV_LOGIC], [ AC_MSG_CHECKING([whether sbatch --get-user-env option should load .login]) AC_ARG_ENABLE( [load-env-no-login], AS_HELP_STRING(--enable-load-env-no-login, 
[enable --get-user-env option to load user environment without .login]), [ case "$enableval" in yes) x_ac_load_env_no_login=yes ;; no) x_ac_load_env_no_login=no ;; *) AC_MSG_RESULT([doh!]) AC_MSG_ERROR([bad value "$enableval" for --enable-load-env-no-login]) ;; esac ], [x_ac_load_env_no_login=no] ) if test "$x_ac_load_env_no_login" = yes; then AC_MSG_RESULT([yes]) AC_DEFINE(LOAD_ENV_NO_LOGIN, 1, [Define to 1 for --get-user-env to load user environment without .login]) else AC_MSG_RESULT([no]) fi ]) slurm-slurm-15-08-7-1/auxdir/x_ac_federation.m4000066400000000000000000000026511265000126300212010ustar00rootroot00000000000000##***************************************************************************** ## $Id$ ##***************************************************************************** # AUTHOR: # Jason King # # SYNOPSIS: # AC_FEDERATION # # DESCRIPTION: # Checks for availability of the libraries necessary to support # communication via User Space over the Federation switch. # # WARNINGS: # This macro must be placed after AC_PROG_CC or equivalent. ##***************************************************************************** AC_DEFUN([X_AC_FEDERATION], [ AC_MSG_CHECKING([whether to enable AIX Federation switch support]) ntbl_default_dirs="/usr/lib" for ntbl_dir in $ntbl_default_dirs; do # skip dirs that don't exist if test ! -z "$ntbl_dir" -a ! 
-d "$ntbl_dir" ; then continue; fi if test "$OBJECT_MODE" = "64"; then libntbl="ntbl_64" else libntbl="ntbl" fi # search for required NTBL API libraries if test -f "$ntbl_dir/lib${libntbl}.so"; then ac_have_federation="yes" FEDERATION_LDFLAGS="-l$libntbl" break; fi done if test "x$ac_have_federation" != "xyes" ; then AC_MSG_RESULT([no]) AC_MSG_NOTICE([Cannot support Federation without libntbl]) else AC_MSG_RESULT([yes]) AC_DEFINE(HAVE_LIBNTBL, 1, [define if you have libntbl.]) fi AC_SUBST(FEDERATION_LDFLAGS) ]) slurm-slurm-15-08-7-1/auxdir/x_ac_freeipmi.m4000066400000000000000000000042101265000126300206520ustar00rootroot00000000000000##***************************************************************************** # AUTHOR: # Thomas Cadeau # # SYNOPSIS: # X_AC_FREEIPMI # # DESCRIPTION: # Determine if the FREEIPMI libraries exist ##***************************************************************************** AC_DEFUN([X_AC_FREEIPMI], [ _x_ac_freeipmi_dirs="/usr /usr/local" _x_ac_freeipmi_libs="lib64 lib" AC_ARG_WITH( [freeipmi], AS_HELP_STRING(--with-freeipmi=PATH,Specify path to freeipmi installation), [_x_ac_freeipmi_dirs="$withval $_x_ac_freeipmi_dirs"]) AC_CACHE_CHECK( [for freeipmi installation], [x_ac_cv_freeipmi_dir], [ for d in $_x_ac_freeipmi_dirs; do test -d "$d" || continue test -d "$d/include" || continue test -f "$d/include/ipmi_monitoring.h" || continue for bit in $_x_ac_freeipmi_libs; do test -d "$d/$bit" || continue _x_ac_freeipmi_cppflags_save="$CPPFLAGS" CPPFLAGS="-I$d/include $CPPFLAGS" _x_ac_freeipmi_libs_save="$LIBS" LIBS="-L$d/$bit -lipmimonitoring $LIBS" AC_TRY_LINK([#include <stdint.h> #include <ipmi_monitoring.h>], [int err;] [unsigned int flag = 0;] [return ipmi_monitoring_init (flag, &err);], AS_VAR_SET(x_ac_cv_freeipmi_dir, $d), []) CPPFLAGS="$_x_ac_freeipmi_cppflags_save" LIBS="$_x_ac_freeipmi_libs_save" test -n "$x_ac_cv_freeipmi_dir" && break done test -n "$x_ac_cv_freeipmi_dir" && break done ]) if test -z "$x_ac_cv_freeipmi_dir"; then AC_MSG_WARN([unable to locate
freeipmi installation]) else FREEIPMI_CPPFLAGS="-I$x_ac_cv_freeipmi_dir/include" if test "$ac_with_rpath" = "yes"; then FREEIPMI_LDFLAGS="-Wl,-rpath -Wl,$x_ac_cv_freeipmi_dir/$bit -L$x_ac_cv_freeipmi_dir/$bit" else FREEIPMI_LDFLAGS="-L$x_ac_cv_freeipmi_dir/$bit" fi FREEIPMI_LIBS="-lipmimonitoring" AC_DEFINE(HAVE_FREEIPMI, 1, [Define to 1 if freeipmi library found]) fi AC_SUBST(FREEIPMI_LIBS) AC_SUBST(FREEIPMI_CPPFLAGS) AC_SUBST(FREEIPMI_LDFLAGS) AM_CONDITIONAL(BUILD_IPMI, test -n "$x_ac_cv_freeipmi_dir") ]) slurm-slurm-15-08-7-1/auxdir/x_ac_gpl_licensed.m4000066400000000000000000000011031265000126300215000ustar00rootroot00000000000000##***************************************************************************** ## $Id$ ##***************************************************************************** # AUTHOR: # Chris Dunlap # # SYNOPSIS: # AC_GPL_LICENSED # # DESCRIPTION: # Acknowledge being licensed under terms of the GNU General Public License. ##***************************************************************************** AC_DEFUN([X_AC_GPL_LICENSED], [ AC_DEFINE([GPL_LICENSED], [1], [Define to 1 if licensed under terms of the GNU General Public License.] ) ]) slurm-slurm-15-08-7-1/auxdir/x_ac_hwloc.m4000066400000000000000000000042761265000126300202000ustar00rootroot00000000000000##***************************************************************************** # AUTHOR: # Morris Jette # # SYNOPSIS: # X_AC_HWLOC # # DESCRIPTION: # Determine if the HWLOC libraries exist and if they support PCI data.
##***************************************************************************** AC_DEFUN([X_AC_HWLOC], [ _x_ac_hwloc_dirs="/usr /usr/local" _x_ac_hwloc_libs="lib64 lib" x_ac_cv_hwloc_pci="no" AC_ARG_WITH( [hwloc], AS_HELP_STRING(--with-hwloc=PATH,Specify path to hwloc installation), [_x_ac_hwloc_dirs="$withval $_x_ac_hwloc_dirs"]) AC_CACHE_CHECK( [for hwloc installation], [x_ac_cv_hwloc_dir], [ for d in $_x_ac_hwloc_dirs; do test -d "$d" || continue test -d "$d/include" || continue test -f "$d/include/hwloc.h" || continue for bit in $_x_ac_hwloc_libs; do test -d "$d/$bit" || continue _x_ac_hwloc_cppflags_save="$CPPFLAGS" CPPFLAGS="-I$d/include $CPPFLAGS" _x_ac_hwloc_libs_save="$LIBS" LIBS="-L$d/$bit -lhwloc $LIBS" AC_LINK_IFELSE( [AC_LANG_CALL([], hwloc_topology_init)], AS_VAR_SET(x_ac_cv_hwloc_dir, $d)) AC_TRY_LINK([#include <hwloc.h>], [int i = HWLOC_OBJ_PCI_DEVICE;], [x_ac_cv_hwloc_pci="yes"], []) CPPFLAGS="$_x_ac_hwloc_cppflags_save" LIBS="$_x_ac_hwloc_libs_save" test -n "$x_ac_cv_hwloc_dir" && break done test -n "$x_ac_cv_hwloc_dir" && break done ]) if test -z "$x_ac_cv_hwloc_dir"; then AC_MSG_WARN([unable to locate hwloc installation]) else HWLOC_CPPFLAGS="-I$x_ac_cv_hwloc_dir/include" if test "$ac_with_rpath" = "yes"; then HWLOC_LDFLAGS="-Wl,-rpath -Wl,$x_ac_cv_hwloc_dir/$bit -L$x_ac_cv_hwloc_dir/$bit" else HWLOC_LDFLAGS="-L$x_ac_cv_hwloc_dir/$bit" fi HWLOC_LIBS="-lhwloc" AC_DEFINE(HAVE_HWLOC, 1, [Define to 1 if hwloc library found]) if test "$x_ac_cv_hwloc_pci" = "yes"; then AC_DEFINE(HAVE_HWLOC_PCI, 1, [Define to 1 if hwloc library supports PCI devices]) fi fi AC_SUBST(HWLOC_LIBS) AC_SUBST(HWLOC_CPPFLAGS) AC_SUBST(HWLOC_LDFLAGS) ]) slurm-slurm-15-08-7-1/auxdir/x_ac_iso.m4000066400000000000000000000016241265000126300176520ustar00rootroot00000000000000##***************************************************************************** # AUTHOR: # Morris Jette # # SYNOPSIS: # X_AC_ISO # # DESCRIPTION: # Test for ISO compliant time support.
##***************************************************************************** AC_DEFUN([X_AC_ISO], [ AC_MSG_CHECKING([whether to enable ISO 8601 time format support]) AC_ARG_ENABLE( [iso8601], AS_HELP_STRING(--disable-iso8601,disable ISO 8601 time format support), [ case "$enableval" in yes) x_ac_iso8601=yes ;; no) x_ac_iso8601=no ;; *) AC_MSG_RESULT([doh!]) AC_MSG_ERROR([bad value "$enableval" for --enable-iso8601]) ;; esac ], [x_ac_iso8601=yes] ) if test "$x_ac_iso8601" = yes; then AC_MSG_RESULT([yes]) AC_DEFINE(USE_ISO_8601,,[define if using ISO 8601 time format]) else AC_MSG_RESULT([no]) fi ]) slurm-slurm-15-08-7-1/auxdir/x_ac_json.m4000066400000000000000000000040301265000126300200230ustar00rootroot00000000000000##***************************************************************************** # AUTHOR: # Derived from x_ac_munge. # # SYNOPSIS: # X_AC_JSON() # # DESCRIPTION: # Check for JSON parser libraries. # Right now, just check for json-c header and library. # # WARNINGS: # This macro must be placed after AC_PROG_CC and before AC_PROG_LIBTOOL. 
##***************************************************************************** AC_DEFUN([X_AC_JSON], [ x_ac_json_dirs="/usr /usr/local" x_ac_json_libs="lib64 lib" AC_ARG_WITH( [json], AS_HELP_STRING(--with-json=PATH,Specify path to json-c installation), [x_ac_json_dirs="$withval $x_ac_json_dirs"]) AC_CACHE_CHECK( [for json installation], [x_ac_cv_json_dir], [ for d in $x_ac_json_dirs; do test -d "$d" || continue test -d "$d/include" || continue test -f "$d/include/json-c/json_object.h" || test -f "$d/include/json/json_object.h" || continue for bit in $x_ac_json_libs; do test -d "$d/$bit" || continue _x_ac_json_libs_save="$LIBS" LIBS="-L$d/$bit -ljson-c $LIBS" AC_LINK_IFELSE( [AC_LANG_CALL([], json_tokener_parse)], AS_VAR_SET(x_ac_cv_json_dir, $d)) LIBS="$_x_ac_json_libs_save" test -n "$x_ac_cv_json_dir" && break done test -n "$x_ac_cv_json_dir" && break done ]) if test -z "$x_ac_cv_json_dir"; then AC_MSG_WARN([unable to locate json parser library]) else if test -f "$d/include/json-c/json_object.h" ; then AC_DEFINE([HAVE_JSON_C_INC], [1], [Define if headers in include/json-c.]) fi if test -f "$d/include/json/json_object.h" ; then AC_DEFINE([HAVE_JSON_INC], [1], [Define if headers in include/json.]) fi AC_DEFINE([HAVE_JSON], [1], [Define if you are compiling with json.]) JSON_CPPFLAGS="-I$x_ac_cv_json_dir/include" JSON_LDFLAGS="-L$x_ac_cv_json_dir/$bit -ljson-c" fi AC_SUBST(JSON_CPPFLAGS) AC_SUBST(JSON_LDFLAGS) AM_CONDITIONAL(WITH_JSON_PARSER, test -n "$x_ac_cv_json_dir") ]) slurm-slurm-15-08-7-1/auxdir/x_ac_lua.m4000066400000000000000000000034251265000126300176420ustar00rootroot00000000000000##***************************************************************************** ## $Id$ ##***************************************************************************** # AUTHOR: # Mark Grondona # # SYNOPSIS: # AC_LUA # # DESCRIPTION: # Check for presence of lua libs and headers ##***************************************************************************** AC_DEFUN([X_AC_LUA], 
[ x_ac_lua_pkg_name="lua" #check for 5.2 if that fails check for 5.1 PKG_CHECK_EXISTS([lua5.2], [x_ac_lua_pkg_name=lua5.2], [PKG_CHECK_EXISTS([lua5.1], [x_ac_lua_pkg_name=lua5.1], [])]) PKG_CHECK_MODULES([lua], ${x_ac_lua_pkg_name}, [x_ac_have_lua="yes"], [x_ac_have_lua="no"]) if test "x$x_ac_have_lua" = "xyes"; then saved_CFLAGS="$CFLAGS" saved_LIBS="$LIBS" # -DLUA_COMPAT_ALL is needed to support lua 5.2 lua_CFLAGS="$lua_CFLAGS -DLUA_COMPAT_ALL" CFLAGS="$CFLAGS $lua_CFLAGS" LIBS="$LIBS $lua_LIBS" AC_MSG_CHECKING([whether we can link to liblua]) AC_TRY_LINK( [#include <lua.h> #include <lauxlib.h> #include <lualib.h>], [lua_State *L = luaL_newstate (); luaL_openlibs(L); ], [], [x_ac_have_lua="no"]) AC_MSG_RESULT([$x_ac_have_lua $x_ac_lua_pkg_name]) if test "x$x_ac_have_lua" = "xno"; then AC_MSG_WARN([unable to link against lua libraries]) fi CFLAGS="$saved_CFLAGS" LIBS="$saved_LIBS" else AC_MSG_WARN([unable to locate lua package]) fi AM_CONDITIONAL(HAVE_LUA, test "x$x_ac_have_lua" = "xyes") if test "x$x_ac_have_lua" = "xyes" ; then if test "x$x_ac_lua_pkg_name" = "xlua5.2" ; then AC_DEFINE(HAVE_LUA_5_2, 1, [Compile with Lua 5.2]) elif test "x$x_ac_lua_pkg_name" = "xlua5.1"; then AC_DEFINE(HAVE_LUA_5_1, 1, [Compile with Lua 5.1]) fi fi ]) slurm-slurm-15-08-7-1/auxdir/x_ac_man2html.m4000066400000000000000000000012611265000126300205770ustar00rootroot00000000000000##***************************************************************************** # AUTHOR: # Don Lipari # # SYNOPSIS: # X_AC_MAN2HTML # # DESCRIPTION: # Test for the presence of the man2html command.
# ##***************************************************************************** AC_DEFUN([X_AC_MAN2HTML], [ AC_MSG_CHECKING([whether man2html is available]) AC_CHECK_PROG(ac_have_man2html, man2html, [yes], [no], [$bindir:/usr/bin:/usr/local/bin]) AM_CONDITIONAL(HAVE_MAN2HTML, test "x$ac_have_man2html" = "xyes") if test "x$ac_have_man2html" != "xyes" ; then AC_MSG_WARN([unable to build man page html files without man2html]) fi ]) slurm-slurm-15-08-7-1/auxdir/x_ac_munge.m4000066400000000000000000000041511265000126300201710ustar00rootroot00000000000000##***************************************************************************** ## $Id$ ##***************************************************************************** # AUTHOR: # Chris Dunlap (originally for OpenSSL) # Modified for munge by Christopher Morrone # # SYNOPSIS: # X_AC_MUNGE() # # DESCRIPTION: # Check the usual suspects for a munge installation, # updating CPPFLAGS and LDFLAGS as necessary. # # WARNINGS: # This macro must be placed after AC_PROG_CC and before AC_PROG_LIBTOOL.
##***************************************************************************** AC_DEFUN([X_AC_MUNGE], [ _x_ac_munge_dirs="/usr /usr/local /opt/freeware /opt/munge" _x_ac_munge_libs="lib64 lib" AC_ARG_WITH( [munge], AS_HELP_STRING(--with-munge=PATH,Specify path to munge installation), [_x_ac_munge_dirs="$withval $_x_ac_munge_dirs"]) AC_CACHE_CHECK( [for munge installation], [x_ac_cv_munge_dir], [ for d in $_x_ac_munge_dirs; do test -d "$d" || continue test -d "$d/include" || continue test -f "$d/include/munge.h" || continue for bit in $_x_ac_munge_libs; do test -d "$d/$bit" || continue _x_ac_munge_libs_save="$LIBS" LIBS="-L$d/$bit -lmunge $LIBS" AC_LINK_IFELSE( [AC_LANG_CALL([], munge_encode)], AS_VAR_SET(x_ac_cv_munge_dir, $d)) LIBS="$_x_ac_munge_libs_save" test -n "$x_ac_cv_munge_dir" && break done test -n "$x_ac_cv_munge_dir" && break done ]) if test -z "$x_ac_cv_munge_dir"; then AC_MSG_WARN([unable to locate munge installation]) else MUNGE_LIBS="-lmunge" MUNGE_CPPFLAGS="-I$x_ac_cv_munge_dir/include" MUNGE_DIR="$x_ac_cv_munge_dir" if test "$ac_with_rpath" = "yes"; then MUNGE_LDFLAGS="-Wl,-rpath -Wl,$x_ac_cv_munge_dir/$bit -L$x_ac_cv_munge_dir/$bit" else MUNGE_LDFLAGS="-L$x_ac_cv_munge_dir/$bit" fi fi AC_SUBST(MUNGE_LIBS) AC_SUBST(MUNGE_CPPFLAGS) AC_SUBST(MUNGE_LDFLAGS) AC_SUBST(MUNGE_DIR) AM_CONDITIONAL(WITH_MUNGE, test -n "$x_ac_cv_munge_dir") ]) slurm-slurm-15-08-7-1/auxdir/x_ac_ncurses.m4000066400000000000000000000027741265000126300205510ustar00rootroot00000000000000##***************************************************************************** ## $Id$ ##***************************************************************************** # AUTHOR: # Morris Jette # # SYNOPSIS: # X_AC_NCURSES # # DESCRIPTION: # Test for NCURSES or CURSES. 
If found define NCURSES ##***************************************************************************** AC_DEFUN([X_AC_NCURSES], [ AC_CHECK_LIB([ncurses], [initscr], [ac_have_ncurses=yes]) AC_CHECK_LIB([curses], [initscr], [ac_have_curses=yes]) AC_CHECK_LIB([tinfo], [tgetent], [ac_have_tinfo=yes]) AC_SUBST(NCURSES) if test "$ac_have_ncurses" = "yes"; then NCURSES="-lncurses" NCURSES_HEADER="ncurses.h" ac_have_some_curses="yes" elif test "$ac_have_curses" = "yes"; then NCURSES="-lcurses" NCURSES_HEADER="curses.h" ac_have_some_curses="yes" fi if test "$ac_have_tinfo" = "yes"; then NCURSES="$NCURSES -ltinfo" fi if test "$ac_have_some_curses" = "yes"; then save_LIBS="$LIBS" LIBS="$NCURSES $save_LIBS" AC_TRY_LINK([#include <${NCURSES_HEADER}>], [(void)initscr(); (void)endwin();], [], [ac_have_some_curses="no"]) LIBS="$save_LIBS" if test "$ac_have_some_curses" = "yes"; then AC_MSG_RESULT([NCURSES test program built properly.]) else AC_MSG_WARN([*** NCURSES test program execution failed.]) fi else AC_MSG_WARN([cannot build smap without curses or ncurses library]) ac_have_some_curses="no" fi ]) slurm-slurm-15-08-7-1/auxdir/x_ac_netloc.m4000066400000000000000000000052071265000126300203450ustar00rootroot00000000000000##***************************************************************************** # AUTHOR: # Daniel Pou # # SYNOPSIS: # X_AC_NETLOC # # DESCRIPTION: # Determine if the NETLOC libraries exist ##***************************************************************************** AC_DEFUN([X_AC_NETLOC], [ _x_ac_netloc_dirs="/usr /usr/local" _x_ac_netloc_libs="lib64 lib" x_ac_cv_netloc_nosub="no" AC_ARG_WITH( [netloc], AS_HELP_STRING(--with-netloc=PATH,Specify path to netloc installation), [_x_ac_netloc_dirs="$withval $_x_ac_netloc_dirs"]) AC_CACHE_CHECK( [for netloc installation], [x_ac_cv_netloc_dir], [ for d in $_x_ac_netloc_dirs; do test -d "$d" || continue test -d "$d/include" || continue test -f "$d/include/netloc.h" || continue for bit in $_x_ac_netloc_libs; do test
-d "$d/$bit" || continue _x_ac_netloc_cppflags_save="$CPPFLAGS" CPPFLAGS="-I$d/include $CPPFLAGS" _x_ac_netloc_libs_save="$LIBS" LIBS="-L$d/$bit -lnetloc $LIBS" AC_LINK_IFELSE( [AC_LANG_PROGRAM([#include <netloc.h> #include <netloc/map.h>], [netloc_map_t map; netloc_map_create(&map);]) ], AS_VAR_SET(x_ac_cv_netloc_dir, $d)) AC_LINK_IFELSE( [AC_LANG_PROGRAM([#include <netloc.h> #include <netloc_map.h>], [netloc_map_t map; netloc_map_create(&map)]) ], AS_VAR_SET(x_ac_cv_netloc_dir, $d) x_ac_cv_netloc_nosub="yes" ) CPPFLAGS="$_x_ac_netloc_cppflags_save" LIBS="$_x_ac_netloc_libs_save" test -n "$x_ac_cv_netloc_dir" && break done test -n "$x_ac_cv_netloc_dir" && break done ]) if test -z "$x_ac_cv_netloc_dir"; then AC_MSG_WARN([unable to locate netloc installation]) else NETLOC_CPPFLAGS="-I$x_ac_cv_netloc_dir/include" if test "$ac_with_rpath" = "yes"; then NETLOC_LDFLAGS="-Wl,-rpath -Wl,$x_ac_cv_netloc_dir/$bit -L$x_ac_cv_netloc_dir/$bit" else NETLOC_LDFLAGS="-L$x_ac_cv_netloc_dir/$bit" fi NETLOC_LIBS="-lnetloc" AC_DEFINE(HAVE_NETLOC, 1, [Define to 1 if netloc library found]) if test "$x_ac_cv_netloc_nosub" = "yes"; then AC_DEFINE(HAVE_NETLOC_NOSUB, 1, [Define to 1 if netloc includes use underscore not subdirectory]) fi fi AM_CONDITIONAL(HAVE_NETLOC, test -n "$x_ac_cv_netloc_dir") AC_SUBST(NETLOC_LIBS) AC_SUBST(NETLOC_CPPFLAGS) AC_SUBST(NETLOC_LDFLAGS) ]) slurm-slurm-15-08-7-1/auxdir/x_ac_nrt.m4000066400000000000000000000044671265000126300176710ustar00rootroot00000000000000##***************************************************************************** # AUTHOR: # Morris Jette # # SYNOPSIS: # AC_NRT # # DESCRIPTION: # Checks for availability of the libraries necessary to support # IBM NRT (Network Resource Table) switch management # # WARNINGS: # This macro must be placed after AC_PROG_CC or equivalent.
##***************************************************************************** AC_DEFUN([X_AC_NRT], [ nrt_default_dirs="/usr/include" AC_ARG_WITH([nrth], AS_HELP_STRING(--with-nrth=PATH,Parent directory of nrt.h and permapi.h), [ nrt_default_dirs="$withval $nrt_default_dirs"]) AC_MSG_CHECKING([Checking NRT and PERMAPI header files]) for nrt_dir in $nrt_default_dirs; do # skip dirs that don't exist if test ! -z "$nrt_dir" -a ! -d "$nrt_dir" ; then continue; fi # search for required NRT and PERMAPI header files if test -f "$nrt_dir/nrt.h" -a -f "$nrt_dir/permapi.h"; then ac_have_nrt_h="yes" NRT_CPPFLAGS="-I$nrt_dir" AC_DEFINE(HAVE_NRT_H, 1, [define if you have nrt.h]) AC_DEFINE(HAVE_PERMAPI_H, 1, [define if you have permapi_h]) break; fi done if test "x$ac_have_nrt_h" != "xyes" ; then AC_MSG_RESULT([no]) AC_MSG_NOTICE([Cannot support IBM NRT without nrt.h and permapi.h]) else AC_MSG_RESULT([yes]) fi AC_SUBST(NRT_CPPFLAGS) nrt_default_dirs="/usr/lib64 /usr/lib" AC_ARG_WITH([libnrt], AS_HELP_STRING(--with-libnrt=PATH,Parent directory of libnrt.so), [ nrt_default_dirs="$withval $nrt_default_dirs"]) AC_MSG_CHECKING([whether to enable IBM NRT support]) for nrt_dir in $nrt_default_dirs; do # skip dirs that don't exist if test ! -z "$nrt_dir" -a ! 
-d "$nrt_dir" ; then continue; fi # search for required NRT API libraries if test -f "$nrt_dir/libnrt.so"; then AC_DEFINE_UNQUOTED(LIBNRT_SO, "$nrt_dir/libnrt.so", [Define the libnrt.so location]) ac_have_libnrt="yes" break; fi done if test "x$ac_have_libnrt" != "xyes" ; then AC_MSG_RESULT([no]) else AC_MSG_RESULT([yes]) fi if test "x$ac_have_nrt_h" = "xyes"; then ac_have_nrt="yes" fi AM_CONDITIONAL(HAVE_NRT, test "x$ac_have_nrt" = "xyes") AC_SUBST(HAVE_NRT) ]) slurm-slurm-15-08-7-1/auxdir/x_ac_ofed.m4000066400000000000000000000043131265000126300177730ustar00rootroot00000000000000##***************************************************************************** # AUTHOR: # Yiannis Georgiou # # SYNOPSIS: # X_AC_OFED # # DESCRIPTION: # Determine if the OFED related libraries exist ##***************************************************************************** AC_DEFUN([X_AC_OFED], [ _x_ac_ofed_dirs="/usr /usr/local" _x_ac_ofed_libs="lib64 lib" AC_ARG_WITH( [ofed], AS_HELP_STRING(--with-ofed=PATH,Specify path to ofed installation), [_x_ac_ofed_dirs="$withval $_x_ac_ofed_dirs"]) AC_CACHE_CHECK( [for ofed installation], [x_ac_cv_ofed_dir], [ for d in $_x_ac_ofed_dirs; do test -d "$d" || continue test -d "$d/include/infiniband" || continue test -f "$d/include/infiniband/mad.h" || continue for bit in $_x_ac_ofed_libs; do test -d "$d/$bit" || continue _x_ac_ofed_cppflags_save="$CPPFLAGS" CPPFLAGS="-I$d/include $CPPFLAGS" _x_ac_ofed_libs_save="$LIBS" LIBS="-L$d/$bit -libmad -libumad $LIBS" AC_LINK_IFELSE( [AC_LANG_CALL([], mad_rpc_open_port)], AS_VAR_SET(x_ac_cv_ofed_dir, $d), []) AC_LINK_IFELSE( [AC_LANG_CALL([], pma_query_via)], [have_pma_query_via=yes], [AC_MSG_RESULT(Using old libmad)]) CPPFLAGS="$_x_ac_ofed_cppflags_save" LIBS="$_x_ac_ofed_libs_save" test -n "$x_ac_cv_ofed_dir" && break done test -n "$x_ac_cv_ofed_dir" && break done ]) if test -z "$x_ac_cv_ofed_dir"; then AC_MSG_WARN([unable to locate ofed installation]) else 
OFED_CPPFLAGS="-I$x_ac_cv_ofed_dir/include/infiniband" if test "$ac_with_rpath" = "yes"; then OFED_LDFLAGS="-Wl,-rpath -Wl,$x_ac_cv_ofed_dir/$bit -L$x_ac_cv_ofed_dir/$bit" else OFED_LDFLAGS="-L$x_ac_cv_ofed_dir/$bit" fi OFED_LIBS="-libmad -libumad" AC_DEFINE(HAVE_OFED, 1, [Define to 1 if ofed library found]) if test ! -z "$have_pma_query_via" ; then AC_DEFINE(HAVE_OFED_PMA_QUERY_VIA, 1, [Define to 1 if using code with pma_query_via]) fi fi AC_SUBST(OFED_LIBS) AC_SUBST(OFED_CPPFLAGS) AC_SUBST(OFED_LDFLAGS) AM_CONDITIONAL(BUILD_OFED, test -n "$x_ac_cv_ofed_dir") ]) slurm-slurm-15-08-7-1/auxdir/x_ac_pam.m4000066400000000000000000000037601265000126300176400ustar00rootroot00000000000000##***************************************************************************** # $Id$ ##***************************************************************************** # AUTHOR: # Morris Jette # # SYNOPSIS: # X_AC_PAM # # DESCRIPTION: # Test for PAM (Pluggable Authentication Module) support. # # WARNINGS: # This macro must be placed after AC_PROG_CC or equivalent. 
##***************************************************************************** AC_DEFUN([X_AC_PAM], [ AC_MSG_CHECKING([whether to enable PAM support]) AC_ARG_ENABLE( [pam], AS_HELP_STRING(--enable-pam,enable PAM (Pluggable Authentication Modules) support), [ case "$enableval" in yes) x_ac_pam=yes ;; no) x_ac_pam=no ;; *) AC_MSG_RESULT([doh!]) AC_MSG_ERROR([bad value "$enableval" for --enable-pam]) ;; esac ], [x_ac_pam=yes] ) if test "$x_ac_pam" = yes; then AC_MSG_RESULT([yes]) AC_CHECK_LIB([pam], [pam_get_user], [ac_have_pam=yes; PAM_LIBS="-lpam"]) AC_CHECK_LIB([pam_misc], [misc_conv], [ac_have_pam_misc=yes; PAM_LIBS="$PAM_LIBS -lpam_misc"]) AC_SUBST(PAM_LIBS) if test "x$ac_have_pam" = "xyes" -a "x$ac_have_pam_misc" = "xyes"; then AC_DEFINE(HAVE_PAM,, [define if you have the PAM library]) else AC_MSG_WARN([unable to locate PAM libraries]) fi else AC_MSG_RESULT([no]) fi AM_CONDITIONAL(HAVE_PAM, test "x$x_ac_pam" = "xyes" -a "x$ac_have_pam" = "xyes" -a "x$ac_have_pam_misc" = "xyes") AC_ARG_WITH(pam_dir, AS_HELP_STRING(--with-pam_dir=PATH,Specify path to PAM module installation), [ if test -d $withval ; then PAM_DIR="$withval" else AC_MSG_ERROR([bad value "$withval" for --with-pam_dir]) fi ], [ if test -d /lib64/security ; then PAM_DIR="/lib64/security" else PAM_DIR="/lib/security" fi ] ) AC_SUBST(PAM_DIR) AC_DEFINE_UNQUOTED(PAM_DIR, "$pam_dir", [Define PAM module installation directory.]) ]) slurm-slurm-15-08-7-1/auxdir/x_ac_printf_null.m4000066400000000000000000000052751265000126300214220ustar00rootroot00000000000000##***************************************************************************** # AUTHOR: # Morris Jette # # SYNOPSIS: # X_AC_PRINTF_NULL # # DESCRIPTION: # Test that printf("%s\n", NULL); does not result in invalid memory # reference. This is a known issue in Open Solaris version 118 and # some other operating systems. 
The potential for this problem exists # in hundreds of places in the SLURM code, so the ideal place to # address it is in the underlying print functions. # # A good description of the problem can be found here: # http://arc.opensolaris.org/caselog/PSARC/2008/403/20080625_darren.moffat # # Here is an excerpt from that document: # "The current behavior of the printf(3C) family of functions in libc when # passed a NULL value for a string format is undefined and usually # results in a SEGV and crashed application. # # The workaround to applications written to depend on this behavior is to # LD_PRELOAD=/usr/lib/0@0.so.1 (or the 64 bit equivalent). The # workaround isn't always easy to apply (or it is too late data has been # lost or corrupted by that point)." # # In the case of SLURM, setting LD_PRELOAD to the appropriate value before # building the code or running any applications will fix the problem. We # expect to release a version of SLURM supporting OpenSolaris about the same # as a version of OpenSolaris with this problem fixed is released, so the # use of LD_PRELOAD will be temporary. 
##***************************************************************************** AC_DEFUN([X_AC_PRINTF_NULL], [ AC_MSG_CHECKING([for support of printf("%s", NULL)]) AC_RUN_IFELSE([AC_LANG_PROGRAM([ #include <stdio.h> #include <stdlib.h> ], [[ char tmp[8]; char *n=NULL; snprintf(tmp,8,"%s",n); exit(0); ]])], printf_null_ok=yes, printf_null_ok=no, printf_null_ok=yes) case "$host" in *solaris*) have_solaris=yes ;; *) have_solaris=no ;; esac if test "$printf_null_ok" = "no" -a "$have_solaris" = "yes" -a -d /usr/lib64/0@0.so.1; then AC_MSG_ERROR([printf("%s", NULL) results in abort, upgrade to OpenSolaris release 119 or set LD_PRELOAD=/usr/lib64/0@0.so.1]) elif test "$printf_null_ok" = "no" -a "$have_solaris" = "yes" -a -d /usr/lib/0@0.so.1; then AC_MSG_ERROR([printf("%s", NULL) results in abort, upgrade to OpenSolaris release 119 or set LD_PRELOAD=/usr/lib/0@0.so.1]) elif test "$printf_null_ok" = "no" -a "$have_solaris" = "yes"; then AC_MSG_ERROR([printf("%s", NULL) results in abort, upgrade to OpenSolaris release 119]) elif test "$printf_null_ok" = "no"; then AC_MSG_ERROR([printf("%s", NULL) results in abort]) else AC_MSG_RESULT([yes]) fi ]) slurm-slurm-15-08-7-1/auxdir/x_ac_ptrace.m4000066400000000000000000000014201265000126300203300ustar00rootroot00000000000000##***************************************************************************** # $Id$ ##***************************************************************************** # AUTHOR: # Morris Jette # # SYNOPSIS: # X_AC_PTRACE # # DESCRIPTION: # Test argument count of ptrace function. # # WARNINGS: # This macro must be placed after AC_PROG_CC or equivalent.
##***************************************************************************** AC_DEFUN([X_AC_PTRACE], [ AC_COMPILE_IFELSE([AC_LANG_PROGRAM([[#include <sys/types.h> #include <sys/ptrace.h> #include <unistd.h> ]], [[ptrace(PT_TRACE_ME,0,0,0,0);]])],[AC_DEFINE(PTRACE_FIVE_ARGS, 1, [Define to 1 if ptrace takes five arguments.])],[]) AC_CHECK_FUNCS(ptrace64, [], []) ]) slurm-slurm-15-08-7-1/auxdir/x_ac_readline.m4000066400000000000000000000030511265000126300206370ustar00rootroot00000000000000##***************************************************************************** ## $Id$ ##***************************************************************************** # AUTHOR: # Jim Garlick # # SYNOPSIS: # AC_READLINE # # DESCRIPTION: # Adds support for --without-readline. Exports READLINE_LIBS if found # # # WARNINGS: # This macro must be placed after AC_PROG_CC and X_AC_CURSES. ##***************************************************************************** AC_DEFUN([X_AC_READLINE], [ AC_MSG_CHECKING([for whether to include readline support]) AC_ARG_WITH([readline], AS_HELP_STRING(--without-readline,compile without readline support), [ case "$withval" in yes) ac_with_readline=yes ;; no) ac_with_readline=no ;; *) AC_MSG_RESULT([doh!]) AC_MSG_ERROR([bad value "$withval" for --without-readline]) ;; esac ] ) AC_MSG_RESULT([${ac_with_readline=yes}]) if test "$ac_with_readline" = "yes"; then saved_LIBS="$LIBS" READLINE_LIBS="-lreadline -lhistory $NCURSES" LIBS="$saved_LIBS $READLINE_LIBS" AC_LINK_IFELSE([AC_LANG_PROGRAM([[ #include <stdio.h> #include <readline/readline.h> #include <readline/history.h> ]], [[ readline("in:");]])],[AC_DEFINE([HAVE_READLINE], [1], [Define if you are compiling with readline.])],[READLINE_LIBS=""]) LIBS="$saved_LIBS" if test "$READLINE_LIBS" = ""; then AC_MSG_WARN([configured for readline support, but couldn't find libraries]); fi fi AC_SUBST(READLINE_LIBS) ]) 
slurm-slurm-15-08-7-1/auxdir/x_ac_rrdtool.m4000066400000000000000000000043521265000126300205460ustar00rootroot00000000000000##***************************************************************************** # AUTHOR: # Written by Bull- Thomas Cadeau # # SYNOPSIS: # X_AC_RRDTOOL # # DESCRIPTION: # Determine if the RRDTOOL libraries exist ##***************************************************************************** AC_DEFUN([X_AC_RRDTOOL], [ _x_ac_rrdtool_dirs="/usr /usr/local" _x_ac_rrdtool_libs="lib64 lib" AC_ARG_WITH([rrdtool], AS_HELP_STRING(--with-rrdtool=PATH, Specify path to rrdtool-devel installation), [_x_ac_rrdtool_dirs="$withval $_x_ac_rrdtool_dirs"], [with_rrdtool=check]) # echo with rrdtool $with_rrdtool # echo without rrdtool $without_rrdtool AS_IF([test "x$with_rrdtool" != "xno"], [AC_CACHE_CHECK( [for rrdtool installation], [x_ac_cv_rrdtool_dir], [ for d in $_x_ac_rrdtool_dirs; do test -d "$d" || continue test -d "$d/include" || continue test -f "$d/include/rrd.h" || continue for bit in $_x_ac_rrdtool_libs; do test -d "$d/$bit" || continue _x_ac_rrdtool_cppflags_save="$CPPFLAGS" CPPFLAGS="-I$d/include $CPPFLAGS" _x_ac_rrdtool_libs_save="$LIBS" LIBS="-L$d/$bit -lrrd $LIBS" AC_TRY_LINK([#include <rrd.h>], [rrd_value_t *rrd_data;] [rrd_info_t *rrd_info;] [ rrd_test_error();], AS_VAR_SET(x_ac_cv_rrdtool_dir, $d), []) CPPFLAGS="$_x_ac_rrdtool_cppflags_save" LIBS="$_x_ac_rrdtool_libs_save" test -n "$x_ac_cv_rrdtool_dir" && break done test -n "$x_ac_cv_rrdtool_dir" && break done ]) ]) # echo x_ac_cv_rrdtool_dir $x_ac_cv_rrdtool_dir if test -z "$x_ac_cv_rrdtool_dir"; then AC_MSG_WARN([unable to locate rrdtool installation]) else RRDTOOL_CPPFLAGS="-I$x_ac_cv_rrdtool_dir/include" if test "$ac_with_rpath" = "yes"; then RRDTOOL_LDFLAGS="-Wl,-rpath -Wl,$x_ac_cv_rrdtool_dir/$bit -L$x_ac_cv_rrdtool_dir/$bit" else RRDTOOL_LDFLAGS="-L$x_ac_cv_rrdtool_dir/$bit" fi RRDTOOL_LIBS="-lrrd" fi AC_SUBST(RRDTOOL_LIBS) AC_SUBST(RRDTOOL_CPPFLAGS) AC_SUBST(RRDTOOL_LDFLAGS) 
AM_CONDITIONAL(BUILD_RRD, test -n "$x_ac_cv_rrdtool_dir") ]) slurm-slurm-15-08-7-1/auxdir/x_ac_setpgrp.m4000066400000000000000000000013411265000126300205400ustar00rootroot00000000000000##***************************************************************************** # $Id: x_ac_setpgrp.m4 8192 2006-05-25 00:15:05Z morrone $ ##***************************************************************************** # AUTHOR: # Morris Jette # # SYNOPSIS: # X_AC_SETPGRP # # DESCRIPTION: # Test argument count of setpgrp function. # # WARNINGS: # This macro must be placed after AC_PROG_CC or equivalent. ##***************************************************************************** AC_DEFUN([X_AC_SETPGRP], [ AC_COMPILE_IFELSE([AC_LANG_PROGRAM([[#include <unistd.h>]], [[setpgrp(0,0);]])],[AC_DEFINE(SETPGRP_TWO_ARGS, 1, [Define to 1 if setpgrp takes two arguments.])],[]) ]) slurm-slurm-15-08-7-1/auxdir/x_ac_setproctitle.m4000066400000000000000000000027031265000126300216000ustar00rootroot00000000000000##***************************************************************************** # $Id$ ##***************************************************************************** # AUTHOR: # Mark Grondona # # SYNOPSIS: # X_AC_SETPROCTITLE # # DESCRIPTION: # Check for setproctitle() system call or emulation. # # WARNINGS: # This macro must be placed after AC_PROG_CC or equivalent. 
##***************************************************************************** dnl dnl Perform checks related to setproctitle() emulation dnl AC_DEFUN([X_AC_SETPROCTITLE], [ # case "$host" in *-*-aix*) AC_DEFINE(SETPROCTITLE_STRATEGY,PS_USE_CLOBBER_ARGV) AC_DEFINE(SETPROCTITLE_PS_PADDING, '\0') ;; *-*-hpux*) AC_DEFINE(SETPROCTITLE_STRATEGY,PS_USE_PSTAT) ;; *-*-linux*) AC_DEFINE(SETPROCTITLE_STRATEGY,PS_USE_CLOBBER_ARGV) AC_DEFINE(SETPROCTITLE_PS_PADDING, '\0') ;; *) AC_DEFINE(SETPROCTITLE_STRATEGY,PS_USE_NONE, [Define to the setproctitle() emulation type]) AC_DEFINE(SETPROCTITLE_PS_PADDING, '\0', [Define if you need setproctitle padding]) ;; esac AC_MSG_CHECKING([for __progname]) AC_LINK_IFELSE([AC_LANG_PROGRAM([[#include <stdio.h>]], [[extern char *__progname; puts(__progname);]])],[ac_have__progname=yes ],[]) AC_MSG_RESULT(${ac_have__progname=no}) if test "$ac_have__progname" = "yes"; then AC_DEFINE([HAVE__PROGNAME], 1, [Define if you have __progname.]) fi ]) slurm-slurm-15-08-7-1/auxdir/x_ac_sgi_job.m4000066400000000000000000000013251265000126300204720ustar00rootroot00000000000000##***************************************************************************** ## $Id: x_ac_aix.m4 8192 2006-05-25 00:15:05Z morrone $ ##***************************************************************************** # AUTHOR: # Mark Grondona # # SYNOPSIS: # AC_SGI_JOB # # DESCRIPTION: # Check for presence of SGI job container support via libjob.so ##***************************************************************************** AC_DEFUN([X_AC_SGI_JOB], [ AC_CHECK_LIB([job], [job_attachpid], [ac_have_sgi_job="yes"], []) AC_MSG_CHECKING([for SGI job container support]) AC_MSG_RESULT([${ac_have_sgi_job=no}]) AM_CONDITIONAL(HAVE_SGI_JOB, test "x$ac_have_sgi_job" = "xyes") ]) slurm-slurm-15-08-7-1/auxdir/x_ac_slurm_ssl.m4000066400000000000000000000107551265000126300211000ustar00rootroot00000000000000##***************************************************************************** ## $Id$ 
##***************************************************************************** # AUTHOR: # Mark Grondona # (Mostly taken from OpenSSH configure.ac) # # SYNOPSIS: # X_AC_SLURM_WITH_SSL # # DESCRIPTION: # Process --with-ssl configure flag and search for OpenSSL support. # ##***************************************************************************** AC_DEFUN([X_AC_SLURM_WITH_SSL], [ ssl_default_dirs="/usr/local/openssl64 /usr/local/openssl /usr/lib/openssl \ /usr/local/ssl /usr/lib/ssl /usr/local \ /usr/pkg /opt /opt/openssl /usr" AC_SUBST(SSL_LDFLAGS) AC_SUBST(SSL_LIBS) AC_SUBST(SSL_CPPFLAGS) SSL_LIB_TEST="-lcrypto" AC_ARG_WITH(ssl, AS_HELP_STRING(--with-ssl=PATH,Specify path to OpenSSL installation), [ tryssldir=$withval # Hack around a libtool bug on AIX. # libcrypto is in a non-standard library path on AIX (/opt/freeware # which is specified with --with-ssl), and libtool is not setting # the correct runtime library path in the binaries. if test "x$ac_have_aix" = "xyes"; then SSL_LIB_TEST="-lcrypto-static" elif test "x$ac_have_nrt" = "xyes"; then # it appears on p7 machines the openssl doesn't # link correctly so we need to add -ldl SSL_LIB_TEST="$SSL_LIB_TEST -ldl" fi ]) saved_LIBS="$LIBS" saved_LDFLAGS="$LDFLAGS" saved_CPPFLAGS="$CPPFLAGS" if test "x$prefix" != "xNONE" ; then tryssldir="$tryssldir $prefix" fi if test "x$tryssldir" != "xno" ; then AC_CACHE_CHECK([for OpenSSL directory], ac_cv_openssldir, [ for ssldir in $tryssldir "" $ssl_default_dirs; do CPPFLAGS="$saved_CPPFLAGS" LDFLAGS="$saved_LDFLAGS" LIBS="$saved_LIBS $SSL_LIB_TEST" # Skip directories if they don't exist if test ! -z "$ssldir" -a ! -d "$ssldir" ; then continue; fi sslincludedir="$ssldir" if test ! -z "$ssldir"; then # Try to use $ssldir/lib if it exists, otherwise # $ssldir if test -d "$ssldir/lib" ; then LDFLAGS="-L$ssldir/lib $saved_LDFLAGS" if test ! -z "$need_dash_r" ; then LDFLAGS="-R$ssldir/lib $LDFLAGS" fi else LDFLAGS="-L$ssldir $saved_LDFLAGS" if test ! 
-z "$need_dash_r" ; then LDFLAGS="-R$ssldir $LDFLAGS" fi fi # Try to use $ssldir/include if it exists, otherwise # $ssldir if test -d "$ssldir/include" ; then sslincludedir="$ssldir/include" CPPFLAGS="-I$ssldir/include $saved_CPPFLAGS" else CPPFLAGS="-I$ssldir $saved_CPPFLAGS" fi fi test -f "$sslincludedir/openssl/rand.h" || continue test -f "$sslincludedir/openssl/hmac.h" || continue test -f "$sslincludedir/openssl/sha.h" || continue # Basic test to check for compatible version and correct linking AC_RUN_IFELSE([AC_LANG_SOURCE([[ #include #include #include #include #define SIZE 8 int main(void) { int a[SIZE], i; for (i=0; i]], [[EVP_MD_CTX_cleanup(NULL);]])],[AC_DEFINE(HAVE_EVP_MD_CTX_CLEANUP, 1, [Define to 1 if function EVP_MD_CTX_cleanup exists.])],[]) else SSL_LIBS="" AC_MSG_WARN([could not find working OpenSSL library]) fi LIBS="$saved_LIBS" CPPFLAGS="$saved_CPPFLAGS" LDFLAGS="$saved_LDFLAGS" ])dnl AC_SLURM_WITH_SSL slurm-slurm-15-08-7-1/auxdir/x_ac_sun_const.m4000066400000000000000000000020651265000126300210730ustar00rootroot00000000000000##***************************************************************************** # AUTHOR: # Morris Jette # # SYNOPSIS: # X_AC_SUN_CONST # # DESCRIPTION: # Test for Sun Constellation system with 3-D interconnect ##***************************************************************************** AC_DEFUN([X_AC_SUN_CONST], [ AC_MSG_CHECKING([for Sun Constellation system]) AC_ARG_ENABLE( [sun-const], AS_HELP_STRING(--enable-sun-const,enable Sun Constellation system support), [ case "$enableval" in yes) x_ac_sun_const=yes ;; no) x_ac_sun_const=no ;; *) AC_MSG_RESULT([doh!]) AC_MSG_ERROR([bad value "$enableval" for --enable-sun-const]) ;; esac ], [x_ac_sun_const=no] ) if test "$x_ac_sun_const" = yes; then AC_MSG_RESULT([yes]) AC_DEFINE(SYSTEM_DIMENSIONS, 4, [4-dimensional architecture counting the nodes under a switch as additional dimension]) AC_DEFINE(HAVE_SUN_CONST,1,[define if Sun Constellation system]) else AC_MSG_RESULT([no]) fi 
]) slurm-slurm-15-08-7-1/config.h.in000066400000000000000000000407621265000126300163610ustar00rootroot00000000000000/* config.h.in. Generated from configure.ac by autoheader. */ /* Define if building universal (internal helper macro) */ #undef AC_APPLE_UNIVERSAL_BUILD /* Define the BG_BRIDGE_SO value */ #undef BG_BRIDGE_SO /* Define the BG_DB2_SO value */ #undef BG_DB2_SO /* Define the BG_SERIAL value */ #undef BG_SERIAL /* Define BLCR installation home */ #undef BLCR_HOME /* Define location of cpuset directory */ #undef CPUSET_DIR /* Define to 1 for debugger partial task attach support. */ #undef DEBUGGER_PARTIAL_ATTACH /* Define to 1 if using glib-2.32.0 or higher */ #undef GLIB_NEW_THREADS /* Define to 1 if licensed under terms of the GNU General Public License. */ #undef GPL_LICENSED /* Define to 1 if using gtk+-2.14.0 or higher */ #undef GTK2_USE_GET_FOCUS /* Define to 1 if using gtk+-2.10.0 or higher */ #undef GTK2_USE_RADIO_SET /* Define to 1 if using gtk+-2.12.0 or higher */ #undef GTK2_USE_TOOLTIP /* Make sure we get the 1.8 HDF5 API */ #undef H5_NO_DEPRECATED_SYMBOLS /* Define to 1 if 3-dimensional architecture */ #undef HAVE_3D /* Define to 1 if 4-dimensional architecture */ #undef HAVE_4D /* Define to 1 for AIX operating system */ #undef HAVE_AIX /* Define to 1 for Cray XT/XE systems using ALPS */ #undef HAVE_ALPS_CRAY /* Define to 1 for emulating a Cray XT/XE system using ALPS */ #undef HAVE_ALPS_CRAY_EMULATION /* Define to 1 if running against an ALPS emulation */ #undef HAVE_ALPS_EMULATION /* Define to 1 if emulating or running on Blue Gene system */ #undef HAVE_BG /* Define to 1 if emulating or running on Blue Gene/L system */ #undef HAVE_BGL /* Define to 1 if emulating or running on Blue Gene/P system */ #undef HAVE_BGP /* Define to 1 if emulating or running on Blue Gene/Q system */ #undef HAVE_BGQ /* Define to 1 if have Blue Gene files */ #undef HAVE_BG_FILES /* Define to 1 if using code where blocks have actions */ #undef HAVE_BG_GET_ACTION /* 
Define to 1 if emulating or running on Blue Gene/L or P system */ #undef HAVE_BG_L_P /* Define to 1 if using code with new iocheck */ #undef HAVE_BG_NEW_IO_CHECK /* Define to 1 if you have the `cfmakeraw' function. */ #undef HAVE_CFMAKERAW /* Define to 1 for systems with a Cray network */ #undef HAVE_CRAY_NETWORK /* Define to 1 if you have the header file. */ #undef HAVE_CURSES_H /* Define to 1 if DataWarp library found */ #undef HAVE_DATAWARP /* Define to 1 if you have the declaration of `hstrerror', and to 0 if you don't. */ #undef HAVE_DECL_HSTRERROR /* Define to 1 if you have the declaration of `strerror_r', and to 0 if you don't. */ #undef HAVE_DECL_STRERROR_R /* Define to 1 if you have the declaration of `strsignal', and to 0 if you don't. */ #undef HAVE_DECL_STRSIGNAL /* Define to 1 if you have the declaration of `sys_siglist', and to 0 if you don't. */ #undef HAVE_DECL_SYS_SIGLIST /* Define to 1 if you have the header file. */ #undef HAVE_DIRENT_H /* Define to 1 if you have the header file. */ #undef HAVE_DLFCN_H /* Define to 1 if you have the `eaccess' function. */ #undef HAVE_EACCESS /* Define to 1 if you have the header file. */ #undef HAVE_ERRNO_H /* Define to 1 if function EVP_MD_CTX_cleanup exists. */ #undef HAVE_EVP_MD_CTX_CLEANUP /* Define to 1 if you have the `faccessat' function. */ #undef HAVE_FACCESSAT /* Define to 1 if you have the `fdatasync' function. */ #undef HAVE_FDATASYNC /* Define to 1 if you have the header file. */ #undef HAVE_FLOAT_H /* Define to 1 if freeipmi library found */ #undef HAVE_FREEIPMI /* Define to 1 if running slurmd on front-end only */ #undef HAVE_FRONT_END /* Define to 1 if you have the `get_current_dir_name' function. */ #undef HAVE_GET_CURRENT_DIR_NAME /* Defined if you have HDF5 support */ #undef HAVE_HDF5 /* Define to 1 if you have the `hstrerror' function. 
*/ #undef HAVE_HSTRERROR /* Define to 1 if hwloc library found */ #undef HAVE_HWLOC /* Define to 1 if hwloc library supports PCI devices */ #undef HAVE_HWLOC_PCI /* Define to 1 if you have the `inet_aton' function. */ #undef HAVE_INET_ATON /* Define to 1 if you have the `inet_ntop' function. */ #undef HAVE_INET_NTOP /* Define to 1 if you have the `inet_pton' function. */ #undef HAVE_INET_PTON /* Define to 1 if you have the header file. */ #undef HAVE_INTTYPES_H /* Define if you are compiling with json. */ #undef HAVE_JSON /* Define if headers in include/json-c. */ #undef HAVE_JSON_C_INC /* Define if headers in include/json. */ #undef HAVE_JSON_INC /* Define to 1 if you have the header file. */ #undef HAVE_KSTAT_H /* Define to 1 if you have a functional curl library. */ #undef HAVE_LIBCURL /* Define to 1 if you have the header file. */ #undef HAVE_LIMITS_H /* Define to 1 if you have the header file. */ #undef HAVE_LINUX_SCHED_H /* Compile with Lua 5.1 */ #undef HAVE_LUA_5_1 /* Compile with Lua 5.2 */ #undef HAVE_LUA_5_2 /* Define to 1 if your system has a GNU libc compatible `malloc' function, and to 0 otherwise. */ #undef HAVE_MALLOC /* Define to 1 if you have the header file. */ #undef HAVE_MCHECK_H /* Define to 1 if you have the header file. */ #undef HAVE_MEMORY_H /* Define to 1 if you have the `mtrace' function. */ #undef HAVE_MTRACE /* Define to 1 if using MySQL libaries */ #undef HAVE_MYSQL /* Define to 1 for running on a Cray in native mode without ALPS */ #undef HAVE_NATIVE_CRAY /* Define to 1 if alpscomm functions new to CLE 5.2UP01 are defined */ #undef HAVE_NATIVE_CRAY_GA /* Define to 1 if you have the header file. */ #undef HAVE_NCURSES_H /* Define to 1 if you have the header file. 
*/ #undef HAVE_NETDB_H /* Define to 1 if netloc library found */ #undef HAVE_NETLOC /* Define to 1 if netloc includes use underscore not subdirectory */ #undef HAVE_NETLOC_NOSUB /* define if you have nrt.h */ #undef HAVE_NRT_H /* define if numa library installed */ #undef HAVE_NUMA /* Define to 1 if ofed library found */ #undef HAVE_OFED /* Define to 1 if using code with pma_query_via */ #undef HAVE_OFED_PMA_QUERY_VIA /* define if you have openssl. */ #undef HAVE_OPENSSL /* define if you have the PAM library */ #undef HAVE_PAM /* Define to 1 if you have the header file. */ #undef HAVE_PAM_PAM_APPL_H /* Define to 1 if you have the header file. */ #undef HAVE_PATHS_H /* define if you have permapi_h */ #undef HAVE_PERMAPI_H /* Define if you have Posix semaphores. */ #undef HAVE_POSIX_SEMS /* Define to 1 if you have the header file. */ #undef HAVE_PROCTRACK_H /* Define if libc sets program_invocation_name */ #undef HAVE_PROGRAM_INVOCATION_NAME /* Define if you have POSIX threads libraries and header files. */ #undef HAVE_PTHREAD /* Define to 1 if you have the header file. */ #undef HAVE_PTHREAD_H /* Have PTHREAD_PRIO_INHERIT. */ #undef HAVE_PTHREAD_PRIO_INHERIT /* Define to 1 if you have the `ptrace64' function. */ #undef HAVE_PTRACE64 /* Define to 1 if you have the header file. */ #undef HAVE_PTY_H /* Define if you are compiling with readline. */ #undef HAVE_READLINE /* Define to 1 for running on a real Cray system */ #undef HAVE_REAL_CRAY /* Define to 1 if you have the `sched_setaffinity' function. */ #undef HAVE_SCHED_SETAFFINITY /* Define to 1 if you have the header file. */ #undef HAVE_SECURITY_PAM_APPL_H /* Define to 1 if you have the `setproctitle' function. */ #undef HAVE_SETPROCTITLE /* Define to 1 if you have the `setresuid' function. */ #undef HAVE_SETRESUID /* Define to 1 if you have the header file. */ #undef HAVE_SOCKET_H /* Define to 1 if you have the `statfs' function. */ #undef HAVE_STATFS /* Define to 1 if you have the `statvfs' function. 
*/ #undef HAVE_STATVFS /* Define to 1 if you have the header file. */ #undef HAVE_STDBOOL_H /* Define to 1 if you have the header file. */ #undef HAVE_STDINT_H /* Define to 1 if you have the header file. */ #undef HAVE_STDLIB_H /* Define to 1 if you have the `strerror' function. */ #undef HAVE_STRERROR /* Define to 1 if you have the `strerror_r' function. */ #undef HAVE_STRERROR_R /* Define to 1 if you have the header file. */ #undef HAVE_STRINGS_H /* Define to 1 if you have the header file. */ #undef HAVE_STRING_H /* Define to 1 if you have the `strlcpy' function. */ #undef HAVE_STRLCPY /* Define to 1 if you have the `strndup' function. */ #undef HAVE_STRNDUP /* Define to 1 if you have the `strsignal' function. */ #undef HAVE_STRSIGNAL /* define if Sun Constellation system */ #undef HAVE_SUN_CONST /* Define to 1 if you have the `sysctlbyname' function. */ #undef HAVE_SYSCTLBYNAME /* Define to 1 if you have the header file. */ #undef HAVE_SYSINT_H /* Define to 1 if you have the header file. */ #undef HAVE_SYS_DR_H /* Define to 1 if you have the header file. */ #undef HAVE_SYS_IPC_H /* Define to 1 if you have the header file. */ #undef HAVE_SYS_PRCTL_H /* Define to 1 if you have the header file. */ #undef HAVE_SYS_PTRACE_H /* Define to 1 if you have the header file. */ #undef HAVE_SYS_SEM_H /* Define to 1 if you have the header file. */ #undef HAVE_SYS_SHM_H /* Define to 1 if you have the header file. */ #undef HAVE_SYS_SOCKET_H /* Define to 1 if you have the header file. */ #undef HAVE_SYS_STATFS_H /* Define to 1 if you have the header file. */ #undef HAVE_SYS_STATVFS_H /* Define to 1 if you have the header file. */ #undef HAVE_SYS_STAT_H /* Define to 1 if you have the header file. */ #undef HAVE_SYS_SYSCTL_H /* Define to 1 if you have the header file. */ #undef HAVE_SYS_SYSLOG_H /* Define to 1 if you have the header file. */ #undef HAVE_SYS_SYSTEMCFG_H /* Define to 1 if you have the header file. 
*/ #undef HAVE_SYS_TERMIOS_H /* Define to 1 if you have the header file. */ #undef HAVE_SYS_TYPES_H /* Define to 1 if you have the header file. */ #undef HAVE_SYS_VFS_H /* Define to 1 if you have that is POSIX.1 compatible. */ #undef HAVE_SYS_WAIT_H /* Define to 1 if you have the header file. */ #undef HAVE_TERMCAP_H /* Define to 1 if you have the header file. */ #undef HAVE_UNISTD_H /* Define to 1 if you have the `unsetenv' function. */ #undef HAVE_UNSETENV /* Define to 1 if you have the header file. */ #undef HAVE_UTMP_H /* Define to 1 if you have the header file. */ #undef HAVE_VALUES_H /* Define if you have __progname. */ #undef HAVE__PROGNAME /* Define to 1 if you have the external variable, _system_configuration with a member named physmem. */ #undef HAVE__SYSTEM_CONFIGURATION /* Defined if libcurl supports AsynchDNS */ #undef LIBCURL_FEATURE_ASYNCHDNS /* Defined if libcurl supports IDN */ #undef LIBCURL_FEATURE_IDN /* Defined if libcurl supports IPv6 */ #undef LIBCURL_FEATURE_IPV6 /* Defined if libcurl supports KRB4 */ #undef LIBCURL_FEATURE_KRB4 /* Defined if libcurl supports libz */ #undef LIBCURL_FEATURE_LIBZ /* Defined if libcurl supports NTLM */ #undef LIBCURL_FEATURE_NTLM /* Defined if libcurl supports SSL */ #undef LIBCURL_FEATURE_SSL /* Defined if libcurl supports SSPI */ #undef LIBCURL_FEATURE_SSPI /* Defined if libcurl supports DICT */ #undef LIBCURL_PROTOCOL_DICT /* Defined if libcurl supports FILE */ #undef LIBCURL_PROTOCOL_FILE /* Defined if libcurl supports FTP */ #undef LIBCURL_PROTOCOL_FTP /* Defined if libcurl supports FTPS */ #undef LIBCURL_PROTOCOL_FTPS /* Defined if libcurl supports HTTP */ #undef LIBCURL_PROTOCOL_HTTP /* Defined if libcurl supports HTTPS */ #undef LIBCURL_PROTOCOL_HTTPS /* Defined if libcurl supports IMAP */ #undef LIBCURL_PROTOCOL_IMAP /* Defined if libcurl supports LDAP */ #undef LIBCURL_PROTOCOL_LDAP /* Defined if libcurl supports POP3 */ #undef LIBCURL_PROTOCOL_POP3 /* Defined if libcurl supports RTSP */ #undef 
LIBCURL_PROTOCOL_RTSP /* Defined if libcurl supports SMTP */ #undef LIBCURL_PROTOCOL_SMTP /* Defined if libcurl supports TELNET */ #undef LIBCURL_PROTOCOL_TELNET /* Defined if libcurl supports TFTP */ #undef LIBCURL_PROTOCOL_TFTP /* Define the libnrt.so location */ #undef LIBNRT_SO /* Define to 1 for --get-user-env to load user environment without .login */ #undef LOAD_ENV_NO_LOGIN /* Define to the sub-directory in which libtool stores uninstalled libraries. */ #undef LT_OBJDIR /* Define to 1 for memory leak debugging. */ #undef MEMORY_LEAK_DEBUG /* Enable multiple slurmd on one node */ #undef MULTIPLE_SLURMD /* Define to 1 if with non thread-safe code */ #undef MYSQL_NOT_THREAD_SAFE /* Define to 1 if you are building a production release. */ #undef NDEBUG /* Define to the address where bug reports for this package should be sent. */ #undef PACKAGE_BUGREPORT /* Define to the full name of this package. */ #undef PACKAGE_NAME /* Define to the full name and version of this package. */ #undef PACKAGE_STRING /* Define to the one symbol short name of this package. */ #undef PACKAGE_TARNAME /* Define to the home page for this package. */ #undef PACKAGE_URL /* Define to the version of this package. */ #undef PACKAGE_VERSION /* Define PAM module installation directory. */ #undef PAM_DIR /* Define the project's name. */ #undef PROJECT /* Define to necessary symbol if this constant uses a non-standard name on your system. */ #undef PTHREAD_CREATE_JOINABLE /* Define to 1 if ptrace takes five arguments. */ #undef PTRACE_FIVE_ARGS /* Define the project's release. */ #undef RELEASE /* Define to 1 for salloc to kill child processes at job termination */ #undef SALLOC_KILL_CMD /* Define to 1 to require salloc execution in the foreground. */ #undef SALLOC_RUN_FOREGROUND /* Define to 1 if sched_getaffinity takes three arguments. */ #undef SCHED_GETAFFINITY_THREE_ARGS /* Define to 1 if sched_getaffinity takes two arguments. 
*/ #undef SCHED_GETAFFINITY_TWO_ARGS /* Define to 1 if setpgrp takes two arguments. */ #undef SETPGRP_TWO_ARGS /* Define if you need setproctitle padding */ #undef SETPROCTITLE_PS_PADDING /* Define to the setproctitle() emulation type */ #undef SETPROCTITLE_STRATEGY /* Define path to sleep command */ #undef SLEEP_CMD /* Define the default port number for slurmctld */ #undef SLURMCTLD_PORT /* Define the default port count for slurmctld */ #undef SLURMCTLD_PORT_COUNT /* Define the default port number for slurmdbd */ #undef SLURMDBD_PORT /* Define the default port number for slurmd */ #undef SLURMD_PORT /* API current age */ #undef SLURM_API_AGE /* API current version */ #undef SLURM_API_CURRENT /* API current major */ #undef SLURM_API_MAJOR /* API current rev */ #undef SLURM_API_REVISION /* Define the API's version */ #undef SLURM_API_VERSION /* Define if your architecture's byteorder is big endian. */ #undef SLURM_BIGENDIAN /* Define the project's major version. */ #undef SLURM_MAJOR /* Define the project's micro version. */ #undef SLURM_MICRO /* Define the project's minor version. */ #undef SLURM_MINOR /* Define Slurm installation prefix */ #undef SLURM_PREFIX /* Define to 1 if running slurm simulator */ #undef SLURM_SIMULATOR /* SLURM Version Number */ #undef SLURM_VERSION_NUMBER /* Define the project's version string. */ #undef SLURM_VERSION_STRING /* Define to 1 if you have the ANSI C header files. */ #undef STDC_HEADERS /* Define to 1 if strerror_r returns char *. */ #undef STRERROR_R_CHAR_P /* Define path to su command */ #undef SUCMD /* 3-dimensional architecture */ #undef SYSTEM_DIMENSIONS /* Define to 1 if you can safely include both and . */ #undef TIME_WITH_SYS_TIME /* Define slurm_ prefix function aliases for plugins */ #undef USE_ALIAS /* define if using ISO 8601 time format */ #undef USE_ISO_8601 /* Define the project's version. */ #undef VERSION /* Define if you have pthreads. 
*/ #undef WITH_PTHREADS /* Define WORDS_BIGENDIAN to 1 if your processor stores words with the most significant byte first (like Motorola and SPARC, unlike Intel). */ #if defined AC_APPLE_UNIVERSAL_BUILD # if defined __BIG_ENDIAN__ # define WORDS_BIGENDIAN 1 # endif #else # ifndef WORDS_BIGENDIAN # undef WORDS_BIGENDIAN # endif #endif /* Enable large inode numbers on Mac OS X 10.5. */ #ifndef _DARWIN_USE_64_BIT_INODE # define _DARWIN_USE_64_BIT_INODE 1 #endif /* Number of bits in a file offset, on hosts where this is settable. */ #undef _FILE_OFFSET_BITS /* Define for large files, on AIX-style hosts. */ #undef _LARGE_FILES /* Define curl_free() as free() if our version of curl lacks curl_free. */ #undef curl_free /* Define to rpl_malloc if the replacement function should be used. */ #undef malloc slurm-slurm-15-08-7-1/config.xml.in000066400000000000000000000037671265000126300167360ustar00rootroot00000000000000 SLURM: Simple Linux Utility for Resource Management @SLURM_MAJOR@ @SLURM_MINOR@ @SLURM_MICRO@ @RELEASE@ 1 included SLURM: Simple Linux Utility for Resource Management GNU General Public License Applications/batch Morris Jette jette@schedmd.com The Simple Linux Utility for Resource Management (SLURM) is an open source, fault-tolerant, and highly scalable cluster management and job scheduling system for large and small Linux clusters. SLURM requires no kernel modifications for its operation and is relatively self-contained. As a cluster resource manager, SLURM has three key functions. First, it allocates exclusive and/or non-exclusive access to resources (compute nodes) to users for some duration of time so they can perform work. Second, it provides a framework for starting, executing, and monitoring work (normally a parallel job) on the set of allocated nodes. Finally, it arbitrates conflicting requests for resources by managing a queue of pending work. 
http://slurm.schedmd.com slurm slurm-devel slurm-auth-authd slurm-auth-none slurm-auth-munge slurm-sched-wiki slurm-switch-elan torque pbs package slurm-slurm-15-08-7-1/configure000077500000000000000000032053421265000126300162450ustar00rootroot00000000000000#! /bin/sh # Guess values for system-dependent variables and create Makefiles. # Generated by GNU Autoconf 2.69 for slurm 15.08. # # Report bugs to . # # # Copyright (C) 1992-1996, 1998-2012 Free Software Foundation, Inc. # # # This configure script is free software; the Free Software Foundation # gives unlimited permission to copy, distribute and modify it. ## -------------------- ## ## M4sh Initialization. ## ## -------------------- ## # Be more Bourne compatible DUALCASE=1; export DUALCASE # for MKS sh if test -n "${ZSH_VERSION+set}" && (emulate sh) >/dev/null 2>&1; then : emulate sh NULLCMD=: # Pre-4.2 versions of Zsh do word splitting on ${1+"$@"}, which # is contrary to our usage. Disable this feature. alias -g '${1+"$@"}'='"$@"' setopt NO_GLOB_SUBST else case `(set -o) 2>/dev/null` in #( *posix*) : set -o posix ;; #( *) : ;; esac fi as_nl=' ' export as_nl # Printing a long string crashes Solaris 7 /usr/bin/printf. as_echo='\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\' as_echo=$as_echo$as_echo$as_echo$as_echo$as_echo as_echo=$as_echo$as_echo$as_echo$as_echo$as_echo$as_echo # Prefer a ksh shell builtin over an external printf program on Solaris, # but without wasting forks for bash or zsh. 
if test -z "$BASH_VERSION$ZSH_VERSION" \ && (test "X`print -r -- $as_echo`" = "X$as_echo") 2>/dev/null; then as_echo='print -r --' as_echo_n='print -rn --' elif (test "X`printf %s $as_echo`" = "X$as_echo") 2>/dev/null; then as_echo='printf %s\n' as_echo_n='printf %s' else if test "X`(/usr/ucb/echo -n -n $as_echo) 2>/dev/null`" = "X-n $as_echo"; then as_echo_body='eval /usr/ucb/echo -n "$1$as_nl"' as_echo_n='/usr/ucb/echo -n' else as_echo_body='eval expr "X$1" : "X\\(.*\\)"' as_echo_n_body='eval arg=$1; case $arg in #( *"$as_nl"*) expr "X$arg" : "X\\(.*\\)$as_nl"; arg=`expr "X$arg" : ".*$as_nl\\(.*\\)"`;; esac; expr "X$arg" : "X\\(.*\\)" | tr -d "$as_nl" ' export as_echo_n_body as_echo_n='sh -c $as_echo_n_body as_echo' fi export as_echo_body as_echo='sh -c $as_echo_body as_echo' fi # The user is always right. if test "${PATH_SEPARATOR+set}" != set; then PATH_SEPARATOR=: (PATH='/bin;/bin'; FPATH=$PATH; sh -c :) >/dev/null 2>&1 && { (PATH='/bin:/bin'; FPATH=$PATH; sh -c :) >/dev/null 2>&1 || PATH_SEPARATOR=';' } fi # IFS # We need space, tab and new line, in precisely that order. Quoting is # there to prevent editors from complaining about space-tab. # (If _AS_PATH_WALK were called with IFS unset, it would disable word # splitting by setting IFS to empty value.) IFS=" "" $as_nl" # Find who we are. Look in the path if we contain no directory separator. as_myself= case $0 in #(( *[\\/]* ) as_myself=$0 ;; *) as_save_IFS=$IFS; IFS=$PATH_SEPARATOR for as_dir in $PATH do IFS=$as_save_IFS test -z "$as_dir" && as_dir=. test -r "$as_dir/$0" && as_myself=$as_dir/$0 && break done IFS=$as_save_IFS ;; esac # We did not find ourselves, most probably we were run as `sh COMMAND' # in which case we are not to be found in the path. if test "x$as_myself" = x; then as_myself=$0 fi if test ! -f "$as_myself"; then $as_echo "$as_myself: error: cannot find myself; rerun with an absolute file name" >&2 exit 1 fi # Unset variables that we do not need and which cause bugs (e.g. 
in # pre-3.0 UWIN ksh). But do not cause bugs in bash 2.01; the "|| exit 1" # suppresses any "Segmentation fault" message there. '((' could # trigger a bug in pdksh 5.2.14. for as_var in BASH_ENV ENV MAIL MAILPATH do eval test x\${$as_var+set} = xset \ && ( (unset $as_var) || exit 1) >/dev/null 2>&1 && unset $as_var || : done PS1='$ ' PS2='> ' PS4='+ ' # NLS nuisances. LC_ALL=C export LC_ALL LANGUAGE=C export LANGUAGE # CDPATH. (unset CDPATH) >/dev/null 2>&1 && unset CDPATH # Use a proper internal environment variable to ensure we don't fall # into an infinite loop, continuously re-executing ourselves. if test x"${_as_can_reexec}" != xno && test "x$CONFIG_SHELL" != x; then _as_can_reexec=no; export _as_can_reexec; # We cannot yet assume a decent shell, so we have to provide a # neutralization value for shells without unset; and this also # works around shells that cannot unset nonexistent variables. # Preserve -v and -x to the replacement shell. BASH_ENV=/dev/null ENV=/dev/null (unset BASH_ENV) >/dev/null 2>&1 && unset BASH_ENV ENV case $- in # (((( *v*x* | *x*v* ) as_opts=-vx ;; *v* ) as_opts=-v ;; *x* ) as_opts=-x ;; * ) as_opts= ;; esac exec $CONFIG_SHELL $as_opts "$as_myself" ${1+"$@"} # Admittedly, this is quite paranoid, since all the known shells bail # out after a failed `exec'. $as_echo "$0: could not re-execute with $CONFIG_SHELL" >&2 as_fn_exit 255 fi # We don't want this to propagate to other subprocesses. { _as_can_reexec=; unset _as_can_reexec;} if test "x$CONFIG_SHELL" = x; then as_bourne_compatible="if test -n \"\${ZSH_VERSION+set}\" && (emulate sh) >/dev/null 2>&1; then : emulate sh NULLCMD=: # Pre-4.2 versions of Zsh do word splitting on \${1+\"\$@\"}, which # is contrary to our usage. Disable this feature. 
alias -g '\${1+\"\$@\"}'='\"\$@\"' setopt NO_GLOB_SUBST else case \`(set -o) 2>/dev/null\` in #( *posix*) : set -o posix ;; #( *) : ;; esac fi " as_required="as_fn_return () { (exit \$1); } as_fn_success () { as_fn_return 0; } as_fn_failure () { as_fn_return 1; } as_fn_ret_success () { return 0; } as_fn_ret_failure () { return 1; } exitcode=0 as_fn_success || { exitcode=1; echo as_fn_success failed.; } as_fn_failure && { exitcode=1; echo as_fn_failure succeeded.; } as_fn_ret_success || { exitcode=1; echo as_fn_ret_success failed.; } as_fn_ret_failure && { exitcode=1; echo as_fn_ret_failure succeeded.; } if ( set x; as_fn_ret_success y && test x = \"\$1\" ); then : else exitcode=1; echo positional parameters were not saved. fi test x\$exitcode = x0 || exit 1 test -x / || exit 1" as_suggested=" as_lineno_1=";as_suggested=$as_suggested$LINENO;as_suggested=$as_suggested" as_lineno_1a=\$LINENO as_lineno_2=";as_suggested=$as_suggested$LINENO;as_suggested=$as_suggested" as_lineno_2a=\$LINENO eval 'test \"x\$as_lineno_1'\$as_run'\" != \"x\$as_lineno_2'\$as_run'\" && test \"x\`expr \$as_lineno_1'\$as_run' + 1\`\" = \"x\$as_lineno_2'\$as_run'\"' || exit 1 test \$(( 1 + 1 )) = 2 || exit 1 test -n \"\${ZSH_VERSION+set}\${BASH_VERSION+set}\" || ( ECHO='\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\' ECHO=\$ECHO\$ECHO\$ECHO\$ECHO\$ECHO ECHO=\$ECHO\$ECHO\$ECHO\$ECHO\$ECHO\$ECHO PATH=/empty FPATH=/empty; export PATH FPATH test \"X\`printf %s \$ECHO\`\" = \"X\$ECHO\" \\ || test \"X\`print -r -- \$ECHO\`\" = \"X\$ECHO\" ) || exit 1" if (eval "$as_required") 2>/dev/null; then : as_have_required=yes else as_have_required=no fi if test x$as_have_required = xyes && (eval "$as_suggested") 2>/dev/null; then : else as_save_IFS=$IFS; IFS=$PATH_SEPARATOR as_found=false for as_dir in /bin$PATH_SEPARATOR/usr/bin$PATH_SEPARATOR$PATH do 
IFS=$as_save_IFS test -z "$as_dir" && as_dir=. as_found=: case $as_dir in #( /*) for as_base in sh bash ksh sh5; do # Try only shells that exist, to save several forks. as_shell=$as_dir/$as_base if { test -f "$as_shell" || test -f "$as_shell.exe"; } && { $as_echo "$as_bourne_compatible""$as_required" | as_run=a "$as_shell"; } 2>/dev/null; then : CONFIG_SHELL=$as_shell as_have_required=yes if { $as_echo "$as_bourne_compatible""$as_suggested" | as_run=a "$as_shell"; } 2>/dev/null; then : break 2 fi fi done;; esac as_found=false done $as_found || { if { test -f "$SHELL" || test -f "$SHELL.exe"; } && { $as_echo "$as_bourne_compatible""$as_required" | as_run=a "$SHELL"; } 2>/dev/null; then : CONFIG_SHELL=$SHELL as_have_required=yes fi; } IFS=$as_save_IFS if test "x$CONFIG_SHELL" != x; then : export CONFIG_SHELL # We cannot yet assume a decent shell, so we have to provide a # neutralization value for shells without unset; and this also # works around shells that cannot unset nonexistent variables. # Preserve -v and -x to the replacement shell. BASH_ENV=/dev/null ENV=/dev/null (unset BASH_ENV) >/dev/null 2>&1 && unset BASH_ENV ENV case $- in # (((( *v*x* | *x*v* ) as_opts=-vx ;; *v* ) as_opts=-v ;; *x* ) as_opts=-x ;; * ) as_opts= ;; esac exec $CONFIG_SHELL $as_opts "$as_myself" ${1+"$@"} # Admittedly, this is quite paranoid, since all the known shells bail # out after a failed `exec'. $as_echo "$0: could not re-execute with $CONFIG_SHELL" >&2 exit 255 fi if test x$as_have_required = xno; then : $as_echo "$0: This script requires a shell more modern than all" $as_echo "$0: the shells that I found on your system." if test x${ZSH_VERSION+set} = xset ; then $as_echo "$0: In particular, zsh $ZSH_VERSION has bugs and should" $as_echo "$0: be upgraded to zsh 4.3.4 or later." else $as_echo "$0: Please tell bug-autoconf@gnu.org and $0: slurm-dev@schedmd.com about your system, including any $0: error possibly output before this message. 
Then install $0: a modern shell, or manually run the script under such a $0: shell if you do have one." fi exit 1 fi fi fi SHELL=${CONFIG_SHELL-/bin/sh} export SHELL # Unset more variables known to interfere with behavior of common tools. CLICOLOR_FORCE= GREP_OPTIONS= unset CLICOLOR_FORCE GREP_OPTIONS ## --------------------- ## ## M4sh Shell Functions. ## ## --------------------- ## # as_fn_unset VAR # --------------- # Portably unset VAR. as_fn_unset () { { eval $1=; unset $1;} } as_unset=as_fn_unset # as_fn_set_status STATUS # ----------------------- # Set $? to STATUS, without forking. as_fn_set_status () { return $1 } # as_fn_set_status # as_fn_exit STATUS # ----------------- # Exit the shell with STATUS, even in a "trap 0" or "set -e" context. as_fn_exit () { set +e as_fn_set_status $1 exit $1 } # as_fn_exit # as_fn_mkdir_p # ------------- # Create "$as_dir" as a directory, including parents if necessary. as_fn_mkdir_p () { case $as_dir in #( -*) as_dir=./$as_dir;; esac test -d "$as_dir" || eval $as_mkdir_p || { as_dirs= while :; do case $as_dir in #( *\'*) as_qdir=`$as_echo "$as_dir" | sed "s/'/'\\\\\\\\''/g"`;; #'( *) as_qdir=$as_dir;; esac as_dirs="'$as_qdir' $as_dirs" as_dir=`$as_dirname -- "$as_dir" || $as_expr X"$as_dir" : 'X\(.*[^/]\)//*[^/][^/]*/*$' \| \ X"$as_dir" : 'X\(//\)[^/]' \| \ X"$as_dir" : 'X\(//\)$' \| \ X"$as_dir" : 'X\(/\)' \| . 2>/dev/null || $as_echo X"$as_dir" | sed '/^X\(.*[^/]\)\/\/*[^/][^/]*\/*$/{ s//\1/ q } /^X\(\/\/\)[^/].*/{ s//\1/ q } /^X\(\/\/\)$/{ s//\1/ q } /^X\(\/\).*/{ s//\1/ q } s/.*/./; q'` test -d "$as_dir" && break done test -z "$as_dirs" || eval "mkdir $as_dirs" } || test -d "$as_dir" || as_fn_error $? "cannot create directory $as_dir" } # as_fn_mkdir_p # as_fn_executable_p FILE # ----------------------- # Test if FILE is an executable regular file. 
as_fn_executable_p ()
{
  test -f "$1" && test -x "$1"
} # as_fn_executable_p

# as_fn_append VAR VALUE
# ----------------------
# Append the text in VALUE to the end of the definition contained in VAR. Take
# advantage of any shell optimizations that allow amortized linear growth over
# repeated appends, instead of the typical quadratic growth present in naive
# implementations.
if (eval "as_var=1; as_var+=2; test x\$as_var = x12") 2>/dev/null; then :
  eval 'as_fn_append ()
  {
    eval $1+=\$2
  }'
else
  as_fn_append ()
  {
    eval $1=\$$1\$2
  }
fi # as_fn_append

# as_fn_arith ARG...
# ------------------
# Perform arithmetic evaluation on the ARGs, and store the result in the
# global $as_val. Take advantage of shells that can avoid forks. The arguments
# must be portable across $(()) and expr.
if (eval "test \$(( 1 + 1 )) = 2") 2>/dev/null; then :
  eval 'as_fn_arith ()
  {
    as_val=$(( $* ))
  }'
else
  as_fn_arith ()
  {
    as_val=`expr "$@" || test $? -eq 1`
  }
fi # as_fn_arith


# as_fn_error STATUS ERROR [LINENO LOG_FD]
# ----------------------------------------
# Output "`basename $0`: error: ERROR" to stderr. If LINENO and LOG_FD are
# provided, also output the error to LOG_FD, referencing LINENO. Then exit the
# script with STATUS, using 1 if that was 0.
as_fn_error ()
{
  as_status=$1; test $as_status -eq 0 && as_status=1
  if test "$4"; then
    as_lineno=${as_lineno-"$3"} as_lineno_stack=as_lineno_stack=$as_lineno_stack
    $as_echo "$as_me:${as_lineno-$LINENO}: error: $2" >&$4
  fi
  $as_echo "$as_me: error: $2" >&2
  as_fn_exit $as_status
} # as_fn_error

if expr a : '\(a\)' >/dev/null 2>&1 &&
   test "X`expr 00001 : '.*\(...\)'`" = X001; then
  as_expr=expr
else
  as_expr=false
fi

if (basename -- /) >/dev/null 2>&1 && test "X`basename -- / 2>&1`" = "X/"; then
  as_basename=basename
else
  as_basename=false
fi

if (as_dir=`dirname -- /` && test "X$as_dir" = X/) >/dev/null 2>&1; then
  as_dirname=dirname
else
  as_dirname=false
fi

as_me=`$as_basename -- "$0" ||
$as_expr X/"$0" : '.*/\([^/][^/]*\)/*$' \| \
	 X"$0" : 'X\(//\)$' \| \
	 X"$0" : 'X\(/\)' \| . 2>/dev/null ||
$as_echo X/"$0" |
    sed '/^.*\/\([^/][^/]*\)\/*$/{
	    s//\1/
	    q
	  }
	  /^X\/\(\/\/\)$/{
	    s//\1/
	    q
	  }
	  /^X\/\(\/\).*/{
	    s//\1/
	    q
	  }
	  s/.*/./; q'`

# Avoid depending upon Character Ranges.
as_cr_letters='abcdefghijklmnopqrstuvwxyz'
as_cr_LETTERS='ABCDEFGHIJKLMNOPQRSTUVWXYZ'
as_cr_Letters=$as_cr_letters$as_cr_LETTERS
as_cr_digits='0123456789'
as_cr_alnum=$as_cr_Letters$as_cr_digits


  as_lineno_1=$LINENO as_lineno_1a=$LINENO
  as_lineno_2=$LINENO as_lineno_2a=$LINENO
  eval 'test "x$as_lineno_1'$as_run'" != "x$as_lineno_2'$as_run'" &&
  test "x`expr $as_lineno_1'$as_run' + 1`" = "x$as_lineno_2'$as_run'"' || {
  # Blame Lee E. McMahon (1931-1989) for sed's syntax.  :-)
  sed -n '
    p
    /[$]LINENO/=
  ' <$as_myself |
    sed '
      s/[$]LINENO.*/&-/
      t lineno
      b
      :lineno
      N
      :loop
      s/[$]LINENO\([^'$as_cr_alnum'_].*\n\)\(.*\)/\2\1\2/
      t loop
      s/-\n.*//
    ' >$as_me.lineno &&
  chmod +x "$as_me.lineno" ||
    { $as_echo "$as_me: error: cannot create $as_me.lineno; rerun with a POSIX shell" >&2; as_fn_exit 1; }

  # If we had to re-execute with $CONFIG_SHELL, we're ensured to have
  # already done that, so ensure we don't try to do so again and fall
  # in an infinite loop.  This has already happened in practice.
  _as_can_reexec=no; export _as_can_reexec
  # Don't try to exec as it changes $[0], causing all sort of problems
  # (the dirname of $[0] is not the place where we might find the
  # original and so on.  Autoconf is especially sensitive to this).
  . "./$as_me.lineno"
  # Exit status is that of the last command.
  exit
}

ECHO_C= ECHO_N= ECHO_T=
case `echo -n x` in #(((((
-n*)
  case `echo 'xy\c'` in
  *c*) ECHO_T='	';;	# ECHO_T is single tab character.
  xy)  ECHO_C='\c';;
  *)   echo `echo ksh88 bug on AIX 6.1` > /dev/null
       ECHO_T='	';;
  esac;;
*)
  ECHO_N='-n';;
esac

rm -f conf$$ conf$$.exe conf$$.file
if test -d conf$$.dir; then
  rm -f conf$$.dir/conf$$.file
else
  rm -f conf$$.dir
  mkdir conf$$.dir 2>/dev/null
fi
if (echo >conf$$.file) 2>/dev/null; then
  if ln -s conf$$.file conf$$ 2>/dev/null; then
    as_ln_s='ln -s'
    # ... but there are two gotchas:
    # 1) On MSYS, both `ln -s file dir' and `ln file dir' fail.
    # 2) DJGPP < 2.04 has no symlinks; `ln -s' creates a wrapper executable.
    # In both cases, we have to default to `cp -pR'.
    ln -s conf$$.file conf$$.dir 2>/dev/null && test ! -f conf$$.exe ||
      as_ln_s='cp -pR'
  elif ln conf$$.file conf$$ 2>/dev/null; then
    as_ln_s=ln
  else
    as_ln_s='cp -pR'
  fi
else
  as_ln_s='cp -pR'
fi
rm -f conf$$ conf$$.exe conf$$.dir/conf$$.file conf$$.file
rmdir conf$$.dir 2>/dev/null

if mkdir -p . 2>/dev/null; then
  as_mkdir_p='mkdir -p "$as_dir"'
else
  test -d ./-p && rmdir ./-p
  as_mkdir_p=false
fi

as_test_x='test -x'
as_executable_p=as_fn_executable_p

# Sed expression to map a string onto a valid CPP name.
as_tr_cpp="eval sed 'y%*$as_cr_letters%P$as_cr_LETTERS%;s%[^_$as_cr_alnum]%_%g'"

# Sed expression to map a string onto a valid variable name.
as_tr_sh="eval sed 'y%*+%pp%;s%[^_$as_cr_alnum]%_%g'"

SHELL=${CONFIG_SHELL-/bin/sh}

test -n "$DJDIR" || exec 7<&0 </dev/null 6>&1

# Name of the host.
# hostname on some systems (SVR3.2, old GNU/Linux) returns a bogus exit status,
# so uname gets run too.
ac_hostname=`(hostname || uname -n) 2>/dev/null | sed 1q`

#
# Initializations.
#
ac_default_prefix=/usr/local
ac_clean_files=
ac_config_libobj_dir=.
LIBOBJS=
cross_compiling=no
subdirs=
MFLAGS=
MAKEFLAGS=

# Identity of this package.
PACKAGE_NAME='slurm'
PACKAGE_TARNAME='slurm'
PACKAGE_VERSION='15.08'
PACKAGE_STRING='slurm 15.08'
PACKAGE_BUGREPORT='slurm-dev@schedmd.com'
PACKAGE_URL='http://slurm.schedmd.com'

ac_unique_file="configure.ac"
# Factoring default headers for most tests.
ac_includes_default="\
#include <stdio.h>
#ifdef HAVE_SYS_TYPES_H
# include <sys/types.h>
#endif
#ifdef HAVE_SYS_STAT_H
# include <sys/stat.h>
#endif
#ifdef STDC_HEADERS
# include <stdlib.h>
# include <stddef.h>
#else
# ifdef HAVE_STDLIB_H
#  include <stdlib.h>
# endif
#endif
#ifdef HAVE_STRING_H
# if !defined STDC_HEADERS && defined HAVE_MEMORY_H
#  include <memory.h>
# endif
# include <string.h>
#endif
#ifdef HAVE_STRINGS_H
# include <strings.h>
#endif
#ifdef HAVE_INTTYPES_H
# include <inttypes.h>
#endif
#ifdef HAVE_STDINT_H
# include <stdint.h>
#endif
#ifdef HAVE_UNISTD_H
# include <unistd.h>
#endif"

ac_subst_vars='am__EXEEXT_FALSE
am__EXEEXT_TRUE
LTLIBOBJS
BUILD_SMAP_FALSE
BUILD_SMAP_TRUE
WITH_CURL_FALSE
WITH_CURL_TRUE
LIBCURL
LIBCURL_CPPFLAGS
_libcurl_config
WITH_BLCR_FALSE
WITH_BLCR_TRUE
BLCR_LDFLAGS
BLCR_CPPFLAGS
BLCR_LIBS
BLCR_HOME
UTIL_LIBS
WITH_AUTHD_FALSE
WITH_AUTHD_TRUE
AUTHD_CFLAGS
AUTHD_LIBS
WITH_MUNGE_FALSE
WITH_MUNGE_TRUE
MUNGE_DIR
MUNGE_LDFLAGS
MUNGE_CPPFLAGS
MUNGE_LIBS
HAVE_OPENSSL
HAVE_OPENSSL_FALSE
HAVE_OPENSSL_TRUE
SSL_CPPFLAGS
SSL_LIBS
SSL_LDFLAGS
READLINE_LIBS
HAVE_MAN2HTML
HAVE_MAN2HTML_FALSE
HAVE_MAN2HTML_TRUE
ac_have_man2html
HAVE_LUA_FALSE
HAVE_LUA_TRUE
lua_LIBS
lua_CFLAGS
NETLOC_LDFLAGS
NETLOC_CPPFLAGS
NETLOC_LIBS
HAVE_NETLOC_FALSE
HAVE_NETLOC_TRUE
HAVE_SGI_JOB_FALSE
HAVE_SGI_JOB_TRUE
HAVE_NRT
HAVE_NRT_FALSE
HAVE_NRT_TRUE
NRT_CPPFLAGS
SLURM_PREFIX
SLURMCTLD_PORT_COUNT
SLURMDBD_PORT
SLURMD_PORT
SLURMCTLD_PORT
DEBUG_MODULES_FALSE
DEBUG_MODULES_TRUE
DATAWARP_LDFLAGS
DATAWARP_CPPFLAGS
CRAY_TASK_LDFLAGS
CRAY_TASK_CPPFLAGS
CRAY_SWITCH_LDFLAGS
CRAY_SWITCH_CPPFLAGS
CRAY_SELECT_LDFLAGS
CRAY_SELECT_CPPFLAGS
CRAY_JOB_LDFLAGS
CRAY_JOB_CPPFLAGS
HAVE_ALPS_CRAY_EMULATION_FALSE
HAVE_ALPS_CRAY_EMULATION_TRUE HAVE_ALPS_EMULATION_FALSE HAVE_ALPS_EMULATION_TRUE HAVE_CRAY_NETWORK_FALSE HAVE_CRAY_NETWORK_TRUE HAVE_REAL_CRAY_FALSE HAVE_REAL_CRAY_TRUE HAVE_ALPS_CRAY_FALSE HAVE_ALPS_CRAY_TRUE HAVE_NATIVE_CRAY_FALSE HAVE_NATIVE_CRAY_TRUE WITH_MYSQL_FALSE WITH_MYSQL_TRUE MYSQL_CFLAGS MYSQL_LIBS HAVEMYSQLCONFIG BUILD_SVIEW_FALSE BUILD_SVIEW_TRUE GTK_LIBS GTK_CFLAGS GLIB_COMPILE_RESOURCES GLIB_MKENUMS GOBJECT_QUERY GLIB_GENMARSHAL GLIB_LIBS GLIB_CFLAGS HAVE_CHECK_FALSE HAVE_CHECK_TRUE CHECK_LIBS CHECK_CFLAGS HAVE_SOME_CURSES HAVE_SOME_CURSES_FALSE HAVE_SOME_CURSES_TRUE NCURSES BUILD_RRD_FALSE BUILD_RRD_TRUE RRDTOOL_LDFLAGS RRDTOOL_CPPFLAGS RRDTOOL_LIBS SEMAPHORE_LIBS SEMAPHORE_SOURCES BUILD_IPMI_FALSE BUILD_IPMI_TRUE FREEIPMI_LDFLAGS FREEIPMI_CPPFLAGS FREEIPMI_LIBS HWLOC_LDFLAGS HWLOC_CPPFLAGS HWLOC_LIBS BUILD_HDF5_FALSE BUILD_HDF5_TRUE HDF5_FLIBS HDF5_FFLAGS HDF5_FC HDF5_LIBS HDF5_LDFLAGS HDF5_CPPFLAGS HDF5_CFLAGS HDF5_CC HDF5_VERSION H5FC H5CC BUILD_OFED_FALSE BUILD_OFED_TRUE OFED_LDFLAGS OFED_CPPFLAGS OFED_LIBS PTHREAD_CFLAGS PTHREAD_LIBS PTHREAD_CC ax_pthread_config HAVE_UNSETENV_FALSE HAVE_UNSETENV_TRUE LIBOBJS WITH_JSON_PARSER_FALSE WITH_JSON_PARSER_TRUE JSON_LDFLAGS JSON_CPPFLAGS PAM_DIR HAVE_PAM_FALSE HAVE_PAM_TRUE PAM_LIBS HAVE_SCHED_SETAFFINITY_FALSE HAVE_SCHED_SETAFFINITY_TRUE HAVE_NUMA_FALSE HAVE_NUMA_TRUE NUMA_LIBS DL_LIBS SUCMD SLEEP_CMD WITH_GNU_LD_FALSE WITH_GNU_LD_TRUE WITH_CXX_FALSE WITH_CXX_TRUE PKG_CONFIG_LIBDIR PKG_CONFIG_PATH PKG_CONFIG CXXCPP OTOOL64 OTOOL LIPO NMEDIT DSYMUTIL MANIFEST_TOOL RANLIB ac_ct_AR AR DLLTOOL OBJDUMP LN_S NM ac_ct_DUMPBIN DUMPBIN LD FGREP SED LIBTOOL WITH_CYGWIN_FALSE WITH_CYGWIN_TRUE HAVE_AIX_PROCTRACK_FALSE HAVE_AIX_PROCTRACK_TRUE EGREP GREP CPP PROCTRACKDIR HAVE_AIX HAVE_AIX_FALSE HAVE_AIX_TRUE SO_LDFLAGS LIB_LDFLAGS CMD_LDFLAGS BLUEGENE_LOADED BLUEGENE_LOADED_FALSE BLUEGENE_LOADED_TRUE REAL_BGQ_LOADED REAL_BGQ_LOADED_FALSE REAL_BGQ_LOADED_TRUE BGQ_LOADED BGQ_LOADED_FALSE BGQ_LOADED_TRUE 
RUNJOB_LDFLAGS BG_LDFLAGS am__fastdepCXX_FALSE am__fastdepCXX_TRUE CXXDEPMODE ac_ct_CXX CXXFLAGS CXX REAL_BG_L_P_LOADED REAL_BG_L_P_LOADED_FALSE REAL_BG_L_P_LOADED_TRUE BG_L_P_LOADED BG_L_P_LOADED_FALSE BG_L_P_LOADED_TRUE BGL_LOADED BGL_LOADED_FALSE BGL_LOADED_TRUE BG_INCLUDES am__fastdepCC_FALSE am__fastdepCC_TRUE CCDEPMODE am__nodep AMDEPBACKSLASH AMDEP_FALSE AMDEP_TRUE am__quote am__include DEPDIR OBJEXT EXEEXT ac_ct_CC CPPFLAGS LDFLAGS CFLAGS CC MAINT MAINTAINER_MODE_FALSE MAINTAINER_MODE_TRUE AM_BACKSLASH AM_DEFAULT_VERBOSITY AM_DEFAULT_V AM_V am__untar am__tar AMTAR am__leading_dot SET_MAKE AWK mkdir_p MKDIR_P INSTALL_STRIP_PROGRAM STRIP install_sh MAKEINFO AUTOHEADER AUTOMAKE AUTOCONF ACLOCAL PACKAGE CYGPATH_W am__isrc INSTALL_DATA INSTALL_SCRIPT INSTALL_PROGRAM SLURM_VERSION_STRING RELEASE SLURM_MICRO SLURM_MINOR SLURM_MAJOR SLURM_VERSION_NUMBER VERSION SLURM_API_REVISION SLURM_API_AGE SLURM_API_MAJOR SLURM_API_CURRENT SLURM_API_VERSION PROJECT DONT_BUILD_FALSE DONT_BUILD_TRUE target_os target_vendor target_cpu target host_os host_vendor host_cpu host build_os build_vendor build_cpu build target_alias host_alias build_alias LIBS ECHO_T ECHO_N ECHO_C DEFS mandir localedir libdir psdir pdfdir dvidir htmldir infodir docdir oldincludedir includedir localstatedir sharedstatedir sysconfdir datadir datarootdir libexecdir sbindir bindir program_transform_name prefix exec_prefix PACKAGE_URL PACKAGE_BUGREPORT PACKAGE_STRING PACKAGE_VERSION PACKAGE_TARNAME PACKAGE_NAME PATH_SEPARATOR SHELL' ac_subst_files='' ac_user_opts=' enable_option_checking enable_silent_rules enable_maintainer_mode with_rpath with_db2_dir enable_bluegene_emulation enable_bgl_emulation enable_dependency_tracking with_bg_serial enable_bgp_emulation enable_bgq_emulation with_proctrack enable_largefile enable_shared enable_static with_pic enable_fast_install with_gnu_ld with_sysroot enable_libtool_lock with_cpusetdir enable_pam with_pam_dir enable_iso8601 enable_load_env_no_login with_json 
enable_sun_const with_dimensions with_ofed with_hdf5 with_hwloc with_freeipmi with_rrdtool enable_glibtest enable_gtktest with_mysql_config with_alps_emulation enable_cray_emulation enable_native_cray enable_cray_network enable_really_no_cray with_datawarp enable_developer enable_debug enable_memory_leak_debug enable_front_end enable_partial_attach enable_salloc_kill_cmd enable_salloc_background enable_simulator with_slurmctld_port with_slurmd_port with_slurmdbd_port with_slurmctld_port_count with_nrth with_libnrt with_netloc with_readline with_ssl with_munge enable_multiple_slurmd with_blcr with_libcurl ' ac_precious_vars='build_alias host_alias target_alias CC CFLAGS LDFLAGS LIBS CPPFLAGS CXX CXXFLAGS CCC CPP CXXCPP PKG_CONFIG PKG_CONFIG_PATH PKG_CONFIG_LIBDIR CHECK_CFLAGS CHECK_LIBS lua_CFLAGS lua_LIBS' # Initialize some variables set by options. ac_init_help= ac_init_version=false ac_unrecognized_opts= ac_unrecognized_sep= # The variables have the same names as the options, with # dashes changed to underlines. cache_file=/dev/null exec_prefix=NONE no_create= no_recursion= prefix=NONE program_prefix=NONE program_suffix=NONE program_transform_name=s,x,x, silent= site= srcdir= verbose= x_includes=NONE x_libraries=NONE # Installation directory options. # These are left unexpanded so users can "make install exec_prefix=/foo" # and all the variables that are supposed to be based on exec_prefix # by default will actually change. # Use braces instead of parens because sh, perl, etc. also accept them. # (The list follows the same order as the GNU Coding Standards.) 
bindir='${exec_prefix}/bin' sbindir='${exec_prefix}/sbin' libexecdir='${exec_prefix}/libexec' datarootdir='${prefix}/share' datadir='${datarootdir}' sysconfdir='${prefix}/etc' sharedstatedir='${prefix}/com' localstatedir='${prefix}/var' includedir='${prefix}/include' oldincludedir='/usr/include' docdir='${datarootdir}/doc/${PACKAGE_TARNAME}' infodir='${datarootdir}/info' htmldir='${docdir}' dvidir='${docdir}' pdfdir='${docdir}' psdir='${docdir}' libdir='${exec_prefix}/lib' localedir='${datarootdir}/locale' mandir='${datarootdir}/man' ac_prev= ac_dashdash= for ac_option do # If the previous option needs an argument, assign it. if test -n "$ac_prev"; then eval $ac_prev=\$ac_option ac_prev= continue fi case $ac_option in *=?*) ac_optarg=`expr "X$ac_option" : '[^=]*=\(.*\)'` ;; *=) ac_optarg= ;; *) ac_optarg=yes ;; esac # Accept the important Cygnus configure options, so we can diagnose typos. case $ac_dashdash$ac_option in --) ac_dashdash=yes ;; -bindir | --bindir | --bindi | --bind | --bin | --bi) ac_prev=bindir ;; -bindir=* | --bindir=* | --bindi=* | --bind=* | --bin=* | --bi=*) bindir=$ac_optarg ;; -build | --build | --buil | --bui | --bu) ac_prev=build_alias ;; -build=* | --build=* | --buil=* | --bui=* | --bu=*) build_alias=$ac_optarg ;; -cache-file | --cache-file | --cache-fil | --cache-fi \ | --cache-f | --cache- | --cache | --cach | --cac | --ca | --c) ac_prev=cache_file ;; -cache-file=* | --cache-file=* | --cache-fil=* | --cache-fi=* \ | --cache-f=* | --cache-=* | --cache=* | --cach=* | --cac=* | --ca=* | --c=*) cache_file=$ac_optarg ;; --config-cache | -C) cache_file=config.cache ;; -datadir | --datadir | --datadi | --datad) ac_prev=datadir ;; -datadir=* | --datadir=* | --datadi=* | --datad=*) datadir=$ac_optarg ;; -datarootdir | --datarootdir | --datarootdi | --datarootd | --dataroot \ | --dataroo | --dataro | --datar) ac_prev=datarootdir ;; -datarootdir=* | --datarootdir=* | --datarootdi=* | --datarootd=* \ | --dataroot=* | --dataroo=* | --dataro=* | 
--datar=*) datarootdir=$ac_optarg ;; -disable-* | --disable-*) ac_useropt=`expr "x$ac_option" : 'x-*disable-\(.*\)'` # Reject names that are not valid shell variable names. expr "x$ac_useropt" : ".*[^-+._$as_cr_alnum]" >/dev/null && as_fn_error $? "invalid feature name: $ac_useropt" ac_useropt_orig=$ac_useropt ac_useropt=`$as_echo "$ac_useropt" | sed 's/[-+.]/_/g'` case $ac_user_opts in *" "enable_$ac_useropt" "*) ;; *) ac_unrecognized_opts="$ac_unrecognized_opts$ac_unrecognized_sep--disable-$ac_useropt_orig" ac_unrecognized_sep=', ';; esac eval enable_$ac_useropt=no ;; -docdir | --docdir | --docdi | --doc | --do) ac_prev=docdir ;; -docdir=* | --docdir=* | --docdi=* | --doc=* | --do=*) docdir=$ac_optarg ;; -dvidir | --dvidir | --dvidi | --dvid | --dvi | --dv) ac_prev=dvidir ;; -dvidir=* | --dvidir=* | --dvidi=* | --dvid=* | --dvi=* | --dv=*) dvidir=$ac_optarg ;; -enable-* | --enable-*) ac_useropt=`expr "x$ac_option" : 'x-*enable-\([^=]*\)'` # Reject names that are not valid shell variable names. expr "x$ac_useropt" : ".*[^-+._$as_cr_alnum]" >/dev/null && as_fn_error $? "invalid feature name: $ac_useropt" ac_useropt_orig=$ac_useropt ac_useropt=`$as_echo "$ac_useropt" | sed 's/[-+.]/_/g'` case $ac_user_opts in *" "enable_$ac_useropt" "*) ;; *) ac_unrecognized_opts="$ac_unrecognized_opts$ac_unrecognized_sep--enable-$ac_useropt_orig" ac_unrecognized_sep=', ';; esac eval enable_$ac_useropt=\$ac_optarg ;; -exec-prefix | --exec_prefix | --exec-prefix | --exec-prefi \ | --exec-pref | --exec-pre | --exec-pr | --exec-p | --exec- \ | --exec | --exe | --ex) ac_prev=exec_prefix ;; -exec-prefix=* | --exec_prefix=* | --exec-prefix=* | --exec-prefi=* \ | --exec-pref=* | --exec-pre=* | --exec-pr=* | --exec-p=* | --exec-=* \ | --exec=* | --exe=* | --ex=*) exec_prefix=$ac_optarg ;; -gas | --gas | --ga | --g) # Obsolete; use --with-gas. 
with_gas=yes ;; -help | --help | --hel | --he | -h) ac_init_help=long ;; -help=r* | --help=r* | --hel=r* | --he=r* | -hr*) ac_init_help=recursive ;; -help=s* | --help=s* | --hel=s* | --he=s* | -hs*) ac_init_help=short ;; -host | --host | --hos | --ho) ac_prev=host_alias ;; -host=* | --host=* | --hos=* | --ho=*) host_alias=$ac_optarg ;; -htmldir | --htmldir | --htmldi | --htmld | --html | --htm | --ht) ac_prev=htmldir ;; -htmldir=* | --htmldir=* | --htmldi=* | --htmld=* | --html=* | --htm=* \ | --ht=*) htmldir=$ac_optarg ;; -includedir | --includedir | --includedi | --included | --include \ | --includ | --inclu | --incl | --inc) ac_prev=includedir ;; -includedir=* | --includedir=* | --includedi=* | --included=* | --include=* \ | --includ=* | --inclu=* | --incl=* | --inc=*) includedir=$ac_optarg ;; -infodir | --infodir | --infodi | --infod | --info | --inf) ac_prev=infodir ;; -infodir=* | --infodir=* | --infodi=* | --infod=* | --info=* | --inf=*) infodir=$ac_optarg ;; -libdir | --libdir | --libdi | --libd) ac_prev=libdir ;; -libdir=* | --libdir=* | --libdi=* | --libd=*) libdir=$ac_optarg ;; -libexecdir | --libexecdir | --libexecdi | --libexecd | --libexec \ | --libexe | --libex | --libe) ac_prev=libexecdir ;; -libexecdir=* | --libexecdir=* | --libexecdi=* | --libexecd=* | --libexec=* \ | --libexe=* | --libex=* | --libe=*) libexecdir=$ac_optarg ;; -localedir | --localedir | --localedi | --localed | --locale) ac_prev=localedir ;; -localedir=* | --localedir=* | --localedi=* | --localed=* | --locale=*) localedir=$ac_optarg ;; -localstatedir | --localstatedir | --localstatedi | --localstated \ | --localstate | --localstat | --localsta | --localst | --locals) ac_prev=localstatedir ;; -localstatedir=* | --localstatedir=* | --localstatedi=* | --localstated=* \ | --localstate=* | --localstat=* | --localsta=* | --localst=* | --locals=*) localstatedir=$ac_optarg ;; -mandir | --mandir | --mandi | --mand | --man | --ma | --m) ac_prev=mandir ;; -mandir=* | --mandir=* | --mandi=* | 
--mand=* | --man=* | --ma=* | --m=*) mandir=$ac_optarg ;; -nfp | --nfp | --nf) # Obsolete; use --without-fp. with_fp=no ;; -no-create | --no-create | --no-creat | --no-crea | --no-cre \ | --no-cr | --no-c | -n) no_create=yes ;; -no-recursion | --no-recursion | --no-recursio | --no-recursi \ | --no-recurs | --no-recur | --no-recu | --no-rec | --no-re | --no-r) no_recursion=yes ;; -oldincludedir | --oldincludedir | --oldincludedi | --oldincluded \ | --oldinclude | --oldinclud | --oldinclu | --oldincl | --oldinc \ | --oldin | --oldi | --old | --ol | --o) ac_prev=oldincludedir ;; -oldincludedir=* | --oldincludedir=* | --oldincludedi=* | --oldincluded=* \ | --oldinclude=* | --oldinclud=* | --oldinclu=* | --oldincl=* | --oldinc=* \ | --oldin=* | --oldi=* | --old=* | --ol=* | --o=*) oldincludedir=$ac_optarg ;; -prefix | --prefix | --prefi | --pref | --pre | --pr | --p) ac_prev=prefix ;; -prefix=* | --prefix=* | --prefi=* | --pref=* | --pre=* | --pr=* | --p=*) prefix=$ac_optarg ;; -program-prefix | --program-prefix | --program-prefi | --program-pref \ | --program-pre | --program-pr | --program-p) ac_prev=program_prefix ;; -program-prefix=* | --program-prefix=* | --program-prefi=* \ | --program-pref=* | --program-pre=* | --program-pr=* | --program-p=*) program_prefix=$ac_optarg ;; -program-suffix | --program-suffix | --program-suffi | --program-suff \ | --program-suf | --program-su | --program-s) ac_prev=program_suffix ;; -program-suffix=* | --program-suffix=* | --program-suffi=* \ | --program-suff=* | --program-suf=* | --program-su=* | --program-s=*) program_suffix=$ac_optarg ;; -program-transform-name | --program-transform-name \ | --program-transform-nam | --program-transform-na \ | --program-transform-n | --program-transform- \ | --program-transform | --program-transfor \ | --program-transfo | --program-transf \ | --program-trans | --program-tran \ | --progr-tra | --program-tr | --program-t) ac_prev=program_transform_name ;; -program-transform-name=* | 
--program-transform-name=* \ | --program-transform-nam=* | --program-transform-na=* \ | --program-transform-n=* | --program-transform-=* \ | --program-transform=* | --program-transfor=* \ | --program-transfo=* | --program-transf=* \ | --program-trans=* | --program-tran=* \ | --progr-tra=* | --program-tr=* | --program-t=*) program_transform_name=$ac_optarg ;; -pdfdir | --pdfdir | --pdfdi | --pdfd | --pdf | --pd) ac_prev=pdfdir ;; -pdfdir=* | --pdfdir=* | --pdfdi=* | --pdfd=* | --pdf=* | --pd=*) pdfdir=$ac_optarg ;; -psdir | --psdir | --psdi | --psd | --ps) ac_prev=psdir ;; -psdir=* | --psdir=* | --psdi=* | --psd=* | --ps=*) psdir=$ac_optarg ;; -q | -quiet | --quiet | --quie | --qui | --qu | --q \ | -silent | --silent | --silen | --sile | --sil) silent=yes ;; -sbindir | --sbindir | --sbindi | --sbind | --sbin | --sbi | --sb) ac_prev=sbindir ;; -sbindir=* | --sbindir=* | --sbindi=* | --sbind=* | --sbin=* \ | --sbi=* | --sb=*) sbindir=$ac_optarg ;; -sharedstatedir | --sharedstatedir | --sharedstatedi \ | --sharedstated | --sharedstate | --sharedstat | --sharedsta \ | --sharedst | --shareds | --shared | --share | --shar \ | --sha | --sh) ac_prev=sharedstatedir ;; -sharedstatedir=* | --sharedstatedir=* | --sharedstatedi=* \ | --sharedstated=* | --sharedstate=* | --sharedstat=* | --sharedsta=* \ | --sharedst=* | --shareds=* | --shared=* | --share=* | --shar=* \ | --sha=* | --sh=*) sharedstatedir=$ac_optarg ;; -site | --site | --sit) ac_prev=site ;; -site=* | --site=* | --sit=*) site=$ac_optarg ;; -srcdir | --srcdir | --srcdi | --srcd | --src | --sr) ac_prev=srcdir ;; -srcdir=* | --srcdir=* | --srcdi=* | --srcd=* | --src=* | --sr=*) srcdir=$ac_optarg ;; -sysconfdir | --sysconfdir | --sysconfdi | --sysconfd | --sysconf \ | --syscon | --sysco | --sysc | --sys | --sy) ac_prev=sysconfdir ;; -sysconfdir=* | --sysconfdir=* | --sysconfdi=* | --sysconfd=* | --sysconf=* \ | --syscon=* | --sysco=* | --sysc=* | --sys=* | --sy=*) sysconfdir=$ac_optarg ;; -target | --target | --targe | 
--targ | --tar | --ta | --t) ac_prev=target_alias ;; -target=* | --target=* | --targe=* | --targ=* | --tar=* | --ta=* | --t=*) target_alias=$ac_optarg ;; -v | -verbose | --verbose | --verbos | --verbo | --verb) verbose=yes ;; -version | --version | --versio | --versi | --vers | -V) ac_init_version=: ;; -with-* | --with-*) ac_useropt=`expr "x$ac_option" : 'x-*with-\([^=]*\)'` # Reject names that are not valid shell variable names. expr "x$ac_useropt" : ".*[^-+._$as_cr_alnum]" >/dev/null && as_fn_error $? "invalid package name: $ac_useropt" ac_useropt_orig=$ac_useropt ac_useropt=`$as_echo "$ac_useropt" | sed 's/[-+.]/_/g'` case $ac_user_opts in *" "with_$ac_useropt" "*) ;; *) ac_unrecognized_opts="$ac_unrecognized_opts$ac_unrecognized_sep--with-$ac_useropt_orig" ac_unrecognized_sep=', ';; esac eval with_$ac_useropt=\$ac_optarg ;; -without-* | --without-*) ac_useropt=`expr "x$ac_option" : 'x-*without-\(.*\)'` # Reject names that are not valid shell variable names. expr "x$ac_useropt" : ".*[^-+._$as_cr_alnum]" >/dev/null && as_fn_error $? "invalid package name: $ac_useropt" ac_useropt_orig=$ac_useropt ac_useropt=`$as_echo "$ac_useropt" | sed 's/[-+.]/_/g'` case $ac_user_opts in *" "with_$ac_useropt" "*) ;; *) ac_unrecognized_opts="$ac_unrecognized_opts$ac_unrecognized_sep--without-$ac_useropt_orig" ac_unrecognized_sep=', ';; esac eval with_$ac_useropt=no ;; --x) # Obsolete; use --with-x. 
with_x=yes ;; -x-includes | --x-includes | --x-include | --x-includ | --x-inclu \ | --x-incl | --x-inc | --x-in | --x-i) ac_prev=x_includes ;; -x-includes=* | --x-includes=* | --x-include=* | --x-includ=* | --x-inclu=* \ | --x-incl=* | --x-inc=* | --x-in=* | --x-i=*) x_includes=$ac_optarg ;; -x-libraries | --x-libraries | --x-librarie | --x-librari \ | --x-librar | --x-libra | --x-libr | --x-lib | --x-li | --x-l) ac_prev=x_libraries ;; -x-libraries=* | --x-libraries=* | --x-librarie=* | --x-librari=* \ | --x-librar=* | --x-libra=* | --x-libr=* | --x-lib=* | --x-li=* | --x-l=*) x_libraries=$ac_optarg ;; -*) as_fn_error $? "unrecognized option: \`$ac_option' Try \`$0 --help' for more information" ;; *=*) ac_envvar=`expr "x$ac_option" : 'x\([^=]*\)='` # Reject names that are not valid shell variable names. case $ac_envvar in #( '' | [0-9]* | *[!_$as_cr_alnum]* ) as_fn_error $? "invalid variable name: \`$ac_envvar'" ;; esac eval $ac_envvar=\$ac_optarg export $ac_envvar ;; *) # FIXME: should be removed in autoconf 3.0. $as_echo "$as_me: WARNING: you should use --build, --host, --target" >&2 expr "x$ac_option" : ".*[^-._$as_cr_alnum]" >/dev/null && $as_echo "$as_me: WARNING: invalid host type: $ac_option" >&2 : "${build_alias=$ac_option} ${host_alias=$ac_option} ${target_alias=$ac_option}" ;; esac done if test -n "$ac_prev"; then ac_option=--`echo $ac_prev | sed 's/_/-/g'` as_fn_error $? "missing argument to $ac_option" fi if test -n "$ac_unrecognized_opts"; then case $enable_option_checking in no) ;; fatal) as_fn_error $? "unrecognized options: $ac_unrecognized_opts" ;; *) $as_echo "$as_me: WARNING: unrecognized options: $ac_unrecognized_opts" >&2 ;; esac fi # Check all directory arguments for consistency. for ac_var in exec_prefix prefix bindir sbindir libexecdir datarootdir \ datadir sysconfdir sharedstatedir localstatedir includedir \ oldincludedir docdir infodir htmldir dvidir pdfdir psdir \ libdir localedir mandir do eval ac_val=\$$ac_var # Remove trailing slashes. 
case $ac_val in */ ) ac_val=`expr "X$ac_val" : 'X\(.*[^/]\)' \| "X$ac_val" : 'X\(.*\)'` eval $ac_var=\$ac_val;; esac # Be sure to have absolute directory names. case $ac_val in [\\/$]* | ?:[\\/]* ) continue;; NONE | '' ) case $ac_var in *prefix ) continue;; esac;; esac as_fn_error $? "expected an absolute directory name for --$ac_var: $ac_val" done # There might be people who depend on the old broken behavior: `$host' # used to hold the argument of --host etc. # FIXME: To remove some day. build=$build_alias host=$host_alias target=$target_alias # FIXME: To remove some day. if test "x$host_alias" != x; then if test "x$build_alias" = x; then cross_compiling=maybe elif test "x$build_alias" != "x$host_alias"; then cross_compiling=yes fi fi ac_tool_prefix= test -n "$host_alias" && ac_tool_prefix=$host_alias- test "$silent" = yes && exec 6>/dev/null ac_pwd=`pwd` && test -n "$ac_pwd" && ac_ls_di=`ls -di .` && ac_pwd_ls_di=`cd "$ac_pwd" && ls -di .` || as_fn_error $? "working directory cannot be determined" test "X$ac_ls_di" = "X$ac_pwd_ls_di" || as_fn_error $? "pwd does not report name of working directory" # Find the source files, if location was not specified. if test -z "$srcdir"; then ac_srcdir_defaulted=yes # Try the directory containing this script, then the parent directory. ac_confdir=`$as_dirname -- "$as_myself" || $as_expr X"$as_myself" : 'X\(.*[^/]\)//*[^/][^/]*/*$' \| \ X"$as_myself" : 'X\(//\)[^/]' \| \ X"$as_myself" : 'X\(//\)$' \| \ X"$as_myself" : 'X\(/\)' \| . 2>/dev/null || $as_echo X"$as_myself" | sed '/^X\(.*[^/]\)\/\/*[^/][^/]*\/*$/{ s//\1/ q } /^X\(\/\/\)[^/].*/{ s//\1/ q } /^X\(\/\/\)$/{ s//\1/ q } /^X\(\/\).*/{ s//\1/ q } s/.*/./; q'` srcdir=$ac_confdir if test ! -r "$srcdir/$ac_unique_file"; then srcdir=.. fi else ac_srcdir_defaulted=no fi if test ! -r "$srcdir/$ac_unique_file"; then test "$ac_srcdir_defaulted" = yes && srcdir="$ac_confdir or .." as_fn_error $? 
"cannot find sources ($ac_unique_file) in $srcdir" fi ac_msg="sources are in $srcdir, but \`cd $srcdir' does not work" ac_abs_confdir=`( cd "$srcdir" && test -r "./$ac_unique_file" || as_fn_error $? "$ac_msg" pwd)` # When building in place, set srcdir=. if test "$ac_abs_confdir" = "$ac_pwd"; then srcdir=. fi # Remove unnecessary trailing slashes from srcdir. # Double slashes in file names in object file debugging info # mess up M-x gdb in Emacs. case $srcdir in */) srcdir=`expr "X$srcdir" : 'X\(.*[^/]\)' \| "X$srcdir" : 'X\(.*\)'`;; esac for ac_var in $ac_precious_vars; do eval ac_env_${ac_var}_set=\${${ac_var}+set} eval ac_env_${ac_var}_value=\$${ac_var} eval ac_cv_env_${ac_var}_set=\${${ac_var}+set} eval ac_cv_env_${ac_var}_value=\$${ac_var} done # # Report the --help message. # if test "$ac_init_help" = "long"; then # Omit some internal or obsolete options to make the list less imposing. # This message is too long to be a string in the A/UX 3.1 sh. cat <<_ACEOF \`configure' configures slurm 15.08 to adapt to many kinds of systems. Usage: $0 [OPTION]... [VAR=VALUE]... To assign environment variables (e.g., CC, CFLAGS...), specify them as VAR=VALUE. See below for descriptions of some of the useful variables. Defaults for the options are specified in brackets. Configuration: -h, --help display this help and exit --help=short display options specific to this package --help=recursive display the short help of all the included packages -V, --version display version information and exit -q, --quiet, --silent do not print \`checking ...' 
messages --cache-file=FILE cache test results in FILE [disabled] -C, --config-cache alias for \`--cache-file=config.cache' -n, --no-create do not create output files --srcdir=DIR find the sources in DIR [configure dir or \`..'] Installation directories: --prefix=PREFIX install architecture-independent files in PREFIX [$ac_default_prefix] --exec-prefix=EPREFIX install architecture-dependent files in EPREFIX [PREFIX] By default, \`make install' will install all the files in \`$ac_default_prefix/bin', \`$ac_default_prefix/lib' etc. You can specify an installation prefix other than \`$ac_default_prefix' using \`--prefix', for instance \`--prefix=\$HOME'. For better control, use the options below. Fine tuning of the installation directories: --bindir=DIR user executables [EPREFIX/bin] --sbindir=DIR system admin executables [EPREFIX/sbin] --libexecdir=DIR program executables [EPREFIX/libexec] --sysconfdir=DIR read-only single-machine data [PREFIX/etc] --sharedstatedir=DIR modifiable architecture-independent data [PREFIX/com] --localstatedir=DIR modifiable single-machine data [PREFIX/var] --libdir=DIR object code libraries [EPREFIX/lib] --includedir=DIR C header files [PREFIX/include] --oldincludedir=DIR C header files for non-gcc [/usr/include] --datarootdir=DIR read-only arch.-independent data root [PREFIX/share] --datadir=DIR read-only architecture-independent data [DATAROOTDIR] --infodir=DIR info documentation [DATAROOTDIR/info] --localedir=DIR locale-dependent data [DATAROOTDIR/locale] --mandir=DIR man documentation [DATAROOTDIR/man] --docdir=DIR documentation root [DATAROOTDIR/doc/slurm] --htmldir=DIR html documentation [DOCDIR] --dvidir=DIR dvi documentation [DOCDIR] --pdfdir=DIR pdf documentation [DOCDIR] --psdir=DIR ps documentation [DOCDIR] _ACEOF cat <<\_ACEOF Program names: --program-prefix=PREFIX prepend PREFIX to installed program names --program-suffix=SUFFIX append SUFFIX to installed program names --program-transform-name=PROGRAM run sed PROGRAM on 
installed program names System types: --build=BUILD configure for building on BUILD [guessed] --host=HOST cross-compile to build programs to run on HOST [BUILD] --target=TARGET configure for building compilers for TARGET [HOST] _ACEOF fi if test -n "$ac_init_help"; then case $ac_init_help in short | recursive ) echo "Configuration of slurm 15.08:";; esac cat <<\_ACEOF Optional Features: --disable-option-checking ignore unrecognized --enable/--with options --disable-FEATURE do not include FEATURE (same as --enable-FEATURE=no) --enable-FEATURE[=ARG] include FEATURE [ARG=yes] --enable-silent-rules less verbose build output (undo: "make V=1") --disable-silent-rules verbose build output (undo: "make V=0") --enable-maintainer-mode enable make rules and dependencies not useful (and sometimes confusing) to the casual installer --enable-bluegene-emulation deprecated use --enable-bgl-emulation --enable-bgl-emulation Run SLURM in BGL mode on a non-bluegene system --enable-dependency-tracking do not reject slow dependency extractors --disable-dependency-tracking speeds up one-time build --enable-bgp-emulation Run SLURM in BG/P mode on a non-bluegene system --enable-bgq-emulation Run SLURM in BG/Q mode on a non-bluegene system --disable-largefile omit support for large files --enable-shared[=PKGS] build shared libraries [default=yes] --enable-static[=PKGS] build static libraries [default=yes] --enable-fast-install[=PKGS] optimize for fast installation [default=yes] --disable-libtool-lock avoid locking (might break parallel builds) --enable-pam enable PAM (Pluggable Authentication Modules) support --disable-iso8601 disable ISO 8601 time format support --enable-load-env-no-login enable --get-user-env option to load user environment without .login --enable-sun-const enable Sun Constellation system support --disable-glibtest do not try to compile and run a test GLIB program --disable-gtktest do not try to compile and run a test GTK+ program --enable-alps-cray-emulation Run SLURM in 
an emulated Cray mode --enable-native-cray Run SLURM natively on a Cray without ALPS --enable-cray-network Run SLURM on a non-Cray system with a Cray network --enable-really-no-cray Disable cray support for eslogin machines --enable-developer enable developer options (asserts, -Werror - also sets --enable-debug as well) --disable-debug disable debugging symbols and compile with optimizations --enable-memory-leak-debug enable memory leak debugging code for development --enable-front-end enable slurmd operation on a front-end --disable-partial-attach disable debugger partial task attach support --enable-salloc-kill-cmd salloc should kill child processes at job termination --disable-salloc-background disable salloc execution in the background --enable-simulator enable slurm simulator --enable-multiple-slurmd enable multiple-slurmd support Optional Packages: --with-PACKAGE[=ARG] use PACKAGE [ARG=yes] --without-PACKAGE do not use PACKAGE (same as --with-PACKAGE=no) --without-rpath Do not include rpath in build --with-db2-dir=PATH Specify path to parent directory of DB2 library --with-bg-serial=NAME set BG_SERIAL value --with-proctrack=PATH Specify path to proctrack sources --with-pic[=PKGS] try to use only PIC/non-PIC objects [default=use both] --with-gnu-ld assume the C compiler uses GNU ld [default=no] --with-sysroot=DIR Search for dependent libraries within DIR (or the compiler's sysroot if not specified). 
--with-cpusetdir=PATH specify path to cpuset directory default is /dev/cpuset --with-pam_dir=PATH Specify path to PAM module installation --with-json=PATH Specify path to json-c installation --with-dimensions=N set system dimension count for generic computer system --with-ofed=PATH Specify path to ofed installation --with-hdf5=yes/no/PATH location of h5cc or h5pcc for HDF5 configuration --with-hwloc=PATH Specify path to hwloc installation --with-freeipmi=PATH Specify path to freeipmi installation --with-rrdtool=PATH Specify path to rrdtool-devel installation --with-mysql_config=PATH Specify path to mysql_config binary --with-alps-emulation Run SLURM against an emulated ALPS system - requires option cray.conf [default=no] --with-datawarp=PATH Specify path to DataWarp installation --with-slurmctld-port=N set slurmctld default port [6817] --with-slurmd-port=N set slurmd default port [6818] --with-slurmdbd-port=N set slurmdbd default port [6819] --with-slurmctld-port-count=N set slurmctld default port count [1] --with-nrth=PATH Parent directory of nrt.h and permapi.h --with-libnrt=PATH Parent directory of libnrt.so --with-netloc=PATH Specify path to netloc installation --without-readline compile without readline support --with-ssl=PATH Specify path to OpenSSL installation --with-munge=PATH Specify path to munge installation --with-blcr=PATH Specify path to BLCR installation --with-libcurl=PREFIX look for the curl library in PREFIX/lib and headers in PREFIX/include Some influential environment variables: CC C compiler command CFLAGS C compiler flags LDFLAGS linker flags, e.g. -L if you have libraries in a nonstandard directory LIBS libraries to pass to the linker, e.g. -l CPPFLAGS (Objective) C/C++ preprocessor flags, e.g. 
-I if you have headers in a nonstandard directory CXX C++ compiler command CXXFLAGS C++ compiler flags CPP C preprocessor CXXCPP C++ preprocessor PKG_CONFIG path to pkg-config utility PKG_CONFIG_PATH directories to add to pkg-config's search path PKG_CONFIG_LIBDIR path overriding pkg-config's built-in search path CHECK_CFLAGS C compiler flags for CHECK, overriding pkg-config CHECK_LIBS linker flags for CHECK, overriding pkg-config lua_CFLAGS C compiler flags for lua, overriding pkg-config lua_LIBS linker flags for lua, overriding pkg-config Use these variables to override the choices made by `configure' or to help it to find libraries and programs with nonstandard names/locations. Report bugs to . slurm home page: . _ACEOF ac_status=$? fi if test "$ac_init_help" = "recursive"; then # If there are subdirs, report their specific --help. for ac_dir in : $ac_subdirs_all; do test "x$ac_dir" = x: && continue test -d "$ac_dir" || { cd "$srcdir" && ac_pwd=`pwd` && srcdir=. && test -d "$ac_dir"; } || continue ac_builddir=. case "$ac_dir" in .) ac_dir_suffix= ac_top_builddir_sub=. ac_top_build_prefix= ;; *) ac_dir_suffix=/`$as_echo "$ac_dir" | sed 's|^\.[\\/]||'` # A ".." for each directory in $ac_dir_suffix. ac_top_builddir_sub=`$as_echo "$ac_dir_suffix" | sed 's|/[^\\/]*|/..|g;s|/||'` case $ac_top_builddir_sub in "") ac_top_builddir_sub=. ac_top_build_prefix= ;; *) ac_top_build_prefix=$ac_top_builddir_sub/ ;; esac ;; esac ac_abs_top_builddir=$ac_pwd ac_abs_builddir=$ac_pwd$ac_dir_suffix # for backward compatibility: ac_top_builddir=$ac_top_build_prefix case $srcdir in .) # We are building in place. ac_srcdir=. ac_top_srcdir=$ac_top_builddir_sub ac_abs_top_srcdir=$ac_pwd ;; [\\/]* | ?:[\\/]* ) # Absolute name. ac_srcdir=$srcdir$ac_dir_suffix; ac_top_srcdir=$srcdir ac_abs_top_srcdir=$srcdir ;; *) # Relative name. 
ac_srcdir=$ac_top_build_prefix$srcdir$ac_dir_suffix ac_top_srcdir=$ac_top_build_prefix$srcdir ac_abs_top_srcdir=$ac_pwd/$srcdir ;; esac ac_abs_srcdir=$ac_abs_top_srcdir$ac_dir_suffix cd "$ac_dir" || { ac_status=$?; continue; } # Check for guested configure. if test -f "$ac_srcdir/configure.gnu"; then echo && $SHELL "$ac_srcdir/configure.gnu" --help=recursive elif test -f "$ac_srcdir/configure"; then echo && $SHELL "$ac_srcdir/configure" --help=recursive else $as_echo "$as_me: WARNING: no configuration information is in $ac_dir" >&2 fi || ac_status=$? cd "$ac_pwd" || { ac_status=$?; break; } done fi test -n "$ac_init_help" && exit $ac_status if $ac_init_version; then cat <<\_ACEOF slurm configure 15.08 generated by GNU Autoconf 2.69 Copyright (C) 2012 Free Software Foundation, Inc. This configure script is free software; the Free Software Foundation gives unlimited permission to copy, distribute and modify it. _ACEOF exit fi ## ------------------------ ## ## Autoconf initialization. ## ## ------------------------ ## # ac_fn_c_try_compile LINENO # -------------------------- # Try to compile conftest.$ac_ext, and return whether this succeeded. ac_fn_c_try_compile () { as_lineno=${as_lineno-"$1"} as_lineno_stack=as_lineno_stack=$as_lineno_stack rm -f conftest.$ac_objext if { { ac_try="$ac_compile" case "(($ac_try" in *\"* | *\`* | *\\*) ac_try_echo=\$ac_try;; *) ac_try_echo=$ac_try;; esac eval ac_try_echo="\"\$as_me:${as_lineno-$LINENO}: $ac_try_echo\"" $as_echo "$ac_try_echo"; } >&5 (eval "$ac_compile") 2>conftest.err ac_status=$? if test -s conftest.err; then grep -v '^ *+' conftest.err >conftest.er1 cat conftest.er1 >&5 mv -f conftest.er1 conftest.err fi $as_echo "$as_me:${as_lineno-$LINENO}: \$? = $ac_status" >&5 test $ac_status = 0; } && { test -z "$ac_c_werror_flag" || test ! 
-s conftest.err } && test -s conftest.$ac_objext; then : ac_retval=0 else $as_echo "$as_me: failed program was:" >&5 sed 's/^/| /' conftest.$ac_ext >&5 ac_retval=1 fi eval $as_lineno_stack; ${as_lineno_stack:+:} unset as_lineno as_fn_set_status $ac_retval } # ac_fn_c_try_compile # ac_fn_c_try_link LINENO # ----------------------- # Try to link conftest.$ac_ext, and return whether this succeeded. ac_fn_c_try_link () { as_lineno=${as_lineno-"$1"} as_lineno_stack=as_lineno_stack=$as_lineno_stack rm -f conftest.$ac_objext conftest$ac_exeext if { { ac_try="$ac_link" case "(($ac_try" in *\"* | *\`* | *\\*) ac_try_echo=\$ac_try;; *) ac_try_echo=$ac_try;; esac eval ac_try_echo="\"\$as_me:${as_lineno-$LINENO}: $ac_try_echo\"" $as_echo "$ac_try_echo"; } >&5 (eval "$ac_link") 2>conftest.err ac_status=$? if test -s conftest.err; then grep -v '^ *+' conftest.err >conftest.er1 cat conftest.er1 >&5 mv -f conftest.er1 conftest.err fi $as_echo "$as_me:${as_lineno-$LINENO}: \$? = $ac_status" >&5 test $ac_status = 0; } && { test -z "$ac_c_werror_flag" || test ! -s conftest.err } && test -s conftest$ac_exeext && { test "$cross_compiling" = yes || test -x conftest$ac_exeext }; then : ac_retval=0 else $as_echo "$as_me: failed program was:" >&5 sed 's/^/| /' conftest.$ac_ext >&5 ac_retval=1 fi # Delete the IPA/IPO (Inter Procedural Analysis/Optimization) information # created by the PGI compiler (conftest_ipa8_conftest.oo), as it would # interfere with the next link command; also delete a directory that is # left behind by Apple's compiler. We do this before executing the actions. rm -rf conftest.dSYM conftest_ipa8_conftest.oo eval $as_lineno_stack; ${as_lineno_stack:+:} unset as_lineno as_fn_set_status $ac_retval } # ac_fn_c_try_link # ac_fn_cxx_try_compile LINENO # ---------------------------- # Try to compile conftest.$ac_ext, and return whether this succeeded. 
ac_fn_cxx_try_compile () { as_lineno=${as_lineno-"$1"} as_lineno_stack=as_lineno_stack=$as_lineno_stack rm -f conftest.$ac_objext if { { ac_try="$ac_compile" case "(($ac_try" in *\"* | *\`* | *\\*) ac_try_echo=\$ac_try;; *) ac_try_echo=$ac_try;; esac eval ac_try_echo="\"\$as_me:${as_lineno-$LINENO}: $ac_try_echo\"" $as_echo "$ac_try_echo"; } >&5 (eval "$ac_compile") 2>conftest.err ac_status=$? if test -s conftest.err; then grep -v '^ *+' conftest.err >conftest.er1 cat conftest.er1 >&5 mv -f conftest.er1 conftest.err fi $as_echo "$as_me:${as_lineno-$LINENO}: \$? = $ac_status" >&5 test $ac_status = 0; } && { test -z "$ac_cxx_werror_flag" || test ! -s conftest.err } && test -s conftest.$ac_objext; then : ac_retval=0 else $as_echo "$as_me: failed program was:" >&5 sed 's/^/| /' conftest.$ac_ext >&5 ac_retval=1 fi eval $as_lineno_stack; ${as_lineno_stack:+:} unset as_lineno as_fn_set_status $ac_retval } # ac_fn_cxx_try_compile # ac_fn_cxx_try_link LINENO # ------------------------- # Try to link conftest.$ac_ext, and return whether this succeeded. ac_fn_cxx_try_link () { as_lineno=${as_lineno-"$1"} as_lineno_stack=as_lineno_stack=$as_lineno_stack rm -f conftest.$ac_objext conftest$ac_exeext if { { ac_try="$ac_link" case "(($ac_try" in *\"* | *\`* | *\\*) ac_try_echo=\$ac_try;; *) ac_try_echo=$ac_try;; esac eval ac_try_echo="\"\$as_me:${as_lineno-$LINENO}: $ac_try_echo\"" $as_echo "$ac_try_echo"; } >&5 (eval "$ac_link") 2>conftest.err ac_status=$? if test -s conftest.err; then grep -v '^ *+' conftest.err >conftest.er1 cat conftest.er1 >&5 mv -f conftest.er1 conftest.err fi $as_echo "$as_me:${as_lineno-$LINENO}: \$? = $ac_status" >&5 test $ac_status = 0; } && { test -z "$ac_cxx_werror_flag" || test ! 
-s conftest.err } && test -s conftest$ac_exeext && { test "$cross_compiling" = yes || test -x conftest$ac_exeext }; then : ac_retval=0 else $as_echo "$as_me: failed program was:" >&5 sed 's/^/| /' conftest.$ac_ext >&5 ac_retval=1 fi # Delete the IPA/IPO (Inter Procedural Analysis/Optimization) information # created by the PGI compiler (conftest_ipa8_conftest.oo), as it would # interfere with the next link command; also delete a directory that is # left behind by Apple's compiler. We do this before executing the actions. rm -rf conftest.dSYM conftest_ipa8_conftest.oo eval $as_lineno_stack; ${as_lineno_stack:+:} unset as_lineno as_fn_set_status $ac_retval } # ac_fn_cxx_try_link # ac_fn_c_try_cpp LINENO # ---------------------- # Try to preprocess conftest.$ac_ext, and return whether this succeeded. ac_fn_c_try_cpp () { as_lineno=${as_lineno-"$1"} as_lineno_stack=as_lineno_stack=$as_lineno_stack if { { ac_try="$ac_cpp conftest.$ac_ext" case "(($ac_try" in *\"* | *\`* | *\\*) ac_try_echo=\$ac_try;; *) ac_try_echo=$ac_try;; esac eval ac_try_echo="\"\$as_me:${as_lineno-$LINENO}: $ac_try_echo\"" $as_echo "$ac_try_echo"; } >&5 (eval "$ac_cpp conftest.$ac_ext") 2>conftest.err ac_status=$? if test -s conftest.err; then grep -v '^ *+' conftest.err >conftest.er1 cat conftest.er1 >&5 mv -f conftest.er1 conftest.err fi $as_echo "$as_me:${as_lineno-$LINENO}: \$? = $ac_status" >&5 test $ac_status = 0; } > conftest.i && { test -z "$ac_c_preproc_warn_flag$ac_c_werror_flag" || test ! 
-s conftest.err }; then : ac_retval=0 else $as_echo "$as_me: failed program was:" >&5 sed 's/^/| /' conftest.$ac_ext >&5 ac_retval=1 fi eval $as_lineno_stack; ${as_lineno_stack:+:} unset as_lineno as_fn_set_status $ac_retval } # ac_fn_c_try_cpp # ac_fn_c_check_header_mongrel LINENO HEADER VAR INCLUDES # ------------------------------------------------------- # Tests whether HEADER exists, giving a warning if it cannot be compiled using # the include files in INCLUDES and setting the cache variable VAR # accordingly. ac_fn_c_check_header_mongrel () { as_lineno=${as_lineno-"$1"} as_lineno_stack=as_lineno_stack=$as_lineno_stack if eval \${$3+:} false; then : { $as_echo "$as_me:${as_lineno-$LINENO}: checking for $2" >&5 $as_echo_n "checking for $2... " >&6; } if eval \${$3+:} false; then : $as_echo_n "(cached) " >&6 fi eval ac_res=\$$3 { $as_echo "$as_me:${as_lineno-$LINENO}: result: $ac_res" >&5 $as_echo "$ac_res" >&6; } else # Is the header compilable? { $as_echo "$as_me:${as_lineno-$LINENO}: checking $2 usability" >&5 $as_echo_n "checking $2 usability... " >&6; } cat confdefs.h - <<_ACEOF >conftest.$ac_ext /* end confdefs.h. */ $4 #include <$2> _ACEOF if ac_fn_c_try_compile "$LINENO"; then : ac_header_compiler=yes else ac_header_compiler=no fi rm -f core conftest.err conftest.$ac_objext conftest.$ac_ext { $as_echo "$as_me:${as_lineno-$LINENO}: result: $ac_header_compiler" >&5 $as_echo "$ac_header_compiler" >&6; } # Is the header present? { $as_echo "$as_me:${as_lineno-$LINENO}: checking $2 presence" >&5 $as_echo_n "checking $2 presence... " >&6; } cat confdefs.h - <<_ACEOF >conftest.$ac_ext /* end confdefs.h. */ #include <$2> _ACEOF if ac_fn_c_try_cpp "$LINENO"; then : ac_header_preproc=yes else ac_header_preproc=no fi rm -f conftest.err conftest.i conftest.$ac_ext { $as_echo "$as_me:${as_lineno-$LINENO}: result: $ac_header_preproc" >&5 $as_echo "$ac_header_preproc" >&6; } # So? What about this header? 
case $ac_header_compiler:$ac_header_preproc:$ac_c_preproc_warn_flag in #(( yes:no: ) { $as_echo "$as_me:${as_lineno-$LINENO}: WARNING: $2: accepted by the compiler, rejected by the preprocessor!" >&5 $as_echo "$as_me: WARNING: $2: accepted by the compiler, rejected by the preprocessor!" >&2;} { $as_echo "$as_me:${as_lineno-$LINENO}: WARNING: $2: proceeding with the compiler's result" >&5 $as_echo "$as_me: WARNING: $2: proceeding with the compiler's result" >&2;} ;; no:yes:* ) { $as_echo "$as_me:${as_lineno-$LINENO}: WARNING: $2: present but cannot be compiled" >&5 $as_echo "$as_me: WARNING: $2: present but cannot be compiled" >&2;} { $as_echo "$as_me:${as_lineno-$LINENO}: WARNING: $2: check for missing prerequisite headers?" >&5 $as_echo "$as_me: WARNING: $2: check for missing prerequisite headers?" >&2;} { $as_echo "$as_me:${as_lineno-$LINENO}: WARNING: $2: see the Autoconf documentation" >&5 $as_echo "$as_me: WARNING: $2: see the Autoconf documentation" >&2;} { $as_echo "$as_me:${as_lineno-$LINENO}: WARNING: $2: section \"Present But Cannot Be Compiled\"" >&5 $as_echo "$as_me: WARNING: $2: section \"Present But Cannot Be Compiled\"" >&2;} { $as_echo "$as_me:${as_lineno-$LINENO}: WARNING: $2: proceeding with the compiler's result" >&5 $as_echo "$as_me: WARNING: $2: proceeding with the compiler's result" >&2;} ( $as_echo "## ------------------------------------ ## ## Report this to slurm-dev@schedmd.com ## ## ------------------------------------ ##" ) | sed "s/^/$as_me: WARNING: /" >&2 ;; esac { $as_echo "$as_me:${as_lineno-$LINENO}: checking for $2" >&5 $as_echo_n "checking for $2... 
" >&6; } if eval \${$3+:} false; then : $as_echo_n "(cached) " >&6 else eval "$3=\$ac_header_compiler" fi eval ac_res=\$$3 { $as_echo "$as_me:${as_lineno-$LINENO}: result: $ac_res" >&5 $as_echo "$ac_res" >&6; } fi eval $as_lineno_stack; ${as_lineno_stack:+:} unset as_lineno } # ac_fn_c_check_header_mongrel # ac_fn_c_try_run LINENO # ---------------------- # Try to link conftest.$ac_ext, and return whether this succeeded. Assumes # that executables *can* be run. ac_fn_c_try_run () { as_lineno=${as_lineno-"$1"} as_lineno_stack=as_lineno_stack=$as_lineno_stack if { { ac_try="$ac_link" case "(($ac_try" in *\"* | *\`* | *\\*) ac_try_echo=\$ac_try;; *) ac_try_echo=$ac_try;; esac eval ac_try_echo="\"\$as_me:${as_lineno-$LINENO}: $ac_try_echo\"" $as_echo "$ac_try_echo"; } >&5 (eval "$ac_link") 2>&5 ac_status=$? $as_echo "$as_me:${as_lineno-$LINENO}: \$? = $ac_status" >&5 test $ac_status = 0; } && { ac_try='./conftest$ac_exeext' { { case "(($ac_try" in *\"* | *\`* | *\\*) ac_try_echo=\$ac_try;; *) ac_try_echo=$ac_try;; esac eval ac_try_echo="\"\$as_me:${as_lineno-$LINENO}: $ac_try_echo\"" $as_echo "$ac_try_echo"; } >&5 (eval "$ac_try") 2>&5 ac_status=$? $as_echo "$as_me:${as_lineno-$LINENO}: \$? = $ac_status" >&5 test $ac_status = 0; }; }; then : ac_retval=0 else $as_echo "$as_me: program exited with status $ac_status" >&5 $as_echo "$as_me: failed program was:" >&5 sed 's/^/| /' conftest.$ac_ext >&5 ac_retval=$ac_status fi rm -rf conftest.dSYM conftest_ipa8_conftest.oo eval $as_lineno_stack; ${as_lineno_stack:+:} unset as_lineno as_fn_set_status $ac_retval } # ac_fn_c_try_run # ac_fn_c_check_header_compile LINENO HEADER VAR INCLUDES # ------------------------------------------------------- # Tests whether HEADER exists and can be compiled using the include files in # INCLUDES, setting the cache variable VAR accordingly. 
ac_fn_c_check_header_compile () { as_lineno=${as_lineno-"$1"} as_lineno_stack=as_lineno_stack=$as_lineno_stack { $as_echo "$as_me:${as_lineno-$LINENO}: checking for $2" >&5 $as_echo_n "checking for $2... " >&6; } if eval \${$3+:} false; then : $as_echo_n "(cached) " >&6 else cat confdefs.h - <<_ACEOF >conftest.$ac_ext /* end confdefs.h. */ $4 #include <$2> _ACEOF if ac_fn_c_try_compile "$LINENO"; then : eval "$3=yes" else eval "$3=no" fi rm -f core conftest.err conftest.$ac_objext conftest.$ac_ext fi eval ac_res=\$$3 { $as_echo "$as_me:${as_lineno-$LINENO}: result: $ac_res" >&5 $as_echo "$ac_res" >&6; } eval $as_lineno_stack; ${as_lineno_stack:+:} unset as_lineno } # ac_fn_c_check_header_compile # ac_fn_c_check_func LINENO FUNC VAR # ---------------------------------- # Tests whether FUNC exists, setting the cache variable VAR accordingly ac_fn_c_check_func () { as_lineno=${as_lineno-"$1"} as_lineno_stack=as_lineno_stack=$as_lineno_stack { $as_echo "$as_me:${as_lineno-$LINENO}: checking for $2" >&5 $as_echo_n "checking for $2... " >&6; } if eval \${$3+:} false; then : $as_echo_n "(cached) " >&6 else cat confdefs.h - <<_ACEOF >conftest.$ac_ext /* end confdefs.h. */ /* Define $2 to an innocuous variant, in case declares $2. For example, HP-UX 11i declares gettimeofday. */ #define $2 innocuous_$2 /* System header to define __stub macros and hopefully few prototypes, which can conflict with char $2 (); below. Prefer to if __STDC__ is defined, since exists even on freestanding compilers. */ #ifdef __STDC__ # include #else # include #endif #undef $2 /* Override any GCC internal prototype to avoid an error. Use char because int might match the return type of a GCC builtin and then its argument prototype would still apply. */ #ifdef __cplusplus extern "C" #endif char $2 (); /* The GNU C library defines this for functions which it implements to always fail with ENOSYS. Some functions are actually named something starting with __ and the normal name is an alias. 
*/ #if defined __stub_$2 || defined __stub___$2 choke me #endif int main () { return $2 (); ; return 0; } _ACEOF if ac_fn_c_try_link "$LINENO"; then : eval "$3=yes" else eval "$3=no" fi rm -f core conftest.err conftest.$ac_objext \ conftest$ac_exeext conftest.$ac_ext fi eval ac_res=\$$3 { $as_echo "$as_me:${as_lineno-$LINENO}: result: $ac_res" >&5 $as_echo "$ac_res" >&6; } eval $as_lineno_stack; ${as_lineno_stack:+:} unset as_lineno } # ac_fn_c_check_func # ac_fn_cxx_try_cpp LINENO # ------------------------ # Try to preprocess conftest.$ac_ext, and return whether this succeeded. ac_fn_cxx_try_cpp () { as_lineno=${as_lineno-"$1"} as_lineno_stack=as_lineno_stack=$as_lineno_stack if { { ac_try="$ac_cpp conftest.$ac_ext" case "(($ac_try" in *\"* | *\`* | *\\*) ac_try_echo=\$ac_try;; *) ac_try_echo=$ac_try;; esac eval ac_try_echo="\"\$as_me:${as_lineno-$LINENO}: $ac_try_echo\"" $as_echo "$ac_try_echo"; } >&5 (eval "$ac_cpp conftest.$ac_ext") 2>conftest.err ac_status=$? if test -s conftest.err; then grep -v '^ *+' conftest.err >conftest.er1 cat conftest.er1 >&5 mv -f conftest.er1 conftest.err fi $as_echo "$as_me:${as_lineno-$LINENO}: \$? = $ac_status" >&5 test $ac_status = 0; } > conftest.i && { test -z "$ac_cxx_preproc_warn_flag$ac_cxx_werror_flag" || test ! -s conftest.err }; then : ac_retval=0 else $as_echo "$as_me: failed program was:" >&5 sed 's/^/| /' conftest.$ac_ext >&5 ac_retval=1 fi eval $as_lineno_stack; ${as_lineno_stack:+:} unset as_lineno as_fn_set_status $ac_retval } # ac_fn_cxx_try_cpp # ac_fn_c_check_decl LINENO SYMBOL VAR INCLUDES # --------------------------------------------- # Tests whether SYMBOL is declared in INCLUDES, setting cache variable VAR # accordingly. 
ac_fn_c_check_decl () { as_lineno=${as_lineno-"$1"} as_lineno_stack=as_lineno_stack=$as_lineno_stack as_decl_name=`echo $2|sed 's/ *(.*//'` as_decl_use=`echo $2|sed -e 's/(/((/' -e 's/)/) 0&/' -e 's/,/) 0& (/g'` { $as_echo "$as_me:${as_lineno-$LINENO}: checking whether $as_decl_name is declared" >&5 $as_echo_n "checking whether $as_decl_name is declared... " >&6; } if eval \${$3+:} false; then : $as_echo_n "(cached) " >&6 else cat confdefs.h - <<_ACEOF >conftest.$ac_ext /* end confdefs.h. */ $4 int main () { #ifndef $as_decl_name #ifdef __cplusplus (void) $as_decl_use; #else (void) $as_decl_name; #endif #endif ; return 0; } _ACEOF if ac_fn_c_try_compile "$LINENO"; then : eval "$3=yes" else eval "$3=no" fi rm -f core conftest.err conftest.$ac_objext conftest.$ac_ext fi eval ac_res=\$$3 { $as_echo "$as_me:${as_lineno-$LINENO}: result: $ac_res" >&5 $as_echo "$ac_res" >&6; } eval $as_lineno_stack; ${as_lineno_stack:+:} unset as_lineno } # ac_fn_c_check_decl cat >config.log <<_ACEOF This file contains any messages produced by compilers while running configure, to aid debugging if configure makes a mistake. It was created by slurm $as_me 15.08, which was generated by GNU Autoconf 2.69. Invocation command line was $ $0 $@ _ACEOF exec 5>>config.log { cat <<_ASUNAME ## --------- ## ## Platform. 
## ## --------- ## hostname = `(hostname || uname -n) 2>/dev/null | sed 1q` uname -m = `(uname -m) 2>/dev/null || echo unknown` uname -r = `(uname -r) 2>/dev/null || echo unknown` uname -s = `(uname -s) 2>/dev/null || echo unknown` uname -v = `(uname -v) 2>/dev/null || echo unknown` /usr/bin/uname -p = `(/usr/bin/uname -p) 2>/dev/null || echo unknown` /bin/uname -X = `(/bin/uname -X) 2>/dev/null || echo unknown` /bin/arch = `(/bin/arch) 2>/dev/null || echo unknown` /usr/bin/arch -k = `(/usr/bin/arch -k) 2>/dev/null || echo unknown` /usr/convex/getsysinfo = `(/usr/convex/getsysinfo) 2>/dev/null || echo unknown` /usr/bin/hostinfo = `(/usr/bin/hostinfo) 2>/dev/null || echo unknown` /bin/machine = `(/bin/machine) 2>/dev/null || echo unknown` /usr/bin/oslevel = `(/usr/bin/oslevel) 2>/dev/null || echo unknown` /bin/universe = `(/bin/universe) 2>/dev/null || echo unknown` _ASUNAME as_save_IFS=$IFS; IFS=$PATH_SEPARATOR for as_dir in $PATH do IFS=$as_save_IFS test -z "$as_dir" && as_dir=. $as_echo "PATH: $as_dir" done IFS=$as_save_IFS } >&5 cat >&5 <<_ACEOF ## ----------- ## ## Core tests. ## ## ----------- ## _ACEOF # Keep a trace of the command line. # Strip out --no-create and --no-recursion so they do not pile up. # Strip out --silent because we don't want to record it for future runs. # Also quote any args containing shell meta-characters. # Make two passes to allow for proper duplicate-argument suppression. 
ac_configure_args= ac_configure_args0= ac_configure_args1= ac_must_keep_next=false for ac_pass in 1 2 do for ac_arg do case $ac_arg in -no-create | --no-c* | -n | -no-recursion | --no-r*) continue ;; -q | -quiet | --quiet | --quie | --qui | --qu | --q \ | -silent | --silent | --silen | --sile | --sil) continue ;; *\'*) ac_arg=`$as_echo "$ac_arg" | sed "s/'/'\\\\\\\\''/g"` ;; esac case $ac_pass in 1) as_fn_append ac_configure_args0 " '$ac_arg'" ;; 2) as_fn_append ac_configure_args1 " '$ac_arg'" if test $ac_must_keep_next = true; then ac_must_keep_next=false # Got value, back to normal. else case $ac_arg in *=* | --config-cache | -C | -disable-* | --disable-* \ | -enable-* | --enable-* | -gas | --g* | -nfp | --nf* \ | -q | -quiet | --q* | -silent | --sil* | -v | -verb* \ | -with-* | --with-* | -without-* | --without-* | --x) case "$ac_configure_args0 " in "$ac_configure_args1"*" '$ac_arg' "* ) continue ;; esac ;; -* ) ac_must_keep_next=true ;; esac fi as_fn_append ac_configure_args " '$ac_arg'" ;; esac done done { ac_configure_args0=; unset ac_configure_args0;} { ac_configure_args1=; unset ac_configure_args1;} # When interrupted or exit'd, cleanup temporary files, and complete # config.log. We remove comments because anyway the quotes in there # would cause problems or look ugly. # WARNING: Use '\'' to represent an apostrophe within the trap. # WARNING: Do not start the trap code with a newline, due to a FreeBSD 4.0 bug. trap 'exit_status=$? # Save into config.log some information that might help in debugging. { echo $as_echo "## ---------------- ## ## Cache variables. 
## ## ---------------- ##" echo # The following way of writing the cache mishandles newlines in values, ( for ac_var in `(set) 2>&1 | sed -n '\''s/^\([a-zA-Z_][a-zA-Z0-9_]*\)=.*/\1/p'\''`; do eval ac_val=\$$ac_var case $ac_val in #( *${as_nl}*) case $ac_var in #( *_cv_*) { $as_echo "$as_me:${as_lineno-$LINENO}: WARNING: cache variable $ac_var contains a newline" >&5 $as_echo "$as_me: WARNING: cache variable $ac_var contains a newline" >&2;} ;; esac case $ac_var in #( _ | IFS | as_nl) ;; #( BASH_ARGV | BASH_SOURCE) eval $ac_var= ;; #( *) { eval $ac_var=; unset $ac_var;} ;; esac ;; esac done (set) 2>&1 | case $as_nl`(ac_space='\'' '\''; set) 2>&1` in #( *${as_nl}ac_space=\ *) sed -n \ "s/'\''/'\''\\\\'\'''\''/g; s/^\\([_$as_cr_alnum]*_cv_[_$as_cr_alnum]*\\)=\\(.*\\)/\\1='\''\\2'\''/p" ;; #( *) sed -n "/^[_$as_cr_alnum]*_cv_[_$as_cr_alnum]*=/p" ;; esac | sort ) echo $as_echo "## ----------------- ## ## Output variables. ## ## ----------------- ##" echo for ac_var in $ac_subst_vars do eval ac_val=\$$ac_var case $ac_val in *\'\''*) ac_val=`$as_echo "$ac_val" | sed "s/'\''/'\''\\\\\\\\'\'''\''/g"`;; esac $as_echo "$ac_var='\''$ac_val'\''" done | sort echo if test -n "$ac_subst_files"; then $as_echo "## ------------------- ## ## File substitutions. ## ## ------------------- ##" echo for ac_var in $ac_subst_files do eval ac_val=\$$ac_var case $ac_val in *\'\''*) ac_val=`$as_echo "$ac_val" | sed "s/'\''/'\''\\\\\\\\'\'''\''/g"`;; esac $as_echo "$ac_var='\''$ac_val'\''" done | sort echo fi if test -s confdefs.h; then $as_echo "## ----------- ## ## confdefs.h. 
## ## ----------- ##" echo cat confdefs.h echo fi test "$ac_signal" != 0 && $as_echo "$as_me: caught signal $ac_signal" $as_echo "$as_me: exit $exit_status" } >&5 rm -f core *.core core.conftest.* && rm -f -r conftest* confdefs* conf$$* $ac_clean_files && exit $exit_status ' 0 for ac_signal in 1 2 13 15; do trap 'ac_signal='$ac_signal'; as_fn_exit 1' $ac_signal done ac_signal=0 # confdefs.h avoids OS command line length limits that DEFS can exceed. rm -f -r conftest* confdefs.h $as_echo "/* confdefs.h */" > confdefs.h # Predefined preprocessor variables. cat >>confdefs.h <<_ACEOF #define PACKAGE_NAME "$PACKAGE_NAME" _ACEOF cat >>confdefs.h <<_ACEOF #define PACKAGE_TARNAME "$PACKAGE_TARNAME" _ACEOF cat >>confdefs.h <<_ACEOF #define PACKAGE_VERSION "$PACKAGE_VERSION" _ACEOF cat >>confdefs.h <<_ACEOF #define PACKAGE_STRING "$PACKAGE_STRING" _ACEOF cat >>confdefs.h <<_ACEOF #define PACKAGE_BUGREPORT "$PACKAGE_BUGREPORT" _ACEOF cat >>confdefs.h <<_ACEOF #define PACKAGE_URL "$PACKAGE_URL" _ACEOF # Let the site file select an alternate cache file if it wants to. # Prefer an explicitly selected file to automatically selected ones. ac_site_file1=NONE ac_site_file2=NONE if test -n "$CONFIG_SITE"; then # We do not want a PATH search for config.site. case $CONFIG_SITE in #(( -*) ac_site_file1=./$CONFIG_SITE;; */*) ac_site_file1=$CONFIG_SITE;; *) ac_site_file1=./$CONFIG_SITE;; esac elif test "x$prefix" != xNONE; then ac_site_file1=$prefix/share/config.site ac_site_file2=$prefix/etc/config.site else ac_site_file1=$ac_default_prefix/share/config.site ac_site_file2=$ac_default_prefix/etc/config.site fi for ac_site_file in "$ac_site_file1" "$ac_site_file2" do test "x$ac_site_file" = xNONE && continue if test /dev/null != "$ac_site_file" && test -r "$ac_site_file"; then { $as_echo "$as_me:${as_lineno-$LINENO}: loading site script $ac_site_file" >&5 $as_echo "$as_me: loading site script $ac_site_file" >&6;} sed 's/^/| /' "$ac_site_file" >&5 . 
"$ac_site_file" \ || { { $as_echo "$as_me:${as_lineno-$LINENO}: error: in \`$ac_pwd':" >&5 $as_echo "$as_me: error: in \`$ac_pwd':" >&2;} as_fn_error $? "failed to load site script $ac_site_file See \`config.log' for more details" "$LINENO" 5; } fi done if test -r "$cache_file"; then # Some versions of bash will fail to source /dev/null (special files # actually), so we avoid doing that. DJGPP emulates it as a regular file. if test /dev/null != "$cache_file" && test -f "$cache_file"; then { $as_echo "$as_me:${as_lineno-$LINENO}: loading cache $cache_file" >&5 $as_echo "$as_me: loading cache $cache_file" >&6;} case $cache_file in [\\/]* | ?:[\\/]* ) . "$cache_file";; *) . "./$cache_file";; esac fi else { $as_echo "$as_me:${as_lineno-$LINENO}: creating cache $cache_file" >&5 $as_echo "$as_me: creating cache $cache_file" >&6;} >$cache_file fi # Check that the precious variables saved in the cache have kept the same # value. ac_cache_corrupted=false for ac_var in $ac_precious_vars; do eval ac_old_set=\$ac_cv_env_${ac_var}_set eval ac_new_set=\$ac_env_${ac_var}_set eval ac_old_val=\$ac_cv_env_${ac_var}_value eval ac_new_val=\$ac_env_${ac_var}_value case $ac_old_set,$ac_new_set in set,) { $as_echo "$as_me:${as_lineno-$LINENO}: error: \`$ac_var' was set to \`$ac_old_val' in the previous run" >&5 $as_echo "$as_me: error: \`$ac_var' was set to \`$ac_old_val' in the previous run" >&2;} ac_cache_corrupted=: ;; ,set) { $as_echo "$as_me:${as_lineno-$LINENO}: error: \`$ac_var' was not set in the previous run" >&5 $as_echo "$as_me: error: \`$ac_var' was not set in the previous run" >&2;} ac_cache_corrupted=: ;; ,);; *) if test "x$ac_old_val" != "x$ac_new_val"; then # differences in whitespace do not lead to failure. 
ac_old_val_w=`echo x $ac_old_val` ac_new_val_w=`echo x $ac_new_val` if test "$ac_old_val_w" != "$ac_new_val_w"; then { $as_echo "$as_me:${as_lineno-$LINENO}: error: \`$ac_var' has changed since the previous run:" >&5 $as_echo "$as_me: error: \`$ac_var' has changed since the previous run:" >&2;} ac_cache_corrupted=: else { $as_echo "$as_me:${as_lineno-$LINENO}: warning: ignoring whitespace changes in \`$ac_var' since the previous run:" >&5 $as_echo "$as_me: warning: ignoring whitespace changes in \`$ac_var' since the previous run:" >&2;} eval $ac_var=\$ac_old_val fi { $as_echo "$as_me:${as_lineno-$LINENO}: former value: \`$ac_old_val'" >&5 $as_echo "$as_me: former value: \`$ac_old_val'" >&2;} { $as_echo "$as_me:${as_lineno-$LINENO}: current value: \`$ac_new_val'" >&5 $as_echo "$as_me: current value: \`$ac_new_val'" >&2;} fi;; esac # Pass precious variables to config.status. if test "$ac_new_set" = set; then case $ac_new_val in *\'*) ac_arg=$ac_var=`$as_echo "$ac_new_val" | sed "s/'/'\\\\\\\\''/g"` ;; *) ac_arg=$ac_var=$ac_new_val ;; esac case " $ac_configure_args " in *" '$ac_arg' "*) ;; # Avoid dups. Use of quotes ensures accuracy. *) as_fn_append ac_configure_args " '$ac_arg'" ;; esac fi done if $ac_cache_corrupted; then { $as_echo "$as_me:${as_lineno-$LINENO}: error: in \`$ac_pwd':" >&5 $as_echo "$as_me: error: in \`$ac_pwd':" >&2;} { $as_echo "$as_me:${as_lineno-$LINENO}: error: changes in the environment can compromise the build" >&5 $as_echo "$as_me: error: changes in the environment can compromise the build" >&2;} as_fn_error $? "run \`make distclean' and/or \`rm $cache_file' and start over" "$LINENO" 5 fi ## -------------------- ## ## Main body of script. 
## ## -------------------- ## ac_ext=c ac_cpp='$CPP $CPPFLAGS' ac_compile='$CC -c $CFLAGS $CPPFLAGS conftest.$ac_ext >&5' ac_link='$CC -o conftest$ac_exeext $CFLAGS $CPPFLAGS $LDFLAGS conftest.$ac_ext $LIBS >&5' ac_compiler_gnu=$ac_cv_c_compiler_gnu ac_aux_dir= for ac_dir in auxdir "$srcdir"/auxdir; do if test -f "$ac_dir/install-sh"; then ac_aux_dir=$ac_dir ac_install_sh="$ac_aux_dir/install-sh -c" break elif test -f "$ac_dir/install.sh"; then ac_aux_dir=$ac_dir ac_install_sh="$ac_aux_dir/install.sh -c" break elif test -f "$ac_dir/shtool"; then ac_aux_dir=$ac_dir ac_install_sh="$ac_aux_dir/shtool install -c" break fi done if test -z "$ac_aux_dir"; then as_fn_error $? "cannot find install-sh, install.sh, or shtool in auxdir \"$srcdir\"/auxdir" "$LINENO" 5 fi # These three variables are undocumented and unsupported, # and are intended to be withdrawn in a future Autoconf release. # They can cause serious problems if a builder's source tree is in a directory # whose full name contains unusual characters. ac_config_guess="$SHELL $ac_aux_dir/config.guess" # Please don't use this var. ac_config_sub="$SHELL $ac_aux_dir/config.sub" # Please don't use this var. ac_configure="$SHELL $ac_aux_dir/configure" # Please don't use this var. # Make sure we can run config.sub. $SHELL "$ac_aux_dir/config.sub" sun4 >/dev/null 2>&1 || as_fn_error $? "cannot run $SHELL $ac_aux_dir/config.sub" "$LINENO" 5 { $as_echo "$as_me:${as_lineno-$LINENO}: checking build system type" >&5 $as_echo_n "checking build system type... " >&6; } if ${ac_cv_build+:} false; then : $as_echo_n "(cached) " >&6 else ac_build_alias=$build_alias test "x$ac_build_alias" = x && ac_build_alias=`$SHELL "$ac_aux_dir/config.guess"` test "x$ac_build_alias" = x && as_fn_error $? "cannot guess build type; you must specify one" "$LINENO" 5 ac_cv_build=`$SHELL "$ac_aux_dir/config.sub" $ac_build_alias` || as_fn_error $? 
"$SHELL $ac_aux_dir/config.sub $ac_build_alias failed" "$LINENO" 5 fi { $as_echo "$as_me:${as_lineno-$LINENO}: result: $ac_cv_build" >&5 $as_echo "$ac_cv_build" >&6; } case $ac_cv_build in *-*-*) ;; *) as_fn_error $? "invalid value of canonical build" "$LINENO" 5;; esac build=$ac_cv_build ac_save_IFS=$IFS; IFS='-' set x $ac_cv_build shift build_cpu=$1 build_vendor=$2 shift; shift # Remember, the first character of IFS is used to create $*, # except with old shells: build_os=$* IFS=$ac_save_IFS case $build_os in *\ *) build_os=`echo "$build_os" | sed 's/ /-/g'`;; esac { $as_echo "$as_me:${as_lineno-$LINENO}: checking host system type" >&5 $as_echo_n "checking host system type... " >&6; } if ${ac_cv_host+:} false; then : $as_echo_n "(cached) " >&6 else if test "x$host_alias" = x; then ac_cv_host=$ac_cv_build else ac_cv_host=`$SHELL "$ac_aux_dir/config.sub" $host_alias` || as_fn_error $? "$SHELL $ac_aux_dir/config.sub $host_alias failed" "$LINENO" 5 fi fi { $as_echo "$as_me:${as_lineno-$LINENO}: result: $ac_cv_host" >&5 $as_echo "$ac_cv_host" >&6; } case $ac_cv_host in *-*-*) ;; *) as_fn_error $? "invalid value of canonical host" "$LINENO" 5;; esac host=$ac_cv_host ac_save_IFS=$IFS; IFS='-' set x $ac_cv_host shift host_cpu=$1 host_vendor=$2 shift; shift # Remember, the first character of IFS is used to create $*, # except with old shells: host_os=$* IFS=$ac_save_IFS case $host_os in *\ *) host_os=`echo "$host_os" | sed 's/ /-/g'`;; esac { $as_echo "$as_me:${as_lineno-$LINENO}: checking target system type" >&5 $as_echo_n "checking target system type... " >&6; } if ${ac_cv_target+:} false; then : $as_echo_n "(cached) " >&6 else if test "x$target_alias" = x; then ac_cv_target=$ac_cv_host else ac_cv_target=`$SHELL "$ac_aux_dir/config.sub" $target_alias` || as_fn_error $? 
"$SHELL $ac_aux_dir/config.sub $target_alias failed" "$LINENO" 5 fi fi { $as_echo "$as_me:${as_lineno-$LINENO}: result: $ac_cv_target" >&5 $as_echo "$ac_cv_target" >&6; } case $ac_cv_target in *-*-*) ;; *) as_fn_error $? "invalid value of canonical target" "$LINENO" 5;; esac target=$ac_cv_target ac_save_IFS=$IFS; IFS='-' set x $ac_cv_target shift target_cpu=$1 target_vendor=$2 shift; shift # Remember, the first character of IFS is used to create $*, # except with old shells: target_os=$* IFS=$ac_save_IFS case $target_os in *\ *) target_os=`echo "$target_os" | sed 's/ /-/g'`;; esac # The aliases save the names the user supplied, while $host etc. # will get canonicalized. test -n "$target_alias" && test "$program_prefix$program_suffix$program_transform_name" = \ NONENONEs,x,x, && program_prefix=${target_alias}- if test "1" = "0"; then DONT_BUILD_TRUE= DONT_BUILD_FALSE='#' else DONT_BUILD_TRUE='#' DONT_BUILD_FALSE= fi $as_echo "#define GPL_LICENSED 1" >>confdefs.h # Determine project/version from META file. # Sets PACKAGE, VERSION, SLURM_VERSION # # Determine project/version from META file. # These are substituted into the Makefile and config.h. # PROJECT="`perl -ne 'print,exit if s/^\s*NAME:\s*(\S*).*/\1/i' $srcdir/META`" cat >>confdefs.h <<_ACEOF #define PROJECT "$PROJECT" _ACEOF # Automake desires "PACKAGE" variable instead of PROJECT PACKAGE=$PROJECT ## Build the API version ## NOTE: We map API_MAJOR to be (API_CURRENT - API_AGE) to match the ## behavior of libtool in setting the library version number. 
For more ## information see src/api/Makefile.am for name in CURRENT REVISION AGE; do API=`perl -ne "print,exit if s/^\s*API_$name:\s*(\S*).*/\1/i" $srcdir/META` eval SLURM_API_$name=$API done SLURM_API_MAJOR=`expr $SLURM_API_CURRENT - $SLURM_API_AGE` SLURM_API_VERSION=`printf "0x%02x%02x%02x" $((10#$SLURM_API_MAJOR)) $((10#$SLURM_API_AGE)) $((10#$SLURM_API_REVISION))` cat >>confdefs.h <<_ACEOF #define SLURM_API_VERSION $SLURM_API_VERSION _ACEOF cat >>confdefs.h <<_ACEOF #define SLURM_API_CURRENT $SLURM_API_CURRENT _ACEOF cat >>confdefs.h <<_ACEOF #define SLURM_API_MAJOR $SLURM_API_MAJOR _ACEOF cat >>confdefs.h <<_ACEOF #define SLURM_API_AGE $SLURM_API_AGE _ACEOF cat >>confdefs.h <<_ACEOF #define SLURM_API_REVISION $SLURM_API_REVISION _ACEOF # rpm make target needs Version in META, not major and minor version numbers VERSION="`perl -ne 'print,exit if s/^\s*VERSION:\s*(\S*).*/\1/i' $srcdir/META`" # If you ever use AM_INIT_AUTOMAKE(subdir-objects) do not define VERSION # since it will do it this automatically cat >>confdefs.h <<_ACEOF #define VERSION "$VERSION" _ACEOF SLURM_MAJOR="`perl -ne 'print,exit if s/^\s*MAJOR:\s*(\S*).*/\1/i' $srcdir/META`" SLURM_MINOR="`perl -ne 'print,exit if s/^\s*MINOR:\s*(\S*).*/\1/i' $srcdir/META`" SLURM_MICRO="`perl -ne 'print,exit if s/^\s*MICRO:\s*(\S*).*/\1/i' $srcdir/META`" RELEASE="`perl -ne 'print,exit if s/^\s*RELEASE:\s*(\S*).*/\1/i' $srcdir/META`" # NOTE: SLURM_VERSION_NUMBER excludes any non-numeric component # (e.g. "pre1" in the MICRO), but may be suitable for the user determining # how to use the APIs or other differences. SLURM_VERSION_NUMBER="`printf "0x%02x%02x%02x" $((10#$SLURM_MAJOR)) $((10#$SLURM_MINOR)) $((10#$SLURM_MICRO))`" cat >>confdefs.h <<_ACEOF #define SLURM_VERSION_NUMBER $SLURM_VERSION_NUMBER _ACEOF if test "$SLURM_MAJOR.$SLURM_MINOR.$SLURM_MICRO" != "$VERSION"; then as_fn_error $? "META information is inconsistent: $VERSION != $SLURM_MAJOR.$SLURM_MINOR.$SLURM_MICRO!" 
"$LINENO" 5 fi # Check to see if we're on an unstable branch (no prereleases yet) if echo "$RELEASE" | grep -e "UNSTABLE"; then DATE=`date +"%Y%m%d%H%M"` SLURM_RELEASE="unstable svn build $DATE" SLURM_VERSION_STRING="$SLURM_MAJOR.$SLURM_MINOR ($SLURM_RELEASE)" else SLURM_RELEASE="`echo $RELEASE | sed 's/^0\.//'`" SLURM_VERSION_STRING="$SLURM_MAJOR.$SLURM_MINOR.$SLURM_MICRO" test $RELEASE = "1" || SLURM_VERSION_STRING="$SLURM_VERSION_STRING-$SLURM_RELEASE" fi cat >>confdefs.h <<_ACEOF #define SLURM_MAJOR "$SLURM_MAJOR" _ACEOF cat >>confdefs.h <<_ACEOF #define SLURM_MINOR "$SLURM_MINOR" _ACEOF cat >>confdefs.h <<_ACEOF #define SLURM_MICRO "$SLURM_MICRO" _ACEOF cat >>confdefs.h <<_ACEOF #define RELEASE "$RELEASE" _ACEOF cat >>confdefs.h <<_ACEOF #define SLURM_VERSION_STRING "$SLURM_VERSION_STRING" _ACEOF am__api_version='1.14' # Find a good install program. We prefer a C program (faster), # so one script is as good as another. But avoid the broken or # incompatible versions: # SysV /etc/install, /usr/sbin/install # SunOS /usr/etc/install # IRIX /sbin/install # AIX /bin/install # AmigaOS /C/install, which installs bootblocks on floppy discs # AIX 4 /usr/bin/installbsd, which doesn't work without a -g flag # AFS /usr/afsws/bin/install, which mishandles nonexistent args # SVR4 /usr/ucb/install, which tries to use the nonexistent group "staff" # OS/2's system install, which has a completely different semantic # ./install, which can be erroneously created by make from ./install.sh. # Reject install programs that cannot install multiple files. { $as_echo "$as_me:${as_lineno-$LINENO}: checking for a BSD-compatible install" >&5 $as_echo_n "checking for a BSD-compatible install... " >&6; } if test -z "$INSTALL"; then if ${ac_cv_path_install+:} false; then : $as_echo_n "(cached) " >&6 else as_save_IFS=$IFS; IFS=$PATH_SEPARATOR for as_dir in $PATH do IFS=$as_save_IFS test -z "$as_dir" && as_dir=. # Account for people who put trailing slashes in PATH elements. 
case $as_dir/ in #(( ./ | .// | /[cC]/* | \ /etc/* | /usr/sbin/* | /usr/etc/* | /sbin/* | /usr/afsws/bin/* | \ ?:[\\/]os2[\\/]install[\\/]* | ?:[\\/]OS2[\\/]INSTALL[\\/]* | \ /usr/ucb/* ) ;; *) # OSF1 and SCO ODT 3.0 have their own names for install. # Don't use installbsd from OSF since it installs stuff as root # by default. for ac_prog in ginstall scoinst install; do for ac_exec_ext in '' $ac_executable_extensions; do if as_fn_executable_p "$as_dir/$ac_prog$ac_exec_ext"; then if test $ac_prog = install && grep dspmsg "$as_dir/$ac_prog$ac_exec_ext" >/dev/null 2>&1; then # AIX install. It has an incompatible calling convention. : elif test $ac_prog = install && grep pwplus "$as_dir/$ac_prog$ac_exec_ext" >/dev/null 2>&1; then # program-specific install script used by HP pwplus--don't use. : else rm -rf conftest.one conftest.two conftest.dir echo one > conftest.one echo two > conftest.two mkdir conftest.dir if "$as_dir/$ac_prog$ac_exec_ext" -c conftest.one conftest.two "`pwd`/conftest.dir" && test -s conftest.one && test -s conftest.two && test -s conftest.dir/conftest.one && test -s conftest.dir/conftest.two then ac_cv_path_install="$as_dir/$ac_prog$ac_exec_ext -c" break 3 fi fi fi done done ;; esac done IFS=$as_save_IFS rm -rf conftest.one conftest.two conftest.dir fi if test "${ac_cv_path_install+set}" = set; then INSTALL=$ac_cv_path_install else # As a last resort, use the slow shell script. Don't cache a # value for INSTALL within a source directory, because that will # break other packages using the cache if that directory is # removed, or if the value is a relative name. INSTALL=$ac_install_sh fi fi { $as_echo "$as_me:${as_lineno-$LINENO}: result: $INSTALL" >&5 $as_echo "$INSTALL" >&6; } # Use test -z because SunOS4 sh mishandles braces in ${var-val}. # It thinks the first close brace ends the variable substitution. 
test -z "$INSTALL_PROGRAM" && INSTALL_PROGRAM='${INSTALL}' test -z "$INSTALL_SCRIPT" && INSTALL_SCRIPT='${INSTALL}' test -z "$INSTALL_DATA" && INSTALL_DATA='${INSTALL} -m 644' { $as_echo "$as_me:${as_lineno-$LINENO}: checking whether build environment is sane" >&5 $as_echo_n "checking whether build environment is sane... " >&6; } # Reject unsafe characters in $srcdir or the absolute working directory # name. Accept space and tab only in the latter. am_lf=' ' case `pwd` in *[\\\"\#\$\&\'\`$am_lf]*) as_fn_error $? "unsafe absolute working directory name" "$LINENO" 5;; esac case $srcdir in *[\\\"\#\$\&\'\`$am_lf\ \ ]*) as_fn_error $? "unsafe srcdir value: '$srcdir'" "$LINENO" 5;; esac # Do 'set' in a subshell so we don't clobber the current shell's # arguments. Must try -L first in case configure is actually a # symlink; some systems play weird games with the mod time of symlinks # (eg FreeBSD returns the mod time of the symlink's containing # directory). if ( am_has_slept=no for am_try in 1 2; do echo "timestamp, slept: $am_has_slept" > conftest.file set X `ls -Lt "$srcdir/configure" conftest.file 2> /dev/null` if test "$*" = "X"; then # -L didn't work. set X `ls -t "$srcdir/configure" conftest.file` fi if test "$*" != "X $srcdir/configure conftest.file" \ && test "$*" != "X conftest.file $srcdir/configure"; then # If neither matched, then we have a broken ls. This can happen # if, for instance, CONFIG_SHELL is bash and it inherits a # broken ls alias from the environment. This has actually # happened. Such a system could not be considered "sane". as_fn_error $? "ls -t appears to fail. Make sure there is not a broken alias in your environment" "$LINENO" 5 fi if test "$2" = conftest.file || test $am_try -eq 2; then break fi # Just in case. sleep 1 am_has_slept=yes done test "$2" = conftest.file ) then # Ok. : else as_fn_error $? "newly created file is older than distributed files! 
Check your system clock" "$LINENO" 5 fi { $as_echo "$as_me:${as_lineno-$LINENO}: result: yes" >&5 $as_echo "yes" >&6; } # If we didn't sleep, we still need to ensure time stamps of config.status and # generated files are strictly newer. am_sleep_pid= if grep 'slept: no' conftest.file >/dev/null 2>&1; then ( sleep 1 ) & am_sleep_pid=$! fi rm -f conftest.file test "$program_prefix" != NONE && program_transform_name="s&^&$program_prefix&;$program_transform_name" # Use a double $ so make ignores it. test "$program_suffix" != NONE && program_transform_name="s&\$&$program_suffix&;$program_transform_name" # Double any \ or $. # By default was `s,x,x', remove it if useless. ac_script='s/[\\$]/&&/g;s/;s,x,x,$//' program_transform_name=`$as_echo "$program_transform_name" | sed "$ac_script"` # expand $ac_aux_dir to an absolute path am_aux_dir=`cd $ac_aux_dir && pwd` if test x"${MISSING+set}" != xset; then case $am_aux_dir in *\ * | *\ *) MISSING="\${SHELL} \"$am_aux_dir/missing\"" ;; *) MISSING="\${SHELL} $am_aux_dir/missing" ;; esac fi # Use eval to expand $SHELL if eval "$MISSING --is-lightweight"; then am_missing_run="$MISSING " else am_missing_run= { $as_echo "$as_me:${as_lineno-$LINENO}: WARNING: 'missing' script is too old or missing" >&5 $as_echo "$as_me: WARNING: 'missing' script is too old or missing" >&2;} fi if test x"${install_sh}" != xset; then case $am_aux_dir in *\ * | *\ *) install_sh="\${SHELL} '$am_aux_dir/install-sh'" ;; *) install_sh="\${SHELL} $am_aux_dir/install-sh" esac fi # Installed binaries are usually stripped using 'strip' when the user # run "make install-strip". However 'strip' might not be the right # tool to use in cross-compilation environments, therefore Automake # will honor the 'STRIP' environment variable to overrule this program. if test "$cross_compiling" != no; then if test -n "$ac_tool_prefix"; then # Extract the first word of "${ac_tool_prefix}strip", so it can be a program name with args. 
set dummy ${ac_tool_prefix}strip; ac_word=$2 { $as_echo "$as_me:${as_lineno-$LINENO}: checking for $ac_word" >&5 $as_echo_n "checking for $ac_word... " >&6; } if ${ac_cv_prog_STRIP+:} false; then : $as_echo_n "(cached) " >&6 else if test -n "$STRIP"; then ac_cv_prog_STRIP="$STRIP" # Let the user override the test. else as_save_IFS=$IFS; IFS=$PATH_SEPARATOR for as_dir in $PATH do IFS=$as_save_IFS test -z "$as_dir" && as_dir=. for ac_exec_ext in '' $ac_executable_extensions; do if as_fn_executable_p "$as_dir/$ac_word$ac_exec_ext"; then ac_cv_prog_STRIP="${ac_tool_prefix}strip" $as_echo "$as_me:${as_lineno-$LINENO}: found $as_dir/$ac_word$ac_exec_ext" >&5 break 2 fi done done IFS=$as_save_IFS fi fi STRIP=$ac_cv_prog_STRIP if test -n "$STRIP"; then { $as_echo "$as_me:${as_lineno-$LINENO}: result: $STRIP" >&5 $as_echo "$STRIP" >&6; } else { $as_echo "$as_me:${as_lineno-$LINENO}: result: no" >&5 $as_echo "no" >&6; } fi fi if test -z "$ac_cv_prog_STRIP"; then ac_ct_STRIP=$STRIP # Extract the first word of "strip", so it can be a program name with args. set dummy strip; ac_word=$2 { $as_echo "$as_me:${as_lineno-$LINENO}: checking for $ac_word" >&5 $as_echo_n "checking for $ac_word... " >&6; } if ${ac_cv_prog_ac_ct_STRIP+:} false; then : $as_echo_n "(cached) " >&6 else if test -n "$ac_ct_STRIP"; then ac_cv_prog_ac_ct_STRIP="$ac_ct_STRIP" # Let the user override the test. else as_save_IFS=$IFS; IFS=$PATH_SEPARATOR for as_dir in $PATH do IFS=$as_save_IFS test -z "$as_dir" && as_dir=. 
for ac_exec_ext in '' $ac_executable_extensions; do if as_fn_executable_p "$as_dir/$ac_word$ac_exec_ext"; then ac_cv_prog_ac_ct_STRIP="strip" $as_echo "$as_me:${as_lineno-$LINENO}: found $as_dir/$ac_word$ac_exec_ext" >&5 break 2 fi done done IFS=$as_save_IFS fi fi ac_ct_STRIP=$ac_cv_prog_ac_ct_STRIP if test -n "$ac_ct_STRIP"; then { $as_echo "$as_me:${as_lineno-$LINENO}: result: $ac_ct_STRIP" >&5 $as_echo "$ac_ct_STRIP" >&6; } else { $as_echo "$as_me:${as_lineno-$LINENO}: result: no" >&5 $as_echo "no" >&6; } fi if test "x$ac_ct_STRIP" = x; then STRIP=":" else case $cross_compiling:$ac_tool_warned in yes:) { $as_echo "$as_me:${as_lineno-$LINENO}: WARNING: using cross tools not prefixed with host triplet" >&5 $as_echo "$as_me: WARNING: using cross tools not prefixed with host triplet" >&2;} ac_tool_warned=yes ;; esac STRIP=$ac_ct_STRIP fi else STRIP="$ac_cv_prog_STRIP" fi fi INSTALL_STRIP_PROGRAM="\$(install_sh) -c -s" { $as_echo "$as_me:${as_lineno-$LINENO}: checking for a thread-safe mkdir -p" >&5 $as_echo_n "checking for a thread-safe mkdir -p... " >&6; } if test -z "$MKDIR_P"; then if ${ac_cv_path_mkdir+:} false; then : $as_echo_n "(cached) " >&6 else as_save_IFS=$IFS; IFS=$PATH_SEPARATOR for as_dir in $PATH$PATH_SEPARATOR/opt/sfw/bin do IFS=$as_save_IFS test -z "$as_dir" && as_dir=. for ac_prog in mkdir gmkdir; do for ac_exec_ext in '' $ac_executable_extensions; do as_fn_executable_p "$as_dir/$ac_prog$ac_exec_ext" || continue case `"$as_dir/$ac_prog$ac_exec_ext" --version 2>&1` in #( 'mkdir (GNU coreutils) '* | \ 'mkdir (coreutils) '* | \ 'mkdir (fileutils) '4.1*) ac_cv_path_mkdir=$as_dir/$ac_prog$ac_exec_ext break 3;; esac done done done IFS=$as_save_IFS fi test -d ./--version && rmdir ./--version if test "${ac_cv_path_mkdir+set}" = set; then MKDIR_P="$ac_cv_path_mkdir -p" else # As a last resort, use the slow shell script. 
Don't cache a # value for MKDIR_P within a source directory, because that will # break other packages using the cache if that directory is # removed, or if the value is a relative name. MKDIR_P="$ac_install_sh -d" fi fi { $as_echo "$as_me:${as_lineno-$LINENO}: result: $MKDIR_P" >&5 $as_echo "$MKDIR_P" >&6; } for ac_prog in gawk mawk nawk awk do # Extract the first word of "$ac_prog", so it can be a program name with args. set dummy $ac_prog; ac_word=$2 { $as_echo "$as_me:${as_lineno-$LINENO}: checking for $ac_word" >&5 $as_echo_n "checking for $ac_word... " >&6; } if ${ac_cv_prog_AWK+:} false; then : $as_echo_n "(cached) " >&6 else if test -n "$AWK"; then ac_cv_prog_AWK="$AWK" # Let the user override the test. else as_save_IFS=$IFS; IFS=$PATH_SEPARATOR for as_dir in $PATH do IFS=$as_save_IFS test -z "$as_dir" && as_dir=. for ac_exec_ext in '' $ac_executable_extensions; do if as_fn_executable_p "$as_dir/$ac_word$ac_exec_ext"; then ac_cv_prog_AWK="$ac_prog" $as_echo "$as_me:${as_lineno-$LINENO}: found $as_dir/$ac_word$ac_exec_ext" >&5 break 2 fi done done IFS=$as_save_IFS fi fi AWK=$ac_cv_prog_AWK if test -n "$AWK"; then { $as_echo "$as_me:${as_lineno-$LINENO}: result: $AWK" >&5 $as_echo "$AWK" >&6; } else { $as_echo "$as_me:${as_lineno-$LINENO}: result: no" >&5 $as_echo "no" >&6; } fi test -n "$AWK" && break done { $as_echo "$as_me:${as_lineno-$LINENO}: checking whether ${MAKE-make} sets \$(MAKE)" >&5 $as_echo_n "checking whether ${MAKE-make} sets \$(MAKE)... " >&6; } set x ${MAKE-make} ac_make=`$as_echo "$2" | sed 's/+/p/g; s/[^a-zA-Z0-9_]/_/g'` if eval \${ac_cv_prog_make_${ac_make}_set+:} false; then : $as_echo_n "(cached) " >&6 else cat >conftest.make <<\_ACEOF SHELL = /bin/sh all: @echo '@@@%%%=$(MAKE)=@@@%%%' _ACEOF # GNU make sometimes prints "make[1]: Entering ...", which would confuse us. 
case `${MAKE-make} -f conftest.make 2>/dev/null` in *@@@%%%=?*=@@@%%%*) eval ac_cv_prog_make_${ac_make}_set=yes;; *) eval ac_cv_prog_make_${ac_make}_set=no;; esac rm -f conftest.make fi if eval test \$ac_cv_prog_make_${ac_make}_set = yes; then { $as_echo "$as_me:${as_lineno-$LINENO}: result: yes" >&5 $as_echo "yes" >&6; } SET_MAKE= else { $as_echo "$as_me:${as_lineno-$LINENO}: result: no" >&5 $as_echo "no" >&6; } SET_MAKE="MAKE=${MAKE-make}" fi rm -rf .tst 2>/dev/null mkdir .tst 2>/dev/null if test -d .tst; then am__leading_dot=. else am__leading_dot=_ fi rmdir .tst 2>/dev/null # Check whether --enable-silent-rules was given. if test "${enable_silent_rules+set}" = set; then : enableval=$enable_silent_rules; fi case $enable_silent_rules in # ((( yes) AM_DEFAULT_VERBOSITY=0;; no) AM_DEFAULT_VERBOSITY=1;; *) AM_DEFAULT_VERBOSITY=1;; esac am_make=${MAKE-make} { $as_echo "$as_me:${as_lineno-$LINENO}: checking whether $am_make supports nested variables" >&5 $as_echo_n "checking whether $am_make supports nested variables... " >&6; } if ${am_cv_make_support_nested_variables+:} false; then : $as_echo_n "(cached) " >&6 else if $as_echo 'TRUE=$(BAR$(V)) BAR0=false BAR1=true V=1 am__doit: @$(TRUE) .PHONY: am__doit' | $am_make -f - >/dev/null 2>&1; then am_cv_make_support_nested_variables=yes else am_cv_make_support_nested_variables=no fi fi { $as_echo "$as_me:${as_lineno-$LINENO}: result: $am_cv_make_support_nested_variables" >&5 $as_echo "$am_cv_make_support_nested_variables" >&6; } if test $am_cv_make_support_nested_variables = yes; then AM_V='$(V)' AM_DEFAULT_V='$(AM_DEFAULT_VERBOSITY)' else AM_V=$AM_DEFAULT_VERBOSITY AM_DEFAULT_V=$AM_DEFAULT_VERBOSITY fi AM_BACKSLASH='\' if test "`cd $srcdir && pwd`" != "`pwd`"; then # Use -I$(srcdir) only when $(srcdir) != ., so that make's output # is not polluted with repeated "-I." am__isrc=' -I$(srcdir)' # test to see if srcdir already configured if test -f $srcdir/config.status; then as_fn_error $? 
"source directory already configured; run \"make distclean\" there first" "$LINENO" 5 fi fi # test whether we have cygpath if test -z "$CYGPATH_W"; then if (cygpath --version) >/dev/null 2>/dev/null; then CYGPATH_W='cygpath -w' else CYGPATH_W=echo fi fi # Define the identity of the package. PACKAGE='slurm' VERSION='15.08' # Some tools Automake needs. ACLOCAL=${ACLOCAL-"${am_missing_run}aclocal-${am__api_version}"} AUTOCONF=${AUTOCONF-"${am_missing_run}autoconf"} AUTOMAKE=${AUTOMAKE-"${am_missing_run}automake-${am__api_version}"} AUTOHEADER=${AUTOHEADER-"${am_missing_run}autoheader"} MAKEINFO=${MAKEINFO-"${am_missing_run}makeinfo"} # For better backward compatibility. To be removed once Automake 1.9.x # dies out for good. For more background, see: # # mkdir_p='$(MKDIR_P)' # We need awk for the "check" target. The system "awk" is bad on # some platforms. # Always define AMTAR for backward compatibility. Yes, it's still used # in the wild :-( We should find a proper way to deprecate it ... AMTAR='$${TAR-tar}' # We'll loop over all known methods to create a tar archive until one works. _am_tools='gnutar pax cpio none' am__tar='$${TAR-tar} chof - "$$tardir"' am__untar='$${TAR-tar} xf -' # POSIX will say in a future version that running "rm -f" with no argument # is OK; and we want to be able to make that assumption in our Makefile # recipes. So use an aggressive probe to check that the usage we want is # actually supported "in the wild" to an acceptable degree. # See automake bug#10828. # To make any issue more visible, cause the running configure to be aborted # by default if the 'rm' program in use doesn't match our expectations; the # user can still override this though. if rm -f && rm -fr && rm -rf; then : OK; else cat >&2 <<'END' Oops! Your 'rm' program seems unable to run without file operands specified on the command line, even when the '-f' option is present. 
This is contrary to the behaviour of most rm programs out there, and not conforming with the upcoming POSIX standard: Please tell bug-automake@gnu.org about your system, including the value of your $PATH and any error possibly output before this message. This can help us improve future automake versions. END if test x"$ACCEPT_INFERIOR_RM_PROGRAM" = x"yes"; then echo 'Configuration will proceed anyway, since you have set the' >&2 echo 'ACCEPT_INFERIOR_RM_PROGRAM variable to "yes"' >&2 echo >&2 else cat >&2 <<'END' Aborting the configuration process, to ensure you take notice of the issue. You can download and install GNU coreutils to get an 'rm' implementation that behaves properly: . If you want to complete the configuration process using your problematic 'rm' anyway, export the environment variable ACCEPT_INFERIOR_RM_PROGRAM to "yes", and re-run configure. END as_fn_error $? "Your 'rm' program is bad, sorry." "$LINENO" 5 fi fi { $as_echo "$as_me:${as_lineno-$LINENO}: checking whether to enable maintainer-specific portions of Makefiles" >&5 $as_echo_n "checking whether to enable maintainer-specific portions of Makefiles... " >&6; } # Check whether --enable-maintainer-mode was given. if test "${enable_maintainer_mode+set}" = set; then : enableval=$enable_maintainer_mode; USE_MAINTAINER_MODE=$enableval else USE_MAINTAINER_MODE=no fi { $as_echo "$as_me:${as_lineno-$LINENO}: result: $USE_MAINTAINER_MODE" >&5 $as_echo "$USE_MAINTAINER_MODE" >&6; } if test $USE_MAINTAINER_MODE = yes; then MAINTAINER_MODE_TRUE= MAINTAINER_MODE_FALSE='#' else MAINTAINER_MODE_TRUE='#' MAINTAINER_MODE_FALSE= fi MAINT=$MAINTAINER_MODE_TRUE ac_config_headers="$ac_config_headers config.h" ac_config_headers="$ac_config_headers slurm/slurm.h" ac_with_rpath=yes { $as_echo "$as_me:${as_lineno-$LINENO}: checking whether to include rpath in build" >&5 $as_echo_n "checking whether to include rpath in build... " >&6; } # Check whether --with-rpath was given. 
if test "${with_rpath+set}" = set; then : withval=$with_rpath; case "$withval" in yes) ac_with_rpath=yes ;; no) ac_with_rpath=no ;; *) { $as_echo "$as_me:${as_lineno-$LINENO}: result: doh!" >&5 $as_echo "doh!" >&6; } as_fn_error $? "bad value \"$withval\" for --without-rpath" "$LINENO" 5 ;; esac fi { $as_echo "$as_me:${as_lineno-$LINENO}: result: $ac_with_rpath" >&5 $as_echo "$ac_with_rpath" >&6; } DEPDIR="${am__leading_dot}deps" ac_config_commands="$ac_config_commands depfiles" am_make=${MAKE-make} cat > confinc << 'END' am__doit: @echo this is the am__doit target .PHONY: am__doit END # If we don't find an include directive, just comment out the code. { $as_echo "$as_me:${as_lineno-$LINENO}: checking for style of include used by $am_make" >&5 $as_echo_n "checking for style of include used by $am_make... " >&6; } am__include="#" am__quote= _am_result=none # First try GNU make style include. echo "include confinc" > confmf # Ignore all kinds of additional output from 'make'. case `$am_make -s -f confmf 2> /dev/null` in #( *the\ am__doit\ target*) am__include=include am__quote= _am_result=GNU ;; esac # Now try BSD make style include. if test "$am__include" = "#"; then echo '.include "confinc"' > confmf case `$am_make -s -f confmf 2> /dev/null` in #( *the\ am__doit\ target*) am__include=.include am__quote="\"" _am_result=BSD ;; esac fi { $as_echo "$as_me:${as_lineno-$LINENO}: result: $_am_result" >&5 $as_echo "$_am_result" >&6; } rm -f confinc confmf # Check whether --enable-dependency-tracking was given. 
if test "${enable_dependency_tracking+set}" = set; then : enableval=$enable_dependency_tracking; fi if test "x$enable_dependency_tracking" != xno; then am_depcomp="$ac_aux_dir/depcomp" AMDEPBACKSLASH='\' am__nodep='_no' fi if test "x$enable_dependency_tracking" != xno; then AMDEP_TRUE= AMDEP_FALSE='#' else AMDEP_TRUE='#' AMDEP_FALSE= fi ac_ext=c ac_cpp='$CPP $CPPFLAGS' ac_compile='$CC -c $CFLAGS $CPPFLAGS conftest.$ac_ext >&5' ac_link='$CC -o conftest$ac_exeext $CFLAGS $CPPFLAGS $LDFLAGS conftest.$ac_ext $LIBS >&5' ac_compiler_gnu=$ac_cv_c_compiler_gnu if test -n "$ac_tool_prefix"; then # Extract the first word of "${ac_tool_prefix}gcc", so it can be a program name with args. set dummy ${ac_tool_prefix}gcc; ac_word=$2 { $as_echo "$as_me:${as_lineno-$LINENO}: checking for $ac_word" >&5 $as_echo_n "checking for $ac_word... " >&6; } if ${ac_cv_prog_CC+:} false; then : $as_echo_n "(cached) " >&6 else if test -n "$CC"; then ac_cv_prog_CC="$CC" # Let the user override the test. else as_save_IFS=$IFS; IFS=$PATH_SEPARATOR for as_dir in $PATH do IFS=$as_save_IFS test -z "$as_dir" && as_dir=. for ac_exec_ext in '' $ac_executable_extensions; do if as_fn_executable_p "$as_dir/$ac_word$ac_exec_ext"; then ac_cv_prog_CC="${ac_tool_prefix}gcc" $as_echo "$as_me:${as_lineno-$LINENO}: found $as_dir/$ac_word$ac_exec_ext" >&5 break 2 fi done done IFS=$as_save_IFS fi fi CC=$ac_cv_prog_CC if test -n "$CC"; then { $as_echo "$as_me:${as_lineno-$LINENO}: result: $CC" >&5 $as_echo "$CC" >&6; } else { $as_echo "$as_me:${as_lineno-$LINENO}: result: no" >&5 $as_echo "no" >&6; } fi fi if test -z "$ac_cv_prog_CC"; then ac_ct_CC=$CC # Extract the first word of "gcc", so it can be a program name with args. set dummy gcc; ac_word=$2 { $as_echo "$as_me:${as_lineno-$LINENO}: checking for $ac_word" >&5 $as_echo_n "checking for $ac_word... 
" >&6; } if ${ac_cv_prog_ac_ct_CC+:} false; then : $as_echo_n "(cached) " >&6 else if test -n "$ac_ct_CC"; then ac_cv_prog_ac_ct_CC="$ac_ct_CC" # Let the user override the test. else as_save_IFS=$IFS; IFS=$PATH_SEPARATOR for as_dir in $PATH do IFS=$as_save_IFS test -z "$as_dir" && as_dir=. for ac_exec_ext in '' $ac_executable_extensions; do if as_fn_executable_p "$as_dir/$ac_word$ac_exec_ext"; then ac_cv_prog_ac_ct_CC="gcc" $as_echo "$as_me:${as_lineno-$LINENO}: found $as_dir/$ac_word$ac_exec_ext" >&5 break 2 fi done done IFS=$as_save_IFS fi fi ac_ct_CC=$ac_cv_prog_ac_ct_CC if test -n "$ac_ct_CC"; then { $as_echo "$as_me:${as_lineno-$LINENO}: result: $ac_ct_CC" >&5 $as_echo "$ac_ct_CC" >&6; } else { $as_echo "$as_me:${as_lineno-$LINENO}: result: no" >&5 $as_echo "no" >&6; } fi if test "x$ac_ct_CC" = x; then CC="" else case $cross_compiling:$ac_tool_warned in yes:) { $as_echo "$as_me:${as_lineno-$LINENO}: WARNING: using cross tools not prefixed with host triplet" >&5 $as_echo "$as_me: WARNING: using cross tools not prefixed with host triplet" >&2;} ac_tool_warned=yes ;; esac CC=$ac_ct_CC fi else CC="$ac_cv_prog_CC" fi if test -z "$CC"; then if test -n "$ac_tool_prefix"; then # Extract the first word of "${ac_tool_prefix}cc", so it can be a program name with args. set dummy ${ac_tool_prefix}cc; ac_word=$2 { $as_echo "$as_me:${as_lineno-$LINENO}: checking for $ac_word" >&5 $as_echo_n "checking for $ac_word... " >&6; } if ${ac_cv_prog_CC+:} false; then : $as_echo_n "(cached) " >&6 else if test -n "$CC"; then ac_cv_prog_CC="$CC" # Let the user override the test. else as_save_IFS=$IFS; IFS=$PATH_SEPARATOR for as_dir in $PATH do IFS=$as_save_IFS test -z "$as_dir" && as_dir=. 
for ac_exec_ext in '' $ac_executable_extensions; do if as_fn_executable_p "$as_dir/$ac_word$ac_exec_ext"; then ac_cv_prog_CC="${ac_tool_prefix}cc" $as_echo "$as_me:${as_lineno-$LINENO}: found $as_dir/$ac_word$ac_exec_ext" >&5 break 2 fi done done IFS=$as_save_IFS fi fi CC=$ac_cv_prog_CC if test -n "$CC"; then { $as_echo "$as_me:${as_lineno-$LINENO}: result: $CC" >&5 $as_echo "$CC" >&6; } else { $as_echo "$as_me:${as_lineno-$LINENO}: result: no" >&5 $as_echo "no" >&6; } fi fi fi if test -z "$CC"; then # Extract the first word of "cc", so it can be a program name with args. set dummy cc; ac_word=$2 { $as_echo "$as_me:${as_lineno-$LINENO}: checking for $ac_word" >&5 $as_echo_n "checking for $ac_word... " >&6; } if ${ac_cv_prog_CC+:} false; then : $as_echo_n "(cached) " >&6 else if test -n "$CC"; then ac_cv_prog_CC="$CC" # Let the user override the test. else ac_prog_rejected=no as_save_IFS=$IFS; IFS=$PATH_SEPARATOR for as_dir in $PATH do IFS=$as_save_IFS test -z "$as_dir" && as_dir=. for ac_exec_ext in '' $ac_executable_extensions; do if as_fn_executable_p "$as_dir/$ac_word$ac_exec_ext"; then if test "$as_dir/$ac_word$ac_exec_ext" = "/usr/ucb/cc"; then ac_prog_rejected=yes continue fi ac_cv_prog_CC="cc" $as_echo "$as_me:${as_lineno-$LINENO}: found $as_dir/$ac_word$ac_exec_ext" >&5 break 2 fi done done IFS=$as_save_IFS if test $ac_prog_rejected = yes; then # We found a bogon in the path, so make sure we never use it. set dummy $ac_cv_prog_CC shift if test $# != 0; then # We chose a different compiler from the bogus one. # However, it has the same basename, so the bogon will be chosen # first if we set CC to just the basename; use the full file name. 
shift ac_cv_prog_CC="$as_dir/$ac_word${1+' '}$@" fi fi fi fi CC=$ac_cv_prog_CC if test -n "$CC"; then { $as_echo "$as_me:${as_lineno-$LINENO}: result: $CC" >&5 $as_echo "$CC" >&6; } else { $as_echo "$as_me:${as_lineno-$LINENO}: result: no" >&5 $as_echo "no" >&6; } fi fi if test -z "$CC"; then if test -n "$ac_tool_prefix"; then for ac_prog in cl.exe do # Extract the first word of "$ac_tool_prefix$ac_prog", so it can be a program name with args. set dummy $ac_tool_prefix$ac_prog; ac_word=$2 { $as_echo "$as_me:${as_lineno-$LINENO}: checking for $ac_word" >&5 $as_echo_n "checking for $ac_word... " >&6; } if ${ac_cv_prog_CC+:} false; then : $as_echo_n "(cached) " >&6 else if test -n "$CC"; then ac_cv_prog_CC="$CC" # Let the user override the test. else as_save_IFS=$IFS; IFS=$PATH_SEPARATOR for as_dir in $PATH do IFS=$as_save_IFS test -z "$as_dir" && as_dir=. for ac_exec_ext in '' $ac_executable_extensions; do if as_fn_executable_p "$as_dir/$ac_word$ac_exec_ext"; then ac_cv_prog_CC="$ac_tool_prefix$ac_prog" $as_echo "$as_me:${as_lineno-$LINENO}: found $as_dir/$ac_word$ac_exec_ext" >&5 break 2 fi done done IFS=$as_save_IFS fi fi CC=$ac_cv_prog_CC if test -n "$CC"; then { $as_echo "$as_me:${as_lineno-$LINENO}: result: $CC" >&5 $as_echo "$CC" >&6; } else { $as_echo "$as_me:${as_lineno-$LINENO}: result: no" >&5 $as_echo "no" >&6; } fi test -n "$CC" && break done fi if test -z "$CC"; then ac_ct_CC=$CC for ac_prog in cl.exe do # Extract the first word of "$ac_prog", so it can be a program name with args. set dummy $ac_prog; ac_word=$2 { $as_echo "$as_me:${as_lineno-$LINENO}: checking for $ac_word" >&5 $as_echo_n "checking for $ac_word... " >&6; } if ${ac_cv_prog_ac_ct_CC+:} false; then : $as_echo_n "(cached) " >&6 else if test -n "$ac_ct_CC"; then ac_cv_prog_ac_ct_CC="$ac_ct_CC" # Let the user override the test. else as_save_IFS=$IFS; IFS=$PATH_SEPARATOR for as_dir in $PATH do IFS=$as_save_IFS test -z "$as_dir" && as_dir=. 
for ac_exec_ext in '' $ac_executable_extensions; do if as_fn_executable_p "$as_dir/$ac_word$ac_exec_ext"; then ac_cv_prog_ac_ct_CC="$ac_prog" $as_echo "$as_me:${as_lineno-$LINENO}: found $as_dir/$ac_word$ac_exec_ext" >&5 break 2 fi done done IFS=$as_save_IFS fi fi ac_ct_CC=$ac_cv_prog_ac_ct_CC if test -n "$ac_ct_CC"; then { $as_echo "$as_me:${as_lineno-$LINENO}: result: $ac_ct_CC" >&5 $as_echo "$ac_ct_CC" >&6; } else { $as_echo "$as_me:${as_lineno-$LINENO}: result: no" >&5 $as_echo "no" >&6; } fi test -n "$ac_ct_CC" && break done if test "x$ac_ct_CC" = x; then CC="" else case $cross_compiling:$ac_tool_warned in yes:) { $as_echo "$as_me:${as_lineno-$LINENO}: WARNING: using cross tools not prefixed with host triplet" >&5 $as_echo "$as_me: WARNING: using cross tools not prefixed with host triplet" >&2;} ac_tool_warned=yes ;; esac CC=$ac_ct_CC fi fi fi test -z "$CC" && { { $as_echo "$as_me:${as_lineno-$LINENO}: error: in \`$ac_pwd':" >&5 $as_echo "$as_me: error: in \`$ac_pwd':" >&2;} as_fn_error $? "no acceptable C compiler found in \$PATH See \`config.log' for more details" "$LINENO" 5; } # Provide some information about the compiler. $as_echo "$as_me:${as_lineno-$LINENO}: checking for C compiler version" >&5 set X $ac_compile ac_compiler=$2 for ac_option in --version -v -V -qversion; do { { ac_try="$ac_compiler $ac_option >&5" case "(($ac_try" in *\"* | *\`* | *\\*) ac_try_echo=\$ac_try;; *) ac_try_echo=$ac_try;; esac eval ac_try_echo="\"\$as_me:${as_lineno-$LINENO}: $ac_try_echo\"" $as_echo "$ac_try_echo"; } >&5 (eval "$ac_compiler $ac_option >&5") 2>conftest.err ac_status=$? if test -s conftest.err; then sed '10a\ ... rest of stderr output deleted ... 10q' conftest.err >conftest.er1 cat conftest.er1 >&5 fi rm -f conftest.er1 conftest.err $as_echo "$as_me:${as_lineno-$LINENO}: \$? = $ac_status" >&5 test $ac_status = 0; } done cat confdefs.h - <<_ACEOF >conftest.$ac_ext /* end confdefs.h. 
*/ int main () { ; return 0; } _ACEOF ac_clean_files_save=$ac_clean_files ac_clean_files="$ac_clean_files a.out a.out.dSYM a.exe b.out" # Try to create an executable without -o first, disregard a.out. # It will help us diagnose broken compilers, and finding out an intuition # of exeext. { $as_echo "$as_me:${as_lineno-$LINENO}: checking whether the C compiler works" >&5 $as_echo_n "checking whether the C compiler works... " >&6; } ac_link_default=`$as_echo "$ac_link" | sed 's/ -o *conftest[^ ]*//'` # The possible output files: ac_files="a.out conftest.exe conftest a.exe a_out.exe b.out conftest.*" ac_rmfiles= for ac_file in $ac_files do case $ac_file in *.$ac_ext | *.xcoff | *.tds | *.d | *.pdb | *.xSYM | *.bb | *.bbg | *.map | *.inf | *.dSYM | *.o | *.obj ) ;; * ) ac_rmfiles="$ac_rmfiles $ac_file";; esac done rm -f $ac_rmfiles if { { ac_try="$ac_link_default" case "(($ac_try" in *\"* | *\`* | *\\*) ac_try_echo=\$ac_try;; *) ac_try_echo=$ac_try;; esac eval ac_try_echo="\"\$as_me:${as_lineno-$LINENO}: $ac_try_echo\"" $as_echo "$ac_try_echo"; } >&5 (eval "$ac_link_default") 2>&5 ac_status=$? $as_echo "$as_me:${as_lineno-$LINENO}: \$? = $ac_status" >&5 test $ac_status = 0; }; then : # Autoconf-2.13 could set the ac_cv_exeext variable to `no'. # So ignore a value of `no', otherwise this would lead to `EXEEXT = no' # in a Makefile. We should not override ac_cv_exeext if it was cached, # so that the user can short-circuit this test for compilers unknown to # Autoconf. for ac_file in $ac_files '' do test -f "$ac_file" || continue case $ac_file in *.$ac_ext | *.xcoff | *.tds | *.d | *.pdb | *.xSYM | *.bb | *.bbg | *.map | *.inf | *.dSYM | *.o | *.obj ) ;; [ab].out ) # We found the default executable, but exeext='' is most # certainly right. 
break;; *.* ) if test "${ac_cv_exeext+set}" = set && test "$ac_cv_exeext" != no; then :; else ac_cv_exeext=`expr "$ac_file" : '[^.]*\(\..*\)'` fi # We set ac_cv_exeext here because the later test for it is not # safe: cross compilers may not add the suffix if given an `-o' # argument, so we may need to know it at that point already. # Even if this section looks crufty: it has the advantage of # actually working. break;; * ) break;; esac done test "$ac_cv_exeext" = no && ac_cv_exeext= else ac_file='' fi if test -z "$ac_file"; then : { $as_echo "$as_me:${as_lineno-$LINENO}: result: no" >&5 $as_echo "no" >&6; } $as_echo "$as_me: failed program was:" >&5 sed 's/^/| /' conftest.$ac_ext >&5 { { $as_echo "$as_me:${as_lineno-$LINENO}: error: in \`$ac_pwd':" >&5 $as_echo "$as_me: error: in \`$ac_pwd':" >&2;} as_fn_error 77 "C compiler cannot create executables See \`config.log' for more details" "$LINENO" 5; } else { $as_echo "$as_me:${as_lineno-$LINENO}: result: yes" >&5 $as_echo "yes" >&6; } fi { $as_echo "$as_me:${as_lineno-$LINENO}: checking for C compiler default output file name" >&5 $as_echo_n "checking for C compiler default output file name... " >&6; } { $as_echo "$as_me:${as_lineno-$LINENO}: result: $ac_file" >&5 $as_echo "$ac_file" >&6; } ac_exeext=$ac_cv_exeext rm -f -r a.out a.out.dSYM a.exe conftest$ac_cv_exeext b.out ac_clean_files=$ac_clean_files_save { $as_echo "$as_me:${as_lineno-$LINENO}: checking for suffix of executables" >&5 $as_echo_n "checking for suffix of executables... " >&6; } if { { ac_try="$ac_link" case "(($ac_try" in *\"* | *\`* | *\\*) ac_try_echo=\$ac_try;; *) ac_try_echo=$ac_try;; esac eval ac_try_echo="\"\$as_me:${as_lineno-$LINENO}: $ac_try_echo\"" $as_echo "$ac_try_echo"; } >&5 (eval "$ac_link") 2>&5 ac_status=$? $as_echo "$as_me:${as_lineno-$LINENO}: \$? = $ac_status" >&5 test $ac_status = 0; }; then : # If both `conftest.exe' and `conftest' are `present' (well, observable) # catch `conftest.exe'. 
For instance with Cygwin, `ls conftest' will # work properly (i.e., refer to `conftest.exe'), while it won't with # `rm'. for ac_file in conftest.exe conftest conftest.*; do test -f "$ac_file" || continue case $ac_file in *.$ac_ext | *.xcoff | *.tds | *.d | *.pdb | *.xSYM | *.bb | *.bbg | *.map | *.inf | *.dSYM | *.o | *.obj ) ;; *.* ) ac_cv_exeext=`expr "$ac_file" : '[^.]*\(\..*\)'` break;; * ) break;; esac done else { { $as_echo "$as_me:${as_lineno-$LINENO}: error: in \`$ac_pwd':" >&5 $as_echo "$as_me: error: in \`$ac_pwd':" >&2;} as_fn_error $? "cannot compute suffix of executables: cannot compile and link See \`config.log' for more details" "$LINENO" 5; } fi rm -f conftest conftest$ac_cv_exeext { $as_echo "$as_me:${as_lineno-$LINENO}: result: $ac_cv_exeext" >&5 $as_echo "$ac_cv_exeext" >&6; } rm -f conftest.$ac_ext EXEEXT=$ac_cv_exeext ac_exeext=$EXEEXT cat confdefs.h - <<_ACEOF >conftest.$ac_ext /* end confdefs.h. */ #include <stdio.h> int main () { FILE *f = fopen ("conftest.out", "w"); return ferror (f) || fclose (f) != 0; ; return 0; } _ACEOF ac_clean_files="$ac_clean_files conftest.out" # Check that the compiler produces executables we can run. If not, either # the compiler is broken, or we cross compile. { $as_echo "$as_me:${as_lineno-$LINENO}: checking whether we are cross compiling" >&5 $as_echo_n "checking whether we are cross compiling... " >&6; } if test "$cross_compiling" != yes; then { { ac_try="$ac_link" case "(($ac_try" in *\"* | *\`* | *\\*) ac_try_echo=\$ac_try;; *) ac_try_echo=$ac_try;; esac eval ac_try_echo="\"\$as_me:${as_lineno-$LINENO}: $ac_try_echo\"" $as_echo "$ac_try_echo"; } >&5 (eval "$ac_link") 2>&5 ac_status=$? $as_echo "$as_me:${as_lineno-$LINENO}: \$?
= $ac_status" >&5 test $ac_status = 0; } if { ac_try='./conftest$ac_cv_exeext' { { case "(($ac_try" in *\"* | *\`* | *\\*) ac_try_echo=\$ac_try;; *) ac_try_echo=$ac_try;; esac eval ac_try_echo="\"\$as_me:${as_lineno-$LINENO}: $ac_try_echo\"" $as_echo "$ac_try_echo"; } >&5 (eval "$ac_try") 2>&5 ac_status=$? $as_echo "$as_me:${as_lineno-$LINENO}: \$? = $ac_status" >&5 test $ac_status = 0; }; }; then cross_compiling=no else if test "$cross_compiling" = maybe; then cross_compiling=yes else { { $as_echo "$as_me:${as_lineno-$LINENO}: error: in \`$ac_pwd':" >&5 $as_echo "$as_me: error: in \`$ac_pwd':" >&2;} as_fn_error $? "cannot run C compiled programs. If you meant to cross compile, use \`--host'. See \`config.log' for more details" "$LINENO" 5; } fi fi fi { $as_echo "$as_me:${as_lineno-$LINENO}: result: $cross_compiling" >&5 $as_echo "$cross_compiling" >&6; } rm -f conftest.$ac_ext conftest$ac_cv_exeext conftest.out ac_clean_files=$ac_clean_files_save { $as_echo "$as_me:${as_lineno-$LINENO}: checking for suffix of object files" >&5 $as_echo_n "checking for suffix of object files... " >&6; } if ${ac_cv_objext+:} false; then : $as_echo_n "(cached) " >&6 else cat confdefs.h - <<_ACEOF >conftest.$ac_ext /* end confdefs.h. */ int main () { ; return 0; } _ACEOF rm -f conftest.o conftest.obj if { { ac_try="$ac_compile" case "(($ac_try" in *\"* | *\`* | *\\*) ac_try_echo=\$ac_try;; *) ac_try_echo=$ac_try;; esac eval ac_try_echo="\"\$as_me:${as_lineno-$LINENO}: $ac_try_echo\"" $as_echo "$ac_try_echo"; } >&5 (eval "$ac_compile") 2>&5 ac_status=$? $as_echo "$as_me:${as_lineno-$LINENO}: \$? 
= $ac_status" >&5 test $ac_status = 0; }; then : for ac_file in conftest.o conftest.obj conftest.*; do test -f "$ac_file" || continue; case $ac_file in *.$ac_ext | *.xcoff | *.tds | *.d | *.pdb | *.xSYM | *.bb | *.bbg | *.map | *.inf | *.dSYM ) ;; *) ac_cv_objext=`expr "$ac_file" : '.*\.\(.*\)'` break;; esac done else $as_echo "$as_me: failed program was:" >&5 sed 's/^/| /' conftest.$ac_ext >&5 { { $as_echo "$as_me:${as_lineno-$LINENO}: error: in \`$ac_pwd':" >&5 $as_echo "$as_me: error: in \`$ac_pwd':" >&2;} as_fn_error $? "cannot compute suffix of object files: cannot compile See \`config.log' for more details" "$LINENO" 5; } fi rm -f conftest.$ac_cv_objext conftest.$ac_ext fi { $as_echo "$as_me:${as_lineno-$LINENO}: result: $ac_cv_objext" >&5 $as_echo "$ac_cv_objext" >&6; } OBJEXT=$ac_cv_objext ac_objext=$OBJEXT { $as_echo "$as_me:${as_lineno-$LINENO}: checking whether we are using the GNU C compiler" >&5 $as_echo_n "checking whether we are using the GNU C compiler... " >&6; } if ${ac_cv_c_compiler_gnu+:} false; then : $as_echo_n "(cached) " >&6 else cat confdefs.h - <<_ACEOF >conftest.$ac_ext /* end confdefs.h. */ int main () { #ifndef __GNUC__ choke me #endif ; return 0; } _ACEOF if ac_fn_c_try_compile "$LINENO"; then : ac_compiler_gnu=yes else ac_compiler_gnu=no fi rm -f core conftest.err conftest.$ac_objext conftest.$ac_ext ac_cv_c_compiler_gnu=$ac_compiler_gnu fi { $as_echo "$as_me:${as_lineno-$LINENO}: result: $ac_cv_c_compiler_gnu" >&5 $as_echo "$ac_cv_c_compiler_gnu" >&6; } if test $ac_compiler_gnu = yes; then GCC=yes else GCC= fi ac_test_CFLAGS=${CFLAGS+set} ac_save_CFLAGS=$CFLAGS { $as_echo "$as_me:${as_lineno-$LINENO}: checking whether $CC accepts -g" >&5 $as_echo_n "checking whether $CC accepts -g... " >&6; } if ${ac_cv_prog_cc_g+:} false; then : $as_echo_n "(cached) " >&6 else ac_save_c_werror_flag=$ac_c_werror_flag ac_c_werror_flag=yes ac_cv_prog_cc_g=no CFLAGS="-g" cat confdefs.h - <<_ACEOF >conftest.$ac_ext /* end confdefs.h. 
*/ int main () { ; return 0; } _ACEOF if ac_fn_c_try_compile "$LINENO"; then : ac_cv_prog_cc_g=yes else CFLAGS="" cat confdefs.h - <<_ACEOF >conftest.$ac_ext /* end confdefs.h. */ int main () { ; return 0; } _ACEOF if ac_fn_c_try_compile "$LINENO"; then : else ac_c_werror_flag=$ac_save_c_werror_flag CFLAGS="-g" cat confdefs.h - <<_ACEOF >conftest.$ac_ext /* end confdefs.h. */ int main () { ; return 0; } _ACEOF if ac_fn_c_try_compile "$LINENO"; then : ac_cv_prog_cc_g=yes fi rm -f core conftest.err conftest.$ac_objext conftest.$ac_ext fi rm -f core conftest.err conftest.$ac_objext conftest.$ac_ext fi rm -f core conftest.err conftest.$ac_objext conftest.$ac_ext ac_c_werror_flag=$ac_save_c_werror_flag fi { $as_echo "$as_me:${as_lineno-$LINENO}: result: $ac_cv_prog_cc_g" >&5 $as_echo "$ac_cv_prog_cc_g" >&6; } if test "$ac_test_CFLAGS" = set; then CFLAGS=$ac_save_CFLAGS elif test $ac_cv_prog_cc_g = yes; then if test "$GCC" = yes; then CFLAGS="-g -O2" else CFLAGS="-g" fi else if test "$GCC" = yes; then CFLAGS="-O2" else CFLAGS= fi fi { $as_echo "$as_me:${as_lineno-$LINENO}: checking for $CC option to accept ISO C89" >&5 $as_echo_n "checking for $CC option to accept ISO C89... " >&6; } if ${ac_cv_prog_cc_c89+:} false; then : $as_echo_n "(cached) " >&6 else ac_cv_prog_cc_c89=no ac_save_CC=$CC cat confdefs.h - <<_ACEOF >conftest.$ac_ext /* end confdefs.h. */ #include <stdarg.h> #include <stdio.h> struct stat; /* Most of the following tests are stolen from RCS 5.7's src/conf.sh. */ struct buf { int x; }; FILE * (*rcsopen) (struct buf *, struct stat *, int); static char *e (p, i) char **p; int i; { return p[i]; } static char *f (char * (*g) (char **, int), char **p, ...) { char *s; va_list v; va_start (v,p); s = g (p, va_arg (v,int)); va_end (v); return s; } /* OSF 4.0 Compaq cc is some sort of almost-ANSI by default. It has function prototypes and stuff, but not '\xHH' hex character constants. These don't provoke an error unfortunately, instead are silently treated as 'x'.
The following induces an error, until -std is added to get proper ANSI mode. Curiously '\x00'!='x' always comes out true, for an array size at least. It's necessary to write '\x00'==0 to get something that's true only with -std. */ int osf4_cc_array ['\x00' == 0 ? 1 : -1]; /* IBM C 6 for AIX is almost-ANSI by default, but it replaces macro parameters inside strings and character constants. */ #define FOO(x) 'x' int xlc6_cc_array[FOO(a) == 'x' ? 1 : -1]; int test (int i, double x); struct s1 {int (*f) (int a);}; struct s2 {int (*f) (double a);}; int pairnames (int, char **, FILE *(*)(struct buf *, struct stat *, int), int, int); int argc; char **argv; int main () { return f (e, argv, 0) != argv[0] || f (e, argv, 1) != argv[1]; ; return 0; } _ACEOF for ac_arg in '' -qlanglvl=extc89 -qlanglvl=ansi -std \ -Ae "-Aa -D_HPUX_SOURCE" "-Xc -D__EXTENSIONS__" do CC="$ac_save_CC $ac_arg" if ac_fn_c_try_compile "$LINENO"; then : ac_cv_prog_cc_c89=$ac_arg fi rm -f core conftest.err conftest.$ac_objext test "x$ac_cv_prog_cc_c89" != "xno" && break done rm -f conftest.$ac_ext CC=$ac_save_CC fi # AC_CACHE_VAL case "x$ac_cv_prog_cc_c89" in x) { $as_echo "$as_me:${as_lineno-$LINENO}: result: none needed" >&5 $as_echo "none needed" >&6; } ;; xno) { $as_echo "$as_me:${as_lineno-$LINENO}: result: unsupported" >&5 $as_echo "unsupported" >&6; } ;; *) CC="$CC $ac_cv_prog_cc_c89" { $as_echo "$as_me:${as_lineno-$LINENO}: result: $ac_cv_prog_cc_c89" >&5 $as_echo "$ac_cv_prog_cc_c89" >&6; } ;; esac if test "x$ac_cv_prog_cc_c89" != xno; then : fi ac_ext=c ac_cpp='$CPP $CPPFLAGS' ac_compile='$CC -c $CFLAGS $CPPFLAGS conftest.$ac_ext >&5' ac_link='$CC -o conftest$ac_exeext $CFLAGS $CPPFLAGS $LDFLAGS conftest.$ac_ext $LIBS >&5' ac_compiler_gnu=$ac_cv_c_compiler_gnu ac_ext=c ac_cpp='$CPP $CPPFLAGS' ac_compile='$CC -c $CFLAGS $CPPFLAGS conftest.$ac_ext >&5' ac_link='$CC -o conftest$ac_exeext $CFLAGS $CPPFLAGS $LDFLAGS conftest.$ac_ext $LIBS >&5' ac_compiler_gnu=$ac_cv_c_compiler_gnu { $as_echo 
"$as_me:${as_lineno-$LINENO}: checking whether $CC understands -c and -o together" >&5 $as_echo_n "checking whether $CC understands -c and -o together... " >&6; } if ${am_cv_prog_cc_c_o+:} false; then : $as_echo_n "(cached) " >&6 else cat confdefs.h - <<_ACEOF >conftest.$ac_ext /* end confdefs.h. */ int main () { ; return 0; } _ACEOF # Make sure it works both with $CC and with simple cc. # Following AC_PROG_CC_C_O, we do the test twice because some # compilers refuse to overwrite an existing .o file with -o, # though they will create one. am_cv_prog_cc_c_o=yes for am_i in 1 2; do if { echo "$as_me:$LINENO: $CC -c conftest.$ac_ext -o conftest2.$ac_objext" >&5 ($CC -c conftest.$ac_ext -o conftest2.$ac_objext) >&5 2>&5 ac_status=$? echo "$as_me:$LINENO: \$? = $ac_status" >&5 (exit $ac_status); } \ && test -f conftest2.$ac_objext; then : OK else am_cv_prog_cc_c_o=no break fi done rm -f core conftest* unset am_i fi { $as_echo "$as_me:${as_lineno-$LINENO}: result: $am_cv_prog_cc_c_o" >&5 $as_echo "$am_cv_prog_cc_c_o" >&6; } if test "$am_cv_prog_cc_c_o" != yes; then # Losing compiler, so override with the script. # FIXME: It is wrong to rewrite CC. # But if we don't then we get into trouble of one sort or another. # A longer-term fix would be to have automake use am__CC in this case, # and then we could set am__CC="\$(top_srcdir)/compile \$(CC)" CC="$am_aux_dir/compile $CC" fi ac_ext=c ac_cpp='$CPP $CPPFLAGS' ac_compile='$CC -c $CFLAGS $CPPFLAGS conftest.$ac_ext >&5' ac_link='$CC -o conftest$ac_exeext $CFLAGS $CPPFLAGS $LDFLAGS conftest.$ac_ext $LIBS >&5' ac_compiler_gnu=$ac_cv_c_compiler_gnu depcc="$CC" am_compiler_list= { $as_echo "$as_me:${as_lineno-$LINENO}: checking dependency style of $depcc" >&5 $as_echo_n "checking dependency style of $depcc... " >&6; } if ${am_cv_CC_dependencies_compiler_type+:} false; then : $as_echo_n "(cached) " >&6 else if test -z "$AMDEP_TRUE" && test -f "$am_depcomp"; then # We make a subdir and do the tests there. 
Otherwise we can end up # making bogus files that we don't know about and never remove. For # instance it was reported that on HP-UX the gcc test will end up # making a dummy file named 'D' -- because '-MD' means "put the output # in D". rm -rf conftest.dir mkdir conftest.dir # Copy depcomp to subdir because otherwise we won't find it if we're # using a relative directory. cp "$am_depcomp" conftest.dir cd conftest.dir # We will build objects and dependencies in a subdirectory because # it helps to detect inapplicable dependency modes. For instance # both Tru64's cc and ICC support -MD to output dependencies as a # side effect of compilation, but ICC will put the dependencies in # the current directory while Tru64 will put them in the object # directory. mkdir sub am_cv_CC_dependencies_compiler_type=none if test "$am_compiler_list" = ""; then am_compiler_list=`sed -n 's/^#*\([a-zA-Z0-9]*\))$/\1/p' < ./depcomp` fi am__universal=false case " $depcc " in #( *\ -arch\ *\ -arch\ *) am__universal=true ;; esac for depmode in $am_compiler_list; do # Setup a source with many dependencies, because some compilers # like to wrap large dependency lists on column 80 (with \), and # we should not choose a depcomp mode which is confused by this. # # We need to recreate these files for each test, as the compiler may # overwrite some of them when testing with obscure command lines. # This happens at least with the AIX C compiler. : > sub/conftest.c for i in 1 2 3 4 5 6; do echo '#include "conftst'$i'.h"' >> sub/conftest.c # Using ": > sub/conftst$i.h" creates only sub/conftst1.h with # Solaris 10 /bin/sh. echo '/* dummy */' > sub/conftst$i.h done echo "${am__include} ${am__quote}sub/conftest.Po${am__quote}" > confmf # We check with '-c' and '-o' for the sake of the "dashmstdout" # mode. It turns out that the SunPro C++ compiler does not properly # handle '-M -o', and we need to detect this. Also, some Intel # versions had trouble with output in subdirs. 
am__obj=sub/conftest.${OBJEXT-o} am__minus_obj="-o $am__obj" case $depmode in gcc) # This depmode causes a compiler race in universal mode. test "$am__universal" = false || continue ;; nosideeffect) # After this tag, mechanisms are not by side-effect, so they'll # only be used when explicitly requested. if test "x$enable_dependency_tracking" = xyes; then continue else break fi ;; msvc7 | msvc7msys | msvisualcpp | msvcmsys) # This compiler won't grok '-c -o', but also, the minuso test has # not run yet. These depmodes are late enough in the game, and # so weak that their functioning should not be impacted. am__obj=conftest.${OBJEXT-o} am__minus_obj= ;; none) break ;; esac if depmode=$depmode \ source=sub/conftest.c object=$am__obj \ depfile=sub/conftest.Po tmpdepfile=sub/conftest.TPo \ $SHELL ./depcomp $depcc -c $am__minus_obj sub/conftest.c \ >/dev/null 2>conftest.err && grep sub/conftst1.h sub/conftest.Po > /dev/null 2>&1 && grep sub/conftst6.h sub/conftest.Po > /dev/null 2>&1 && grep $am__obj sub/conftest.Po > /dev/null 2>&1 && ${MAKE-make} -s -f confmf > /dev/null 2>&1; then # icc doesn't choke on unknown options, it will just issue warnings # or remarks (even with -Werror). So we grep stderr for any message # that says an option was ignored or not supported. # When given -MP, icc 7.0 and 7.1 complain thusly: # icc: Command line warning: ignoring option '-M'; no argument required # The diagnosis changed in icc 8.0: # icc: Command line remark: option '-MP' not supported if (grep 'ignoring option' conftest.err || grep 'not supported' conftest.err) >/dev/null 2>&1; then :; else am_cv_CC_dependencies_compiler_type=$depmode break fi fi done cd .. 
rm -rf conftest.dir else am_cv_CC_dependencies_compiler_type=none fi fi { $as_echo "$as_me:${as_lineno-$LINENO}: result: $am_cv_CC_dependencies_compiler_type" >&5 $as_echo "$am_cv_CC_dependencies_compiler_type" >&6; } CCDEPMODE=depmode=$am_cv_CC_dependencies_compiler_type if test "x$enable_dependency_tracking" != xno \ && test "$am_cv_CC_dependencies_compiler_type" = gcc3; then am__fastdepCC_TRUE= am__fastdepCC_FALSE='#' else am__fastdepCC_TRUE='#' am__fastdepCC_FALSE= fi ac_real_bluegene_loaded=no ac_bluegene_loaded=no # Check whether --with-db2-dir was given. if test "${with_db2_dir+set}" = set; then : withval=$with_db2_dir; trydb2dir=$withval fi # test for bluegene emulation mode # Check whether --enable-bluegene-emulation was given. if test "${enable_bluegene_emulation+set}" = set; then : enableval=$enable_bluegene_emulation; case "$enableval" in yes) bluegene_emulation=yes ;; no) bluegene_emulation=no ;; *) as_fn_error $? "bad value \"$enableval\" for --enable-bluegene-emulation" "$LINENO" 5 ;; esac fi # Check whether --enable-bgl-emulation was given. if test "${enable_bgl_emulation+set}" = set; then : enableval=$enable_bgl_emulation; case "$enableval" in yes) bgl_emulation=yes ;; no) bgl_emulation=no ;; *) as_fn_error $? 
"bad value \"$enableval\" for --enable-bgl-emulation" "$LINENO" 5 ;; esac fi if test "x$bluegene_emulation" = "xyes" -o "x$bgl_emulation" = "xyes"; then $as_echo "#define HAVE_3D 1" >>confdefs.h $as_echo "#define SYSTEM_DIMENSIONS 3" >>confdefs.h $as_echo "#define HAVE_BG 1" >>confdefs.h $as_echo "#define HAVE_BG_L_P 1" >>confdefs.h $as_echo "#define HAVE_BGL 1" >>confdefs.h $as_echo "#define HAVE_FRONT_END 1" >>confdefs.h { $as_echo "$as_me:${as_lineno-$LINENO}: Running in BG/L emulation mode" >&5 $as_echo "$as_me: Running in BG/L emulation mode" >&6;} bg_default_dirs="" #define ac_bluegene_loaded so we don't load another bluegene conf ac_bluegene_loaded=yes else bg_default_dirs="/bgl/BlueLight/ppcfloor/bglsys /opt/IBM/db2/V8.1 /u/bgdb2cli/sqllib /home/bgdb2cli/sqllib" fi for bg_dir in $trydb2dir "" $bg_default_dirs; do # Skip directories that don't exist if test ! -z "$bg_dir" -a ! -d "$bg_dir" ; then continue; fi # Search for required BG API libraries in the directory if test -z "$have_bg_ar" -a -f "$bg_dir/lib64/libbglbridge.so" ; then have_bg_ar=yes bg_bridge_so="$bg_dir/lib64/libbglbridge.so" bg_ldflags="$bg_ldflags -L$bg_dir/lib64 -L/usr/lib64 -Wl,--unresolved-symbols=ignore-in-shared-libs -lbglbridge -lbgldb -ltableapi -lbglmachine -lexpat -lsaymessage" fi # Search for required DB2 library in the directory if test -z "$have_db2" -a -f "$bg_dir/lib64/libdb2.so" ; then have_db2=yes bg_db2_so="$bg_dir/lib64/libdb2.so" bg_ldflags="$bg_ldflags -L$bg_dir/lib64 -ldb2" fi # Search for headers in the directory if test -z "$have_bg_hdr" -a -f "$bg_dir/include/rm_api.h" ; then have_bg_hdr=yes bg_includes="-I$bg_dir/include" fi done if test ! -z "$have_bg_ar" -a ! -z "$have_bg_hdr" -a ! -z "$have_db2" ; then # ac_with_readline="no" # Test to make sure the api is good have_bg_files=yes saved_LDFLAGS="$LDFLAGS" LDFLAGS="$saved_LDFLAGS $bg_ldflags -m64" cat confdefs.h - <<_ACEOF >conftest.$ac_ext /* end confdefs.h. 
*/ int rm_set_serial(char *); int main () { rm_set_serial(""); ; return 0; } _ACEOF if ac_fn_c_try_link "$LINENO"; then : have_bg_files=yes else as_fn_error $? "There is a problem linking to the BG/L api." "$LINENO" 5 fi rm -f core conftest.err conftest.$ac_objext \ conftest$ac_exeext conftest.$ac_ext LDFLAGS="$saved_LDFLAGS" fi if test ! -z "$have_bg_files" ; then BG_INCLUDES="$bg_includes" CFLAGS="$CFLAGS -m64 --std=gnu99" CXXFLAGS="$CXXFLAGS $CFLAGS" $as_echo "#define HAVE_3D 1" >>confdefs.h $as_echo "#define SYSTEM_DIMENSIONS 3" >>confdefs.h $as_echo "#define HAVE_BG 1" >>confdefs.h $as_echo "#define HAVE_BG_L_P 1" >>confdefs.h $as_echo "#define HAVE_BGL 1" >>confdefs.h $as_echo "#define HAVE_FRONT_END 1" >>confdefs.h $as_echo "#define HAVE_BG_FILES 1" >>confdefs.h cat >>confdefs.h <<_ACEOF #define BG_BRIDGE_SO "$bg_bridge_so" _ACEOF cat >>confdefs.h <<_ACEOF #define BG_DB2_SO "$bg_db2_so" _ACEOF { $as_echo "$as_me:${as_lineno-$LINENO}: checking for BG serial value" >&5 $as_echo_n "checking for BG serial value... " >&6; } bg_serial="BGL" # Check whether --with-bg-serial was given. if test "${with_bg_serial+set}" = set; then : withval=$with_bg_serial; bg_serial="$withval" fi { $as_echo "$as_me:${as_lineno-$LINENO}: result: $bg_serial" >&5 $as_echo "$bg_serial" >&6; } cat >>confdefs.h <<_ACEOF #define BG_SERIAL "$bg_serial" _ACEOF #define ac_bluegene_loaded so we don't load another bluegene conf ac_bluegene_loaded=yes ac_real_bluegene_loaded=yes fi if test "x$ac_bluegene_loaded" = "xyes"; then BGL_LOADED_TRUE= BGL_LOADED_FALSE='#' else BGL_LOADED_TRUE='#' BGL_LOADED_FALSE= fi # test for bluegene emulation mode # Check whether --enable-bgp-emulation was given. if test "${enable_bgp_emulation+set}" = set; then : enableval=$enable_bgp_emulation; case "$enableval" in yes) bgp_emulation=yes ;; no) bgp_emulation=no ;; *) as_fn_error $? 
"bad value \"$enableval\" for --enable-bgp-emulation" "$LINENO" 5 ;; esac fi # Skip if already set if test "x$ac_bluegene_loaded" = "xyes" ; then bg_default_dirs="" elif test "x$bgp_emulation" = "xyes"; then $as_echo "#define HAVE_3D 1" >>confdefs.h $as_echo "#define SYSTEM_DIMENSIONS 3" >>confdefs.h $as_echo "#define HAVE_BG 1" >>confdefs.h $as_echo "#define HAVE_BG_L_P 1" >>confdefs.h $as_echo "#define HAVE_BGP 1" >>confdefs.h $as_echo "#define HAVE_FRONT_END 1" >>confdefs.h { $as_echo "$as_me:${as_lineno-$LINENO}: Running in BG/P emulation mode" >&5 $as_echo "$as_me: Running in BG/P emulation mode" >&6;} bg_default_dirs="" #define ac_bluegene_loaded so we don't load another bluegene conf ac_bluegene_loaded=yes else bg_default_dirs="/bgsys/drivers/ppcfloor" fi libname=bgpbridge for bg_dir in $trydb2dir "" $bg_default_dirs; do # Skip directories that don't exist if test ! -z "$bg_dir" -a ! -d "$bg_dir" ; then continue; fi soloc=$bg_dir/lib64/lib$libname.so # Search for required BG API libraries in the directory if test -z "$have_bg_ar" -a -f "$soloc" ; then have_bgp_ar=yes bg_ldflags="$bg_ldflags -L$bg_dir/lib64 -L/usr/lib64 -Wl,--unresolved-symbols=ignore-in-shared-libs -l$libname" fi # Search for headers in the directory if test -z "$have_bg_hdr" -a -f "$bg_dir/include/rm_api.h" ; then have_bgp_hdr=yes bg_includes="-I$bg_dir/include" fi done if test ! -z "$have_bgp_ar" -a ! -z "$have_bgp_hdr" ; then # ac_with_readline="no" # Test to make sure the api is good saved_LDFLAGS="$LDFLAGS" LDFLAGS="$saved_LDFLAGS $bg_ldflags -m64" cat confdefs.h - <<_ACEOF >conftest.$ac_ext /* end confdefs.h. */ int rm_set_serial(char *); int main () { rm_set_serial(""); ; return 0; } _ACEOF if ac_fn_c_try_link "$LINENO"; then : have_bgp_files=yes else as_fn_error $? "There is a problem linking to the BG/P api." "$LINENO" 5 fi rm -f core conftest.err conftest.$ac_objext \ conftest$ac_exeext conftest.$ac_ext LDFLAGS="$saved_LDFLAGS" fi if test ! 
-z "$have_bgp_files" ; then BG_INCLUDES="$bg_includes" CFLAGS="$CFLAGS -m64" CXXFLAGS="$CXXFLAGS $CFLAGS" $as_echo "#define HAVE_3D 1" >>confdefs.h $as_echo "#define SYSTEM_DIMENSIONS 3" >>confdefs.h $as_echo "#define HAVE_BG 1" >>confdefs.h $as_echo "#define HAVE_BG_L_P 1" >>confdefs.h $as_echo "#define HAVE_BGP 1" >>confdefs.h $as_echo "#define HAVE_FRONT_END 1" >>confdefs.h $as_echo "#define HAVE_BG_FILES 1" >>confdefs.h cat >>confdefs.h <<_ACEOF #define BG_BRIDGE_SO "$soloc" _ACEOF { $as_echo "$as_me:${as_lineno-$LINENO}: checking for BG serial value" >&5 $as_echo_n "checking for BG serial value... " >&6; } bg_serial="BGP" # Check whether --with-bg-serial was given. if test "${with_bg_serial+set}" = set; then : withval=$with_bg_serial; bg_serial="$withval" fi { $as_echo "$as_me:${as_lineno-$LINENO}: result: $bg_serial" >&5 $as_echo "$bg_serial" >&6; } cat >>confdefs.h <<_ACEOF #define BG_SERIAL "$bg_serial" _ACEOF #define ac_bluegene_loaded so we don't load another bluegene conf ac_bluegene_loaded=yes ac_real_bluegene_loaded=yes fi if test "x$ac_bluegene_loaded" = "xyes"; then BG_L_P_LOADED_TRUE= BG_L_P_LOADED_FALSE='#' else BG_L_P_LOADED_TRUE='#' BG_L_P_LOADED_FALSE= fi if test "x$ac_real_bluegene_loaded" = "xyes"; then REAL_BG_L_P_LOADED_TRUE= REAL_BG_L_P_LOADED_FALSE='#' else REAL_BG_L_P_LOADED_TRUE='#' REAL_BG_L_P_LOADED_FALSE= fi ac_ext=cpp ac_cpp='$CXXCPP $CPPFLAGS' ac_compile='$CXX -c $CXXFLAGS $CPPFLAGS conftest.$ac_ext >&5' ac_link='$CXX -o conftest$ac_exeext $CXXFLAGS $CPPFLAGS $LDFLAGS conftest.$ac_ext $LIBS >&5' ac_compiler_gnu=$ac_cv_cxx_compiler_gnu if test -z "$CXX"; then if test -n "$CCC"; then CXX=$CCC else if test -n "$ac_tool_prefix"; then for ac_prog in g++ c++ gpp aCC CC cxx cc++ cl.exe FCC KCC RCC xlC_r xlC do # Extract the first word of "$ac_tool_prefix$ac_prog", so it can be a program name with args. 
set dummy $ac_tool_prefix$ac_prog; ac_word=$2 { $as_echo "$as_me:${as_lineno-$LINENO}: checking for $ac_word" >&5 $as_echo_n "checking for $ac_word... " >&6; } if ${ac_cv_prog_CXX+:} false; then : $as_echo_n "(cached) " >&6 else if test -n "$CXX"; then ac_cv_prog_CXX="$CXX" # Let the user override the test. else as_save_IFS=$IFS; IFS=$PATH_SEPARATOR for as_dir in $PATH do IFS=$as_save_IFS test -z "$as_dir" && as_dir=. for ac_exec_ext in '' $ac_executable_extensions; do if as_fn_executable_p "$as_dir/$ac_word$ac_exec_ext"; then ac_cv_prog_CXX="$ac_tool_prefix$ac_prog" $as_echo "$as_me:${as_lineno-$LINENO}: found $as_dir/$ac_word$ac_exec_ext" >&5 break 2 fi done done IFS=$as_save_IFS fi fi CXX=$ac_cv_prog_CXX if test -n "$CXX"; then { $as_echo "$as_me:${as_lineno-$LINENO}: result: $CXX" >&5 $as_echo "$CXX" >&6; } else { $as_echo "$as_me:${as_lineno-$LINENO}: result: no" >&5 $as_echo "no" >&6; } fi test -n "$CXX" && break done fi if test -z "$CXX"; then ac_ct_CXX=$CXX for ac_prog in g++ c++ gpp aCC CC cxx cc++ cl.exe FCC KCC RCC xlC_r xlC do # Extract the first word of "$ac_prog", so it can be a program name with args. set dummy $ac_prog; ac_word=$2 { $as_echo "$as_me:${as_lineno-$LINENO}: checking for $ac_word" >&5 $as_echo_n "checking for $ac_word... " >&6; } if ${ac_cv_prog_ac_ct_CXX+:} false; then : $as_echo_n "(cached) " >&6 else if test -n "$ac_ct_CXX"; then ac_cv_prog_ac_ct_CXX="$ac_ct_CXX" # Let the user override the test. else as_save_IFS=$IFS; IFS=$PATH_SEPARATOR for as_dir in $PATH do IFS=$as_save_IFS test -z "$as_dir" && as_dir=. 
for ac_exec_ext in '' $ac_executable_extensions; do if as_fn_executable_p "$as_dir/$ac_word$ac_exec_ext"; then ac_cv_prog_ac_ct_CXX="$ac_prog" $as_echo "$as_me:${as_lineno-$LINENO}: found $as_dir/$ac_word$ac_exec_ext" >&5 break 2 fi done done IFS=$as_save_IFS fi fi ac_ct_CXX=$ac_cv_prog_ac_ct_CXX if test -n "$ac_ct_CXX"; then { $as_echo "$as_me:${as_lineno-$LINENO}: result: $ac_ct_CXX" >&5 $as_echo "$ac_ct_CXX" >&6; } else { $as_echo "$as_me:${as_lineno-$LINENO}: result: no" >&5 $as_echo "no" >&6; } fi test -n "$ac_ct_CXX" && break done if test "x$ac_ct_CXX" = x; then CXX="g++" else case $cross_compiling:$ac_tool_warned in yes:) { $as_echo "$as_me:${as_lineno-$LINENO}: WARNING: using cross tools not prefixed with host triplet" >&5 $as_echo "$as_me: WARNING: using cross tools not prefixed with host triplet" >&2;} ac_tool_warned=yes ;; esac CXX=$ac_ct_CXX fi fi fi fi # Provide some information about the compiler. $as_echo "$as_me:${as_lineno-$LINENO}: checking for C++ compiler version" >&5 set X $ac_compile ac_compiler=$2 for ac_option in --version -v -V -qversion; do { { ac_try="$ac_compiler $ac_option >&5" case "(($ac_try" in *\"* | *\`* | *\\*) ac_try_echo=\$ac_try;; *) ac_try_echo=$ac_try;; esac eval ac_try_echo="\"\$as_me:${as_lineno-$LINENO}: $ac_try_echo\"" $as_echo "$ac_try_echo"; } >&5 (eval "$ac_compiler $ac_option >&5") 2>conftest.err ac_status=$? if test -s conftest.err; then sed '10a\ ... rest of stderr output deleted ... 10q' conftest.err >conftest.er1 cat conftest.er1 >&5 fi rm -f conftest.er1 conftest.err $as_echo "$as_me:${as_lineno-$LINENO}: \$? = $ac_status" >&5 test $ac_status = 0; } done { $as_echo "$as_me:${as_lineno-$LINENO}: checking whether we are using the GNU C++ compiler" >&5 $as_echo_n "checking whether we are using the GNU C++ compiler... " >&6; } if ${ac_cv_cxx_compiler_gnu+:} false; then : $as_echo_n "(cached) " >&6 else cat confdefs.h - <<_ACEOF >conftest.$ac_ext /* end confdefs.h. 
*/ int main () { #ifndef __GNUC__ choke me #endif ; return 0; } _ACEOF if ac_fn_cxx_try_compile "$LINENO"; then : ac_compiler_gnu=yes else ac_compiler_gnu=no fi rm -f core conftest.err conftest.$ac_objext conftest.$ac_ext ac_cv_cxx_compiler_gnu=$ac_compiler_gnu fi { $as_echo "$as_me:${as_lineno-$LINENO}: result: $ac_cv_cxx_compiler_gnu" >&5 $as_echo "$ac_cv_cxx_compiler_gnu" >&6; } if test $ac_compiler_gnu = yes; then GXX=yes else GXX= fi ac_test_CXXFLAGS=${CXXFLAGS+set} ac_save_CXXFLAGS=$CXXFLAGS { $as_echo "$as_me:${as_lineno-$LINENO}: checking whether $CXX accepts -g" >&5 $as_echo_n "checking whether $CXX accepts -g... " >&6; } if ${ac_cv_prog_cxx_g+:} false; then : $as_echo_n "(cached) " >&6 else ac_save_cxx_werror_flag=$ac_cxx_werror_flag ac_cxx_werror_flag=yes ac_cv_prog_cxx_g=no CXXFLAGS="-g" cat confdefs.h - <<_ACEOF >conftest.$ac_ext /* end confdefs.h. */ int main () { ; return 0; } _ACEOF if ac_fn_cxx_try_compile "$LINENO"; then : ac_cv_prog_cxx_g=yes else CXXFLAGS="" cat confdefs.h - <<_ACEOF >conftest.$ac_ext /* end confdefs.h. */ int main () { ; return 0; } _ACEOF if ac_fn_cxx_try_compile "$LINENO"; then : else ac_cxx_werror_flag=$ac_save_cxx_werror_flag CXXFLAGS="-g" cat confdefs.h - <<_ACEOF >conftest.$ac_ext /* end confdefs.h. 
*/ int main () { ; return 0; } _ACEOF if ac_fn_cxx_try_compile "$LINENO"; then : ac_cv_prog_cxx_g=yes fi rm -f core conftest.err conftest.$ac_objext conftest.$ac_ext fi rm -f core conftest.err conftest.$ac_objext conftest.$ac_ext fi rm -f core conftest.err conftest.$ac_objext conftest.$ac_ext ac_cxx_werror_flag=$ac_save_cxx_werror_flag fi { $as_echo "$as_me:${as_lineno-$LINENO}: result: $ac_cv_prog_cxx_g" >&5 $as_echo "$ac_cv_prog_cxx_g" >&6; } if test "$ac_test_CXXFLAGS" = set; then CXXFLAGS=$ac_save_CXXFLAGS elif test $ac_cv_prog_cxx_g = yes; then if test "$GXX" = yes; then CXXFLAGS="-g -O2" else CXXFLAGS="-g" fi else if test "$GXX" = yes; then CXXFLAGS="-O2" else CXXFLAGS= fi fi ac_ext=cpp ac_cpp='$CXXCPP $CPPFLAGS' ac_compile='$CXX -c $CXXFLAGS $CPPFLAGS conftest.$ac_ext >&5' ac_link='$CXX -o conftest$ac_exeext $CXXFLAGS $CPPFLAGS $LDFLAGS conftest.$ac_ext $LIBS >&5' ac_compiler_gnu=$ac_cv_cxx_compiler_gnu depcc="$CXX" am_compiler_list= { $as_echo "$as_me:${as_lineno-$LINENO}: checking dependency style of $depcc" >&5 $as_echo_n "checking dependency style of $depcc... " >&6; } if ${am_cv_CXX_dependencies_compiler_type+:} false; then : $as_echo_n "(cached) " >&6 else if test -z "$AMDEP_TRUE" && test -f "$am_depcomp"; then # We make a subdir and do the tests there. Otherwise we can end up # making bogus files that we don't know about and never remove. For # instance it was reported that on HP-UX the gcc test will end up # making a dummy file named 'D' -- because '-MD' means "put the output # in D". rm -rf conftest.dir mkdir conftest.dir # Copy depcomp to subdir because otherwise we won't find it if we're # using a relative directory. cp "$am_depcomp" conftest.dir cd conftest.dir # We will build objects and dependencies in a subdirectory because # it helps to detect inapplicable dependency modes. 
For instance # both Tru64's cc and ICC support -MD to output dependencies as a # side effect of compilation, but ICC will put the dependencies in # the current directory while Tru64 will put them in the object # directory. mkdir sub am_cv_CXX_dependencies_compiler_type=none if test "$am_compiler_list" = ""; then am_compiler_list=`sed -n 's/^#*\([a-zA-Z0-9]*\))$/\1/p' < ./depcomp` fi am__universal=false case " $depcc " in #( *\ -arch\ *\ -arch\ *) am__universal=true ;; esac for depmode in $am_compiler_list; do # Setup a source with many dependencies, because some compilers # like to wrap large dependency lists on column 80 (with \), and # we should not choose a depcomp mode which is confused by this. # # We need to recreate these files for each test, as the compiler may # overwrite some of them when testing with obscure command lines. # This happens at least with the AIX C compiler. : > sub/conftest.c for i in 1 2 3 4 5 6; do echo '#include "conftst'$i'.h"' >> sub/conftest.c # Using ": > sub/conftst$i.h" creates only sub/conftst1.h with # Solaris 10 /bin/sh. echo '/* dummy */' > sub/conftst$i.h done echo "${am__include} ${am__quote}sub/conftest.Po${am__quote}" > confmf # We check with '-c' and '-o' for the sake of the "dashmstdout" # mode. It turns out that the SunPro C++ compiler does not properly # handle '-M -o', and we need to detect this. Also, some Intel # versions had trouble with output in subdirs. am__obj=sub/conftest.${OBJEXT-o} am__minus_obj="-o $am__obj" case $depmode in gcc) # This depmode causes a compiler race in universal mode. test "$am__universal" = false || continue ;; nosideeffect) # After this tag, mechanisms are not by side-effect, so they'll # only be used when explicitly requested. if test "x$enable_dependency_tracking" = xyes; then continue else break fi ;; msvc7 | msvc7msys | msvisualcpp | msvcmsys) # This compiler won't grok '-c -o', but also, the minuso test has # not run yet. 
These depmodes are late enough in the game, and # so weak that their functioning should not be impacted. am__obj=conftest.${OBJEXT-o} am__minus_obj= ;; none) break ;; esac if depmode=$depmode \ source=sub/conftest.c object=$am__obj \ depfile=sub/conftest.Po tmpdepfile=sub/conftest.TPo \ $SHELL ./depcomp $depcc -c $am__minus_obj sub/conftest.c \ >/dev/null 2>conftest.err && grep sub/conftst1.h sub/conftest.Po > /dev/null 2>&1 && grep sub/conftst6.h sub/conftest.Po > /dev/null 2>&1 && grep $am__obj sub/conftest.Po > /dev/null 2>&1 && ${MAKE-make} -s -f confmf > /dev/null 2>&1; then # icc doesn't choke on unknown options, it will just issue warnings # or remarks (even with -Werror). So we grep stderr for any message # that says an option was ignored or not supported. # When given -MP, icc 7.0 and 7.1 complain thusly: # icc: Command line warning: ignoring option '-M'; no argument required # The diagnosis changed in icc 8.0: # icc: Command line remark: option '-MP' not supported if (grep 'ignoring option' conftest.err || grep 'not supported' conftest.err) >/dev/null 2>&1; then :; else am_cv_CXX_dependencies_compiler_type=$depmode break fi fi done cd .. rm -rf conftest.dir else am_cv_CXX_dependencies_compiler_type=none fi fi { $as_echo "$as_me:${as_lineno-$LINENO}: result: $am_cv_CXX_dependencies_compiler_type" >&5 $as_echo "$am_cv_CXX_dependencies_compiler_type" >&6; } CXXDEPMODE=depmode=$am_cv_CXX_dependencies_compiler_type if test "x$enable_dependency_tracking" != xno \ && test "$am_cv_CXX_dependencies_compiler_type" = gcc3; then am__fastdepCXX_TRUE= am__fastdepCXX_FALSE='#' else am__fastdepCXX_TRUE='#' am__fastdepCXX_FALSE= fi # test for bluegene emulation mode # Check whether --enable-bgq-emulation was given. if test "${enable_bgq_emulation+set}" = set; then : enableval=$enable_bgq_emulation; case "$enableval" in yes) bgq_emulation=yes ;; no) bgq_emulation=no ;; *) as_fn_error $? 
"bad value \"$enableval\" for --enable-bgq-emulation" "$LINENO" 5 ;; esac fi # Skip if already set if test "x$ac_bluegene_loaded" = "xyes" ; then bg_default_dirs="" elif test "x$bgq_emulation" = "xyes"; then $as_echo "#define HAVE_4D 1" >>confdefs.h $as_echo "#define SYSTEM_DIMENSIONS 4" >>confdefs.h $as_echo "#define HAVE_BG 1" >>confdefs.h $as_echo "#define HAVE_BGQ 1" >>confdefs.h $as_echo "#define HAVE_FRONT_END 1" >>confdefs.h { $as_echo "$as_me:${as_lineno-$LINENO}: Running in BG/Q emulation mode" >&5 $as_echo "$as_me: Running in BG/Q emulation mode" >&6;} bg_default_dirs="" #define ac_bluegene_loaded so we don't load another bluegene conf ac_bluegene_loaded=yes ac_bgq_loaded=yes else bg_default_dirs="/bgsys/drivers/ppcfloor" fi libname=bgsched loglibname=log4cxx runjoblibname=runjob_client for bg_dir in $trydb2dir "" $bg_default_dirs; do # Skip directories that don't exist if test ! -z "$bg_dir" -a ! -d "$bg_dir" ; then continue; fi soloc=$bg_dir/hlcs/lib/lib$libname.so # Search for required BG API libraries in the directory if test -z "$have_bg_ar" -a -f "$soloc" ; then have_bgq_ar=yes if test "$ac_with_rpath" = "yes"; then bg_libs="$bg_libs -Wl,-rpath -Wl,$bg_dir/hlcs/lib -L$bg_dir/hlcs/lib -l$libname" else bg_libs="$bg_libs -L$bg_dir/hlcs/lib -l$libname" fi fi soloc=$bg_dir/extlib/lib/lib$loglibname.so if test -z "$have_bg_ar" -a -f "$soloc" ; then have_bgq_ar=yes if test "$ac_with_rpath" = "yes"; then bg_libs="$bg_libs -Wl,-rpath -Wl,$bg_dir/extlib/lib -L$bg_dir/extlib/lib -l$loglibname" else bg_libs="$bg_libs -L$bg_dir/extlib/lib -l$loglibname" fi fi soloc=$bg_dir/hlcs/lib/lib$runjoblibname.so # Search for required BG API libraries in the directory if test -z "$have_bg_ar" -a -f "$soloc" ; then have_bgq_ar=yes if test "$ac_with_rpath" = "yes"; then runjob_ldflags="$runjob_ldflags -Wl,-rpath -Wl,$bg_dir/hlcs/lib -L$bg_dir/hlcs/lib -l$runjoblibname" else runjob_ldflags="$runjob_ldflags -L$bg_dir/hlcs/lib -l$runjoblibname" fi fi # Search for headers in the 
directory if test -z "$have_bg_hdr" -a -f "$bg_dir/hlcs/include/bgsched/bgsched.h" ; then have_bgq_hdr=yes bg_includes="-I$bg_dir -I$bg_dir/hlcs/include" fi if test -z "$have_bg_hdr" -a -f "$bg_dir/extlib/include/log4cxx/logger.h" ; then have_bgq_hdr=yes bg_includes="$bg_includes -I$bg_dir/extlib/include" fi done if test ! -z "$have_bgq_ar" -a ! -z "$have_bgq_hdr" ; then # ac_with_readline="no" # Test to make sure the api is good saved_LIBS="$LIBS" saved_CPPFLAGS="$CPPFLAGS" LIBS="$saved_LIBS $bg_libs" CPPFLAGS="$saved_CPPFLAGS -m64 $bg_includes" ac_ext=cpp ac_cpp='$CXXCPP $CPPFLAGS' ac_compile='$CXX -c $CXXFLAGS $CPPFLAGS conftest.$ac_ext >&5' ac_link='$CXX -o conftest$ac_exeext $CXXFLAGS $CPPFLAGS $LDFLAGS conftest.$ac_ext $LIBS >&5' ac_compiler_gnu=$ac_cv_cxx_compiler_gnu cat confdefs.h - <<_ACEOF >conftest.$ac_ext /* end confdefs.h. */ #include <bgsched/bgsched.h> #include <log4cxx/logger.h> int main () { bgsched::init(""); log4cxx::LoggerPtr logger_ptr(log4cxx::Logger::getLogger( "ibm" )); ; return 0; } _ACEOF if ac_fn_cxx_try_link "$LINENO"; then : have_bgq_files=yes else as_fn_error $? "There is a problem linking to the BG/Q api." "$LINENO" 5 fi rm -f core conftest.err conftest.$ac_objext \ conftest$ac_exeext conftest.$ac_ext # In later versions of the driver IBM added a better function # to see if blocks were IO connected or not. Here is a check # to not break backwards compatibility cat confdefs.h - <<_ACEOF >conftest.$ac_ext /* end confdefs.h. */ #include <bgsched/bgsched.h> #include <bgsched/Block.h> int main () { bgsched::Block::checkIO("", NULL, NULL); ; return 0; } _ACEOF if ac_fn_cxx_try_link "$LINENO"; then : have_bgq_new_io_check=yes else { $as_echo "$as_me:${as_lineno-$LINENO}: result: Using old iocheck." >&5 $as_echo "Using old iocheck." >&6; } fi rm -f core conftest.err conftest.$ac_objext \ conftest$ac_exeext conftest.$ac_ext # In later versions of the driver IBM added an "action" to a # block. Here is a check to not break backwards compatibility cat confdefs.h - <<_ACEOF >conftest.$ac_ext /* end confdefs.h.
*/ #include <bgsched/bgsched.h> #include <bgsched/Block.h> int main () { bgsched::Block::Ptr block_ptr; block_ptr->getAction(); ; return 0; } _ACEOF if ac_fn_cxx_try_link "$LINENO"; then : have_bgq_get_action=yes else { $as_echo "$as_me:${as_lineno-$LINENO}: result: Blocks do not have actions!" >&5 $as_echo "Blocks do not have actions!" >&6; } fi rm -f core conftest.err conftest.$ac_objext \ conftest$ac_exeext conftest.$ac_ext ac_ext=c ac_cpp='$CPP $CPPFLAGS' ac_compile='$CC -c $CFLAGS $CPPFLAGS conftest.$ac_ext >&5' ac_link='$CC -o conftest$ac_exeext $CFLAGS $CPPFLAGS $LDFLAGS conftest.$ac_ext $LIBS >&5' ac_compiler_gnu=$ac_cv_c_compiler_gnu LIBS="$saved_LIBS" CPPFLAGS="$saved_CPPFLAGS" fi if test ! -z "$have_bgq_files" ; then BG_LDFLAGS="$bg_libs" RUNJOB_LDFLAGS="$runjob_ldflags" BG_INCLUDES="$bg_includes" CFLAGS="$CFLAGS -m64" CXXFLAGS="$CXXFLAGS $CFLAGS" $as_echo "#define HAVE_4D 1" >>confdefs.h $as_echo "#define SYSTEM_DIMENSIONS 4" >>confdefs.h $as_echo "#define HAVE_BG 1" >>confdefs.h $as_echo "#define HAVE_BGQ 1" >>confdefs.h $as_echo "#define HAVE_FRONT_END 1" >>confdefs.h $as_echo "#define HAVE_BG_FILES 1" >>confdefs.h #AC_DEFINE_UNQUOTED(BG_BRIDGE_SO, "$soloc", [Define the BG_BRIDGE_SO value]) if test ! -z "$have_bgq_new_io_check" ; then $as_echo "#define HAVE_BG_NEW_IO_CHECK 1" >>confdefs.h fi if test !
-z "$have_bgq_get_action" ; then $as_echo "#define HAVE_BG_GET_ACTION 1" >>confdefs.h fi { $as_echo "$as_me:${as_lineno-$LINENO}: Running on a legitimate BG/Q system" >&5 $as_echo "$as_me: Running on a legitimate BG/Q system" >&6;} # AC_MSG_CHECKING(for BG serial value) # bg_serial="BGQ" # AC_ARG_WITH(bg-serial,, [bg_serial="$withval"]) # AC_MSG_RESULT($bg_serial) # AC_DEFINE_UNQUOTED(BG_SERIAL, "$bg_serial", [Define the BG_SERIAL value]) #define ac_bluegene_loaded so we don't load another bluegene conf ac_bluegene_loaded=yes ac_real_bluegene_loaded=yes ac_bgq_loaded=yes fi if test "x$ac_bgq_loaded" = "xyes"; then BGQ_LOADED_TRUE= BGQ_LOADED_FALSE='#' else BGQ_LOADED_TRUE='#' BGQ_LOADED_FALSE= fi if test "x$ac_real_bluegene_loaded" = "xyes"; then REAL_BGQ_LOADED_TRUE= REAL_BGQ_LOADED_FALSE='#' else REAL_BGQ_LOADED_TRUE='#' REAL_BGQ_LOADED_FALSE= fi if test "x$ac_bluegene_loaded" = "xyes"; then BLUEGENE_LOADED_TRUE= BLUEGENE_LOADED_FALSE='#' else BLUEGENE_LOADED_TRUE='#' BLUEGENE_LOADED_FALSE= fi ac_ext=c ac_cpp='$CPP $CPPFLAGS' ac_compile='$CC -c $CFLAGS $CPPFLAGS conftest.$ac_ext >&5' ac_link='$CC -o conftest$ac_exeext $CFLAGS $CPPFLAGS $LDFLAGS conftest.$ac_ext $LIBS >&5' ac_compiler_gnu=$ac_cv_c_compiler_gnu { $as_echo "$as_me:${as_lineno-$LINENO}: checking how to run the C preprocessor" >&5 $as_echo_n "checking how to run the C preprocessor... " >&6; } # On Suns, sometimes $CPP names a directory. if test -n "$CPP" && test -d "$CPP"; then CPP= fi if test -z "$CPP"; then if ${ac_cv_prog_CPP+:} false; then : $as_echo_n "(cached) " >&6 else # Double quotes because CPP needs to be expanded for CPP in "$CC -E" "$CC -E -traditional-cpp" "/lib/cpp" do ac_preproc_ok=false for ac_c_preproc_warn_flag in '' yes do # Use a header file that comes with gcc, so configuring glibc # with a fresh cross-compiler works. # Prefer <limits.h> to <assert.h> if __STDC__ is defined, since # <limits.h> exists even on freestanding compilers.
# On the NeXT, cc -E runs the code through the compiler's parser, # not just through cpp. "Syntax error" is here to catch this case. cat confdefs.h - <<_ACEOF >conftest.$ac_ext /* end confdefs.h. */ #ifdef __STDC__ # include <limits.h> #else # include <assert.h> #endif Syntax error _ACEOF if ac_fn_c_try_cpp "$LINENO"; then : else # Broken: fails on valid input. continue fi rm -f conftest.err conftest.i conftest.$ac_ext # OK, works on sane cases. Now check whether nonexistent headers # can be detected and how. cat confdefs.h - <<_ACEOF >conftest.$ac_ext /* end confdefs.h. */ #include <ac_nonexistent.h> _ACEOF if ac_fn_c_try_cpp "$LINENO"; then : # Broken: success on invalid input. continue else # Passes both tests. ac_preproc_ok=: break fi rm -f conftest.err conftest.i conftest.$ac_ext done # Because of `break', _AC_PREPROC_IFELSE's cleaning code was skipped. rm -f conftest.i conftest.err conftest.$ac_ext if $ac_preproc_ok; then : break fi done ac_cv_prog_CPP=$CPP fi CPP=$ac_cv_prog_CPP else ac_cv_prog_CPP=$CPP fi { $as_echo "$as_me:${as_lineno-$LINENO}: result: $CPP" >&5 $as_echo "$CPP" >&6; } ac_preproc_ok=false for ac_c_preproc_warn_flag in '' yes do # Use a header file that comes with gcc, so configuring glibc # with a fresh cross-compiler works. # Prefer <limits.h> to <assert.h> if __STDC__ is defined, since # <limits.h> exists even on freestanding compilers. # On the NeXT, cc -E runs the code through the compiler's parser, # not just through cpp. "Syntax error" is here to catch this case. cat confdefs.h - <<_ACEOF >conftest.$ac_ext /* end confdefs.h. */ #ifdef __STDC__ # include <limits.h> #else # include <assert.h> #endif Syntax error _ACEOF if ac_fn_c_try_cpp "$LINENO"; then : else # Broken: fails on valid input. continue fi rm -f conftest.err conftest.i conftest.$ac_ext # OK, works on sane cases. Now check whether nonexistent headers # can be detected and how. cat confdefs.h - <<_ACEOF >conftest.$ac_ext /* end confdefs.h. */ #include <ac_nonexistent.h> _ACEOF if ac_fn_c_try_cpp "$LINENO"; then : # Broken: success on invalid input. continue else # Passes both tests.
ac_preproc_ok=: break fi rm -f conftest.err conftest.i conftest.$ac_ext done # Because of `break', _AC_PREPROC_IFELSE's cleaning code was skipped. rm -f conftest.i conftest.err conftest.$ac_ext if $ac_preproc_ok; then : else { { $as_echo "$as_me:${as_lineno-$LINENO}: error: in \`$ac_pwd':" >&5 $as_echo "$as_me: error: in \`$ac_pwd':" >&2;} as_fn_error $? "C preprocessor \"$CPP\" fails sanity check See \`config.log' for more details" "$LINENO" 5; } fi ac_ext=c ac_cpp='$CPP $CPPFLAGS' ac_compile='$CC -c $CFLAGS $CPPFLAGS conftest.$ac_ext >&5' ac_link='$CC -o conftest$ac_exeext $CFLAGS $CPPFLAGS $LDFLAGS conftest.$ac_ext $LIBS >&5' ac_compiler_gnu=$ac_cv_c_compiler_gnu { $as_echo "$as_me:${as_lineno-$LINENO}: checking for grep that handles long lines and -e" >&5 $as_echo_n "checking for grep that handles long lines and -e... " >&6; } if ${ac_cv_path_GREP+:} false; then : $as_echo_n "(cached) " >&6 else if test -z "$GREP"; then ac_path_GREP_found=false # Loop through the user's path and test for each of PROGNAME-LIST as_save_IFS=$IFS; IFS=$PATH_SEPARATOR for as_dir in $PATH$PATH_SEPARATOR/usr/xpg4/bin do IFS=$as_save_IFS test -z "$as_dir" && as_dir=. for ac_prog in grep ggrep; do for ac_exec_ext in '' $ac_executable_extensions; do ac_path_GREP="$as_dir/$ac_prog$ac_exec_ext" as_fn_executable_p "$ac_path_GREP" || continue # Check for GNU ac_path_GREP and select it if it is found. 
# Check for GNU $ac_path_GREP case `"$ac_path_GREP" --version 2>&1` in *GNU*) ac_cv_path_GREP="$ac_path_GREP" ac_path_GREP_found=:;; *) ac_count=0 $as_echo_n 0123456789 >"conftest.in" while : do cat "conftest.in" "conftest.in" >"conftest.tmp" mv "conftest.tmp" "conftest.in" cp "conftest.in" "conftest.nl" $as_echo 'GREP' >> "conftest.nl" "$ac_path_GREP" -e 'GREP$' -e '-(cannot match)-' < "conftest.nl" >"conftest.out" 2>/dev/null || break diff "conftest.out" "conftest.nl" >/dev/null 2>&1 || break as_fn_arith $ac_count + 1 && ac_count=$as_val if test $ac_count -gt ${ac_path_GREP_max-0}; then # Best one so far, save it but keep looking for a better one ac_cv_path_GREP="$ac_path_GREP" ac_path_GREP_max=$ac_count fi # 10*(2^10) chars as input seems more than enough test $ac_count -gt 10 && break done rm -f conftest.in conftest.tmp conftest.nl conftest.out;; esac $ac_path_GREP_found && break 3 done done done IFS=$as_save_IFS if test -z "$ac_cv_path_GREP"; then as_fn_error $? "no acceptable grep could be found in $PATH$PATH_SEPARATOR/usr/xpg4/bin" "$LINENO" 5 fi else ac_cv_path_GREP=$GREP fi fi { $as_echo "$as_me:${as_lineno-$LINENO}: result: $ac_cv_path_GREP" >&5 $as_echo "$ac_cv_path_GREP" >&6; } GREP="$ac_cv_path_GREP" { $as_echo "$as_me:${as_lineno-$LINENO}: checking for egrep" >&5 $as_echo_n "checking for egrep... " >&6; } if ${ac_cv_path_EGREP+:} false; then : $as_echo_n "(cached) " >&6 else if echo a | $GREP -E '(a|b)' >/dev/null 2>&1 then ac_cv_path_EGREP="$GREP -E" else if test -z "$EGREP"; then ac_path_EGREP_found=false # Loop through the user's path and test for each of PROGNAME-LIST as_save_IFS=$IFS; IFS=$PATH_SEPARATOR for as_dir in $PATH$PATH_SEPARATOR/usr/xpg4/bin do IFS=$as_save_IFS test -z "$as_dir" && as_dir=. for ac_prog in egrep; do for ac_exec_ext in '' $ac_executable_extensions; do ac_path_EGREP="$as_dir/$ac_prog$ac_exec_ext" as_fn_executable_p "$ac_path_EGREP" || continue # Check for GNU ac_path_EGREP and select it if it is found. 
# Check for GNU $ac_path_EGREP case `"$ac_path_EGREP" --version 2>&1` in *GNU*) ac_cv_path_EGREP="$ac_path_EGREP" ac_path_EGREP_found=:;; *) ac_count=0 $as_echo_n 0123456789 >"conftest.in" while : do cat "conftest.in" "conftest.in" >"conftest.tmp" mv "conftest.tmp" "conftest.in" cp "conftest.in" "conftest.nl" $as_echo 'EGREP' >> "conftest.nl" "$ac_path_EGREP" 'EGREP$' < "conftest.nl" >"conftest.out" 2>/dev/null || break diff "conftest.out" "conftest.nl" >/dev/null 2>&1 || break as_fn_arith $ac_count + 1 && ac_count=$as_val if test $ac_count -gt ${ac_path_EGREP_max-0}; then # Best one so far, save it but keep looking for a better one ac_cv_path_EGREP="$ac_path_EGREP" ac_path_EGREP_max=$ac_count fi # 10*(2^10) chars as input seems more than enough test $ac_count -gt 10 && break done rm -f conftest.in conftest.tmp conftest.nl conftest.out;; esac $ac_path_EGREP_found && break 3 done done done IFS=$as_save_IFS if test -z "$ac_cv_path_EGREP"; then as_fn_error $? "no acceptable egrep could be found in $PATH$PATH_SEPARATOR/usr/xpg4/bin" "$LINENO" 5 fi else ac_cv_path_EGREP=$EGREP fi fi fi { $as_echo "$as_me:${as_lineno-$LINENO}: result: $ac_cv_path_EGREP" >&5 $as_echo "$ac_cv_path_EGREP" >&6; } EGREP="$ac_cv_path_EGREP" { $as_echo "$as_me:${as_lineno-$LINENO}: checking for ANSI C header files" >&5 $as_echo_n "checking for ANSI C header files... " >&6; } if ${ac_cv_header_stdc+:} false; then : $as_echo_n "(cached) " >&6 else cat confdefs.h - <<_ACEOF >conftest.$ac_ext /* end confdefs.h. */ #include <stdlib.h> #include <stdarg.h> #include <string.h> #include <float.h> int main () { ; return 0; } _ACEOF if ac_fn_c_try_compile "$LINENO"; then : ac_cv_header_stdc=yes else ac_cv_header_stdc=no fi rm -f core conftest.err conftest.$ac_objext conftest.$ac_ext if test $ac_cv_header_stdc = yes; then # SunOS 4.x string.h does not declare mem*, contrary to ANSI. cat confdefs.h - <<_ACEOF >conftest.$ac_ext /* end confdefs.h.
*/ #include <string.h> _ACEOF if (eval "$ac_cpp conftest.$ac_ext") 2>&5 | $EGREP "memchr" >/dev/null 2>&1; then : else ac_cv_header_stdc=no fi rm -f conftest* fi if test $ac_cv_header_stdc = yes; then # ISC 2.0.2 stdlib.h does not declare free, contrary to ANSI. cat confdefs.h - <<_ACEOF >conftest.$ac_ext /* end confdefs.h. */ #include <stdlib.h> _ACEOF if (eval "$ac_cpp conftest.$ac_ext") 2>&5 | $EGREP "free" >/dev/null 2>&1; then : else ac_cv_header_stdc=no fi rm -f conftest* fi if test $ac_cv_header_stdc = yes; then # /bin/cc in Irix-4.0.5 gets non-ANSI ctype macros unless using -ansi. if test "$cross_compiling" = yes; then : : else cat confdefs.h - <<_ACEOF >conftest.$ac_ext /* end confdefs.h. */ #include <ctype.h> #include <stdlib.h> #if ((' ' & 0x0FF) == 0x020) # define ISLOWER(c) ('a' <= (c) && (c) <= 'z') # define TOUPPER(c) (ISLOWER(c) ? 'A' + ((c) - 'a') : (c)) #else # define ISLOWER(c) \ (('a' <= (c) && (c) <= 'i') \ || ('j' <= (c) && (c) <= 'r') \ || ('s' <= (c) && (c) <= 'z')) # define TOUPPER(c) (ISLOWER(c) ? ((c) | 0x40) : (c)) #endif #define XOR(e, f) (((e) && !(f)) || (!(e) && (f))) int main () { int i; for (i = 0; i < 256; i++) if (XOR (islower (i), ISLOWER (i)) || toupper (i) != TOUPPER (i)) return 2; return 0; } _ACEOF if ac_fn_c_try_run "$LINENO"; then : else ac_cv_header_stdc=no fi rm -f core *.core core.conftest.* gmon.out bb.out conftest$ac_exeext \ conftest.$ac_objext conftest.beam conftest.$ac_ext fi fi fi { $as_echo "$as_me:${as_lineno-$LINENO}: result: $ac_cv_header_stdc" >&5 $as_echo "$ac_cv_header_stdc" >&6; } if test $ac_cv_header_stdc = yes; then $as_echo "#define STDC_HEADERS 1" >>confdefs.h fi # On IRIX 5.3, sys/types and inttypes.h are conflicting.
for ac_header in sys/types.h sys/stat.h stdlib.h string.h memory.h strings.h \ inttypes.h stdint.h unistd.h do : as_ac_Header=`$as_echo "ac_cv_header_$ac_header" | $as_tr_sh` ac_fn_c_check_header_compile "$LINENO" "$ac_header" "$as_ac_Header" "$ac_includes_default " if eval test \"x\$"$as_ac_Header"\" = x"yes"; then : cat >>confdefs.h <<_ACEOF #define `$as_echo "HAVE_$ac_header" | $as_tr_cpp` 1 _ACEOF fi done case "$host" in *-*-aix*) LDFLAGS="$LDFLAGS -Wl,-brtl" # permit run time linking LIB_LDFLAGS="$LDFLAGS -Wl,-G -Wl,-bnoentry -Wl,-bgcbypass:1000 -Wl,-bexpfull" SO_LDFLAGS=" $LDFLAGS -Wl,-G -Wl,-bnoentry -Wl,-bgcbypass:1000 -Wl,-bexpfull" if test "$OBJECT_MODE" = "64"; then CFLAGS="-maix64 $CFLAGS" CMD_LDFLAGS="$LDFLAGS -Wl,-bgcbypass:1000 -Wl,-bexpfull" # keep all common functions else CFLAGS="-maix32 $CFLAGS" CMD_LDFLAGS="$LDFLAGS -Wl,-bgcbypass:1000 -Wl,-bexpfull -Wl,-bmaxdata:0x70000000" # keep all common functions fi ac_have_aix="yes" ac_with_readline="no" $as_echo "#define HAVE_AIX 1" >>confdefs.h ;; *) ac_have_aix="no" ;; esac if test "x$ac_have_aix" = "xyes"; then HAVE_AIX_TRUE= HAVE_AIX_FALSE='#' else HAVE_AIX_TRUE='#' HAVE_AIX_FALSE= fi HAVE_AIX="$ac_have_aix" if test "x$ac_have_aix" = "xyes"; then # Check whether --with-proctrack was given. 
if test "${with_proctrack+set}" = set; then : withval=$with_proctrack; PROCTRACKDIR="$withval" fi if test -f "$PROCTRACKDIR/lib/proctrackext.exp"; then PROCTRACKDIR="$PROCTRACKDIR/lib" CPPFLAGS="-I$PROCTRACKDIR/include $CPPFLAGS" for ac_header in proctrack.h do : ac_fn_c_check_header_mongrel "$LINENO" "proctrack.h" "ac_cv_header_proctrack_h" "$ac_includes_default" if test "x$ac_cv_header_proctrack_h" = xyes; then : cat >>confdefs.h <<_ACEOF #define HAVE_PROCTRACK_H 1 _ACEOF fi done ac_have_aix_proctrack="yes" elif test -f "$prefix/lib/proctrackext.exp"; then PROCTRACKDIR="$prefix/lib" CPPFLAGS="$CPPFLAGS -I$prefix/include" for ac_header in proctrack.h do : ac_fn_c_check_header_mongrel "$LINENO" "proctrack.h" "ac_cv_header_proctrack_h" "$ac_includes_default" if test "x$ac_cv_header_proctrack_h" = xyes; then : cat >>confdefs.h <<_ACEOF #define HAVE_PROCTRACK_H 1 _ACEOF fi done ac_have_aix_proctrack="yes" else { $as_echo "$as_me:${as_lineno-$LINENO}: WARNING: proctrackext.exp is required for AIX proctrack support, specify location with --with-proctrack" >&5 $as_echo "$as_me: WARNING: proctrackext.exp is required for AIX proctrack support, specify location with --with-proctrack" >&2;} ac_have_aix_proctrack="no" fi else ac_have_aix_proctrack="no" # Check whether --enable-largefile was given. if test "${enable_largefile+set}" = set; then : enableval=$enable_largefile; fi if test "$enable_largefile" != no; then { $as_echo "$as_me:${as_lineno-$LINENO}: checking for special C compiler options needed for large files" >&5 $as_echo_n "checking for special C compiler options needed for large files... " >&6; } if ${ac_cv_sys_largefile_CC+:} false; then : $as_echo_n "(cached) " >&6 else ac_cv_sys_largefile_CC=no if test "$GCC" != yes; then ac_save_CC=$CC while :; do # IRIX 6.2 and later do not support large files by default, # so use the C compiler's -n32 option if that helps. cat confdefs.h - <<_ACEOF >conftest.$ac_ext /* end confdefs.h. 
*/ #include <sys/types.h> /* Check that off_t can represent 2**63 - 1 correctly. We can't simply define LARGE_OFF_T to be 9223372036854775807, since some C++ compilers masquerading as C compilers incorrectly reject 9223372036854775807. */ #define LARGE_OFF_T ((((off_t) 1 << 31) << 31) - 1 + (((off_t) 1 << 31) << 31)) int off_t_is_large[(LARGE_OFF_T % 2147483629 == 721 && LARGE_OFF_T % 2147483647 == 1) ? 1 : -1]; int main () { ; return 0; } _ACEOF if ac_fn_c_try_compile "$LINENO"; then : break fi rm -f core conftest.err conftest.$ac_objext CC="$CC -n32" if ac_fn_c_try_compile "$LINENO"; then : ac_cv_sys_largefile_CC=' -n32'; break fi rm -f core conftest.err conftest.$ac_objext break done CC=$ac_save_CC rm -f conftest.$ac_ext fi fi { $as_echo "$as_me:${as_lineno-$LINENO}: result: $ac_cv_sys_largefile_CC" >&5 $as_echo "$ac_cv_sys_largefile_CC" >&6; } if test "$ac_cv_sys_largefile_CC" != no; then CC=$CC$ac_cv_sys_largefile_CC fi { $as_echo "$as_me:${as_lineno-$LINENO}: checking for _FILE_OFFSET_BITS value needed for large files" >&5 $as_echo_n "checking for _FILE_OFFSET_BITS value needed for large files... " >&6; } if ${ac_cv_sys_file_offset_bits+:} false; then : $as_echo_n "(cached) " >&6 else while :; do cat confdefs.h - <<_ACEOF >conftest.$ac_ext /* end confdefs.h. */ #include <sys/types.h> /* Check that off_t can represent 2**63 - 1 correctly. We can't simply define LARGE_OFF_T to be 9223372036854775807, since some C++ compilers masquerading as C compilers incorrectly reject 9223372036854775807. */ #define LARGE_OFF_T ((((off_t) 1 << 31) << 31) - 1 + (((off_t) 1 << 31) << 31)) int off_t_is_large[(LARGE_OFF_T % 2147483629 == 721 && LARGE_OFF_T % 2147483647 == 1) ? 1 : -1]; int main () { ; return 0; } _ACEOF if ac_fn_c_try_compile "$LINENO"; then : ac_cv_sys_file_offset_bits=no; break fi rm -f core conftest.err conftest.$ac_objext conftest.$ac_ext cat confdefs.h - <<_ACEOF >conftest.$ac_ext /* end confdefs.h. 
*/ #define _FILE_OFFSET_BITS 64 #include <sys/types.h> /* Check that off_t can represent 2**63 - 1 correctly. We can't simply define LARGE_OFF_T to be 9223372036854775807, since some C++ compilers masquerading as C compilers incorrectly reject 9223372036854775807. */ #define LARGE_OFF_T ((((off_t) 1 << 31) << 31) - 1 + (((off_t) 1 << 31) << 31)) int off_t_is_large[(LARGE_OFF_T % 2147483629 == 721 && LARGE_OFF_T % 2147483647 == 1) ? 1 : -1]; int main () { ; return 0; } _ACEOF if ac_fn_c_try_compile "$LINENO"; then : ac_cv_sys_file_offset_bits=64; break fi rm -f core conftest.err conftest.$ac_objext conftest.$ac_ext ac_cv_sys_file_offset_bits=unknown break done fi { $as_echo "$as_me:${as_lineno-$LINENO}: result: $ac_cv_sys_file_offset_bits" >&5 $as_echo "$ac_cv_sys_file_offset_bits" >&6; } case $ac_cv_sys_file_offset_bits in #( no | unknown) ;; *) cat >>confdefs.h <<_ACEOF #define _FILE_OFFSET_BITS $ac_cv_sys_file_offset_bits _ACEOF ;; esac rm -rf conftest* if test $ac_cv_sys_file_offset_bits = unknown; then { $as_echo "$as_me:${as_lineno-$LINENO}: checking for _LARGE_FILES value needed for large files" >&5 $as_echo_n "checking for _LARGE_FILES value needed for large files... " >&6; } if ${ac_cv_sys_large_files+:} false; then : $as_echo_n "(cached) " >&6 else while :; do cat confdefs.h - <<_ACEOF >conftest.$ac_ext /* end confdefs.h. */ #include <sys/types.h> /* Check that off_t can represent 2**63 - 1 correctly. We can't simply define LARGE_OFF_T to be 9223372036854775807, since some C++ compilers masquerading as C compilers incorrectly reject 9223372036854775807. */ #define LARGE_OFF_T ((((off_t) 1 << 31) << 31) - 1 + (((off_t) 1 << 31) << 31)) int off_t_is_large[(LARGE_OFF_T % 2147483629 == 721 && LARGE_OFF_T % 2147483647 == 1) ? 1 : -1]; int main () { ; return 0; } _ACEOF if ac_fn_c_try_compile "$LINENO"; then : ac_cv_sys_large_files=no; break fi rm -f core conftest.err conftest.$ac_objext conftest.$ac_ext cat confdefs.h - <<_ACEOF >conftest.$ac_ext /* end confdefs.h. 
*/ #define _LARGE_FILES 1 #include <sys/types.h> /* Check that off_t can represent 2**63 - 1 correctly. We can't simply define LARGE_OFF_T to be 9223372036854775807, since some C++ compilers masquerading as C compilers incorrectly reject 9223372036854775807. */ #define LARGE_OFF_T ((((off_t) 1 << 31) << 31) - 1 + (((off_t) 1 << 31) << 31)) int off_t_is_large[(LARGE_OFF_T % 2147483629 == 721 && LARGE_OFF_T % 2147483647 == 1) ? 1 : -1]; int main () { ; return 0; } _ACEOF if ac_fn_c_try_compile "$LINENO"; then : ac_cv_sys_large_files=1; break fi rm -f core conftest.err conftest.$ac_objext conftest.$ac_ext ac_cv_sys_large_files=unknown break done fi { $as_echo "$as_me:${as_lineno-$LINENO}: result: $ac_cv_sys_large_files" >&5 $as_echo "$ac_cv_sys_large_files" >&6; } case $ac_cv_sys_large_files in #( no | unknown) ;; *) cat >>confdefs.h <<_ACEOF #define _LARGE_FILES $ac_cv_sys_large_files _ACEOF ;; esac rm -rf conftest* fi fi fi if test "x$ac_have_aix_proctrack" = "xyes"; then HAVE_AIX_PROCTRACK_TRUE= HAVE_AIX_PROCTRACK_FALSE='#' else HAVE_AIX_PROCTRACK_TRUE='#' HAVE_AIX_PROCTRACK_FALSE= fi case "$host" in *-*-aix*) $as_echo "#define USE_ALIAS 0" >>confdefs.h ;; *darwin*) $as_echo "#define USE_ALIAS 0" >>confdefs.h ;; *) $as_echo "#define USE_ALIAS 1" >>confdefs.h ;; esac ac_have_cygwin=no case "$host" in *cygwin) LDFLAGS="$LDFLAGS -no-undefined" SO_LDFLAGS="$SO_LDFLAGS \$(top_builddir)/src/api/libslurmhelper.la" ac_have_cygwin=yes ;; *solaris*) CC="/usr/sfw/bin/gcc" CFLAGS="$CFLAGS -D_POSIX_PTHREAD_SEMANTICS -I/usr/sfw/include" LDFLAGS="$LDFLAGS -L/usr/sfw/lib" ;; esac if test x"$ac_have_cygwin" = x"yes"; then WITH_CYGWIN_TRUE= WITH_CYGWIN_FALSE='#' else WITH_CYGWIN_TRUE='#' WITH_CYGWIN_FALSE= fi ac_ext=c ac_cpp='$CPP $CPPFLAGS' ac_compile='$CC -c $CFLAGS $CPPFLAGS conftest.$ac_ext >&5' ac_link='$CC -o conftest$ac_exeext $CFLAGS $CPPFLAGS $LDFLAGS conftest.$ac_ext $LIBS >&5' ac_compiler_gnu=$ac_cv_c_compiler_gnu if test -n "$ac_tool_prefix"; then # Extract the first word of 
"${ac_tool_prefix}gcc", so it can be a program name with args. set dummy ${ac_tool_prefix}gcc; ac_word=$2 { $as_echo "$as_me:${as_lineno-$LINENO}: checking for $ac_word" >&5 $as_echo_n "checking for $ac_word... " >&6; } if ${ac_cv_prog_CC+:} false; then : $as_echo_n "(cached) " >&6 else if test -n "$CC"; then ac_cv_prog_CC="$CC" # Let the user override the test. else as_save_IFS=$IFS; IFS=$PATH_SEPARATOR for as_dir in $PATH do IFS=$as_save_IFS test -z "$as_dir" && as_dir=. for ac_exec_ext in '' $ac_executable_extensions; do if as_fn_executable_p "$as_dir/$ac_word$ac_exec_ext"; then ac_cv_prog_CC="${ac_tool_prefix}gcc" $as_echo "$as_me:${as_lineno-$LINENO}: found $as_dir/$ac_word$ac_exec_ext" >&5 break 2 fi done done IFS=$as_save_IFS fi fi CC=$ac_cv_prog_CC if test -n "$CC"; then { $as_echo "$as_me:${as_lineno-$LINENO}: result: $CC" >&5 $as_echo "$CC" >&6; } else { $as_echo "$as_me:${as_lineno-$LINENO}: result: no" >&5 $as_echo "no" >&6; } fi fi if test -z "$ac_cv_prog_CC"; then ac_ct_CC=$CC # Extract the first word of "gcc", so it can be a program name with args. set dummy gcc; ac_word=$2 { $as_echo "$as_me:${as_lineno-$LINENO}: checking for $ac_word" >&5 $as_echo_n "checking for $ac_word... " >&6; } if ${ac_cv_prog_ac_ct_CC+:} false; then : $as_echo_n "(cached) " >&6 else if test -n "$ac_ct_CC"; then ac_cv_prog_ac_ct_CC="$ac_ct_CC" # Let the user override the test. else as_save_IFS=$IFS; IFS=$PATH_SEPARATOR for as_dir in $PATH do IFS=$as_save_IFS test -z "$as_dir" && as_dir=. 
for ac_exec_ext in '' $ac_executable_extensions; do if as_fn_executable_p "$as_dir/$ac_word$ac_exec_ext"; then ac_cv_prog_ac_ct_CC="gcc" $as_echo "$as_me:${as_lineno-$LINENO}: found $as_dir/$ac_word$ac_exec_ext" >&5 break 2 fi done done IFS=$as_save_IFS fi fi ac_ct_CC=$ac_cv_prog_ac_ct_CC if test -n "$ac_ct_CC"; then { $as_echo "$as_me:${as_lineno-$LINENO}: result: $ac_ct_CC" >&5 $as_echo "$ac_ct_CC" >&6; } else { $as_echo "$as_me:${as_lineno-$LINENO}: result: no" >&5 $as_echo "no" >&6; } fi if test "x$ac_ct_CC" = x; then CC="" else case $cross_compiling:$ac_tool_warned in yes:) { $as_echo "$as_me:${as_lineno-$LINENO}: WARNING: using cross tools not prefixed with host triplet" >&5 $as_echo "$as_me: WARNING: using cross tools not prefixed with host triplet" >&2;} ac_tool_warned=yes ;; esac CC=$ac_ct_CC fi else CC="$ac_cv_prog_CC" fi if test -z "$CC"; then if test -n "$ac_tool_prefix"; then # Extract the first word of "${ac_tool_prefix}cc", so it can be a program name with args. set dummy ${ac_tool_prefix}cc; ac_word=$2 { $as_echo "$as_me:${as_lineno-$LINENO}: checking for $ac_word" >&5 $as_echo_n "checking for $ac_word... " >&6; } if ${ac_cv_prog_CC+:} false; then : $as_echo_n "(cached) " >&6 else if test -n "$CC"; then ac_cv_prog_CC="$CC" # Let the user override the test. else as_save_IFS=$IFS; IFS=$PATH_SEPARATOR for as_dir in $PATH do IFS=$as_save_IFS test -z "$as_dir" && as_dir=. for ac_exec_ext in '' $ac_executable_extensions; do if as_fn_executable_p "$as_dir/$ac_word$ac_exec_ext"; then ac_cv_prog_CC="${ac_tool_prefix}cc" $as_echo "$as_me:${as_lineno-$LINENO}: found $as_dir/$ac_word$ac_exec_ext" >&5 break 2 fi done done IFS=$as_save_IFS fi fi CC=$ac_cv_prog_CC if test -n "$CC"; then { $as_echo "$as_me:${as_lineno-$LINENO}: result: $CC" >&5 $as_echo "$CC" >&6; } else { $as_echo "$as_me:${as_lineno-$LINENO}: result: no" >&5 $as_echo "no" >&6; } fi fi fi if test -z "$CC"; then # Extract the first word of "cc", so it can be a program name with args. 
set dummy cc; ac_word=$2 { $as_echo "$as_me:${as_lineno-$LINENO}: checking for $ac_word" >&5 $as_echo_n "checking for $ac_word... " >&6; } if ${ac_cv_prog_CC+:} false; then : $as_echo_n "(cached) " >&6 else if test -n "$CC"; then ac_cv_prog_CC="$CC" # Let the user override the test. else ac_prog_rejected=no as_save_IFS=$IFS; IFS=$PATH_SEPARATOR for as_dir in $PATH do IFS=$as_save_IFS test -z "$as_dir" && as_dir=. for ac_exec_ext in '' $ac_executable_extensions; do if as_fn_executable_p "$as_dir/$ac_word$ac_exec_ext"; then if test "$as_dir/$ac_word$ac_exec_ext" = "/usr/ucb/cc"; then ac_prog_rejected=yes continue fi ac_cv_prog_CC="cc" $as_echo "$as_me:${as_lineno-$LINENO}: found $as_dir/$ac_word$ac_exec_ext" >&5 break 2 fi done done IFS=$as_save_IFS if test $ac_prog_rejected = yes; then # We found a bogon in the path, so make sure we never use it. set dummy $ac_cv_prog_CC shift if test $# != 0; then # We chose a different compiler from the bogus one. # However, it has the same basename, so the bogon will be chosen # first if we set CC to just the basename; use the full file name. shift ac_cv_prog_CC="$as_dir/$ac_word${1+' '}$@" fi fi fi fi CC=$ac_cv_prog_CC if test -n "$CC"; then { $as_echo "$as_me:${as_lineno-$LINENO}: result: $CC" >&5 $as_echo "$CC" >&6; } else { $as_echo "$as_me:${as_lineno-$LINENO}: result: no" >&5 $as_echo "no" >&6; } fi fi if test -z "$CC"; then if test -n "$ac_tool_prefix"; then for ac_prog in cl.exe do # Extract the first word of "$ac_tool_prefix$ac_prog", so it can be a program name with args. set dummy $ac_tool_prefix$ac_prog; ac_word=$2 { $as_echo "$as_me:${as_lineno-$LINENO}: checking for $ac_word" >&5 $as_echo_n "checking for $ac_word... " >&6; } if ${ac_cv_prog_CC+:} false; then : $as_echo_n "(cached) " >&6 else if test -n "$CC"; then ac_cv_prog_CC="$CC" # Let the user override the test. else as_save_IFS=$IFS; IFS=$PATH_SEPARATOR for as_dir in $PATH do IFS=$as_save_IFS test -z "$as_dir" && as_dir=. 
for ac_exec_ext in '' $ac_executable_extensions; do if as_fn_executable_p "$as_dir/$ac_word$ac_exec_ext"; then ac_cv_prog_CC="$ac_tool_prefix$ac_prog" $as_echo "$as_me:${as_lineno-$LINENO}: found $as_dir/$ac_word$ac_exec_ext" >&5 break 2 fi done done IFS=$as_save_IFS fi fi CC=$ac_cv_prog_CC if test -n "$CC"; then { $as_echo "$as_me:${as_lineno-$LINENO}: result: $CC" >&5 $as_echo "$CC" >&6; } else { $as_echo "$as_me:${as_lineno-$LINENO}: result: no" >&5 $as_echo "no" >&6; } fi test -n "$CC" && break done fi if test -z "$CC"; then ac_ct_CC=$CC for ac_prog in cl.exe do # Extract the first word of "$ac_prog", so it can be a program name with args. set dummy $ac_prog; ac_word=$2 { $as_echo "$as_me:${as_lineno-$LINENO}: checking for $ac_word" >&5 $as_echo_n "checking for $ac_word... " >&6; } if ${ac_cv_prog_ac_ct_CC+:} false; then : $as_echo_n "(cached) " >&6 else if test -n "$ac_ct_CC"; then ac_cv_prog_ac_ct_CC="$ac_ct_CC" # Let the user override the test. else as_save_IFS=$IFS; IFS=$PATH_SEPARATOR for as_dir in $PATH do IFS=$as_save_IFS test -z "$as_dir" && as_dir=. 
for ac_exec_ext in '' $ac_executable_extensions; do if as_fn_executable_p "$as_dir/$ac_word$ac_exec_ext"; then ac_cv_prog_ac_ct_CC="$ac_prog" $as_echo "$as_me:${as_lineno-$LINENO}: found $as_dir/$ac_word$ac_exec_ext" >&5 break 2 fi done done IFS=$as_save_IFS fi fi ac_ct_CC=$ac_cv_prog_ac_ct_CC if test -n "$ac_ct_CC"; then { $as_echo "$as_me:${as_lineno-$LINENO}: result: $ac_ct_CC" >&5 $as_echo "$ac_ct_CC" >&6; } else { $as_echo "$as_me:${as_lineno-$LINENO}: result: no" >&5 $as_echo "no" >&6; } fi test -n "$ac_ct_CC" && break done if test "x$ac_ct_CC" = x; then CC="" else case $cross_compiling:$ac_tool_warned in yes:) { $as_echo "$as_me:${as_lineno-$LINENO}: WARNING: using cross tools not prefixed with host triplet" >&5 $as_echo "$as_me: WARNING: using cross tools not prefixed with host triplet" >&2;} ac_tool_warned=yes ;; esac CC=$ac_ct_CC fi fi fi test -z "$CC" && { { $as_echo "$as_me:${as_lineno-$LINENO}: error: in \`$ac_pwd':" >&5 $as_echo "$as_me: error: in \`$ac_pwd':" >&2;} as_fn_error $? "no acceptable C compiler found in \$PATH See \`config.log' for more details" "$LINENO" 5; } # Provide some information about the compiler. $as_echo "$as_me:${as_lineno-$LINENO}: checking for C compiler version" >&5 set X $ac_compile ac_compiler=$2 for ac_option in --version -v -V -qversion; do { { ac_try="$ac_compiler $ac_option >&5" case "(($ac_try" in *\"* | *\`* | *\\*) ac_try_echo=\$ac_try;; *) ac_try_echo=$ac_try;; esac eval ac_try_echo="\"\$as_me:${as_lineno-$LINENO}: $ac_try_echo\"" $as_echo "$ac_try_echo"; } >&5 (eval "$ac_compiler $ac_option >&5") 2>conftest.err ac_status=$? if test -s conftest.err; then sed '10a\ ... rest of stderr output deleted ... 10q' conftest.err >conftest.er1 cat conftest.er1 >&5 fi rm -f conftest.er1 conftest.err $as_echo "$as_me:${as_lineno-$LINENO}: \$? 
= $ac_status" >&5 test $ac_status = 0; } done { $as_echo "$as_me:${as_lineno-$LINENO}: checking whether we are using the GNU C compiler" >&5 $as_echo_n "checking whether we are using the GNU C compiler... " >&6; } if ${ac_cv_c_compiler_gnu+:} false; then : $as_echo_n "(cached) " >&6 else cat confdefs.h - <<_ACEOF >conftest.$ac_ext /* end confdefs.h. */ int main () { #ifndef __GNUC__ choke me #endif ; return 0; } _ACEOF if ac_fn_c_try_compile "$LINENO"; then : ac_compiler_gnu=yes else ac_compiler_gnu=no fi rm -f core conftest.err conftest.$ac_objext conftest.$ac_ext ac_cv_c_compiler_gnu=$ac_compiler_gnu fi { $as_echo "$as_me:${as_lineno-$LINENO}: result: $ac_cv_c_compiler_gnu" >&5 $as_echo "$ac_cv_c_compiler_gnu" >&6; } if test $ac_compiler_gnu = yes; then GCC=yes else GCC= fi ac_test_CFLAGS=${CFLAGS+set} ac_save_CFLAGS=$CFLAGS { $as_echo "$as_me:${as_lineno-$LINENO}: checking whether $CC accepts -g" >&5 $as_echo_n "checking whether $CC accepts -g... " >&6; } if ${ac_cv_prog_cc_g+:} false; then : $as_echo_n "(cached) " >&6 else ac_save_c_werror_flag=$ac_c_werror_flag ac_c_werror_flag=yes ac_cv_prog_cc_g=no CFLAGS="-g" cat confdefs.h - <<_ACEOF >conftest.$ac_ext /* end confdefs.h. */ int main () { ; return 0; } _ACEOF if ac_fn_c_try_compile "$LINENO"; then : ac_cv_prog_cc_g=yes else CFLAGS="" cat confdefs.h - <<_ACEOF >conftest.$ac_ext /* end confdefs.h. */ int main () { ; return 0; } _ACEOF if ac_fn_c_try_compile "$LINENO"; then : else ac_c_werror_flag=$ac_save_c_werror_flag CFLAGS="-g" cat confdefs.h - <<_ACEOF >conftest.$ac_ext /* end confdefs.h. 
*/ int main () { ; return 0; } _ACEOF if ac_fn_c_try_compile "$LINENO"; then : ac_cv_prog_cc_g=yes fi rm -f core conftest.err conftest.$ac_objext conftest.$ac_ext fi rm -f core conftest.err conftest.$ac_objext conftest.$ac_ext fi rm -f core conftest.err conftest.$ac_objext conftest.$ac_ext ac_c_werror_flag=$ac_save_c_werror_flag fi { $as_echo "$as_me:${as_lineno-$LINENO}: result: $ac_cv_prog_cc_g" >&5 $as_echo "$ac_cv_prog_cc_g" >&6; } if test "$ac_test_CFLAGS" = set; then CFLAGS=$ac_save_CFLAGS elif test $ac_cv_prog_cc_g = yes; then if test "$GCC" = yes; then CFLAGS="-g -O2" else CFLAGS="-g" fi else if test "$GCC" = yes; then CFLAGS="-O2" else CFLAGS= fi fi { $as_echo "$as_me:${as_lineno-$LINENO}: checking for $CC option to accept ISO C89" >&5 $as_echo_n "checking for $CC option to accept ISO C89... " >&6; } if ${ac_cv_prog_cc_c89+:} false; then : $as_echo_n "(cached) " >&6 else ac_cv_prog_cc_c89=no ac_save_CC=$CC cat confdefs.h - <<_ACEOF >conftest.$ac_ext /* end confdefs.h. */ #include <stdarg.h> #include <stdio.h> struct stat; /* Most of the following tests are stolen from RCS 5.7's src/conf.sh. */ struct buf { int x; }; FILE * (*rcsopen) (struct buf *, struct stat *, int); static char *e (p, i) char **p; int i; { return p[i]; } static char *f (char * (*g) (char **, int), char **p, ...) { char *s; va_list v; va_start (v,p); s = g (p, va_arg (v,int)); va_end (v); return s; } /* OSF 4.0 Compaq cc is some sort of almost-ANSI by default. It has function prototypes and stuff, but not '\xHH' hex character constants. These don't provoke an error unfortunately, instead are silently treated as 'x'. The following induces an error, until -std is added to get proper ANSI mode. Curiously '\x00'!='x' always comes out true, for an array size at least. It's necessary to write '\x00'==0 to get something that's true only with -std. */ int osf4_cc_array ['\x00' == 0 ? 1 : -1]; /* IBM C 6 for AIX is almost-ANSI by default, but it replaces macro parameters inside strings and character constants. 
*/ #define FOO(x) 'x' int xlc6_cc_array[FOO(a) == 'x' ? 1 : -1]; int test (int i, double x); struct s1 {int (*f) (int a);}; struct s2 {int (*f) (double a);}; int pairnames (int, char **, FILE *(*)(struct buf *, struct stat *, int), int, int); int argc; char **argv; int main () { return f (e, argv, 0) != argv[0] || f (e, argv, 1) != argv[1]; ; return 0; } _ACEOF for ac_arg in '' -qlanglvl=extc89 -qlanglvl=ansi -std \ -Ae "-Aa -D_HPUX_SOURCE" "-Xc -D__EXTENSIONS__" do CC="$ac_save_CC $ac_arg" if ac_fn_c_try_compile "$LINENO"; then : ac_cv_prog_cc_c89=$ac_arg fi rm -f core conftest.err conftest.$ac_objext test "x$ac_cv_prog_cc_c89" != "xno" && break done rm -f conftest.$ac_ext CC=$ac_save_CC fi # AC_CACHE_VAL case "x$ac_cv_prog_cc_c89" in x) { $as_echo "$as_me:${as_lineno-$LINENO}: result: none needed" >&5 $as_echo "none needed" >&6; } ;; xno) { $as_echo "$as_me:${as_lineno-$LINENO}: result: unsupported" >&5 $as_echo "unsupported" >&6; } ;; *) CC="$CC $ac_cv_prog_cc_c89" { $as_echo "$as_me:${as_lineno-$LINENO}: result: $ac_cv_prog_cc_c89" >&5 $as_echo "$ac_cv_prog_cc_c89" >&6; } ;; esac if test "x$ac_cv_prog_cc_c89" != xno; then : fi ac_ext=c ac_cpp='$CPP $CPPFLAGS' ac_compile='$CC -c $CFLAGS $CPPFLAGS conftest.$ac_ext >&5' ac_link='$CC -o conftest$ac_exeext $CFLAGS $CPPFLAGS $LDFLAGS conftest.$ac_ext $LIBS >&5' ac_compiler_gnu=$ac_cv_c_compiler_gnu ac_ext=c ac_cpp='$CPP $CPPFLAGS' ac_compile='$CC -c $CFLAGS $CPPFLAGS conftest.$ac_ext >&5' ac_link='$CC -o conftest$ac_exeext $CFLAGS $CPPFLAGS $LDFLAGS conftest.$ac_ext $LIBS >&5' ac_compiler_gnu=$ac_cv_c_compiler_gnu { $as_echo "$as_me:${as_lineno-$LINENO}: checking whether $CC understands -c and -o together" >&5 $as_echo_n "checking whether $CC understands -c and -o together... " >&6; } if ${am_cv_prog_cc_c_o+:} false; then : $as_echo_n "(cached) " >&6 else cat confdefs.h - <<_ACEOF >conftest.$ac_ext /* end confdefs.h. */ int main () { ; return 0; } _ACEOF # Make sure it works both with $CC and with simple cc. 
# Following AC_PROG_CC_C_O, we do the test twice because some # compilers refuse to overwrite an existing .o file with -o, # though they will create one. am_cv_prog_cc_c_o=yes for am_i in 1 2; do if { echo "$as_me:$LINENO: $CC -c conftest.$ac_ext -o conftest2.$ac_objext" >&5 ($CC -c conftest.$ac_ext -o conftest2.$ac_objext) >&5 2>&5 ac_status=$? echo "$as_me:$LINENO: \$? = $ac_status" >&5 (exit $ac_status); } \ && test -f conftest2.$ac_objext; then : OK else am_cv_prog_cc_c_o=no break fi done rm -f core conftest* unset am_i fi { $as_echo "$as_me:${as_lineno-$LINENO}: result: $am_cv_prog_cc_c_o" >&5 $as_echo "$am_cv_prog_cc_c_o" >&6; } if test "$am_cv_prog_cc_c_o" != yes; then # Losing compiler, so override with the script. # FIXME: It is wrong to rewrite CC. # But if we don't then we get into trouble of one sort or another. # A longer-term fix would be to have automake use am__CC in this case, # and then we could set am__CC="\$(top_srcdir)/compile \$(CC)" CC="$am_aux_dir/compile $CC" fi ac_ext=c ac_cpp='$CPP $CPPFLAGS' ac_compile='$CC -c $CFLAGS $CPPFLAGS conftest.$ac_ext >&5' ac_link='$CC -o conftest$ac_exeext $CFLAGS $CPPFLAGS $LDFLAGS conftest.$ac_ext $LIBS >&5' ac_compiler_gnu=$ac_cv_c_compiler_gnu depcc="$CC" am_compiler_list= { $as_echo "$as_me:${as_lineno-$LINENO}: checking dependency style of $depcc" >&5 $as_echo_n "checking dependency style of $depcc... " >&6; } if ${am_cv_CC_dependencies_compiler_type+:} false; then : $as_echo_n "(cached) " >&6 else if test -z "$AMDEP_TRUE" && test -f "$am_depcomp"; then # We make a subdir and do the tests there. Otherwise we can end up # making bogus files that we don't know about and never remove. For # instance it was reported that on HP-UX the gcc test will end up # making a dummy file named 'D' -- because '-MD' means "put the output # in D". rm -rf conftest.dir mkdir conftest.dir # Copy depcomp to subdir because otherwise we won't find it if we're # using a relative directory. 
cp "$am_depcomp" conftest.dir cd conftest.dir # We will build objects and dependencies in a subdirectory because # it helps to detect inapplicable dependency modes. For instance # both Tru64's cc and ICC support -MD to output dependencies as a # side effect of compilation, but ICC will put the dependencies in # the current directory while Tru64 will put them in the object # directory. mkdir sub am_cv_CC_dependencies_compiler_type=none if test "$am_compiler_list" = ""; then am_compiler_list=`sed -n 's/^#*\([a-zA-Z0-9]*\))$/\1/p' < ./depcomp` fi am__universal=false case " $depcc " in #( *\ -arch\ *\ -arch\ *) am__universal=true ;; esac for depmode in $am_compiler_list; do # Setup a source with many dependencies, because some compilers # like to wrap large dependency lists on column 80 (with \), and # we should not choose a depcomp mode which is confused by this. # # We need to recreate these files for each test, as the compiler may # overwrite some of them when testing with obscure command lines. # This happens at least with the AIX C compiler. : > sub/conftest.c for i in 1 2 3 4 5 6; do echo '#include "conftst'$i'.h"' >> sub/conftest.c # Using ": > sub/conftst$i.h" creates only sub/conftst1.h with # Solaris 10 /bin/sh. echo '/* dummy */' > sub/conftst$i.h done echo "${am__include} ${am__quote}sub/conftest.Po${am__quote}" > confmf # We check with '-c' and '-o' for the sake of the "dashmstdout" # mode. It turns out that the SunPro C++ compiler does not properly # handle '-M -o', and we need to detect this. Also, some Intel # versions had trouble with output in subdirs. am__obj=sub/conftest.${OBJEXT-o} am__minus_obj="-o $am__obj" case $depmode in gcc) # This depmode causes a compiler race in universal mode. test "$am__universal" = false || continue ;; nosideeffect) # After this tag, mechanisms are not by side-effect, so they'll # only be used when explicitly requested. 
if test "x$enable_dependency_tracking" = xyes; then continue else break fi ;; msvc7 | msvc7msys | msvisualcpp | msvcmsys) # This compiler won't grok '-c -o', but also, the minuso test has # not run yet. These depmodes are late enough in the game, and # so weak that their functioning should not be impacted. am__obj=conftest.${OBJEXT-o} am__minus_obj= ;; none) break ;; esac if depmode=$depmode \ source=sub/conftest.c object=$am__obj \ depfile=sub/conftest.Po tmpdepfile=sub/conftest.TPo \ $SHELL ./depcomp $depcc -c $am__minus_obj sub/conftest.c \ >/dev/null 2>conftest.err && grep sub/conftst1.h sub/conftest.Po > /dev/null 2>&1 && grep sub/conftst6.h sub/conftest.Po > /dev/null 2>&1 && grep $am__obj sub/conftest.Po > /dev/null 2>&1 && ${MAKE-make} -s -f confmf > /dev/null 2>&1; then # icc doesn't choke on unknown options, it will just issue warnings # or remarks (even with -Werror). So we grep stderr for any message # that says an option was ignored or not supported. # When given -MP, icc 7.0 and 7.1 complain thusly: # icc: Command line warning: ignoring option '-M'; no argument required # The diagnosis changed in icc 8.0: # icc: Command line remark: option '-MP' not supported if (grep 'ignoring option' conftest.err || grep 'not supported' conftest.err) >/dev/null 2>&1; then :; else am_cv_CC_dependencies_compiler_type=$depmode break fi fi done cd .. 
rm -rf conftest.dir else am_cv_CC_dependencies_compiler_type=none fi fi { $as_echo "$as_me:${as_lineno-$LINENO}: result: $am_cv_CC_dependencies_compiler_type" >&5 $as_echo "$am_cv_CC_dependencies_compiler_type" >&6; } CCDEPMODE=depmode=$am_cv_CC_dependencies_compiler_type if test "x$enable_dependency_tracking" != xno \ && test "$am_cv_CC_dependencies_compiler_type" = gcc3; then am__fastdepCC_TRUE= am__fastdepCC_FALSE='#' else am__fastdepCC_TRUE='#' am__fastdepCC_FALSE= fi ac_ext=cpp ac_cpp='$CXXCPP $CPPFLAGS' ac_compile='$CXX -c $CXXFLAGS $CPPFLAGS conftest.$ac_ext >&5' ac_link='$CXX -o conftest$ac_exeext $CXXFLAGS $CPPFLAGS $LDFLAGS conftest.$ac_ext $LIBS >&5' ac_compiler_gnu=$ac_cv_cxx_compiler_gnu if test -z "$CXX"; then if test -n "$CCC"; then CXX=$CCC else if test -n "$ac_tool_prefix"; then for ac_prog in g++ c++ gpp aCC CC cxx cc++ cl.exe FCC KCC RCC xlC_r xlC do # Extract the first word of "$ac_tool_prefix$ac_prog", so it can be a program name with args. set dummy $ac_tool_prefix$ac_prog; ac_word=$2 { $as_echo "$as_me:${as_lineno-$LINENO}: checking for $ac_word" >&5 $as_echo_n "checking for $ac_word... " >&6; } if ${ac_cv_prog_CXX+:} false; then : $as_echo_n "(cached) " >&6 else if test -n "$CXX"; then ac_cv_prog_CXX="$CXX" # Let the user override the test. else as_save_IFS=$IFS; IFS=$PATH_SEPARATOR for as_dir in $PATH do IFS=$as_save_IFS test -z "$as_dir" && as_dir=. 
for ac_exec_ext in '' $ac_executable_extensions; do if as_fn_executable_p "$as_dir/$ac_word$ac_exec_ext"; then ac_cv_prog_CXX="$ac_tool_prefix$ac_prog" $as_echo "$as_me:${as_lineno-$LINENO}: found $as_dir/$ac_word$ac_exec_ext" >&5 break 2 fi done done IFS=$as_save_IFS fi fi CXX=$ac_cv_prog_CXX if test -n "$CXX"; then { $as_echo "$as_me:${as_lineno-$LINENO}: result: $CXX" >&5 $as_echo "$CXX" >&6; } else { $as_echo "$as_me:${as_lineno-$LINENO}: result: no" >&5 $as_echo "no" >&6; } fi test -n "$CXX" && break done fi if test -z "$CXX"; then ac_ct_CXX=$CXX for ac_prog in g++ c++ gpp aCC CC cxx cc++ cl.exe FCC KCC RCC xlC_r xlC do # Extract the first word of "$ac_prog", so it can be a program name with args. set dummy $ac_prog; ac_word=$2 { $as_echo "$as_me:${as_lineno-$LINENO}: checking for $ac_word" >&5 $as_echo_n "checking for $ac_word... " >&6; } if ${ac_cv_prog_ac_ct_CXX+:} false; then : $as_echo_n "(cached) " >&6 else if test -n "$ac_ct_CXX"; then ac_cv_prog_ac_ct_CXX="$ac_ct_CXX" # Let the user override the test. else as_save_IFS=$IFS; IFS=$PATH_SEPARATOR for as_dir in $PATH do IFS=$as_save_IFS test -z "$as_dir" && as_dir=. 
for ac_exec_ext in '' $ac_executable_extensions; do if as_fn_executable_p "$as_dir/$ac_word$ac_exec_ext"; then ac_cv_prog_ac_ct_CXX="$ac_prog" $as_echo "$as_me:${as_lineno-$LINENO}: found $as_dir/$ac_word$ac_exec_ext" >&5 break 2 fi done done IFS=$as_save_IFS fi fi ac_ct_CXX=$ac_cv_prog_ac_ct_CXX if test -n "$ac_ct_CXX"; then { $as_echo "$as_me:${as_lineno-$LINENO}: result: $ac_ct_CXX" >&5 $as_echo "$ac_ct_CXX" >&6; } else { $as_echo "$as_me:${as_lineno-$LINENO}: result: no" >&5 $as_echo "no" >&6; } fi test -n "$ac_ct_CXX" && break done if test "x$ac_ct_CXX" = x; then CXX="g++" else case $cross_compiling:$ac_tool_warned in yes:) { $as_echo "$as_me:${as_lineno-$LINENO}: WARNING: using cross tools not prefixed with host triplet" >&5 $as_echo "$as_me: WARNING: using cross tools not prefixed with host triplet" >&2;} ac_tool_warned=yes ;; esac CXX=$ac_ct_CXX fi fi fi fi # Provide some information about the compiler. $as_echo "$as_me:${as_lineno-$LINENO}: checking for C++ compiler version" >&5 set X $ac_compile ac_compiler=$2 for ac_option in --version -v -V -qversion; do { { ac_try="$ac_compiler $ac_option >&5" case "(($ac_try" in *\"* | *\`* | *\\*) ac_try_echo=\$ac_try;; *) ac_try_echo=$ac_try;; esac eval ac_try_echo="\"\$as_me:${as_lineno-$LINENO}: $ac_try_echo\"" $as_echo "$ac_try_echo"; } >&5 (eval "$ac_compiler $ac_option >&5") 2>conftest.err ac_status=$? if test -s conftest.err; then sed '10a\ ... rest of stderr output deleted ... 10q' conftest.err >conftest.er1 cat conftest.er1 >&5 fi rm -f conftest.er1 conftest.err $as_echo "$as_me:${as_lineno-$LINENO}: \$? = $ac_status" >&5 test $ac_status = 0; } done { $as_echo "$as_me:${as_lineno-$LINENO}: checking whether we are using the GNU C++ compiler" >&5 $as_echo_n "checking whether we are using the GNU C++ compiler... " >&6; } if ${ac_cv_cxx_compiler_gnu+:} false; then : $as_echo_n "(cached) " >&6 else cat confdefs.h - <<_ACEOF >conftest.$ac_ext /* end confdefs.h. 
*/ int main () { #ifndef __GNUC__ choke me #endif ; return 0; } _ACEOF if ac_fn_cxx_try_compile "$LINENO"; then : ac_compiler_gnu=yes else ac_compiler_gnu=no fi rm -f core conftest.err conftest.$ac_objext conftest.$ac_ext ac_cv_cxx_compiler_gnu=$ac_compiler_gnu fi { $as_echo "$as_me:${as_lineno-$LINENO}: result: $ac_cv_cxx_compiler_gnu" >&5 $as_echo "$ac_cv_cxx_compiler_gnu" >&6; } if test $ac_compiler_gnu = yes; then GXX=yes else GXX= fi ac_test_CXXFLAGS=${CXXFLAGS+set} ac_save_CXXFLAGS=$CXXFLAGS { $as_echo "$as_me:${as_lineno-$LINENO}: checking whether $CXX accepts -g" >&5 $as_echo_n "checking whether $CXX accepts -g... " >&6; } if ${ac_cv_prog_cxx_g+:} false; then : $as_echo_n "(cached) " >&6 else ac_save_cxx_werror_flag=$ac_cxx_werror_flag ac_cxx_werror_flag=yes ac_cv_prog_cxx_g=no CXXFLAGS="-g" cat confdefs.h - <<_ACEOF >conftest.$ac_ext /* end confdefs.h. */ int main () { ; return 0; } _ACEOF if ac_fn_cxx_try_compile "$LINENO"; then : ac_cv_prog_cxx_g=yes else CXXFLAGS="" cat confdefs.h - <<_ACEOF >conftest.$ac_ext /* end confdefs.h. */ int main () { ; return 0; } _ACEOF if ac_fn_cxx_try_compile "$LINENO"; then : else ac_cxx_werror_flag=$ac_save_cxx_werror_flag CXXFLAGS="-g" cat confdefs.h - <<_ACEOF >conftest.$ac_ext /* end confdefs.h. 
*/ int main () { ; return 0; } _ACEOF if ac_fn_cxx_try_compile "$LINENO"; then : ac_cv_prog_cxx_g=yes fi rm -f core conftest.err conftest.$ac_objext conftest.$ac_ext fi rm -f core conftest.err conftest.$ac_objext conftest.$ac_ext fi rm -f core conftest.err conftest.$ac_objext conftest.$ac_ext ac_cxx_werror_flag=$ac_save_cxx_werror_flag fi { $as_echo "$as_me:${as_lineno-$LINENO}: result: $ac_cv_prog_cxx_g" >&5 $as_echo "$ac_cv_prog_cxx_g" >&6; } if test "$ac_test_CXXFLAGS" = set; then CXXFLAGS=$ac_save_CXXFLAGS elif test $ac_cv_prog_cxx_g = yes; then if test "$GXX" = yes; then CXXFLAGS="-g -O2" else CXXFLAGS="-g" fi else if test "$GXX" = yes; then CXXFLAGS="-O2" else CXXFLAGS= fi fi ac_ext=c ac_cpp='$CPP $CPPFLAGS' ac_compile='$CC -c $CFLAGS $CPPFLAGS conftest.$ac_ext >&5' ac_link='$CC -o conftest$ac_exeext $CFLAGS $CPPFLAGS $LDFLAGS conftest.$ac_ext $LIBS >&5' ac_compiler_gnu=$ac_cv_c_compiler_gnu depcc="$CXX" am_compiler_list= { $as_echo "$as_me:${as_lineno-$LINENO}: checking dependency style of $depcc" >&5 $as_echo_n "checking dependency style of $depcc... " >&6; } if ${am_cv_CXX_dependencies_compiler_type+:} false; then : $as_echo_n "(cached) " >&6 else if test -z "$AMDEP_TRUE" && test -f "$am_depcomp"; then # We make a subdir and do the tests there. Otherwise we can end up # making bogus files that we don't know about and never remove. For # instance it was reported that on HP-UX the gcc test will end up # making a dummy file named 'D' -- because '-MD' means "put the output # in D". rm -rf conftest.dir mkdir conftest.dir # Copy depcomp to subdir because otherwise we won't find it if we're # using a relative directory. cp "$am_depcomp" conftest.dir cd conftest.dir # We will build objects and dependencies in a subdirectory because # it helps to detect inapplicable dependency modes. 
For instance # both Tru64's cc and ICC support -MD to output dependencies as a # side effect of compilation, but ICC will put the dependencies in # the current directory while Tru64 will put them in the object # directory. mkdir sub am_cv_CXX_dependencies_compiler_type=none if test "$am_compiler_list" = ""; then am_compiler_list=`sed -n 's/^#*\([a-zA-Z0-9]*\))$/\1/p' < ./depcomp` fi am__universal=false case " $depcc " in #( *\ -arch\ *\ -arch\ *) am__universal=true ;; esac for depmode in $am_compiler_list; do # Setup a source with many dependencies, because some compilers # like to wrap large dependency lists on column 80 (with \), and # we should not choose a depcomp mode which is confused by this. # # We need to recreate these files for each test, as the compiler may # overwrite some of them when testing with obscure command lines. # This happens at least with the AIX C compiler. : > sub/conftest.c for i in 1 2 3 4 5 6; do echo '#include "conftst'$i'.h"' >> sub/conftest.c # Using ": > sub/conftst$i.h" creates only sub/conftst1.h with # Solaris 10 /bin/sh. echo '/* dummy */' > sub/conftst$i.h done echo "${am__include} ${am__quote}sub/conftest.Po${am__quote}" > confmf # We check with '-c' and '-o' for the sake of the "dashmstdout" # mode. It turns out that the SunPro C++ compiler does not properly # handle '-M -o', and we need to detect this. Also, some Intel # versions had trouble with output in subdirs. am__obj=sub/conftest.${OBJEXT-o} am__minus_obj="-o $am__obj" case $depmode in gcc) # This depmode causes a compiler race in universal mode. test "$am__universal" = false || continue ;; nosideeffect) # After this tag, mechanisms are not by side-effect, so they'll # only be used when explicitly requested. if test "x$enable_dependency_tracking" = xyes; then continue else break fi ;; msvc7 | msvc7msys | msvisualcpp | msvcmsys) # This compiler won't grok '-c -o', but also, the minuso test has # not run yet. 
These depmodes are late enough in the game, and # so weak that their functioning should not be impacted. am__obj=conftest.${OBJEXT-o} am__minus_obj= ;; none) break ;; esac if depmode=$depmode \ source=sub/conftest.c object=$am__obj \ depfile=sub/conftest.Po tmpdepfile=sub/conftest.TPo \ $SHELL ./depcomp $depcc -c $am__minus_obj sub/conftest.c \ >/dev/null 2>conftest.err && grep sub/conftst1.h sub/conftest.Po > /dev/null 2>&1 && grep sub/conftst6.h sub/conftest.Po > /dev/null 2>&1 && grep $am__obj sub/conftest.Po > /dev/null 2>&1 && ${MAKE-make} -s -f confmf > /dev/null 2>&1; then # icc doesn't choke on unknown options, it will just issue warnings # or remarks (even with -Werror). So we grep stderr for any message # that says an option was ignored or not supported. # When given -MP, icc 7.0 and 7.1 complain thusly: # icc: Command line warning: ignoring option '-M'; no argument required # The diagnosis changed in icc 8.0: # icc: Command line remark: option '-MP' not supported if (grep 'ignoring option' conftest.err || grep 'not supported' conftest.err) >/dev/null 2>&1; then :; else am_cv_CXX_dependencies_compiler_type=$depmode break fi fi done cd .. rm -rf conftest.dir else am_cv_CXX_dependencies_compiler_type=none fi fi { $as_echo "$as_me:${as_lineno-$LINENO}: result: $am_cv_CXX_dependencies_compiler_type" >&5 $as_echo "$am_cv_CXX_dependencies_compiler_type" >&6; } CXXDEPMODE=depmode=$am_cv_CXX_dependencies_compiler_type if test "x$enable_dependency_tracking" != xno \ && test "$am_cv_CXX_dependencies_compiler_type" = gcc3; then am__fastdepCXX_TRUE= am__fastdepCXX_FALSE='#' else am__fastdepCXX_TRUE='#' am__fastdepCXX_FALSE= fi { $as_echo "$as_me:${as_lineno-$LINENO}: checking whether ${MAKE-make} sets \$(MAKE)" >&5 $as_echo_n "checking whether ${MAKE-make} sets \$(MAKE)... 
" >&6; } set x ${MAKE-make} ac_make=`$as_echo "$2" | sed 's/+/p/g; s/[^a-zA-Z0-9_]/_/g'` if eval \${ac_cv_prog_make_${ac_make}_set+:} false; then : $as_echo_n "(cached) " >&6 else cat >conftest.make <<\_ACEOF SHELL = /bin/sh all: @echo '@@@%%%=$(MAKE)=@@@%%%' _ACEOF # GNU make sometimes prints "make[1]: Entering ...", which would confuse us. case `${MAKE-make} -f conftest.make 2>/dev/null` in *@@@%%%=?*=@@@%%%*) eval ac_cv_prog_make_${ac_make}_set=yes;; *) eval ac_cv_prog_make_${ac_make}_set=no;; esac rm -f conftest.make fi if eval test \$ac_cv_prog_make_${ac_make}_set = yes; then { $as_echo "$as_me:${as_lineno-$LINENO}: result: yes" >&5 $as_echo "yes" >&6; } SET_MAKE= else { $as_echo "$as_me:${as_lineno-$LINENO}: result: no" >&5 $as_echo "no" >&6; } SET_MAKE="MAKE=${MAKE-make}" fi case `pwd` in *\ * | *\ *) { $as_echo "$as_me:${as_lineno-$LINENO}: WARNING: Libtool does not cope well with whitespace in \`pwd\`" >&5 $as_echo "$as_me: WARNING: Libtool does not cope well with whitespace in \`pwd\`" >&2;} ;; esac macro_version='2.4.2' macro_revision='1.3337' ltmain="$ac_aux_dir/ltmain.sh" # Backslashify metacharacters that are still active within # double-quoted strings. sed_quote_subst='s/\(["`$\\]\)/\\\1/g' # Same as above, but do not quote variable references. double_quote_subst='s/\(["`\\]\)/\\\1/g' # Sed substitution to delay expansion of an escaped shell variable in a # double_quote_subst'ed string. delay_variable_subst='s/\\\\\\\\\\\$/\\\\\\$/g' # Sed substitution to delay expansion of an escaped single quote. delay_single_quote_subst='s/'\''/'\'\\\\\\\'\''/g' # Sed substitution to avoid accidental globbing in evaled expressions no_glob_subst='s/\*/\\\*/g' ECHO='\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\' ECHO=$ECHO$ECHO$ECHO$ECHO$ECHO ECHO=$ECHO$ECHO$ECHO$ECHO$ECHO$ECHO { $as_echo "$as_me:${as_lineno-$LINENO}: checking how to print strings" >&5 $as_echo_n "checking how to print strings... 
" >&6; } # Test print first, because it will be a builtin if present. if test "X`( print -r -- -n ) 2>/dev/null`" = X-n && \ test "X`print -r -- $ECHO 2>/dev/null`" = "X$ECHO"; then ECHO='print -r --' elif test "X`printf %s $ECHO 2>/dev/null`" = "X$ECHO"; then ECHO='printf %s\n' else # Use this function as a fallback that always works. func_fallback_echo () { eval 'cat <<_LTECHO_EOF $1 _LTECHO_EOF' } ECHO='func_fallback_echo' fi # func_echo_all arg... # Invoke $ECHO with all args, space-separated. func_echo_all () { $ECHO "" } case "$ECHO" in printf*) { $as_echo "$as_me:${as_lineno-$LINENO}: result: printf" >&5 $as_echo "printf" >&6; } ;; print*) { $as_echo "$as_me:${as_lineno-$LINENO}: result: print -r" >&5 $as_echo "print -r" >&6; } ;; *) { $as_echo "$as_me:${as_lineno-$LINENO}: result: cat" >&5 $as_echo "cat" >&6; } ;; esac { $as_echo "$as_me:${as_lineno-$LINENO}: checking for a sed that does not truncate output" >&5 $as_echo_n "checking for a sed that does not truncate output... " >&6; } if ${ac_cv_path_SED+:} false; then : $as_echo_n "(cached) " >&6 else ac_script=s/aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa/bbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbb/ for ac_i in 1 2 3 4 5 6 7; do ac_script="$ac_script$as_nl$ac_script" done echo "$ac_script" 2>/dev/null | sed 99q >conftest.sed { ac_script=; unset ac_script;} if test -z "$SED"; then ac_path_SED_found=false # Loop through the user's path and test for each of PROGNAME-LIST as_save_IFS=$IFS; IFS=$PATH_SEPARATOR for as_dir in $PATH do IFS=$as_save_IFS test -z "$as_dir" && as_dir=. for ac_prog in sed gsed; do for ac_exec_ext in '' $ac_executable_extensions; do ac_path_SED="$as_dir/$ac_prog$ac_exec_ext" as_fn_executable_p "$ac_path_SED" || continue # Check for GNU ac_path_SED and select it if it is found. 
# Check for GNU $ac_path_SED case `"$ac_path_SED" --version 2>&1` in *GNU*) ac_cv_path_SED="$ac_path_SED" ac_path_SED_found=:;; *) ac_count=0 $as_echo_n 0123456789 >"conftest.in" while : do cat "conftest.in" "conftest.in" >"conftest.tmp" mv "conftest.tmp" "conftest.in" cp "conftest.in" "conftest.nl" $as_echo '' >> "conftest.nl" "$ac_path_SED" -f conftest.sed < "conftest.nl" >"conftest.out" 2>/dev/null || break diff "conftest.out" "conftest.nl" >/dev/null 2>&1 || break as_fn_arith $ac_count + 1 && ac_count=$as_val if test $ac_count -gt ${ac_path_SED_max-0}; then # Best one so far, save it but keep looking for a better one ac_cv_path_SED="$ac_path_SED" ac_path_SED_max=$ac_count fi # 10*(2^10) chars as input seems more than enough test $ac_count -gt 10 && break done rm -f conftest.in conftest.tmp conftest.nl conftest.out;; esac $ac_path_SED_found && break 3 done done done IFS=$as_save_IFS if test -z "$ac_cv_path_SED"; then as_fn_error $? "no acceptable sed could be found in \$PATH" "$LINENO" 5 fi else ac_cv_path_SED=$SED fi fi { $as_echo "$as_me:${as_lineno-$LINENO}: result: $ac_cv_path_SED" >&5 $as_echo "$ac_cv_path_SED" >&6; } SED="$ac_cv_path_SED" rm -f conftest.sed test -z "$SED" && SED=sed Xsed="$SED -e 1s/^X//" { $as_echo "$as_me:${as_lineno-$LINENO}: checking for fgrep" >&5 $as_echo_n "checking for fgrep... " >&6; } if ${ac_cv_path_FGREP+:} false; then : $as_echo_n "(cached) " >&6 else if echo 'ab*c' | $GREP -F 'ab*c' >/dev/null 2>&1 then ac_cv_path_FGREP="$GREP -F" else if test -z "$FGREP"; then ac_path_FGREP_found=false # Loop through the user's path and test for each of PROGNAME-LIST as_save_IFS=$IFS; IFS=$PATH_SEPARATOR for as_dir in $PATH$PATH_SEPARATOR/usr/xpg4/bin do IFS=$as_save_IFS test -z "$as_dir" && as_dir=. for ac_prog in fgrep; do for ac_exec_ext in '' $ac_executable_extensions; do ac_path_FGREP="$as_dir/$ac_prog$ac_exec_ext" as_fn_executable_p "$ac_path_FGREP" || continue # Check for GNU ac_path_FGREP and select it if it is found. 
# Check for GNU $ac_path_FGREP case `"$ac_path_FGREP" --version 2>&1` in *GNU*) ac_cv_path_FGREP="$ac_path_FGREP" ac_path_FGREP_found=:;; *) ac_count=0 $as_echo_n 0123456789 >"conftest.in" while : do cat "conftest.in" "conftest.in" >"conftest.tmp" mv "conftest.tmp" "conftest.in" cp "conftest.in" "conftest.nl" $as_echo 'FGREP' >> "conftest.nl" "$ac_path_FGREP" FGREP < "conftest.nl" >"conftest.out" 2>/dev/null || break diff "conftest.out" "conftest.nl" >/dev/null 2>&1 || break as_fn_arith $ac_count + 1 && ac_count=$as_val if test $ac_count -gt ${ac_path_FGREP_max-0}; then # Best one so far, save it but keep looking for a better one ac_cv_path_FGREP="$ac_path_FGREP" ac_path_FGREP_max=$ac_count fi # 10*(2^10) chars as input seems more than enough test $ac_count -gt 10 && break done rm -f conftest.in conftest.tmp conftest.nl conftest.out;; esac $ac_path_FGREP_found && break 3 done done done IFS=$as_save_IFS if test -z "$ac_cv_path_FGREP"; then as_fn_error $? "no acceptable fgrep could be found in $PATH$PATH_SEPARATOR/usr/xpg4/bin" "$LINENO" 5 fi else ac_cv_path_FGREP=$FGREP fi fi fi { $as_echo "$as_me:${as_lineno-$LINENO}: result: $ac_cv_path_FGREP" >&5 $as_echo "$ac_cv_path_FGREP" >&6; } FGREP="$ac_cv_path_FGREP" test -z "$GREP" && GREP=grep # Check whether --with-gnu-ld was given. if test "${with_gnu_ld+set}" = set; then : withval=$with_gnu_ld; test "$withval" = no || with_gnu_ld=yes else with_gnu_ld=no fi ac_prog=ld if test "$GCC" = yes; then # Check if gcc -print-prog-name=ld gives a path. { $as_echo "$as_me:${as_lineno-$LINENO}: checking for ld used by $CC" >&5 $as_echo_n "checking for ld used by $CC... " >&6; } case $host in *-*-mingw*) # gcc leaves a trailing carriage return which upsets mingw ac_prog=`($CC -print-prog-name=ld) 2>&5 | tr -d '\015'` ;; *) ac_prog=`($CC -print-prog-name=ld) 2>&5` ;; esac case $ac_prog in # Accept absolute paths. 
[\\/]* | ?:[\\/]*) re_direlt='/[^/][^/]*/\.\./' # Canonicalize the pathname of ld ac_prog=`$ECHO "$ac_prog"| $SED 's%\\\\%/%g'` while $ECHO "$ac_prog" | $GREP "$re_direlt" > /dev/null 2>&1; do ac_prog=`$ECHO $ac_prog| $SED "s%$re_direlt%/%"` done test -z "$LD" && LD="$ac_prog" ;; "") # If it fails, then pretend we aren't using GCC. ac_prog=ld ;; *) # If it is relative, then search for the first ld in PATH. with_gnu_ld=unknown ;; esac elif test "$with_gnu_ld" = yes; then { $as_echo "$as_me:${as_lineno-$LINENO}: checking for GNU ld" >&5 $as_echo_n "checking for GNU ld... " >&6; } else { $as_echo "$as_me:${as_lineno-$LINENO}: checking for non-GNU ld" >&5 $as_echo_n "checking for non-GNU ld... " >&6; } fi if ${lt_cv_path_LD+:} false; then : $as_echo_n "(cached) " >&6 else if test -z "$LD"; then lt_save_ifs="$IFS"; IFS=$PATH_SEPARATOR for ac_dir in $PATH; do IFS="$lt_save_ifs" test -z "$ac_dir" && ac_dir=. if test -f "$ac_dir/$ac_prog" || test -f "$ac_dir/$ac_prog$ac_exeext"; then lt_cv_path_LD="$ac_dir/$ac_prog" # Check to see if the program is GNU ld. I'd rather use --version, # but apparently some variants of GNU ld only accept -v. # Break only if it was the GNU/non-GNU ld that we prefer. case `"$lt_cv_path_LD" -v 2>&1 </dev/null` in *GNU* | *'with BFD'*) test "$with_gnu_ld" != no && break ;; *) test "$with_gnu_ld" != yes && break ;; esac fi done IFS="$lt_save_ifs" else lt_cv_path_LD="$LD" # Let the user override the test with a path. fi fi LD="$lt_cv_path_LD" if test -n "$LD"; then { $as_echo "$as_me:${as_lineno-$LINENO}: result: $LD" >&5 $as_echo "$LD" >&6; } else { $as_echo "$as_me:${as_lineno-$LINENO}: result: no" >&5 $as_echo "no" >&6; } fi test -z "$LD" && as_fn_error $? "no acceptable ld found in \$PATH" "$LINENO" 5 { $as_echo "$as_me:${as_lineno-$LINENO}: checking if the linker ($LD) is GNU ld" >&5 $as_echo_n "checking if the linker ($LD) is GNU ld... " >&6; } if ${lt_cv_prog_gnu_ld+:} false; then : $as_echo_n "(cached) " >&6 else # I'd rather use --version here, but apparently some GNU lds only accept -v. case `$LD -v 2>&1 </dev/null` in *GNU* | *'with BFD'*) lt_cv_prog_gnu_ld=yes ;; *) lt_cv_prog_gnu_ld=no ;; esac fi { $as_echo "$as_me:${as_lineno-$LINENO}: result: $lt_cv_prog_gnu_ld" >&5 $as_echo "$lt_cv_prog_gnu_ld" >&6; } with_gnu_ld=$lt_cv_prog_gnu_ld { $as_echo "$as_me:${as_lineno-$LINENO}: checking for BSD- or MS-compatible name lister (nm)" >&5 $as_echo_n "checking for BSD- or MS-compatible name lister (nm)... 
" >&6; } if ${lt_cv_path_NM+:} false; then : $as_echo_n "(cached) " >&6 else if test -n "$NM"; then # Let the user override the test. lt_cv_path_NM="$NM" else lt_nm_to_check="${ac_tool_prefix}nm" if test -n "$ac_tool_prefix" && test "$build" = "$host"; then lt_nm_to_check="$lt_nm_to_check nm" fi for lt_tmp_nm in $lt_nm_to_check; do lt_save_ifs="$IFS"; IFS=$PATH_SEPARATOR for ac_dir in $PATH /usr/ccs/bin/elf /usr/ccs/bin /usr/ucb /bin; do IFS="$lt_save_ifs" test -z "$ac_dir" && ac_dir=. tmp_nm="$ac_dir/$lt_tmp_nm" if test -f "$tmp_nm" || test -f "$tmp_nm$ac_exeext" ; then # Check to see if the nm accepts a BSD-compat flag. # Adding the `sed 1q' prevents false positives on HP-UX, which says: # nm: unknown option "B" ignored # Tru64's nm complains that /dev/null is an invalid object file case `"$tmp_nm" -B /dev/null 2>&1 | sed '1q'` in */dev/null* | *'Invalid file or object type'*) lt_cv_path_NM="$tmp_nm -B" break ;; *) case `"$tmp_nm" -p /dev/null 2>&1 | sed '1q'` in */dev/null*) lt_cv_path_NM="$tmp_nm -p" break ;; *) lt_cv_path_NM=${lt_cv_path_NM="$tmp_nm"} # keep the first match, but continue # so that we can try to find one that supports BSD flags ;; esac ;; esac fi done IFS="$lt_save_ifs" done : ${lt_cv_path_NM=no} fi fi { $as_echo "$as_me:${as_lineno-$LINENO}: result: $lt_cv_path_NM" >&5 $as_echo "$lt_cv_path_NM" >&6; } if test "$lt_cv_path_NM" != "no"; then NM="$lt_cv_path_NM" else # Didn't find any BSD compatible name lister, look for dumpbin. if test -n "$DUMPBIN"; then : # Let the user override the test. else if test -n "$ac_tool_prefix"; then for ac_prog in dumpbin "link -dump" do # Extract the first word of "$ac_tool_prefix$ac_prog", so it can be a program name with args. set dummy $ac_tool_prefix$ac_prog; ac_word=$2 { $as_echo "$as_me:${as_lineno-$LINENO}: checking for $ac_word" >&5 $as_echo_n "checking for $ac_word... 
" >&6; } if ${ac_cv_prog_DUMPBIN+:} false; then : $as_echo_n "(cached) " >&6 else if test -n "$DUMPBIN"; then ac_cv_prog_DUMPBIN="$DUMPBIN" # Let the user override the test. else as_save_IFS=$IFS; IFS=$PATH_SEPARATOR for as_dir in $PATH do IFS=$as_save_IFS test -z "$as_dir" && as_dir=. for ac_exec_ext in '' $ac_executable_extensions; do if as_fn_executable_p "$as_dir/$ac_word$ac_exec_ext"; then ac_cv_prog_DUMPBIN="$ac_tool_prefix$ac_prog" $as_echo "$as_me:${as_lineno-$LINENO}: found $as_dir/$ac_word$ac_exec_ext" >&5 break 2 fi done done IFS=$as_save_IFS fi fi DUMPBIN=$ac_cv_prog_DUMPBIN if test -n "$DUMPBIN"; then { $as_echo "$as_me:${as_lineno-$LINENO}: result: $DUMPBIN" >&5 $as_echo "$DUMPBIN" >&6; } else { $as_echo "$as_me:${as_lineno-$LINENO}: result: no" >&5 $as_echo "no" >&6; } fi test -n "$DUMPBIN" && break done fi if test -z "$DUMPBIN"; then ac_ct_DUMPBIN=$DUMPBIN for ac_prog in dumpbin "link -dump" do # Extract the first word of "$ac_prog", so it can be a program name with args. set dummy $ac_prog; ac_word=$2 { $as_echo "$as_me:${as_lineno-$LINENO}: checking for $ac_word" >&5 $as_echo_n "checking for $ac_word... " >&6; } if ${ac_cv_prog_ac_ct_DUMPBIN+:} false; then : $as_echo_n "(cached) " >&6 else if test -n "$ac_ct_DUMPBIN"; then ac_cv_prog_ac_ct_DUMPBIN="$ac_ct_DUMPBIN" # Let the user override the test. else as_save_IFS=$IFS; IFS=$PATH_SEPARATOR for as_dir in $PATH do IFS=$as_save_IFS test -z "$as_dir" && as_dir=. 
for ac_exec_ext in '' $ac_executable_extensions; do if as_fn_executable_p "$as_dir/$ac_word$ac_exec_ext"; then ac_cv_prog_ac_ct_DUMPBIN="$ac_prog" $as_echo "$as_me:${as_lineno-$LINENO}: found $as_dir/$ac_word$ac_exec_ext" >&5 break 2 fi done done IFS=$as_save_IFS fi fi ac_ct_DUMPBIN=$ac_cv_prog_ac_ct_DUMPBIN if test -n "$ac_ct_DUMPBIN"; then { $as_echo "$as_me:${as_lineno-$LINENO}: result: $ac_ct_DUMPBIN" >&5 $as_echo "$ac_ct_DUMPBIN" >&6; } else { $as_echo "$as_me:${as_lineno-$LINENO}: result: no" >&5 $as_echo "no" >&6; } fi test -n "$ac_ct_DUMPBIN" && break done if test "x$ac_ct_DUMPBIN" = x; then DUMPBIN=":" else case $cross_compiling:$ac_tool_warned in yes:) { $as_echo "$as_me:${as_lineno-$LINENO}: WARNING: using cross tools not prefixed with host triplet" >&5 $as_echo "$as_me: WARNING: using cross tools not prefixed with host triplet" >&2;} ac_tool_warned=yes ;; esac DUMPBIN=$ac_ct_DUMPBIN fi fi case `$DUMPBIN -symbols /dev/null 2>&1 | sed '1q'` in *COFF*) DUMPBIN="$DUMPBIN -symbols" ;; *) DUMPBIN=: ;; esac fi if test "$DUMPBIN" != ":"; then NM="$DUMPBIN" fi fi test -z "$NM" && NM=nm { $as_echo "$as_me:${as_lineno-$LINENO}: checking the name lister ($NM) interface" >&5 $as_echo_n "checking the name lister ($NM) interface... 
" >&6; } if ${lt_cv_nm_interface+:} false; then : $as_echo_n "(cached) " >&6 else lt_cv_nm_interface="BSD nm" echo "int some_variable = 0;" > conftest.$ac_ext (eval echo "\"\$as_me:$LINENO: $ac_compile\"" >&5) (eval "$ac_compile" 2>conftest.err) cat conftest.err >&5 (eval echo "\"\$as_me:$LINENO: $NM \\\"conftest.$ac_objext\\\"\"" >&5) (eval "$NM \"conftest.$ac_objext\"" 2>conftest.err > conftest.out) cat conftest.err >&5 (eval echo "\"\$as_me:$LINENO: output\"" >&5) cat conftest.out >&5 if $GREP 'External.*some_variable' conftest.out > /dev/null; then lt_cv_nm_interface="MS dumpbin" fi rm -f conftest* fi { $as_echo "$as_me:${as_lineno-$LINENO}: result: $lt_cv_nm_interface" >&5 $as_echo "$lt_cv_nm_interface" >&6; } { $as_echo "$as_me:${as_lineno-$LINENO}: checking whether ln -s works" >&5 $as_echo_n "checking whether ln -s works... " >&6; } LN_S=$as_ln_s if test "$LN_S" = "ln -s"; then { $as_echo "$as_me:${as_lineno-$LINENO}: result: yes" >&5 $as_echo "yes" >&6; } else { $as_echo "$as_me:${as_lineno-$LINENO}: result: no, using $LN_S" >&5 $as_echo "no, using $LN_S" >&6; } fi # find the maximum length of command line arguments { $as_echo "$as_me:${as_lineno-$LINENO}: checking the maximum length of command line arguments" >&5 $as_echo_n "checking the maximum length of command line arguments... " >&6; } if ${lt_cv_sys_max_cmd_len+:} false; then : $as_echo_n "(cached) " >&6 else i=0 teststring="ABCD" case $build_os in msdosdjgpp*) # On DJGPP, this test can blow up pretty badly due to problems in libc # (any single argument exceeding 2000 bytes causes a buffer overrun # during glob expansion). Even if it were fixed, the result of this # check would be larger than it should be. lt_cv_sys_max_cmd_len=12288; # 12K is about right ;; gnu*) # Under GNU Hurd, this test is not required because there is # no limit to the length of command line arguments. 
# Libtool will interpret -1 as no limit whatsoever lt_cv_sys_max_cmd_len=-1; ;; cygwin* | mingw* | cegcc*) # On Win9x/ME, this test blows up -- it succeeds, but takes # about 5 minutes as the teststring grows exponentially. # Worse, since 9x/ME are not pre-emptively multitasking, # you end up with a "frozen" computer, even though with patience # the test eventually succeeds (with a max line length of 256k). # Instead, let's just punt: use the minimum linelength reported by # all of the supported platforms: 8192 (on NT/2K/XP). lt_cv_sys_max_cmd_len=8192; ;; mint*) # On MiNT this can take a long time and run out of memory. lt_cv_sys_max_cmd_len=8192; ;; amigaos*) # On AmigaOS with pdksh, this test takes hours, literally. # So we just punt and use a minimum line length of 8192. lt_cv_sys_max_cmd_len=8192; ;; netbsd* | freebsd* | openbsd* | darwin* | dragonfly*) # This has been around since 386BSD, at least. Likely further. if test -x /sbin/sysctl; then lt_cv_sys_max_cmd_len=`/sbin/sysctl -n kern.argmax` elif test -x /usr/sbin/sysctl; then lt_cv_sys_max_cmd_len=`/usr/sbin/sysctl -n kern.argmax` else lt_cv_sys_max_cmd_len=65536 # usable default for all BSDs fi # And add a safety zone lt_cv_sys_max_cmd_len=`expr $lt_cv_sys_max_cmd_len \/ 4` lt_cv_sys_max_cmd_len=`expr $lt_cv_sys_max_cmd_len \* 3` ;; interix*) # We know the value 262144 and hardcode it with a safety zone (like BSD) lt_cv_sys_max_cmd_len=196608 ;; os2*) # The test takes a long time on OS/2. lt_cv_sys_max_cmd_len=8192 ;; osf*) # Dr. Hans Ekkehard Plesser reports seeing a kernel panic running configure # due to this test when exec_disable_arg_limit is 1 on Tru64. It is not # nice to cause kernel panics so lets avoid the loop below. # First set a reasonable default. 
lt_cv_sys_max_cmd_len=16384 # if test -x /sbin/sysconfig; then case `/sbin/sysconfig -q proc exec_disable_arg_limit` in *1*) lt_cv_sys_max_cmd_len=-1 ;; esac fi ;; sco3.2v5*) lt_cv_sys_max_cmd_len=102400 ;; sysv5* | sco5v6* | sysv4.2uw2*) kargmax=`grep ARG_MAX /etc/conf/cf.d/stune 2>/dev/null` if test -n "$kargmax"; then lt_cv_sys_max_cmd_len=`echo $kargmax | sed 's/.*[ ]//'` else lt_cv_sys_max_cmd_len=32768 fi ;; *) lt_cv_sys_max_cmd_len=`(getconf ARG_MAX) 2> /dev/null` if test -n "$lt_cv_sys_max_cmd_len" && \ test undefined != "$lt_cv_sys_max_cmd_len"; then lt_cv_sys_max_cmd_len=`expr $lt_cv_sys_max_cmd_len \/ 4` lt_cv_sys_max_cmd_len=`expr $lt_cv_sys_max_cmd_len \* 3` else # Make teststring a little bigger before we do anything with it. # a 1K string should be a reasonable start. for i in 1 2 3 4 5 6 7 8 ; do teststring=$teststring$teststring done SHELL=${SHELL-${CONFIG_SHELL-/bin/sh}} # If test is not a shell built-in, we'll probably end up computing a # maximum length that is only half of the actual maximum length, but # we can't tell. while { test "X"`env echo "$teststring$teststring" 2>/dev/null` \ = "X$teststring$teststring"; } >/dev/null 2>&1 && test $i != 17 # 1/2 MB should be enough do i=`expr $i + 1` teststring=$teststring$teststring done # Only check the string length outside the loop. lt_cv_sys_max_cmd_len=`expr "X$teststring" : ".*" 2>&1` teststring= # Add a significant safety factor because C++ compilers can tack on # massive amounts of additional arguments before passing them to the # linker. It appears as though 1/2 is a usable value. 
lt_cv_sys_max_cmd_len=`expr $lt_cv_sys_max_cmd_len \/ 2` fi ;; esac fi if test -n $lt_cv_sys_max_cmd_len ; then { $as_echo "$as_me:${as_lineno-$LINENO}: result: $lt_cv_sys_max_cmd_len" >&5 $as_echo "$lt_cv_sys_max_cmd_len" >&6; } else { $as_echo "$as_me:${as_lineno-$LINENO}: result: none" >&5 $as_echo "none" >&6; } fi max_cmd_len=$lt_cv_sys_max_cmd_len : ${CP="cp -f"} : ${MV="mv -f"} : ${RM="rm -f"} { $as_echo "$as_me:${as_lineno-$LINENO}: checking whether the shell understands some XSI constructs" >&5 $as_echo_n "checking whether the shell understands some XSI constructs... " >&6; } # Try some XSI features xsi_shell=no ( _lt_dummy="a/b/c" test "${_lt_dummy##*/},${_lt_dummy%/*},${_lt_dummy#??}"${_lt_dummy%"$_lt_dummy"}, \ = c,a/b,b/c, \ && eval 'test $(( 1 + 1 )) -eq 2 \ && test "${#_lt_dummy}" -eq 5' ) >/dev/null 2>&1 \ && xsi_shell=yes { $as_echo "$as_me:${as_lineno-$LINENO}: result: $xsi_shell" >&5 $as_echo "$xsi_shell" >&6; } { $as_echo "$as_me:${as_lineno-$LINENO}: checking whether the shell understands \"+=\"" >&5 $as_echo_n "checking whether the shell understands \"+=\"... " >&6; } lt_shell_append=no ( foo=bar; set foo baz; eval "$1+=\$2" && test "$foo" = barbaz ) \ >/dev/null 2>&1 \ && lt_shell_append=yes { $as_echo "$as_me:${as_lineno-$LINENO}: result: $lt_shell_append" >&5 $as_echo "$lt_shell_append" >&6; } if ( (MAIL=60; unset MAIL) || exit) >/dev/null 2>&1; then lt_unset=unset else lt_unset=false fi # test EBCDIC or ASCII case `echo X|tr X '\101'` in A) # ASCII based system # \n is not interpreted correctly by Solaris 8 /usr/ucb/tr lt_SP2NL='tr \040 \012' lt_NL2SP='tr \015\012 \040\040' ;; *) # EBCDIC based system lt_SP2NL='tr \100 \n' lt_NL2SP='tr \r\n \100\100' ;; esac { $as_echo "$as_me:${as_lineno-$LINENO}: checking how to convert $build file names to $host format" >&5 $as_echo_n "checking how to convert $build file names to $host format... 
" >&6; } if ${lt_cv_to_host_file_cmd+:} false; then : $as_echo_n "(cached) " >&6 else case $host in *-*-mingw* ) case $build in *-*-mingw* ) # actually msys lt_cv_to_host_file_cmd=func_convert_file_msys_to_w32 ;; *-*-cygwin* ) lt_cv_to_host_file_cmd=func_convert_file_cygwin_to_w32 ;; * ) # otherwise, assume *nix lt_cv_to_host_file_cmd=func_convert_file_nix_to_w32 ;; esac ;; *-*-cygwin* ) case $build in *-*-mingw* ) # actually msys lt_cv_to_host_file_cmd=func_convert_file_msys_to_cygwin ;; *-*-cygwin* ) lt_cv_to_host_file_cmd=func_convert_file_noop ;; * ) # otherwise, assume *nix lt_cv_to_host_file_cmd=func_convert_file_nix_to_cygwin ;; esac ;; * ) # unhandled hosts (and "normal" native builds) lt_cv_to_host_file_cmd=func_convert_file_noop ;; esac fi to_host_file_cmd=$lt_cv_to_host_file_cmd { $as_echo "$as_me:${as_lineno-$LINENO}: result: $lt_cv_to_host_file_cmd" >&5 $as_echo "$lt_cv_to_host_file_cmd" >&6; } { $as_echo "$as_me:${as_lineno-$LINENO}: checking how to convert $build file names to toolchain format" >&5 $as_echo_n "checking how to convert $build file names to toolchain format... " >&6; } if ${lt_cv_to_tool_file_cmd+:} false; then : $as_echo_n "(cached) " >&6 else #assume ordinary cross tools, or native build. lt_cv_to_tool_file_cmd=func_convert_file_noop case $host in *-*-mingw* ) case $build in *-*-mingw* ) # actually msys lt_cv_to_tool_file_cmd=func_convert_file_msys_to_w32 ;; esac ;; esac fi to_tool_file_cmd=$lt_cv_to_tool_file_cmd { $as_echo "$as_me:${as_lineno-$LINENO}: result: $lt_cv_to_tool_file_cmd" >&5 $as_echo "$lt_cv_to_tool_file_cmd" >&6; } { $as_echo "$as_me:${as_lineno-$LINENO}: checking for $LD option to reload object files" >&5 $as_echo_n "checking for $LD option to reload object files... 
" >&6; } if ${lt_cv_ld_reload_flag+:} false; then : $as_echo_n "(cached) " >&6 else lt_cv_ld_reload_flag='-r' fi { $as_echo "$as_me:${as_lineno-$LINENO}: result: $lt_cv_ld_reload_flag" >&5 $as_echo "$lt_cv_ld_reload_flag" >&6; } reload_flag=$lt_cv_ld_reload_flag case $reload_flag in "" | " "*) ;; *) reload_flag=" $reload_flag" ;; esac reload_cmds='$LD$reload_flag -o $output$reload_objs' case $host_os in cygwin* | mingw* | pw32* | cegcc*) if test "$GCC" != yes; then reload_cmds=false fi ;; darwin*) if test "$GCC" = yes; then reload_cmds='$LTCC $LTCFLAGS -nostdlib ${wl}-r -o $output$reload_objs' else reload_cmds='$LD$reload_flag -o $output$reload_objs' fi ;; esac if test -n "$ac_tool_prefix"; then # Extract the first word of "${ac_tool_prefix}objdump", so it can be a program name with args. set dummy ${ac_tool_prefix}objdump; ac_word=$2 { $as_echo "$as_me:${as_lineno-$LINENO}: checking for $ac_word" >&5 $as_echo_n "checking for $ac_word... " >&6; } if ${ac_cv_prog_OBJDUMP+:} false; then : $as_echo_n "(cached) " >&6 else if test -n "$OBJDUMP"; then ac_cv_prog_OBJDUMP="$OBJDUMP" # Let the user override the test. else as_save_IFS=$IFS; IFS=$PATH_SEPARATOR for as_dir in $PATH do IFS=$as_save_IFS test -z "$as_dir" && as_dir=. for ac_exec_ext in '' $ac_executable_extensions; do if as_fn_executable_p "$as_dir/$ac_word$ac_exec_ext"; then ac_cv_prog_OBJDUMP="${ac_tool_prefix}objdump" $as_echo "$as_me:${as_lineno-$LINENO}: found $as_dir/$ac_word$ac_exec_ext" >&5 break 2 fi done done IFS=$as_save_IFS fi fi OBJDUMP=$ac_cv_prog_OBJDUMP if test -n "$OBJDUMP"; then { $as_echo "$as_me:${as_lineno-$LINENO}: result: $OBJDUMP" >&5 $as_echo "$OBJDUMP" >&6; } else { $as_echo "$as_me:${as_lineno-$LINENO}: result: no" >&5 $as_echo "no" >&6; } fi fi if test -z "$ac_cv_prog_OBJDUMP"; then ac_ct_OBJDUMP=$OBJDUMP # Extract the first word of "objdump", so it can be a program name with args. 
set dummy objdump; ac_word=$2 { $as_echo "$as_me:${as_lineno-$LINENO}: checking for $ac_word" >&5 $as_echo_n "checking for $ac_word... " >&6; } if ${ac_cv_prog_ac_ct_OBJDUMP+:} false; then : $as_echo_n "(cached) " >&6 else if test -n "$ac_ct_OBJDUMP"; then ac_cv_prog_ac_ct_OBJDUMP="$ac_ct_OBJDUMP" # Let the user override the test. else as_save_IFS=$IFS; IFS=$PATH_SEPARATOR for as_dir in $PATH do IFS=$as_save_IFS test -z "$as_dir" && as_dir=. for ac_exec_ext in '' $ac_executable_extensions; do if as_fn_executable_p "$as_dir/$ac_word$ac_exec_ext"; then ac_cv_prog_ac_ct_OBJDUMP="objdump" $as_echo "$as_me:${as_lineno-$LINENO}: found $as_dir/$ac_word$ac_exec_ext" >&5 break 2 fi done done IFS=$as_save_IFS fi fi ac_ct_OBJDUMP=$ac_cv_prog_ac_ct_OBJDUMP if test -n "$ac_ct_OBJDUMP"; then { $as_echo "$as_me:${as_lineno-$LINENO}: result: $ac_ct_OBJDUMP" >&5 $as_echo "$ac_ct_OBJDUMP" >&6; } else { $as_echo "$as_me:${as_lineno-$LINENO}: result: no" >&5 $as_echo "no" >&6; } fi if test "x$ac_ct_OBJDUMP" = x; then OBJDUMP="false" else case $cross_compiling:$ac_tool_warned in yes:) { $as_echo "$as_me:${as_lineno-$LINENO}: WARNING: using cross tools not prefixed with host triplet" >&5 $as_echo "$as_me: WARNING: using cross tools not prefixed with host triplet" >&2;} ac_tool_warned=yes ;; esac OBJDUMP=$ac_ct_OBJDUMP fi else OBJDUMP="$ac_cv_prog_OBJDUMP" fi test -z "$OBJDUMP" && OBJDUMP=objdump { $as_echo "$as_me:${as_lineno-$LINENO}: checking how to recognize dependent libraries" >&5 $as_echo_n "checking how to recognize dependent libraries... " >&6; } if ${lt_cv_deplibs_check_method+:} false; then : $as_echo_n "(cached) " >&6 else lt_cv_file_magic_cmd='$MAGIC_CMD' lt_cv_file_magic_test_file= lt_cv_deplibs_check_method='unknown' # Need to set the preceding variable on all platforms that support # interlibrary dependencies. # 'none' -- dependencies not supported. # `unknown' -- same as none, but documents that we really don't know. 
# 'pass_all' -- all dependencies passed with no checks. # 'test_compile' -- check by making test program. # 'file_magic [[regex]]' -- check by looking for files in library path # which responds to the $file_magic_cmd with a given extended regex. # If you have `file' or equivalent on your system and you're not sure # whether `pass_all' will *always* work, you probably want this one. case $host_os in aix[4-9]*) lt_cv_deplibs_check_method=pass_all ;; beos*) lt_cv_deplibs_check_method=pass_all ;; bsdi[45]*) lt_cv_deplibs_check_method='file_magic ELF [0-9][0-9]*-bit [ML]SB (shared object|dynamic lib)' lt_cv_file_magic_cmd='/usr/bin/file -L' lt_cv_file_magic_test_file=/shlib/libc.so ;; cygwin*) # func_win32_libid is a shell function defined in ltmain.sh lt_cv_deplibs_check_method='file_magic ^x86 archive import|^x86 DLL' lt_cv_file_magic_cmd='func_win32_libid' ;; mingw* | pw32*) # Base MSYS/MinGW do not provide the 'file' command needed by # func_win32_libid shell function, so use a weaker test based on 'objdump', # unless we find 'file', for example because we are cross-compiling. # func_win32_libid assumes BSD nm, so disallow it if using MS dumpbin. if ( test "$lt_cv_nm_interface" = "BSD nm" && file / ) >/dev/null 2>&1; then lt_cv_deplibs_check_method='file_magic ^x86 archive import|^x86 DLL' lt_cv_file_magic_cmd='func_win32_libid' else # Keep this pattern in sync with the one in func_win32_libid. lt_cv_deplibs_check_method='file_magic file format (pei*-i386(.*architecture: i386)?|pe-arm-wince|pe-x86-64)' lt_cv_file_magic_cmd='$OBJDUMP -f' fi ;; cegcc*) # use the weaker test based on 'objdump'. See mingw*. lt_cv_deplibs_check_method='file_magic file format pe-arm-.*little(.*architecture: arm)?' lt_cv_file_magic_cmd='$OBJDUMP -f' ;; darwin* | rhapsody*) lt_cv_deplibs_check_method=pass_all ;; freebsd* | dragonfly*) if echo __ELF__ | $CC -E - | $GREP __ELF__ > /dev/null; then case $host_cpu in i*86 ) # Not sure whether the presence of OpenBSD here was a mistake. 
# Let's accept both of them until this is cleared up. lt_cv_deplibs_check_method='file_magic (FreeBSD|OpenBSD|DragonFly)/i[3-9]86 (compact )?demand paged shared library' lt_cv_file_magic_cmd=/usr/bin/file lt_cv_file_magic_test_file=`echo /usr/lib/libc.so.*` ;; esac else lt_cv_deplibs_check_method=pass_all fi ;; haiku*) lt_cv_deplibs_check_method=pass_all ;; hpux10.20* | hpux11*) lt_cv_file_magic_cmd=/usr/bin/file case $host_cpu in ia64*) lt_cv_deplibs_check_method='file_magic (s[0-9][0-9][0-9]|ELF-[0-9][0-9]) shared object file - IA64' lt_cv_file_magic_test_file=/usr/lib/hpux32/libc.so ;; hppa*64*) lt_cv_deplibs_check_method='file_magic (s[0-9][0-9][0-9]|ELF[ -][0-9][0-9])(-bit)?( [LM]SB)? shared object( file)?[, -]* PA-RISC [0-9]\.[0-9]' lt_cv_file_magic_test_file=/usr/lib/pa20_64/libc.sl ;; *) lt_cv_deplibs_check_method='file_magic (s[0-9][0-9][0-9]|PA-RISC[0-9]\.[0-9]) shared library' lt_cv_file_magic_test_file=/usr/lib/libc.sl ;; esac ;; interix[3-9]*) # PIC code is broken on Interix 3.x, that's why |\.a not |_pic\.a here lt_cv_deplibs_check_method='match_pattern /lib[^/]+(\.so|\.a)$' ;; irix5* | irix6* | nonstopux*) case $LD in *-32|*"-32 ") libmagic=32-bit;; *-n32|*"-n32 ") libmagic=N32;; *-64|*"-64 ") libmagic=64-bit;; *) libmagic=never-match;; esac lt_cv_deplibs_check_method=pass_all ;; # This must be glibc/ELF. 
linux* | k*bsd*-gnu | kopensolaris*-gnu | gnu*) lt_cv_deplibs_check_method=pass_all ;; netbsd* | netbsdelf*-gnu) if echo __ELF__ | $CC -E - | $GREP __ELF__ > /dev/null; then lt_cv_deplibs_check_method='match_pattern /lib[^/]+(\.so\.[0-9]+\.[0-9]+|_pic\.a)$' else lt_cv_deplibs_check_method='match_pattern /lib[^/]+(\.so|_pic\.a)$' fi ;; newos6*) lt_cv_deplibs_check_method='file_magic ELF [0-9][0-9]*-bit [ML]SB (executable|dynamic lib)' lt_cv_file_magic_cmd=/usr/bin/file lt_cv_file_magic_test_file=/usr/lib/libnls.so ;; *nto* | *qnx*) lt_cv_deplibs_check_method=pass_all ;; openbsd*) if test -z "`echo __ELF__ | $CC -E - | $GREP __ELF__`" || test "$host_os-$host_cpu" = "openbsd2.8-powerpc"; then lt_cv_deplibs_check_method='match_pattern /lib[^/]+(\.so\.[0-9]+\.[0-9]+|\.so|_pic\.a)$' else lt_cv_deplibs_check_method='match_pattern /lib[^/]+(\.so\.[0-9]+\.[0-9]+|_pic\.a)$' fi ;; osf3* | osf4* | osf5*) lt_cv_deplibs_check_method=pass_all ;; rdos*) lt_cv_deplibs_check_method=pass_all ;; solaris*) lt_cv_deplibs_check_method=pass_all ;; sysv5* | sco3.2v5* | sco5v6* | unixware* | OpenUNIX* | sysv4*uw2*) lt_cv_deplibs_check_method=pass_all ;; sysv4 | sysv4.3*) case $host_vendor in motorola) lt_cv_deplibs_check_method='file_magic ELF [0-9][0-9]*-bit [ML]SB (shared object|dynamic lib) M[0-9][0-9]* Version [0-9]' lt_cv_file_magic_test_file=`echo /usr/lib/libc.so*` ;; ncr) lt_cv_deplibs_check_method=pass_all ;; sequent) lt_cv_file_magic_cmd='/bin/file' lt_cv_deplibs_check_method='file_magic ELF [0-9][0-9]*-bit [LM]SB (shared object|dynamic lib )' ;; sni) lt_cv_file_magic_cmd='/bin/file' lt_cv_deplibs_check_method="file_magic ELF [0-9][0-9]*-bit [LM]SB dynamic lib" lt_cv_file_magic_test_file=/lib/libc.so ;; siemens) lt_cv_deplibs_check_method=pass_all ;; pc) lt_cv_deplibs_check_method=pass_all ;; esac ;; tpf*) lt_cv_deplibs_check_method=pass_all ;; esac fi { $as_echo "$as_me:${as_lineno-$LINENO}: result: $lt_cv_deplibs_check_method" >&5 $as_echo "$lt_cv_deplibs_check_method" >&6; } 
file_magic_glob= want_nocaseglob=no if test "$build" = "$host"; then case $host_os in mingw* | pw32*) if ( shopt | grep nocaseglob ) >/dev/null 2>&1; then want_nocaseglob=yes else file_magic_glob=`echo aAbBcCdDeEfFgGhHiIjJkKlLmMnNoOpPqQrRsStTuUvVwWxXyYzZ | $SED -e "s/\(..\)/s\/[\1]\/[\1]\/g;/g"` fi ;; esac fi file_magic_cmd=$lt_cv_file_magic_cmd deplibs_check_method=$lt_cv_deplibs_check_method test -z "$deplibs_check_method" && deplibs_check_method=unknown if test -n "$ac_tool_prefix"; then # Extract the first word of "${ac_tool_prefix}dlltool", so it can be a program name with args. set dummy ${ac_tool_prefix}dlltool; ac_word=$2 { $as_echo "$as_me:${as_lineno-$LINENO}: checking for $ac_word" >&5 $as_echo_n "checking for $ac_word... " >&6; } if ${ac_cv_prog_DLLTOOL+:} false; then : $as_echo_n "(cached) " >&6 else if test -n "$DLLTOOL"; then ac_cv_prog_DLLTOOL="$DLLTOOL" # Let the user override the test. else as_save_IFS=$IFS; IFS=$PATH_SEPARATOR for as_dir in $PATH do IFS=$as_save_IFS test -z "$as_dir" && as_dir=. for ac_exec_ext in '' $ac_executable_extensions; do if as_fn_executable_p "$as_dir/$ac_word$ac_exec_ext"; then ac_cv_prog_DLLTOOL="${ac_tool_prefix}dlltool" $as_echo "$as_me:${as_lineno-$LINENO}: found $as_dir/$ac_word$ac_exec_ext" >&5 break 2 fi done done IFS=$as_save_IFS fi fi DLLTOOL=$ac_cv_prog_DLLTOOL if test -n "$DLLTOOL"; then { $as_echo "$as_me:${as_lineno-$LINENO}: result: $DLLTOOL" >&5 $as_echo "$DLLTOOL" >&6; } else { $as_echo "$as_me:${as_lineno-$LINENO}: result: no" >&5 $as_echo "no" >&6; } fi fi if test -z "$ac_cv_prog_DLLTOOL"; then ac_ct_DLLTOOL=$DLLTOOL # Extract the first word of "dlltool", so it can be a program name with args. set dummy dlltool; ac_word=$2 { $as_echo "$as_me:${as_lineno-$LINENO}: checking for $ac_word" >&5 $as_echo_n "checking for $ac_word... 
" >&6; } if ${ac_cv_prog_ac_ct_DLLTOOL+:} false; then : $as_echo_n "(cached) " >&6 else if test -n "$ac_ct_DLLTOOL"; then ac_cv_prog_ac_ct_DLLTOOL="$ac_ct_DLLTOOL" # Let the user override the test. else as_save_IFS=$IFS; IFS=$PATH_SEPARATOR for as_dir in $PATH do IFS=$as_save_IFS test -z "$as_dir" && as_dir=. for ac_exec_ext in '' $ac_executable_extensions; do if as_fn_executable_p "$as_dir/$ac_word$ac_exec_ext"; then ac_cv_prog_ac_ct_DLLTOOL="dlltool" $as_echo "$as_me:${as_lineno-$LINENO}: found $as_dir/$ac_word$ac_exec_ext" >&5 break 2 fi done done IFS=$as_save_IFS fi fi ac_ct_DLLTOOL=$ac_cv_prog_ac_ct_DLLTOOL if test -n "$ac_ct_DLLTOOL"; then { $as_echo "$as_me:${as_lineno-$LINENO}: result: $ac_ct_DLLTOOL" >&5 $as_echo "$ac_ct_DLLTOOL" >&6; } else { $as_echo "$as_me:${as_lineno-$LINENO}: result: no" >&5 $as_echo "no" >&6; } fi if test "x$ac_ct_DLLTOOL" = x; then DLLTOOL="false" else case $cross_compiling:$ac_tool_warned in yes:) { $as_echo "$as_me:${as_lineno-$LINENO}: WARNING: using cross tools not prefixed with host triplet" >&5 $as_echo "$as_me: WARNING: using cross tools not prefixed with host triplet" >&2;} ac_tool_warned=yes ;; esac DLLTOOL=$ac_ct_DLLTOOL fi else DLLTOOL="$ac_cv_prog_DLLTOOL" fi test -z "$DLLTOOL" && DLLTOOL=dlltool { $as_echo "$as_me:${as_lineno-$LINENO}: checking how to associate runtime and link libraries" >&5 $as_echo_n "checking how to associate runtime and link libraries... 
" >&6; } if ${lt_cv_sharedlib_from_linklib_cmd+:} false; then : $as_echo_n "(cached) " >&6 else lt_cv_sharedlib_from_linklib_cmd='unknown' case $host_os in cygwin* | mingw* | pw32* | cegcc*) # two different shell functions defined in ltmain.sh # decide which to use based on capabilities of $DLLTOOL case `$DLLTOOL --help 2>&1` in *--identify-strict*) lt_cv_sharedlib_from_linklib_cmd=func_cygming_dll_for_implib ;; *) lt_cv_sharedlib_from_linklib_cmd=func_cygming_dll_for_implib_fallback ;; esac ;; *) # fallback: assume linklib IS sharedlib lt_cv_sharedlib_from_linklib_cmd="$ECHO" ;; esac fi { $as_echo "$as_me:${as_lineno-$LINENO}: result: $lt_cv_sharedlib_from_linklib_cmd" >&5 $as_echo "$lt_cv_sharedlib_from_linklib_cmd" >&6; } sharedlib_from_linklib_cmd=$lt_cv_sharedlib_from_linklib_cmd test -z "$sharedlib_from_linklib_cmd" && sharedlib_from_linklib_cmd=$ECHO if test -n "$ac_tool_prefix"; then for ac_prog in ar do # Extract the first word of "$ac_tool_prefix$ac_prog", so it can be a program name with args. set dummy $ac_tool_prefix$ac_prog; ac_word=$2 { $as_echo "$as_me:${as_lineno-$LINENO}: checking for $ac_word" >&5 $as_echo_n "checking for $ac_word... " >&6; } if ${ac_cv_prog_AR+:} false; then : $as_echo_n "(cached) " >&6 else if test -n "$AR"; then ac_cv_prog_AR="$AR" # Let the user override the test. else as_save_IFS=$IFS; IFS=$PATH_SEPARATOR for as_dir in $PATH do IFS=$as_save_IFS test -z "$as_dir" && as_dir=. 
for ac_exec_ext in '' $ac_executable_extensions; do if as_fn_executable_p "$as_dir/$ac_word$ac_exec_ext"; then ac_cv_prog_AR="$ac_tool_prefix$ac_prog" $as_echo "$as_me:${as_lineno-$LINENO}: found $as_dir/$ac_word$ac_exec_ext" >&5 break 2 fi done done IFS=$as_save_IFS fi fi AR=$ac_cv_prog_AR if test -n "$AR"; then { $as_echo "$as_me:${as_lineno-$LINENO}: result: $AR" >&5 $as_echo "$AR" >&6; } else { $as_echo "$as_me:${as_lineno-$LINENO}: result: no" >&5 $as_echo "no" >&6; } fi test -n "$AR" && break done fi if test -z "$AR"; then ac_ct_AR=$AR for ac_prog in ar do # Extract the first word of "$ac_prog", so it can be a program name with args. set dummy $ac_prog; ac_word=$2 { $as_echo "$as_me:${as_lineno-$LINENO}: checking for $ac_word" >&5 $as_echo_n "checking for $ac_word... " >&6; } if ${ac_cv_prog_ac_ct_AR+:} false; then : $as_echo_n "(cached) " >&6 else if test -n "$ac_ct_AR"; then ac_cv_prog_ac_ct_AR="$ac_ct_AR" # Let the user override the test. else as_save_IFS=$IFS; IFS=$PATH_SEPARATOR for as_dir in $PATH do IFS=$as_save_IFS test -z "$as_dir" && as_dir=. 
for ac_exec_ext in '' $ac_executable_extensions; do if as_fn_executable_p "$as_dir/$ac_word$ac_exec_ext"; then ac_cv_prog_ac_ct_AR="$ac_prog" $as_echo "$as_me:${as_lineno-$LINENO}: found $as_dir/$ac_word$ac_exec_ext" >&5 break 2 fi done done IFS=$as_save_IFS fi fi ac_ct_AR=$ac_cv_prog_ac_ct_AR if test -n "$ac_ct_AR"; then { $as_echo "$as_me:${as_lineno-$LINENO}: result: $ac_ct_AR" >&5 $as_echo "$ac_ct_AR" >&6; } else { $as_echo "$as_me:${as_lineno-$LINENO}: result: no" >&5 $as_echo "no" >&6; } fi test -n "$ac_ct_AR" && break done if test "x$ac_ct_AR" = x; then AR="false" else case $cross_compiling:$ac_tool_warned in yes:) { $as_echo "$as_me:${as_lineno-$LINENO}: WARNING: using cross tools not prefixed with host triplet" >&5 $as_echo "$as_me: WARNING: using cross tools not prefixed with host triplet" >&2;} ac_tool_warned=yes ;; esac AR=$ac_ct_AR fi fi : ${AR=ar} : ${AR_FLAGS=cru} { $as_echo "$as_me:${as_lineno-$LINENO}: checking for archiver @FILE support" >&5 $as_echo_n "checking for archiver @FILE support... " >&6; } if ${lt_cv_ar_at_file+:} false; then : $as_echo_n "(cached) " >&6 else lt_cv_ar_at_file=no cat confdefs.h - <<_ACEOF >conftest.$ac_ext /* end confdefs.h. */ int main () { ; return 0; } _ACEOF if ac_fn_c_try_compile "$LINENO"; then : echo conftest.$ac_objext > conftest.lst lt_ar_try='$AR $AR_FLAGS libconftest.a @conftest.lst >&5' { { eval echo "\"\$as_me\":${as_lineno-$LINENO}: \"$lt_ar_try\""; } >&5 (eval $lt_ar_try) 2>&5 ac_status=$? $as_echo "$as_me:${as_lineno-$LINENO}: \$? = $ac_status" >&5 test $ac_status = 0; } if test "$ac_status" -eq 0; then # Ensure the archiver fails upon bogus file names. rm -f conftest.$ac_objext libconftest.a { { eval echo "\"\$as_me\":${as_lineno-$LINENO}: \"$lt_ar_try\""; } >&5 (eval $lt_ar_try) 2>&5 ac_status=$? $as_echo "$as_me:${as_lineno-$LINENO}: \$? 
= $ac_status" >&5 test $ac_status = 0; } if test "$ac_status" -ne 0; then lt_cv_ar_at_file=@ fi fi rm -f conftest.* libconftest.a fi rm -f core conftest.err conftest.$ac_objext conftest.$ac_ext fi { $as_echo "$as_me:${as_lineno-$LINENO}: result: $lt_cv_ar_at_file" >&5 $as_echo "$lt_cv_ar_at_file" >&6; } if test "x$lt_cv_ar_at_file" = xno; then archiver_list_spec= else archiver_list_spec=$lt_cv_ar_at_file fi if test -n "$ac_tool_prefix"; then # Extract the first word of "${ac_tool_prefix}strip", so it can be a program name with args. set dummy ${ac_tool_prefix}strip; ac_word=$2 { $as_echo "$as_me:${as_lineno-$LINENO}: checking for $ac_word" >&5 $as_echo_n "checking for $ac_word... " >&6; } if ${ac_cv_prog_STRIP+:} false; then : $as_echo_n "(cached) " >&6 else if test -n "$STRIP"; then ac_cv_prog_STRIP="$STRIP" # Let the user override the test. else as_save_IFS=$IFS; IFS=$PATH_SEPARATOR for as_dir in $PATH do IFS=$as_save_IFS test -z "$as_dir" && as_dir=. for ac_exec_ext in '' $ac_executable_extensions; do if as_fn_executable_p "$as_dir/$ac_word$ac_exec_ext"; then ac_cv_prog_STRIP="${ac_tool_prefix}strip" $as_echo "$as_me:${as_lineno-$LINENO}: found $as_dir/$ac_word$ac_exec_ext" >&5 break 2 fi done done IFS=$as_save_IFS fi fi STRIP=$ac_cv_prog_STRIP if test -n "$STRIP"; then { $as_echo "$as_me:${as_lineno-$LINENO}: result: $STRIP" >&5 $as_echo "$STRIP" >&6; } else { $as_echo "$as_me:${as_lineno-$LINENO}: result: no" >&5 $as_echo "no" >&6; } fi fi if test -z "$ac_cv_prog_STRIP"; then ac_ct_STRIP=$STRIP # Extract the first word of "strip", so it can be a program name with args. set dummy strip; ac_word=$2 { $as_echo "$as_me:${as_lineno-$LINENO}: checking for $ac_word" >&5 $as_echo_n "checking for $ac_word... " >&6; } if ${ac_cv_prog_ac_ct_STRIP+:} false; then : $as_echo_n "(cached) " >&6 else if test -n "$ac_ct_STRIP"; then ac_cv_prog_ac_ct_STRIP="$ac_ct_STRIP" # Let the user override the test. 
else as_save_IFS=$IFS; IFS=$PATH_SEPARATOR for as_dir in $PATH do IFS=$as_save_IFS test -z "$as_dir" && as_dir=. for ac_exec_ext in '' $ac_executable_extensions; do if as_fn_executable_p "$as_dir/$ac_word$ac_exec_ext"; then ac_cv_prog_ac_ct_STRIP="strip" $as_echo "$as_me:${as_lineno-$LINENO}: found $as_dir/$ac_word$ac_exec_ext" >&5 break 2 fi done done IFS=$as_save_IFS fi fi ac_ct_STRIP=$ac_cv_prog_ac_ct_STRIP if test -n "$ac_ct_STRIP"; then { $as_echo "$as_me:${as_lineno-$LINENO}: result: $ac_ct_STRIP" >&5 $as_echo "$ac_ct_STRIP" >&6; } else { $as_echo "$as_me:${as_lineno-$LINENO}: result: no" >&5 $as_echo "no" >&6; } fi if test "x$ac_ct_STRIP" = x; then STRIP=":" else case $cross_compiling:$ac_tool_warned in yes:) { $as_echo "$as_me:${as_lineno-$LINENO}: WARNING: using cross tools not prefixed with host triplet" >&5 $as_echo "$as_me: WARNING: using cross tools not prefixed with host triplet" >&2;} ac_tool_warned=yes ;; esac STRIP=$ac_ct_STRIP fi else STRIP="$ac_cv_prog_STRIP" fi test -z "$STRIP" && STRIP=: if test -n "$ac_tool_prefix"; then # Extract the first word of "${ac_tool_prefix}ranlib", so it can be a program name with args. set dummy ${ac_tool_prefix}ranlib; ac_word=$2 { $as_echo "$as_me:${as_lineno-$LINENO}: checking for $ac_word" >&5 $as_echo_n "checking for $ac_word... " >&6; } if ${ac_cv_prog_RANLIB+:} false; then : $as_echo_n "(cached) " >&6 else if test -n "$RANLIB"; then ac_cv_prog_RANLIB="$RANLIB" # Let the user override the test. else as_save_IFS=$IFS; IFS=$PATH_SEPARATOR for as_dir in $PATH do IFS=$as_save_IFS test -z "$as_dir" && as_dir=. 
for ac_exec_ext in '' $ac_executable_extensions; do if as_fn_executable_p "$as_dir/$ac_word$ac_exec_ext"; then ac_cv_prog_RANLIB="${ac_tool_prefix}ranlib" $as_echo "$as_me:${as_lineno-$LINENO}: found $as_dir/$ac_word$ac_exec_ext" >&5 break 2 fi done done IFS=$as_save_IFS fi fi RANLIB=$ac_cv_prog_RANLIB if test -n "$RANLIB"; then { $as_echo "$as_me:${as_lineno-$LINENO}: result: $RANLIB" >&5 $as_echo "$RANLIB" >&6; } else { $as_echo "$as_me:${as_lineno-$LINENO}: result: no" >&5 $as_echo "no" >&6; } fi fi if test -z "$ac_cv_prog_RANLIB"; then ac_ct_RANLIB=$RANLIB # Extract the first word of "ranlib", so it can be a program name with args. set dummy ranlib; ac_word=$2 { $as_echo "$as_me:${as_lineno-$LINENO}: checking for $ac_word" >&5 $as_echo_n "checking for $ac_word... " >&6; } if ${ac_cv_prog_ac_ct_RANLIB+:} false; then : $as_echo_n "(cached) " >&6 else if test -n "$ac_ct_RANLIB"; then ac_cv_prog_ac_ct_RANLIB="$ac_ct_RANLIB" # Let the user override the test. else as_save_IFS=$IFS; IFS=$PATH_SEPARATOR for as_dir in $PATH do IFS=$as_save_IFS test -z "$as_dir" && as_dir=. 
for ac_exec_ext in '' $ac_executable_extensions; do if as_fn_executable_p "$as_dir/$ac_word$ac_exec_ext"; then ac_cv_prog_ac_ct_RANLIB="ranlib" $as_echo "$as_me:${as_lineno-$LINENO}: found $as_dir/$ac_word$ac_exec_ext" >&5 break 2 fi done done IFS=$as_save_IFS fi fi ac_ct_RANLIB=$ac_cv_prog_ac_ct_RANLIB if test -n "$ac_ct_RANLIB"; then { $as_echo "$as_me:${as_lineno-$LINENO}: result: $ac_ct_RANLIB" >&5 $as_echo "$ac_ct_RANLIB" >&6; } else { $as_echo "$as_me:${as_lineno-$LINENO}: result: no" >&5 $as_echo "no" >&6; } fi if test "x$ac_ct_RANLIB" = x; then RANLIB=":" else case $cross_compiling:$ac_tool_warned in yes:) { $as_echo "$as_me:${as_lineno-$LINENO}: WARNING: using cross tools not prefixed with host triplet" >&5 $as_echo "$as_me: WARNING: using cross tools not prefixed with host triplet" >&2;} ac_tool_warned=yes ;; esac RANLIB=$ac_ct_RANLIB fi else RANLIB="$ac_cv_prog_RANLIB" fi test -z "$RANLIB" && RANLIB=: # Determine commands to create old-style static archives. old_archive_cmds='$AR $AR_FLAGS $oldlib$oldobjs' old_postinstall_cmds='chmod 644 $oldlib' old_postuninstall_cmds= if test -n "$RANLIB"; then case $host_os in openbsd*) old_postinstall_cmds="$old_postinstall_cmds~\$RANLIB -t \$tool_oldlib" ;; *) old_postinstall_cmds="$old_postinstall_cmds~\$RANLIB \$tool_oldlib" ;; esac old_archive_cmds="$old_archive_cmds~\$RANLIB \$tool_oldlib" fi case $host_os in darwin*) lock_old_archive_extraction=yes ;; *) lock_old_archive_extraction=no ;; esac # If no C compiler was specified, use CC. LTCC=${LTCC-"$CC"} # If no C compiler flags were specified, use CFLAGS. LTCFLAGS=${LTCFLAGS-"$CFLAGS"} # Allow CC to be a program name with arguments. compiler=$CC # Check for command to grab the raw symbol name followed by C symbol from nm. { $as_echo "$as_me:${as_lineno-$LINENO}: checking command to parse $NM output from $compiler object" >&5 $as_echo_n "checking command to parse $NM output from $compiler object... 
" >&6; } if ${lt_cv_sys_global_symbol_pipe+:} false; then : $as_echo_n "(cached) " >&6 else # These are sane defaults that work on at least a few old systems. # [They come from Ultrix. What could be older than Ultrix?!! ;)] # Character class describing NM global symbol codes. symcode='[BCDEGRST]' # Regexp to match symbols that can be accessed directly from C. sympat='\([_A-Za-z][_A-Za-z0-9]*\)' # Define system-specific variables. case $host_os in aix*) symcode='[BCDT]' ;; cygwin* | mingw* | pw32* | cegcc*) symcode='[ABCDGISTW]' ;; hpux*) if test "$host_cpu" = ia64; then symcode='[ABCDEGRST]' fi ;; irix* | nonstopux*) symcode='[BCDEGRST]' ;; osf*) symcode='[BCDEGQRST]' ;; solaris*) symcode='[BDRT]' ;; sco3.2v5*) symcode='[DT]' ;; sysv4.2uw2*) symcode='[DT]' ;; sysv5* | sco5v6* | unixware* | OpenUNIX*) symcode='[ABDT]' ;; sysv4) symcode='[DFNSTU]' ;; esac # If we're using GNU nm, then use its standard symbol codes. case `$NM -V 2>&1` in *GNU* | *'with BFD'*) symcode='[ABCDGIRSTW]' ;; esac # Transform an extracted symbol line into a proper C declaration. # Some systems (esp. on ia64) link data and code symbols differently, # so use this general approach. lt_cv_sys_global_symbol_to_cdecl="sed -n -e 's/^T .* \(.*\)$/extern int \1();/p' -e 's/^$symcode* .* \(.*\)$/extern char \1;/p'" # Transform an extracted symbol line into symbol name and symbol address lt_cv_sys_global_symbol_to_c_name_address="sed -n -e 's/^: \([^ ]*\)[ ]*$/ {\\\"\1\\\", (void *) 0},/p' -e 's/^$symcode* \([^ ]*\) \([^ ]*\)$/ {\"\2\", (void *) \&\2},/p'" lt_cv_sys_global_symbol_to_c_name_address_lib_prefix="sed -n -e 's/^: \([^ ]*\)[ ]*$/ {\\\"\1\\\", (void *) 0},/p' -e 's/^$symcode* \([^ ]*\) \(lib[^ ]*\)$/ {\"\2\", (void *) \&\2},/p' -e 's/^$symcode* \([^ ]*\) \([^ ]*\)$/ {\"lib\2\", (void *) \&\2},/p'" # Handle CRLF in mingw tool chain opt_cr= case $build_os in mingw*) opt_cr=`$ECHO 'x\{0,1\}' | tr x '\015'` # option cr in regexp ;; esac # Try without a prefix underscore, then with it. 
for ac_symprfx in "" "_"; do # Transform symcode, sympat, and symprfx into a raw symbol and a C symbol. symxfrm="\\1 $ac_symprfx\\2 \\2" # Write the raw and C identifiers. if test "$lt_cv_nm_interface" = "MS dumpbin"; then # Fake it for dumpbin and say T for any non-static function # and D for any global variable. # Also find C++ and __fastcall symbols from MSVC++, # which start with @ or ?. lt_cv_sys_global_symbol_pipe="$AWK '"\ " {last_section=section; section=\$ 3};"\ " /^COFF SYMBOL TABLE/{for(i in hide) delete hide[i]};"\ " /Section length .*#relocs.*(pick any)/{hide[last_section]=1};"\ " \$ 0!~/External *\|/{next};"\ " / 0+ UNDEF /{next}; / UNDEF \([^|]\)*()/{next};"\ " {if(hide[section]) next};"\ " {f=0}; \$ 0~/\(\).*\|/{f=1}; {printf f ? \"T \" : \"D \"};"\ " {split(\$ 0, a, /\||\r/); split(a[2], s)};"\ " s[1]~/^[@?]/{print s[1], s[1]; next};"\ " s[1]~prfx {split(s[1],t,\"@\"); print t[1], substr(t[1],length(prfx))}"\ " ' prfx=^$ac_symprfx" else lt_cv_sys_global_symbol_pipe="sed -n -e 's/^.*[ ]\($symcode$symcode*\)[ ][ ]*$ac_symprfx$sympat$opt_cr$/$symxfrm/p'" fi lt_cv_sys_global_symbol_pipe="$lt_cv_sys_global_symbol_pipe | sed '/ __gnu_lto/d'" # Check to see that the pipe works correctly. pipe_works=no rm -f conftest* cat > conftest.$ac_ext <<_LT_EOF #ifdef __cplusplus extern "C" { #endif char nm_test_var; void nm_test_func(void); void nm_test_func(void){} #ifdef __cplusplus } #endif int main(){nm_test_var='a';nm_test_func();return(0);} _LT_EOF if { { eval echo "\"\$as_me\":${as_lineno-$LINENO}: \"$ac_compile\""; } >&5 (eval $ac_compile) 2>&5 ac_status=$? $as_echo "$as_me:${as_lineno-$LINENO}: \$? = $ac_status" >&5 test $ac_status = 0; }; then # Now try to grab the symbols. nlist=conftest.nm if { { eval echo "\"\$as_me\":${as_lineno-$LINENO}: \"$NM conftest.$ac_objext \| "$lt_cv_sys_global_symbol_pipe" \> $nlist\""; } >&5 (eval $NM conftest.$ac_objext \| "$lt_cv_sys_global_symbol_pipe" \> $nlist) 2>&5 ac_status=$? $as_echo "$as_me:${as_lineno-$LINENO}: \$? 
= $ac_status" >&5 test $ac_status = 0; } && test -s "$nlist"; then # Try sorting and uniquifying the output. if sort "$nlist" | uniq > "$nlist"T; then mv -f "$nlist"T "$nlist" else rm -f "$nlist"T fi # Make sure that we snagged all the symbols we need. if $GREP ' nm_test_var$' "$nlist" >/dev/null; then if $GREP ' nm_test_func$' "$nlist" >/dev/null; then cat <<_LT_EOF > conftest.$ac_ext /* Keep this code in sync between libtool.m4, ltmain, lt_system.h, and tests. */ #if defined(_WIN32) || defined(__CYGWIN__) || defined(_WIN32_WCE) /* DATA imports from DLLs on WIN32 con't be const, because runtime relocations are performed -- see ld's documentation on pseudo-relocs. */ # define LT_DLSYM_CONST #elif defined(__osf__) /* This system does not cope well with relocations in const data. */ # define LT_DLSYM_CONST #else # define LT_DLSYM_CONST const #endif #ifdef __cplusplus extern "C" { #endif _LT_EOF # Now generate the symbol file. eval "$lt_cv_sys_global_symbol_to_cdecl"' < "$nlist" | $GREP -v main >> conftest.$ac_ext' cat <<_LT_EOF >> conftest.$ac_ext /* The mapping between symbol names and symbols. */ LT_DLSYM_CONST struct { const char *name; void *address; } lt__PROGRAM__LTX_preloaded_symbols[] = { { "@PROGRAM@", (void *) 0 }, _LT_EOF $SED "s/^$symcode$symcode* \(.*\) \(.*\)$/ {\"\2\", (void *) \&\2},/" < "$nlist" | $GREP -v main >> conftest.$ac_ext cat <<\_LT_EOF >> conftest.$ac_ext {0, (void *) 0} }; /* This works around a problem in FreeBSD linker */ #ifdef FREEBSD_WORKAROUND static const void *lt_preloaded_setup() { return lt__PROGRAM__LTX_preloaded_symbols; } #endif #ifdef __cplusplus } #endif _LT_EOF # Now try linking the two files. mv conftest.$ac_objext conftstm.$ac_objext lt_globsym_save_LIBS=$LIBS lt_globsym_save_CFLAGS=$CFLAGS LIBS="conftstm.$ac_objext" CFLAGS="$CFLAGS$lt_prog_compiler_no_builtin_flag" if { { eval echo "\"\$as_me\":${as_lineno-$LINENO}: \"$ac_link\""; } >&5 (eval $ac_link) 2>&5 ac_status=$? $as_echo "$as_me:${as_lineno-$LINENO}: \$? 
= $ac_status" >&5 test $ac_status = 0; } && test -s conftest${ac_exeext}; then pipe_works=yes fi LIBS=$lt_globsym_save_LIBS CFLAGS=$lt_globsym_save_CFLAGS else echo "cannot find nm_test_func in $nlist" >&5 fi else echo "cannot find nm_test_var in $nlist" >&5 fi else echo "cannot run $lt_cv_sys_global_symbol_pipe" >&5 fi else echo "$progname: failed program was:" >&5 cat conftest.$ac_ext >&5 fi rm -rf conftest* conftst* # Do not use the global_symbol_pipe unless it works. if test "$pipe_works" = yes; then break else lt_cv_sys_global_symbol_pipe= fi done fi if test -z "$lt_cv_sys_global_symbol_pipe"; then lt_cv_sys_global_symbol_to_cdecl= fi if test -z "$lt_cv_sys_global_symbol_pipe$lt_cv_sys_global_symbol_to_cdecl"; then { $as_echo "$as_me:${as_lineno-$LINENO}: result: failed" >&5 $as_echo "failed" >&6; } else { $as_echo "$as_me:${as_lineno-$LINENO}: result: ok" >&5 $as_echo "ok" >&6; } fi # Response file support. if test "$lt_cv_nm_interface" = "MS dumpbin"; then nm_file_list_spec='@' elif $NM --help 2>/dev/null | grep '[@]FILE' >/dev/null; then nm_file_list_spec='@' fi { $as_echo "$as_me:${as_lineno-$LINENO}: checking for sysroot" >&5 $as_echo_n "checking for sysroot... " >&6; } # Check whether --with-sysroot was given. if test "${with_sysroot+set}" = set; then : withval=$with_sysroot; else with_sysroot=no fi lt_sysroot= case ${with_sysroot} in #( yes) if test "$GCC" = yes; then lt_sysroot=`$CC --print-sysroot 2>/dev/null` fi ;; #( /*) lt_sysroot=`echo "$with_sysroot" | sed -e "$sed_quote_subst"` ;; #( no|'') ;; #( *) { $as_echo "$as_me:${as_lineno-$LINENO}: result: ${with_sysroot}" >&5 $as_echo "${with_sysroot}" >&6; } as_fn_error $? "The sysroot must be an absolute path." "$LINENO" 5 ;; esac { $as_echo "$as_me:${as_lineno-$LINENO}: result: ${lt_sysroot:-no}" >&5 $as_echo "${lt_sysroot:-no}" >&6; } # Check whether --enable-libtool-lock was given. 
if test "${enable_libtool_lock+set}" = set; then : enableval=$enable_libtool_lock; fi test "x$enable_libtool_lock" != xno && enable_libtool_lock=yes # Some flags need to be propagated to the compiler or linker for good # libtool support. case $host in ia64-*-hpux*) # Find out which ABI we are using. echo 'int i;' > conftest.$ac_ext if { { eval echo "\"\$as_me\":${as_lineno-$LINENO}: \"$ac_compile\""; } >&5 (eval $ac_compile) 2>&5 ac_status=$? $as_echo "$as_me:${as_lineno-$LINENO}: \$? = $ac_status" >&5 test $ac_status = 0; }; then case `/usr/bin/file conftest.$ac_objext` in *ELF-32*) HPUX_IA64_MODE="32" ;; *ELF-64*) HPUX_IA64_MODE="64" ;; esac fi rm -rf conftest* ;; *-*-irix6*) # Find out which ABI we are using. echo '#line '$LINENO' "configure"' > conftest.$ac_ext if { { eval echo "\"\$as_me\":${as_lineno-$LINENO}: \"$ac_compile\""; } >&5 (eval $ac_compile) 2>&5 ac_status=$? $as_echo "$as_me:${as_lineno-$LINENO}: \$? = $ac_status" >&5 test $ac_status = 0; }; then if test "$lt_cv_prog_gnu_ld" = yes; then case `/usr/bin/file conftest.$ac_objext` in *32-bit*) LD="${LD-ld} -melf32bsmip" ;; *N32*) LD="${LD-ld} -melf32bmipn32" ;; *64-bit*) LD="${LD-ld} -melf64bmip" ;; esac else case `/usr/bin/file conftest.$ac_objext` in *32-bit*) LD="${LD-ld} -32" ;; *N32*) LD="${LD-ld} -n32" ;; *64-bit*) LD="${LD-ld} -64" ;; esac fi fi rm -rf conftest* ;; x86_64-*kfreebsd*-gnu|x86_64-*linux*|powerpc*-*linux*| \ s390*-*linux*|s390*-*tpf*|sparc*-*linux*) # Find out which ABI we are using. echo 'int i;' > conftest.$ac_ext if { { eval echo "\"\$as_me\":${as_lineno-$LINENO}: \"$ac_compile\""; } >&5 (eval $ac_compile) 2>&5 ac_status=$? $as_echo "$as_me:${as_lineno-$LINENO}: \$? 
= $ac_status" >&5 test $ac_status = 0; }; then case `/usr/bin/file conftest.o` in *32-bit*) case $host in x86_64-*kfreebsd*-gnu) LD="${LD-ld} -m elf_i386_fbsd" ;; x86_64-*linux*) case `/usr/bin/file conftest.o` in *x86-64*) LD="${LD-ld} -m elf32_x86_64" ;; *) LD="${LD-ld} -m elf_i386" ;; esac ;; powerpc64le-*) LD="${LD-ld} -m elf32lppclinux" ;; powerpc64-*) LD="${LD-ld} -m elf32ppclinux" ;; s390x-*linux*) LD="${LD-ld} -m elf_s390" ;; sparc64-*linux*) LD="${LD-ld} -m elf32_sparc" ;; esac ;; *64-bit*) case $host in x86_64-*kfreebsd*-gnu) LD="${LD-ld} -m elf_x86_64_fbsd" ;; x86_64-*linux*) LD="${LD-ld} -m elf_x86_64" ;; powerpcle-*) LD="${LD-ld} -m elf64lppc" ;; powerpc-*) LD="${LD-ld} -m elf64ppc" ;; s390*-*linux*|s390*-*tpf*) LD="${LD-ld} -m elf64_s390" ;; sparc*-*linux*) LD="${LD-ld} -m elf64_sparc" ;; esac ;; esac fi rm -rf conftest* ;; *-*-sco3.2v5*) # On SCO OpenServer 5, we need -belf to get full-featured binaries. SAVE_CFLAGS="$CFLAGS" CFLAGS="$CFLAGS -belf" { $as_echo "$as_me:${as_lineno-$LINENO}: checking whether the C compiler needs -belf" >&5 $as_echo_n "checking whether the C compiler needs -belf... " >&6; } if ${lt_cv_cc_needs_belf+:} false; then : $as_echo_n "(cached) " >&6 else ac_ext=c ac_cpp='$CPP $CPPFLAGS' ac_compile='$CC -c $CFLAGS $CPPFLAGS conftest.$ac_ext >&5' ac_link='$CC -o conftest$ac_exeext $CFLAGS $CPPFLAGS $LDFLAGS conftest.$ac_ext $LIBS >&5' ac_compiler_gnu=$ac_cv_c_compiler_gnu cat confdefs.h - <<_ACEOF >conftest.$ac_ext /* end confdefs.h. 
*/ int main () { ; return 0; } _ACEOF if ac_fn_c_try_link "$LINENO"; then : lt_cv_cc_needs_belf=yes else lt_cv_cc_needs_belf=no fi rm -f core conftest.err conftest.$ac_objext \ conftest$ac_exeext conftest.$ac_ext ac_ext=c ac_cpp='$CPP $CPPFLAGS' ac_compile='$CC -c $CFLAGS $CPPFLAGS conftest.$ac_ext >&5' ac_link='$CC -o conftest$ac_exeext $CFLAGS $CPPFLAGS $LDFLAGS conftest.$ac_ext $LIBS >&5' ac_compiler_gnu=$ac_cv_c_compiler_gnu fi { $as_echo "$as_me:${as_lineno-$LINENO}: result: $lt_cv_cc_needs_belf" >&5 $as_echo "$lt_cv_cc_needs_belf" >&6; } if test x"$lt_cv_cc_needs_belf" != x"yes"; then # this is probably gcc 2.8.0, egcs 1.0 or newer; no need for -belf CFLAGS="$SAVE_CFLAGS" fi ;; *-*solaris*) # Find out which ABI we are using. echo 'int i;' > conftest.$ac_ext if { { eval echo "\"\$as_me\":${as_lineno-$LINENO}: \"$ac_compile\""; } >&5 (eval $ac_compile) 2>&5 ac_status=$? $as_echo "$as_me:${as_lineno-$LINENO}: \$? = $ac_status" >&5 test $ac_status = 0; }; then case `/usr/bin/file conftest.o` in *64-bit*) case $lt_cv_prog_gnu_ld in yes*) case $host in i?86-*-solaris*) LD="${LD-ld} -m elf_x86_64" ;; sparc*-*-solaris*) LD="${LD-ld} -m elf64_sparc" ;; esac # GNU ld 2.21 introduced _sol2 emulations. Use them if available. if ${LD-ld} -V | grep _sol2 >/dev/null 2>&1; then LD="${LD-ld}_sol2" fi ;; *) if ${LD-ld} -64 -r -o conftest2.o conftest.o >/dev/null 2>&1; then LD="${LD-ld} -64" fi ;; esac ;; esac fi rm -rf conftest* ;; esac need_locks="$enable_libtool_lock" if test -n "$ac_tool_prefix"; then # Extract the first word of "${ac_tool_prefix}mt", so it can be a program name with args. set dummy ${ac_tool_prefix}mt; ac_word=$2 { $as_echo "$as_me:${as_lineno-$LINENO}: checking for $ac_word" >&5 $as_echo_n "checking for $ac_word... " >&6; } if ${ac_cv_prog_MANIFEST_TOOL+:} false; then : $as_echo_n "(cached) " >&6 else if test -n "$MANIFEST_TOOL"; then ac_cv_prog_MANIFEST_TOOL="$MANIFEST_TOOL" # Let the user override the test. 
else as_save_IFS=$IFS; IFS=$PATH_SEPARATOR for as_dir in $PATH do IFS=$as_save_IFS test -z "$as_dir" && as_dir=. for ac_exec_ext in '' $ac_executable_extensions; do if as_fn_executable_p "$as_dir/$ac_word$ac_exec_ext"; then ac_cv_prog_MANIFEST_TOOL="${ac_tool_prefix}mt" $as_echo "$as_me:${as_lineno-$LINENO}: found $as_dir/$ac_word$ac_exec_ext" >&5 break 2 fi done done IFS=$as_save_IFS fi fi MANIFEST_TOOL=$ac_cv_prog_MANIFEST_TOOL if test -n "$MANIFEST_TOOL"; then { $as_echo "$as_me:${as_lineno-$LINENO}: result: $MANIFEST_TOOL" >&5 $as_echo "$MANIFEST_TOOL" >&6; } else { $as_echo "$as_me:${as_lineno-$LINENO}: result: no" >&5 $as_echo "no" >&6; } fi fi if test -z "$ac_cv_prog_MANIFEST_TOOL"; then ac_ct_MANIFEST_TOOL=$MANIFEST_TOOL # Extract the first word of "mt", so it can be a program name with args. set dummy mt; ac_word=$2 { $as_echo "$as_me:${as_lineno-$LINENO}: checking for $ac_word" >&5 $as_echo_n "checking for $ac_word... " >&6; } if ${ac_cv_prog_ac_ct_MANIFEST_TOOL+:} false; then : $as_echo_n "(cached) " >&6 else if test -n "$ac_ct_MANIFEST_TOOL"; then ac_cv_prog_ac_ct_MANIFEST_TOOL="$ac_ct_MANIFEST_TOOL" # Let the user override the test. else as_save_IFS=$IFS; IFS=$PATH_SEPARATOR for as_dir in $PATH do IFS=$as_save_IFS test -z "$as_dir" && as_dir=. 
for ac_exec_ext in '' $ac_executable_extensions; do if as_fn_executable_p "$as_dir/$ac_word$ac_exec_ext"; then ac_cv_prog_ac_ct_MANIFEST_TOOL="mt" $as_echo "$as_me:${as_lineno-$LINENO}: found $as_dir/$ac_word$ac_exec_ext" >&5 break 2 fi done done IFS=$as_save_IFS fi fi ac_ct_MANIFEST_TOOL=$ac_cv_prog_ac_ct_MANIFEST_TOOL if test -n "$ac_ct_MANIFEST_TOOL"; then { $as_echo "$as_me:${as_lineno-$LINENO}: result: $ac_ct_MANIFEST_TOOL" >&5 $as_echo "$ac_ct_MANIFEST_TOOL" >&6; } else { $as_echo "$as_me:${as_lineno-$LINENO}: result: no" >&5 $as_echo "no" >&6; } fi if test "x$ac_ct_MANIFEST_TOOL" = x; then MANIFEST_TOOL=":" else case $cross_compiling:$ac_tool_warned in yes:) { $as_echo "$as_me:${as_lineno-$LINENO}: WARNING: using cross tools not prefixed with host triplet" >&5 $as_echo "$as_me: WARNING: using cross tools not prefixed with host triplet" >&2;} ac_tool_warned=yes ;; esac MANIFEST_TOOL=$ac_ct_MANIFEST_TOOL fi else MANIFEST_TOOL="$ac_cv_prog_MANIFEST_TOOL" fi test -z "$MANIFEST_TOOL" && MANIFEST_TOOL=mt { $as_echo "$as_me:${as_lineno-$LINENO}: checking if $MANIFEST_TOOL is a manifest tool" >&5 $as_echo_n "checking if $MANIFEST_TOOL is a manifest tool... " >&6; } if ${lt_cv_path_mainfest_tool+:} false; then : $as_echo_n "(cached) " >&6 else lt_cv_path_mainfest_tool=no echo "$as_me:$LINENO: $MANIFEST_TOOL '-?'" >&5 $MANIFEST_TOOL '-?' 2>conftest.err > conftest.out cat conftest.err >&5 if $GREP 'Manifest Tool' conftest.out > /dev/null; then lt_cv_path_mainfest_tool=yes fi rm -f conftest* fi { $as_echo "$as_me:${as_lineno-$LINENO}: result: $lt_cv_path_mainfest_tool" >&5 $as_echo "$lt_cv_path_mainfest_tool" >&6; } if test "x$lt_cv_path_mainfest_tool" != xyes; then MANIFEST_TOOL=: fi case $host_os in rhapsody* | darwin*) if test -n "$ac_tool_prefix"; then # Extract the first word of "${ac_tool_prefix}dsymutil", so it can be a program name with args. 
set dummy ${ac_tool_prefix}dsymutil; ac_word=$2 { $as_echo "$as_me:${as_lineno-$LINENO}: checking for $ac_word" >&5 $as_echo_n "checking for $ac_word... " >&6; } if ${ac_cv_prog_DSYMUTIL+:} false; then : $as_echo_n "(cached) " >&6 else if test -n "$DSYMUTIL"; then ac_cv_prog_DSYMUTIL="$DSYMUTIL" # Let the user override the test. else as_save_IFS=$IFS; IFS=$PATH_SEPARATOR for as_dir in $PATH do IFS=$as_save_IFS test -z "$as_dir" && as_dir=. for ac_exec_ext in '' $ac_executable_extensions; do if as_fn_executable_p "$as_dir/$ac_word$ac_exec_ext"; then ac_cv_prog_DSYMUTIL="${ac_tool_prefix}dsymutil" $as_echo "$as_me:${as_lineno-$LINENO}: found $as_dir/$ac_word$ac_exec_ext" >&5 break 2 fi done done IFS=$as_save_IFS fi fi DSYMUTIL=$ac_cv_prog_DSYMUTIL if test -n "$DSYMUTIL"; then { $as_echo "$as_me:${as_lineno-$LINENO}: result: $DSYMUTIL" >&5 $as_echo "$DSYMUTIL" >&6; } else { $as_echo "$as_me:${as_lineno-$LINENO}: result: no" >&5 $as_echo "no" >&6; } fi fi if test -z "$ac_cv_prog_DSYMUTIL"; then ac_ct_DSYMUTIL=$DSYMUTIL # Extract the first word of "dsymutil", so it can be a program name with args. set dummy dsymutil; ac_word=$2 { $as_echo "$as_me:${as_lineno-$LINENO}: checking for $ac_word" >&5 $as_echo_n "checking for $ac_word... " >&6; } if ${ac_cv_prog_ac_ct_DSYMUTIL+:} false; then : $as_echo_n "(cached) " >&6 else if test -n "$ac_ct_DSYMUTIL"; then ac_cv_prog_ac_ct_DSYMUTIL="$ac_ct_DSYMUTIL" # Let the user override the test. else as_save_IFS=$IFS; IFS=$PATH_SEPARATOR for as_dir in $PATH do IFS=$as_save_IFS test -z "$as_dir" && as_dir=. 
for ac_exec_ext in '' $ac_executable_extensions; do if as_fn_executable_p "$as_dir/$ac_word$ac_exec_ext"; then ac_cv_prog_ac_ct_DSYMUTIL="dsymutil" $as_echo "$as_me:${as_lineno-$LINENO}: found $as_dir/$ac_word$ac_exec_ext" >&5 break 2 fi done done IFS=$as_save_IFS fi fi ac_ct_DSYMUTIL=$ac_cv_prog_ac_ct_DSYMUTIL if test -n "$ac_ct_DSYMUTIL"; then { $as_echo "$as_me:${as_lineno-$LINENO}: result: $ac_ct_DSYMUTIL" >&5 $as_echo "$ac_ct_DSYMUTIL" >&6; } else { $as_echo "$as_me:${as_lineno-$LINENO}: result: no" >&5 $as_echo "no" >&6; } fi if test "x$ac_ct_DSYMUTIL" = x; then DSYMUTIL=":" else case $cross_compiling:$ac_tool_warned in yes:) { $as_echo "$as_me:${as_lineno-$LINENO}: WARNING: using cross tools not prefixed with host triplet" >&5 $as_echo "$as_me: WARNING: using cross tools not prefixed with host triplet" >&2;} ac_tool_warned=yes ;; esac DSYMUTIL=$ac_ct_DSYMUTIL fi else DSYMUTIL="$ac_cv_prog_DSYMUTIL" fi if test -n "$ac_tool_prefix"; then # Extract the first word of "${ac_tool_prefix}nmedit", so it can be a program name with args. set dummy ${ac_tool_prefix}nmedit; ac_word=$2 { $as_echo "$as_me:${as_lineno-$LINENO}: checking for $ac_word" >&5 $as_echo_n "checking for $ac_word... " >&6; } if ${ac_cv_prog_NMEDIT+:} false; then : $as_echo_n "(cached) " >&6 else if test -n "$NMEDIT"; then ac_cv_prog_NMEDIT="$NMEDIT" # Let the user override the test. else as_save_IFS=$IFS; IFS=$PATH_SEPARATOR for as_dir in $PATH do IFS=$as_save_IFS test -z "$as_dir" && as_dir=. 
for ac_exec_ext in '' $ac_executable_extensions; do if as_fn_executable_p "$as_dir/$ac_word$ac_exec_ext"; then ac_cv_prog_NMEDIT="${ac_tool_prefix}nmedit" $as_echo "$as_me:${as_lineno-$LINENO}: found $as_dir/$ac_word$ac_exec_ext" >&5 break 2 fi done done IFS=$as_save_IFS fi fi NMEDIT=$ac_cv_prog_NMEDIT if test -n "$NMEDIT"; then { $as_echo "$as_me:${as_lineno-$LINENO}: result: $NMEDIT" >&5 $as_echo "$NMEDIT" >&6; } else { $as_echo "$as_me:${as_lineno-$LINENO}: result: no" >&5 $as_echo "no" >&6; } fi fi if test -z "$ac_cv_prog_NMEDIT"; then ac_ct_NMEDIT=$NMEDIT # Extract the first word of "nmedit", so it can be a program name with args. set dummy nmedit; ac_word=$2 { $as_echo "$as_me:${as_lineno-$LINENO}: checking for $ac_word" >&5 $as_echo_n "checking for $ac_word... " >&6; } if ${ac_cv_prog_ac_ct_NMEDIT+:} false; then : $as_echo_n "(cached) " >&6 else if test -n "$ac_ct_NMEDIT"; then ac_cv_prog_ac_ct_NMEDIT="$ac_ct_NMEDIT" # Let the user override the test. else as_save_IFS=$IFS; IFS=$PATH_SEPARATOR for as_dir in $PATH do IFS=$as_save_IFS test -z "$as_dir" && as_dir=. 
for ac_exec_ext in '' $ac_executable_extensions; do if as_fn_executable_p "$as_dir/$ac_word$ac_exec_ext"; then ac_cv_prog_ac_ct_NMEDIT="nmedit" $as_echo "$as_me:${as_lineno-$LINENO}: found $as_dir/$ac_word$ac_exec_ext" >&5 break 2 fi done done IFS=$as_save_IFS fi fi ac_ct_NMEDIT=$ac_cv_prog_ac_ct_NMEDIT if test -n "$ac_ct_NMEDIT"; then { $as_echo "$as_me:${as_lineno-$LINENO}: result: $ac_ct_NMEDIT" >&5 $as_echo "$ac_ct_NMEDIT" >&6; } else { $as_echo "$as_me:${as_lineno-$LINENO}: result: no" >&5 $as_echo "no" >&6; } fi if test "x$ac_ct_NMEDIT" = x; then NMEDIT=":" else case $cross_compiling:$ac_tool_warned in yes:) { $as_echo "$as_me:${as_lineno-$LINENO}: WARNING: using cross tools not prefixed with host triplet" >&5 $as_echo "$as_me: WARNING: using cross tools not prefixed with host triplet" >&2;} ac_tool_warned=yes ;; esac NMEDIT=$ac_ct_NMEDIT fi else NMEDIT="$ac_cv_prog_NMEDIT" fi if test -n "$ac_tool_prefix"; then # Extract the first word of "${ac_tool_prefix}lipo", so it can be a program name with args. set dummy ${ac_tool_prefix}lipo; ac_word=$2 { $as_echo "$as_me:${as_lineno-$LINENO}: checking for $ac_word" >&5 $as_echo_n "checking for $ac_word... " >&6; } if ${ac_cv_prog_LIPO+:} false; then : $as_echo_n "(cached) " >&6 else if test -n "$LIPO"; then ac_cv_prog_LIPO="$LIPO" # Let the user override the test. else as_save_IFS=$IFS; IFS=$PATH_SEPARATOR for as_dir in $PATH do IFS=$as_save_IFS test -z "$as_dir" && as_dir=. 
for ac_exec_ext in '' $ac_executable_extensions; do if as_fn_executable_p "$as_dir/$ac_word$ac_exec_ext"; then ac_cv_prog_LIPO="${ac_tool_prefix}lipo" $as_echo "$as_me:${as_lineno-$LINENO}: found $as_dir/$ac_word$ac_exec_ext" >&5 break 2 fi done done IFS=$as_save_IFS fi fi LIPO=$ac_cv_prog_LIPO if test -n "$LIPO"; then { $as_echo "$as_me:${as_lineno-$LINENO}: result: $LIPO" >&5 $as_echo "$LIPO" >&6; } else { $as_echo "$as_me:${as_lineno-$LINENO}: result: no" >&5 $as_echo "no" >&6; } fi fi if test -z "$ac_cv_prog_LIPO"; then ac_ct_LIPO=$LIPO # Extract the first word of "lipo", so it can be a program name with args. set dummy lipo; ac_word=$2 { $as_echo "$as_me:${as_lineno-$LINENO}: checking for $ac_word" >&5 $as_echo_n "checking for $ac_word... " >&6; } if ${ac_cv_prog_ac_ct_LIPO+:} false; then : $as_echo_n "(cached) " >&6 else if test -n "$ac_ct_LIPO"; then ac_cv_prog_ac_ct_LIPO="$ac_ct_LIPO" # Let the user override the test. else as_save_IFS=$IFS; IFS=$PATH_SEPARATOR for as_dir in $PATH do IFS=$as_save_IFS test -z "$as_dir" && as_dir=. 
for ac_exec_ext in '' $ac_executable_extensions; do if as_fn_executable_p "$as_dir/$ac_word$ac_exec_ext"; then ac_cv_prog_ac_ct_LIPO="lipo" $as_echo "$as_me:${as_lineno-$LINENO}: found $as_dir/$ac_word$ac_exec_ext" >&5 break 2 fi done done IFS=$as_save_IFS fi fi ac_ct_LIPO=$ac_cv_prog_ac_ct_LIPO if test -n "$ac_ct_LIPO"; then { $as_echo "$as_me:${as_lineno-$LINENO}: result: $ac_ct_LIPO" >&5 $as_echo "$ac_ct_LIPO" >&6; } else { $as_echo "$as_me:${as_lineno-$LINENO}: result: no" >&5 $as_echo "no" >&6; } fi if test "x$ac_ct_LIPO" = x; then LIPO=":" else case $cross_compiling:$ac_tool_warned in yes:) { $as_echo "$as_me:${as_lineno-$LINENO}: WARNING: using cross tools not prefixed with host triplet" >&5 $as_echo "$as_me: WARNING: using cross tools not prefixed with host triplet" >&2;} ac_tool_warned=yes ;; esac LIPO=$ac_ct_LIPO fi else LIPO="$ac_cv_prog_LIPO" fi if test -n "$ac_tool_prefix"; then # Extract the first word of "${ac_tool_prefix}otool", so it can be a program name with args. set dummy ${ac_tool_prefix}otool; ac_word=$2 { $as_echo "$as_me:${as_lineno-$LINENO}: checking for $ac_word" >&5 $as_echo_n "checking for $ac_word... " >&6; } if ${ac_cv_prog_OTOOL+:} false; then : $as_echo_n "(cached) " >&6 else if test -n "$OTOOL"; then ac_cv_prog_OTOOL="$OTOOL" # Let the user override the test. else as_save_IFS=$IFS; IFS=$PATH_SEPARATOR for as_dir in $PATH do IFS=$as_save_IFS test -z "$as_dir" && as_dir=. 
for ac_exec_ext in '' $ac_executable_extensions; do if as_fn_executable_p "$as_dir/$ac_word$ac_exec_ext"; then ac_cv_prog_OTOOL="${ac_tool_prefix}otool" $as_echo "$as_me:${as_lineno-$LINENO}: found $as_dir/$ac_word$ac_exec_ext" >&5 break 2 fi done done IFS=$as_save_IFS fi fi OTOOL=$ac_cv_prog_OTOOL if test -n "$OTOOL"; then { $as_echo "$as_me:${as_lineno-$LINENO}: result: $OTOOL" >&5 $as_echo "$OTOOL" >&6; } else { $as_echo "$as_me:${as_lineno-$LINENO}: result: no" >&5 $as_echo "no" >&6; } fi fi if test -z "$ac_cv_prog_OTOOL"; then ac_ct_OTOOL=$OTOOL # Extract the first word of "otool", so it can be a program name with args. set dummy otool; ac_word=$2 { $as_echo "$as_me:${as_lineno-$LINENO}: checking for $ac_word" >&5 $as_echo_n "checking for $ac_word... " >&6; } if ${ac_cv_prog_ac_ct_OTOOL+:} false; then : $as_echo_n "(cached) " >&6 else if test -n "$ac_ct_OTOOL"; then ac_cv_prog_ac_ct_OTOOL="$ac_ct_OTOOL" # Let the user override the test. else as_save_IFS=$IFS; IFS=$PATH_SEPARATOR for as_dir in $PATH do IFS=$as_save_IFS test -z "$as_dir" && as_dir=. 
for ac_exec_ext in '' $ac_executable_extensions; do if as_fn_executable_p "$as_dir/$ac_word$ac_exec_ext"; then ac_cv_prog_ac_ct_OTOOL="otool" $as_echo "$as_me:${as_lineno-$LINENO}: found $as_dir/$ac_word$ac_exec_ext" >&5 break 2 fi done done IFS=$as_save_IFS fi fi ac_ct_OTOOL=$ac_cv_prog_ac_ct_OTOOL if test -n "$ac_ct_OTOOL"; then { $as_echo "$as_me:${as_lineno-$LINENO}: result: $ac_ct_OTOOL" >&5 $as_echo "$ac_ct_OTOOL" >&6; } else { $as_echo "$as_me:${as_lineno-$LINENO}: result: no" >&5 $as_echo "no" >&6; } fi if test "x$ac_ct_OTOOL" = x; then OTOOL=":" else case $cross_compiling:$ac_tool_warned in yes:) { $as_echo "$as_me:${as_lineno-$LINENO}: WARNING: using cross tools not prefixed with host triplet" >&5 $as_echo "$as_me: WARNING: using cross tools not prefixed with host triplet" >&2;} ac_tool_warned=yes ;; esac OTOOL=$ac_ct_OTOOL fi else OTOOL="$ac_cv_prog_OTOOL" fi if test -n "$ac_tool_prefix"; then # Extract the first word of "${ac_tool_prefix}otool64", so it can be a program name with args. set dummy ${ac_tool_prefix}otool64; ac_word=$2 { $as_echo "$as_me:${as_lineno-$LINENO}: checking for $ac_word" >&5 $as_echo_n "checking for $ac_word... " >&6; } if ${ac_cv_prog_OTOOL64+:} false; then : $as_echo_n "(cached) " >&6 else if test -n "$OTOOL64"; then ac_cv_prog_OTOOL64="$OTOOL64" # Let the user override the test. else as_save_IFS=$IFS; IFS=$PATH_SEPARATOR for as_dir in $PATH do IFS=$as_save_IFS test -z "$as_dir" && as_dir=. 
for ac_exec_ext in '' $ac_executable_extensions; do if as_fn_executable_p "$as_dir/$ac_word$ac_exec_ext"; then ac_cv_prog_OTOOL64="${ac_tool_prefix}otool64" $as_echo "$as_me:${as_lineno-$LINENO}: found $as_dir/$ac_word$ac_exec_ext" >&5 break 2 fi done done IFS=$as_save_IFS fi fi OTOOL64=$ac_cv_prog_OTOOL64 if test -n "$OTOOL64"; then { $as_echo "$as_me:${as_lineno-$LINENO}: result: $OTOOL64" >&5 $as_echo "$OTOOL64" >&6; } else { $as_echo "$as_me:${as_lineno-$LINENO}: result: no" >&5 $as_echo "no" >&6; } fi fi if test -z "$ac_cv_prog_OTOOL64"; then ac_ct_OTOOL64=$OTOOL64 # Extract the first word of "otool64", so it can be a program name with args. set dummy otool64; ac_word=$2 { $as_echo "$as_me:${as_lineno-$LINENO}: checking for $ac_word" >&5 $as_echo_n "checking for $ac_word... " >&6; } if ${ac_cv_prog_ac_ct_OTOOL64+:} false; then : $as_echo_n "(cached) " >&6 else if test -n "$ac_ct_OTOOL64"; then ac_cv_prog_ac_ct_OTOOL64="$ac_ct_OTOOL64" # Let the user override the test. else as_save_IFS=$IFS; IFS=$PATH_SEPARATOR for as_dir in $PATH do IFS=$as_save_IFS test -z "$as_dir" && as_dir=. 
for ac_exec_ext in '' $ac_executable_extensions; do if as_fn_executable_p "$as_dir/$ac_word$ac_exec_ext"; then ac_cv_prog_ac_ct_OTOOL64="otool64" $as_echo "$as_me:${as_lineno-$LINENO}: found $as_dir/$ac_word$ac_exec_ext" >&5 break 2 fi done done IFS=$as_save_IFS fi fi ac_ct_OTOOL64=$ac_cv_prog_ac_ct_OTOOL64 if test -n "$ac_ct_OTOOL64"; then { $as_echo "$as_me:${as_lineno-$LINENO}: result: $ac_ct_OTOOL64" >&5 $as_echo "$ac_ct_OTOOL64" >&6; } else { $as_echo "$as_me:${as_lineno-$LINENO}: result: no" >&5 $as_echo "no" >&6; } fi if test "x$ac_ct_OTOOL64" = x; then OTOOL64=":" else case $cross_compiling:$ac_tool_warned in yes:) { $as_echo "$as_me:${as_lineno-$LINENO}: WARNING: using cross tools not prefixed with host triplet" >&5 $as_echo "$as_me: WARNING: using cross tools not prefixed with host triplet" >&2;} ac_tool_warned=yes ;; esac OTOOL64=$ac_ct_OTOOL64 fi else OTOOL64="$ac_cv_prog_OTOOL64" fi { $as_echo "$as_me:${as_lineno-$LINENO}: checking for -single_module linker flag" >&5 $as_echo_n "checking for -single_module linker flag... " >&6; } if ${lt_cv_apple_cc_single_mod+:} false; then : $as_echo_n "(cached) " >&6 else lt_cv_apple_cc_single_mod=no if test -z "${LT_MULTI_MODULE}"; then # By default we will add the -single_module flag. You can override # by either setting the environment variable LT_MULTI_MODULE # non-empty at configure time, or by adding -multi_module to the # link flags. rm -rf libconftest.dylib* echo "int foo(void){return 1;}" > conftest.c echo "$LTCC $LTCFLAGS $LDFLAGS -o libconftest.dylib \ -dynamiclib -Wl,-single_module conftest.c" >&5 $LTCC $LTCFLAGS $LDFLAGS -o libconftest.dylib \ -dynamiclib -Wl,-single_module conftest.c 2>conftest.err _lt_result=$? # If there is a non-empty error log, and "single_module" # appears in it, assume the flag caused a linker warning if test -s conftest.err && $GREP single_module conftest.err; then cat conftest.err >&5 # Otherwise, if the output was created with a 0 exit code from # the compiler, it worked. 
elif test -f libconftest.dylib && test $_lt_result -eq 0; then lt_cv_apple_cc_single_mod=yes else cat conftest.err >&5 fi rm -rf libconftest.dylib* rm -f conftest.* fi fi { $as_echo "$as_me:${as_lineno-$LINENO}: result: $lt_cv_apple_cc_single_mod" >&5 $as_echo "$lt_cv_apple_cc_single_mod" >&6; } { $as_echo "$as_me:${as_lineno-$LINENO}: checking for -exported_symbols_list linker flag" >&5 $as_echo_n "checking for -exported_symbols_list linker flag... " >&6; } if ${lt_cv_ld_exported_symbols_list+:} false; then : $as_echo_n "(cached) " >&6 else lt_cv_ld_exported_symbols_list=no save_LDFLAGS=$LDFLAGS echo "_main" > conftest.sym LDFLAGS="$LDFLAGS -Wl,-exported_symbols_list,conftest.sym" cat confdefs.h - <<_ACEOF >conftest.$ac_ext /* end confdefs.h. */ int main () { ; return 0; } _ACEOF if ac_fn_c_try_link "$LINENO"; then : lt_cv_ld_exported_symbols_list=yes else lt_cv_ld_exported_symbols_list=no fi rm -f core conftest.err conftest.$ac_objext \ conftest$ac_exeext conftest.$ac_ext LDFLAGS="$save_LDFLAGS" fi { $as_echo "$as_me:${as_lineno-$LINENO}: result: $lt_cv_ld_exported_symbols_list" >&5 $as_echo "$lt_cv_ld_exported_symbols_list" >&6; } { $as_echo "$as_me:${as_lineno-$LINENO}: checking for -force_load linker flag" >&5 $as_echo_n "checking for -force_load linker flag... " >&6; } if ${lt_cv_ld_force_load+:} false; then : $as_echo_n "(cached) " >&6 else lt_cv_ld_force_load=no cat > conftest.c << _LT_EOF int forced_loaded() { return 2;} _LT_EOF echo "$LTCC $LTCFLAGS -c -o conftest.o conftest.c" >&5 $LTCC $LTCFLAGS -c -o conftest.o conftest.c 2>&5 echo "$AR cru libconftest.a conftest.o" >&5 $AR cru libconftest.a conftest.o 2>&5 echo "$RANLIB libconftest.a" >&5 $RANLIB libconftest.a 2>&5 cat > conftest.c << _LT_EOF int main() { return 0;} _LT_EOF echo "$LTCC $LTCFLAGS $LDFLAGS -o conftest conftest.c -Wl,-force_load,./libconftest.a" >&5 $LTCC $LTCFLAGS $LDFLAGS -o conftest conftest.c -Wl,-force_load,./libconftest.a 2>conftest.err _lt_result=$? 
if test -s conftest.err && $GREP force_load conftest.err; then cat conftest.err >&5 elif test -f conftest && test $_lt_result -eq 0 && $GREP forced_load conftest >/dev/null 2>&1 ; then lt_cv_ld_force_load=yes else cat conftest.err >&5 fi rm -f conftest.err libconftest.a conftest conftest.c rm -rf conftest.dSYM fi { $as_echo "$as_me:${as_lineno-$LINENO}: result: $lt_cv_ld_force_load" >&5 $as_echo "$lt_cv_ld_force_load" >&6; } case $host_os in rhapsody* | darwin1.[012]) _lt_dar_allow_undefined='${wl}-undefined ${wl}suppress' ;; darwin1.*) _lt_dar_allow_undefined='${wl}-flat_namespace ${wl}-undefined ${wl}suppress' ;; darwin*) # darwin 5.x on # if running on 10.5 or later, the deployment target defaults # to the OS version, if on x86, and 10.4, the deployment # target defaults to 10.4. Don't you love it? case ${MACOSX_DEPLOYMENT_TARGET-10.0},$host in 10.0,*86*-darwin8*|10.0,*-darwin[91]*) _lt_dar_allow_undefined='${wl}-undefined ${wl}dynamic_lookup' ;; 10.[012]*) _lt_dar_allow_undefined='${wl}-flat_namespace ${wl}-undefined ${wl}suppress' ;; 10.*) _lt_dar_allow_undefined='${wl}-undefined ${wl}dynamic_lookup' ;; esac ;; esac if test "$lt_cv_apple_cc_single_mod" = "yes"; then _lt_dar_single_mod='$single_module' fi if test "$lt_cv_ld_exported_symbols_list" = "yes"; then _lt_dar_export_syms=' ${wl}-exported_symbols_list,$output_objdir/${libname}-symbols.expsym' else _lt_dar_export_syms='~$NMEDIT -s $output_objdir/${libname}-symbols.expsym ${lib}' fi if test "$DSYMUTIL" != ":" && test "$lt_cv_ld_force_load" = "no"; then _lt_dsymutil='~$DSYMUTIL $lib || :' else _lt_dsymutil= fi ;; esac for ac_header in dlfcn.h do : ac_fn_c_check_header_compile "$LINENO" "dlfcn.h" "ac_cv_header_dlfcn_h" "$ac_includes_default " if test "x$ac_cv_header_dlfcn_h" = xyes; then : cat >>confdefs.h <<_ACEOF #define HAVE_DLFCN_H 1 _ACEOF fi done func_stripname_cnf () { case ${2} in .*) func_stripname_result=`$ECHO "${3}" | $SED "s%^${1}%%; s%\\\\${2}\$%%"`;; *) func_stripname_result=`$ECHO "${3}" | 
$SED "s%^${1}%%; s%${2}\$%%"`;; esac } # func_stripname_cnf # Set options enable_dlopen=no enable_win32_dll=no # Check whether --enable-shared was given. if test "${enable_shared+set}" = set; then : enableval=$enable_shared; p=${PACKAGE-default} case $enableval in yes) enable_shared=yes ;; no) enable_shared=no ;; *) enable_shared=no # Look at the argument we got. We use all the common list separators. lt_save_ifs="$IFS"; IFS="${IFS}$PATH_SEPARATOR," for pkg in $enableval; do IFS="$lt_save_ifs" if test "X$pkg" = "X$p"; then enable_shared=yes fi done IFS="$lt_save_ifs" ;; esac else enable_shared=yes fi # Check whether --enable-static was given. if test "${enable_static+set}" = set; then : enableval=$enable_static; p=${PACKAGE-default} case $enableval in yes) enable_static=yes ;; no) enable_static=no ;; *) enable_static=no # Look at the argument we got. We use all the common list separators. lt_save_ifs="$IFS"; IFS="${IFS}$PATH_SEPARATOR," for pkg in $enableval; do IFS="$lt_save_ifs" if test "X$pkg" = "X$p"; then enable_static=yes fi done IFS="$lt_save_ifs" ;; esac else enable_static=yes fi # Check whether --with-pic was given. if test "${with_pic+set}" = set; then : withval=$with_pic; lt_p=${PACKAGE-default} case $withval in yes|no) pic_mode=$withval ;; *) pic_mode=default # Look at the argument we got. We use all the common list separators. lt_save_ifs="$IFS"; IFS="${IFS}$PATH_SEPARATOR," for lt_pkg in $withval; do IFS="$lt_save_ifs" if test "X$lt_pkg" = "X$lt_p"; then pic_mode=yes fi done IFS="$lt_save_ifs" ;; esac else pic_mode=default fi test -z "$pic_mode" && pic_mode=default # Check whether --enable-fast-install was given. if test "${enable_fast_install+set}" = set; then : enableval=$enable_fast_install; p=${PACKAGE-default} case $enableval in yes) enable_fast_install=yes ;; no) enable_fast_install=no ;; *) enable_fast_install=no # Look at the argument we got. We use all the common list separators. 
lt_save_ifs="$IFS"; IFS="${IFS}$PATH_SEPARATOR," for pkg in $enableval; do IFS="$lt_save_ifs" if test "X$pkg" = "X$p"; then enable_fast_install=yes fi done IFS="$lt_save_ifs" ;; esac else enable_fast_install=yes fi # This can be used to rebuild libtool when needed LIBTOOL_DEPS="$ltmain" # Always use our own libtool. LIBTOOL='$(SHELL) $(top_builddir)/libtool' test -z "$LN_S" && LN_S="ln -s" if test -n "${ZSH_VERSION+set}" ; then setopt NO_GLOB_SUBST fi { $as_echo "$as_me:${as_lineno-$LINENO}: checking for objdir" >&5 $as_echo_n "checking for objdir... " >&6; } if ${lt_cv_objdir+:} false; then : $as_echo_n "(cached) " >&6 else rm -f .libs 2>/dev/null mkdir .libs 2>/dev/null if test -d .libs; then lt_cv_objdir=.libs else # MS-DOS does not allow filenames that begin with a dot. lt_cv_objdir=_libs fi rmdir .libs 2>/dev/null fi { $as_echo "$as_me:${as_lineno-$LINENO}: result: $lt_cv_objdir" >&5 $as_echo "$lt_cv_objdir" >&6; } objdir=$lt_cv_objdir cat >>confdefs.h <<_ACEOF #define LT_OBJDIR "$lt_cv_objdir/" _ACEOF case $host_os in aix3*) # AIX sometimes has problems with the GCC collect2 program. For some # reason, if we set the COLLECT_NAMES environment variable, the problems # vanish in a puff of smoke. if test "X${COLLECT_NAMES+set}" != Xset; then COLLECT_NAMES= export COLLECT_NAMES fi ;; esac # Global variables: ofile=libtool can_build_shared=yes # All known linkers require a `.a' archive for static linking (except MSVC, # which needs '.lib'). 
libext=a with_gnu_ld="$lt_cv_prog_gnu_ld" old_CC="$CC" old_CFLAGS="$CFLAGS" # Set sane defaults for various variables test -z "$CC" && CC=cc test -z "$LTCC" && LTCC=$CC test -z "$LTCFLAGS" && LTCFLAGS=$CFLAGS test -z "$LD" && LD=ld test -z "$ac_objext" && ac_objext=o for cc_temp in $compiler""; do case $cc_temp in compile | *[\\/]compile | ccache | *[\\/]ccache ) ;; distcc | *[\\/]distcc | purify | *[\\/]purify ) ;; \-*) ;; *) break;; esac done cc_basename=`$ECHO "$cc_temp" | $SED "s%.*/%%; s%^$host_alias-%%"` # Only perform the check for file, if the check method requires it test -z "$MAGIC_CMD" && MAGIC_CMD=file case $deplibs_check_method in file_magic*) if test "$file_magic_cmd" = '$MAGIC_CMD'; then { $as_echo "$as_me:${as_lineno-$LINENO}: checking for ${ac_tool_prefix}file" >&5 $as_echo_n "checking for ${ac_tool_prefix}file... " >&6; } if ${lt_cv_path_MAGIC_CMD+:} false; then : $as_echo_n "(cached) " >&6 else case $MAGIC_CMD in [\\/*] | ?:[\\/]*) lt_cv_path_MAGIC_CMD="$MAGIC_CMD" # Let the user override the test with a path. ;; *) lt_save_MAGIC_CMD="$MAGIC_CMD" lt_save_ifs="$IFS"; IFS=$PATH_SEPARATOR ac_dummy="/usr/bin$PATH_SEPARATOR$PATH" for ac_dir in $ac_dummy; do IFS="$lt_save_ifs" test -z "$ac_dir" && ac_dir=. if test -f $ac_dir/${ac_tool_prefix}file; then lt_cv_path_MAGIC_CMD="$ac_dir/${ac_tool_prefix}file" if test -n "$file_magic_test_file"; then case $deplibs_check_method in "file_magic "*) file_magic_regex=`expr "$deplibs_check_method" : "file_magic \(.*\)"` MAGIC_CMD="$lt_cv_path_MAGIC_CMD" if eval $file_magic_cmd \$file_magic_test_file 2> /dev/null | $EGREP "$file_magic_regex" > /dev/null; then : else cat <<_LT_EOF 1>&2 *** Warning: the command libtool uses to detect shared libraries, *** $file_magic_cmd, produces output that libtool cannot recognize. *** The result is that libtool may fail to recognize shared libraries *** as such. 
This will affect the creation of libtool libraries that *** depend on shared libraries, but programs linked with such libtool *** libraries will work regardless of this problem. Nevertheless, you *** may want to report the problem to your system manager and/or to *** bug-libtool@gnu.org _LT_EOF fi ;; esac fi break fi done IFS="$lt_save_ifs" MAGIC_CMD="$lt_save_MAGIC_CMD" ;; esac fi MAGIC_CMD="$lt_cv_path_MAGIC_CMD" if test -n "$MAGIC_CMD"; then { $as_echo "$as_me:${as_lineno-$LINENO}: result: $MAGIC_CMD" >&5 $as_echo "$MAGIC_CMD" >&6; } else { $as_echo "$as_me:${as_lineno-$LINENO}: result: no" >&5 $as_echo "no" >&6; } fi if test -z "$lt_cv_path_MAGIC_CMD"; then if test -n "$ac_tool_prefix"; then { $as_echo "$as_me:${as_lineno-$LINENO}: checking for file" >&5 $as_echo_n "checking for file... " >&6; } if ${lt_cv_path_MAGIC_CMD+:} false; then : $as_echo_n "(cached) " >&6 else case $MAGIC_CMD in [\\/*] | ?:[\\/]*) lt_cv_path_MAGIC_CMD="$MAGIC_CMD" # Let the user override the test with a path. ;; *) lt_save_MAGIC_CMD="$MAGIC_CMD" lt_save_ifs="$IFS"; IFS=$PATH_SEPARATOR ac_dummy="/usr/bin$PATH_SEPARATOR$PATH" for ac_dir in $ac_dummy; do IFS="$lt_save_ifs" test -z "$ac_dir" && ac_dir=. if test -f $ac_dir/file; then lt_cv_path_MAGIC_CMD="$ac_dir/file" if test -n "$file_magic_test_file"; then case $deplibs_check_method in "file_magic "*) file_magic_regex=`expr "$deplibs_check_method" : "file_magic \(.*\)"` MAGIC_CMD="$lt_cv_path_MAGIC_CMD" if eval $file_magic_cmd \$file_magic_test_file 2> /dev/null | $EGREP "$file_magic_regex" > /dev/null; then : else cat <<_LT_EOF 1>&2 *** Warning: the command libtool uses to detect shared libraries, *** $file_magic_cmd, produces output that libtool cannot recognize. *** The result is that libtool may fail to recognize shared libraries *** as such. This will affect the creation of libtool libraries that *** depend on shared libraries, but programs linked with such libtool *** libraries will work regardless of this problem. 
Nevertheless, you *** may want to report the problem to your system manager and/or to *** bug-libtool@gnu.org _LT_EOF fi ;; esac fi break fi done IFS="$lt_save_ifs" MAGIC_CMD="$lt_save_MAGIC_CMD" ;; esac fi MAGIC_CMD="$lt_cv_path_MAGIC_CMD" if test -n "$MAGIC_CMD"; then { $as_echo "$as_me:${as_lineno-$LINENO}: result: $MAGIC_CMD" >&5 $as_echo "$MAGIC_CMD" >&6; } else { $as_echo "$as_me:${as_lineno-$LINENO}: result: no" >&5 $as_echo "no" >&6; } fi else MAGIC_CMD=: fi fi fi ;; esac # Use C for the default configuration in the libtool script lt_save_CC="$CC" ac_ext=c ac_cpp='$CPP $CPPFLAGS' ac_compile='$CC -c $CFLAGS $CPPFLAGS conftest.$ac_ext >&5' ac_link='$CC -o conftest$ac_exeext $CFLAGS $CPPFLAGS $LDFLAGS conftest.$ac_ext $LIBS >&5' ac_compiler_gnu=$ac_cv_c_compiler_gnu # Source file extension for C test sources. ac_ext=c # Object file extension for compiled C test sources. objext=o objext=$objext # Code to be used in simple compile tests lt_simple_compile_test_code="int some_variable = 0;" # Code to be used in simple link tests lt_simple_link_test_code='int main(){return(0);}' # If no C compiler was specified, use CC. LTCC=${LTCC-"$CC"} # If no C compiler flags were specified, use CFLAGS. LTCFLAGS=${LTCFLAGS-"$CFLAGS"} # Allow CC to be a program name with arguments. compiler=$CC # Save the default compiler, since it gets overwritten when the other # tags are being tested, and _LT_TAGVAR(compiler, []) is a NOP. 
compiler_DEFAULT=$CC # save warnings/boilerplate of simple test code ac_outfile=conftest.$ac_objext echo "$lt_simple_compile_test_code" >conftest.$ac_ext eval "$ac_compile" 2>&1 >/dev/null | $SED '/^$/d; /^ *+/d' >conftest.err _lt_compiler_boilerplate=`cat conftest.err` $RM conftest* ac_outfile=conftest.$ac_objext echo "$lt_simple_link_test_code" >conftest.$ac_ext eval "$ac_link" 2>&1 >/dev/null | $SED '/^$/d; /^ *+/d' >conftest.err _lt_linker_boilerplate=`cat conftest.err` $RM -r conftest* ## CAVEAT EMPTOR: ## There is no encapsulation within the following macros, do not change ## the running order or otherwise move them around unless you know exactly ## what you are doing... if test -n "$compiler"; then lt_prog_compiler_no_builtin_flag= if test "$GCC" = yes; then case $cc_basename in nvcc*) lt_prog_compiler_no_builtin_flag=' -Xcompiler -fno-builtin' ;; *) lt_prog_compiler_no_builtin_flag=' -fno-builtin' ;; esac { $as_echo "$as_me:${as_lineno-$LINENO}: checking if $compiler supports -fno-rtti -fno-exceptions" >&5 $as_echo_n "checking if $compiler supports -fno-rtti -fno-exceptions... " >&6; } if ${lt_cv_prog_compiler_rtti_exceptions+:} false; then : $as_echo_n "(cached) " >&6 else lt_cv_prog_compiler_rtti_exceptions=no ac_outfile=conftest.$ac_objext echo "$lt_simple_compile_test_code" > conftest.$ac_ext lt_compiler_flag="-fno-rtti -fno-exceptions" # Insert the option either (1) after the last *FLAGS variable, or # (2) before a word containing "conftest.", or (3) at the end. # Note that $ac_compile itself does not contain backslashes and begins # with a dollar sign (not a hyphen), so the echo should work correctly. # The option is referenced via a variable to avoid confusing sed. lt_compile=`echo "$ac_compile" | $SED \ -e 's:.*FLAGS}\{0,1\} :&$lt_compiler_flag :; t' \ -e 's: [^ ]*conftest\.: $lt_compiler_flag&:; t' \ -e 's:$: $lt_compiler_flag:'` (eval echo "\"\$as_me:$LINENO: $lt_compile\"" >&5) (eval "$lt_compile" 2>conftest.err) ac_status=$? 
cat conftest.err >&5 echo "$as_me:$LINENO: \$? = $ac_status" >&5 if (exit $ac_status) && test -s "$ac_outfile"; then # The compiler can only warn and ignore the option if not recognized # So say no if there are warnings other than the usual output. $ECHO "$_lt_compiler_boilerplate" | $SED '/^$/d' >conftest.exp $SED '/^$/d; /^ *+/d' conftest.err >conftest.er2 if test ! -s conftest.er2 || diff conftest.exp conftest.er2 >/dev/null; then lt_cv_prog_compiler_rtti_exceptions=yes fi fi $RM conftest* fi { $as_echo "$as_me:${as_lineno-$LINENO}: result: $lt_cv_prog_compiler_rtti_exceptions" >&5 $as_echo "$lt_cv_prog_compiler_rtti_exceptions" >&6; } if test x"$lt_cv_prog_compiler_rtti_exceptions" = xyes; then lt_prog_compiler_no_builtin_flag="$lt_prog_compiler_no_builtin_flag -fno-rtti -fno-exceptions" else : fi fi lt_prog_compiler_wl= lt_prog_compiler_pic= lt_prog_compiler_static= if test "$GCC" = yes; then lt_prog_compiler_wl='-Wl,' lt_prog_compiler_static='-static' case $host_os in aix*) # All AIX code is PIC. if test "$host_cpu" = ia64; then # AIX 5 now supports IA64 processor lt_prog_compiler_static='-Bstatic' fi ;; amigaos*) case $host_cpu in powerpc) # see comment about AmigaOS4 .so support lt_prog_compiler_pic='-fPIC' ;; m68k) # FIXME: we need at least 68020 code to build shared libraries, but # adding the `-m68020' flag to GCC prevents building anything better, # like `-m68040'. lt_prog_compiler_pic='-m68020 -resident32 -malways-restore-a4' ;; esac ;; beos* | irix5* | irix6* | nonstopux* | osf3* | osf4* | osf5*) # PIC is the default for these OSes. ;; mingw* | cygwin* | pw32* | os2* | cegcc*) # This hack is so that the source file can tell whether it is being # built for inclusion in a dll (and should export symbols for example). 
# Although the cygwin gcc ignores -fPIC, still need this for old-style # (--disable-auto-import) libraries lt_prog_compiler_pic='-DDLL_EXPORT' ;; darwin* | rhapsody*) # PIC is the default on this platform # Common symbols not allowed in MH_DYLIB files lt_prog_compiler_pic='-fno-common' ;; haiku*) # PIC is the default for Haiku. # The "-static" flag exists, but is broken. lt_prog_compiler_static= ;; hpux*) # PIC is the default for 64-bit PA HP-UX, but not for 32-bit # PA HP-UX. On IA64 HP-UX, PIC is the default but the pic flag # sets the default TLS model and affects inlining. case $host_cpu in hppa*64*) # +Z the default ;; *) lt_prog_compiler_pic='-fPIC' ;; esac ;; interix[3-9]*) # Interix 3.x gcc -fpic/-fPIC options generate broken code. # Instead, we relocate shared libraries at runtime. ;; msdosdjgpp*) # Just because we use GCC doesn't mean we suddenly get shared libraries # on systems that don't support them. lt_prog_compiler_can_build_shared=no enable_shared=no ;; *nto* | *qnx*) # QNX uses GNU C++, but need to define -shared option too, otherwise # it will coredump. lt_prog_compiler_pic='-fPIC -shared' ;; sysv4*MP*) if test -d /usr/nec; then lt_prog_compiler_pic=-Kconform_pic fi ;; *) lt_prog_compiler_pic='-fPIC' ;; esac case $cc_basename in nvcc*) # Cuda Compiler Driver 2.2 lt_prog_compiler_wl='-Xlinker ' if test -n "$lt_prog_compiler_pic"; then lt_prog_compiler_pic="-Xcompiler $lt_prog_compiler_pic" fi ;; esac else # PORTME Check for flag to pass linker flags through the system compiler. case $host_os in aix*) lt_prog_compiler_wl='-Wl,' if test "$host_cpu" = ia64; then # AIX 5 now supports IA64 processor lt_prog_compiler_static='-Bstatic' else lt_prog_compiler_static='-bnso -bI:/lib/syscalls.exp' fi ;; mingw* | cygwin* | pw32* | os2* | cegcc*) # This hack is so that the source file can tell whether it is being # built for inclusion in a dll (and should export symbols for example). 
lt_prog_compiler_pic='-DDLL_EXPORT' ;; hpux9* | hpux10* | hpux11*) lt_prog_compiler_wl='-Wl,' # PIC is the default for IA64 HP-UX and 64-bit HP-UX, but # not for PA HP-UX. case $host_cpu in hppa*64*|ia64*) # +Z the default ;; *) lt_prog_compiler_pic='+Z' ;; esac # Is there a better lt_prog_compiler_static that works with the bundled CC? lt_prog_compiler_static='${wl}-a ${wl}archive' ;; irix5* | irix6* | nonstopux*) lt_prog_compiler_wl='-Wl,' # PIC (with -KPIC) is the default. lt_prog_compiler_static='-non_shared' ;; linux* | k*bsd*-gnu | kopensolaris*-gnu | gnu*) case $cc_basename in # old Intel for x86_64 which still supported -KPIC. ecc*) lt_prog_compiler_wl='-Wl,' lt_prog_compiler_pic='-KPIC' lt_prog_compiler_static='-static' ;; # icc used to be incompatible with GCC. # ICC 10 doesn't accept -KPIC any more. icc* | ifort*) lt_prog_compiler_wl='-Wl,' lt_prog_compiler_pic='-fPIC' lt_prog_compiler_static='-static' ;; # Lahey Fortran 8.1. lf95*) lt_prog_compiler_wl='-Wl,' lt_prog_compiler_pic='--shared' lt_prog_compiler_static='--static' ;; nagfor*) # NAG Fortran compiler lt_prog_compiler_wl='-Wl,-Wl,,' lt_prog_compiler_pic='-PIC' lt_prog_compiler_static='-Bstatic' ;; pgcc* | pgf77* | pgf90* | pgf95* | pgfortran*) # Portland Group compilers (*not* the Pentium gcc compiler, # which looks to be a dead project) lt_prog_compiler_wl='-Wl,' lt_prog_compiler_pic='-fpic' lt_prog_compiler_static='-Bstatic' ;; ccc*) lt_prog_compiler_wl='-Wl,' # All Alpha code is PIC. 
lt_prog_compiler_static='-non_shared' ;; xl* | bgxl* | bgf* | mpixl*) # IBM XL C 8.0/Fortran 10.1, 11.1 on PPC and BlueGene lt_prog_compiler_wl='-Wl,' lt_prog_compiler_pic='-qpic' lt_prog_compiler_static='-qstaticlink' ;; *) case `$CC -V 2>&1 | sed 5q` in *Sun\ Ceres\ Fortran* | *Sun*Fortran*\ [1-7].* | *Sun*Fortran*\ 8.[0-3]*) # Sun Fortran 8.3 passes all unrecognized flags to the linker lt_prog_compiler_pic='-KPIC' lt_prog_compiler_static='-Bstatic' lt_prog_compiler_wl='' ;; *Sun\ F* | *Sun*Fortran*) lt_prog_compiler_pic='-KPIC' lt_prog_compiler_static='-Bstatic' lt_prog_compiler_wl='-Qoption ld ' ;; *Sun\ C*) # Sun C 5.9 lt_prog_compiler_pic='-KPIC' lt_prog_compiler_static='-Bstatic' lt_prog_compiler_wl='-Wl,' ;; *Intel*\ [CF]*Compiler*) lt_prog_compiler_wl='-Wl,' lt_prog_compiler_pic='-fPIC' lt_prog_compiler_static='-static' ;; *Portland\ Group*) lt_prog_compiler_wl='-Wl,' lt_prog_compiler_pic='-fpic' lt_prog_compiler_static='-Bstatic' ;; esac ;; esac ;; newsos6) lt_prog_compiler_pic='-KPIC' lt_prog_compiler_static='-Bstatic' ;; *nto* | *qnx*) # QNX uses GNU C++, but need to define -shared option too, otherwise # it will coredump. lt_prog_compiler_pic='-fPIC -shared' ;; osf3* | osf4* | osf5*) lt_prog_compiler_wl='-Wl,' # All OSF/1 code is PIC. 
lt_prog_compiler_static='-non_shared' ;; rdos*) lt_prog_compiler_static='-non_shared' ;; solaris*) lt_prog_compiler_pic='-KPIC' lt_prog_compiler_static='-Bstatic' case $cc_basename in f77* | f90* | f95* | sunf77* | sunf90* | sunf95*) lt_prog_compiler_wl='-Qoption ld ';; *) lt_prog_compiler_wl='-Wl,';; esac ;; sunos4*) lt_prog_compiler_wl='-Qoption ld ' lt_prog_compiler_pic='-PIC' lt_prog_compiler_static='-Bstatic' ;; sysv4 | sysv4.2uw2* | sysv4.3*) lt_prog_compiler_wl='-Wl,' lt_prog_compiler_pic='-KPIC' lt_prog_compiler_static='-Bstatic' ;; sysv4*MP*) if test -d /usr/nec ;then lt_prog_compiler_pic='-Kconform_pic' lt_prog_compiler_static='-Bstatic' fi ;; sysv5* | unixware* | sco3.2v5* | sco5v6* | OpenUNIX*) lt_prog_compiler_wl='-Wl,' lt_prog_compiler_pic='-KPIC' lt_prog_compiler_static='-Bstatic' ;; unicos*) lt_prog_compiler_wl='-Wl,' lt_prog_compiler_can_build_shared=no ;; uts4*) lt_prog_compiler_pic='-pic' lt_prog_compiler_static='-Bstatic' ;; *) lt_prog_compiler_can_build_shared=no ;; esac fi case $host_os in # For platforms which do not support PIC, -DPIC is meaningless: *djgpp*) lt_prog_compiler_pic= ;; *) lt_prog_compiler_pic="$lt_prog_compiler_pic -DPIC" ;; esac { $as_echo "$as_me:${as_lineno-$LINENO}: checking for $compiler option to produce PIC" >&5 $as_echo_n "checking for $compiler option to produce PIC... " >&6; } if ${lt_cv_prog_compiler_pic+:} false; then : $as_echo_n "(cached) " >&6 else lt_cv_prog_compiler_pic=$lt_prog_compiler_pic fi { $as_echo "$as_me:${as_lineno-$LINENO}: result: $lt_cv_prog_compiler_pic" >&5 $as_echo "$lt_cv_prog_compiler_pic" >&6; } lt_prog_compiler_pic=$lt_cv_prog_compiler_pic # # Check to make sure the PIC flag actually works. # if test -n "$lt_prog_compiler_pic"; then { $as_echo "$as_me:${as_lineno-$LINENO}: checking if $compiler PIC flag $lt_prog_compiler_pic works" >&5 $as_echo_n "checking if $compiler PIC flag $lt_prog_compiler_pic works... 
" >&6; } if ${lt_cv_prog_compiler_pic_works+:} false; then : $as_echo_n "(cached) " >&6 else lt_cv_prog_compiler_pic_works=no ac_outfile=conftest.$ac_objext echo "$lt_simple_compile_test_code" > conftest.$ac_ext lt_compiler_flag="$lt_prog_compiler_pic -DPIC" # Insert the option either (1) after the last *FLAGS variable, or # (2) before a word containing "conftest.", or (3) at the end. # Note that $ac_compile itself does not contain backslashes and begins # with a dollar sign (not a hyphen), so the echo should work correctly. # The option is referenced via a variable to avoid confusing sed. lt_compile=`echo "$ac_compile" | $SED \ -e 's:.*FLAGS}\{0,1\} :&$lt_compiler_flag :; t' \ -e 's: [^ ]*conftest\.: $lt_compiler_flag&:; t' \ -e 's:$: $lt_compiler_flag:'` (eval echo "\"\$as_me:$LINENO: $lt_compile\"" >&5) (eval "$lt_compile" 2>conftest.err) ac_status=$? cat conftest.err >&5 echo "$as_me:$LINENO: \$? = $ac_status" >&5 if (exit $ac_status) && test -s "$ac_outfile"; then # The compiler can only warn and ignore the option if not recognized # So say no if there are warnings other than the usual output. $ECHO "$_lt_compiler_boilerplate" | $SED '/^$/d' >conftest.exp $SED '/^$/d; /^ *+/d' conftest.err >conftest.er2 if test ! -s conftest.er2 || diff conftest.exp conftest.er2 >/dev/null; then lt_cv_prog_compiler_pic_works=yes fi fi $RM conftest* fi { $as_echo "$as_me:${as_lineno-$LINENO}: result: $lt_cv_prog_compiler_pic_works" >&5 $as_echo "$lt_cv_prog_compiler_pic_works" >&6; } if test x"$lt_cv_prog_compiler_pic_works" = xyes; then case $lt_prog_compiler_pic in "" | " "*) ;; *) lt_prog_compiler_pic=" $lt_prog_compiler_pic" ;; esac else lt_prog_compiler_pic= lt_prog_compiler_can_build_shared=no fi fi # # Check to make sure the static flag actually works. 
# wl=$lt_prog_compiler_wl eval lt_tmp_static_flag=\"$lt_prog_compiler_static\" { $as_echo "$as_me:${as_lineno-$LINENO}: checking if $compiler static flag $lt_tmp_static_flag works" >&5 $as_echo_n "checking if $compiler static flag $lt_tmp_static_flag works... " >&6; } if ${lt_cv_prog_compiler_static_works+:} false; then : $as_echo_n "(cached) " >&6 else lt_cv_prog_compiler_static_works=no save_LDFLAGS="$LDFLAGS" LDFLAGS="$LDFLAGS $lt_tmp_static_flag" echo "$lt_simple_link_test_code" > conftest.$ac_ext if (eval $ac_link 2>conftest.err) && test -s conftest$ac_exeext; then # The linker can only warn and ignore the option if not recognized # So say no if there are warnings if test -s conftest.err; then # Append any errors to the config.log. cat conftest.err 1>&5 $ECHO "$_lt_linker_boilerplate" | $SED '/^$/d' > conftest.exp $SED '/^$/d; /^ *+/d' conftest.err >conftest.er2 if diff conftest.exp conftest.er2 >/dev/null; then lt_cv_prog_compiler_static_works=yes fi else lt_cv_prog_compiler_static_works=yes fi fi $RM -r conftest* LDFLAGS="$save_LDFLAGS" fi { $as_echo "$as_me:${as_lineno-$LINENO}: result: $lt_cv_prog_compiler_static_works" >&5 $as_echo "$lt_cv_prog_compiler_static_works" >&6; } if test x"$lt_cv_prog_compiler_static_works" = xyes; then : else lt_prog_compiler_static= fi { $as_echo "$as_me:${as_lineno-$LINENO}: checking if $compiler supports -c -o file.$ac_objext" >&5 $as_echo_n "checking if $compiler supports -c -o file.$ac_objext... " >&6; } if ${lt_cv_prog_compiler_c_o+:} false; then : $as_echo_n "(cached) " >&6 else lt_cv_prog_compiler_c_o=no $RM -r conftest 2>/dev/null mkdir conftest cd conftest mkdir out echo "$lt_simple_compile_test_code" > conftest.$ac_ext lt_compiler_flag="-o out/conftest2.$ac_objext" # Insert the option either (1) after the last *FLAGS variable, or # (2) before a word containing "conftest.", or (3) at the end. 
# Note that $ac_compile itself does not contain backslashes and begins # with a dollar sign (not a hyphen), so the echo should work correctly. lt_compile=`echo "$ac_compile" | $SED \ -e 's:.*FLAGS}\{0,1\} :&$lt_compiler_flag :; t' \ -e 's: [^ ]*conftest\.: $lt_compiler_flag&:; t' \ -e 's:$: $lt_compiler_flag:'` (eval echo "\"\$as_me:$LINENO: $lt_compile\"" >&5) (eval "$lt_compile" 2>out/conftest.err) ac_status=$? cat out/conftest.err >&5 echo "$as_me:$LINENO: \$? = $ac_status" >&5 if (exit $ac_status) && test -s out/conftest2.$ac_objext then # The compiler can only warn and ignore the option if not recognized # So say no if there are warnings $ECHO "$_lt_compiler_boilerplate" | $SED '/^$/d' > out/conftest.exp $SED '/^$/d; /^ *+/d' out/conftest.err >out/conftest.er2 if test ! -s out/conftest.er2 || diff out/conftest.exp out/conftest.er2 >/dev/null; then lt_cv_prog_compiler_c_o=yes fi fi chmod u+w . 2>&5 $RM conftest* # SGI C++ compiler will create directory out/ii_files/ for # template instantiation test -d out/ii_files && $RM out/ii_files/* && rmdir out/ii_files $RM out/* && rmdir out cd .. $RM -r conftest $RM conftest* fi { $as_echo "$as_me:${as_lineno-$LINENO}: result: $lt_cv_prog_compiler_c_o" >&5 $as_echo "$lt_cv_prog_compiler_c_o" >&6; } { $as_echo "$as_me:${as_lineno-$LINENO}: checking if $compiler supports -c -o file.$ac_objext" >&5 $as_echo_n "checking if $compiler supports -c -o file.$ac_objext... " >&6; } if ${lt_cv_prog_compiler_c_o+:} false; then : $as_echo_n "(cached) " >&6 else lt_cv_prog_compiler_c_o=no $RM -r conftest 2>/dev/null mkdir conftest cd conftest mkdir out echo "$lt_simple_compile_test_code" > conftest.$ac_ext lt_compiler_flag="-o out/conftest2.$ac_objext" # Insert the option either (1) after the last *FLAGS variable, or # (2) before a word containing "conftest.", or (3) at the end. # Note that $ac_compile itself does not contain backslashes and begins # with a dollar sign (not a hyphen), so the echo should work correctly. 
lt_compile=`echo "$ac_compile" | $SED \ -e 's:.*FLAGS}\{0,1\} :&$lt_compiler_flag :; t' \ -e 's: [^ ]*conftest\.: $lt_compiler_flag&:; t' \ -e 's:$: $lt_compiler_flag:'` (eval echo "\"\$as_me:$LINENO: $lt_compile\"" >&5) (eval "$lt_compile" 2>out/conftest.err) ac_status=$? cat out/conftest.err >&5 echo "$as_me:$LINENO: \$? = $ac_status" >&5 if (exit $ac_status) && test -s out/conftest2.$ac_objext then # The compiler can only warn and ignore the option if not recognized # So say no if there are warnings $ECHO "$_lt_compiler_boilerplate" | $SED '/^$/d' > out/conftest.exp $SED '/^$/d; /^ *+/d' out/conftest.err >out/conftest.er2 if test ! -s out/conftest.er2 || diff out/conftest.exp out/conftest.er2 >/dev/null; then lt_cv_prog_compiler_c_o=yes fi fi chmod u+w . 2>&5 $RM conftest* # SGI C++ compiler will create directory out/ii_files/ for # template instantiation test -d out/ii_files && $RM out/ii_files/* && rmdir out/ii_files $RM out/* && rmdir out cd .. $RM -r conftest $RM conftest* fi { $as_echo "$as_me:${as_lineno-$LINENO}: result: $lt_cv_prog_compiler_c_o" >&5 $as_echo "$lt_cv_prog_compiler_c_o" >&6; } hard_links="nottested" if test "$lt_cv_prog_compiler_c_o" = no && test "$need_locks" != no; then # do not overwrite the value of need_locks provided by the user { $as_echo "$as_me:${as_lineno-$LINENO}: checking if we can lock with hard links" >&5 $as_echo_n "checking if we can lock with hard links... 
" >&6; } hard_links=yes $RM conftest* ln conftest.a conftest.b 2>/dev/null && hard_links=no touch conftest.a ln conftest.a conftest.b 2>&5 || hard_links=no ln conftest.a conftest.b 2>/dev/null && hard_links=no { $as_echo "$as_me:${as_lineno-$LINENO}: result: $hard_links" >&5 $as_echo "$hard_links" >&6; } if test "$hard_links" = no; then { $as_echo "$as_me:${as_lineno-$LINENO}: WARNING: \`$CC' does not support \`-c -o', so \`make -j' may be unsafe" >&5 $as_echo "$as_me: WARNING: \`$CC' does not support \`-c -o', so \`make -j' may be unsafe" >&2;} need_locks=warn fi else need_locks=no fi { $as_echo "$as_me:${as_lineno-$LINENO}: checking whether the $compiler linker ($LD) supports shared libraries" >&5 $as_echo_n "checking whether the $compiler linker ($LD) supports shared libraries... " >&6; } runpath_var= allow_undefined_flag= always_export_symbols=no archive_cmds= archive_expsym_cmds= compiler_needs_object=no enable_shared_with_static_runtimes=no export_dynamic_flag_spec= export_symbols_cmds='$NM $libobjs $convenience | $global_symbol_pipe | $SED '\''s/.* //'\'' | sort | uniq > $export_symbols' hardcode_automatic=no hardcode_direct=no hardcode_direct_absolute=no hardcode_libdir_flag_spec= hardcode_libdir_separator= hardcode_minus_L=no hardcode_shlibpath_var=unsupported inherit_rpath=no link_all_deplibs=unknown module_cmds= module_expsym_cmds= old_archive_from_new_cmds= old_archive_from_expsyms_cmds= thread_safe_flag_spec= whole_archive_flag_spec= # include_expsyms should be a list of space-separated symbols to be *always* # included in the symbol list include_expsyms= # exclude_expsyms can be an extended regexp of symbols to exclude # it will be wrapped by ` (' and `)$', so one must not match beginning or # end of line. Example: `a|bc|.*d.*' will exclude the symbols `a' and `bc', # as well as any symbol that contains `d'. 
exclude_expsyms='_GLOBAL_OFFSET_TABLE_|_GLOBAL__F[ID]_.*' # Although _GLOBAL_OFFSET_TABLE_ is a valid symbol C name, most a.out # platforms (ab)use it in PIC code, but their linkers get confused if # the symbol is explicitly referenced. Since portable code cannot # rely on this symbol name, it's probably fine to never include it in # preloaded symbol tables. # Exclude shared library initialization/finalization symbols. extract_expsyms_cmds= case $host_os in cygwin* | mingw* | pw32* | cegcc*) # FIXME: the MSVC++ port hasn't been tested in a loooong time # When not using gcc, we currently assume that we are using # Microsoft Visual C++. if test "$GCC" != yes; then with_gnu_ld=no fi ;; interix*) # we just hope/assume this is gcc and not c89 (= MSVC++) with_gnu_ld=yes ;; openbsd*) with_gnu_ld=no ;; linux* | k*bsd*-gnu | gnu*) link_all_deplibs=no ;; esac ld_shlibs=yes # On some targets, GNU ld is compatible enough with the native linker # that we're better off using the native interface for both. lt_use_gnu_ld_interface=no if test "$with_gnu_ld" = yes; then case $host_os in aix*) # The AIX port of GNU ld has always aspired to compatibility # with the native linker. However, as the warning in the GNU ld # block says, versions before 2.19.5* couldn't really create working # shared libraries, regardless of the interface used. case `$LD -v 2>&1` in *\ \(GNU\ Binutils\)\ 2.19.5*) ;; *\ \(GNU\ Binutils\)\ 2.[2-9]*) ;; *\ \(GNU\ Binutils\)\ [3-9]*) ;; *) lt_use_gnu_ld_interface=yes ;; esac ;; *) lt_use_gnu_ld_interface=yes ;; esac fi if test "$lt_use_gnu_ld_interface" = yes; then # If archive_cmds runs LD, not CC, wlarc should be empty wlarc='${wl}' # Set some defaults for GNU ld with shared library support. These # are reset later if shared libraries are not supported. Putting them # here allows them to be overridden if necessary. 
runpath_var=LD_RUN_PATH hardcode_libdir_flag_spec='${wl}-rpath ${wl}$libdir' export_dynamic_flag_spec='${wl}--export-dynamic' # ancient GNU ld didn't support --whole-archive et. al. if $LD --help 2>&1 | $GREP 'no-whole-archive' > /dev/null; then whole_archive_flag_spec="$wlarc"'--whole-archive$convenience '"$wlarc"'--no-whole-archive' else whole_archive_flag_spec= fi supports_anon_versioning=no case `$LD -v 2>&1` in *GNU\ gold*) supports_anon_versioning=yes ;; *\ [01].* | *\ 2.[0-9].* | *\ 2.10.*) ;; # catch versions < 2.11 *\ 2.11.93.0.2\ *) supports_anon_versioning=yes ;; # RH7.3 ... *\ 2.11.92.0.12\ *) supports_anon_versioning=yes ;; # Mandrake 8.2 ... *\ 2.11.*) ;; # other 2.11 versions *) supports_anon_versioning=yes ;; esac # See if GNU ld supports shared libraries. case $host_os in aix[3-9]*) # On AIX/PPC, the GNU linker is very broken if test "$host_cpu" != ia64; then ld_shlibs=no cat <<_LT_EOF 1>&2 *** Warning: the GNU linker, at least up to release 2.19, is reported *** to be unable to reliably create shared libraries on AIX. *** Therefore, libtool is disabling shared libraries support. If you *** really care for shared libraries, you may want to install binutils *** 2.20 or above, or modify your PATH so that a non-GNU linker is found. *** You will then need to restart the configuration process. 
_LT_EOF fi ;; amigaos*) case $host_cpu in powerpc) # see comment about AmigaOS4 .so support archive_cmds='$CC -shared $libobjs $deplibs $compiler_flags ${wl}-soname $wl$soname -o $lib' archive_expsym_cmds='' ;; m68k) archive_cmds='$RM $output_objdir/a2ixlibrary.data~$ECHO "#define NAME $libname" > $output_objdir/a2ixlibrary.data~$ECHO "#define LIBRARY_ID 1" >> $output_objdir/a2ixlibrary.data~$ECHO "#define VERSION $major" >> $output_objdir/a2ixlibrary.data~$ECHO "#define REVISION $revision" >> $output_objdir/a2ixlibrary.data~$AR $AR_FLAGS $lib $libobjs~$RANLIB $lib~(cd $output_objdir && a2ixlibrary -32)' hardcode_libdir_flag_spec='-L$libdir' hardcode_minus_L=yes ;; esac ;; beos*) if $LD --help 2>&1 | $GREP ': supported targets:.* elf' > /dev/null; then allow_undefined_flag=unsupported # Joseph Beckenbach says some releases of gcc # support --undefined. This deserves some investigation. FIXME archive_cmds='$CC -nostart $libobjs $deplibs $compiler_flags ${wl}-soname $wl$soname -o $lib' else ld_shlibs=no fi ;; cygwin* | mingw* | pw32* | cegcc*) # _LT_TAGVAR(hardcode_libdir_flag_spec, ) is actually meaningless, # as there is no search path for DLLs. 
hardcode_libdir_flag_spec='-L$libdir' export_dynamic_flag_spec='${wl}--export-all-symbols' allow_undefined_flag=unsupported always_export_symbols=no enable_shared_with_static_runtimes=yes export_symbols_cmds='$NM $libobjs $convenience | $global_symbol_pipe | $SED -e '\''/^[BCDGRS][ ]/s/.*[ ]\([^ ]*\)/\1 DATA/;s/^.*[ ]__nm__\([^ ]*\)[ ][^ ]*/\1 DATA/;/^I[ ]/d;/^[AITW][ ]/s/.* //'\'' | sort | uniq > $export_symbols' exclude_expsyms='[_]+GLOBAL_OFFSET_TABLE_|[_]+GLOBAL__[FID]_.*|[_]+head_[A-Za-z0-9_]+_dll|[A-Za-z0-9_]+_dll_iname' if $LD --help 2>&1 | $GREP 'auto-import' > /dev/null; then archive_cmds='$CC -shared $libobjs $deplibs $compiler_flags -o $output_objdir/$soname ${wl}--enable-auto-image-base -Xlinker --out-implib -Xlinker $lib' # If the export-symbols file already is a .def file (1st line # is EXPORTS), use it as is; otherwise, prepend... archive_expsym_cmds='if test "x`$SED 1q $export_symbols`" = xEXPORTS; then cp $export_symbols $output_objdir/$soname.def; else echo EXPORTS > $output_objdir/$soname.def; cat $export_symbols >> $output_objdir/$soname.def; fi~ $CC -shared $output_objdir/$soname.def $libobjs $deplibs $compiler_flags -o $output_objdir/$soname ${wl}--enable-auto-image-base -Xlinker --out-implib -Xlinker $lib' else ld_shlibs=no fi ;; haiku*) archive_cmds='$CC -shared $libobjs $deplibs $compiler_flags ${wl}-soname $wl$soname -o $lib' link_all_deplibs=yes ;; interix[3-9]*) hardcode_direct=no hardcode_shlibpath_var=no hardcode_libdir_flag_spec='${wl}-rpath,$libdir' export_dynamic_flag_spec='${wl}-E' # Hack: On Interix 3.x, we cannot compile PIC because of a broken gcc. # Instead, shared libraries are loaded at an image base (0x10000000 by # default) and relocated if they conflict, which is a slow very memory # consuming and fragmenting process. To avoid this, we pick a random, # 256 KiB-aligned image base between 0x50000000 and 0x6FFC0000 at link # time. Moving up from 0x10000000 also allows more sbrk(2) space. 
archive_cmds='$CC -shared $pic_flag $libobjs $deplibs $compiler_flags ${wl}-h,$soname ${wl}--image-base,`expr ${RANDOM-$$} % 4096 / 2 \* 262144 + 1342177280` -o $lib' archive_expsym_cmds='sed "s,^,_," $export_symbols >$output_objdir/$soname.expsym~$CC -shared $pic_flag $libobjs $deplibs $compiler_flags ${wl}-h,$soname ${wl}--retain-symbols-file,$output_objdir/$soname.expsym ${wl}--image-base,`expr ${RANDOM-$$} % 4096 / 2 \* 262144 + 1342177280` -o $lib' ;; gnu* | linux* | tpf* | k*bsd*-gnu | kopensolaris*-gnu) tmp_diet=no if test "$host_os" = linux-dietlibc; then case $cc_basename in diet\ *) tmp_diet=yes;; # linux-dietlibc with static linking (!diet-dyn) esac fi if $LD --help 2>&1 | $EGREP ': supported targets:.* elf' > /dev/null \ && test "$tmp_diet" = no then tmp_addflag=' $pic_flag' tmp_sharedflag='-shared' case $cc_basename,$host_cpu in pgcc*) # Portland Group C compiler whole_archive_flag_spec='${wl}--whole-archive`for conv in $convenience\"\"; do test -n \"$conv\" && new_convenience=\"$new_convenience,$conv\"; done; func_echo_all \"$new_convenience\"` ${wl}--no-whole-archive' tmp_addflag=' $pic_flag' ;; pgf77* | pgf90* | pgf95* | pgfortran*) # Portland Group f77 and f90 compilers whole_archive_flag_spec='${wl}--whole-archive`for conv in $convenience\"\"; do test -n \"$conv\" && new_convenience=\"$new_convenience,$conv\"; done; func_echo_all \"$new_convenience\"` ${wl}--no-whole-archive' tmp_addflag=' $pic_flag -Mnomain' ;; ecc*,ia64* | icc*,ia64*) # Intel C compiler on ia64 tmp_addflag=' -i_dynamic' ;; efc*,ia64* | ifort*,ia64*) # Intel Fortran compiler on ia64 tmp_addflag=' -i_dynamic -nofor_main' ;; ifc* | ifort*) # Intel Fortran compiler tmp_addflag=' -nofor_main' ;; lf95*) # Lahey Fortran 8.1 whole_archive_flag_spec= tmp_sharedflag='--shared' ;; xl[cC]* | bgxl[cC]* | mpixl[cC]*) # IBM XL C 8.0 on PPC (deal with xlf below) tmp_sharedflag='-qmkshrobj' tmp_addflag= ;; nvcc*) # Cuda Compiler Driver 2.2 whole_archive_flag_spec='${wl}--whole-archive`for conv 
in $convenience\"\"; do test -n \"$conv\" && new_convenience=\"$new_convenience,$conv\"; done; func_echo_all \"$new_convenience\"` ${wl}--no-whole-archive' compiler_needs_object=yes ;; esac case `$CC -V 2>&1 | sed 5q` in *Sun\ C*) # Sun C 5.9 whole_archive_flag_spec='${wl}--whole-archive`new_convenience=; for conv in $convenience\"\"; do test -z \"$conv\" || new_convenience=\"$new_convenience,$conv\"; done; func_echo_all \"$new_convenience\"` ${wl}--no-whole-archive' compiler_needs_object=yes tmp_sharedflag='-G' ;; *Sun\ F*) # Sun Fortran 8.3 tmp_sharedflag='-G' ;; esac archive_cmds='$CC '"$tmp_sharedflag""$tmp_addflag"' $libobjs $deplibs $compiler_flags ${wl}-soname $wl$soname -o $lib' if test "x$supports_anon_versioning" = xyes; then archive_expsym_cmds='echo "{ global:" > $output_objdir/$libname.ver~ cat $export_symbols | sed -e "s/\(.*\)/\1;/" >> $output_objdir/$libname.ver~ echo "local: *; };" >> $output_objdir/$libname.ver~ $CC '"$tmp_sharedflag""$tmp_addflag"' $libobjs $deplibs $compiler_flags ${wl}-soname $wl$soname ${wl}-version-script ${wl}$output_objdir/$libname.ver -o $lib' fi case $cc_basename in xlf* | bgf* | bgxlf* | mpixlf*) # IBM XL Fortran 10.1 on PPC cannot create shared libs itself whole_archive_flag_spec='--whole-archive$convenience --no-whole-archive' hardcode_libdir_flag_spec='${wl}-rpath ${wl}$libdir' archive_cmds='$LD -shared $libobjs $deplibs $linker_flags -soname $soname -o $lib' if test "x$supports_anon_versioning" = xyes; then archive_expsym_cmds='echo "{ global:" > $output_objdir/$libname.ver~ cat $export_symbols | sed -e "s/\(.*\)/\1;/" >> $output_objdir/$libname.ver~ echo "local: *; };" >> $output_objdir/$libname.ver~ $LD -shared $libobjs $deplibs $linker_flags -soname $soname -version-script $output_objdir/$libname.ver -o $lib' fi ;; esac else ld_shlibs=no fi ;; netbsd* | netbsdelf*-gnu) if echo __ELF__ | $CC -E - | $GREP __ELF__ >/dev/null; then archive_cmds='$LD -Bshareable $libobjs $deplibs $linker_flags -o $lib' wlarc= else 
archive_cmds='$CC -shared $pic_flag $libobjs $deplibs $compiler_flags ${wl}-soname $wl$soname -o $lib' archive_expsym_cmds='$CC -shared $pic_flag $libobjs $deplibs $compiler_flags ${wl}-soname $wl$soname ${wl}-retain-symbols-file $wl$export_symbols -o $lib' fi ;; solaris*) if $LD -v 2>&1 | $GREP 'BFD 2\.8' > /dev/null; then ld_shlibs=no cat <<_LT_EOF 1>&2 *** Warning: The releases 2.8.* of the GNU linker cannot reliably *** create shared libraries on Solaris systems. Therefore, libtool *** is disabling shared libraries support. We urge you to upgrade GNU *** binutils to release 2.9.1 or newer. Another option is to modify *** your PATH or compiler configuration so that the native linker is *** used, and then restart. _LT_EOF elif $LD --help 2>&1 | $GREP ': supported targets:.* elf' > /dev/null; then archive_cmds='$CC -shared $pic_flag $libobjs $deplibs $compiler_flags ${wl}-soname $wl$soname -o $lib' archive_expsym_cmds='$CC -shared $pic_flag $libobjs $deplibs $compiler_flags ${wl}-soname $wl$soname ${wl}-retain-symbols-file $wl$export_symbols -o $lib' else ld_shlibs=no fi ;; sysv5* | sco3.2v5* | sco5v6* | unixware* | OpenUNIX*) case `$LD -v 2>&1` in *\ [01].* | *\ 2.[0-9].* | *\ 2.1[0-5].*) ld_shlibs=no cat <<_LT_EOF 1>&2 *** Warning: Releases of the GNU linker prior to 2.16.91.0.3 can not *** reliably create shared libraries on SCO systems. Therefore, libtool *** is disabling shared libraries support. We urge you to upgrade GNU *** binutils to release 2.16.91.0.3 or newer. Another option is to modify *** your PATH or compiler configuration so that the native linker is *** used, and then restart. _LT_EOF ;; *) # For security reasons, it is highly recommended that you always # use absolute paths for naming shared libraries, and exclude the # DT_RUNPATH tag from executables and libraries. But doing so # requires that you compile everything twice, which is a pain. 
if $LD --help 2>&1 | $GREP ': supported targets:.* elf' > /dev/null; then hardcode_libdir_flag_spec='${wl}-rpath ${wl}$libdir' archive_cmds='$CC -shared $libobjs $deplibs $compiler_flags ${wl}-soname $wl$soname -o $lib' archive_expsym_cmds='$CC -shared $libobjs $deplibs $compiler_flags ${wl}-soname $wl$soname ${wl}-retain-symbols-file $wl$export_symbols -o $lib' else ld_shlibs=no fi ;; esac ;; sunos4*) archive_cmds='$LD -assert pure-text -Bshareable -o $lib $libobjs $deplibs $linker_flags' wlarc= hardcode_direct=yes hardcode_shlibpath_var=no ;; *) if $LD --help 2>&1 | $GREP ': supported targets:.* elf' > /dev/null; then archive_cmds='$CC -shared $pic_flag $libobjs $deplibs $compiler_flags ${wl}-soname $wl$soname -o $lib' archive_expsym_cmds='$CC -shared $pic_flag $libobjs $deplibs $compiler_flags ${wl}-soname $wl$soname ${wl}-retain-symbols-file $wl$export_symbols -o $lib' else ld_shlibs=no fi ;; esac if test "$ld_shlibs" = no; then runpath_var= hardcode_libdir_flag_spec= export_dynamic_flag_spec= whole_archive_flag_spec= fi else # PORTME fill in a description of your system's linker (not GNU ld) case $host_os in aix3*) allow_undefined_flag=unsupported always_export_symbols=yes archive_expsym_cmds='$LD -o $output_objdir/$soname $libobjs $deplibs $linker_flags -bE:$export_symbols -T512 -H512 -bM:SRE~$AR $AR_FLAGS $lib $output_objdir/$soname' # Note: this linker hardcodes the directories in LIBPATH if there # are no directories specified by -L. hardcode_minus_L=yes if test "$GCC" = yes && test -z "$lt_prog_compiler_static"; then # Neither direct hardcoding nor static linking is supported with a # broken collect2. hardcode_direct=unsupported fi ;; aix[4-9]*) if test "$host_cpu" = ia64; then # On IA64, the linker does run time linking by default, so we don't # have to do anything special. aix_use_runtimelinking=no exp_sym_flag='-Bexport' no_entry_flag="" else # If we're using GNU nm, then we don't want the "-C" option. 
# -C means demangle to AIX nm, but means don't demangle with GNU nm # Also, AIX nm treats weak defined symbols like other global # defined symbols, whereas GNU nm marks them as "W". if $NM -V 2>&1 | $GREP 'GNU' > /dev/null; then export_symbols_cmds='$NM -Bpg $libobjs $convenience | awk '\''{ if (((\$ 2 == "T") || (\$ 2 == "D") || (\$ 2 == "B") || (\$ 2 == "W")) && (substr(\$ 3,1,1) != ".")) { print \$ 3 } }'\'' | sort -u > $export_symbols' else export_symbols_cmds='$NM -BCpg $libobjs $convenience | awk '\''{ if (((\$ 2 == "T") || (\$ 2 == "D") || (\$ 2 == "B")) && (substr(\$ 3,1,1) != ".")) { print \$ 3 } }'\'' | sort -u > $export_symbols' fi aix_use_runtimelinking=no # Test if we are trying to use run time linking or normal # AIX style linking. If -brtl is somewhere in LDFLAGS, we # need to do runtime linking. case $host_os in aix4.[23]|aix4.[23].*|aix[5-9]*) for ld_flag in $LDFLAGS; do if (test $ld_flag = "-brtl" || test $ld_flag = "-Wl,-brtl"); then aix_use_runtimelinking=yes break fi done ;; esac exp_sym_flag='-bexport' no_entry_flag='-bnoentry' fi # When large executables or shared objects are built, AIX ld can # have problems creating the table of contents. If linking a library # or program results in "error TOC overflow" add -mminimal-toc to # CXXFLAGS/CFLAGS for g++/gcc. In the cases where that is not # enough to fix the problem, add -Wl,-bbigtoc to LDFLAGS. 
      archive_cmds=''
      hardcode_direct=yes
      hardcode_direct_absolute=yes
      hardcode_libdir_separator=':'
      link_all_deplibs=yes
      file_list_spec='${wl}-f,'

      if test "$GCC" = yes; then
	case $host_os in aix4.[012]|aix4.[012].*)
	# We only want to do this on AIX 4.2 and lower, the check
	# below for broken collect2 doesn't work under 4.3+
	  collect2name=`${CC} -print-prog-name=collect2`
	  if test -f "$collect2name" &&
	   strings "$collect2name" | $GREP resolve_lib_name >/dev/null
	  then
	  # We have reworked collect2
	  :
	  else
	  # We have old collect2
	  hardcode_direct=unsupported
	  # It fails to find uninstalled libraries when the uninstalled
	  # path is not listed in the libpath.  Setting hardcode_minus_L
	  # to unsupported forces relinking
	  hardcode_minus_L=yes
	  hardcode_libdir_flag_spec='-L$libdir'
	  hardcode_libdir_separator=
	  fi
	  ;;
	esac
	shared_flag='-shared'
	if test "$aix_use_runtimelinking" = yes; then
	  shared_flag="$shared_flag "'${wl}-G'
	fi
	link_all_deplibs=no
      else
	# not using gcc
	if test "$host_cpu" = ia64; then
	# VisualAge C++, Version 5.5 for AIX 5L for IA-64, Beta 3 Release
	# chokes on -Wl,-G. The following line is correct:
	  shared_flag='-G'
	else
	  if test "$aix_use_runtimelinking" = yes; then
	    shared_flag='${wl}-G'
	  else
	    shared_flag='${wl}-bM:SRE'
	  fi
	fi
      fi

      export_dynamic_flag_spec='${wl}-bexpall'
      # It seems that -bexpall does not export symbols beginning with
      # underscore (_), so it is better to generate a list of symbols to export.
      always_export_symbols=yes
      if test "$aix_use_runtimelinking" = yes; then
	# Warning - without using the other runtime loading flags (-brtl),
	# -berok will link without error, but may produce a broken library.
	allow_undefined_flag='-berok'
        # Determine the default libpath from the value encoded in an
        # empty executable.
        if test "${lt_cv_aix_libpath+set}" = set; then
  aix_libpath=$lt_cv_aix_libpath
else
  if ${lt_cv_aix_libpath_+:} false; then :
  $as_echo_n "(cached) " >&6
else
  cat confdefs.h - <<_ACEOF >conftest.$ac_ext
/* end confdefs.h.  */

int
main ()
{

  ;
  return 0;
}
_ACEOF
if ac_fn_c_try_link "$LINENO"; then :

  lt_aix_libpath_sed='
      /Import File Strings/,/^$/ {
	  /^0/ {
	      s/^0 *\([^ ]*\) *$/\1/
	      p
	  }
      }'
  lt_cv_aix_libpath_=`dump -H conftest$ac_exeext 2>/dev/null | $SED -n -e "$lt_aix_libpath_sed"`
  # Check for a 64-bit object if we didn't find anything.
  if test -z "$lt_cv_aix_libpath_"; then
    lt_cv_aix_libpath_=`dump -HX64 conftest$ac_exeext 2>/dev/null | $SED -n -e "$lt_aix_libpath_sed"`
  fi
fi
rm -f core conftest.err conftest.$ac_objext \
    conftest$ac_exeext conftest.$ac_ext
  if test -z "$lt_cv_aix_libpath_"; then
    lt_cv_aix_libpath_="/usr/lib:/lib"
  fi

fi

  aix_libpath=$lt_cv_aix_libpath_
fi

        hardcode_libdir_flag_spec='${wl}-blibpath:$libdir:'"$aix_libpath"
        archive_expsym_cmds='$CC -o $output_objdir/$soname $libobjs $deplibs '"\${wl}$no_entry_flag"' $compiler_flags `if test "x${allow_undefined_flag}" != "x"; then func_echo_all "${wl}${allow_undefined_flag}"; else :; fi` '"\${wl}$exp_sym_flag:\$export_symbols $shared_flag"
      else
	if test "$host_cpu" = ia64; then
	  hardcode_libdir_flag_spec='${wl}-R $libdir:/usr/lib:/lib'
	  allow_undefined_flag="-z nodefs"
	  archive_expsym_cmds="\$CC $shared_flag"' -o $output_objdir/$soname $libobjs $deplibs '"\${wl}$no_entry_flag"' $compiler_flags ${wl}${allow_undefined_flag} '"\${wl}$exp_sym_flag:\$export_symbols"
	else
	 # Determine the default libpath from the value encoded in an
	 # empty executable.
	 if test "${lt_cv_aix_libpath+set}" = set; then
  aix_libpath=$lt_cv_aix_libpath
else
  if ${lt_cv_aix_libpath_+:} false; then :
  $as_echo_n "(cached) " >&6
else
  cat confdefs.h - <<_ACEOF >conftest.$ac_ext
/* end confdefs.h.  */

int
main ()
{

  ;
  return 0;
}
_ACEOF
if ac_fn_c_try_link "$LINENO"; then :

  lt_aix_libpath_sed='
      /Import File Strings/,/^$/ {
	  /^0/ {
	      s/^0 *\([^ ]*\) *$/\1/
	      p
	  }
      }'
  lt_cv_aix_libpath_=`dump -H conftest$ac_exeext 2>/dev/null | $SED -n -e "$lt_aix_libpath_sed"`
  # Check for a 64-bit object if we didn't find anything.
  if test -z "$lt_cv_aix_libpath_"; then
    lt_cv_aix_libpath_=`dump -HX64 conftest$ac_exeext 2>/dev/null | $SED -n -e "$lt_aix_libpath_sed"`
  fi
fi
rm -f core conftest.err conftest.$ac_objext \
    conftest$ac_exeext conftest.$ac_ext
  if test -z "$lt_cv_aix_libpath_"; then
    lt_cv_aix_libpath_="/usr/lib:/lib"
  fi

fi

  aix_libpath=$lt_cv_aix_libpath_
fi

	 hardcode_libdir_flag_spec='${wl}-blibpath:$libdir:'"$aix_libpath"
	  # Warning - without using the other run time loading flags,
	  # -berok will link without error, but may produce a broken library.
	  no_undefined_flag=' ${wl}-bernotok'
	  allow_undefined_flag=' ${wl}-berok'
	  if test "$with_gnu_ld" = yes; then
	    # We only use this code for GNU lds that support --whole-archive.
	    whole_archive_flag_spec='${wl}--whole-archive$convenience ${wl}--no-whole-archive'
	  else
	    # Exported symbols can be pulled into shared objects from archives
	    whole_archive_flag_spec='$convenience'
	  fi
	  archive_cmds_need_lc=yes
	  # This is similar to how AIX traditionally builds its shared libraries.
	  archive_expsym_cmds="\$CC $shared_flag"' -o $output_objdir/$soname $libobjs $deplibs ${wl}-bnoentry $compiler_flags ${wl}-bE:$export_symbols${allow_undefined_flag}~$AR $AR_FLAGS $output_objdir/$libname$release.a $output_objdir/$soname'
	fi
      fi
      ;;

    amigaos*)
      case $host_cpu in
      powerpc)
            # see comment about AmigaOS4 .so support
            archive_cmds='$CC -shared $libobjs $deplibs $compiler_flags ${wl}-soname $wl$soname -o $lib'
            archive_expsym_cmds=''
        ;;
      m68k)
            archive_cmds='$RM $output_objdir/a2ixlibrary.data~$ECHO "#define NAME $libname" > $output_objdir/a2ixlibrary.data~$ECHO "#define LIBRARY_ID 1" >> $output_objdir/a2ixlibrary.data~$ECHO "#define VERSION $major" >> $output_objdir/a2ixlibrary.data~$ECHO "#define REVISION $revision" >> $output_objdir/a2ixlibrary.data~$AR $AR_FLAGS $lib $libobjs~$RANLIB $lib~(cd $output_objdir && a2ixlibrary -32)'
            hardcode_libdir_flag_spec='-L$libdir'
            hardcode_minus_L=yes
        ;;
      esac
      ;;

    bsdi[45]*)
      export_dynamic_flag_spec=-rdynamic
      ;;

    cygwin* | mingw* | pw32* | cegcc*)
      # When not using gcc, we currently assume that we are using
      # Microsoft Visual C++.
      # hardcode_libdir_flag_spec is actually meaningless, as there is
      # no search path for DLLs.
      case $cc_basename in
      cl*)
	# Native MSVC
	hardcode_libdir_flag_spec=' '
	allow_undefined_flag=unsupported
	always_export_symbols=yes
	file_list_spec='@'
	# Tell ltmain to make .lib files, not .a files.
	libext=lib
	# Tell ltmain to make .dll files, not .so files.
	shrext_cmds=".dll"
	# FIXME: Setting linknames here is a bad hack.
	archive_cmds='$CC -o $output_objdir/$soname $libobjs $compiler_flags $deplibs -Wl,-dll~linknames='
	archive_expsym_cmds='if test "x`$SED 1q $export_symbols`" = xEXPORTS; then
	    sed -n -e 's/\\\\\\\(.*\\\\\\\)/-link\\\ -EXPORT:\\\\\\\1/' -e '1\\\!p' < $export_symbols > $output_objdir/$soname.exp;
	  else
	    sed -e 's/\\\\\\\(.*\\\\\\\)/-link\\\ -EXPORT:\\\\\\\1/' < $export_symbols > $output_objdir/$soname.exp;
	  fi~
	  $CC -o $tool_output_objdir$soname $libobjs $compiler_flags $deplibs "@$tool_output_objdir$soname.exp" -Wl,-DLL,-IMPLIB:"$tool_output_objdir$libname.dll.lib"~
	  linknames='
	# The linker will not automatically build a static lib if we build a DLL.
	# _LT_TAGVAR(old_archive_from_new_cmds, )='true'
	enable_shared_with_static_runtimes=yes
	exclude_expsyms='_NULL_IMPORT_DESCRIPTOR|_IMPORT_DESCRIPTOR_.*'
	export_symbols_cmds='$NM $libobjs $convenience | $global_symbol_pipe | $SED -e '\''/^[BCDGRS][ ]/s/.*[ ]\([^ ]*\)/\1,DATA/'\'' | $SED -e '\''/^[AITW][ ]/s/.*[ ]//'\'' | sort | uniq > $export_symbols'
	# Don't use ranlib
	old_postinstall_cmds='chmod 644 $oldlib'
	postlink_cmds='lt_outputfile="@OUTPUT@"~
	  lt_tool_outputfile="@TOOL_OUTPUT@"~
	  case $lt_outputfile in
	    *.exe|*.EXE) ;;
	    *)
	      lt_outputfile="$lt_outputfile.exe"
	      lt_tool_outputfile="$lt_tool_outputfile.exe"
	      ;;
	  esac~
	  if test "$MANIFEST_TOOL" != ":" && test -f "$lt_outputfile.manifest"; then
	    $MANIFEST_TOOL -manifest "$lt_tool_outputfile.manifest" -outputresource:"$lt_tool_outputfile" || exit 1;
	    $RM "$lt_outputfile.manifest";
	  fi'
	;;
      *)
	# Assume MSVC wrapper
	hardcode_libdir_flag_spec=' '
	allow_undefined_flag=unsupported
	# Tell ltmain to make .lib files, not .a files.
	libext=lib
	# Tell ltmain to make .dll files, not .so files.
	shrext_cmds=".dll"
	# FIXME: Setting linknames here is a bad hack.
	archive_cmds='$CC -o $lib $libobjs $compiler_flags `func_echo_all "$deplibs" | $SED '\''s/ -lc$//'\''` -link -dll~linknames='
	# The linker will automatically build a .lib file if we build a DLL.
	old_archive_from_new_cmds='true'
	# FIXME: Should let the user specify the lib program.
	old_archive_cmds='lib -OUT:$oldlib$oldobjs$old_deplibs'
	enable_shared_with_static_runtimes=yes
	;;
      esac
      ;;

    darwin* | rhapsody*)

  archive_cmds_need_lc=no
  hardcode_direct=no
  hardcode_automatic=yes
  hardcode_shlibpath_var=unsupported
  if test "$lt_cv_ld_force_load" = "yes"; then
    whole_archive_flag_spec='`for conv in $convenience\"\"; do test -n \"$conv\" && new_convenience=\"$new_convenience ${wl}-force_load,$conv\"; done; func_echo_all \"$new_convenience\"`'
  else
    whole_archive_flag_spec=''
  fi
  link_all_deplibs=yes
  allow_undefined_flag="$_lt_dar_allow_undefined"
  case $cc_basename in
     ifort*) _lt_dar_can_shared=yes ;;
     *) _lt_dar_can_shared=$GCC ;;
  esac
  if test "$_lt_dar_can_shared" = "yes"; then
    output_verbose_link_cmd=func_echo_all
    archive_cmds="\$CC -dynamiclib \$allow_undefined_flag -o \$lib \$libobjs \$deplibs \$compiler_flags -install_name \$rpath/\$soname \$verstring $_lt_dar_single_mod${_lt_dsymutil}"
    module_cmds="\$CC \$allow_undefined_flag -o \$lib -bundle \$libobjs \$deplibs \$compiler_flags${_lt_dsymutil}"
    archive_expsym_cmds="sed 's,^,_,' < \$export_symbols > \$output_objdir/\${libname}-symbols.expsym~\$CC -dynamiclib \$allow_undefined_flag -o \$lib \$libobjs \$deplibs \$compiler_flags -install_name \$rpath/\$soname \$verstring ${_lt_dar_single_mod}${_lt_dar_export_syms}${_lt_dsymutil}"
    module_expsym_cmds="sed -e 's,^,_,' < \$export_symbols > \$output_objdir/\${libname}-symbols.expsym~\$CC \$allow_undefined_flag -o \$lib -bundle \$libobjs \$deplibs \$compiler_flags${_lt_dar_export_syms}${_lt_dsymutil}"
  else
  ld_shlibs=no
  fi

      ;;

    dgux*)
      archive_cmds='$LD -G -h $soname -o $lib $libobjs $deplibs $linker_flags'
      hardcode_libdir_flag_spec='-L$libdir'
      hardcode_shlibpath_var=no
      ;;

    # FreeBSD 2.2.[012] allows us to include c++rt0.o to get C++ constructor
    # support.  Future versions do this automatically, but an explicit c++rt0.o
    # does not break anything, and helps significantly (at the cost of a little
    # extra space).
    freebsd2.2*)
      archive_cmds='$LD -Bshareable -o $lib $libobjs $deplibs $linker_flags /usr/lib/c++rt0.o'
      hardcode_libdir_flag_spec='-R$libdir'
      hardcode_direct=yes
      hardcode_shlibpath_var=no
      ;;

    # Unfortunately, older versions of FreeBSD 2 do not have this feature.
    freebsd2.*)
      archive_cmds='$LD -Bshareable -o $lib $libobjs $deplibs $linker_flags'
      hardcode_direct=yes
      hardcode_minus_L=yes
      hardcode_shlibpath_var=no
      ;;

    # FreeBSD 3 and greater uses gcc -shared to do shared libraries.
    freebsd* | dragonfly*)
      archive_cmds='$CC -shared $pic_flag -o $lib $libobjs $deplibs $compiler_flags'
      hardcode_libdir_flag_spec='-R$libdir'
      hardcode_direct=yes
      hardcode_shlibpath_var=no
      ;;

    hpux9*)
      if test "$GCC" = yes; then
	archive_cmds='$RM $output_objdir/$soname~$CC -shared $pic_flag ${wl}+b ${wl}$install_libdir -o $output_objdir/$soname $libobjs $deplibs $compiler_flags~test $output_objdir/$soname = $lib || mv $output_objdir/$soname $lib'
      else
	archive_cmds='$RM $output_objdir/$soname~$LD -b +b $install_libdir -o $output_objdir/$soname $libobjs $deplibs $linker_flags~test $output_objdir/$soname = $lib || mv $output_objdir/$soname $lib'
      fi
      hardcode_libdir_flag_spec='${wl}+b ${wl}$libdir'
      hardcode_libdir_separator=:
      hardcode_direct=yes

      # hardcode_minus_L: Not really in the search PATH,
      # but as the default location of the library.
      hardcode_minus_L=yes
      export_dynamic_flag_spec='${wl}-E'
      ;;

    hpux10*)
      if test "$GCC" = yes && test "$with_gnu_ld" = no; then
	archive_cmds='$CC -shared $pic_flag ${wl}+h ${wl}$soname ${wl}+b ${wl}$install_libdir -o $lib $libobjs $deplibs $compiler_flags'
      else
	archive_cmds='$LD -b +h $soname +b $install_libdir -o $lib $libobjs $deplibs $linker_flags'
      fi
      if test "$with_gnu_ld" = no; then
	hardcode_libdir_flag_spec='${wl}+b ${wl}$libdir'
	hardcode_libdir_separator=:
	hardcode_direct=yes
	hardcode_direct_absolute=yes
	export_dynamic_flag_spec='${wl}-E'
	# hardcode_minus_L: Not really in the search PATH,
	# but as the default location of the library.
	hardcode_minus_L=yes
      fi
      ;;

    hpux11*)
      if test "$GCC" = yes && test "$with_gnu_ld" = no; then
	case $host_cpu in
	hppa*64*)
	  archive_cmds='$CC -shared ${wl}+h ${wl}$soname -o $lib $libobjs $deplibs $compiler_flags'
	  ;;
	ia64*)
	  archive_cmds='$CC -shared $pic_flag ${wl}+h ${wl}$soname ${wl}+nodefaultrpath -o $lib $libobjs $deplibs $compiler_flags'
	  ;;
	*)
	  archive_cmds='$CC -shared $pic_flag ${wl}+h ${wl}$soname ${wl}+b ${wl}$install_libdir -o $lib $libobjs $deplibs $compiler_flags'
	  ;;
	esac
      else
	case $host_cpu in
	hppa*64*)
	  archive_cmds='$CC -b ${wl}+h ${wl}$soname -o $lib $libobjs $deplibs $compiler_flags'
	  ;;
	ia64*)
	  archive_cmds='$CC -b ${wl}+h ${wl}$soname ${wl}+nodefaultrpath -o $lib $libobjs $deplibs $compiler_flags'
	  ;;
	*)

	  # Older versions of the 11.00 compiler do not understand -b yet
	  # (HP92453-01 A.11.01.20 doesn't, HP92453-01 B.11.X.35175-35176.GP does)
	  { $as_echo "$as_me:${as_lineno-$LINENO}: checking if $CC understands -b" >&5
$as_echo_n "checking if $CC understands -b... " >&6; }
if ${lt_cv_prog_compiler__b+:} false; then :
  $as_echo_n "(cached) " >&6
else
  lt_cv_prog_compiler__b=no
   save_LDFLAGS="$LDFLAGS"
   LDFLAGS="$LDFLAGS -b"
   echo "$lt_simple_link_test_code" > conftest.$ac_ext
   if (eval $ac_link 2>conftest.err) && test -s conftest$ac_exeext; then
     # The linker can only warn and ignore the option if not recognized
     # So say no if there are warnings
     if test -s conftest.err; then
       # Append any errors to the config.log.
       cat conftest.err 1>&5
       $ECHO "$_lt_linker_boilerplate" | $SED '/^$/d' > conftest.exp
       $SED '/^$/d; /^ *+/d' conftest.err >conftest.er2
       if diff conftest.exp conftest.er2 >/dev/null; then
	 lt_cv_prog_compiler__b=yes
       fi
     else
       lt_cv_prog_compiler__b=yes
     fi
   fi
   $RM -r conftest*
   LDFLAGS="$save_LDFLAGS"

fi
{ $as_echo "$as_me:${as_lineno-$LINENO}: result: $lt_cv_prog_compiler__b" >&5
$as_echo "$lt_cv_prog_compiler__b" >&6; }

if test x"$lt_cv_prog_compiler__b" = xyes; then
    archive_cmds='$CC -b ${wl}+h ${wl}$soname ${wl}+b ${wl}$install_libdir -o $lib $libobjs $deplibs $compiler_flags'
else
    archive_cmds='$LD -b +h $soname +b $install_libdir -o $lib $libobjs $deplibs $linker_flags'
fi

	  ;;
	esac
      fi
      if test "$with_gnu_ld" = no; then
	hardcode_libdir_flag_spec='${wl}+b ${wl}$libdir'
	hardcode_libdir_separator=:

	case $host_cpu in
	hppa*64*|ia64*)
	  hardcode_direct=no
	  hardcode_shlibpath_var=no
	  ;;
	*)
	  hardcode_direct=yes
	  hardcode_direct_absolute=yes
	  export_dynamic_flag_spec='${wl}-E'

	  # hardcode_minus_L: Not really in the search PATH,
	  # but as the default location of the library.
	  hardcode_minus_L=yes
	  ;;
	esac
      fi
      ;;

    irix5* | irix6* | nonstopux*)
      if test "$GCC" = yes; then
	archive_cmds='$CC -shared $pic_flag $libobjs $deplibs $compiler_flags ${wl}-soname ${wl}$soname `test -n "$verstring" && func_echo_all "${wl}-set_version ${wl}$verstring"` ${wl}-update_registry ${wl}${output_objdir}/so_locations -o $lib'
	# Try to use the -exported_symbol ld option, if it does not
	# work, assume that -exports_file does not work either and
	# implicitly export all symbols.
	# This should be the same for all languages, so no per-tag cache variable.
	{ $as_echo "$as_me:${as_lineno-$LINENO}: checking whether the $host_os linker accepts -exported_symbol" >&5
$as_echo_n "checking whether the $host_os linker accepts -exported_symbol... " >&6; }
if ${lt_cv_irix_exported_symbol+:} false; then :
  $as_echo_n "(cached) " >&6
else
  save_LDFLAGS="$LDFLAGS"
	   LDFLAGS="$LDFLAGS -shared ${wl}-exported_symbol ${wl}foo ${wl}-update_registry ${wl}/dev/null"
	   cat confdefs.h - <<_ACEOF >conftest.$ac_ext
/* end confdefs.h.  */
int foo (void) { return 0; }
_ACEOF
if ac_fn_c_try_link "$LINENO"; then :
  lt_cv_irix_exported_symbol=yes
else
  lt_cv_irix_exported_symbol=no
fi
rm -f core conftest.err conftest.$ac_objext \
    conftest$ac_exeext conftest.$ac_ext
           LDFLAGS="$save_LDFLAGS"
fi
{ $as_echo "$as_me:${as_lineno-$LINENO}: result: $lt_cv_irix_exported_symbol" >&5
$as_echo "$lt_cv_irix_exported_symbol" >&6; }
	if test "$lt_cv_irix_exported_symbol" = yes; then
          archive_expsym_cmds='$CC -shared $pic_flag $libobjs $deplibs $compiler_flags ${wl}-soname ${wl}$soname `test -n "$verstring" && func_echo_all "${wl}-set_version ${wl}$verstring"` ${wl}-update_registry ${wl}${output_objdir}/so_locations ${wl}-exports_file ${wl}$export_symbols -o $lib'
	fi
      else
	archive_cmds='$CC -shared $libobjs $deplibs $compiler_flags -soname $soname `test -n "$verstring" && func_echo_all "-set_version $verstring"` -update_registry ${output_objdir}/so_locations -o $lib'
	archive_expsym_cmds='$CC -shared $libobjs $deplibs $compiler_flags -soname $soname `test -n "$verstring" && func_echo_all "-set_version $verstring"` -update_registry ${output_objdir}/so_locations -exports_file $export_symbols -o $lib'
      fi
      archive_cmds_need_lc='no'
      hardcode_libdir_flag_spec='${wl}-rpath ${wl}$libdir'
      hardcode_libdir_separator=:
      inherit_rpath=yes
      link_all_deplibs=yes
      ;;

    netbsd* | netbsdelf*-gnu)
      if echo __ELF__ | $CC -E - | $GREP __ELF__ >/dev/null; then
	archive_cmds='$LD -Bshareable -o $lib $libobjs $deplibs $linker_flags'  # a.out
      else
	archive_cmds='$LD -shared -o $lib $libobjs $deplibs $linker_flags'      # ELF
      fi
      hardcode_libdir_flag_spec='-R$libdir'
      hardcode_direct=yes
      hardcode_shlibpath_var=no
      ;;

    newsos6)
      archive_cmds='$LD -G -h $soname -o $lib $libobjs $deplibs $linker_flags'
      hardcode_direct=yes
      hardcode_libdir_flag_spec='${wl}-rpath ${wl}$libdir'
      hardcode_libdir_separator=:
      hardcode_shlibpath_var=no
      ;;

    *nto* | *qnx*)
      ;;

    openbsd*)
      if test -f /usr/libexec/ld.so; then
	hardcode_direct=yes
	hardcode_shlibpath_var=no
	hardcode_direct_absolute=yes
	if test -z "`echo __ELF__ | $CC -E - | $GREP __ELF__`" || test "$host_os-$host_cpu" = "openbsd2.8-powerpc"; then
	  archive_cmds='$CC -shared $pic_flag -o $lib $libobjs $deplibs $compiler_flags'
	  archive_expsym_cmds='$CC -shared $pic_flag -o $lib $libobjs $deplibs $compiler_flags ${wl}-retain-symbols-file,$export_symbols'
	  hardcode_libdir_flag_spec='${wl}-rpath,$libdir'
	  export_dynamic_flag_spec='${wl}-E'
	else
	  case $host_os in
	   openbsd[01].* | openbsd2.[0-7] | openbsd2.[0-7].*)
	     archive_cmds='$LD -Bshareable -o $lib $libobjs $deplibs $linker_flags'
	     hardcode_libdir_flag_spec='-R$libdir'
	     ;;
	   *)
	     archive_cmds='$CC -shared $pic_flag -o $lib $libobjs $deplibs $compiler_flags'
	     hardcode_libdir_flag_spec='${wl}-rpath,$libdir'
	     ;;
	  esac
	fi
      else
	ld_shlibs=no
      fi
      ;;

    os2*)
      hardcode_libdir_flag_spec='-L$libdir'
      hardcode_minus_L=yes
      allow_undefined_flag=unsupported
      archive_cmds='$ECHO "LIBRARY $libname INITINSTANCE" > $output_objdir/$libname.def~$ECHO "DESCRIPTION \"$libname\"" >> $output_objdir/$libname.def~echo DATA >> $output_objdir/$libname.def~echo " SINGLE NONSHARED" >> $output_objdir/$libname.def~echo EXPORTS >> $output_objdir/$libname.def~emxexp $libobjs >> $output_objdir/$libname.def~$CC -Zdll -Zcrtdll -o $lib $libobjs $deplibs $compiler_flags $output_objdir/$libname.def'
      old_archive_from_new_cmds='emximp -o $output_objdir/$libname.a $output_objdir/$libname.def'
      ;;

    osf3*)
      if test "$GCC" = yes; then
	allow_undefined_flag=' ${wl}-expect_unresolved ${wl}\*'
	archive_cmds='$CC -shared${allow_undefined_flag} $libobjs $deplibs $compiler_flags ${wl}-soname ${wl}$soname `test -n "$verstring" && func_echo_all "${wl}-set_version ${wl}$verstring"` ${wl}-update_registry ${wl}${output_objdir}/so_locations -o $lib'
      else
	allow_undefined_flag=' -expect_unresolved \*'
	archive_cmds='$CC -shared${allow_undefined_flag} $libobjs $deplibs $compiler_flags -soname $soname `test -n "$verstring" && func_echo_all "-set_version $verstring"` -update_registry ${output_objdir}/so_locations -o $lib'
      fi
      archive_cmds_need_lc='no'
      hardcode_libdir_flag_spec='${wl}-rpath ${wl}$libdir'
      hardcode_libdir_separator=:
      ;;

    osf4* | osf5*)	# as osf3* with the addition of -msym flag
      if test "$GCC" = yes; then
	allow_undefined_flag=' ${wl}-expect_unresolved ${wl}\*'
	archive_cmds='$CC -shared${allow_undefined_flag} $pic_flag $libobjs $deplibs $compiler_flags ${wl}-msym ${wl}-soname ${wl}$soname `test -n "$verstring" && func_echo_all "${wl}-set_version ${wl}$verstring"` ${wl}-update_registry ${wl}${output_objdir}/so_locations -o $lib'
	hardcode_libdir_flag_spec='${wl}-rpath ${wl}$libdir'
      else
	allow_undefined_flag=' -expect_unresolved \*'
	archive_cmds='$CC -shared${allow_undefined_flag} $libobjs $deplibs $compiler_flags -msym -soname $soname `test -n "$verstring" && func_echo_all "-set_version $verstring"` -update_registry ${output_objdir}/so_locations -o $lib'
	archive_expsym_cmds='for i in `cat $export_symbols`; do printf "%s %s\\n" -exported_symbol "\$i" >> $lib.exp; done; printf "%s\\n" "-hidden">> $lib.exp~
	$CC -shared${allow_undefined_flag} ${wl}-input ${wl}$lib.exp $compiler_flags $libobjs $deplibs -soname $soname `test -n "$verstring" && $ECHO "-set_version $verstring"` -update_registry ${output_objdir}/so_locations -o $lib~$RM $lib.exp'

	# Both c and cxx compiler support -rpath directly
	hardcode_libdir_flag_spec='-rpath $libdir'
      fi
      archive_cmds_need_lc='no'
      hardcode_libdir_separator=:
      ;;

    solaris*)
      no_undefined_flag=' -z defs'
      if test "$GCC" = yes; then
	wlarc='${wl}'
	archive_cmds='$CC -shared $pic_flag ${wl}-z ${wl}text ${wl}-h ${wl}$soname -o $lib $libobjs $deplibs $compiler_flags'
	archive_expsym_cmds='echo "{ global:" > $lib.exp~cat $export_symbols | $SED -e "s/\(.*\)/\1;/" >> $lib.exp~echo "local: *; };" >> $lib.exp~
	  $CC -shared $pic_flag ${wl}-z ${wl}text ${wl}-M ${wl}$lib.exp ${wl}-h ${wl}$soname -o $lib $libobjs $deplibs $compiler_flags~$RM $lib.exp'
      else
	case `$CC -V 2>&1` in
	*"Compilers 5.0"*)
	  wlarc=''
	  archive_cmds='$LD -G${allow_undefined_flag} -h $soname -o $lib $libobjs $deplibs $linker_flags'
	  archive_expsym_cmds='echo "{ global:" > $lib.exp~cat $export_symbols | $SED -e "s/\(.*\)/\1;/" >> $lib.exp~echo "local: *; };" >> $lib.exp~
	  $LD -G${allow_undefined_flag} -M $lib.exp -h $soname -o $lib $libobjs $deplibs $linker_flags~$RM $lib.exp'
	  ;;
	*)
	  wlarc='${wl}'
	  archive_cmds='$CC -G${allow_undefined_flag} -h $soname -o $lib $libobjs $deplibs $compiler_flags'
	  archive_expsym_cmds='echo "{ global:" > $lib.exp~cat $export_symbols | $SED -e "s/\(.*\)/\1;/" >> $lib.exp~echo "local: *; };" >> $lib.exp~
	  $CC -G${allow_undefined_flag} -M $lib.exp -h $soname -o $lib $libobjs $deplibs $compiler_flags~$RM $lib.exp'
	  ;;
	esac
      fi
      hardcode_libdir_flag_spec='-R$libdir'
      hardcode_shlibpath_var=no
      case $host_os in
      solaris2.[0-5] | solaris2.[0-5].*) ;;
      *)
	# The compiler driver will combine and reorder linker options,
	# but understands `-z linker_flag'.  GCC discards it without `$wl',
	# but is careful enough not to reorder.
	# Supported since Solaris 2.6 (maybe 2.5.1?)
	if test "$GCC" = yes; then
	  whole_archive_flag_spec='${wl}-z ${wl}allextract$convenience ${wl}-z ${wl}defaultextract'
	else
	  whole_archive_flag_spec='-z allextract$convenience -z defaultextract'
	fi
	;;
      esac
      link_all_deplibs=yes
      ;;

    sunos4*)
      if test "x$host_vendor" = xsequent; then
	# Use $CC to link under sequent, because it throws in some extra .o
	# files that make .init and .fini sections work.
	archive_cmds='$CC -G ${wl}-h $soname -o $lib $libobjs $deplibs $compiler_flags'
      else
	archive_cmds='$LD -assert pure-text -Bstatic -o $lib $libobjs $deplibs $linker_flags'
      fi
      hardcode_libdir_flag_spec='-L$libdir'
      hardcode_direct=yes
      hardcode_minus_L=yes
      hardcode_shlibpath_var=no
      ;;

    sysv4)
      case $host_vendor in
	sni)
	  archive_cmds='$LD -G -h $soname -o $lib $libobjs $deplibs $linker_flags'
	  hardcode_direct=yes # is this really true???
	;;
	siemens)
	  ## LD is ld it makes a PLAMLIB
	  ## CC just makes a GrossModule.
	  archive_cmds='$LD -G -o $lib $libobjs $deplibs $linker_flags'
	  reload_cmds='$CC -r -o $output$reload_objs'
	  hardcode_direct=no
        ;;
	motorola)
	  archive_cmds='$LD -G -h $soname -o $lib $libobjs $deplibs $linker_flags'
	  hardcode_direct=no #Motorola manual says yes, but my tests say they lie
	;;
      esac
      runpath_var='LD_RUN_PATH'
      hardcode_shlibpath_var=no
      ;;

    sysv4.3*)
      archive_cmds='$LD -G -h $soname -o $lib $libobjs $deplibs $linker_flags'
      hardcode_shlibpath_var=no
      export_dynamic_flag_spec='-Bexport'
      ;;

    sysv4*MP*)
      if test -d /usr/nec; then
	archive_cmds='$LD -G -h $soname -o $lib $libobjs $deplibs $linker_flags'
	hardcode_shlibpath_var=no
	runpath_var=LD_RUN_PATH
	hardcode_runpath_var=yes
	ld_shlibs=yes
      fi
      ;;

    sysv4*uw2* | sysv5OpenUNIX* | sysv5UnixWare7.[01].[10]* | unixware7* | sco3.2v5.0.[024]*)
      no_undefined_flag='${wl}-z,text'
      archive_cmds_need_lc=no
      hardcode_shlibpath_var=no
      runpath_var='LD_RUN_PATH'

      if test "$GCC" = yes; then
	archive_cmds='$CC -shared ${wl}-h,$soname -o $lib $libobjs $deplibs $compiler_flags'
	archive_expsym_cmds='$CC -shared ${wl}-Bexport:$export_symbols ${wl}-h,$soname -o $lib $libobjs $deplibs $compiler_flags'
      else
	archive_cmds='$CC -G ${wl}-h,$soname -o $lib $libobjs $deplibs $compiler_flags'
	archive_expsym_cmds='$CC -G ${wl}-Bexport:$export_symbols ${wl}-h,$soname -o $lib $libobjs $deplibs $compiler_flags'
      fi
      ;;

    sysv5* | sco3.2v5* | sco5v6*)
      # Note: We can NOT use -z defs as we might desire, because we do not
      # link with -lc, and that would cause any symbols used from libc to
      # always be unresolved, which means just about no library would
      # ever link correctly.  If we're not using GNU ld we use -z text
      # though, which does catch some bad symbols but isn't as heavy-handed
      # as -z defs.
      no_undefined_flag='${wl}-z,text'
      allow_undefined_flag='${wl}-z,nodefs'
      archive_cmds_need_lc=no
      hardcode_shlibpath_var=no
      hardcode_libdir_flag_spec='${wl}-R,$libdir'
      hardcode_libdir_separator=':'
      link_all_deplibs=yes
      export_dynamic_flag_spec='${wl}-Bexport'
      runpath_var='LD_RUN_PATH'

      if test "$GCC" = yes; then
	archive_cmds='$CC -shared ${wl}-h,$soname -o $lib $libobjs $deplibs $compiler_flags'
	archive_expsym_cmds='$CC -shared ${wl}-Bexport:$export_symbols ${wl}-h,$soname -o $lib $libobjs $deplibs $compiler_flags'
      else
	archive_cmds='$CC -G ${wl}-h,$soname -o $lib $libobjs $deplibs $compiler_flags'
	archive_expsym_cmds='$CC -G ${wl}-Bexport:$export_symbols ${wl}-h,$soname -o $lib $libobjs $deplibs $compiler_flags'
      fi
      ;;

    uts4*)
      archive_cmds='$LD -G -h $soname -o $lib $libobjs $deplibs $linker_flags'
      hardcode_libdir_flag_spec='-L$libdir'
      hardcode_shlibpath_var=no
      ;;

    *)
      ld_shlibs=no
      ;;
    esac

    if test x$host_vendor = xsni; then
      case $host in
      sysv4 | sysv4.2uw2* | sysv4.3* | sysv5*)
	export_dynamic_flag_spec='${wl}-Blargedynsym'
	;;
      esac
    fi
  fi

{ $as_echo "$as_me:${as_lineno-$LINENO}: result: $ld_shlibs" >&5
$as_echo "$ld_shlibs" >&6; }
test "$ld_shlibs" = no && can_build_shared=no

with_gnu_ld=$with_gnu_ld

#
# Do we need to explicitly link libc?
#
case "x$archive_cmds_need_lc" in
x|xyes)
  # Assume -lc should be added
  archive_cmds_need_lc=yes

  if test "$enable_shared" = yes && test "$GCC" = yes; then
    case $archive_cmds in
    *'~'*)
      # FIXME: we may have to deal with multi-command sequences.
      ;;
    '$CC '*)
      # Test whether the compiler implicitly links with -lc since on some
      # systems, -lgcc has to come before -lc. If gcc already passes -lc
      # to ld, don't add -lc before -lgcc.
      { $as_echo "$as_me:${as_lineno-$LINENO}: checking whether -lc should be explicitly linked in" >&5
$as_echo_n "checking whether -lc should be explicitly linked in... " >&6; }
if ${lt_cv_archive_cmds_need_lc+:} false; then :
  $as_echo_n "(cached) " >&6
else
  $RM conftest*
	echo "$lt_simple_compile_test_code" > conftest.$ac_ext

	if { { eval echo "\"\$as_me\":${as_lineno-$LINENO}: \"$ac_compile\""; } >&5
  (eval $ac_compile) 2>&5
  ac_status=$?
  $as_echo "$as_me:${as_lineno-$LINENO}: \$? = $ac_status" >&5
  test $ac_status = 0; } 2>conftest.err; then
	  soname=conftest
	  lib=conftest
	  libobjs=conftest.$ac_objext
	  deplibs=
	  wl=$lt_prog_compiler_wl
	  pic_flag=$lt_prog_compiler_pic
	  compiler_flags=-v
	  linker_flags=-v
	  verstring=
	  output_objdir=.
	  libname=conftest
	  lt_save_allow_undefined_flag=$allow_undefined_flag
	  allow_undefined_flag=
	  if { { eval echo "\"\$as_me\":${as_lineno-$LINENO}: \"$archive_cmds 2\>\&1 \| $GREP \" -lc \" \>/dev/null 2\>\&1\""; } >&5
  (eval $archive_cmds 2\>\&1 \| $GREP \" -lc \" \>/dev/null 2\>\&1) 2>&5
  ac_status=$?
  $as_echo "$as_me:${as_lineno-$LINENO}: \$? = $ac_status" >&5
  test $ac_status = 0; }
	  then
	    lt_cv_archive_cmds_need_lc=no
	  else
	    lt_cv_archive_cmds_need_lc=yes
	  fi
	  allow_undefined_flag=$lt_save_allow_undefined_flag
	else
	  cat conftest.err 1>&5
	fi
	$RM conftest*

fi
{ $as_echo "$as_me:${as_lineno-$LINENO}: result: $lt_cv_archive_cmds_need_lc" >&5
$as_echo "$lt_cv_archive_cmds_need_lc" >&6; }
      archive_cmds_need_lc=$lt_cv_archive_cmds_need_lc
      ;;
    esac
  fi
  ;;
esac

{ $as_echo "$as_me:${as_lineno-$LINENO}: checking dynamic linker characteristics" >&5
$as_echo_n "checking dynamic linker characteristics... " >&6; }

if test "$GCC" = yes; then
  case $host_os in
    darwin*) lt_awk_arg="/^libraries:/,/LR/" ;;
    *) lt_awk_arg="/^libraries:/" ;;
  esac
  case $host_os in
    mingw* | cegcc*) lt_sed_strip_eq="s,=\([A-Za-z]:\),\1,g" ;;
    *) lt_sed_strip_eq="s,=/,/,g" ;;
  esac
  lt_search_path_spec=`$CC -print-search-dirs | awk $lt_awk_arg | $SED -e "s/^libraries://" -e $lt_sed_strip_eq`
  case $lt_search_path_spec in
  *\;*)
    # if the path contains ";" then we assume it to be the separator
    # otherwise default to the standard path separator (i.e. ":") - it is
    # assumed that no part of a normal pathname contains ";" but that should
    # okay in the real world where ";" in dirpaths is itself problematic.
    lt_search_path_spec=`$ECHO "$lt_search_path_spec" | $SED 's/;/ /g'`
    ;;
  *)
    lt_search_path_spec=`$ECHO "$lt_search_path_spec" | $SED "s/$PATH_SEPARATOR/ /g"`
    ;;
  esac
  # Ok, now we have the path, separated by spaces, we can step through it
  # and add multilib dir if necessary.
  lt_tmp_lt_search_path_spec=
  lt_multi_os_dir=`$CC $CPPFLAGS $CFLAGS $LDFLAGS -print-multi-os-directory 2>/dev/null`
  for lt_sys_path in $lt_search_path_spec; do
    if test -d "$lt_sys_path/$lt_multi_os_dir"; then
      lt_tmp_lt_search_path_spec="$lt_tmp_lt_search_path_spec $lt_sys_path/$lt_multi_os_dir"
    else
      test -d "$lt_sys_path" && \
	lt_tmp_lt_search_path_spec="$lt_tmp_lt_search_path_spec $lt_sys_path"
    fi
  done
  lt_search_path_spec=`$ECHO "$lt_tmp_lt_search_path_spec" | awk '
BEGIN {RS=" "; FS="/|\n";} {
  lt_foo="";
  lt_count=0;
  for (lt_i = NF; lt_i > 0; lt_i--) {
    if ($lt_i != "" && $lt_i != ".") {
      if ($lt_i == "..") {
        lt_count++;
      } else {
        if (lt_count == 0) {
          lt_foo="/" $lt_i lt_foo;
        } else {
          lt_count--;
        }
      }
    }
  }
  if (lt_foo != "") { lt_freq[lt_foo]++; }
  if (lt_freq[lt_foo] == 1) { print lt_foo; }
}'`
  # AWK program above erroneously prepends '/' to C:/dos/paths
  # for these hosts.
  case $host_os in
    mingw* | cegcc*) lt_search_path_spec=`$ECHO "$lt_search_path_spec" |\
      $SED 's,/\([A-Za-z]:\),\1,g'` ;;
  esac
  sys_lib_search_path_spec=`$ECHO "$lt_search_path_spec" | $lt_NL2SP`
else
  sys_lib_search_path_spec="/lib /usr/lib /usr/local/lib"
fi
library_names_spec=
libname_spec='lib$name'
soname_spec=
shrext_cmds=".so"
postinstall_cmds=
postuninstall_cmds=
finish_cmds=
finish_eval=
shlibpath_var=
shlibpath_overrides_runpath=unknown
version_type=none
dynamic_linker="$host_os ld.so"
sys_lib_dlsearch_path_spec="/lib /usr/lib"
need_lib_prefix=unknown
hardcode_into_libs=no

# when you set need_version to no, make sure it does not cause -set_version
# flags to be left without arguments
need_version=unknown

case $host_os in
aix3*)
  version_type=linux # correct to gnu/linux during the next big refactor
  library_names_spec='${libname}${release}${shared_ext}$versuffix $libname.a'
  shlibpath_var=LIBPATH

  # AIX 3 has no versioning support, so we append a major version to the name.
  soname_spec='${libname}${release}${shared_ext}$major'
  ;;

aix[4-9]*)
  version_type=linux # correct to gnu/linux during the next big refactor
  need_lib_prefix=no
  need_version=no
  hardcode_into_libs=yes
  if test "$host_cpu" = ia64; then
    # AIX 5 supports IA64
    library_names_spec='${libname}${release}${shared_ext}$major ${libname}${release}${shared_ext}$versuffix $libname${shared_ext}'
    shlibpath_var=LD_LIBRARY_PATH
  else
    # With GCC up to 2.95.x, collect2 would create an import file
    # for dependence libraries.  The import file would start with
    # the line `#! .'.  This would cause the generated library to
    # depend on `.', always an invalid library.  This was fixed in
    # development snapshots of GCC prior to 3.0.
    case $host_os in
      aix4 | aix4.[01] | aix4.[01].*)
      if { echo '#if __GNUC__ > 2 || (__GNUC__ == 2 && __GNUC_MINOR__ >= 97)'
	   echo ' yes '
	   echo '#endif'; } | ${CC} -E - | $GREP yes > /dev/null; then
	:
      else
	can_build_shared=no
      fi
      ;;
    esac
    # AIX (on Power*) has no versioning support, so currently we can not hardcode correct
    # soname into executable. Probably we can add versioning support to
    # collect2, so additional links can be useful in future.
    if test "$aix_use_runtimelinking" = yes; then
      # If using run time linking (on AIX 4.2 or later) use lib.so
      # instead of lib.a to let people know that these are not
      # typical AIX shared libraries.
      library_names_spec='${libname}${release}${shared_ext}$versuffix ${libname}${release}${shared_ext}$major $libname${shared_ext}'
    else
      # We preserve .a as extension for shared libraries through AIX4.2
      # and later when we are not doing run time linking.
      library_names_spec='${libname}${release}.a $libname.a'
      soname_spec='${libname}${release}${shared_ext}$major'
    fi
    shlibpath_var=LIBPATH
  fi
  ;;

amigaos*)
  case $host_cpu in
  powerpc)
    # Since July 2007 AmigaOS4 officially supports .so libraries.
    # When compiling the executable, add -use-dynld -Lsobjs: to the compileline.
    library_names_spec='${libname}${release}${shared_ext}$versuffix ${libname}${release}${shared_ext}$major $libname${shared_ext}'
    ;;
  m68k)
    library_names_spec='$libname.ixlibrary $libname.a'
    # Create ${libname}_ixlibrary.a entries in /sys/libs.
    finish_eval='for lib in `ls $libdir/*.ixlibrary 2>/dev/null`; do libname=`func_echo_all "$lib" | $SED '\''s%^.*/\([^/]*\)\.ixlibrary$%\1%'\''`; test $RM /sys/libs/${libname}_ixlibrary.a; $show "cd /sys/libs && $LN_S $lib ${libname}_ixlibrary.a"; cd /sys/libs && $LN_S $lib ${libname}_ixlibrary.a || exit 1; done'
    ;;
  esac
  ;;

beos*)
  library_names_spec='${libname}${shared_ext}'
  dynamic_linker="$host_os ld.so"
  shlibpath_var=LIBRARY_PATH
  ;;

bsdi[45]*)
  version_type=linux # correct to gnu/linux during the next big refactor
  need_version=no
  library_names_spec='${libname}${release}${shared_ext}$versuffix ${libname}${release}${shared_ext}$major $libname${shared_ext}'
  soname_spec='${libname}${release}${shared_ext}$major'
  finish_cmds='PATH="\$PATH:/sbin" ldconfig $libdir'
  shlibpath_var=LD_LIBRARY_PATH
  sys_lib_search_path_spec="/shlib /usr/lib /usr/X11/lib /usr/contrib/lib /lib /usr/local/lib"
  sys_lib_dlsearch_path_spec="/shlib /usr/lib /usr/local/lib"
  # the default ld.so.conf also contains /usr/contrib/lib and
  # /usr/X11R6/lib (/usr/X11 is a link to /usr/X11R6), but let us allow
  # libtool to hard-code these into programs
  ;;

cygwin* | mingw* | pw32* | cegcc*)
  version_type=windows
  shrext_cmds=".dll"
  need_version=no
  need_lib_prefix=no

  case $GCC,$cc_basename in
  yes,*)
    # gcc
    library_names_spec='$libname.dll.a'
    # DLL is installed to $(libdir)/../bin by postinstall_cmds
    postinstall_cmds='base_file=`basename \${file}`~
      dlpath=`$SHELL 2>&1 -c '\''. $dir/'\''\${base_file}'\''i; echo \$dlname'\''`~
      dldir=$destdir/`dirname \$dlpath`~
      test -d \$dldir || mkdir -p \$dldir~
      $install_prog $dir/$dlname \$dldir/$dlname~
      chmod a+x \$dldir/$dlname~
      if test -n '\''$stripme'\'' && test -n '\''$striplib'\''; then
        eval '\''$striplib \$dldir/$dlname'\'' || exit \$?;
      fi'
    postuninstall_cmds='dldll=`$SHELL 2>&1 -c '\''. $file; echo \$dlname'\''`~
      dlpath=$dir/\$dldll~
       $RM \$dlpath'
    shlibpath_overrides_runpath=yes

    case $host_os in
    cygwin*)
      # Cygwin DLLs use 'cyg' prefix rather than 'lib'
      soname_spec='`echo ${libname} | sed -e 's/^lib/cyg/'``echo ${release} | $SED -e 's/[.]/-/g'`${versuffix}${shared_ext}'

      sys_lib_search_path_spec="$sys_lib_search_path_spec /usr/lib/w32api"
      ;;
    mingw* | cegcc*)
      # MinGW DLLs use traditional 'lib' prefix
      soname_spec='${libname}`echo ${release} | $SED -e 's/[.]/-/g'`${versuffix}${shared_ext}'
      ;;
    pw32*)
      # pw32 DLLs use 'pw' prefix rather than 'lib'
      library_names_spec='`echo ${libname} | sed -e 's/^lib/pw/'``echo ${release} | $SED -e 's/[.]/-/g'`${versuffix}${shared_ext}'
      ;;
    esac
    dynamic_linker='Win32 ld.exe'
    ;;

  *,cl*)
    # Native MSVC
    libname_spec='$name'
    soname_spec='${libname}`echo ${release} | $SED -e 's/[.]/-/g'`${versuffix}${shared_ext}'
    library_names_spec='${libname}.dll.lib'

    case $build_os in
    mingw*)
      sys_lib_search_path_spec=
      lt_save_ifs=$IFS
      IFS=';'
      for lt_path in $LIB
      do
        IFS=$lt_save_ifs
        # Let DOS variable expansion print the short 8.3 style file name.
        lt_path=`cd "$lt_path" 2>/dev/null && cmd //C "for %i in (".") do @echo %~si"`
        sys_lib_search_path_spec="$sys_lib_search_path_spec $lt_path"
      done
      IFS=$lt_save_ifs
      # Convert to MSYS style.
      sys_lib_search_path_spec=`$ECHO "$sys_lib_search_path_spec" | sed -e 's|\\\\|/|g' -e 's| \\([a-zA-Z]\\):| /\\1|g' -e 's|^ ||'`
      ;;
    cygwin*)
      # Convert to unix form, then to dos form, then back to unix form
      # but this time dos style (no spaces!) so that the unix form looks
      # like /cygdrive/c/PROGRA~1:/cygdr...
      sys_lib_search_path_spec=`cygpath --path --unix "$LIB"`
      sys_lib_search_path_spec=`cygpath --path --dos "$sys_lib_search_path_spec" 2>/dev/null`
      sys_lib_search_path_spec=`cygpath --path --unix "$sys_lib_search_path_spec" | $SED -e "s/$PATH_SEPARATOR/ /g"`
      ;;
    *)
      sys_lib_search_path_spec="$LIB"
      if $ECHO "$sys_lib_search_path_spec" | $GREP ';[c-zC-Z]:/' >/dev/null; then
        # It is most probably a Windows format PATH.
        sys_lib_search_path_spec=`$ECHO "$sys_lib_search_path_spec" | $SED -e 's/;/ /g'`
      else
        sys_lib_search_path_spec=`$ECHO "$sys_lib_search_path_spec" | $SED -e "s/$PATH_SEPARATOR/ /g"`
      fi
      # FIXME: find the short name or the path components, as spaces are
      # common. (e.g. "Program Files" -> "PROGRA~1")
      ;;
    esac

    # DLL is installed to $(libdir)/../bin by postinstall_cmds
    postinstall_cmds='base_file=`basename \${file}`~
      dlpath=`$SHELL 2>&1 -c '\''. $dir/'\''\${base_file}'\''i; echo \$dlname'\''`~
      dldir=$destdir/`dirname \$dlpath`~
      test -d \$dldir || mkdir -p \$dldir~
      $install_prog $dir/$dlname \$dldir/$dlname'
    postuninstall_cmds='dldll=`$SHELL 2>&1 -c '\''. $file; echo \$dlname'\''`~
      dlpath=$dir/\$dldll~
       $RM \$dlpath'
    shlibpath_overrides_runpath=yes
    dynamic_linker='Win32 link.exe'
    ;;

  *)
    # Assume MSVC wrapper
    library_names_spec='${libname}`echo ${release} | $SED -e 's/[.]/-/g'`${versuffix}${shared_ext} $libname.lib'
    dynamic_linker='Win32 ld.exe'
    ;;
  esac
  # FIXME: first we should search . and the directory the executable is in
  shlibpath_var=PATH
  ;;

darwin* | rhapsody*)
  dynamic_linker="$host_os dyld"
  version_type=darwin
  need_lib_prefix=no
  need_version=no
  library_names_spec='${libname}${release}${major}$shared_ext ${libname}$shared_ext'
  soname_spec='${libname}${release}${major}$shared_ext'
  shlibpath_overrides_runpath=yes
  shlibpath_var=DYLD_LIBRARY_PATH
  shrext_cmds='`test .$module = .yes && echo .so || echo .dylib`'

  sys_lib_search_path_spec="$sys_lib_search_path_spec /usr/local/lib"
  sys_lib_dlsearch_path_spec='/usr/local/lib /lib /usr/lib'
  ;;

dgux*)
  version_type=linux # correct to gnu/linux during the next big refactor
  need_lib_prefix=no
  need_version=no
  library_names_spec='${libname}${release}${shared_ext}$versuffix ${libname}${release}${shared_ext}$major $libname$shared_ext'
  soname_spec='${libname}${release}${shared_ext}$major'
  shlibpath_var=LD_LIBRARY_PATH
  ;;

freebsd* | dragonfly*)
  # DragonFly does not have aout.  When/if they implement a new
  # versioning mechanism, adjust this.
if test -x /usr/bin/objformat; then objformat=`/usr/bin/objformat` else case $host_os in freebsd[23].*) objformat=aout ;; *) objformat=elf ;; esac fi version_type=freebsd-$objformat case $version_type in freebsd-elf*) library_names_spec='${libname}${release}${shared_ext}$versuffix ${libname}${release}${shared_ext} $libname${shared_ext}' need_version=no need_lib_prefix=no ;; freebsd-*) library_names_spec='${libname}${release}${shared_ext}$versuffix $libname${shared_ext}$versuffix' need_version=yes ;; esac shlibpath_var=LD_LIBRARY_PATH case $host_os in freebsd2.*) shlibpath_overrides_runpath=yes ;; freebsd3.[01]* | freebsdelf3.[01]*) shlibpath_overrides_runpath=yes hardcode_into_libs=yes ;; freebsd3.[2-9]* | freebsdelf3.[2-9]* | \ freebsd4.[0-5] | freebsdelf4.[0-5] | freebsd4.1.1 | freebsdelf4.1.1) shlibpath_overrides_runpath=no hardcode_into_libs=yes ;; *) # from 4.6 on, and DragonFly shlibpath_overrides_runpath=yes hardcode_into_libs=yes ;; esac ;; haiku*) version_type=linux # correct to gnu/linux during the next big refactor need_lib_prefix=no need_version=no dynamic_linker="$host_os runtime_loader" library_names_spec='${libname}${release}${shared_ext}$versuffix ${libname}${release}${shared_ext}${major} ${libname}${shared_ext}' soname_spec='${libname}${release}${shared_ext}$major' shlibpath_var=LIBRARY_PATH shlibpath_overrides_runpath=yes sys_lib_dlsearch_path_spec='/boot/home/config/lib /boot/common/lib /boot/system/lib' hardcode_into_libs=yes ;; hpux9* | hpux10* | hpux11*) # Give a soname corresponding to the major version so that dld.sl refuses to # link against other versions. version_type=sunos need_lib_prefix=no need_version=no case $host_cpu in ia64*) shrext_cmds='.so' hardcode_into_libs=yes dynamic_linker="$host_os dld.so" shlibpath_var=LD_LIBRARY_PATH shlibpath_overrides_runpath=yes # Unless +noenvvar is specified. 
library_names_spec='${libname}${release}${shared_ext}$versuffix ${libname}${release}${shared_ext}$major $libname${shared_ext}' soname_spec='${libname}${release}${shared_ext}$major' if test "X$HPUX_IA64_MODE" = X32; then sys_lib_search_path_spec="/usr/lib/hpux32 /usr/local/lib/hpux32 /usr/local/lib" else sys_lib_search_path_spec="/usr/lib/hpux64 /usr/local/lib/hpux64" fi sys_lib_dlsearch_path_spec=$sys_lib_search_path_spec ;; hppa*64*) shrext_cmds='.sl' hardcode_into_libs=yes dynamic_linker="$host_os dld.sl" shlibpath_var=LD_LIBRARY_PATH # How should we handle SHLIB_PATH shlibpath_overrides_runpath=yes # Unless +noenvvar is specified. library_names_spec='${libname}${release}${shared_ext}$versuffix ${libname}${release}${shared_ext}$major $libname${shared_ext}' soname_spec='${libname}${release}${shared_ext}$major' sys_lib_search_path_spec="/usr/lib/pa20_64 /usr/ccs/lib/pa20_64" sys_lib_dlsearch_path_spec=$sys_lib_search_path_spec ;; *) shrext_cmds='.sl' dynamic_linker="$host_os dld.sl" shlibpath_var=SHLIB_PATH shlibpath_overrides_runpath=no # +s is required to enable SHLIB_PATH library_names_spec='${libname}${release}${shared_ext}$versuffix ${libname}${release}${shared_ext}$major $libname${shared_ext}' soname_spec='${libname}${release}${shared_ext}$major' ;; esac # HP-UX runs *really* slowly unless shared libraries are mode 555, ... 
postinstall_cmds='chmod 555 $lib' # or fails outright, so override atomically: install_override_mode=555 ;; interix[3-9]*) version_type=linux # correct to gnu/linux during the next big refactor need_lib_prefix=no need_version=no library_names_spec='${libname}${release}${shared_ext}$versuffix ${libname}${release}${shared_ext}$major ${libname}${shared_ext}' soname_spec='${libname}${release}${shared_ext}$major' dynamic_linker='Interix 3.x ld.so.1 (PE, like ELF)' shlibpath_var=LD_LIBRARY_PATH shlibpath_overrides_runpath=no hardcode_into_libs=yes ;; irix5* | irix6* | nonstopux*) case $host_os in nonstopux*) version_type=nonstopux ;; *) if test "$lt_cv_prog_gnu_ld" = yes; then version_type=linux # correct to gnu/linux during the next big refactor else version_type=irix fi ;; esac need_lib_prefix=no need_version=no soname_spec='${libname}${release}${shared_ext}$major' library_names_spec='${libname}${release}${shared_ext}$versuffix ${libname}${release}${shared_ext}$major ${libname}${release}${shared_ext} $libname${shared_ext}' case $host_os in irix5* | nonstopux*) libsuff= shlibsuff= ;; *) case $LD in # libtool.m4 will add one of these switches to LD *-32|*"-32 "|*-melf32bsmip|*"-melf32bsmip ") libsuff= shlibsuff= libmagic=32-bit;; *-n32|*"-n32 "|*-melf32bmipn32|*"-melf32bmipn32 ") libsuff=32 shlibsuff=N32 libmagic=N32;; *-64|*"-64 "|*-melf64bmip|*"-melf64bmip ") libsuff=64 shlibsuff=64 libmagic=64-bit;; *) libsuff= shlibsuff= libmagic=never-match;; esac ;; esac shlibpath_var=LD_LIBRARY${shlibsuff}_PATH shlibpath_overrides_runpath=no sys_lib_search_path_spec="/usr/lib${libsuff} /lib${libsuff} /usr/local/lib${libsuff}" sys_lib_dlsearch_path_spec="/usr/lib${libsuff} /lib${libsuff}" hardcode_into_libs=yes ;; # No shared lib support for Linux oldld, aout, or coff. linux*oldld* | linux*aout* | linux*coff*) dynamic_linker=no ;; # This must be glibc/ELF. 
linux* | k*bsd*-gnu | kopensolaris*-gnu | gnu*) version_type=linux # correct to gnu/linux during the next big refactor need_lib_prefix=no need_version=no library_names_spec='${libname}${release}${shared_ext}$versuffix ${libname}${release}${shared_ext}$major $libname${shared_ext}' soname_spec='${libname}${release}${shared_ext}$major' finish_cmds='PATH="\$PATH:/sbin" ldconfig -n $libdir' shlibpath_var=LD_LIBRARY_PATH shlibpath_overrides_runpath=no # Some binutils ld are patched to set DT_RUNPATH if ${lt_cv_shlibpath_overrides_runpath+:} false; then : $as_echo_n "(cached) " >&6 else lt_cv_shlibpath_overrides_runpath=no save_LDFLAGS=$LDFLAGS save_libdir=$libdir eval "libdir=/foo; wl=\"$lt_prog_compiler_wl\"; \ LDFLAGS=\"\$LDFLAGS $hardcode_libdir_flag_spec\"" cat confdefs.h - <<_ACEOF >conftest.$ac_ext /* end confdefs.h. */ int main () { ; return 0; } _ACEOF if ac_fn_c_try_link "$LINENO"; then : if ($OBJDUMP -p conftest$ac_exeext) 2>/dev/null | grep "RUNPATH.*$libdir" >/dev/null; then : lt_cv_shlibpath_overrides_runpath=yes fi fi rm -f core conftest.err conftest.$ac_objext \ conftest$ac_exeext conftest.$ac_ext LDFLAGS=$save_LDFLAGS libdir=$save_libdir fi shlibpath_overrides_runpath=$lt_cv_shlibpath_overrides_runpath # This implies no fast_install, which is unacceptable. # Some rework will be needed to allow for fast_install # before this can be enabled. hardcode_into_libs=yes # Append ld.so.conf contents to the search path if test -f /etc/ld.so.conf; then lt_ld_extra=`awk '/^include / { system(sprintf("cd /etc; cat %s 2>/dev/null", \$2)); skip = 1; } { if (!skip) print \$0; skip = 0; }' < /etc/ld.so.conf | $SED -e 's/#.*//;/^[ ]*hwcap[ ]/d;s/[:, ]/ /g;s/=[^=]*$//;s/=[^= ]* / /g;s/"//g;/^$/d' | tr '\n' ' '` sys_lib_dlsearch_path_spec="/lib /usr/lib $lt_ld_extra" fi # We used to test for /lib/ld.so.1 and disable shared libraries on # powerpc, because MkLinux only supported shared libraries with the # GNU dynamic linker. 
Since this was broken with cross compilers, # most powerpc-linux boxes support dynamic linking these days and # people can always --disable-shared, the test was removed, and we # assume the GNU/Linux dynamic linker is in use. dynamic_linker='GNU/Linux ld.so' ;; netbsdelf*-gnu) version_type=linux need_lib_prefix=no need_version=no library_names_spec='${libname}${release}${shared_ext}$versuffix ${libname}${release}${shared_ext}$major ${libname}${shared_ext}' soname_spec='${libname}${release}${shared_ext}$major' shlibpath_var=LD_LIBRARY_PATH shlibpath_overrides_runpath=no hardcode_into_libs=yes dynamic_linker='NetBSD ld.elf_so' ;; netbsd*) version_type=sunos need_lib_prefix=no need_version=no if echo __ELF__ | $CC -E - | $GREP __ELF__ >/dev/null; then library_names_spec='${libname}${release}${shared_ext}$versuffix ${libname}${shared_ext}$versuffix' finish_cmds='PATH="\$PATH:/sbin" ldconfig -m $libdir' dynamic_linker='NetBSD (a.out) ld.so' else library_names_spec='${libname}${release}${shared_ext}$versuffix ${libname}${release}${shared_ext}$major ${libname}${shared_ext}' soname_spec='${libname}${release}${shared_ext}$major' dynamic_linker='NetBSD ld.elf_so' fi shlibpath_var=LD_LIBRARY_PATH shlibpath_overrides_runpath=yes hardcode_into_libs=yes ;; newsos6) version_type=linux # correct to gnu/linux during the next big refactor library_names_spec='${libname}${release}${shared_ext}$versuffix ${libname}${release}${shared_ext}$major $libname${shared_ext}' shlibpath_var=LD_LIBRARY_PATH shlibpath_overrides_runpath=yes ;; *nto* | *qnx*) version_type=qnx need_lib_prefix=no need_version=no library_names_spec='${libname}${release}${shared_ext}$versuffix ${libname}${release}${shared_ext}$major $libname${shared_ext}' soname_spec='${libname}${release}${shared_ext}$major' shlibpath_var=LD_LIBRARY_PATH shlibpath_overrides_runpath=no hardcode_into_libs=yes dynamic_linker='ldqnx.so' ;; openbsd*) version_type=sunos sys_lib_dlsearch_path_spec="/usr/lib" need_lib_prefix=no # Some older 
versions of OpenBSD (3.3 at least) *do* need versioned libs. case $host_os in openbsd3.3 | openbsd3.3.*) need_version=yes ;; *) need_version=no ;; esac library_names_spec='${libname}${release}${shared_ext}$versuffix ${libname}${shared_ext}$versuffix' finish_cmds='PATH="\$PATH:/sbin" ldconfig -m $libdir' shlibpath_var=LD_LIBRARY_PATH if test -z "`echo __ELF__ | $CC -E - | $GREP __ELF__`" || test "$host_os-$host_cpu" = "openbsd2.8-powerpc"; then case $host_os in openbsd2.[89] | openbsd2.[89].*) shlibpath_overrides_runpath=no ;; *) shlibpath_overrides_runpath=yes ;; esac else shlibpath_overrides_runpath=yes fi ;; os2*) libname_spec='$name' shrext_cmds=".dll" need_lib_prefix=no library_names_spec='$libname${shared_ext} $libname.a' dynamic_linker='OS/2 ld.exe' shlibpath_var=LIBPATH ;; osf3* | osf4* | osf5*) version_type=osf need_lib_prefix=no need_version=no soname_spec='${libname}${release}${shared_ext}$major' library_names_spec='${libname}${release}${shared_ext}$versuffix ${libname}${release}${shared_ext}$major $libname${shared_ext}' shlibpath_var=LD_LIBRARY_PATH sys_lib_search_path_spec="/usr/shlib /usr/ccs/lib /usr/lib/cmplrs/cc /usr/lib /usr/local/lib /var/shlib" sys_lib_dlsearch_path_spec="$sys_lib_search_path_spec" ;; rdos*) dynamic_linker=no ;; solaris*) version_type=linux # correct to gnu/linux during the next big refactor need_lib_prefix=no need_version=no library_names_spec='${libname}${release}${shared_ext}$versuffix ${libname}${release}${shared_ext}$major $libname${shared_ext}' soname_spec='${libname}${release}${shared_ext}$major' shlibpath_var=LD_LIBRARY_PATH shlibpath_overrides_runpath=yes hardcode_into_libs=yes # ldd complains unless libraries are executable postinstall_cmds='chmod +x $lib' ;; sunos4*) version_type=sunos library_names_spec='${libname}${release}${shared_ext}$versuffix ${libname}${shared_ext}$versuffix' finish_cmds='PATH="\$PATH:/usr/etc" ldconfig $libdir' shlibpath_var=LD_LIBRARY_PATH shlibpath_overrides_runpath=yes if test "$with_gnu_ld" 
= yes; then need_lib_prefix=no fi need_version=yes ;; sysv4 | sysv4.3*) version_type=linux # correct to gnu/linux during the next big refactor library_names_spec='${libname}${release}${shared_ext}$versuffix ${libname}${release}${shared_ext}$major $libname${shared_ext}' soname_spec='${libname}${release}${shared_ext}$major' shlibpath_var=LD_LIBRARY_PATH case $host_vendor in sni) shlibpath_overrides_runpath=no need_lib_prefix=no runpath_var=LD_RUN_PATH ;; siemens) need_lib_prefix=no ;; motorola) need_lib_prefix=no need_version=no shlibpath_overrides_runpath=no sys_lib_search_path_spec='/lib /usr/lib /usr/ccs/lib' ;; esac ;; sysv4*MP*) if test -d /usr/nec ;then version_type=linux # correct to gnu/linux during the next big refactor library_names_spec='$libname${shared_ext}.$versuffix $libname${shared_ext}.$major $libname${shared_ext}' soname_spec='$libname${shared_ext}.$major' shlibpath_var=LD_LIBRARY_PATH fi ;; sysv5* | sco3.2v5* | sco5v6* | unixware* | OpenUNIX* | sysv4*uw2*) version_type=freebsd-elf need_lib_prefix=no need_version=no library_names_spec='${libname}${release}${shared_ext}$versuffix ${libname}${release}${shared_ext} $libname${shared_ext}' soname_spec='${libname}${release}${shared_ext}$major' shlibpath_var=LD_LIBRARY_PATH shlibpath_overrides_runpath=yes hardcode_into_libs=yes if test "$with_gnu_ld" = yes; then sys_lib_search_path_spec='/usr/local/lib /usr/gnu/lib /usr/ccs/lib /usr/lib /lib' else sys_lib_search_path_spec='/usr/ccs/lib /usr/lib' case $host_os in sco3.2v5*) sys_lib_search_path_spec="$sys_lib_search_path_spec /lib" ;; esac fi sys_lib_dlsearch_path_spec='/usr/lib' ;; tpf*) # TPF is a cross-target only. Preferred cross-host = GNU/Linux. 
version_type=linux # correct to gnu/linux during the next big refactor need_lib_prefix=no need_version=no library_names_spec='${libname}${release}${shared_ext}$versuffix ${libname}${release}${shared_ext}$major $libname${shared_ext}' shlibpath_var=LD_LIBRARY_PATH shlibpath_overrides_runpath=no hardcode_into_libs=yes ;; uts4*) version_type=linux # correct to gnu/linux during the next big refactor library_names_spec='${libname}${release}${shared_ext}$versuffix ${libname}${release}${shared_ext}$major $libname${shared_ext}' soname_spec='${libname}${release}${shared_ext}$major' shlibpath_var=LD_LIBRARY_PATH ;; *) dynamic_linker=no ;; esac { $as_echo "$as_me:${as_lineno-$LINENO}: result: $dynamic_linker" >&5 $as_echo "$dynamic_linker" >&6; } test "$dynamic_linker" = no && can_build_shared=no variables_saved_for_relink="PATH $shlibpath_var $runpath_var" if test "$GCC" = yes; then variables_saved_for_relink="$variables_saved_for_relink GCC_EXEC_PREFIX COMPILER_PATH LIBRARY_PATH" fi if test "${lt_cv_sys_lib_search_path_spec+set}" = set; then sys_lib_search_path_spec="$lt_cv_sys_lib_search_path_spec" fi if test "${lt_cv_sys_lib_dlsearch_path_spec+set}" = set; then sys_lib_dlsearch_path_spec="$lt_cv_sys_lib_dlsearch_path_spec" fi { $as_echo "$as_me:${as_lineno-$LINENO}: checking how to hardcode library paths into programs" >&5 $as_echo_n "checking how to hardcode library paths into programs... " >&6; } hardcode_action= if test -n "$hardcode_libdir_flag_spec" || test -n "$runpath_var" || test "X$hardcode_automatic" = "Xyes" ; then # We can hardcode non-existent directories. if test "$hardcode_direct" != no && # If the only mechanism to avoid hardcoding is shlibpath_var, we # have to relink, otherwise we might link with an installed library # when we should be linking with a yet-to-be-installed one ## test "$_LT_TAGVAR(hardcode_shlibpath_var, )" != no && test "$hardcode_minus_L" != no; then # Linking always hardcodes the temporary library directory. 
hardcode_action=relink else # We can link without hardcoding, and we can hardcode nonexisting dirs. hardcode_action=immediate fi else # We cannot hardcode anything, or else we can only hardcode existing # directories. hardcode_action=unsupported fi { $as_echo "$as_me:${as_lineno-$LINENO}: result: $hardcode_action" >&5 $as_echo "$hardcode_action" >&6; } if test "$hardcode_action" = relink || test "$inherit_rpath" = yes; then # Fast installation is not supported enable_fast_install=no elif test "$shlibpath_overrides_runpath" = yes || test "$enable_shared" = no; then # Fast installation is not necessary enable_fast_install=needless fi if test "x$enable_dlopen" != xyes; then enable_dlopen=unknown enable_dlopen_self=unknown enable_dlopen_self_static=unknown else lt_cv_dlopen=no lt_cv_dlopen_libs= case $host_os in beos*) lt_cv_dlopen="load_add_on" lt_cv_dlopen_libs= lt_cv_dlopen_self=yes ;; mingw* | pw32* | cegcc*) lt_cv_dlopen="LoadLibrary" lt_cv_dlopen_libs= ;; cygwin*) lt_cv_dlopen="dlopen" lt_cv_dlopen_libs= ;; darwin*) # if libdl is installed we need to link against it { $as_echo "$as_me:${as_lineno-$LINENO}: checking for dlopen in -ldl" >&5 $as_echo_n "checking for dlopen in -ldl... " >&6; } if ${ac_cv_lib_dl_dlopen+:} false; then : $as_echo_n "(cached) " >&6 else ac_check_lib_save_LIBS=$LIBS LIBS="-ldl $LIBS" cat confdefs.h - <<_ACEOF >conftest.$ac_ext /* end confdefs.h. */ /* Override any GCC internal prototype to avoid an error. Use char because int might match the return type of a GCC builtin and then its argument prototype would still apply. 
*/ #ifdef __cplusplus extern "C" #endif char dlopen (); int main () { return dlopen (); ; return 0; } _ACEOF if ac_fn_c_try_link "$LINENO"; then : ac_cv_lib_dl_dlopen=yes else ac_cv_lib_dl_dlopen=no fi rm -f core conftest.err conftest.$ac_objext \ conftest$ac_exeext conftest.$ac_ext LIBS=$ac_check_lib_save_LIBS fi { $as_echo "$as_me:${as_lineno-$LINENO}: result: $ac_cv_lib_dl_dlopen" >&5 $as_echo "$ac_cv_lib_dl_dlopen" >&6; } if test "x$ac_cv_lib_dl_dlopen" = xyes; then : lt_cv_dlopen="dlopen" lt_cv_dlopen_libs="-ldl" else lt_cv_dlopen="dyld" lt_cv_dlopen_libs= lt_cv_dlopen_self=yes fi ;; *) ac_fn_c_check_func "$LINENO" "shl_load" "ac_cv_func_shl_load" if test "x$ac_cv_func_shl_load" = xyes; then : lt_cv_dlopen="shl_load" else { $as_echo "$as_me:${as_lineno-$LINENO}: checking for shl_load in -ldld" >&5 $as_echo_n "checking for shl_load in -ldld... " >&6; } if ${ac_cv_lib_dld_shl_load+:} false; then : $as_echo_n "(cached) " >&6 else ac_check_lib_save_LIBS=$LIBS LIBS="-ldld $LIBS" cat confdefs.h - <<_ACEOF >conftest.$ac_ext /* end confdefs.h. */ /* Override any GCC internal prototype to avoid an error. Use char because int might match the return type of a GCC builtin and then its argument prototype would still apply. 
*/ #ifdef __cplusplus extern "C" #endif char shl_load (); int main () { return shl_load (); ; return 0; } _ACEOF if ac_fn_c_try_link "$LINENO"; then : ac_cv_lib_dld_shl_load=yes else ac_cv_lib_dld_shl_load=no fi rm -f core conftest.err conftest.$ac_objext \ conftest$ac_exeext conftest.$ac_ext LIBS=$ac_check_lib_save_LIBS fi { $as_echo "$as_me:${as_lineno-$LINENO}: result: $ac_cv_lib_dld_shl_load" >&5 $as_echo "$ac_cv_lib_dld_shl_load" >&6; } if test "x$ac_cv_lib_dld_shl_load" = xyes; then : lt_cv_dlopen="shl_load" lt_cv_dlopen_libs="-ldld" else ac_fn_c_check_func "$LINENO" "dlopen" "ac_cv_func_dlopen" if test "x$ac_cv_func_dlopen" = xyes; then : lt_cv_dlopen="dlopen" else { $as_echo "$as_me:${as_lineno-$LINENO}: checking for dlopen in -ldl" >&5 $as_echo_n "checking for dlopen in -ldl... " >&6; } if ${ac_cv_lib_dl_dlopen+:} false; then : $as_echo_n "(cached) " >&6 else ac_check_lib_save_LIBS=$LIBS LIBS="-ldl $LIBS" cat confdefs.h - <<_ACEOF >conftest.$ac_ext /* end confdefs.h. */ /* Override any GCC internal prototype to avoid an error. Use char because int might match the return type of a GCC builtin and then its argument prototype would still apply. */ #ifdef __cplusplus extern "C" #endif char dlopen (); int main () { return dlopen (); ; return 0; } _ACEOF if ac_fn_c_try_link "$LINENO"; then : ac_cv_lib_dl_dlopen=yes else ac_cv_lib_dl_dlopen=no fi rm -f core conftest.err conftest.$ac_objext \ conftest$ac_exeext conftest.$ac_ext LIBS=$ac_check_lib_save_LIBS fi { $as_echo "$as_me:${as_lineno-$LINENO}: result: $ac_cv_lib_dl_dlopen" >&5 $as_echo "$ac_cv_lib_dl_dlopen" >&6; } if test "x$ac_cv_lib_dl_dlopen" = xyes; then : lt_cv_dlopen="dlopen" lt_cv_dlopen_libs="-ldl" else { $as_echo "$as_me:${as_lineno-$LINENO}: checking for dlopen in -lsvld" >&5 $as_echo_n "checking for dlopen in -lsvld... 
" >&6; } if ${ac_cv_lib_svld_dlopen+:} false; then : $as_echo_n "(cached) " >&6 else ac_check_lib_save_LIBS=$LIBS LIBS="-lsvld $LIBS" cat confdefs.h - <<_ACEOF >conftest.$ac_ext /* end confdefs.h. */ /* Override any GCC internal prototype to avoid an error. Use char because int might match the return type of a GCC builtin and then its argument prototype would still apply. */ #ifdef __cplusplus extern "C" #endif char dlopen (); int main () { return dlopen (); ; return 0; } _ACEOF if ac_fn_c_try_link "$LINENO"; then : ac_cv_lib_svld_dlopen=yes else ac_cv_lib_svld_dlopen=no fi rm -f core conftest.err conftest.$ac_objext \ conftest$ac_exeext conftest.$ac_ext LIBS=$ac_check_lib_save_LIBS fi { $as_echo "$as_me:${as_lineno-$LINENO}: result: $ac_cv_lib_svld_dlopen" >&5 $as_echo "$ac_cv_lib_svld_dlopen" >&6; } if test "x$ac_cv_lib_svld_dlopen" = xyes; then : lt_cv_dlopen="dlopen" lt_cv_dlopen_libs="-lsvld" else { $as_echo "$as_me:${as_lineno-$LINENO}: checking for dld_link in -ldld" >&5 $as_echo_n "checking for dld_link in -ldld... " >&6; } if ${ac_cv_lib_dld_dld_link+:} false; then : $as_echo_n "(cached) " >&6 else ac_check_lib_save_LIBS=$LIBS LIBS="-ldld $LIBS" cat confdefs.h - <<_ACEOF >conftest.$ac_ext /* end confdefs.h. */ /* Override any GCC internal prototype to avoid an error. Use char because int might match the return type of a GCC builtin and then its argument prototype would still apply. 
*/ #ifdef __cplusplus extern "C" #endif char dld_link (); int main () { return dld_link (); ; return 0; } _ACEOF if ac_fn_c_try_link "$LINENO"; then : ac_cv_lib_dld_dld_link=yes else ac_cv_lib_dld_dld_link=no fi rm -f core conftest.err conftest.$ac_objext \ conftest$ac_exeext conftest.$ac_ext LIBS=$ac_check_lib_save_LIBS fi { $as_echo "$as_me:${as_lineno-$LINENO}: result: $ac_cv_lib_dld_dld_link" >&5 $as_echo "$ac_cv_lib_dld_dld_link" >&6; } if test "x$ac_cv_lib_dld_dld_link" = xyes; then : lt_cv_dlopen="dld_link" lt_cv_dlopen_libs="-ldld" fi fi fi fi fi fi ;; esac if test "x$lt_cv_dlopen" != xno; then enable_dlopen=yes else enable_dlopen=no fi case $lt_cv_dlopen in dlopen) save_CPPFLAGS="$CPPFLAGS" test "x$ac_cv_header_dlfcn_h" = xyes && CPPFLAGS="$CPPFLAGS -DHAVE_DLFCN_H" save_LDFLAGS="$LDFLAGS" wl=$lt_prog_compiler_wl eval LDFLAGS=\"\$LDFLAGS $export_dynamic_flag_spec\" save_LIBS="$LIBS" LIBS="$lt_cv_dlopen_libs $LIBS" { $as_echo "$as_me:${as_lineno-$LINENO}: checking whether a program can dlopen itself" >&5 $as_echo_n "checking whether a program can dlopen itself... " >&6; } if ${lt_cv_dlopen_self+:} false; then : $as_echo_n "(cached) " >&6 else if test "$cross_compiling" = yes; then : lt_cv_dlopen_self=cross else lt_dlunknown=0; lt_dlno_uscore=1; lt_dlneed_uscore=2 lt_status=$lt_dlunknown cat > conftest.$ac_ext <<_LT_EOF #line $LINENO "configure" #include "confdefs.h" #if HAVE_DLFCN_H #include <dlfcn.h> #endif #include <stdio.h> #ifdef RTLD_GLOBAL # define LT_DLGLOBAL RTLD_GLOBAL #else # ifdef DL_GLOBAL # define LT_DLGLOBAL DL_GLOBAL # else # define LT_DLGLOBAL 0 # endif #endif /* We may have to define LT_DLLAZY_OR_NOW in the command line if we find out it does not work in some platform.
*/ #ifndef LT_DLLAZY_OR_NOW # ifdef RTLD_LAZY # define LT_DLLAZY_OR_NOW RTLD_LAZY # else # ifdef DL_LAZY # define LT_DLLAZY_OR_NOW DL_LAZY # else # ifdef RTLD_NOW # define LT_DLLAZY_OR_NOW RTLD_NOW # else # ifdef DL_NOW # define LT_DLLAZY_OR_NOW DL_NOW # else # define LT_DLLAZY_OR_NOW 0 # endif # endif # endif # endif #endif /* When -fvisbility=hidden is used, assume the code has been annotated correspondingly for the symbols needed. */ #if defined(__GNUC__) && (((__GNUC__ == 3) && (__GNUC_MINOR__ >= 3)) || (__GNUC__ > 3)) int fnord () __attribute__((visibility("default"))); #endif int fnord () { return 42; } int main () { void *self = dlopen (0, LT_DLGLOBAL|LT_DLLAZY_OR_NOW); int status = $lt_dlunknown; if (self) { if (dlsym (self,"fnord")) status = $lt_dlno_uscore; else { if (dlsym( self,"_fnord")) status = $lt_dlneed_uscore; else puts (dlerror ()); } /* dlclose (self); */ } else puts (dlerror ()); return status; } _LT_EOF if { { eval echo "\"\$as_me\":${as_lineno-$LINENO}: \"$ac_link\""; } >&5 (eval $ac_link) 2>&5 ac_status=$? $as_echo "$as_me:${as_lineno-$LINENO}: \$? = $ac_status" >&5 test $ac_status = 0; } && test -s conftest${ac_exeext} 2>/dev/null; then (./conftest; exit; ) >&5 2>/dev/null lt_status=$? case x$lt_status in x$lt_dlno_uscore) lt_cv_dlopen_self=yes ;; x$lt_dlneed_uscore) lt_cv_dlopen_self=yes ;; x$lt_dlunknown|x*) lt_cv_dlopen_self=no ;; esac else : # compilation failed lt_cv_dlopen_self=no fi fi rm -fr conftest* fi { $as_echo "$as_me:${as_lineno-$LINENO}: result: $lt_cv_dlopen_self" >&5 $as_echo "$lt_cv_dlopen_self" >&6; } if test "x$lt_cv_dlopen_self" = xyes; then wl=$lt_prog_compiler_wl eval LDFLAGS=\"\$LDFLAGS $lt_prog_compiler_static\" { $as_echo "$as_me:${as_lineno-$LINENO}: checking whether a statically linked program can dlopen itself" >&5 $as_echo_n "checking whether a statically linked program can dlopen itself... 
" >&6; } if ${lt_cv_dlopen_self_static+:} false; then : $as_echo_n "(cached) " >&6 else if test "$cross_compiling" = yes; then : lt_cv_dlopen_self_static=cross else lt_dlunknown=0; lt_dlno_uscore=1; lt_dlneed_uscore=2 lt_status=$lt_dlunknown cat > conftest.$ac_ext <<_LT_EOF #line $LINENO "configure" #include "confdefs.h" #if HAVE_DLFCN_H #include #endif #include #ifdef RTLD_GLOBAL # define LT_DLGLOBAL RTLD_GLOBAL #else # ifdef DL_GLOBAL # define LT_DLGLOBAL DL_GLOBAL # else # define LT_DLGLOBAL 0 # endif #endif /* We may have to define LT_DLLAZY_OR_NOW in the command line if we find out it does not work in some platform. */ #ifndef LT_DLLAZY_OR_NOW # ifdef RTLD_LAZY # define LT_DLLAZY_OR_NOW RTLD_LAZY # else # ifdef DL_LAZY # define LT_DLLAZY_OR_NOW DL_LAZY # else # ifdef RTLD_NOW # define LT_DLLAZY_OR_NOW RTLD_NOW # else # ifdef DL_NOW # define LT_DLLAZY_OR_NOW DL_NOW # else # define LT_DLLAZY_OR_NOW 0 # endif # endif # endif # endif #endif /* When -fvisbility=hidden is used, assume the code has been annotated correspondingly for the symbols needed. */ #if defined(__GNUC__) && (((__GNUC__ == 3) && (__GNUC_MINOR__ >= 3)) || (__GNUC__ > 3)) int fnord () __attribute__((visibility("default"))); #endif int fnord () { return 42; } int main () { void *self = dlopen (0, LT_DLGLOBAL|LT_DLLAZY_OR_NOW); int status = $lt_dlunknown; if (self) { if (dlsym (self,"fnord")) status = $lt_dlno_uscore; else { if (dlsym( self,"_fnord")) status = $lt_dlneed_uscore; else puts (dlerror ()); } /* dlclose (self); */ } else puts (dlerror ()); return status; } _LT_EOF if { { eval echo "\"\$as_me\":${as_lineno-$LINENO}: \"$ac_link\""; } >&5 (eval $ac_link) 2>&5 ac_status=$? $as_echo "$as_me:${as_lineno-$LINENO}: \$? = $ac_status" >&5 test $ac_status = 0; } && test -s conftest${ac_exeext} 2>/dev/null; then (./conftest; exit; ) >&5 2>/dev/null lt_status=$? 
case x$lt_status in x$lt_dlno_uscore) lt_cv_dlopen_self_static=yes ;; x$lt_dlneed_uscore) lt_cv_dlopen_self_static=yes ;; x$lt_dlunknown|x*) lt_cv_dlopen_self_static=no ;; esac else : # compilation failed lt_cv_dlopen_self_static=no fi fi rm -fr conftest* fi { $as_echo "$as_me:${as_lineno-$LINENO}: result: $lt_cv_dlopen_self_static" >&5 $as_echo "$lt_cv_dlopen_self_static" >&6; } fi CPPFLAGS="$save_CPPFLAGS" LDFLAGS="$save_LDFLAGS" LIBS="$save_LIBS" ;; esac case $lt_cv_dlopen_self in yes|no) enable_dlopen_self=$lt_cv_dlopen_self ;; *) enable_dlopen_self=unknown ;; esac case $lt_cv_dlopen_self_static in yes|no) enable_dlopen_self_static=$lt_cv_dlopen_self_static ;; *) enable_dlopen_self_static=unknown ;; esac fi striplib= old_striplib= { $as_echo "$as_me:${as_lineno-$LINENO}: checking whether stripping libraries is possible" >&5 $as_echo_n "checking whether stripping libraries is possible... " >&6; } if test -n "$STRIP" && $STRIP -V 2>&1 | $GREP "GNU strip" >/dev/null; then test -z "$old_striplib" && old_striplib="$STRIP --strip-debug" test -z "$striplib" && striplib="$STRIP --strip-unneeded" { $as_echo "$as_me:${as_lineno-$LINENO}: result: yes" >&5 $as_echo "yes" >&6; } else # FIXME - insert some real tests, host_os isn't really good enough case $host_os in darwin*) if test -n "$STRIP" ; then striplib="$STRIP -x" old_striplib="$STRIP -S" { $as_echo "$as_me:${as_lineno-$LINENO}: result: yes" >&5 $as_echo "yes" >&6; } else { $as_echo "$as_me:${as_lineno-$LINENO}: result: no" >&5 $as_echo "no" >&6; } fi ;; *) { $as_echo "$as_me:${as_lineno-$LINENO}: result: no" >&5 $as_echo "no" >&6; } ;; esac fi # Report which library types will actually be built { $as_echo "$as_me:${as_lineno-$LINENO}: checking if libtool supports shared libraries" >&5 $as_echo_n "checking if libtool supports shared libraries... 
" >&6; } { $as_echo "$as_me:${as_lineno-$LINENO}: result: $can_build_shared" >&5 $as_echo "$can_build_shared" >&6; } { $as_echo "$as_me:${as_lineno-$LINENO}: checking whether to build shared libraries" >&5 $as_echo_n "checking whether to build shared libraries... " >&6; } test "$can_build_shared" = "no" && enable_shared=no # On AIX, shared libraries and static libraries use the same namespace, and # are all built from PIC. case $host_os in aix3*) test "$enable_shared" = yes && enable_static=no if test -n "$RANLIB"; then archive_cmds="$archive_cmds~\$RANLIB \$lib" postinstall_cmds='$RANLIB $lib' fi ;; aix[4-9]*) if test "$host_cpu" != ia64 && test "$aix_use_runtimelinking" = no ; then test "$enable_shared" = yes && enable_static=no fi ;; esac { $as_echo "$as_me:${as_lineno-$LINENO}: result: $enable_shared" >&5 $as_echo "$enable_shared" >&6; } { $as_echo "$as_me:${as_lineno-$LINENO}: checking whether to build static libraries" >&5 $as_echo_n "checking whether to build static libraries... " >&6; } # Make sure either enable_shared or enable_static is yes. test "$enable_shared" = yes || enable_static=yes { $as_echo "$as_me:${as_lineno-$LINENO}: result: $enable_static" >&5 $as_echo "$enable_static" >&6; } fi ac_ext=c ac_cpp='$CPP $CPPFLAGS' ac_compile='$CC -c $CFLAGS $CPPFLAGS conftest.$ac_ext >&5' ac_link='$CC -o conftest$ac_exeext $CFLAGS $CPPFLAGS $LDFLAGS conftest.$ac_ext $LIBS >&5' ac_compiler_gnu=$ac_cv_c_compiler_gnu CC="$lt_save_CC" if test -n "$CXX" && ( test "X$CXX" != "Xno" && ( (test "X$CXX" = "Xg++" && `g++ -v >/dev/null 2>&1` ) || (test "X$CXX" != "Xg++"))) ; then ac_ext=cpp ac_cpp='$CXXCPP $CPPFLAGS' ac_compile='$CXX -c $CXXFLAGS $CPPFLAGS conftest.$ac_ext >&5' ac_link='$CXX -o conftest$ac_exeext $CXXFLAGS $CPPFLAGS $LDFLAGS conftest.$ac_ext $LIBS >&5' ac_compiler_gnu=$ac_cv_cxx_compiler_gnu { $as_echo "$as_me:${as_lineno-$LINENO}: checking how to run the C++ preprocessor" >&5 $as_echo_n "checking how to run the C++ preprocessor... 
" >&6; } if test -z "$CXXCPP"; then if ${ac_cv_prog_CXXCPP+:} false; then : $as_echo_n "(cached) " >&6 else # Double quotes because CXXCPP needs to be expanded for CXXCPP in "$CXX -E" "/lib/cpp" do ac_preproc_ok=false for ac_cxx_preproc_warn_flag in '' yes do # Use a header file that comes with gcc, so configuring glibc # with a fresh cross-compiler works. # Prefer to if __STDC__ is defined, since # exists even on freestanding compilers. # On the NeXT, cc -E runs the code through the compiler's parser, # not just through cpp. "Syntax error" is here to catch this case. cat confdefs.h - <<_ACEOF >conftest.$ac_ext /* end confdefs.h. */ #ifdef __STDC__ # include #else # include #endif Syntax error _ACEOF if ac_fn_cxx_try_cpp "$LINENO"; then : else # Broken: fails on valid input. continue fi rm -f conftest.err conftest.i conftest.$ac_ext # OK, works on sane cases. Now check whether nonexistent headers # can be detected and how. cat confdefs.h - <<_ACEOF >conftest.$ac_ext /* end confdefs.h. */ #include _ACEOF if ac_fn_cxx_try_cpp "$LINENO"; then : # Broken: success on invalid input. continue else # Passes both tests. ac_preproc_ok=: break fi rm -f conftest.err conftest.i conftest.$ac_ext done # Because of `break', _AC_PREPROC_IFELSE's cleaning code was skipped. rm -f conftest.i conftest.err conftest.$ac_ext if $ac_preproc_ok; then : break fi done ac_cv_prog_CXXCPP=$CXXCPP fi CXXCPP=$ac_cv_prog_CXXCPP else ac_cv_prog_CXXCPP=$CXXCPP fi { $as_echo "$as_me:${as_lineno-$LINENO}: result: $CXXCPP" >&5 $as_echo "$CXXCPP" >&6; } ac_preproc_ok=false for ac_cxx_preproc_warn_flag in '' yes do # Use a header file that comes with gcc, so configuring glibc # with a fresh cross-compiler works. # Prefer to if __STDC__ is defined, since # exists even on freestanding compilers. # On the NeXT, cc -E runs the code through the compiler's parser, # not just through cpp. "Syntax error" is here to catch this case. cat confdefs.h - <<_ACEOF >conftest.$ac_ext /* end confdefs.h. 
*/
#ifdef __STDC__
# include <limits.h>
#else
# include <assert.h>
#endif
		     Syntax error
_ACEOF
if ac_fn_cxx_try_cpp "$LINENO"; then :

else
  # Broken: fails on valid input.
continue
fi
rm -f conftest.err conftest.i conftest.$ac_ext

  # OK, works on sane cases.  Now check whether nonexistent headers
  # can be detected and how.
  cat confdefs.h - <<_ACEOF >conftest.$ac_ext
/* end confdefs.h.  */
#include <ac_nonexistent.h>
_ACEOF
if ac_fn_cxx_try_cpp "$LINENO"; then :
  # Broken: success on invalid input.
continue
else
  # Passes both tests.
ac_preproc_ok=:
break
fi
rm -f conftest.err conftest.i conftest.$ac_ext

done
# Because of `break', _AC_PREPROC_IFELSE's cleaning code was skipped.
rm -f conftest.i conftest.err conftest.$ac_ext
if $ac_preproc_ok; then :

else
  { { $as_echo "$as_me:${as_lineno-$LINENO}: error: in \`$ac_pwd':" >&5
$as_echo "$as_me: error: in \`$ac_pwd':" >&2;}
as_fn_error $? "C++ preprocessor \"$CXXCPP\" fails sanity check
See \`config.log' for more details" "$LINENO" 5; }
fi

ac_ext=c
ac_cpp='$CPP $CPPFLAGS'
ac_compile='$CC -c $CFLAGS $CPPFLAGS conftest.$ac_ext >&5'
ac_link='$CC -o conftest$ac_exeext $CFLAGS $CPPFLAGS $LDFLAGS conftest.$ac_ext $LIBS >&5'
ac_compiler_gnu=$ac_cv_c_compiler_gnu

else
  _lt_caught_CXX_error=yes
fi

ac_ext=cpp
ac_cpp='$CXXCPP $CPPFLAGS'
ac_compile='$CXX -c $CXXFLAGS $CPPFLAGS conftest.$ac_ext >&5'
ac_link='$CXX -o conftest$ac_exeext $CXXFLAGS $CPPFLAGS $LDFLAGS conftest.$ac_ext $LIBS >&5'
ac_compiler_gnu=$ac_cv_cxx_compiler_gnu

archive_cmds_need_lc_CXX=no
allow_undefined_flag_CXX=
always_export_symbols_CXX=no
archive_expsym_cmds_CXX=
compiler_needs_object_CXX=no
export_dynamic_flag_spec_CXX=
hardcode_direct_CXX=no
hardcode_direct_absolute_CXX=no
hardcode_libdir_flag_spec_CXX=
hardcode_libdir_separator_CXX=
hardcode_minus_L_CXX=no
hardcode_shlibpath_var_CXX=unsupported
hardcode_automatic_CXX=no
inherit_rpath_CXX=no
module_cmds_CXX=
module_expsym_cmds_CXX=
link_all_deplibs_CXX=unknown
old_archive_cmds_CXX=$old_archive_cmds
reload_flag_CXX=$reload_flag
reload_cmds_CXX=$reload_cmds
no_undefined_flag_CXX= whole_archive_flag_spec_CXX= enable_shared_with_static_runtimes_CXX=no # Source file extension for C++ test sources. ac_ext=cpp # Object file extension for compiled C++ test sources. objext=o objext_CXX=$objext # No sense in running all these tests if we already determined that # the CXX compiler isn't working. Some variables (like enable_shared) # are currently assumed to apply to all compilers on this platform, # and will be corrupted by setting them based on a non-working compiler. if test "$_lt_caught_CXX_error" != yes; then # Code to be used in simple compile tests lt_simple_compile_test_code="int some_variable = 0;" # Code to be used in simple link tests lt_simple_link_test_code='int main(int, char *[]) { return(0); }' # ltmain only uses $CC for tagged configurations so make sure $CC is set. # If no C compiler was specified, use CC. LTCC=${LTCC-"$CC"} # If no C compiler flags were specified, use CFLAGS. LTCFLAGS=${LTCFLAGS-"$CFLAGS"} # Allow CC to be a program name with arguments. compiler=$CC # save warnings/boilerplate of simple test code ac_outfile=conftest.$ac_objext echo "$lt_simple_compile_test_code" >conftest.$ac_ext eval "$ac_compile" 2>&1 >/dev/null | $SED '/^$/d; /^ *+/d' >conftest.err _lt_compiler_boilerplate=`cat conftest.err` $RM conftest* ac_outfile=conftest.$ac_objext echo "$lt_simple_link_test_code" >conftest.$ac_ext eval "$ac_link" 2>&1 >/dev/null | $SED '/^$/d; /^ *+/d' >conftest.err _lt_linker_boilerplate=`cat conftest.err` $RM -r conftest* # Allow CC to be a program name with arguments. 
lt_save_CC=$CC lt_save_CFLAGS=$CFLAGS lt_save_LD=$LD lt_save_GCC=$GCC GCC=$GXX lt_save_with_gnu_ld=$with_gnu_ld lt_save_path_LD=$lt_cv_path_LD if test -n "${lt_cv_prog_gnu_ldcxx+set}"; then lt_cv_prog_gnu_ld=$lt_cv_prog_gnu_ldcxx else $as_unset lt_cv_prog_gnu_ld fi if test -n "${lt_cv_path_LDCXX+set}"; then lt_cv_path_LD=$lt_cv_path_LDCXX else $as_unset lt_cv_path_LD fi test -z "${LDCXX+set}" || LD=$LDCXX CC=${CXX-"c++"} CFLAGS=$CXXFLAGS compiler=$CC compiler_CXX=$CC for cc_temp in $compiler""; do case $cc_temp in compile | *[\\/]compile | ccache | *[\\/]ccache ) ;; distcc | *[\\/]distcc | purify | *[\\/]purify ) ;; \-*) ;; *) break;; esac done cc_basename=`$ECHO "$cc_temp" | $SED "s%.*/%%; s%^$host_alias-%%"` if test -n "$compiler"; then # We don't want -fno-exception when compiling C++ code, so set the # no_builtin_flag separately if test "$GXX" = yes; then lt_prog_compiler_no_builtin_flag_CXX=' -fno-builtin' else lt_prog_compiler_no_builtin_flag_CXX= fi if test "$GXX" = yes; then # Set up default GNU C++ configuration # Check whether --with-gnu-ld was given. if test "${with_gnu_ld+set}" = set; then : withval=$with_gnu_ld; test "$withval" = no || with_gnu_ld=yes else with_gnu_ld=no fi ac_prog=ld if test "$GCC" = yes; then # Check if gcc -print-prog-name=ld gives a path. { $as_echo "$as_me:${as_lineno-$LINENO}: checking for ld used by $CC" >&5 $as_echo_n "checking for ld used by $CC... " >&6; } case $host in *-*-mingw*) # gcc leaves a trailing carriage return which upsets mingw ac_prog=`($CC -print-prog-name=ld) 2>&5 | tr -d '\015'` ;; *) ac_prog=`($CC -print-prog-name=ld) 2>&5` ;; esac case $ac_prog in # Accept absolute paths. [\\/]* | ?:[\\/]*) re_direlt='/[^/][^/]*/\.\./' # Canonicalize the pathname of ld ac_prog=`$ECHO "$ac_prog"| $SED 's%\\\\%/%g'` while $ECHO "$ac_prog" | $GREP "$re_direlt" > /dev/null 2>&1; do ac_prog=`$ECHO $ac_prog| $SED "s%$re_direlt%/%"` done test -z "$LD" && LD="$ac_prog" ;; "") # If it fails, then pretend we aren't using GCC. 
    ac_prog=ld
    ;;
  *)
    # If it is relative, then search for the first ld in PATH.
    with_gnu_ld=unknown
    ;;
  esac
elif test "$with_gnu_ld" = yes; then
  { $as_echo "$as_me:${as_lineno-$LINENO}: checking for GNU ld" >&5
$as_echo_n "checking for GNU ld... " >&6; }
else
  { $as_echo "$as_me:${as_lineno-$LINENO}: checking for non-GNU ld" >&5
$as_echo_n "checking for non-GNU ld... " >&6; }
fi
if ${lt_cv_path_LD+:} false; then :
  $as_echo_n "(cached) " >&6
else
  if test -z "$LD"; then
  lt_save_ifs="$IFS"; IFS=$PATH_SEPARATOR
  for ac_dir in $PATH; do
    IFS="$lt_save_ifs"
    test -z "$ac_dir" && ac_dir=.
    if test -f "$ac_dir/$ac_prog" || test -f "$ac_dir/$ac_prog$ac_exeext"; then
      lt_cv_path_LD="$ac_dir/$ac_prog"
      # Check to see if the program is GNU ld.  I'd rather use --version,
      # but apparently some variants of GNU ld only accept -v.
      # Break only if it was the GNU/non-GNU ld that we prefer.
      case `"$lt_cv_path_LD" -v 2>&1 </dev/null` in
      *GNU* | *'with BFD'*)
	test "$with_gnu_ld" != no && break
	;;
      *)
	test "$with_gnu_ld" != yes && break
	;;
      esac
    fi
  done
  IFS="$lt_save_ifs"
else
  lt_cv_path_LD="$LD" # Let the user override the test with a path.
fi
fi

LD="$lt_cv_path_LD"
if test -n "$LD"; then
  { $as_echo "$as_me:${as_lineno-$LINENO}: result: $LD" >&5
$as_echo "$LD" >&6; }
else
  { $as_echo "$as_me:${as_lineno-$LINENO}: result: no" >&5
$as_echo "no" >&6; }
fi
test -z "$LD" && as_fn_error $? "no acceptable ld found in \$PATH" "$LINENO" 5
{ $as_echo "$as_me:${as_lineno-$LINENO}: checking if the linker ($LD) is GNU ld" >&5
$as_echo_n "checking if the linker ($LD) is GNU ld... " >&6; }
if ${lt_cv_prog_gnu_ld+:} false; then :
  $as_echo_n "(cached) " >&6
else
  # I'd rather use --version here, but apparently some GNU lds only accept -v.
case `$LD -v 2>&1 </dev/null` in
*GNU* | *'with BFD'*)
  lt_cv_prog_gnu_ld=yes
  ;;
*)
  lt_cv_prog_gnu_ld=no
  ;;
esac
fi
{ $as_echo "$as_me:${as_lineno-$LINENO}: result: $lt_cv_prog_gnu_ld" >&5
$as_echo "$lt_cv_prog_gnu_ld" >&6; }
with_gnu_ld=$lt_cv_prog_gnu_ld


      # Check if GNU C++ uses GNU ld as the underlying linker, since the
      # archiving commands below assume that GNU ld is being used.
if test "$with_gnu_ld" = yes; then archive_cmds_CXX='$CC $pic_flag -shared -nostdlib $predep_objects $libobjs $deplibs $postdep_objects $compiler_flags ${wl}-soname $wl$soname -o $lib' archive_expsym_cmds_CXX='$CC $pic_flag -shared -nostdlib $predep_objects $libobjs $deplibs $postdep_objects $compiler_flags ${wl}-soname $wl$soname ${wl}-retain-symbols-file $wl$export_symbols -o $lib' hardcode_libdir_flag_spec_CXX='${wl}-rpath ${wl}$libdir' export_dynamic_flag_spec_CXX='${wl}--export-dynamic' # If archive_cmds runs LD, not CC, wlarc should be empty # XXX I think wlarc can be eliminated in ltcf-cxx, but I need to # investigate it a little bit more. (MM) wlarc='${wl}' # ancient GNU ld didn't support --whole-archive et. al. if eval "`$CC -print-prog-name=ld` --help 2>&1" | $GREP 'no-whole-archive' > /dev/null; then whole_archive_flag_spec_CXX="$wlarc"'--whole-archive$convenience '"$wlarc"'--no-whole-archive' else whole_archive_flag_spec_CXX= fi else with_gnu_ld=no wlarc= # A generic and very simple default shared library creation # command for GNU C++ for the case where it uses the native # linker, instead of GNU ld. If possible, this setting should # overridden to take advantage of the native linker features on # the platform it is being used on. archive_cmds_CXX='$CC -shared -nostdlib $predep_objects $libobjs $deplibs $postdep_objects $compiler_flags -o $lib' fi # Commands to make compiler produce verbose output that lists # what "hidden" libraries, object files and flags are used when # linking a shared library. output_verbose_link_cmd='$CC -shared $CFLAGS -v conftest.$objext 2>&1 | $GREP -v "^Configured with:" | $GREP "\-L"' else GXX=no with_gnu_ld=no wlarc= fi # PORTME: fill in a description of your system's C++ link characteristics { $as_echo "$as_me:${as_lineno-$LINENO}: checking whether the $compiler linker ($LD) supports shared libraries" >&5 $as_echo_n "checking whether the $compiler linker ($LD) supports shared libraries... 
" >&6; } ld_shlibs_CXX=yes case $host_os in aix3*) # FIXME: insert proper C++ library support ld_shlibs_CXX=no ;; aix[4-9]*) if test "$host_cpu" = ia64; then # On IA64, the linker does run time linking by default, so we don't # have to do anything special. aix_use_runtimelinking=no exp_sym_flag='-Bexport' no_entry_flag="" else aix_use_runtimelinking=no # Test if we are trying to use run time linking or normal # AIX style linking. If -brtl is somewhere in LDFLAGS, we # need to do runtime linking. case $host_os in aix4.[23]|aix4.[23].*|aix[5-9]*) for ld_flag in $LDFLAGS; do case $ld_flag in *-brtl*) aix_use_runtimelinking=yes break ;; esac done ;; esac exp_sym_flag='-bexport' no_entry_flag='-bnoentry' fi # When large executables or shared objects are built, AIX ld can # have problems creating the table of contents. If linking a library # or program results in "error TOC overflow" add -mminimal-toc to # CXXFLAGS/CFLAGS for g++/gcc. In the cases where that is not # enough to fix the problem, add -Wl,-bbigtoc to LDFLAGS. archive_cmds_CXX='' hardcode_direct_CXX=yes hardcode_direct_absolute_CXX=yes hardcode_libdir_separator_CXX=':' link_all_deplibs_CXX=yes file_list_spec_CXX='${wl}-f,' if test "$GXX" = yes; then case $host_os in aix4.[012]|aix4.[012].*) # We only want to do this on AIX 4.2 and lower, the check # below for broken collect2 doesn't work under 4.3+ collect2name=`${CC} -print-prog-name=collect2` if test -f "$collect2name" && strings "$collect2name" | $GREP resolve_lib_name >/dev/null then # We have reworked collect2 : else # We have old collect2 hardcode_direct_CXX=unsupported # It fails to find uninstalled libraries when the uninstalled # path is not listed in the libpath. 
Setting hardcode_minus_L # to unsupported forces relinking hardcode_minus_L_CXX=yes hardcode_libdir_flag_spec_CXX='-L$libdir' hardcode_libdir_separator_CXX= fi esac shared_flag='-shared' if test "$aix_use_runtimelinking" = yes; then shared_flag="$shared_flag "'${wl}-G' fi else # not using gcc if test "$host_cpu" = ia64; then # VisualAge C++, Version 5.5 for AIX 5L for IA-64, Beta 3 Release # chokes on -Wl,-G. The following line is correct: shared_flag='-G' else if test "$aix_use_runtimelinking" = yes; then shared_flag='${wl}-G' else shared_flag='${wl}-bM:SRE' fi fi fi export_dynamic_flag_spec_CXX='${wl}-bexpall' # It seems that -bexpall does not export symbols beginning with # underscore (_), so it is better to generate a list of symbols to # export. always_export_symbols_CXX=yes if test "$aix_use_runtimelinking" = yes; then # Warning - without using the other runtime loading flags (-brtl), # -berok will link without error, but may produce a broken library. allow_undefined_flag_CXX='-berok' # Determine the default libpath from the value encoded in an empty # executable. if test "${lt_cv_aix_libpath+set}" = set; then aix_libpath=$lt_cv_aix_libpath else if ${lt_cv_aix_libpath__CXX+:} false; then : $as_echo_n "(cached) " >&6 else cat confdefs.h - <<_ACEOF >conftest.$ac_ext /* end confdefs.h. */ int main () { ; return 0; } _ACEOF if ac_fn_cxx_try_link "$LINENO"; then : lt_aix_libpath_sed=' /Import File Strings/,/^$/ { /^0/ { s/^0 *\([^ ]*\) *$/\1/ p } }' lt_cv_aix_libpath__CXX=`dump -H conftest$ac_exeext 2>/dev/null | $SED -n -e "$lt_aix_libpath_sed"` # Check for a 64-bit object if we didn't find anything. 
if test -z "$lt_cv_aix_libpath__CXX"; then lt_cv_aix_libpath__CXX=`dump -HX64 conftest$ac_exeext 2>/dev/null | $SED -n -e "$lt_aix_libpath_sed"` fi fi rm -f core conftest.err conftest.$ac_objext \ conftest$ac_exeext conftest.$ac_ext if test -z "$lt_cv_aix_libpath__CXX"; then lt_cv_aix_libpath__CXX="/usr/lib:/lib" fi fi aix_libpath=$lt_cv_aix_libpath__CXX fi hardcode_libdir_flag_spec_CXX='${wl}-blibpath:$libdir:'"$aix_libpath" archive_expsym_cmds_CXX='$CC -o $output_objdir/$soname $libobjs $deplibs '"\${wl}$no_entry_flag"' $compiler_flags `if test "x${allow_undefined_flag}" != "x"; then func_echo_all "${wl}${allow_undefined_flag}"; else :; fi` '"\${wl}$exp_sym_flag:\$export_symbols $shared_flag" else if test "$host_cpu" = ia64; then hardcode_libdir_flag_spec_CXX='${wl}-R $libdir:/usr/lib:/lib' allow_undefined_flag_CXX="-z nodefs" archive_expsym_cmds_CXX="\$CC $shared_flag"' -o $output_objdir/$soname $libobjs $deplibs '"\${wl}$no_entry_flag"' $compiler_flags ${wl}${allow_undefined_flag} '"\${wl}$exp_sym_flag:\$export_symbols" else # Determine the default libpath from the value encoded in an # empty executable. if test "${lt_cv_aix_libpath+set}" = set; then aix_libpath=$lt_cv_aix_libpath else if ${lt_cv_aix_libpath__CXX+:} false; then : $as_echo_n "(cached) " >&6 else cat confdefs.h - <<_ACEOF >conftest.$ac_ext /* end confdefs.h. */ int main () { ; return 0; } _ACEOF if ac_fn_cxx_try_link "$LINENO"; then : lt_aix_libpath_sed=' /Import File Strings/,/^$/ { /^0/ { s/^0 *\([^ ]*\) *$/\1/ p } }' lt_cv_aix_libpath__CXX=`dump -H conftest$ac_exeext 2>/dev/null | $SED -n -e "$lt_aix_libpath_sed"` # Check for a 64-bit object if we didn't find anything. 
if test -z "$lt_cv_aix_libpath__CXX"; then lt_cv_aix_libpath__CXX=`dump -HX64 conftest$ac_exeext 2>/dev/null | $SED -n -e "$lt_aix_libpath_sed"` fi fi rm -f core conftest.err conftest.$ac_objext \ conftest$ac_exeext conftest.$ac_ext if test -z "$lt_cv_aix_libpath__CXX"; then lt_cv_aix_libpath__CXX="/usr/lib:/lib" fi fi aix_libpath=$lt_cv_aix_libpath__CXX fi hardcode_libdir_flag_spec_CXX='${wl}-blibpath:$libdir:'"$aix_libpath" # Warning - without using the other run time loading flags, # -berok will link without error, but may produce a broken library. no_undefined_flag_CXX=' ${wl}-bernotok' allow_undefined_flag_CXX=' ${wl}-berok' if test "$with_gnu_ld" = yes; then # We only use this code for GNU lds that support --whole-archive. whole_archive_flag_spec_CXX='${wl}--whole-archive$convenience ${wl}--no-whole-archive' else # Exported symbols can be pulled into shared objects from archives whole_archive_flag_spec_CXX='$convenience' fi archive_cmds_need_lc_CXX=yes # This is similar to how AIX traditionally builds its shared # libraries. archive_expsym_cmds_CXX="\$CC $shared_flag"' -o $output_objdir/$soname $libobjs $deplibs ${wl}-bnoentry $compiler_flags ${wl}-bE:$export_symbols${allow_undefined_flag}~$AR $AR_FLAGS $output_objdir/$libname$release.a $output_objdir/$soname' fi fi ;; beos*) if $LD --help 2>&1 | $GREP ': supported targets:.* elf' > /dev/null; then allow_undefined_flag_CXX=unsupported # Joseph Beckenbach says some releases of gcc # support --undefined. This deserves some investigation. FIXME archive_cmds_CXX='$CC -nostart $libobjs $deplibs $compiler_flags ${wl}-soname $wl$soname -o $lib' else ld_shlibs_CXX=no fi ;; chorus*) case $cc_basename in *) # FIXME: insert proper C++ library support ld_shlibs_CXX=no ;; esac ;; cygwin* | mingw* | pw32* | cegcc*) case $GXX,$cc_basename in ,cl* | no,cl*) # Native MSVC # hardcode_libdir_flag_spec is actually meaningless, as there is # no search path for DLLs. 
hardcode_libdir_flag_spec_CXX=' ' allow_undefined_flag_CXX=unsupported always_export_symbols_CXX=yes file_list_spec_CXX='@' # Tell ltmain to make .lib files, not .a files. libext=lib # Tell ltmain to make .dll files, not .so files. shrext_cmds=".dll" # FIXME: Setting linknames here is a bad hack. archive_cmds_CXX='$CC -o $output_objdir/$soname $libobjs $compiler_flags $deplibs -Wl,-dll~linknames=' archive_expsym_cmds_CXX='if test "x`$SED 1q $export_symbols`" = xEXPORTS; then $SED -n -e 's/\\\\\\\(.*\\\\\\\)/-link\\\ -EXPORT:\\\\\\\1/' -e '1\\\!p' < $export_symbols > $output_objdir/$soname.exp; else $SED -e 's/\\\\\\\(.*\\\\\\\)/-link\\\ -EXPORT:\\\\\\\1/' < $export_symbols > $output_objdir/$soname.exp; fi~ $CC -o $tool_output_objdir$soname $libobjs $compiler_flags $deplibs "@$tool_output_objdir$soname.exp" -Wl,-DLL,-IMPLIB:"$tool_output_objdir$libname.dll.lib"~ linknames=' # The linker will not automatically build a static lib if we build a DLL. # _LT_TAGVAR(old_archive_from_new_cmds, CXX)='true' enable_shared_with_static_runtimes_CXX=yes # Don't use ranlib old_postinstall_cmds_CXX='chmod 644 $oldlib' postlink_cmds_CXX='lt_outputfile="@OUTPUT@"~ lt_tool_outputfile="@TOOL_OUTPUT@"~ case $lt_outputfile in *.exe|*.EXE) ;; *) lt_outputfile="$lt_outputfile.exe" lt_tool_outputfile="$lt_tool_outputfile.exe" ;; esac~ func_to_tool_file "$lt_outputfile"~ if test "$MANIFEST_TOOL" != ":" && test -f "$lt_outputfile.manifest"; then $MANIFEST_TOOL -manifest "$lt_tool_outputfile.manifest" -outputresource:"$lt_tool_outputfile" || exit 1; $RM "$lt_outputfile.manifest"; fi' ;; *) # g++ # _LT_TAGVAR(hardcode_libdir_flag_spec, CXX) is actually meaningless, # as there is no search path for DLLs. 
hardcode_libdir_flag_spec_CXX='-L$libdir' export_dynamic_flag_spec_CXX='${wl}--export-all-symbols' allow_undefined_flag_CXX=unsupported always_export_symbols_CXX=no enable_shared_with_static_runtimes_CXX=yes if $LD --help 2>&1 | $GREP 'auto-import' > /dev/null; then archive_cmds_CXX='$CC -shared -nostdlib $predep_objects $libobjs $deplibs $postdep_objects $compiler_flags -o $output_objdir/$soname ${wl}--enable-auto-image-base -Xlinker --out-implib -Xlinker $lib' # If the export-symbols file already is a .def file (1st line # is EXPORTS), use it as is; otherwise, prepend... archive_expsym_cmds_CXX='if test "x`$SED 1q $export_symbols`" = xEXPORTS; then cp $export_symbols $output_objdir/$soname.def; else echo EXPORTS > $output_objdir/$soname.def; cat $export_symbols >> $output_objdir/$soname.def; fi~ $CC -shared -nostdlib $output_objdir/$soname.def $predep_objects $libobjs $deplibs $postdep_objects $compiler_flags -o $output_objdir/$soname ${wl}--enable-auto-image-base -Xlinker --out-implib -Xlinker $lib' else ld_shlibs_CXX=no fi ;; esac ;; darwin* | rhapsody*) archive_cmds_need_lc_CXX=no hardcode_direct_CXX=no hardcode_automatic_CXX=yes hardcode_shlibpath_var_CXX=unsupported if test "$lt_cv_ld_force_load" = "yes"; then whole_archive_flag_spec_CXX='`for conv in $convenience\"\"; do test -n \"$conv\" && new_convenience=\"$new_convenience ${wl}-force_load,$conv\"; done; func_echo_all \"$new_convenience\"`' else whole_archive_flag_spec_CXX='' fi link_all_deplibs_CXX=yes allow_undefined_flag_CXX="$_lt_dar_allow_undefined" case $cc_basename in ifort*) _lt_dar_can_shared=yes ;; *) _lt_dar_can_shared=$GCC ;; esac if test "$_lt_dar_can_shared" = "yes"; then output_verbose_link_cmd=func_echo_all archive_cmds_CXX="\$CC -dynamiclib \$allow_undefined_flag -o \$lib \$libobjs \$deplibs \$compiler_flags -install_name \$rpath/\$soname \$verstring $_lt_dar_single_mod${_lt_dsymutil}" module_cmds_CXX="\$CC \$allow_undefined_flag -o \$lib -bundle \$libobjs \$deplibs 
\$compiler_flags${_lt_dsymutil}" archive_expsym_cmds_CXX="sed 's,^,_,' < \$export_symbols > \$output_objdir/\${libname}-symbols.expsym~\$CC -dynamiclib \$allow_undefined_flag -o \$lib \$libobjs \$deplibs \$compiler_flags -install_name \$rpath/\$soname \$verstring ${_lt_dar_single_mod}${_lt_dar_export_syms}${_lt_dsymutil}" module_expsym_cmds_CXX="sed -e 's,^,_,' < \$export_symbols > \$output_objdir/\${libname}-symbols.expsym~\$CC \$allow_undefined_flag -o \$lib -bundle \$libobjs \$deplibs \$compiler_flags${_lt_dar_export_syms}${_lt_dsymutil}" if test "$lt_cv_apple_cc_single_mod" != "yes"; then archive_cmds_CXX="\$CC -r -keep_private_externs -nostdlib -o \${lib}-master.o \$libobjs~\$CC -dynamiclib \$allow_undefined_flag -o \$lib \${lib}-master.o \$deplibs \$compiler_flags -install_name \$rpath/\$soname \$verstring${_lt_dsymutil}" archive_expsym_cmds_CXX="sed 's,^,_,' < \$export_symbols > \$output_objdir/\${libname}-symbols.expsym~\$CC -r -keep_private_externs -nostdlib -o \${lib}-master.o \$libobjs~\$CC -dynamiclib \$allow_undefined_flag -o \$lib \${lib}-master.o \$deplibs \$compiler_flags -install_name \$rpath/\$soname \$verstring${_lt_dar_export_syms}${_lt_dsymutil}" fi else ld_shlibs_CXX=no fi ;; dgux*) case $cc_basename in ec++*) # FIXME: insert proper C++ library support ld_shlibs_CXX=no ;; ghcx*) # Green Hills C++ Compiler # FIXME: insert proper C++ library support ld_shlibs_CXX=no ;; *) # FIXME: insert proper C++ library support ld_shlibs_CXX=no ;; esac ;; freebsd2.*) # C++ shared libraries reported to be fairly broken before # switch to ELF ld_shlibs_CXX=no ;; freebsd-elf*) archive_cmds_need_lc_CXX=no ;; freebsd* | dragonfly*) # FreeBSD 3 and later use GNU C++ and GNU ld with standard ELF # conventions ld_shlibs_CXX=yes ;; haiku*) archive_cmds_CXX='$CC -shared $libobjs $deplibs $compiler_flags ${wl}-soname $wl$soname -o $lib' link_all_deplibs_CXX=yes ;; hpux9*) hardcode_libdir_flag_spec_CXX='${wl}+b ${wl}$libdir' hardcode_libdir_separator_CXX=: 
export_dynamic_flag_spec_CXX='${wl}-E' hardcode_direct_CXX=yes hardcode_minus_L_CXX=yes # Not in the search PATH, # but as the default # location of the library. case $cc_basename in CC*) # FIXME: insert proper C++ library support ld_shlibs_CXX=no ;; aCC*) archive_cmds_CXX='$RM $output_objdir/$soname~$CC -b ${wl}+b ${wl}$install_libdir -o $output_objdir/$soname $predep_objects $libobjs $deplibs $postdep_objects $compiler_flags~test $output_objdir/$soname = $lib || mv $output_objdir/$soname $lib' # Commands to make compiler produce verbose output that lists # what "hidden" libraries, object files and flags are used when # linking a shared library. # # There doesn't appear to be a way to prevent this compiler from # explicitly linking system object files so we need to strip them # from the output so that they don't get included in the library # dependencies. output_verbose_link_cmd='templist=`($CC -b $CFLAGS -v conftest.$objext 2>&1) | $EGREP "\-L"`; list=""; for z in $templist; do case $z in conftest.$objext) list="$list $z";; *.$objext);; *) list="$list $z";;esac; done; func_echo_all "$list"' ;; *) if test "$GXX" = yes; then archive_cmds_CXX='$RM $output_objdir/$soname~$CC -shared -nostdlib $pic_flag ${wl}+b ${wl}$install_libdir -o $output_objdir/$soname $predep_objects $libobjs $deplibs $postdep_objects $compiler_flags~test $output_objdir/$soname = $lib || mv $output_objdir/$soname $lib' else # FIXME: insert proper C++ library support ld_shlibs_CXX=no fi ;; esac ;; hpux10*|hpux11*) if test $with_gnu_ld = no; then hardcode_libdir_flag_spec_CXX='${wl}+b ${wl}$libdir' hardcode_libdir_separator_CXX=: case $host_cpu in hppa*64*|ia64*) ;; *) export_dynamic_flag_spec_CXX='${wl}-E' ;; esac fi case $host_cpu in hppa*64*|ia64*) hardcode_direct_CXX=no hardcode_shlibpath_var_CXX=no ;; *) hardcode_direct_CXX=yes hardcode_direct_absolute_CXX=yes hardcode_minus_L_CXX=yes # Not in the search PATH, # but as the default # location of the library. 
;; esac case $cc_basename in CC*) # FIXME: insert proper C++ library support ld_shlibs_CXX=no ;; aCC*) case $host_cpu in hppa*64*) archive_cmds_CXX='$CC -b ${wl}+h ${wl}$soname -o $lib $predep_objects $libobjs $deplibs $postdep_objects $compiler_flags' ;; ia64*) archive_cmds_CXX='$CC -b ${wl}+h ${wl}$soname ${wl}+nodefaultrpath -o $lib $predep_objects $libobjs $deplibs $postdep_objects $compiler_flags' ;; *) archive_cmds_CXX='$CC -b ${wl}+h ${wl}$soname ${wl}+b ${wl}$install_libdir -o $lib $predep_objects $libobjs $deplibs $postdep_objects $compiler_flags' ;; esac # Commands to make compiler produce verbose output that lists # what "hidden" libraries, object files and flags are used when # linking a shared library. # # There doesn't appear to be a way to prevent this compiler from # explicitly linking system object files so we need to strip them # from the output so that they don't get included in the library # dependencies. output_verbose_link_cmd='templist=`($CC -b $CFLAGS -v conftest.$objext 2>&1) | $GREP "\-L"`; list=""; for z in $templist; do case $z in conftest.$objext) list="$list $z";; *.$objext);; *) list="$list $z";;esac; done; func_echo_all "$list"' ;; *) if test "$GXX" = yes; then if test $with_gnu_ld = no; then case $host_cpu in hppa*64*) archive_cmds_CXX='$CC -shared -nostdlib -fPIC ${wl}+h ${wl}$soname -o $lib $predep_objects $libobjs $deplibs $postdep_objects $compiler_flags' ;; ia64*) archive_cmds_CXX='$CC -shared -nostdlib $pic_flag ${wl}+h ${wl}$soname ${wl}+nodefaultrpath -o $lib $predep_objects $libobjs $deplibs $postdep_objects $compiler_flags' ;; *) archive_cmds_CXX='$CC -shared -nostdlib $pic_flag ${wl}+h ${wl}$soname ${wl}+b ${wl}$install_libdir -o $lib $predep_objects $libobjs $deplibs $postdep_objects $compiler_flags' ;; esac fi else # FIXME: insert proper C++ library support ld_shlibs_CXX=no fi ;; esac ;; interix[3-9]*) hardcode_direct_CXX=no hardcode_shlibpath_var_CXX=no hardcode_libdir_flag_spec_CXX='${wl}-rpath,$libdir' 
export_dynamic_flag_spec_CXX='${wl}-E' # Hack: On Interix 3.x, we cannot compile PIC because of a broken gcc. # Instead, shared libraries are loaded at an image base (0x10000000 by # default) and relocated if they conflict, which is a slow very memory # consuming and fragmenting process. To avoid this, we pick a random, # 256 KiB-aligned image base between 0x50000000 and 0x6FFC0000 at link # time. Moving up from 0x10000000 also allows more sbrk(2) space. archive_cmds_CXX='$CC -shared $pic_flag $libobjs $deplibs $compiler_flags ${wl}-h,$soname ${wl}--image-base,`expr ${RANDOM-$$} % 4096 / 2 \* 262144 + 1342177280` -o $lib' archive_expsym_cmds_CXX='sed "s,^,_," $export_symbols >$output_objdir/$soname.expsym~$CC -shared $pic_flag $libobjs $deplibs $compiler_flags ${wl}-h,$soname ${wl}--retain-symbols-file,$output_objdir/$soname.expsym ${wl}--image-base,`expr ${RANDOM-$$} % 4096 / 2 \* 262144 + 1342177280` -o $lib' ;; irix5* | irix6*) case $cc_basename in CC*) # SGI C++ archive_cmds_CXX='$CC -shared -all -multigot $predep_objects $libobjs $deplibs $postdep_objects $compiler_flags -soname $soname `test -n "$verstring" && func_echo_all "-set_version $verstring"` -update_registry ${output_objdir}/so_locations -o $lib' # Archives containing C++ object files must be created using # "CC -ar", where "CC" is the IRIX C++ compiler. This is # necessary to make sure instantiated templates are included # in the archive. 
old_archive_cmds_CXX='$CC -ar -WR,-u -o $oldlib $oldobjs' ;; *) if test "$GXX" = yes; then if test "$with_gnu_ld" = no; then archive_cmds_CXX='$CC -shared $pic_flag -nostdlib $predep_objects $libobjs $deplibs $postdep_objects $compiler_flags ${wl}-soname ${wl}$soname `test -n "$verstring" && func_echo_all "${wl}-set_version ${wl}$verstring"` ${wl}-update_registry ${wl}${output_objdir}/so_locations -o $lib' else archive_cmds_CXX='$CC -shared $pic_flag -nostdlib $predep_objects $libobjs $deplibs $postdep_objects $compiler_flags ${wl}-soname ${wl}$soname `test -n "$verstring" && func_echo_all "${wl}-set_version ${wl}$verstring"` -o $lib' fi fi link_all_deplibs_CXX=yes ;; esac hardcode_libdir_flag_spec_CXX='${wl}-rpath ${wl}$libdir' hardcode_libdir_separator_CXX=: inherit_rpath_CXX=yes ;; linux* | k*bsd*-gnu | kopensolaris*-gnu | gnu*) case $cc_basename in KCC*) # Kuck and Associates, Inc. (KAI) C++ Compiler # KCC will only create a shared library if the output file # ends with ".so" (or ".sl" for HP-UX), so rename the library # to its proper name (with version) after linking. archive_cmds_CXX='tempext=`echo $shared_ext | $SED -e '\''s/\([^()0-9A-Za-z{}]\)/\\\\\1/g'\''`; templib=`echo $lib | $SED -e "s/\${tempext}\..*/.so/"`; $CC $predep_objects $libobjs $deplibs $postdep_objects $compiler_flags --soname $soname -o \$templib; mv \$templib $lib' archive_expsym_cmds_CXX='tempext=`echo $shared_ext | $SED -e '\''s/\([^()0-9A-Za-z{}]\)/\\\\\1/g'\''`; templib=`echo $lib | $SED -e "s/\${tempext}\..*/.so/"`; $CC $predep_objects $libobjs $deplibs $postdep_objects $compiler_flags --soname $soname -o \$templib ${wl}-retain-symbols-file,$export_symbols; mv \$templib $lib' # Commands to make compiler produce verbose output that lists # what "hidden" libraries, object files and flags are used when # linking a shared library. 
# # There doesn't appear to be a way to prevent this compiler from # explicitly linking system object files so we need to strip them # from the output so that they don't get included in the library # dependencies. output_verbose_link_cmd='templist=`$CC $CFLAGS -v conftest.$objext -o libconftest$shared_ext 2>&1 | $GREP "ld"`; rm -f libconftest$shared_ext; list=""; for z in $templist; do case $z in conftest.$objext) list="$list $z";; *.$objext);; *) list="$list $z";;esac; done; func_echo_all "$list"' hardcode_libdir_flag_spec_CXX='${wl}-rpath,$libdir' export_dynamic_flag_spec_CXX='${wl}--export-dynamic' # Archives containing C++ object files must be created using # "CC -Bstatic", where "CC" is the KAI C++ compiler. old_archive_cmds_CXX='$CC -Bstatic -o $oldlib $oldobjs' ;; icpc* | ecpc* ) # Intel C++ with_gnu_ld=yes # version 8.0 and above of icpc choke on multiply defined symbols # if we add $predep_objects and $postdep_objects, however 7.1 and # earlier do not add the objects themselves. 
case `$CC -V 2>&1` in *"Version 7."*) archive_cmds_CXX='$CC -shared $predep_objects $libobjs $deplibs $postdep_objects $compiler_flags ${wl}-soname $wl$soname -o $lib' archive_expsym_cmds_CXX='$CC -shared $predep_objects $libobjs $deplibs $postdep_objects $compiler_flags ${wl}-soname $wl$soname ${wl}-retain-symbols-file $wl$export_symbols -o $lib' ;; *) # Version 8.0 or newer tmp_idyn= case $host_cpu in ia64*) tmp_idyn=' -i_dynamic';; esac archive_cmds_CXX='$CC -shared'"$tmp_idyn"' $libobjs $deplibs $compiler_flags ${wl}-soname $wl$soname -o $lib' archive_expsym_cmds_CXX='$CC -shared'"$tmp_idyn"' $libobjs $deplibs $compiler_flags ${wl}-soname $wl$soname ${wl}-retain-symbols-file $wl$export_symbols -o $lib' ;; esac archive_cmds_need_lc_CXX=no hardcode_libdir_flag_spec_CXX='${wl}-rpath,$libdir' export_dynamic_flag_spec_CXX='${wl}--export-dynamic' whole_archive_flag_spec_CXX='${wl}--whole-archive$convenience ${wl}--no-whole-archive' ;; pgCC* | pgcpp*) # Portland Group C++ compiler case `$CC -V` in *pgCC\ [1-5].* | *pgcpp\ [1-5].*) prelink_cmds_CXX='tpldir=Template.dir~ rm -rf $tpldir~ $CC --prelink_objects --instantiation_dir $tpldir $objs $libobjs $compile_deplibs~ compile_command="$compile_command `find $tpldir -name \*.o | sort | $NL2SP`"' old_archive_cmds_CXX='tpldir=Template.dir~ rm -rf $tpldir~ $CC --prelink_objects --instantiation_dir $tpldir $oldobjs$old_deplibs~ $AR $AR_FLAGS $oldlib$oldobjs$old_deplibs `find $tpldir -name \*.o | sort | $NL2SP`~ $RANLIB $oldlib' archive_cmds_CXX='tpldir=Template.dir~ rm -rf $tpldir~ $CC --prelink_objects --instantiation_dir $tpldir $predep_objects $libobjs $deplibs $convenience $postdep_objects~ $CC -shared $pic_flag $predep_objects $libobjs $deplibs `find $tpldir -name \*.o | sort | $NL2SP` $postdep_objects $compiler_flags ${wl}-soname ${wl}$soname -o $lib' archive_expsym_cmds_CXX='tpldir=Template.dir~ rm -rf $tpldir~ $CC --prelink_objects --instantiation_dir $tpldir $predep_objects $libobjs $deplibs $convenience 
$postdep_objects~ $CC -shared $pic_flag $predep_objects $libobjs $deplibs `find $tpldir -name \*.o | sort | $NL2SP` $postdep_objects $compiler_flags ${wl}-soname ${wl}$soname ${wl}-retain-symbols-file ${wl}$export_symbols -o $lib' ;; *) # Version 6 and above use weak symbols archive_cmds_CXX='$CC -shared $pic_flag $predep_objects $libobjs $deplibs $postdep_objects $compiler_flags ${wl}-soname ${wl}$soname -o $lib' archive_expsym_cmds_CXX='$CC -shared $pic_flag $predep_objects $libobjs $deplibs $postdep_objects $compiler_flags ${wl}-soname ${wl}$soname ${wl}-retain-symbols-file ${wl}$export_symbols -o $lib' ;; esac hardcode_libdir_flag_spec_CXX='${wl}--rpath ${wl}$libdir' export_dynamic_flag_spec_CXX='${wl}--export-dynamic' whole_archive_flag_spec_CXX='${wl}--whole-archive`for conv in $convenience\"\"; do test -n \"$conv\" && new_convenience=\"$new_convenience,$conv\"; done; func_echo_all \"$new_convenience\"` ${wl}--no-whole-archive' ;; cxx*) # Compaq C++ archive_cmds_CXX='$CC -shared $predep_objects $libobjs $deplibs $postdep_objects $compiler_flags ${wl}-soname $wl$soname -o $lib' archive_expsym_cmds_CXX='$CC -shared $predep_objects $libobjs $deplibs $postdep_objects $compiler_flags ${wl}-soname $wl$soname -o $lib ${wl}-retain-symbols-file $wl$export_symbols' runpath_var=LD_RUN_PATH hardcode_libdir_flag_spec_CXX='-rpath $libdir' hardcode_libdir_separator_CXX=: # Commands to make compiler produce verbose output that lists # what "hidden" libraries, object files and flags are used when # linking a shared library. # # There doesn't appear to be a way to prevent this compiler from # explicitly linking system object files so we need to strip them # from the output so that they don't get included in the library # dependencies. 
output_verbose_link_cmd='templist=`$CC -shared $CFLAGS -v conftest.$objext 2>&1 | $GREP "ld"`; templist=`func_echo_all "$templist" | $SED "s/\(^.*ld.*\)\( .*ld .*$\)/\1/"`; list=""; for z in $templist; do case $z in conftest.$objext) list="$list $z";; *.$objext);; *) list="$list $z";;esac; done; func_echo_all "X$list" | $Xsed' ;; xl* | mpixl* | bgxl*) # IBM XL 8.0 on PPC, with GNU ld hardcode_libdir_flag_spec_CXX='${wl}-rpath ${wl}$libdir' export_dynamic_flag_spec_CXX='${wl}--export-dynamic' archive_cmds_CXX='$CC -qmkshrobj $libobjs $deplibs $compiler_flags ${wl}-soname $wl$soname -o $lib' if test "x$supports_anon_versioning" = xyes; then archive_expsym_cmds_CXX='echo "{ global:" > $output_objdir/$libname.ver~ cat $export_symbols | sed -e "s/\(.*\)/\1;/" >> $output_objdir/$libname.ver~ echo "local: *; };" >> $output_objdir/$libname.ver~ $CC -qmkshrobj $libobjs $deplibs $compiler_flags ${wl}-soname $wl$soname ${wl}-version-script ${wl}$output_objdir/$libname.ver -o $lib' fi ;; *) case `$CC -V 2>&1 | sed 5q` in *Sun\ C*) # Sun C++ 5.9 no_undefined_flag_CXX=' -zdefs' archive_cmds_CXX='$CC -G${allow_undefined_flag} -h$soname -o $lib $predep_objects $libobjs $deplibs $postdep_objects $compiler_flags' archive_expsym_cmds_CXX='$CC -G${allow_undefined_flag} -h$soname -o $lib $predep_objects $libobjs $deplibs $postdep_objects $compiler_flags ${wl}-retain-symbols-file ${wl}$export_symbols' hardcode_libdir_flag_spec_CXX='-R$libdir' whole_archive_flag_spec_CXX='${wl}--whole-archive`new_convenience=; for conv in $convenience\"\"; do test -z \"$conv\" || new_convenience=\"$new_convenience,$conv\"; done; func_echo_all \"$new_convenience\"` ${wl}--no-whole-archive' compiler_needs_object_CXX=yes # Not sure whether something based on # $CC $CFLAGS -v conftest.$objext -o libconftest$shared_ext 2>&1 # would be better. output_verbose_link_cmd='func_echo_all' # Archives containing C++ object files must be created using # "CC -xar", where "CC" is the Sun C++ compiler. 
This is # necessary to make sure instantiated templates are included # in the archive. old_archive_cmds_CXX='$CC -xar -o $oldlib $oldobjs' ;; esac ;; esac ;; lynxos*) # FIXME: insert proper C++ library support ld_shlibs_CXX=no ;; m88k*) # FIXME: insert proper C++ library support ld_shlibs_CXX=no ;; mvs*) case $cc_basename in cxx*) # FIXME: insert proper C++ library support ld_shlibs_CXX=no ;; *) # FIXME: insert proper C++ library support ld_shlibs_CXX=no ;; esac ;; netbsd*) if echo __ELF__ | $CC -E - | $GREP __ELF__ >/dev/null; then archive_cmds_CXX='$LD -Bshareable -o $lib $predep_objects $libobjs $deplibs $postdep_objects $linker_flags' wlarc= hardcode_libdir_flag_spec_CXX='-R$libdir' hardcode_direct_CXX=yes hardcode_shlibpath_var_CXX=no fi # Workaround some broken pre-1.5 toolchains output_verbose_link_cmd='$CC -shared $CFLAGS -v conftest.$objext 2>&1 | $GREP conftest.$objext | $SED -e "s:-lgcc -lc -lgcc::"' ;; *nto* | *qnx*) ld_shlibs_CXX=yes ;; openbsd2*) # C++ shared libraries are fairly broken ld_shlibs_CXX=no ;; openbsd*) if test -f /usr/libexec/ld.so; then hardcode_direct_CXX=yes hardcode_shlibpath_var_CXX=no hardcode_direct_absolute_CXX=yes archive_cmds_CXX='$CC -shared $pic_flag $predep_objects $libobjs $deplibs $postdep_objects $compiler_flags -o $lib' hardcode_libdir_flag_spec_CXX='${wl}-rpath,$libdir' if test -z "`echo __ELF__ | $CC -E - | grep __ELF__`" || test "$host_os-$host_cpu" = "openbsd2.8-powerpc"; then archive_expsym_cmds_CXX='$CC -shared $pic_flag $predep_objects $libobjs $deplibs $postdep_objects $compiler_flags ${wl}-retain-symbols-file,$export_symbols -o $lib' export_dynamic_flag_spec_CXX='${wl}-E' whole_archive_flag_spec_CXX="$wlarc"'--whole-archive$convenience '"$wlarc"'--no-whole-archive' fi output_verbose_link_cmd=func_echo_all else ld_shlibs_CXX=no fi ;; osf3* | osf4* | osf5*) case $cc_basename in KCC*) # Kuck and Associates, Inc. 
(KAI) C++ Compiler # KCC will only create a shared library if the output file # ends with ".so" (or ".sl" for HP-UX), so rename the library # to its proper name (with version) after linking. archive_cmds_CXX='tempext=`echo $shared_ext | $SED -e '\''s/\([^()0-9A-Za-z{}]\)/\\\\\1/g'\''`; templib=`echo "$lib" | $SED -e "s/\${tempext}\..*/.so/"`; $CC $predep_objects $libobjs $deplibs $postdep_objects $compiler_flags --soname $soname -o \$templib; mv \$templib $lib' hardcode_libdir_flag_spec_CXX='${wl}-rpath,$libdir' hardcode_libdir_separator_CXX=: # Archives containing C++ object files must be created using # the KAI C++ compiler. case $host in osf3*) old_archive_cmds_CXX='$CC -Bstatic -o $oldlib $oldobjs' ;; *) old_archive_cmds_CXX='$CC -o $oldlib $oldobjs' ;; esac ;; RCC*) # Rational C++ 2.4.1 # FIXME: insert proper C++ library support ld_shlibs_CXX=no ;; cxx*) case $host in osf3*) allow_undefined_flag_CXX=' ${wl}-expect_unresolved ${wl}\*' archive_cmds_CXX='$CC -shared${allow_undefined_flag} $predep_objects $libobjs $deplibs $postdep_objects $compiler_flags ${wl}-soname $soname `test -n "$verstring" && func_echo_all "${wl}-set_version $verstring"` -update_registry ${output_objdir}/so_locations -o $lib' hardcode_libdir_flag_spec_CXX='${wl}-rpath ${wl}$libdir' ;; *) allow_undefined_flag_CXX=' -expect_unresolved \*' archive_cmds_CXX='$CC -shared${allow_undefined_flag} $predep_objects $libobjs $deplibs $postdep_objects $compiler_flags -msym -soname $soname `test -n "$verstring" && func_echo_all "-set_version $verstring"` -update_registry ${output_objdir}/so_locations -o $lib' archive_expsym_cmds_CXX='for i in `cat $export_symbols`; do printf "%s %s\\n" -exported_symbol "\$i" >> $lib.exp; done~ echo "-hidden">> $lib.exp~ $CC -shared$allow_undefined_flag $predep_objects $libobjs $deplibs $postdep_objects $compiler_flags -msym -soname $soname ${wl}-input ${wl}$lib.exp `test -n "$verstring" && $ECHO "-set_version $verstring"` -update_registry ${output_objdir}/so_locations 
-o $lib~ $RM $lib.exp' hardcode_libdir_flag_spec_CXX='-rpath $libdir' ;; esac hardcode_libdir_separator_CXX=: # Commands to make compiler produce verbose output that lists # what "hidden" libraries, object files and flags are used when # linking a shared library. # # There doesn't appear to be a way to prevent this compiler from # explicitly linking system object files so we need to strip them # from the output so that they don't get included in the library # dependencies. output_verbose_link_cmd='templist=`$CC -shared $CFLAGS -v conftest.$objext 2>&1 | $GREP "ld" | $GREP -v "ld:"`; templist=`func_echo_all "$templist" | $SED "s/\(^.*ld.*\)\( .*ld.*$\)/\1/"`; list=""; for z in $templist; do case $z in conftest.$objext) list="$list $z";; *.$objext);; *) list="$list $z";;esac; done; func_echo_all "$list"' ;; *) if test "$GXX" = yes && test "$with_gnu_ld" = no; then allow_undefined_flag_CXX=' ${wl}-expect_unresolved ${wl}\*' case $host in osf3*) archive_cmds_CXX='$CC -shared -nostdlib ${allow_undefined_flag} $predep_objects $libobjs $deplibs $postdep_objects $compiler_flags ${wl}-soname ${wl}$soname `test -n "$verstring" && func_echo_all "${wl}-set_version ${wl}$verstring"` ${wl}-update_registry ${wl}${output_objdir}/so_locations -o $lib' ;; *) archive_cmds_CXX='$CC -shared $pic_flag -nostdlib ${allow_undefined_flag} $predep_objects $libobjs $deplibs $postdep_objects $compiler_flags ${wl}-msym ${wl}-soname ${wl}$soname `test -n "$verstring" && func_echo_all "${wl}-set_version ${wl}$verstring"` ${wl}-update_registry ${wl}${output_objdir}/so_locations -o $lib' ;; esac hardcode_libdir_flag_spec_CXX='${wl}-rpath ${wl}$libdir' hardcode_libdir_separator_CXX=: # Commands to make compiler produce verbose output that lists # what "hidden" libraries, object files and flags are used when # linking a shared library. 
output_verbose_link_cmd='$CC -shared $CFLAGS -v conftest.$objext 2>&1 | $GREP -v "^Configured with:" | $GREP "\-L"' else # FIXME: insert proper C++ library support ld_shlibs_CXX=no fi ;; esac ;; psos*) # FIXME: insert proper C++ library support ld_shlibs_CXX=no ;; sunos4*) case $cc_basename in CC*) # Sun C++ 4.x # FIXME: insert proper C++ library support ld_shlibs_CXX=no ;; lcc*) # Lucid # FIXME: insert proper C++ library support ld_shlibs_CXX=no ;; *) # FIXME: insert proper C++ library support ld_shlibs_CXX=no ;; esac ;; solaris*) case $cc_basename in CC* | sunCC*) # Sun C++ 4.2, 5.x and Centerline C++ archive_cmds_need_lc_CXX=yes no_undefined_flag_CXX=' -zdefs' archive_cmds_CXX='$CC -G${allow_undefined_flag} -h$soname -o $lib $predep_objects $libobjs $deplibs $postdep_objects $compiler_flags' archive_expsym_cmds_CXX='echo "{ global:" > $lib.exp~cat $export_symbols | $SED -e "s/\(.*\)/\1;/" >> $lib.exp~echo "local: *; };" >> $lib.exp~ $CC -G${allow_undefined_flag} ${wl}-M ${wl}$lib.exp -h$soname -o $lib $predep_objects $libobjs $deplibs $postdep_objects $compiler_flags~$RM $lib.exp' hardcode_libdir_flag_spec_CXX='-R$libdir' hardcode_shlibpath_var_CXX=no case $host_os in solaris2.[0-5] | solaris2.[0-5].*) ;; *) # The compiler driver will combine and reorder linker options, # but understands `-z linker_flag'. # Supported since Solaris 2.6 (maybe 2.5.1?) whole_archive_flag_spec_CXX='-z allextract$convenience -z defaultextract' ;; esac link_all_deplibs_CXX=yes output_verbose_link_cmd='func_echo_all' # Archives containing C++ object files must be created using # "CC -xar", where "CC" is the Sun C++ compiler. This is # necessary to make sure instantiated templates are included # in the archive. old_archive_cmds_CXX='$CC -xar -o $oldlib $oldobjs' ;; gcx*) # Green Hills C++ Compiler archive_cmds_CXX='$CC -shared $predep_objects $libobjs $deplibs $postdep_objects $compiler_flags ${wl}-h $wl$soname -o $lib' # The C++ compiler must be used to create the archive. 
old_archive_cmds_CXX='$CC $LDFLAGS -archive -o $oldlib $oldobjs' ;; *) # GNU C++ compiler with Solaris linker if test "$GXX" = yes && test "$with_gnu_ld" = no; then no_undefined_flag_CXX=' ${wl}-z ${wl}defs' if $CC --version | $GREP -v '^2\.7' > /dev/null; then archive_cmds_CXX='$CC -shared $pic_flag -nostdlib $LDFLAGS $predep_objects $libobjs $deplibs $postdep_objects $compiler_flags ${wl}-h $wl$soname -o $lib' archive_expsym_cmds_CXX='echo "{ global:" > $lib.exp~cat $export_symbols | $SED -e "s/\(.*\)/\1;/" >> $lib.exp~echo "local: *; };" >> $lib.exp~ $CC -shared $pic_flag -nostdlib ${wl}-M $wl$lib.exp -o $lib $predep_objects $libobjs $deplibs $postdep_objects $compiler_flags~$RM $lib.exp' # Commands to make compiler produce verbose output that lists # what "hidden" libraries, object files and flags are used when # linking a shared library. output_verbose_link_cmd='$CC -shared $CFLAGS -v conftest.$objext 2>&1 | $GREP -v "^Configured with:" | $GREP "\-L"' else # g++ 2.7 appears to require `-G' NOT `-shared' on this # platform. archive_cmds_CXX='$CC -G -nostdlib $LDFLAGS $predep_objects $libobjs $deplibs $postdep_objects $compiler_flags ${wl}-h $wl$soname -o $lib' archive_expsym_cmds_CXX='echo "{ global:" > $lib.exp~cat $export_symbols | $SED -e "s/\(.*\)/\1;/" >> $lib.exp~echo "local: *; };" >> $lib.exp~ $CC -G -nostdlib ${wl}-M $wl$lib.exp -o $lib $predep_objects $libobjs $deplibs $postdep_objects $compiler_flags~$RM $lib.exp' # Commands to make compiler produce verbose output that lists # what "hidden" libraries, object files and flags are used when # linking a shared library. 
output_verbose_link_cmd='$CC -G $CFLAGS -v conftest.$objext 2>&1 | $GREP -v "^Configured with:" | $GREP "\-L"' fi hardcode_libdir_flag_spec_CXX='${wl}-R $wl$libdir' case $host_os in solaris2.[0-5] | solaris2.[0-5].*) ;; *) whole_archive_flag_spec_CXX='${wl}-z ${wl}allextract$convenience ${wl}-z ${wl}defaultextract' ;; esac fi ;; esac ;; sysv4*uw2* | sysv5OpenUNIX* | sysv5UnixWare7.[01].[10]* | unixware7* | sco3.2v5.0.[024]*) no_undefined_flag_CXX='${wl}-z,text' archive_cmds_need_lc_CXX=no hardcode_shlibpath_var_CXX=no runpath_var='LD_RUN_PATH' case $cc_basename in CC*) archive_cmds_CXX='$CC -G ${wl}-h,$soname -o $lib $libobjs $deplibs $compiler_flags' archive_expsym_cmds_CXX='$CC -G ${wl}-Bexport:$export_symbols ${wl}-h,$soname -o $lib $libobjs $deplibs $compiler_flags' ;; *) archive_cmds_CXX='$CC -shared ${wl}-h,$soname -o $lib $libobjs $deplibs $compiler_flags' archive_expsym_cmds_CXX='$CC -shared ${wl}-Bexport:$export_symbols ${wl}-h,$soname -o $lib $libobjs $deplibs $compiler_flags' ;; esac ;; sysv5* | sco3.2v5* | sco5v6*) # Note: We can NOT use -z defs as we might desire, because we do not # link with -lc, and that would cause any symbols used from libc to # always be unresolved, which means just about no library would # ever link correctly. If we're not using GNU ld we use -z text # though, which does catch some bad symbols but isn't as heavy-handed # as -z defs. 
no_undefined_flag_CXX='${wl}-z,text' allow_undefined_flag_CXX='${wl}-z,nodefs' archive_cmds_need_lc_CXX=no hardcode_shlibpath_var_CXX=no hardcode_libdir_flag_spec_CXX='${wl}-R,$libdir' hardcode_libdir_separator_CXX=':' link_all_deplibs_CXX=yes export_dynamic_flag_spec_CXX='${wl}-Bexport' runpath_var='LD_RUN_PATH' case $cc_basename in CC*) archive_cmds_CXX='$CC -G ${wl}-h,$soname -o $lib $libobjs $deplibs $compiler_flags' archive_expsym_cmds_CXX='$CC -G ${wl}-Bexport:$export_symbols ${wl}-h,$soname -o $lib $libobjs $deplibs $compiler_flags' old_archive_cmds_CXX='$CC -Tprelink_objects $oldobjs~ '"$old_archive_cmds_CXX" reload_cmds_CXX='$CC -Tprelink_objects $reload_objs~ '"$reload_cmds_CXX" ;; *) archive_cmds_CXX='$CC -shared ${wl}-h,$soname -o $lib $libobjs $deplibs $compiler_flags' archive_expsym_cmds_CXX='$CC -shared ${wl}-Bexport:$export_symbols ${wl}-h,$soname -o $lib $libobjs $deplibs $compiler_flags' ;; esac ;; tandem*) case $cc_basename in NCC*) # NonStop-UX NCC 3.20 # FIXME: insert proper C++ library support ld_shlibs_CXX=no ;; *) # FIXME: insert proper C++ library support ld_shlibs_CXX=no ;; esac ;; vxworks*) # FIXME: insert proper C++ library support ld_shlibs_CXX=no ;; *) # FIXME: insert proper C++ library support ld_shlibs_CXX=no ;; esac { $as_echo "$as_me:${as_lineno-$LINENO}: result: $ld_shlibs_CXX" >&5 $as_echo "$ld_shlibs_CXX" >&6; } test "$ld_shlibs_CXX" = no && can_build_shared=no GCC_CXX="$GXX" LD_CXX="$LD" ## CAVEAT EMPTOR: ## There is no encapsulation within the following macros, do not change ## the running order or otherwise move them around unless you know exactly ## what you are doing... 
# Dependencies to place before and after the object being linked: predep_objects_CXX= postdep_objects_CXX= predeps_CXX= postdeps_CXX= compiler_lib_search_path_CXX= cat > conftest.$ac_ext <<_LT_EOF class Foo { public: Foo (void) { a = 0; } private: int a; }; _LT_EOF _lt_libdeps_save_CFLAGS=$CFLAGS case "$CC $CFLAGS " in #( *\ -flto*\ *) CFLAGS="$CFLAGS -fno-lto" ;; *\ -fwhopr*\ *) CFLAGS="$CFLAGS -fno-whopr" ;; *\ -fuse-linker-plugin*\ *) CFLAGS="$CFLAGS -fno-use-linker-plugin" ;; esac if { { eval echo "\"\$as_me\":${as_lineno-$LINENO}: \"$ac_compile\""; } >&5 (eval $ac_compile) 2>&5 ac_status=$? $as_echo "$as_me:${as_lineno-$LINENO}: \$? = $ac_status" >&5 test $ac_status = 0; }; then # Parse the compiler output and extract the necessary # objects, libraries and library flags. # Sentinel used to keep track of whether or not we are before # the conftest object file. pre_test_object_deps_done=no for p in `eval "$output_verbose_link_cmd"`; do case ${prev}${p} in -L* | -R* | -l*) # Some compilers place space between "-{L,R}" and the path. # Remove the space. if test $p = "-L" || test $p = "-R"; then prev=$p continue fi # Expand the sysroot to ease extracting the directories later. if test -z "$prev"; then case $p in -L*) func_stripname_cnf '-L' '' "$p"; prev=-L; p=$func_stripname_result ;; -R*) func_stripname_cnf '-R' '' "$p"; prev=-R; p=$func_stripname_result ;; -l*) func_stripname_cnf '-l' '' "$p"; prev=-l; p=$func_stripname_result ;; esac fi case $p in =*) func_stripname_cnf '=' '' "$p"; p=$lt_sysroot$func_stripname_result ;; esac if test "$pre_test_object_deps_done" = no; then case ${prev} in -L | -R) # Internal compiler library paths should come after those # provided the user. The postdeps already come after the # user supplied libs so there is no need to process them. 
if test -z "$compiler_lib_search_path_CXX"; then compiler_lib_search_path_CXX="${prev}${p}" else compiler_lib_search_path_CXX="${compiler_lib_search_path_CXX} ${prev}${p}" fi ;; # The "-l" case would never come before the object being # linked, so don't bother handling this case. esac else if test -z "$postdeps_CXX"; then postdeps_CXX="${prev}${p}" else postdeps_CXX="${postdeps_CXX} ${prev}${p}" fi fi prev= ;; *.lto.$objext) ;; # Ignore GCC LTO objects *.$objext) # This assumes that the test object file only shows up # once in the compiler output. if test "$p" = "conftest.$objext"; then pre_test_object_deps_done=yes continue fi if test "$pre_test_object_deps_done" = no; then if test -z "$predep_objects_CXX"; then predep_objects_CXX="$p" else predep_objects_CXX="$predep_objects_CXX $p" fi else if test -z "$postdep_objects_CXX"; then postdep_objects_CXX="$p" else postdep_objects_CXX="$postdep_objects_CXX $p" fi fi ;; *) ;; # Ignore the rest. esac done # Clean up. rm -f a.out a.exe else echo "libtool.m4: error: problem compiling CXX test program" fi $RM -f conftest.$objext CFLAGS=$_lt_libdeps_save_CFLAGS # PORTME: override above test on systems where it is broken case $host_os in interix[3-9]*) # Interix 3.5 installs completely hosed .la files for C++, so rather than # hack all around it, let's just trust "g++" to DTRT. predep_objects_CXX= postdep_objects_CXX= postdeps_CXX= ;; linux*) case `$CC -V 2>&1 | sed 5q` in *Sun\ C*) # Sun C++ 5.9 # The more standards-conforming stlport4 library is # incompatible with the Cstd library. Avoid specifying # it if it's in CXXFLAGS. Ignore libCrun as # -library=stlport4 depends on it. case " $CXX $CXXFLAGS " in *" -library=stlport4 "*) solaris_use_stlport4=yes ;; esac if test "$solaris_use_stlport4" != yes; then postdeps_CXX='-library=Cstd -library=Crun' fi ;; esac ;; solaris*) case $cc_basename in CC* | sunCC*) # The more standards-conforming stlport4 library is # incompatible with the Cstd library.
Avoid specifying # it if it's in CXXFLAGS. Ignore libCrun as # -library=stlport4 depends on it. case " $CXX $CXXFLAGS " in *" -library=stlport4 "*) solaris_use_stlport4=yes ;; esac # Adding this requires a known-good setup of shared libraries for # Sun compiler versions before 5.6, else PIC objects from an old # archive will be linked into the output, leading to subtle bugs. if test "$solaris_use_stlport4" != yes; then postdeps_CXX='-library=Cstd -library=Crun' fi ;; esac ;; esac case " $postdeps_CXX " in *" -lc "*) archive_cmds_need_lc_CXX=no ;; esac compiler_lib_search_dirs_CXX= if test -n "${compiler_lib_search_path_CXX}"; then compiler_lib_search_dirs_CXX=`echo " ${compiler_lib_search_path_CXX}" | ${SED} -e 's! -L! !g' -e 's!^ !!'` fi lt_prog_compiler_wl_CXX= lt_prog_compiler_pic_CXX= lt_prog_compiler_static_CXX= # C++ specific cases for pic, static, wl, etc. if test "$GXX" = yes; then lt_prog_compiler_wl_CXX='-Wl,' lt_prog_compiler_static_CXX='-static' case $host_os in aix*) # All AIX code is PIC. if test "$host_cpu" = ia64; then # AIX 5 now supports IA64 processor lt_prog_compiler_static_CXX='-Bstatic' fi ;; amigaos*) case $host_cpu in powerpc) # see comment about AmigaOS4 .so support lt_prog_compiler_pic_CXX='-fPIC' ;; m68k) # FIXME: we need at least 68020 code to build shared libraries, but # adding the `-m68020' flag to GCC prevents building anything better, # like `-m68040'. lt_prog_compiler_pic_CXX='-m68020 -resident32 -malways-restore-a4' ;; esac ;; beos* | irix5* | irix6* | nonstopux* | osf3* | osf4* | osf5*) # PIC is the default for these OSes. ;; mingw* | cygwin* | os2* | pw32* | cegcc*) # This hack is so that the source file can tell whether it is being # built for inclusion in a dll (and should export symbols for example). 
# Although the cygwin gcc ignores -fPIC, still need this for old-style # (--disable-auto-import) libraries lt_prog_compiler_pic_CXX='-DDLL_EXPORT' ;; darwin* | rhapsody*) # PIC is the default on this platform # Common symbols not allowed in MH_DYLIB files lt_prog_compiler_pic_CXX='-fno-common' ;; *djgpp*) # DJGPP does not support shared libraries at all lt_prog_compiler_pic_CXX= ;; haiku*) # PIC is the default for Haiku. # The "-static" flag exists, but is broken. lt_prog_compiler_static_CXX= ;; interix[3-9]*) # Interix 3.x gcc -fpic/-fPIC options generate broken code. # Instead, we relocate shared libraries at runtime. ;; sysv4*MP*) if test -d /usr/nec; then lt_prog_compiler_pic_CXX=-Kconform_pic fi ;; hpux*) # PIC is the default for 64-bit PA HP-UX, but not for 32-bit # PA HP-UX. On IA64 HP-UX, PIC is the default but the pic flag # sets the default TLS model and affects inlining. case $host_cpu in hppa*64*) ;; *) lt_prog_compiler_pic_CXX='-fPIC' ;; esac ;; *qnx* | *nto*) # QNX uses GNU C++, but need to define -shared option too, otherwise # it will coredump. lt_prog_compiler_pic_CXX='-fPIC -shared' ;; *) lt_prog_compiler_pic_CXX='-fPIC' ;; esac else case $host_os in aix[4-9]*) # All AIX code is PIC. if test "$host_cpu" = ia64; then # AIX 5 now supports IA64 processor lt_prog_compiler_static_CXX='-Bstatic' else lt_prog_compiler_static_CXX='-bnso -bI:/lib/syscalls.exp' fi ;; chorus*) case $cc_basename in cxch68*) # Green Hills C++ Compiler # _LT_TAGVAR(lt_prog_compiler_static, CXX)="--no_auto_instantiation -u __main -u __premain -u _abort -r $COOL_DIR/lib/libOrb.a $MVME_DIR/lib/CC/libC.a $MVME_DIR/lib/classix/libcx.s.a" ;; esac ;; mingw* | cygwin* | os2* | pw32* | cegcc*) # This hack is so that the source file can tell whether it is being # built for inclusion in a dll (and should export symbols for example). 
lt_prog_compiler_pic_CXX='-DDLL_EXPORT' ;; dgux*) case $cc_basename in ec++*) lt_prog_compiler_pic_CXX='-KPIC' ;; ghcx*) # Green Hills C++ Compiler lt_prog_compiler_pic_CXX='-pic' ;; *) ;; esac ;; freebsd* | dragonfly*) # FreeBSD uses GNU C++ ;; hpux9* | hpux10* | hpux11*) case $cc_basename in CC*) lt_prog_compiler_wl_CXX='-Wl,' lt_prog_compiler_static_CXX='${wl}-a ${wl}archive' if test "$host_cpu" != ia64; then lt_prog_compiler_pic_CXX='+Z' fi ;; aCC*) lt_prog_compiler_wl_CXX='-Wl,' lt_prog_compiler_static_CXX='${wl}-a ${wl}archive' case $host_cpu in hppa*64*|ia64*) # +Z the default ;; *) lt_prog_compiler_pic_CXX='+Z' ;; esac ;; *) ;; esac ;; interix*) # This is c89, which is MS Visual C++ (no shared libs) # Anyone wants to do a port? ;; irix5* | irix6* | nonstopux*) case $cc_basename in CC*) lt_prog_compiler_wl_CXX='-Wl,' lt_prog_compiler_static_CXX='-non_shared' # CC pic flag -KPIC is the default. ;; *) ;; esac ;; linux* | k*bsd*-gnu | kopensolaris*-gnu | gnu*) case $cc_basename in KCC*) # KAI C++ Compiler lt_prog_compiler_wl_CXX='--backend -Wl,' lt_prog_compiler_pic_CXX='-fPIC' ;; ecpc* ) # old Intel C++ for x86_64 which still supported -KPIC. lt_prog_compiler_wl_CXX='-Wl,' lt_prog_compiler_pic_CXX='-KPIC' lt_prog_compiler_static_CXX='-static' ;; icpc* ) # Intel C++, used to be incompatible with GCC. # ICC 10 doesn't accept -KPIC any more. lt_prog_compiler_wl_CXX='-Wl,' lt_prog_compiler_pic_CXX='-fPIC' lt_prog_compiler_static_CXX='-static' ;; pgCC* | pgcpp*) # Portland Group C++ compiler lt_prog_compiler_wl_CXX='-Wl,' lt_prog_compiler_pic_CXX='-fpic' lt_prog_compiler_static_CXX='-Bstatic' ;; cxx*) # Compaq C++ # Make sure the PIC flag is empty. It appears that all Alpha # Linux and Compaq Tru64 Unix objects are PIC. 
lt_prog_compiler_pic_CXX= lt_prog_compiler_static_CXX='-non_shared' ;; xlc* | xlC* | bgxl[cC]* | mpixl[cC]*) # IBM XL 8.0, 9.0 on PPC and BlueGene lt_prog_compiler_wl_CXX='-Wl,' lt_prog_compiler_pic_CXX='-qpic' lt_prog_compiler_static_CXX='-qstaticlink' ;; *) case `$CC -V 2>&1 | sed 5q` in *Sun\ C*) # Sun C++ 5.9 lt_prog_compiler_pic_CXX='-KPIC' lt_prog_compiler_static_CXX='-Bstatic' lt_prog_compiler_wl_CXX='-Qoption ld ' ;; esac ;; esac ;; lynxos*) ;; m88k*) ;; mvs*) case $cc_basename in cxx*) lt_prog_compiler_pic_CXX='-W c,exportall' ;; *) ;; esac ;; netbsd* | netbsdelf*-gnu) ;; *qnx* | *nto*) # QNX uses GNU C++, but need to define -shared option too, otherwise # it will coredump. lt_prog_compiler_pic_CXX='-fPIC -shared' ;; osf3* | osf4* | osf5*) case $cc_basename in KCC*) lt_prog_compiler_wl_CXX='--backend -Wl,' ;; RCC*) # Rational C++ 2.4.1 lt_prog_compiler_pic_CXX='-pic' ;; cxx*) # Digital/Compaq C++ lt_prog_compiler_wl_CXX='-Wl,' # Make sure the PIC flag is empty. It appears that all Alpha # Linux and Compaq Tru64 Unix objects are PIC. 
lt_prog_compiler_pic_CXX= lt_prog_compiler_static_CXX='-non_shared' ;; *) ;; esac ;; psos*) ;; solaris*) case $cc_basename in CC* | sunCC*) # Sun C++ 4.2, 5.x and Centerline C++ lt_prog_compiler_pic_CXX='-KPIC' lt_prog_compiler_static_CXX='-Bstatic' lt_prog_compiler_wl_CXX='-Qoption ld ' ;; gcx*) # Green Hills C++ Compiler lt_prog_compiler_pic_CXX='-PIC' ;; *) ;; esac ;; sunos4*) case $cc_basename in CC*) # Sun C++ 4.x lt_prog_compiler_pic_CXX='-pic' lt_prog_compiler_static_CXX='-Bstatic' ;; lcc*) # Lucid lt_prog_compiler_pic_CXX='-pic' ;; *) ;; esac ;; sysv5* | unixware* | sco3.2v5* | sco5v6* | OpenUNIX*) case $cc_basename in CC*) lt_prog_compiler_wl_CXX='-Wl,' lt_prog_compiler_pic_CXX='-KPIC' lt_prog_compiler_static_CXX='-Bstatic' ;; esac ;; tandem*) case $cc_basename in NCC*) # NonStop-UX NCC 3.20 lt_prog_compiler_pic_CXX='-KPIC' ;; *) ;; esac ;; vxworks*) ;; *) lt_prog_compiler_can_build_shared_CXX=no ;; esac fi case $host_os in # For platforms which do not support PIC, -DPIC is meaningless: *djgpp*) lt_prog_compiler_pic_CXX= ;; *) lt_prog_compiler_pic_CXX="$lt_prog_compiler_pic_CXX -DPIC" ;; esac { $as_echo "$as_me:${as_lineno-$LINENO}: checking for $compiler option to produce PIC" >&5 $as_echo_n "checking for $compiler option to produce PIC... " >&6; } if ${lt_cv_prog_compiler_pic_CXX+:} false; then : $as_echo_n "(cached) " >&6 else lt_cv_prog_compiler_pic_CXX=$lt_prog_compiler_pic_CXX fi { $as_echo "$as_me:${as_lineno-$LINENO}: result: $lt_cv_prog_compiler_pic_CXX" >&5 $as_echo "$lt_cv_prog_compiler_pic_CXX" >&6; } lt_prog_compiler_pic_CXX=$lt_cv_prog_compiler_pic_CXX # # Check to make sure the PIC flag actually works. # if test -n "$lt_prog_compiler_pic_CXX"; then { $as_echo "$as_me:${as_lineno-$LINENO}: checking if $compiler PIC flag $lt_prog_compiler_pic_CXX works" >&5 $as_echo_n "checking if $compiler PIC flag $lt_prog_compiler_pic_CXX works... 
" >&6; } if ${lt_cv_prog_compiler_pic_works_CXX+:} false; then : $as_echo_n "(cached) " >&6 else lt_cv_prog_compiler_pic_works_CXX=no ac_outfile=conftest.$ac_objext echo "$lt_simple_compile_test_code" > conftest.$ac_ext lt_compiler_flag="$lt_prog_compiler_pic_CXX -DPIC" # Insert the option either (1) after the last *FLAGS variable, or # (2) before a word containing "conftest.", or (3) at the end. # Note that $ac_compile itself does not contain backslashes and begins # with a dollar sign (not a hyphen), so the echo should work correctly. # The option is referenced via a variable to avoid confusing sed. lt_compile=`echo "$ac_compile" | $SED \ -e 's:.*FLAGS}\{0,1\} :&$lt_compiler_flag :; t' \ -e 's: [^ ]*conftest\.: $lt_compiler_flag&:; t' \ -e 's:$: $lt_compiler_flag:'` (eval echo "\"\$as_me:$LINENO: $lt_compile\"" >&5) (eval "$lt_compile" 2>conftest.err) ac_status=$? cat conftest.err >&5 echo "$as_me:$LINENO: \$? = $ac_status" >&5 if (exit $ac_status) && test -s "$ac_outfile"; then # The compiler can only warn and ignore the option if not recognized # So say no if there are warnings other than the usual output. $ECHO "$_lt_compiler_boilerplate" | $SED '/^$/d' >conftest.exp $SED '/^$/d; /^ *+/d' conftest.err >conftest.er2 if test ! -s conftest.er2 || diff conftest.exp conftest.er2 >/dev/null; then lt_cv_prog_compiler_pic_works_CXX=yes fi fi $RM conftest* fi { $as_echo "$as_me:${as_lineno-$LINENO}: result: $lt_cv_prog_compiler_pic_works_CXX" >&5 $as_echo "$lt_cv_prog_compiler_pic_works_CXX" >&6; } if test x"$lt_cv_prog_compiler_pic_works_CXX" = xyes; then case $lt_prog_compiler_pic_CXX in "" | " "*) ;; *) lt_prog_compiler_pic_CXX=" $lt_prog_compiler_pic_CXX" ;; esac else lt_prog_compiler_pic_CXX= lt_prog_compiler_can_build_shared_CXX=no fi fi # # Check to make sure the static flag actually works. 
# wl=$lt_prog_compiler_wl_CXX eval lt_tmp_static_flag=\"$lt_prog_compiler_static_CXX\" { $as_echo "$as_me:${as_lineno-$LINENO}: checking if $compiler static flag $lt_tmp_static_flag works" >&5 $as_echo_n "checking if $compiler static flag $lt_tmp_static_flag works... " >&6; } if ${lt_cv_prog_compiler_static_works_CXX+:} false; then : $as_echo_n "(cached) " >&6 else lt_cv_prog_compiler_static_works_CXX=no save_LDFLAGS="$LDFLAGS" LDFLAGS="$LDFLAGS $lt_tmp_static_flag" echo "$lt_simple_link_test_code" > conftest.$ac_ext if (eval $ac_link 2>conftest.err) && test -s conftest$ac_exeext; then # The linker can only warn and ignore the option if not recognized # So say no if there are warnings if test -s conftest.err; then # Append any errors to the config.log. cat conftest.err 1>&5 $ECHO "$_lt_linker_boilerplate" | $SED '/^$/d' > conftest.exp $SED '/^$/d; /^ *+/d' conftest.err >conftest.er2 if diff conftest.exp conftest.er2 >/dev/null; then lt_cv_prog_compiler_static_works_CXX=yes fi else lt_cv_prog_compiler_static_works_CXX=yes fi fi $RM -r conftest* LDFLAGS="$save_LDFLAGS" fi { $as_echo "$as_me:${as_lineno-$LINENO}: result: $lt_cv_prog_compiler_static_works_CXX" >&5 $as_echo "$lt_cv_prog_compiler_static_works_CXX" >&6; } if test x"$lt_cv_prog_compiler_static_works_CXX" = xyes; then : else lt_prog_compiler_static_CXX= fi { $as_echo "$as_me:${as_lineno-$LINENO}: checking if $compiler supports -c -o file.$ac_objext" >&5 $as_echo_n "checking if $compiler supports -c -o file.$ac_objext... " >&6; } if ${lt_cv_prog_compiler_c_o_CXX+:} false; then : $as_echo_n "(cached) " >&6 else lt_cv_prog_compiler_c_o_CXX=no $RM -r conftest 2>/dev/null mkdir conftest cd conftest mkdir out echo "$lt_simple_compile_test_code" > conftest.$ac_ext lt_compiler_flag="-o out/conftest2.$ac_objext" # Insert the option either (1) after the last *FLAGS variable, or # (2) before a word containing "conftest.", or (3) at the end. 
# Note that $ac_compile itself does not contain backslashes and begins # with a dollar sign (not a hyphen), so the echo should work correctly. lt_compile=`echo "$ac_compile" | $SED \ -e 's:.*FLAGS}\{0,1\} :&$lt_compiler_flag :; t' \ -e 's: [^ ]*conftest\.: $lt_compiler_flag&:; t' \ -e 's:$: $lt_compiler_flag:'` (eval echo "\"\$as_me:$LINENO: $lt_compile\"" >&5) (eval "$lt_compile" 2>out/conftest.err) ac_status=$? cat out/conftest.err >&5 echo "$as_me:$LINENO: \$? = $ac_status" >&5 if (exit $ac_status) && test -s out/conftest2.$ac_objext then # The compiler can only warn and ignore the option if not recognized # So say no if there are warnings $ECHO "$_lt_compiler_boilerplate" | $SED '/^$/d' > out/conftest.exp $SED '/^$/d; /^ *+/d' out/conftest.err >out/conftest.er2 if test ! -s out/conftest.er2 || diff out/conftest.exp out/conftest.er2 >/dev/null; then lt_cv_prog_compiler_c_o_CXX=yes fi fi chmod u+w . 2>&5 $RM conftest* # SGI C++ compiler will create directory out/ii_files/ for # template instantiation test -d out/ii_files && $RM out/ii_files/* && rmdir out/ii_files $RM out/* && rmdir out cd .. $RM -r conftest $RM conftest* fi { $as_echo "$as_me:${as_lineno-$LINENO}: result: $lt_cv_prog_compiler_c_o_CXX" >&5 $as_echo "$lt_cv_prog_compiler_c_o_CXX" >&6; } { $as_echo "$as_me:${as_lineno-$LINENO}: checking if $compiler supports -c -o file.$ac_objext" >&5 $as_echo_n "checking if $compiler supports -c -o file.$ac_objext... " >&6; } if ${lt_cv_prog_compiler_c_o_CXX+:} false; then : $as_echo_n "(cached) " >&6 else lt_cv_prog_compiler_c_o_CXX=no $RM -r conftest 2>/dev/null mkdir conftest cd conftest mkdir out echo "$lt_simple_compile_test_code" > conftest.$ac_ext lt_compiler_flag="-o out/conftest2.$ac_objext" # Insert the option either (1) after the last *FLAGS variable, or # (2) before a word containing "conftest.", or (3) at the end. 
# Note that $ac_compile itself does not contain backslashes and begins # with a dollar sign (not a hyphen), so the echo should work correctly. lt_compile=`echo "$ac_compile" | $SED \ -e 's:.*FLAGS}\{0,1\} :&$lt_compiler_flag :; t' \ -e 's: [^ ]*conftest\.: $lt_compiler_flag&:; t' \ -e 's:$: $lt_compiler_flag:'` (eval echo "\"\$as_me:$LINENO: $lt_compile\"" >&5) (eval "$lt_compile" 2>out/conftest.err) ac_status=$? cat out/conftest.err >&5 echo "$as_me:$LINENO: \$? = $ac_status" >&5 if (exit $ac_status) && test -s out/conftest2.$ac_objext then # The compiler can only warn and ignore the option if not recognized # So say no if there are warnings $ECHO "$_lt_compiler_boilerplate" | $SED '/^$/d' > out/conftest.exp $SED '/^$/d; /^ *+/d' out/conftest.err >out/conftest.er2 if test ! -s out/conftest.er2 || diff out/conftest.exp out/conftest.er2 >/dev/null; then lt_cv_prog_compiler_c_o_CXX=yes fi fi chmod u+w . 2>&5 $RM conftest* # SGI C++ compiler will create directory out/ii_files/ for # template instantiation test -d out/ii_files && $RM out/ii_files/* && rmdir out/ii_files $RM out/* && rmdir out cd .. $RM -r conftest $RM conftest* fi { $as_echo "$as_me:${as_lineno-$LINENO}: result: $lt_cv_prog_compiler_c_o_CXX" >&5 $as_echo "$lt_cv_prog_compiler_c_o_CXX" >&6; } hard_links="nottested" if test "$lt_cv_prog_compiler_c_o_CXX" = no && test "$need_locks" != no; then # do not overwrite the value of need_locks provided by the user { $as_echo "$as_me:${as_lineno-$LINENO}: checking if we can lock with hard links" >&5 $as_echo_n "checking if we can lock with hard links... 
" >&6; } hard_links=yes $RM conftest* ln conftest.a conftest.b 2>/dev/null && hard_links=no touch conftest.a ln conftest.a conftest.b 2>&5 || hard_links=no ln conftest.a conftest.b 2>/dev/null && hard_links=no { $as_echo "$as_me:${as_lineno-$LINENO}: result: $hard_links" >&5 $as_echo "$hard_links" >&6; } if test "$hard_links" = no; then { $as_echo "$as_me:${as_lineno-$LINENO}: WARNING: \`$CC' does not support \`-c -o', so \`make -j' may be unsafe" >&5 $as_echo "$as_me: WARNING: \`$CC' does not support \`-c -o', so \`make -j' may be unsafe" >&2;} need_locks=warn fi else need_locks=no fi { $as_echo "$as_me:${as_lineno-$LINENO}: checking whether the $compiler linker ($LD) supports shared libraries" >&5 $as_echo_n "checking whether the $compiler linker ($LD) supports shared libraries... " >&6; } export_symbols_cmds_CXX='$NM $libobjs $convenience | $global_symbol_pipe | $SED '\''s/.* //'\'' | sort | uniq > $export_symbols' exclude_expsyms_CXX='_GLOBAL_OFFSET_TABLE_|_GLOBAL__F[ID]_.*' case $host_os in aix[4-9]*) # If we're using GNU nm, then we don't want the "-C" option. # -C means demangle to AIX nm, but means don't demangle with GNU nm # Also, AIX nm treats weak defined symbols like other global defined # symbols, whereas GNU nm marks them as "W". 
if $NM -V 2>&1 | $GREP 'GNU' > /dev/null; then export_symbols_cmds_CXX='$NM -Bpg $libobjs $convenience | awk '\''{ if (((\$ 2 == "T") || (\$ 2 == "D") || (\$ 2 == "B") || (\$ 2 == "W")) && (substr(\$ 3,1,1) != ".")) { print \$ 3 } }'\'' | sort -u > $export_symbols' else export_symbols_cmds_CXX='$NM -BCpg $libobjs $convenience | awk '\''{ if (((\$ 2 == "T") || (\$ 2 == "D") || (\$ 2 == "B")) && (substr(\$ 3,1,1) != ".")) { print \$ 3 } }'\'' | sort -u > $export_symbols' fi ;; pw32*) export_symbols_cmds_CXX="$ltdll_cmds" ;; cygwin* | mingw* | cegcc*) case $cc_basename in cl*) exclude_expsyms_CXX='_NULL_IMPORT_DESCRIPTOR|_IMPORT_DESCRIPTOR_.*' ;; *) export_symbols_cmds_CXX='$NM $libobjs $convenience | $global_symbol_pipe | $SED -e '\''/^[BCDGRS][ ]/s/.*[ ]\([^ ]*\)/\1 DATA/;s/^.*[ ]__nm__\([^ ]*\)[ ][^ ]*/\1 DATA/;/^I[ ]/d;/^[AITW][ ]/s/.* //'\'' | sort | uniq > $export_symbols' exclude_expsyms_CXX='[_]+GLOBAL_OFFSET_TABLE_|[_]+GLOBAL__[FID]_.*|[_]+head_[A-Za-z0-9_]+_dll|[A-Za-z0-9_]+_dll_iname' ;; esac ;; linux* | k*bsd*-gnu | gnu*) link_all_deplibs_CXX=no ;; *) export_symbols_cmds_CXX='$NM $libobjs $convenience | $global_symbol_pipe | $SED '\''s/.* //'\'' | sort | uniq > $export_symbols' ;; esac { $as_echo "$as_me:${as_lineno-$LINENO}: result: $ld_shlibs_CXX" >&5 $as_echo "$ld_shlibs_CXX" >&6; } test "$ld_shlibs_CXX" = no && can_build_shared=no with_gnu_ld_CXX=$with_gnu_ld # # Do we need to explicitly link libc? # case "x$archive_cmds_need_lc_CXX" in x|xyes) # Assume -lc should be added archive_cmds_need_lc_CXX=yes if test "$enable_shared" = yes && test "$GCC" = yes; then case $archive_cmds_CXX in *'~'*) # FIXME: we may have to deal with multi-command sequences. ;; '$CC '*) # Test whether the compiler implicitly links with -lc since on some # systems, -lgcc has to come before -lc. If gcc already passes -lc # to ld, don't add -lc before -lgcc. 
{ $as_echo "$as_me:${as_lineno-$LINENO}: checking whether -lc should be explicitly linked in" >&5 $as_echo_n "checking whether -lc should be explicitly linked in... " >&6; } if ${lt_cv_archive_cmds_need_lc_CXX+:} false; then : $as_echo_n "(cached) " >&6 else $RM conftest* echo "$lt_simple_compile_test_code" > conftest.$ac_ext if { { eval echo "\"\$as_me\":${as_lineno-$LINENO}: \"$ac_compile\""; } >&5 (eval $ac_compile) 2>&5 ac_status=$? $as_echo "$as_me:${as_lineno-$LINENO}: \$? = $ac_status" >&5 test $ac_status = 0; } 2>conftest.err; then soname=conftest lib=conftest libobjs=conftest.$ac_objext deplibs= wl=$lt_prog_compiler_wl_CXX pic_flag=$lt_prog_compiler_pic_CXX compiler_flags=-v linker_flags=-v verstring= output_objdir=. libname=conftest lt_save_allow_undefined_flag=$allow_undefined_flag_CXX allow_undefined_flag_CXX= if { { eval echo "\"\$as_me\":${as_lineno-$LINENO}: \"$archive_cmds_CXX 2\>\&1 \| $GREP \" -lc \" \>/dev/null 2\>\&1\""; } >&5 (eval $archive_cmds_CXX 2\>\&1 \| $GREP \" -lc \" \>/dev/null 2\>\&1) 2>&5 ac_status=$? $as_echo "$as_me:${as_lineno-$LINENO}: \$? = $ac_status" >&5 test $ac_status = 0; } then lt_cv_archive_cmds_need_lc_CXX=no else lt_cv_archive_cmds_need_lc_CXX=yes fi allow_undefined_flag_CXX=$lt_save_allow_undefined_flag else cat conftest.err 1>&5 fi $RM conftest* fi { $as_echo "$as_me:${as_lineno-$LINENO}: result: $lt_cv_archive_cmds_need_lc_CXX" >&5 $as_echo "$lt_cv_archive_cmds_need_lc_CXX" >&6; } archive_cmds_need_lc_CXX=$lt_cv_archive_cmds_need_lc_CXX ;; esac fi ;; esac { $as_echo "$as_me:${as_lineno-$LINENO}: checking dynamic linker characteristics" >&5 $as_echo_n "checking dynamic linker characteristics... 
" >&6; } library_names_spec= libname_spec='lib$name' soname_spec= shrext_cmds=".so" postinstall_cmds= postuninstall_cmds= finish_cmds= finish_eval= shlibpath_var= shlibpath_overrides_runpath=unknown version_type=none dynamic_linker="$host_os ld.so" sys_lib_dlsearch_path_spec="/lib /usr/lib" need_lib_prefix=unknown hardcode_into_libs=no # when you set need_version to no, make sure it does not cause -set_version # flags to be left without arguments need_version=unknown case $host_os in aix3*) version_type=linux # correct to gnu/linux during the next big refactor library_names_spec='${libname}${release}${shared_ext}$versuffix $libname.a' shlibpath_var=LIBPATH # AIX 3 has no versioning support, so we append a major version to the name. soname_spec='${libname}${release}${shared_ext}$major' ;; aix[4-9]*) version_type=linux # correct to gnu/linux during the next big refactor need_lib_prefix=no need_version=no hardcode_into_libs=yes if test "$host_cpu" = ia64; then # AIX 5 supports IA64 library_names_spec='${libname}${release}${shared_ext}$major ${libname}${release}${shared_ext}$versuffix $libname${shared_ext}' shlibpath_var=LD_LIBRARY_PATH else # With GCC up to 2.95.x, collect2 would create an import file # for dependence libraries. The import file would start with # the line `#! .'. This would cause the generated library to # depend on `.', always an invalid library. This was fixed in # development snapshots of GCC prior to 3.0. case $host_os in aix4 | aix4.[01] | aix4.[01].*) if { echo '#if __GNUC__ > 2 || (__GNUC__ == 2 && __GNUC_MINOR__ >= 97)' echo ' yes ' echo '#endif'; } | ${CC} -E - | $GREP yes > /dev/null; then : else can_build_shared=no fi ;; esac # AIX (on Power*) has no versioning support, so currently we can not hardcode correct # soname into executable. Probably we can add versioning support to # collect2, so additional links can be useful in future. 
if test "$aix_use_runtimelinking" = yes; then # If using run time linking (on AIX 4.2 or later) use lib.so # instead of lib.a to let people know that these are not # typical AIX shared libraries. library_names_spec='${libname}${release}${shared_ext}$versuffix ${libname}${release}${shared_ext}$major $libname${shared_ext}' else # We preserve .a as extension for shared libraries through AIX4.2 # and later when we are not doing run time linking. library_names_spec='${libname}${release}.a $libname.a' soname_spec='${libname}${release}${shared_ext}$major' fi shlibpath_var=LIBPATH fi ;; amigaos*) case $host_cpu in powerpc) # Since July 2007 AmigaOS4 officially supports .so libraries. # When compiling the executable, add -use-dynld -Lsobjs: to the compileline. library_names_spec='${libname}${release}${shared_ext}$versuffix ${libname}${release}${shared_ext}$major $libname${shared_ext}' ;; m68k) library_names_spec='$libname.ixlibrary $libname.a' # Create ${libname}_ixlibrary.a entries in /sys/libs. 
finish_eval='for lib in `ls $libdir/*.ixlibrary 2>/dev/null`; do libname=`func_echo_all "$lib" | $SED '\''s%^.*/\([^/]*\)\.ixlibrary$%\1%'\''`; test $RM /sys/libs/${libname}_ixlibrary.a; $show "cd /sys/libs && $LN_S $lib ${libname}_ixlibrary.a"; cd /sys/libs && $LN_S $lib ${libname}_ixlibrary.a || exit 1; done' ;; esac ;; beos*) library_names_spec='${libname}${shared_ext}' dynamic_linker="$host_os ld.so" shlibpath_var=LIBRARY_PATH ;; bsdi[45]*) version_type=linux # correct to gnu/linux during the next big refactor need_version=no library_names_spec='${libname}${release}${shared_ext}$versuffix ${libname}${release}${shared_ext}$major $libname${shared_ext}' soname_spec='${libname}${release}${shared_ext}$major' finish_cmds='PATH="\$PATH:/sbin" ldconfig $libdir' shlibpath_var=LD_LIBRARY_PATH sys_lib_search_path_spec="/shlib /usr/lib /usr/X11/lib /usr/contrib/lib /lib /usr/local/lib" sys_lib_dlsearch_path_spec="/shlib /usr/lib /usr/local/lib" # the default ld.so.conf also contains /usr/contrib/lib and # /usr/X11R6/lib (/usr/X11 is a link to /usr/X11R6), but let us allow # libtool to hard-code these into programs ;; cygwin* | mingw* | pw32* | cegcc*) version_type=windows shrext_cmds=".dll" need_version=no need_lib_prefix=no case $GCC,$cc_basename in yes,*) # gcc library_names_spec='$libname.dll.a' # DLL is installed to $(libdir)/../bin by postinstall_cmds postinstall_cmds='base_file=`basename \${file}`~ dlpath=`$SHELL 2>&1 -c '\''. $dir/'\''\${base_file}'\''i; echo \$dlname'\''`~ dldir=$destdir/`dirname \$dlpath`~ test -d \$dldir || mkdir -p \$dldir~ $install_prog $dir/$dlname \$dldir/$dlname~ chmod a+x \$dldir/$dlname~ if test -n '\''$stripme'\'' && test -n '\''$striplib'\''; then eval '\''$striplib \$dldir/$dlname'\'' || exit \$?; fi' postuninstall_cmds='dldll=`$SHELL 2>&1 -c '\''. 
$file; echo \$dlname'\''`~ dlpath=$dir/\$dldll~ $RM \$dlpath' shlibpath_overrides_runpath=yes case $host_os in cygwin*) # Cygwin DLLs use 'cyg' prefix rather than 'lib' soname_spec='`echo ${libname} | sed -e 's/^lib/cyg/'``echo ${release} | $SED -e 's/[.]/-/g'`${versuffix}${shared_ext}' ;; mingw* | cegcc*) # MinGW DLLs use traditional 'lib' prefix soname_spec='${libname}`echo ${release} | $SED -e 's/[.]/-/g'`${versuffix}${shared_ext}' ;; pw32*) # pw32 DLLs use 'pw' prefix rather than 'lib' library_names_spec='`echo ${libname} | sed -e 's/^lib/pw/'``echo ${release} | $SED -e 's/[.]/-/g'`${versuffix}${shared_ext}' ;; esac dynamic_linker='Win32 ld.exe' ;; *,cl*) # Native MSVC libname_spec='$name' soname_spec='${libname}`echo ${release} | $SED -e 's/[.]/-/g'`${versuffix}${shared_ext}' library_names_spec='${libname}.dll.lib' case $build_os in mingw*) sys_lib_search_path_spec= lt_save_ifs=$IFS IFS=';' for lt_path in $LIB do IFS=$lt_save_ifs # Let DOS variable expansion print the short 8.3 style file name. lt_path=`cd "$lt_path" 2>/dev/null && cmd //C "for %i in (".") do @echo %~si"` sys_lib_search_path_spec="$sys_lib_search_path_spec $lt_path" done IFS=$lt_save_ifs # Convert to MSYS style. sys_lib_search_path_spec=`$ECHO "$sys_lib_search_path_spec" | sed -e 's|\\\\|/|g' -e 's| \\([a-zA-Z]\\):| /\\1|g' -e 's|^ ||'` ;; cygwin*) # Convert to unix form, then to dos form, then back to unix form # but this time dos style (no spaces!) so that the unix form looks # like /cygdrive/c/PROGRA~1:/cygdr... sys_lib_search_path_spec=`cygpath --path --unix "$LIB"` sys_lib_search_path_spec=`cygpath --path --dos "$sys_lib_search_path_spec" 2>/dev/null` sys_lib_search_path_spec=`cygpath --path --unix "$sys_lib_search_path_spec" | $SED -e "s/$PATH_SEPARATOR/ /g"` ;; *) sys_lib_search_path_spec="$LIB" if $ECHO "$sys_lib_search_path_spec" | $GREP ';[c-zC-Z]:/' >/dev/null; then # It is most probably a Windows format PATH. 
sys_lib_search_path_spec=`$ECHO "$sys_lib_search_path_spec" | $SED -e 's/;/ /g'` else sys_lib_search_path_spec=`$ECHO "$sys_lib_search_path_spec" | $SED -e "s/$PATH_SEPARATOR/ /g"` fi # FIXME: find the short name or the path components, as spaces are # common. (e.g. "Program Files" -> "PROGRA~1") ;; esac # DLL is installed to $(libdir)/../bin by postinstall_cmds postinstall_cmds='base_file=`basename \${file}`~ dlpath=`$SHELL 2>&1 -c '\''. $dir/'\''\${base_file}'\''i; echo \$dlname'\''`~ dldir=$destdir/`dirname \$dlpath`~ test -d \$dldir || mkdir -p \$dldir~ $install_prog $dir/$dlname \$dldir/$dlname' postuninstall_cmds='dldll=`$SHELL 2>&1 -c '\''. $file; echo \$dlname'\''`~ dlpath=$dir/\$dldll~ $RM \$dlpath' shlibpath_overrides_runpath=yes dynamic_linker='Win32 link.exe' ;; *) # Assume MSVC wrapper library_names_spec='${libname}`echo ${release} | $SED -e 's/[.]/-/g'`${versuffix}${shared_ext} $libname.lib' dynamic_linker='Win32 ld.exe' ;; esac # FIXME: first we should search . and the directory the executable is in shlibpath_var=PATH ;; darwin* | rhapsody*) dynamic_linker="$host_os dyld" version_type=darwin need_lib_prefix=no need_version=no library_names_spec='${libname}${release}${major}$shared_ext ${libname}$shared_ext' soname_spec='${libname}${release}${major}$shared_ext' shlibpath_overrides_runpath=yes shlibpath_var=DYLD_LIBRARY_PATH shrext_cmds='`test .$module = .yes && echo .so || echo .dylib`' sys_lib_dlsearch_path_spec='/usr/local/lib /lib /usr/lib' ;; dgux*) version_type=linux # correct to gnu/linux during the next big refactor need_lib_prefix=no need_version=no library_names_spec='${libname}${release}${shared_ext}$versuffix ${libname}${release}${shared_ext}$major $libname$shared_ext' soname_spec='${libname}${release}${shared_ext}$major' shlibpath_var=LD_LIBRARY_PATH ;; freebsd* | dragonfly*) # DragonFly does not have aout. When/if they implement a new # versioning mechanism, adjust this. 
if test -x /usr/bin/objformat; then objformat=`/usr/bin/objformat` else case $host_os in freebsd[23].*) objformat=aout ;; *) objformat=elf ;; esac fi version_type=freebsd-$objformat case $version_type in freebsd-elf*) library_names_spec='${libname}${release}${shared_ext}$versuffix ${libname}${release}${shared_ext} $libname${shared_ext}' need_version=no need_lib_prefix=no ;; freebsd-*) library_names_spec='${libname}${release}${shared_ext}$versuffix $libname${shared_ext}$versuffix' need_version=yes ;; esac shlibpath_var=LD_LIBRARY_PATH case $host_os in freebsd2.*) shlibpath_overrides_runpath=yes ;; freebsd3.[01]* | freebsdelf3.[01]*) shlibpath_overrides_runpath=yes hardcode_into_libs=yes ;; freebsd3.[2-9]* | freebsdelf3.[2-9]* | \ freebsd4.[0-5] | freebsdelf4.[0-5] | freebsd4.1.1 | freebsdelf4.1.1) shlibpath_overrides_runpath=no hardcode_into_libs=yes ;; *) # from 4.6 on, and DragonFly shlibpath_overrides_runpath=yes hardcode_into_libs=yes ;; esac ;; haiku*) version_type=linux # correct to gnu/linux during the next big refactor need_lib_prefix=no need_version=no dynamic_linker="$host_os runtime_loader" library_names_spec='${libname}${release}${shared_ext}$versuffix ${libname}${release}${shared_ext}${major} ${libname}${shared_ext}' soname_spec='${libname}${release}${shared_ext}$major' shlibpath_var=LIBRARY_PATH shlibpath_overrides_runpath=yes sys_lib_dlsearch_path_spec='/boot/home/config/lib /boot/common/lib /boot/system/lib' hardcode_into_libs=yes ;; hpux9* | hpux10* | hpux11*) # Give a soname corresponding to the major version so that dld.sl refuses to # link against other versions. version_type=sunos need_lib_prefix=no need_version=no case $host_cpu in ia64*) shrext_cmds='.so' hardcode_into_libs=yes dynamic_linker="$host_os dld.so" shlibpath_var=LD_LIBRARY_PATH shlibpath_overrides_runpath=yes # Unless +noenvvar is specified. 
library_names_spec='${libname}${release}${shared_ext}$versuffix ${libname}${release}${shared_ext}$major $libname${shared_ext}' soname_spec='${libname}${release}${shared_ext}$major' if test "X$HPUX_IA64_MODE" = X32; then sys_lib_search_path_spec="/usr/lib/hpux32 /usr/local/lib/hpux32 /usr/local/lib" else sys_lib_search_path_spec="/usr/lib/hpux64 /usr/local/lib/hpux64" fi sys_lib_dlsearch_path_spec=$sys_lib_search_path_spec ;; hppa*64*) shrext_cmds='.sl' hardcode_into_libs=yes dynamic_linker="$host_os dld.sl" shlibpath_var=LD_LIBRARY_PATH # How should we handle SHLIB_PATH shlibpath_overrides_runpath=yes # Unless +noenvvar is specified. library_names_spec='${libname}${release}${shared_ext}$versuffix ${libname}${release}${shared_ext}$major $libname${shared_ext}' soname_spec='${libname}${release}${shared_ext}$major' sys_lib_search_path_spec="/usr/lib/pa20_64 /usr/ccs/lib/pa20_64" sys_lib_dlsearch_path_spec=$sys_lib_search_path_spec ;; *) shrext_cmds='.sl' dynamic_linker="$host_os dld.sl" shlibpath_var=SHLIB_PATH shlibpath_overrides_runpath=no # +s is required to enable SHLIB_PATH library_names_spec='${libname}${release}${shared_ext}$versuffix ${libname}${release}${shared_ext}$major $libname${shared_ext}' soname_spec='${libname}${release}${shared_ext}$major' ;; esac # HP-UX runs *really* slowly unless shared libraries are mode 555, ... 
  postinstall_cmds='chmod 555 $lib'
  # or fails outright, so override atomically:
  install_override_mode=555
  ;;

interix[3-9]*)
  version_type=linux # correct to gnu/linux during the next big refactor
  need_lib_prefix=no
  need_version=no
  library_names_spec='${libname}${release}${shared_ext}$versuffix ${libname}${release}${shared_ext}$major ${libname}${shared_ext}'
  soname_spec='${libname}${release}${shared_ext}$major'
  dynamic_linker='Interix 3.x ld.so.1 (PE, like ELF)'
  shlibpath_var=LD_LIBRARY_PATH
  shlibpath_overrides_runpath=no
  hardcode_into_libs=yes
  ;;

irix5* | irix6* | nonstopux*)
  case $host_os in
    nonstopux*) version_type=nonstopux ;;
    *)
	if test "$lt_cv_prog_gnu_ld" = yes; then
		version_type=linux # correct to gnu/linux during the next big refactor
	else
		version_type=irix
	fi ;;
  esac
  need_lib_prefix=no
  need_version=no
  soname_spec='${libname}${release}${shared_ext}$major'
  library_names_spec='${libname}${release}${shared_ext}$versuffix ${libname}${release}${shared_ext}$major ${libname}${release}${shared_ext} $libname${shared_ext}'
  case $host_os in
  irix5* | nonstopux*)
    libsuff= shlibsuff=
    ;;
  *)
    case $LD in # libtool.m4 will add one of these switches to LD
    *-32|*"-32 "|*-melf32bsmip|*"-melf32bsmip ")
      libsuff= shlibsuff= libmagic=32-bit;;
    *-n32|*"-n32 "|*-melf32bmipn32|*"-melf32bmipn32 ")
      libsuff=32 shlibsuff=N32 libmagic=N32;;
    *-64|*"-64 "|*-melf64bmip|*"-melf64bmip ")
      libsuff=64 shlibsuff=64 libmagic=64-bit;;
    *) libsuff= shlibsuff= libmagic=never-match;;
    esac
    ;;
  esac
  shlibpath_var=LD_LIBRARY${shlibsuff}_PATH
  shlibpath_overrides_runpath=no
  sys_lib_search_path_spec="/usr/lib${libsuff} /lib${libsuff} /usr/local/lib${libsuff}"
  sys_lib_dlsearch_path_spec="/usr/lib${libsuff} /lib${libsuff}"
  hardcode_into_libs=yes
  ;;

# No shared lib support for Linux oldld, aout, or coff.
linux*oldld* | linux*aout* | linux*coff*)
  dynamic_linker=no
  ;;

# This must be glibc/ELF.
linux* | k*bsd*-gnu | kopensolaris*-gnu | gnu*) version_type=linux # correct to gnu/linux during the next big refactor need_lib_prefix=no need_version=no library_names_spec='${libname}${release}${shared_ext}$versuffix ${libname}${release}${shared_ext}$major $libname${shared_ext}' soname_spec='${libname}${release}${shared_ext}$major' finish_cmds='PATH="\$PATH:/sbin" ldconfig -n $libdir' shlibpath_var=LD_LIBRARY_PATH shlibpath_overrides_runpath=no # Some binutils ld are patched to set DT_RUNPATH if ${lt_cv_shlibpath_overrides_runpath+:} false; then : $as_echo_n "(cached) " >&6 else lt_cv_shlibpath_overrides_runpath=no save_LDFLAGS=$LDFLAGS save_libdir=$libdir eval "libdir=/foo; wl=\"$lt_prog_compiler_wl_CXX\"; \ LDFLAGS=\"\$LDFLAGS $hardcode_libdir_flag_spec_CXX\"" cat confdefs.h - <<_ACEOF >conftest.$ac_ext /* end confdefs.h. */ int main () { ; return 0; } _ACEOF if ac_fn_cxx_try_link "$LINENO"; then : if ($OBJDUMP -p conftest$ac_exeext) 2>/dev/null | grep "RUNPATH.*$libdir" >/dev/null; then : lt_cv_shlibpath_overrides_runpath=yes fi fi rm -f core conftest.err conftest.$ac_objext \ conftest$ac_exeext conftest.$ac_ext LDFLAGS=$save_LDFLAGS libdir=$save_libdir fi shlibpath_overrides_runpath=$lt_cv_shlibpath_overrides_runpath # This implies no fast_install, which is unacceptable. # Some rework will be needed to allow for fast_install # before this can be enabled. hardcode_into_libs=yes # Append ld.so.conf contents to the search path if test -f /etc/ld.so.conf; then lt_ld_extra=`awk '/^include / { system(sprintf("cd /etc; cat %s 2>/dev/null", \$2)); skip = 1; } { if (!skip) print \$0; skip = 0; }' < /etc/ld.so.conf | $SED -e 's/#.*//;/^[ ]*hwcap[ ]/d;s/[:, ]/ /g;s/=[^=]*$//;s/=[^= ]* / /g;s/"//g;/^$/d' | tr '\n' ' '` sys_lib_dlsearch_path_spec="/lib /usr/lib $lt_ld_extra" fi # We used to test for /lib/ld.so.1 and disable shared libraries on # powerpc, because MkLinux only supported shared libraries with the # GNU dynamic linker. 
Since this was broken with cross compilers, # most powerpc-linux boxes support dynamic linking these days and # people can always --disable-shared, the test was removed, and we # assume the GNU/Linux dynamic linker is in use. dynamic_linker='GNU/Linux ld.so' ;; netbsdelf*-gnu) version_type=linux need_lib_prefix=no need_version=no library_names_spec='${libname}${release}${shared_ext}$versuffix ${libname}${release}${shared_ext}$major ${libname}${shared_ext}' soname_spec='${libname}${release}${shared_ext}$major' shlibpath_var=LD_LIBRARY_PATH shlibpath_overrides_runpath=no hardcode_into_libs=yes dynamic_linker='NetBSD ld.elf_so' ;; netbsd*) version_type=sunos need_lib_prefix=no need_version=no if echo __ELF__ | $CC -E - | $GREP __ELF__ >/dev/null; then library_names_spec='${libname}${release}${shared_ext}$versuffix ${libname}${shared_ext}$versuffix' finish_cmds='PATH="\$PATH:/sbin" ldconfig -m $libdir' dynamic_linker='NetBSD (a.out) ld.so' else library_names_spec='${libname}${release}${shared_ext}$versuffix ${libname}${release}${shared_ext}$major ${libname}${shared_ext}' soname_spec='${libname}${release}${shared_ext}$major' dynamic_linker='NetBSD ld.elf_so' fi shlibpath_var=LD_LIBRARY_PATH shlibpath_overrides_runpath=yes hardcode_into_libs=yes ;; newsos6) version_type=linux # correct to gnu/linux during the next big refactor library_names_spec='${libname}${release}${shared_ext}$versuffix ${libname}${release}${shared_ext}$major $libname${shared_ext}' shlibpath_var=LD_LIBRARY_PATH shlibpath_overrides_runpath=yes ;; *nto* | *qnx*) version_type=qnx need_lib_prefix=no need_version=no library_names_spec='${libname}${release}${shared_ext}$versuffix ${libname}${release}${shared_ext}$major $libname${shared_ext}' soname_spec='${libname}${release}${shared_ext}$major' shlibpath_var=LD_LIBRARY_PATH shlibpath_overrides_runpath=no hardcode_into_libs=yes dynamic_linker='ldqnx.so' ;; openbsd*) version_type=sunos sys_lib_dlsearch_path_spec="/usr/lib" need_lib_prefix=no # Some older 
versions of OpenBSD (3.3 at least) *do* need versioned libs. case $host_os in openbsd3.3 | openbsd3.3.*) need_version=yes ;; *) need_version=no ;; esac library_names_spec='${libname}${release}${shared_ext}$versuffix ${libname}${shared_ext}$versuffix' finish_cmds='PATH="\$PATH:/sbin" ldconfig -m $libdir' shlibpath_var=LD_LIBRARY_PATH if test -z "`echo __ELF__ | $CC -E - | $GREP __ELF__`" || test "$host_os-$host_cpu" = "openbsd2.8-powerpc"; then case $host_os in openbsd2.[89] | openbsd2.[89].*) shlibpath_overrides_runpath=no ;; *) shlibpath_overrides_runpath=yes ;; esac else shlibpath_overrides_runpath=yes fi ;; os2*) libname_spec='$name' shrext_cmds=".dll" need_lib_prefix=no library_names_spec='$libname${shared_ext} $libname.a' dynamic_linker='OS/2 ld.exe' shlibpath_var=LIBPATH ;; osf3* | osf4* | osf5*) version_type=osf need_lib_prefix=no need_version=no soname_spec='${libname}${release}${shared_ext}$major' library_names_spec='${libname}${release}${shared_ext}$versuffix ${libname}${release}${shared_ext}$major $libname${shared_ext}' shlibpath_var=LD_LIBRARY_PATH sys_lib_search_path_spec="/usr/shlib /usr/ccs/lib /usr/lib/cmplrs/cc /usr/lib /usr/local/lib /var/shlib" sys_lib_dlsearch_path_spec="$sys_lib_search_path_spec" ;; rdos*) dynamic_linker=no ;; solaris*) version_type=linux # correct to gnu/linux during the next big refactor need_lib_prefix=no need_version=no library_names_spec='${libname}${release}${shared_ext}$versuffix ${libname}${release}${shared_ext}$major $libname${shared_ext}' soname_spec='${libname}${release}${shared_ext}$major' shlibpath_var=LD_LIBRARY_PATH shlibpath_overrides_runpath=yes hardcode_into_libs=yes # ldd complains unless libraries are executable postinstall_cmds='chmod +x $lib' ;; sunos4*) version_type=sunos library_names_spec='${libname}${release}${shared_ext}$versuffix ${libname}${shared_ext}$versuffix' finish_cmds='PATH="\$PATH:/usr/etc" ldconfig $libdir' shlibpath_var=LD_LIBRARY_PATH shlibpath_overrides_runpath=yes if test "$with_gnu_ld" 
= yes; then need_lib_prefix=no fi need_version=yes ;; sysv4 | sysv4.3*) version_type=linux # correct to gnu/linux during the next big refactor library_names_spec='${libname}${release}${shared_ext}$versuffix ${libname}${release}${shared_ext}$major $libname${shared_ext}' soname_spec='${libname}${release}${shared_ext}$major' shlibpath_var=LD_LIBRARY_PATH case $host_vendor in sni) shlibpath_overrides_runpath=no need_lib_prefix=no runpath_var=LD_RUN_PATH ;; siemens) need_lib_prefix=no ;; motorola) need_lib_prefix=no need_version=no shlibpath_overrides_runpath=no sys_lib_search_path_spec='/lib /usr/lib /usr/ccs/lib' ;; esac ;; sysv4*MP*) if test -d /usr/nec ;then version_type=linux # correct to gnu/linux during the next big refactor library_names_spec='$libname${shared_ext}.$versuffix $libname${shared_ext}.$major $libname${shared_ext}' soname_spec='$libname${shared_ext}.$major' shlibpath_var=LD_LIBRARY_PATH fi ;; sysv5* | sco3.2v5* | sco5v6* | unixware* | OpenUNIX* | sysv4*uw2*) version_type=freebsd-elf need_lib_prefix=no need_version=no library_names_spec='${libname}${release}${shared_ext}$versuffix ${libname}${release}${shared_ext} $libname${shared_ext}' soname_spec='${libname}${release}${shared_ext}$major' shlibpath_var=LD_LIBRARY_PATH shlibpath_overrides_runpath=yes hardcode_into_libs=yes if test "$with_gnu_ld" = yes; then sys_lib_search_path_spec='/usr/local/lib /usr/gnu/lib /usr/ccs/lib /usr/lib /lib' else sys_lib_search_path_spec='/usr/ccs/lib /usr/lib' case $host_os in sco3.2v5*) sys_lib_search_path_spec="$sys_lib_search_path_spec /lib" ;; esac fi sys_lib_dlsearch_path_spec='/usr/lib' ;; tpf*) # TPF is a cross-target only. Preferred cross-host = GNU/Linux. 
version_type=linux # correct to gnu/linux during the next big refactor need_lib_prefix=no need_version=no library_names_spec='${libname}${release}${shared_ext}$versuffix ${libname}${release}${shared_ext}$major $libname${shared_ext}' shlibpath_var=LD_LIBRARY_PATH shlibpath_overrides_runpath=no hardcode_into_libs=yes ;; uts4*) version_type=linux # correct to gnu/linux during the next big refactor library_names_spec='${libname}${release}${shared_ext}$versuffix ${libname}${release}${shared_ext}$major $libname${shared_ext}' soname_spec='${libname}${release}${shared_ext}$major' shlibpath_var=LD_LIBRARY_PATH ;; *) dynamic_linker=no ;; esac { $as_echo "$as_me:${as_lineno-$LINENO}: result: $dynamic_linker" >&5 $as_echo "$dynamic_linker" >&6; } test "$dynamic_linker" = no && can_build_shared=no variables_saved_for_relink="PATH $shlibpath_var $runpath_var" if test "$GCC" = yes; then variables_saved_for_relink="$variables_saved_for_relink GCC_EXEC_PREFIX COMPILER_PATH LIBRARY_PATH" fi if test "${lt_cv_sys_lib_search_path_spec+set}" = set; then sys_lib_search_path_spec="$lt_cv_sys_lib_search_path_spec" fi if test "${lt_cv_sys_lib_dlsearch_path_spec+set}" = set; then sys_lib_dlsearch_path_spec="$lt_cv_sys_lib_dlsearch_path_spec" fi { $as_echo "$as_me:${as_lineno-$LINENO}: checking how to hardcode library paths into programs" >&5 $as_echo_n "checking how to hardcode library paths into programs... " >&6; } hardcode_action_CXX= if test -n "$hardcode_libdir_flag_spec_CXX" || test -n "$runpath_var_CXX" || test "X$hardcode_automatic_CXX" = "Xyes" ; then # We can hardcode non-existent directories. if test "$hardcode_direct_CXX" != no && # If the only mechanism to avoid hardcoding is shlibpath_var, we # have to relink, otherwise we might link with an installed library # when we should be linking with a yet-to-be-installed one ## test "$_LT_TAGVAR(hardcode_shlibpath_var, CXX)" != no && test "$hardcode_minus_L_CXX" != no; then # Linking always hardcodes the temporary library directory. 
hardcode_action_CXX=relink else # We can link without hardcoding, and we can hardcode nonexisting dirs. hardcode_action_CXX=immediate fi else # We cannot hardcode anything, or else we can only hardcode existing # directories. hardcode_action_CXX=unsupported fi { $as_echo "$as_me:${as_lineno-$LINENO}: result: $hardcode_action_CXX" >&5 $as_echo "$hardcode_action_CXX" >&6; } if test "$hardcode_action_CXX" = relink || test "$inherit_rpath_CXX" = yes; then # Fast installation is not supported enable_fast_install=no elif test "$shlibpath_overrides_runpath" = yes || test "$enable_shared" = no; then # Fast installation is not necessary enable_fast_install=needless fi fi # test -n "$compiler" CC=$lt_save_CC CFLAGS=$lt_save_CFLAGS LDCXX=$LD LD=$lt_save_LD GCC=$lt_save_GCC with_gnu_ld=$lt_save_with_gnu_ld lt_cv_path_LDCXX=$lt_cv_path_LD lt_cv_path_LD=$lt_save_path_LD lt_cv_prog_gnu_ldcxx=$lt_cv_prog_gnu_ld lt_cv_prog_gnu_ld=$lt_save_with_gnu_ld fi # test "$_lt_caught_CXX_error" != yes ac_ext=c ac_cpp='$CPP $CPPFLAGS' ac_compile='$CC -c $CFLAGS $CPPFLAGS conftest.$ac_ext >&5' ac_link='$CC -o conftest$ac_exeext $CFLAGS $CPPFLAGS $LDFLAGS conftest.$ac_ext $LIBS >&5' ac_compiler_gnu=$ac_cv_c_compiler_gnu ac_config_commands="$ac_config_commands libtool" # Only expand once: if test "x$ac_cv_env_PKG_CONFIG_set" != "xset"; then if test -n "$ac_tool_prefix"; then # Extract the first word of "${ac_tool_prefix}pkg-config", so it can be a program name with args. set dummy ${ac_tool_prefix}pkg-config; ac_word=$2 { $as_echo "$as_me:${as_lineno-$LINENO}: checking for $ac_word" >&5 $as_echo_n "checking for $ac_word... " >&6; } if ${ac_cv_path_PKG_CONFIG+:} false; then : $as_echo_n "(cached) " >&6 else case $PKG_CONFIG in [\\/]* | ?:[\\/]*) ac_cv_path_PKG_CONFIG="$PKG_CONFIG" # Let the user override the test with a path. ;; *) as_save_IFS=$IFS; IFS=$PATH_SEPARATOR for as_dir in $PATH do IFS=$as_save_IFS test -z "$as_dir" && as_dir=. 
for ac_exec_ext in '' $ac_executable_extensions; do if as_fn_executable_p "$as_dir/$ac_word$ac_exec_ext"; then ac_cv_path_PKG_CONFIG="$as_dir/$ac_word$ac_exec_ext" $as_echo "$as_me:${as_lineno-$LINENO}: found $as_dir/$ac_word$ac_exec_ext" >&5 break 2 fi done done IFS=$as_save_IFS ;; esac fi PKG_CONFIG=$ac_cv_path_PKG_CONFIG if test -n "$PKG_CONFIG"; then { $as_echo "$as_me:${as_lineno-$LINENO}: result: $PKG_CONFIG" >&5 $as_echo "$PKG_CONFIG" >&6; } else { $as_echo "$as_me:${as_lineno-$LINENO}: result: no" >&5 $as_echo "no" >&6; } fi fi if test -z "$ac_cv_path_PKG_CONFIG"; then ac_pt_PKG_CONFIG=$PKG_CONFIG # Extract the first word of "pkg-config", so it can be a program name with args. set dummy pkg-config; ac_word=$2 { $as_echo "$as_me:${as_lineno-$LINENO}: checking for $ac_word" >&5 $as_echo_n "checking for $ac_word... " >&6; } if ${ac_cv_path_ac_pt_PKG_CONFIG+:} false; then : $as_echo_n "(cached) " >&6 else case $ac_pt_PKG_CONFIG in [\\/]* | ?:[\\/]*) ac_cv_path_ac_pt_PKG_CONFIG="$ac_pt_PKG_CONFIG" # Let the user override the test with a path. ;; *) as_save_IFS=$IFS; IFS=$PATH_SEPARATOR for as_dir in $PATH do IFS=$as_save_IFS test -z "$as_dir" && as_dir=. 
for ac_exec_ext in '' $ac_executable_extensions; do if as_fn_executable_p "$as_dir/$ac_word$ac_exec_ext"; then ac_cv_path_ac_pt_PKG_CONFIG="$as_dir/$ac_word$ac_exec_ext" $as_echo "$as_me:${as_lineno-$LINENO}: found $as_dir/$ac_word$ac_exec_ext" >&5 break 2 fi done done IFS=$as_save_IFS ;; esac fi ac_pt_PKG_CONFIG=$ac_cv_path_ac_pt_PKG_CONFIG if test -n "$ac_pt_PKG_CONFIG"; then { $as_echo "$as_me:${as_lineno-$LINENO}: result: $ac_pt_PKG_CONFIG" >&5 $as_echo "$ac_pt_PKG_CONFIG" >&6; } else { $as_echo "$as_me:${as_lineno-$LINENO}: result: no" >&5 $as_echo "no" >&6; } fi if test "x$ac_pt_PKG_CONFIG" = x; then PKG_CONFIG="" else case $cross_compiling:$ac_tool_warned in yes:) { $as_echo "$as_me:${as_lineno-$LINENO}: WARNING: using cross tools not prefixed with host triplet" >&5 $as_echo "$as_me: WARNING: using cross tools not prefixed with host triplet" >&2;} ac_tool_warned=yes ;; esac PKG_CONFIG=$ac_pt_PKG_CONFIG fi else PKG_CONFIG="$ac_cv_path_PKG_CONFIG" fi fi if test -n "$PKG_CONFIG"; then _pkg_min_version=0.9.0 { $as_echo "$as_me:${as_lineno-$LINENO}: checking pkg-config is at least version $_pkg_min_version" >&5 $as_echo_n "checking pkg-config is at least version $_pkg_min_version... " >&6; } if $PKG_CONFIG --atleast-pkgconfig-version $_pkg_min_version; then { $as_echo "$as_me:${as_lineno-$LINENO}: result: yes" >&5 $as_echo "yes" >&6; } else { $as_echo "$as_me:${as_lineno-$LINENO}: result: no" >&5 $as_echo "no" >&6; } PKG_CONFIG="" fi fi if test -n "$ac_ct_CXX"; then WITH_CXX_TRUE= WITH_CXX_FALSE='#' else WITH_CXX_TRUE='#' WITH_CXX_FALSE= fi if test "$with_gnu_ld" = "yes"; then WITH_GNU_LD_TRUE= WITH_GNU_LD_FALSE='#' else WITH_GNU_LD_TRUE='#' WITH_GNU_LD_FALSE= fi # Extract the first word of "sleep", so it can be a program name with args. set dummy sleep; ac_word=$2 { $as_echo "$as_me:${as_lineno-$LINENO}: checking for $ac_word" >&5 $as_echo_n "checking for $ac_word... 
" >&6; } if ${ac_cv_path_SLEEP_CMD+:} false; then : $as_echo_n "(cached) " >&6 else case $SLEEP_CMD in [\\/]* | ?:[\\/]*) ac_cv_path_SLEEP_CMD="$SLEEP_CMD" # Let the user override the test with a path. ;; *) as_save_IFS=$IFS; IFS=$PATH_SEPARATOR for as_dir in $PATH do IFS=$as_save_IFS test -z "$as_dir" && as_dir=. for ac_exec_ext in '' $ac_executable_extensions; do if as_fn_executable_p "$as_dir/$ac_word$ac_exec_ext"; then ac_cv_path_SLEEP_CMD="$as_dir/$ac_word$ac_exec_ext" $as_echo "$as_me:${as_lineno-$LINENO}: found $as_dir/$ac_word$ac_exec_ext" >&5 break 2 fi done done IFS=$as_save_IFS test -z "$ac_cv_path_SLEEP_CMD" && ac_cv_path_SLEEP_CMD="/bin/sleep" ;; esac fi SLEEP_CMD=$ac_cv_path_SLEEP_CMD if test -n "$SLEEP_CMD"; then { $as_echo "$as_me:${as_lineno-$LINENO}: result: $SLEEP_CMD" >&5 $as_echo "$SLEEP_CMD" >&6; } else { $as_echo "$as_me:${as_lineno-$LINENO}: result: no" >&5 $as_echo "no" >&6; } fi cat >>confdefs.h <<_ACEOF #define SLEEP_CMD "$SLEEP_CMD" _ACEOF # Extract the first word of "su", so it can be a program name with args. set dummy su; ac_word=$2 { $as_echo "$as_me:${as_lineno-$LINENO}: checking for $ac_word" >&5 $as_echo_n "checking for $ac_word... " >&6; } if ${ac_cv_path_SUCMD+:} false; then : $as_echo_n "(cached) " >&6 else case $SUCMD in [\\/]* | ?:[\\/]*) ac_cv_path_SUCMD="$SUCMD" # Let the user override the test with a path. ;; *) as_save_IFS=$IFS; IFS=$PATH_SEPARATOR for as_dir in $PATH do IFS=$as_save_IFS test -z "$as_dir" && as_dir=. 
for ac_exec_ext in '' $ac_executable_extensions; do if as_fn_executable_p "$as_dir/$ac_word$ac_exec_ext"; then ac_cv_path_SUCMD="$as_dir/$ac_word$ac_exec_ext" $as_echo "$as_me:${as_lineno-$LINENO}: found $as_dir/$ac_word$ac_exec_ext" >&5 break 2 fi done done IFS=$as_save_IFS test -z "$ac_cv_path_SUCMD" && ac_cv_path_SUCMD="/bin/su" ;; esac fi SUCMD=$ac_cv_path_SUCMD if test -n "$SUCMD"; then { $as_echo "$as_me:${as_lineno-$LINENO}: result: $SUCMD" >&5 $as_echo "$SUCMD" >&6; } else { $as_echo "$as_me:${as_lineno-$LINENO}: result: no" >&5 $as_echo "no" >&6; } fi cat >>confdefs.h <<_ACEOF #define SUCMD "$SUCMD" _ACEOF { $as_echo "$as_me:${as_lineno-$LINENO}: checking for library containing socket" >&5 $as_echo_n "checking for library containing socket... " >&6; } if ${ac_cv_search_socket+:} false; then : $as_echo_n "(cached) " >&6 else ac_func_search_save_LIBS=$LIBS cat confdefs.h - <<_ACEOF >conftest.$ac_ext /* end confdefs.h. */ /* Override any GCC internal prototype to avoid an error. Use char because int might match the return type of a GCC builtin and then its argument prototype would still apply. 
*/ #ifdef __cplusplus extern "C" #endif char socket (); int main () { return socket (); ; return 0; } _ACEOF for ac_lib in '' socket; do if test -z "$ac_lib"; then ac_res="none required" else ac_res=-l$ac_lib LIBS="-l$ac_lib $ac_func_search_save_LIBS" fi if ac_fn_c_try_link "$LINENO"; then : ac_cv_search_socket=$ac_res fi rm -f core conftest.err conftest.$ac_objext \ conftest$ac_exeext if ${ac_cv_search_socket+:} false; then : break fi done if ${ac_cv_search_socket+:} false; then : else ac_cv_search_socket=no fi rm conftest.$ac_ext LIBS=$ac_func_search_save_LIBS fi { $as_echo "$as_me:${as_lineno-$LINENO}: result: $ac_cv_search_socket" >&5 $as_echo "$ac_cv_search_socket" >&6; } ac_res=$ac_cv_search_socket if test "$ac_res" != no; then : test "$ac_res" = "none required" || LIBS="$ac_res $LIBS" fi { $as_echo "$as_me:${as_lineno-$LINENO}: checking for library containing gethostbyname" >&5 $as_echo_n "checking for library containing gethostbyname... " >&6; } if ${ac_cv_search_gethostbyname+:} false; then : $as_echo_n "(cached) " >&6 else ac_func_search_save_LIBS=$LIBS cat confdefs.h - <<_ACEOF >conftest.$ac_ext /* end confdefs.h. */ /* Override any GCC internal prototype to avoid an error. Use char because int might match the return type of a GCC builtin and then its argument prototype would still apply. 
*/ #ifdef __cplusplus extern "C" #endif char gethostbyname (); int main () { return gethostbyname (); ; return 0; } _ACEOF for ac_lib in '' nsl; do if test -z "$ac_lib"; then ac_res="none required" else ac_res=-l$ac_lib LIBS="-l$ac_lib $ac_func_search_save_LIBS" fi if ac_fn_c_try_link "$LINENO"; then : ac_cv_search_gethostbyname=$ac_res fi rm -f core conftest.err conftest.$ac_objext \ conftest$ac_exeext if ${ac_cv_search_gethostbyname+:} false; then : break fi done if ${ac_cv_search_gethostbyname+:} false; then : else ac_cv_search_gethostbyname=no fi rm conftest.$ac_ext LIBS=$ac_func_search_save_LIBS fi { $as_echo "$as_me:${as_lineno-$LINENO}: result: $ac_cv_search_gethostbyname" >&5 $as_echo "$ac_cv_search_gethostbyname" >&6; } ac_res=$ac_cv_search_gethostbyname if test "$ac_res" != no; then : test "$ac_res" = "none required" || LIBS="$ac_res $LIBS" fi { $as_echo "$as_me:${as_lineno-$LINENO}: checking for library containing hstrerror" >&5 $as_echo_n "checking for library containing hstrerror... " >&6; } if ${ac_cv_search_hstrerror+:} false; then : $as_echo_n "(cached) " >&6 else ac_func_search_save_LIBS=$LIBS cat confdefs.h - <<_ACEOF >conftest.$ac_ext /* end confdefs.h. */ /* Override any GCC internal prototype to avoid an error. Use char because int might match the return type of a GCC builtin and then its argument prototype would still apply. 
*/ #ifdef __cplusplus extern "C" #endif char hstrerror (); int main () { return hstrerror (); ; return 0; } _ACEOF for ac_lib in '' resolv; do if test -z "$ac_lib"; then ac_res="none required" else ac_res=-l$ac_lib LIBS="-l$ac_lib $ac_func_search_save_LIBS" fi if ac_fn_c_try_link "$LINENO"; then : ac_cv_search_hstrerror=$ac_res fi rm -f core conftest.err conftest.$ac_objext \ conftest$ac_exeext if ${ac_cv_search_hstrerror+:} false; then : break fi done if ${ac_cv_search_hstrerror+:} false; then : else ac_cv_search_hstrerror=no fi rm conftest.$ac_ext LIBS=$ac_func_search_save_LIBS fi { $as_echo "$as_me:${as_lineno-$LINENO}: result: $ac_cv_search_hstrerror" >&5 $as_echo "$ac_cv_search_hstrerror" >&6; } ac_res=$ac_cv_search_hstrerror if test "$ac_res" != no; then : test "$ac_res" = "none required" || LIBS="$ac_res $LIBS" fi { $as_echo "$as_me:${as_lineno-$LINENO}: checking for library containing kstat_open" >&5 $as_echo_n "checking for library containing kstat_open... " >&6; } if ${ac_cv_search_kstat_open+:} false; then : $as_echo_n "(cached) " >&6 else ac_func_search_save_LIBS=$LIBS cat confdefs.h - <<_ACEOF >conftest.$ac_ext /* end confdefs.h. */ /* Override any GCC internal prototype to avoid an error. Use char because int might match the return type of a GCC builtin and then its argument prototype would still apply. 
*/ #ifdef __cplusplus extern "C" #endif char kstat_open (); int main () { return kstat_open (); ; return 0; } _ACEOF for ac_lib in '' kstat; do if test -z "$ac_lib"; then ac_res="none required" else ac_res=-l$ac_lib LIBS="-l$ac_lib $ac_func_search_save_LIBS" fi if ac_fn_c_try_link "$LINENO"; then : ac_cv_search_kstat_open=$ac_res fi rm -f core conftest.err conftest.$ac_objext \ conftest$ac_exeext if ${ac_cv_search_kstat_open+:} false; then : break fi done if ${ac_cv_search_kstat_open+:} false; then : else ac_cv_search_kstat_open=no fi rm conftest.$ac_ext LIBS=$ac_func_search_save_LIBS fi { $as_echo "$as_me:${as_lineno-$LINENO}: result: $ac_cv_search_kstat_open" >&5 $as_echo "$ac_cv_search_kstat_open" >&6; } ac_res=$ac_cv_search_kstat_open if test "$ac_res" != no; then : test "$ac_res" = "none required" || LIBS="$ac_res $LIBS" fi for ac_header in mcheck.h values.h socket.h sys/socket.h \ stdbool.h sys/ipc.h sys/shm.h sys/sem.h errno.h \ stdlib.h dirent.h pthread.h sys/prctl.h \ sysint.h inttypes.h termcap.h netdb.h sys/socket.h \ sys/systemcfg.h ncurses.h curses.h sys/dr.h sys/vfs.h \ pam/pam_appl.h security/pam_appl.h sys/sysctl.h \ pty.h utmp.h \ sys/syslog.h linux/sched.h \ kstat.h paths.h limits.h sys/statfs.h sys/ptrace.h \ sys/termios.h float.h sys/statvfs.h do : as_ac_Header=`$as_echo "ac_cv_header_$ac_header" | $as_tr_sh` ac_fn_c_check_header_mongrel "$LINENO" "$ac_header" "$as_ac_Header" "$ac_includes_default" if eval test \"x\$"$as_ac_Header"\" = x"yes"; then : cat >>confdefs.h <<_ACEOF #define `$as_echo "HAVE_$ac_header" | $as_tr_cpp` 1 _ACEOF fi done { $as_echo "$as_me:${as_lineno-$LINENO}: checking for sys/wait.h that is POSIX.1 compatible" >&5 $as_echo_n "checking for sys/wait.h that is POSIX.1 compatible... " >&6; } if ${ac_cv_header_sys_wait_h+:} false; then : $as_echo_n "(cached) " >&6 else cat confdefs.h - <<_ACEOF >conftest.$ac_ext /* end confdefs.h. 
*/ #include <sys/types.h> #include <sys/wait.h> #ifndef WEXITSTATUS # define WEXITSTATUS(stat_val) ((unsigned int) (stat_val) >> 8) #endif #ifndef WIFEXITED # define WIFEXITED(stat_val) (((stat_val) & 255) == 0) #endif int main () { int s; wait (&s); s = WIFEXITED (s) ? WEXITSTATUS (s) : 1; ; return 0; } _ACEOF if ac_fn_c_try_compile "$LINENO"; then : ac_cv_header_sys_wait_h=yes else ac_cv_header_sys_wait_h=no fi rm -f core conftest.err conftest.$ac_objext conftest.$ac_ext fi { $as_echo "$as_me:${as_lineno-$LINENO}: result: $ac_cv_header_sys_wait_h" >&5 $as_echo "$ac_cv_header_sys_wait_h" >&6; } if test $ac_cv_header_sys_wait_h = yes; then $as_echo "#define HAVE_SYS_WAIT_H 1" >>confdefs.h fi { $as_echo "$as_me:${as_lineno-$LINENO}: checking whether time.h and sys/time.h may both be included" >&5 $as_echo_n "checking whether time.h and sys/time.h may both be included... " >&6; } if ${ac_cv_header_time+:} false; then : $as_echo_n "(cached) " >&6 else cat confdefs.h - <<_ACEOF >conftest.$ac_ext /* end confdefs.h. */ #include <sys/types.h> #include <sys/time.h> #include <time.h> int main () { if ((struct tm *) 0) return 0; ; return 0; } _ACEOF if ac_fn_c_try_compile "$LINENO"; then : ac_cv_header_time=yes else ac_cv_header_time=no fi rm -f core conftest.err conftest.$ac_objext conftest.$ac_ext fi { $as_echo "$as_me:${as_lineno-$LINENO}: result: $ac_cv_header_time" >&5 $as_echo "$ac_cv_header_time" >&6; } if test $ac_cv_header_time = yes; then $as_echo "#define TIME_WITH_SYS_TIME 1" >>confdefs.h fi { $as_echo "$as_me:${as_lineno-$LINENO}: checking for ANSI C header files" >&5 $as_echo_n "checking for ANSI C header files... " >&6; } if ${ac_cv_header_stdc+:} false; then : $as_echo_n "(cached) " >&6 else cat confdefs.h - <<_ACEOF >conftest.$ac_ext /* end confdefs.h. 
*/ #include <stdlib.h> #include <stdarg.h> #include <string.h> #include <float.h> int main () { ; return 0; } _ACEOF if ac_fn_c_try_compile "$LINENO"; then : ac_cv_header_stdc=yes else ac_cv_header_stdc=no fi rm -f core conftest.err conftest.$ac_objext conftest.$ac_ext if test $ac_cv_header_stdc = yes; then # SunOS 4.x string.h does not declare mem*, contrary to ANSI. cat confdefs.h - <<_ACEOF >conftest.$ac_ext /* end confdefs.h. */ #include <string.h> _ACEOF if (eval "$ac_cpp conftest.$ac_ext") 2>&5 | $EGREP "memchr" >/dev/null 2>&1; then : else ac_cv_header_stdc=no fi rm -f conftest* fi if test $ac_cv_header_stdc = yes; then # ISC 2.0.2 stdlib.h does not declare free, contrary to ANSI. cat confdefs.h - <<_ACEOF >conftest.$ac_ext /* end confdefs.h. */ #include <stdlib.h> _ACEOF if (eval "$ac_cpp conftest.$ac_ext") 2>&5 | $EGREP "free" >/dev/null 2>&1; then : else ac_cv_header_stdc=no fi rm -f conftest* fi if test $ac_cv_header_stdc = yes; then # /bin/cc in Irix-4.0.5 gets non-ANSI ctype macros unless using -ansi. if test "$cross_compiling" = yes; then : : else cat confdefs.h - <<_ACEOF >conftest.$ac_ext /* end confdefs.h. */ #include <ctype.h> #include <stdlib.h> #if ((' ' & 0x0FF) == 0x020) # define ISLOWER(c) ('a' <= (c) && (c) <= 'z') # define TOUPPER(c) (ISLOWER(c) ? 'A' + ((c) - 'a') : (c)) #else # define ISLOWER(c) \ (('a' <= (c) && (c) <= 'i') \ || ('j' <= (c) && (c) <= 'r') \ || ('s' <= (c) && (c) <= 'z')) # define TOUPPER(c) (ISLOWER(c) ? 
((c) | 0x40) : (c)) #endif #define XOR(e, f) (((e) && !(f)) || (!(e) && (f))) int main () { int i; for (i = 0; i < 256; i++) if (XOR (islower (i), ISLOWER (i)) || toupper (i) != TOUPPER (i)) return 2; return 0; } _ACEOF if ac_fn_c_try_run "$LINENO"; then : else ac_cv_header_stdc=no fi rm -f core *.core core.conftest.* gmon.out bb.out conftest$ac_exeext \ conftest.$ac_objext conftest.beam conftest.$ac_ext fi fi fi { $as_echo "$as_me:${as_lineno-$LINENO}: result: $ac_cv_header_stdc" >&5 $as_echo "$ac_cv_header_stdc" >&6; } if test $ac_cv_header_stdc = yes; then $as_echo "#define STDC_HEADERS 1" >>confdefs.h fi cat confdefs.h - <<_ACEOF >conftest.$ac_ext /* end confdefs.h. */ #include <sys/systemcfg.h> int main () { double x = _system_configuration.physmem; ; return 0; } _ACEOF if ac_fn_c_try_compile "$LINENO"; then : $as_echo "#define HAVE__SYSTEM_CONFIGURATION 1" >>confdefs.h fi rm -f core conftest.err conftest.$ac_objext conftest.$ac_ext { $as_echo "$as_me:${as_lineno-$LINENO}: checking library containing dlopen" >&5 $as_echo_n "checking library containing dlopen... " >&6; } { $as_echo "$as_me:${as_lineno-$LINENO}: checking for dlopen in -l" >&5 $as_echo_n "checking for dlopen in -l... " >&6; } if ${ac_cv_lib__dlopen+:} false; then : $as_echo_n "(cached) " >&6 else ac_check_lib_save_LIBS=$LIBS LIBS="-l $LIBS" cat confdefs.h - <<_ACEOF >conftest.$ac_ext /* end confdefs.h. */ /* Override any GCC internal prototype to avoid an error. Use char because int might match the return type of a GCC builtin and then its argument prototype would still apply. 
*/ #ifdef __cplusplus extern "C" #endif char dlopen (); int main () { return dlopen (); ; return 0; } _ACEOF if ac_fn_c_try_link "$LINENO"; then : ac_cv_lib__dlopen=yes else ac_cv_lib__dlopen=no fi rm -f core conftest.err conftest.$ac_objext \ conftest$ac_exeext conftest.$ac_ext LIBS=$ac_check_lib_save_LIBS fi { $as_echo "$as_me:${as_lineno-$LINENO}: result: $ac_cv_lib__dlopen" >&5 $as_echo "$ac_cv_lib__dlopen" >&6; } if test "x$ac_cv_lib__dlopen" = xyes; then : ac_have_dlopen=yes; DL_LIBS="" else { $as_echo "$as_me:${as_lineno-$LINENO}: checking for dlopen in -ldl" >&5 $as_echo_n "checking for dlopen in -ldl... " >&6; } if ${ac_cv_lib_dl_dlopen+:} false; then : $as_echo_n "(cached) " >&6 else ac_check_lib_save_LIBS=$LIBS LIBS="-ldl $LIBS" cat confdefs.h - <<_ACEOF >conftest.$ac_ext /* end confdefs.h. */ /* Override any GCC internal prototype to avoid an error. Use char because int might match the return type of a GCC builtin and then its argument prototype would still apply. */ #ifdef __cplusplus extern "C" #endif char dlopen (); int main () { return dlopen (); ; return 0; } _ACEOF if ac_fn_c_try_link "$LINENO"; then : ac_cv_lib_dl_dlopen=yes else ac_cv_lib_dl_dlopen=no fi rm -f core conftest.err conftest.$ac_objext \ conftest$ac_exeext conftest.$ac_ext LIBS=$ac_check_lib_save_LIBS fi { $as_echo "$as_me:${as_lineno-$LINENO}: result: $ac_cv_lib_dl_dlopen" >&5 $as_echo "$ac_cv_lib_dl_dlopen" >&6; } if test "x$ac_cv_lib_dl_dlopen" = xyes; then : ac_have_dlopen=yes; DL_LIBS="-ldl" else { $as_echo "$as_me:${as_lineno-$LINENO}: checking for dlopen in -lsvdl" >&5 $as_echo_n "checking for dlopen in -lsvdl... " >&6; } if ${ac_cv_lib_svdl_dlopen+:} false; then : $as_echo_n "(cached) " >&6 else ac_check_lib_save_LIBS=$LIBS LIBS="-lsvdl $LIBS" cat confdefs.h - <<_ACEOF >conftest.$ac_ext /* end confdefs.h. */ /* Override any GCC internal prototype to avoid an error. 
Use char because int might match the return type of a GCC builtin and then its argument prototype would still apply. */ #ifdef __cplusplus extern "C" #endif char dlopen (); int main () { return dlopen (); ; return 0; } _ACEOF if ac_fn_c_try_link "$LINENO"; then : ac_cv_lib_svdl_dlopen=yes else ac_cv_lib_svdl_dlopen=no fi rm -f core conftest.err conftest.$ac_objext \ conftest$ac_exeext conftest.$ac_ext LIBS=$ac_check_lib_save_LIBS fi { $as_echo "$as_me:${as_lineno-$LINENO}: result: $ac_cv_lib_svdl_dlopen" >&5 $as_echo "$ac_cv_lib_svdl_dlopen" >&6; } if test "x$ac_cv_lib_svdl_dlopen" = xyes; then : ac_have_dlopen=yes; DL_LIBS="-lsvdl" fi fi fi { $as_echo "$as_me:${as_lineno-$LINENO}: checking for program_invocation_name" >&5 $as_echo_n "checking for program_invocation_name... " >&6; } cat confdefs.h - <<_ACEOF >conftest.$ac_ext /* end confdefs.h. */ #include <errno.h> extern char *program_invocation_name; int main () { char *p; p = program_invocation_name; printf("%s\n", p); ; return 0; } _ACEOF if ac_fn_c_try_link "$LINENO"; then : got_program_invocation_name=yes fi rm -f core conftest.err conftest.$ac_objext \ conftest$ac_exeext conftest.$ac_ext { $as_echo "$as_me:${as_lineno-$LINENO}: result: ${got_program_invocation_name=no}" >&5 $as_echo "${got_program_invocation_name=no}" >&6; } if test "x$got_program_invocation_name" = "xyes"; then $as_echo "#define HAVE_PROGRAM_INVOCATION_NAME 1" >>confdefs.h fi cat confdefs.h - <<_ACEOF >conftest.$ac_ext /* end confdefs.h. */ #include <sys/types.h> #include <sys/ptrace.h> #include <sys/wait.h> int main () { ptrace(PT_TRACE_ME,0,0,0,0); ; return 0; } _ACEOF if ac_fn_c_try_compile "$LINENO"; then : $as_echo "#define PTRACE_FIVE_ARGS 1" >>confdefs.h fi rm -f core conftest.err conftest.$ac_objext conftest.$ac_ext for ac_func in ptrace64 do : ac_fn_c_check_func "$LINENO" "ptrace64" "ac_cv_func_ptrace64" if test "x$ac_cv_func_ptrace64" = xyes; then : cat >>confdefs.h <<_ACEOF #define HAVE_PTRACE64 1 _ACEOF fi done cat confdefs.h - <<_ACEOF >conftest.$ac_ext /* end confdefs.h. 
*/ #include <unistd.h> int main () { setpgrp(0,0); ; return 0; } _ACEOF if ac_fn_c_try_compile "$LINENO"; then : $as_echo "#define SETPGRP_TWO_ARGS 1" >>confdefs.h fi rm -f core conftest.err conftest.$ac_objext conftest.$ac_ext # Test if sched_setaffinity function exists and argument count (it can vary) for ac_func in sched_setaffinity do : ac_fn_c_check_func "$LINENO" "sched_setaffinity" "ac_cv_func_sched_setaffinity" if test "x$ac_cv_func_sched_setaffinity" = xyes; then : cat >>confdefs.h <<_ACEOF #define HAVE_SCHED_SETAFFINITY 1 _ACEOF have_sched_setaffinity=yes fi done cat confdefs.h - <<_ACEOF >conftest.$ac_ext /* end confdefs.h. */ #define _GNU_SOURCE #include <sched.h> int main () { cpu_set_t mask; sched_getaffinity(0, sizeof(cpu_set_t), &mask); ; return 0; } _ACEOF if ac_fn_c_try_compile "$LINENO"; then : $as_echo "#define SCHED_GETAFFINITY_THREE_ARGS 1" >>confdefs.h fi rm -f core conftest.err conftest.$ac_objext conftest.$ac_ext cat confdefs.h - <<_ACEOF >conftest.$ac_ext /* end confdefs.h. */ #define _GNU_SOURCE #include <sched.h> int main () { cpu_set_t mask; sched_getaffinity(0, &mask); ; return 0; } _ACEOF if ac_fn_c_try_compile "$LINENO"; then : $as_echo "#define SCHED_GETAFFINITY_TWO_ARGS 1" >>confdefs.h fi rm -f core conftest.err conftest.$ac_objext conftest.$ac_ext # # Test for NUMA memory affinity functions and set the definitions # { $as_echo "$as_me:${as_lineno-$LINENO}: checking for numa_available in -lnuma" >&5 $as_echo_n "checking for numa_available in -lnuma... " >&6; } if ${ac_cv_lib_numa_numa_available+:} false; then : $as_echo_n "(cached) " >&6 else ac_check_lib_save_LIBS=$LIBS LIBS="-lnuma $LIBS" cat confdefs.h - <<_ACEOF >conftest.$ac_ext /* end confdefs.h. */ /* Override any GCC internal prototype to avoid an error. Use char because int might match the return type of a GCC builtin and then its argument prototype would still apply. 
*/ #ifdef __cplusplus extern "C" #endif char numa_available (); int main () { return numa_available (); ; return 0; } _ACEOF if ac_fn_c_try_link "$LINENO"; then : ac_cv_lib_numa_numa_available=yes else ac_cv_lib_numa_numa_available=no fi rm -f core conftest.err conftest.$ac_objext \ conftest$ac_exeext conftest.$ac_ext LIBS=$ac_check_lib_save_LIBS fi { $as_echo "$as_me:${as_lineno-$LINENO}: result: $ac_cv_lib_numa_numa_available" >&5 $as_echo "$ac_cv_lib_numa_numa_available" >&6; } if test "x$ac_cv_lib_numa_numa_available" = xyes; then : ac_have_numa=yes; NUMA_LIBS="-lnuma" fi if test "x$ac_have_numa" = "xyes"; then HAVE_NUMA_TRUE= HAVE_NUMA_FALSE='#' else HAVE_NUMA_TRUE='#' HAVE_NUMA_FALSE= fi if test "x$ac_have_numa" = "xyes"; then $as_echo "#define HAVE_NUMA 1" >>confdefs.h CFLAGS="-DNUMA_VERSION1_COMPATIBILITY $CFLAGS" else { $as_echo "$as_me:${as_lineno-$LINENO}: WARNING: unable to locate NUMA memory affinity functions" >&5 $as_echo "$as_me: WARNING: unable to locate NUMA memory affinity functions" >&2;} fi # # Test for cpuset directory # cpuset_default_dir="/dev/cpuset" # Check whether --with-cpusetdir was given. if test "${with_cpusetdir+set}" = set; then : withval=$with_cpusetdir; try_path=$withval fi for cpuset_dir in $try_path "" $cpuset_default_dir; do if test -d "$cpuset_dir" ; then cat >>confdefs.h <<_ACEOF #define CPUSET_DIR "$cpuset_dir" _ACEOF have_sched_setaffinity=yes break fi done # # Set HAVE_SCHED_SETAFFINITY if any task affinity supported if test "x$have_sched_setaffinity" = "xyes"; then HAVE_SCHED_SETAFFINITY_TRUE= HAVE_SCHED_SETAFFINITY_FALSE='#' else HAVE_SCHED_SETAFFINITY_TRUE='#' HAVE_SCHED_SETAFFINITY_FALSE= fi { $as_echo "$as_me:${as_lineno-$LINENO}: checking whether to enable PAM support" >&5 $as_echo_n "checking whether to enable PAM support... " >&6; } # Check whether --enable-pam was given. 
if test "${enable_pam+set}" = set; then : enableval=$enable_pam; case "$enableval" in yes) x_ac_pam=yes ;; no) x_ac_pam=no ;; *) { $as_echo "$as_me:${as_lineno-$LINENO}: result: doh!" >&5 $as_echo "doh!" >&6; } as_fn_error $? "bad value \"$enableval\" for --enable-pam" "$LINENO" 5 ;; esac else x_ac_pam=yes fi if test "$x_ac_pam" = yes; then { $as_echo "$as_me:${as_lineno-$LINENO}: result: yes" >&5 $as_echo "yes" >&6; } { $as_echo "$as_me:${as_lineno-$LINENO}: checking for pam_get_user in -lpam" >&5 $as_echo_n "checking for pam_get_user in -lpam... " >&6; } if ${ac_cv_lib_pam_pam_get_user+:} false; then : $as_echo_n "(cached) " >&6 else ac_check_lib_save_LIBS=$LIBS LIBS="-lpam $LIBS" cat confdefs.h - <<_ACEOF >conftest.$ac_ext /* end confdefs.h. */ /* Override any GCC internal prototype to avoid an error. Use char because int might match the return type of a GCC builtin and then its argument prototype would still apply. */ #ifdef __cplusplus extern "C" #endif char pam_get_user (); int main () { return pam_get_user (); ; return 0; } _ACEOF if ac_fn_c_try_link "$LINENO"; then : ac_cv_lib_pam_pam_get_user=yes else ac_cv_lib_pam_pam_get_user=no fi rm -f core conftest.err conftest.$ac_objext \ conftest$ac_exeext conftest.$ac_ext LIBS=$ac_check_lib_save_LIBS fi { $as_echo "$as_me:${as_lineno-$LINENO}: result: $ac_cv_lib_pam_pam_get_user" >&5 $as_echo "$ac_cv_lib_pam_pam_get_user" >&6; } if test "x$ac_cv_lib_pam_pam_get_user" = xyes; then : ac_have_pam=yes; PAM_LIBS="-lpam" fi { $as_echo "$as_me:${as_lineno-$LINENO}: checking for misc_conv in -lpam_misc" >&5 $as_echo_n "checking for misc_conv in -lpam_misc... " >&6; } if ${ac_cv_lib_pam_misc_misc_conv+:} false; then : $as_echo_n "(cached) " >&6 else ac_check_lib_save_LIBS=$LIBS LIBS="-lpam_misc $LIBS" cat confdefs.h - <<_ACEOF >conftest.$ac_ext /* end confdefs.h. */ /* Override any GCC internal prototype to avoid an error. 
Use char because int might match the return type of a GCC builtin and then its argument prototype would still apply. */ #ifdef __cplusplus extern "C" #endif char misc_conv (); int main () { return misc_conv (); ; return 0; } _ACEOF if ac_fn_c_try_link "$LINENO"; then : ac_cv_lib_pam_misc_misc_conv=yes else ac_cv_lib_pam_misc_misc_conv=no fi rm -f core conftest.err conftest.$ac_objext \ conftest$ac_exeext conftest.$ac_ext LIBS=$ac_check_lib_save_LIBS fi { $as_echo "$as_me:${as_lineno-$LINENO}: result: $ac_cv_lib_pam_misc_misc_conv" >&5 $as_echo "$ac_cv_lib_pam_misc_misc_conv" >&6; } if test "x$ac_cv_lib_pam_misc_misc_conv" = xyes; then : ac_have_pam_misc=yes; PAM_LIBS="$PAM_LIBS -lpam_misc" fi if test "x$ac_have_pam" = "xyes" -a "x$ac_have_pam_misc" = "xyes"; then $as_echo "#define HAVE_PAM /**/" >>confdefs.h else { $as_echo "$as_me:${as_lineno-$LINENO}: WARNING: unable to locate PAM libraries" >&5 $as_echo "$as_me: WARNING: unable to locate PAM libraries" >&2;} fi else { $as_echo "$as_me:${as_lineno-$LINENO}: result: no" >&5 $as_echo "no" >&6; } fi if test "x$x_ac_pam" = "xyes" -a "x$ac_have_pam" = "xyes" -a "x$ac_have_pam_misc" = "xyes"; then HAVE_PAM_TRUE= HAVE_PAM_FALSE='#' else HAVE_PAM_TRUE='#' HAVE_PAM_FALSE= fi # Check whether --with-pam_dir was given. if test "${with_pam_dir+set}" = set; then : withval=$with_pam_dir; if test -d $withval ; then PAM_DIR="$withval" else as_fn_error $? "bad value \"$withval\" for --with-pam_dir" "$LINENO" 5 fi else if test -d /lib64/security ; then PAM_DIR="/lib64/security" else PAM_DIR="/lib/security" fi fi cat >>confdefs.h <<_ACEOF #define PAM_DIR "$pam_dir" _ACEOF { $as_echo "$as_me:${as_lineno-$LINENO}: checking whether to enable ISO 8601 time format support" >&5 $as_echo_n "checking whether to enable ISO 8601 time format support... " >&6; } # Check whether --enable-iso8601 was given. 
if test "${enable_iso8601+set}" = set; then : enableval=$enable_iso8601; case "$enableval" in yes) x_ac_iso8601=yes ;; no) x_ac_iso8601=no ;; *) { $as_echo "$as_me:${as_lineno-$LINENO}: result: doh!" >&5 $as_echo "doh!" >&6; } as_fn_error $? "bad value \"$enableval\" for --enable-iso8601" "$LINENO" 5 ;; esac else x_ac_iso8601=yes fi if test "$x_ac_iso8601" = yes; then { $as_echo "$as_me:${as_lineno-$LINENO}: result: yes" >&5 $as_echo "yes" >&6; } $as_echo "#define USE_ISO_8601 /**/" >>confdefs.h else { $as_echo "$as_me:${as_lineno-$LINENO}: result: no" >&5 $as_echo "no" >&6; } fi { $as_echo "$as_me:${as_lineno-$LINENO}: checking whether sbatch --get-user-env option should load .login" >&5 $as_echo_n "checking whether sbatch --get-user-env option should load .login... " >&6; } # Check whether --enable-load-env-no-login was given. if test "${enable_load_env_no_login+set}" = set; then : enableval=$enable_load_env_no_login; case "$enableval" in yes) x_ac_load_env_no_login=yes ;; no) x_ac_load_env_no_login=no ;; *) { $as_echo "$as_me:${as_lineno-$LINENO}: result: doh!" >&5 $as_echo "doh!" >&6; } as_fn_error $? "bad value \"$enableval\" for --enable-load-env-no-login" "$LINENO" 5 ;; esac else x_ac_load_env_no_login=no fi if test "$x_ac_load_env_no_login" = yes; then { $as_echo "$as_me:${as_lineno-$LINENO}: result: yes" >&5 $as_echo "yes" >&6; } $as_echo "#define LOAD_ENV_NO_LOGIN 1" >>confdefs.h else { $as_echo "$as_me:${as_lineno-$LINENO}: result: no" >&5 $as_echo "no" >&6; } fi { $as_echo "$as_me:${as_lineno-$LINENO}: checking whether byte ordering is bigendian" >&5 $as_echo_n "checking whether byte ordering is bigendian... " >&6; } if ${ac_cv_c_bigendian+:} false; then : $as_echo_n "(cached) " >&6 else ac_cv_c_bigendian=unknown # See if we're dealing with a universal compiler. cat confdefs.h - <<_ACEOF >conftest.$ac_ext /* end confdefs.h. 
*/ #ifndef __APPLE_CC__ not a universal capable compiler #endif typedef int dummy; _ACEOF if ac_fn_c_try_compile "$LINENO"; then : # Check for potential -arch flags. It is not universal unless # there are at least two -arch flags with different values. ac_arch= ac_prev= for ac_word in $CC $CFLAGS $CPPFLAGS $LDFLAGS; do if test -n "$ac_prev"; then case $ac_word in i?86 | x86_64 | ppc | ppc64) if test -z "$ac_arch" || test "$ac_arch" = "$ac_word"; then ac_arch=$ac_word else ac_cv_c_bigendian=universal break fi ;; esac ac_prev= elif test "x$ac_word" = "x-arch"; then ac_prev=arch fi done fi rm -f core conftest.err conftest.$ac_objext conftest.$ac_ext if test $ac_cv_c_bigendian = unknown; then # See if sys/param.h defines the BYTE_ORDER macro. cat confdefs.h - <<_ACEOF >conftest.$ac_ext /* end confdefs.h. */ #include <sys/types.h> #include <sys/param.h> int main () { #if ! (defined BYTE_ORDER && defined BIG_ENDIAN \ && defined LITTLE_ENDIAN && BYTE_ORDER && BIG_ENDIAN \ && LITTLE_ENDIAN) bogus endian macros #endif ; return 0; } _ACEOF if ac_fn_c_try_compile "$LINENO"; then : # It does; now see whether it defined to BIG_ENDIAN or not. cat confdefs.h - <<_ACEOF >conftest.$ac_ext /* end confdefs.h. */ #include <sys/types.h> #include <sys/param.h> int main () { #if BYTE_ORDER != BIG_ENDIAN not big endian #endif ; return 0; } _ACEOF if ac_fn_c_try_compile "$LINENO"; then : ac_cv_c_bigendian=yes else ac_cv_c_bigendian=no fi rm -f core conftest.err conftest.$ac_objext conftest.$ac_ext fi rm -f core conftest.err conftest.$ac_objext conftest.$ac_ext fi if test $ac_cv_c_bigendian = unknown; then # See if <limits.h> defines _LITTLE_ENDIAN or _BIG_ENDIAN (e.g., Solaris). cat confdefs.h - <<_ACEOF >conftest.$ac_ext /* end confdefs.h. */ #include <limits.h> int main () { #if ! (defined _LITTLE_ENDIAN || defined _BIG_ENDIAN) bogus endian macros #endif ; return 0; } _ACEOF if ac_fn_c_try_compile "$LINENO"; then : # It does; now see whether it defined to _BIG_ENDIAN or not. cat confdefs.h - <<_ACEOF >conftest.$ac_ext /* end confdefs.h. 
*/ #include <limits.h> int main () { #ifndef _BIG_ENDIAN not big endian #endif ; return 0; } _ACEOF if ac_fn_c_try_compile "$LINENO"; then : ac_cv_c_bigendian=yes else ac_cv_c_bigendian=no fi rm -f core conftest.err conftest.$ac_objext conftest.$ac_ext fi rm -f core conftest.err conftest.$ac_objext conftest.$ac_ext fi if test $ac_cv_c_bigendian = unknown; then # Compile a test program. if test "$cross_compiling" = yes; then : # Try to guess by grepping values from an object file. cat confdefs.h - <<_ACEOF >conftest.$ac_ext /* end confdefs.h. */ short int ascii_mm[] = { 0x4249, 0x4765, 0x6E44, 0x6961, 0x6E53, 0x7953, 0 }; short int ascii_ii[] = { 0x694C, 0x5454, 0x656C, 0x6E45, 0x6944, 0x6E61, 0 }; int use_ascii (int i) { return ascii_mm[i] + ascii_ii[i]; } short int ebcdic_ii[] = { 0x89D3, 0xE3E3, 0x8593, 0x95C5, 0x89C4, 0x9581, 0 }; short int ebcdic_mm[] = { 0xC2C9, 0xC785, 0x95C4, 0x8981, 0x95E2, 0xA8E2, 0 }; int use_ebcdic (int i) { return ebcdic_mm[i] + ebcdic_ii[i]; } extern int foo; int main () { return use_ascii (foo) == use_ebcdic (foo); ; return 0; } _ACEOF if ac_fn_c_try_compile "$LINENO"; then : if grep BIGenDianSyS conftest.$ac_objext >/dev/null; then ac_cv_c_bigendian=yes fi if grep LiTTleEnDian conftest.$ac_objext >/dev/null ; then if test "$ac_cv_c_bigendian" = unknown; then ac_cv_c_bigendian=no else # finding both strings is unlikely to happen, but who knows? ac_cv_c_bigendian=unknown fi fi fi rm -f core conftest.err conftest.$ac_objext conftest.$ac_ext else cat confdefs.h - <<_ACEOF >conftest.$ac_ext /* end confdefs.h. */ $ac_includes_default int main () { /* Are we little or big endian? From Harbison&Steele. 
*/ union { long int l; char c[sizeof (long int)]; } u; u.l = 1; return u.c[sizeof (long int) - 1] == 1; ; return 0; } _ACEOF if ac_fn_c_try_run "$LINENO"; then : ac_cv_c_bigendian=no else ac_cv_c_bigendian=yes fi rm -f core *.core core.conftest.* gmon.out bb.out conftest$ac_exeext \ conftest.$ac_objext conftest.beam conftest.$ac_ext fi fi fi { $as_echo "$as_me:${as_lineno-$LINENO}: result: $ac_cv_c_bigendian" >&5 $as_echo "$ac_cv_c_bigendian" >&6; } case $ac_cv_c_bigendian in #( yes) $as_echo "#define WORDS_BIGENDIAN 1" >>confdefs.h ;; #( no) ;; #( universal) $as_echo "#define AC_APPLE_UNIVERSAL_BUILD 1" >>confdefs.h ;; #( *) as_fn_error $? "unknown endianness presetting ac_cv_c_bigendian=no (or yes) will help" "$LINENO" 5 ;; esac if test "x$ac_cv_c_bigendian" = "xyes"; then $as_echo "#define SLURM_BIGENDIAN 1" >>confdefs.h fi x_ac_json_dirs="/usr /usr/local" x_ac_json_libs="lib64 lib" # Check whether --with-json was given. if test "${with_json+set}" = set; then : withval=$with_json; x_ac_json_dirs="$withval $x_ac_json_dirs" fi { $as_echo "$as_me:${as_lineno-$LINENO}: checking for json installation" >&5 $as_echo_n "checking for json installation... " >&6; } if ${x_ac_cv_json_dir+:} false; then : $as_echo_n "(cached) " >&6 else for d in $x_ac_json_dirs; do test -d "$d" || continue test -d "$d/include" || continue test -f "$d/include/json-c/json_object.h" || test -f "$d/include/json/json_object.h" || continue for bit in $x_ac_json_libs; do test -d "$d/$bit" || continue _x_ac_json_libs_save="$LIBS" LIBS="-L$d/$bit -ljson-c $LIBS" cat confdefs.h - <<_ACEOF >conftest.$ac_ext /* end confdefs.h. */ /* Override any GCC internal prototype to avoid an error. Use char because int might match the return type of a GCC builtin and then its argument prototype would still apply. 
 */
#ifdef __cplusplus
extern "C"
#endif
char json_tokener_parse ();
int
main ()
{
return json_tokener_parse ();
  ;
  return 0;
}
_ACEOF
if ac_fn_c_try_link "$LINENO"; then :
  x_ac_cv_json_dir=$d
fi
rm -f core conftest.err conftest.$ac_objext \
    conftest$ac_exeext conftest.$ac_ext
      LIBS="$_x_ac_json_libs_save"
      test -n "$x_ac_cv_json_dir" && break
    done
    test -n "$x_ac_cv_json_dir" && break
  done
fi
{ $as_echo "$as_me:${as_lineno-$LINENO}: result: $x_ac_cv_json_dir" >&5
$as_echo "$x_ac_cv_json_dir" >&6; }
if test -z "$x_ac_cv_json_dir"; then
  { $as_echo "$as_me:${as_lineno-$LINENO}: WARNING: unable to locate json parser library" >&5
$as_echo "$as_me: WARNING: unable to locate json parser library" >&2;}
else
  if test -f "$d/include/json-c/json_object.h" ; then
    $as_echo "#define HAVE_JSON_C_INC 1" >>confdefs.h
  fi
  if test -f "$d/include/json/json_object.h" ; then
    $as_echo "#define HAVE_JSON_INC 1" >>confdefs.h
  fi
  $as_echo "#define HAVE_JSON 1" >>confdefs.h

  JSON_CPPFLAGS="-I$x_ac_cv_json_dir/include"
  JSON_LDFLAGS="-L$x_ac_cv_json_dir/$bit -ljson-c"
fi

 if test -n "$x_ac_cv_json_dir"; then
  WITH_JSON_PARSER_TRUE=
  WITH_JSON_PARSER_FALSE='#'
else
  WITH_JSON_PARSER_TRUE='#'
  WITH_JSON_PARSER_FALSE=
fi

if test $ac_cv_c_compiler_gnu = yes; then
    { $as_echo "$as_me:${as_lineno-$LINENO}: checking whether $CC needs -traditional" >&5
$as_echo_n "checking whether $CC needs -traditional... " >&6; }
if ${ac_cv_prog_gcc_traditional+:} false; then :
  $as_echo_n "(cached) " >&6
else
    ac_pattern="Autoconf.*'x'"
  cat confdefs.h - <<_ACEOF >conftest.$ac_ext
/* end confdefs.h.  */
#include <sgtty.h>
Autoconf TIOCGETP
_ACEOF
if (eval "$ac_cpp conftest.$ac_ext") 2>&5 |
  $EGREP "$ac_pattern" >/dev/null 2>&1; then :
  ac_cv_prog_gcc_traditional=yes
else
  ac_cv_prog_gcc_traditional=no
fi
rm -f conftest*

  if test $ac_cv_prog_gcc_traditional = no; then
    cat confdefs.h - <<_ACEOF >conftest.$ac_ext
/* end confdefs.h.
 */
#include <termio.h>
Autoconf TCGETA
_ACEOF
if (eval "$ac_cpp conftest.$ac_ext") 2>&5 |
  $EGREP "$ac_pattern" >/dev/null 2>&1; then :
  ac_cv_prog_gcc_traditional=yes
fi
rm -f conftest*

  fi
fi
{ $as_echo "$as_me:${as_lineno-$LINENO}: result: $ac_cv_prog_gcc_traditional" >&5
$as_echo "$ac_cv_prog_gcc_traditional" >&6; }
  if test $ac_cv_prog_gcc_traditional = yes; then
    CC="$CC -traditional"
  fi
fi

for ac_header in stdlib.h
do :
  ac_fn_c_check_header_mongrel "$LINENO" "stdlib.h" "ac_cv_header_stdlib_h" "$ac_includes_default"
if test "x$ac_cv_header_stdlib_h" = xyes; then :
  cat >>confdefs.h <<_ACEOF
#define HAVE_STDLIB_H 1
_ACEOF

fi

done

{ $as_echo "$as_me:${as_lineno-$LINENO}: checking for GNU libc compatible malloc" >&5
$as_echo_n "checking for GNU libc compatible malloc... " >&6; }
if ${ac_cv_func_malloc_0_nonnull+:} false; then :
  $as_echo_n "(cached) " >&6
else
  if test "$cross_compiling" = yes; then :
  ac_cv_func_malloc_0_nonnull=no
else
  cat confdefs.h - <<_ACEOF >conftest.$ac_ext
/* end confdefs.h.  */
#if defined STDC_HEADERS || defined HAVE_STDLIB_H
# include <stdlib.h>
#else
char *malloc ();
#endif

int
main ()
{
return !
malloc (0); ; return 0; } _ACEOF if ac_fn_c_try_run "$LINENO"; then : ac_cv_func_malloc_0_nonnull=yes else ac_cv_func_malloc_0_nonnull=no fi rm -f core *.core core.conftest.* gmon.out bb.out conftest$ac_exeext \ conftest.$ac_objext conftest.beam conftest.$ac_ext fi fi { $as_echo "$as_me:${as_lineno-$LINENO}: result: $ac_cv_func_malloc_0_nonnull" >&5 $as_echo "$ac_cv_func_malloc_0_nonnull" >&6; } if test $ac_cv_func_malloc_0_nonnull = yes; then : $as_echo "#define HAVE_MALLOC 1" >>confdefs.h else $as_echo "#define HAVE_MALLOC 0" >>confdefs.h case " $LIBOBJS " in *" malloc.$ac_objext "* ) ;; *) LIBOBJS="$LIBOBJS malloc.$ac_objext" ;; esac $as_echo "#define malloc rpl_malloc" >>confdefs.h fi ac_fn_c_check_decl "$LINENO" "strerror_r" "ac_cv_have_decl_strerror_r" "$ac_includes_default" if test "x$ac_cv_have_decl_strerror_r" = xyes; then : ac_have_decl=1 else ac_have_decl=0 fi cat >>confdefs.h <<_ACEOF #define HAVE_DECL_STRERROR_R $ac_have_decl _ACEOF for ac_func in strerror_r do : ac_fn_c_check_func "$LINENO" "strerror_r" "ac_cv_func_strerror_r" if test "x$ac_cv_func_strerror_r" = xyes; then : cat >>confdefs.h <<_ACEOF #define HAVE_STRERROR_R 1 _ACEOF fi done { $as_echo "$as_me:${as_lineno-$LINENO}: checking whether strerror_r returns char *" >&5 $as_echo_n "checking whether strerror_r returns char *... " >&6; } if ${ac_cv_func_strerror_r_char_p+:} false; then : $as_echo_n "(cached) " >&6 else ac_cv_func_strerror_r_char_p=no if test $ac_cv_have_decl_strerror_r = yes; then cat confdefs.h - <<_ACEOF >conftest.$ac_ext /* end confdefs.h. */ $ac_includes_default int main () { char buf[100]; char x = *strerror_r (0, buf, sizeof buf); char *p = strerror_r (0, buf, sizeof buf); return !p || x; ; return 0; } _ACEOF if ac_fn_c_try_compile "$LINENO"; then : ac_cv_func_strerror_r_char_p=yes fi rm -f core conftest.err conftest.$ac_objext conftest.$ac_ext else # strerror_r is not declared. Choose between # systems that have relatively inaccessible declarations for the # function. 
BeOS and DEC UNIX 4.0 fall in this category, but the # former has a strerror_r that returns char*, while the latter # has a strerror_r that returns `int'. # This test should segfault on the DEC system. if test "$cross_compiling" = yes; then : : else cat confdefs.h - <<_ACEOF >conftest.$ac_ext /* end confdefs.h. */ $ac_includes_default extern char *strerror_r (); int main () { char buf[100]; char x = *strerror_r (0, buf, sizeof buf); return ! isalpha (x); ; return 0; } _ACEOF if ac_fn_c_try_run "$LINENO"; then : ac_cv_func_strerror_r_char_p=yes fi rm -f core *.core core.conftest.* gmon.out bb.out conftest$ac_exeext \ conftest.$ac_objext conftest.beam conftest.$ac_ext fi fi fi { $as_echo "$as_me:${as_lineno-$LINENO}: result: $ac_cv_func_strerror_r_char_p" >&5 $as_echo "$ac_cv_func_strerror_r_char_p" >&6; } if test $ac_cv_func_strerror_r_char_p = yes; then $as_echo "#define STRERROR_R_CHAR_P 1" >>confdefs.h fi for ac_func in \ fdatasync \ hstrerror \ strerror \ mtrace \ strndup \ strlcpy \ strsignal \ inet_aton \ inet_ntop \ inet_pton \ setproctitle \ sysctlbyname \ cfmakeraw \ setresuid \ get_current_dir_name \ faccessat \ eaccess \ statvfs \ statfs \ do : as_ac_var=`$as_echo "ac_cv_func_$ac_func" | $as_tr_sh` ac_fn_c_check_func "$LINENO" "$ac_func" "$as_ac_var" if eval test \"x\$"$as_ac_var"\" = x"yes"; then : cat >>confdefs.h <<_ACEOF #define `$as_echo "HAVE_$ac_func" | $as_tr_cpp` 1 _ACEOF fi done ac_fn_c_check_decl "$LINENO" "hstrerror" "ac_cv_have_decl_hstrerror" "$ac_includes_default" if test "x$ac_cv_have_decl_hstrerror" = xyes; then : ac_have_decl=1 else ac_have_decl=0 fi cat >>confdefs.h <<_ACEOF #define HAVE_DECL_HSTRERROR $ac_have_decl _ACEOF ac_fn_c_check_decl "$LINENO" "strsignal" "ac_cv_have_decl_strsignal" "$ac_includes_default" if test "x$ac_cv_have_decl_strsignal" = xyes; then : ac_have_decl=1 else ac_have_decl=0 fi cat >>confdefs.h <<_ACEOF #define HAVE_DECL_STRSIGNAL $ac_have_decl _ACEOF ac_fn_c_check_decl "$LINENO" "sys_siglist" 
"ac_cv_have_decl_sys_siglist" "$ac_includes_default" if test "x$ac_cv_have_decl_sys_siglist" = xyes; then : ac_have_decl=1 else ac_have_decl=0 fi cat >>confdefs.h <<_ACEOF #define HAVE_DECL_SYS_SIGLIST $ac_have_decl _ACEOF for ac_func in unsetenv do : ac_fn_c_check_func "$LINENO" "unsetenv" "ac_cv_func_unsetenv" if test "x$ac_cv_func_unsetenv" = xyes; then : cat >>confdefs.h <<_ACEOF #define HAVE_UNSETENV 1 _ACEOF have_unsetenv=yes fi done if test "x$have_unsetenv" = "xyes"; then HAVE_UNSETENV_TRUE= HAVE_UNSETENV_FALSE='#' else HAVE_UNSETENV_TRUE='#' HAVE_UNSETENV_FALSE= fi ac_ext=c ac_cpp='$CPP $CPPFLAGS' ac_compile='$CC -c $CFLAGS $CPPFLAGS conftest.$ac_ext >&5' ac_link='$CC -o conftest$ac_exeext $CFLAGS $CPPFLAGS $LDFLAGS conftest.$ac_ext $LIBS >&5' ac_compiler_gnu=$ac_cv_c_compiler_gnu ax_pthread_ok=no # We used to check for pthread.h first, but this fails if pthread.h # requires special compiler flags (e.g. on True64 or Sequent). # It gets checked for in the link test anyway. # First of all, check if the user has set any of the PTHREAD_LIBS, # etcetera environment variables, and if threads linking works using # them: if test x"$PTHREAD_LIBS$PTHREAD_CFLAGS" != x; then save_CFLAGS="$CFLAGS" CFLAGS="$CFLAGS $PTHREAD_CFLAGS" save_LIBS="$LIBS" LIBS="$PTHREAD_LIBS $LIBS" { $as_echo "$as_me:${as_lineno-$LINENO}: checking for pthread_join in LIBS=$PTHREAD_LIBS with CFLAGS=$PTHREAD_CFLAGS" >&5 $as_echo_n "checking for pthread_join in LIBS=$PTHREAD_LIBS with CFLAGS=$PTHREAD_CFLAGS... " >&6; } cat confdefs.h - <<_ACEOF >conftest.$ac_ext /* end confdefs.h. */ /* Override any GCC internal prototype to avoid an error. Use char because int might match the return type of a GCC builtin and then its argument prototype would still apply. 
*/ #ifdef __cplusplus extern "C" #endif char pthread_join (); int main () { return pthread_join (); ; return 0; } _ACEOF if ac_fn_c_try_link "$LINENO"; then : ax_pthread_ok=yes fi rm -f core conftest.err conftest.$ac_objext \ conftest$ac_exeext conftest.$ac_ext { $as_echo "$as_me:${as_lineno-$LINENO}: result: $ax_pthread_ok" >&5 $as_echo "$ax_pthread_ok" >&6; } if test x"$ax_pthread_ok" = xno; then PTHREAD_LIBS="" PTHREAD_CFLAGS="" fi LIBS="$save_LIBS" CFLAGS="$save_CFLAGS" fi # We must check for the threads library under a number of different # names; the ordering is very important because some systems # (e.g. DEC) have both -lpthread and -lpthreads, where one of the # libraries is broken (non-POSIX). # Create a list of thread flags to try. Items starting with a "-" are # C compiler flags, and other items are library names, except for "none" # which indicates that we try without any flags at all, and "pthread-config" # which is a program returning the flags for the Pth emulation library. ax_pthread_flags="pthreads none -Kthread -kthread lthread -pthread -pthreads -mthreads pthread --thread-safe -mt pthread-config" # The ordering *is* (sometimes) important. Some notes on the # individual items follow: # pthreads: AIX (must check this before -lpthread) # none: in case threads are in libc; should be tried before -Kthread and # other compiler flags to prevent continual compiler warnings # -Kthread: Sequent (threads in libc, but -Kthread needed for pthread.h) # -kthread: FreeBSD kernel threads (preferred to -pthread since SMP-able) # lthread: LinuxThreads port on FreeBSD (also preferred to -pthread) # -pthread: Linux/gcc (kernel threads), BSD/gcc (userland threads) # -pthreads: Solaris/gcc # -mthreads: Mingw32/gcc, Lynx/gcc # -mt: Sun Workshop C (may only link SunOS threads [-lthread], but it # doesn't hurt to check since this sometimes defines pthreads too; # also defines -D_REENTRANT) # ... 
-mt is also the pthreads flag for HP/aCC # pthread: Linux, etcetera # --thread-safe: KAI C++ # pthread-config: use pthread-config program (for GNU Pth library) case ${host_os} in solaris*) # On Solaris (at least, for some versions), libc contains stubbed # (non-functional) versions of the pthreads routines, so link-based # tests will erroneously succeed. (We need to link with -pthreads/-mt/ # -lpthread.) (The stubs are missing pthread_cleanup_push, or rather # a function called by this macro, so we could check for that, but # who knows whether they'll stub that too in a future libc.) So, # we'll just look for -pthreads and -lpthread first: ax_pthread_flags="-pthreads pthread -mt -pthread $ax_pthread_flags" ;; darwin*) ax_pthread_flags="-pthread $ax_pthread_flags" ;; esac if test x"$ax_pthread_ok" = xno; then for flag in $ax_pthread_flags; do case $flag in none) { $as_echo "$as_me:${as_lineno-$LINENO}: checking whether pthreads work without any flags" >&5 $as_echo_n "checking whether pthreads work without any flags... " >&6; } ;; -*) { $as_echo "$as_me:${as_lineno-$LINENO}: checking whether pthreads work with $flag" >&5 $as_echo_n "checking whether pthreads work with $flag... " >&6; } PTHREAD_CFLAGS="$flag" ;; pthread-config) # Extract the first word of "pthread-config", so it can be a program name with args. set dummy pthread-config; ac_word=$2 { $as_echo "$as_me:${as_lineno-$LINENO}: checking for $ac_word" >&5 $as_echo_n "checking for $ac_word... " >&6; } if ${ac_cv_prog_ax_pthread_config+:} false; then : $as_echo_n "(cached) " >&6 else if test -n "$ax_pthread_config"; then ac_cv_prog_ax_pthread_config="$ax_pthread_config" # Let the user override the test. else as_save_IFS=$IFS; IFS=$PATH_SEPARATOR for as_dir in $PATH do IFS=$as_save_IFS test -z "$as_dir" && as_dir=. 
for ac_exec_ext in '' $ac_executable_extensions; do if as_fn_executable_p "$as_dir/$ac_word$ac_exec_ext"; then ac_cv_prog_ax_pthread_config="yes" $as_echo "$as_me:${as_lineno-$LINENO}: found $as_dir/$ac_word$ac_exec_ext" >&5 break 2 fi done done IFS=$as_save_IFS test -z "$ac_cv_prog_ax_pthread_config" && ac_cv_prog_ax_pthread_config="no" fi fi ax_pthread_config=$ac_cv_prog_ax_pthread_config if test -n "$ax_pthread_config"; then { $as_echo "$as_me:${as_lineno-$LINENO}: result: $ax_pthread_config" >&5 $as_echo "$ax_pthread_config" >&6; } else { $as_echo "$as_me:${as_lineno-$LINENO}: result: no" >&5 $as_echo "no" >&6; } fi if test x"$ax_pthread_config" = xno; then continue; fi PTHREAD_CFLAGS="`pthread-config --cflags`" PTHREAD_LIBS="`pthread-config --ldflags` `pthread-config --libs`" ;; *) { $as_echo "$as_me:${as_lineno-$LINENO}: checking for the pthreads library -l$flag" >&5 $as_echo_n "checking for the pthreads library -l$flag... " >&6; } PTHREAD_LIBS="-l$flag" ;; esac save_LIBS="$LIBS" save_CFLAGS="$CFLAGS" LIBS="$PTHREAD_LIBS $LIBS" CFLAGS="$CFLAGS $PTHREAD_CFLAGS" # Check for various functions. We must include pthread.h, # since some functions may be macros. (On the Sequent, we # need a special flag -Kthread to make this header compile.) # We check for pthread_join because it is in -lpthread on IRIX # while pthread_create is in libc. We check for pthread_attr_init # due to DEC craziness with -lpthreads. We check for # pthread_cleanup_push because it is one of the few pthread # functions on Solaris that doesn't have a non-functional libc stub. # We try pthread_create on general principles. cat confdefs.h - <<_ACEOF >conftest.$ac_ext /* end confdefs.h. 
 */
#include <pthread.h>
	static void routine(void *a) { a = 0; }
	static void *start_routine(void *a) { return a; }
int
main ()
{
pthread_t th; pthread_attr_t attr;
	pthread_create(&th, 0, start_routine, 0);
	pthread_join(th, 0);
	pthread_attr_init(&attr);
	pthread_cleanup_push(routine, 0);
	pthread_cleanup_pop(0) /* ; */
  ;
  return 0;
}
_ACEOF
if ac_fn_c_try_link "$LINENO"; then :
  ax_pthread_ok=yes
fi
rm -f core conftest.err conftest.$ac_objext \
    conftest$ac_exeext conftest.$ac_ext

	LIBS="$save_LIBS"
	CFLAGS="$save_CFLAGS"

	{ $as_echo "$as_me:${as_lineno-$LINENO}: result: $ax_pthread_ok" >&5
$as_echo "$ax_pthread_ok" >&6; }
	if test "x$ax_pthread_ok" = xyes; then
	    break;
	fi

	PTHREAD_LIBS=""
	PTHREAD_CFLAGS=""
done
fi

# Various other checks:
if test "x$ax_pthread_ok" = xyes; then
	save_LIBS="$LIBS"
	LIBS="$PTHREAD_LIBS $LIBS"
	save_CFLAGS="$CFLAGS"
	CFLAGS="$CFLAGS $PTHREAD_CFLAGS"

	# Detect AIX lossage: JOINABLE attribute is called UNDETACHED.
	{ $as_echo "$as_me:${as_lineno-$LINENO}: checking for joinable pthread attribute" >&5
$as_echo_n "checking for joinable pthread attribute... " >&6; }
	attr_name=unknown
	for attr in PTHREAD_CREATE_JOINABLE PTHREAD_CREATE_UNDETACHED; do
	    cat confdefs.h - <<_ACEOF >conftest.$ac_ext
/* end confdefs.h.  */
#include <pthread.h>
int
main ()
{
int attr = $attr; return attr /* ; */
  ;
  return 0;
}
_ACEOF
if ac_fn_c_try_link "$LINENO"; then :
  attr_name=$attr; break
fi
rm -f core conftest.err conftest.$ac_objext \
    conftest$ac_exeext conftest.$ac_ext
	done
	{ $as_echo "$as_me:${as_lineno-$LINENO}: result: $attr_name" >&5
$as_echo "$attr_name" >&6; }
	if test "$attr_name" != PTHREAD_CREATE_JOINABLE; then

cat >>confdefs.h <<_ACEOF
#define PTHREAD_CREATE_JOINABLE $attr_name
_ACEOF

	fi

	{ $as_echo "$as_me:${as_lineno-$LINENO}: checking if more special flags are required for pthreads" >&5
$as_echo_n "checking if more special flags are required for pthreads...
" >&6; }
	flag=no
	case ${host_os} in
	    aix* | freebsd* | darwin*) flag="-D_THREAD_SAFE";;
	    osf* | hpux*) flag="-D_REENTRANT";;
	    solaris*)
	    if test "$GCC" = "yes"; then
		flag="-D_REENTRANT"
	    else
		flag="-mt -D_REENTRANT"
	    fi;;
	esac
	{ $as_echo "$as_me:${as_lineno-$LINENO}: result: ${flag}" >&5
$as_echo "${flag}" >&6; }
	if test "x$flag" != xno; then
	    PTHREAD_CFLAGS="$flag $PTHREAD_CFLAGS"
	fi

	{ $as_echo "$as_me:${as_lineno-$LINENO}: checking for PTHREAD_PRIO_INHERIT" >&5
$as_echo_n "checking for PTHREAD_PRIO_INHERIT... " >&6; }
if ${ax_cv_PTHREAD_PRIO_INHERIT+:} false; then :
  $as_echo_n "(cached) " >&6
else
  cat confdefs.h - <<_ACEOF >conftest.$ac_ext
/* end confdefs.h.  */
#include <pthread.h>
int
main ()
{
int i = PTHREAD_PRIO_INHERIT;
  ;
  return 0;
}
_ACEOF
if ac_fn_c_try_link "$LINENO"; then :
  ax_cv_PTHREAD_PRIO_INHERIT=yes
else
  ax_cv_PTHREAD_PRIO_INHERIT=no
fi
rm -f core conftest.err conftest.$ac_objext \
    conftest$ac_exeext conftest.$ac_ext
fi
{ $as_echo "$as_me:${as_lineno-$LINENO}: result: $ax_cv_PTHREAD_PRIO_INHERIT" >&5
$as_echo "$ax_cv_PTHREAD_PRIO_INHERIT" >&6; }
	if test "x$ax_cv_PTHREAD_PRIO_INHERIT" = "xyes"; then :

$as_echo "#define HAVE_PTHREAD_PRIO_INHERIT 1" >>confdefs.h

fi

	LIBS="$save_LIBS"
	CFLAGS="$save_CFLAGS"

	# More AIX lossage: compile with *_r variant
	if test "x$GCC" != xyes; then
	    case $host_os in
		aix*)
		case "x/$CC" in #(
  x*/c89|x*/c89_128|x*/c99|x*/c99_128|x*/cc|x*/cc128|x*/xlc|x*/xlc_v6|x*/xlc128|x*/xlc128_v6) :
    #handle absolute path differently from PATH based program lookup
		   case "x$CC" in #(
  x/*) :
    if as_fn_executable_p ${CC}_r; then :
  PTHREAD_CC="${CC}_r"
fi ;; #(
  *) :
    for ac_prog in ${CC}_r
do
  # Extract the first word of "$ac_prog", so it can be a program name with args.
set dummy $ac_prog; ac_word=$2
{ $as_echo "$as_me:${as_lineno-$LINENO}: checking for $ac_word" >&5
$as_echo_n "checking for $ac_word...
" >&6; } if ${ac_cv_prog_PTHREAD_CC+:} false; then : $as_echo_n "(cached) " >&6 else if test -n "$PTHREAD_CC"; then ac_cv_prog_PTHREAD_CC="$PTHREAD_CC" # Let the user override the test. else as_save_IFS=$IFS; IFS=$PATH_SEPARATOR for as_dir in $PATH do IFS=$as_save_IFS test -z "$as_dir" && as_dir=. for ac_exec_ext in '' $ac_executable_extensions; do if as_fn_executable_p "$as_dir/$ac_word$ac_exec_ext"; then ac_cv_prog_PTHREAD_CC="$ac_prog" $as_echo "$as_me:${as_lineno-$LINENO}: found $as_dir/$ac_word$ac_exec_ext" >&5 break 2 fi done done IFS=$as_save_IFS fi fi PTHREAD_CC=$ac_cv_prog_PTHREAD_CC if test -n "$PTHREAD_CC"; then { $as_echo "$as_me:${as_lineno-$LINENO}: result: $PTHREAD_CC" >&5 $as_echo "$PTHREAD_CC" >&6; } else { $as_echo "$as_me:${as_lineno-$LINENO}: result: no" >&5 $as_echo "no" >&6; } fi test -n "$PTHREAD_CC" && break done test -n "$PTHREAD_CC" || PTHREAD_CC="$CC" ;; esac ;; #( *) : ;; esac ;; esac fi fi test -n "$PTHREAD_CC" || PTHREAD_CC="$CC" # Finally, execute ACTION-IF-FOUND/ACTION-IF-NOT-FOUND: if test x"$ax_pthread_ok" = xyes; then $as_echo "#define HAVE_PTHREAD 1" >>confdefs.h : else ax_pthread_ok=no as_fn_error $? "Error: Cannot figure out how to use pthreads!" "$LINENO" 5 fi ac_ext=c ac_cpp='$CPP $CPPFLAGS' ac_compile='$CC -c $CFLAGS $CPPFLAGS conftest.$ac_ext >&5' ac_link='$CC -o conftest$ac_exeext $CFLAGS $CPPFLAGS $LDFLAGS conftest.$ac_ext $LIBS >&5' ac_compiler_gnu=$ac_cv_c_compiler_gnu # Always define WITH_PTHREADS if we make it this far $as_echo "#define WITH_PTHREADS 1" >>confdefs.h LDFLAGS="$LDFLAGS " CFLAGS="$CFLAGS $PTHREAD_CFLAGS" LIBS="$PTHREAD_LIBS $LIBS" { $as_echo "$as_me:${as_lineno-$LINENO}: checking for Sun Constellation system" >&5 $as_echo_n "checking for Sun Constellation system... " >&6; } # Check whether --enable-sun-const was given. 
if test "${enable_sun_const+set}" = set; then : enableval=$enable_sun_const; case "$enableval" in yes) x_ac_sun_const=yes ;; no) x_ac_sun_const=no ;; *) { $as_echo "$as_me:${as_lineno-$LINENO}: result: doh!" >&5 $as_echo "doh!" >&6; } as_fn_error $? "bad value \"$enableval\" for --enable-sun-const" "$LINENO" 5 ;; esac else x_ac_sun_const=no fi if test "$x_ac_sun_const" = yes; then { $as_echo "$as_me:${as_lineno-$LINENO}: result: yes" >&5 $as_echo "yes" >&6; } $as_echo "#define SYSTEM_DIMENSIONS 4" >>confdefs.h $as_echo "#define HAVE_SUN_CONST 1" >>confdefs.h else { $as_echo "$as_me:${as_lineno-$LINENO}: result: no" >&5 $as_echo "no" >&6; } fi { $as_echo "$as_me:${as_lineno-$LINENO}: checking System dimensions" >&5 $as_echo_n "checking System dimensions... " >&6; } # Check whether --with-dimensions was given. if test "${with_dimensions+set}" = set; then : withval=$with_dimensions; if test `expr match "$withval" '[0-9]*$'` -gt 0; then dimensions="$withval" x_ac_dimensions=yes fi else x_ac_dimensions=no fi if test "$x_ac_dimensions" = yes; then if test $dimensions -lt 1; then as_fn_error $? "Invalid dimensions value $dimensions" "$LINENO" 5 fi { $as_echo "$as_me:${as_lineno-$LINENO}: result: $dimensions" >&5 $as_echo "$dimensions" >&6; }; cat >>confdefs.h <<_ACEOF #define SYSTEM_DIMENSIONS $dimensions _ACEOF else { $as_echo "$as_me:${as_lineno-$LINENO}: result: not set" >&5 $as_echo "not set" >&6; }; fi # This is here to avoid a bug in the gcc compiler 3.4.6 # Without this flag there is a bug when pointing to other functions # and then using them. It is also advised to set the flag if there # are goto statements you may get better performance. if test "$GCC" = yes; then CFLAGS="$CFLAGS -fno-gcse" fi _x_ac_ofed_dirs="/usr /usr/local" _x_ac_ofed_libs="lib64 lib" # Check whether --with-ofed was given. 
if test "${with_ofed+set}" = set; then : withval=$with_ofed; _x_ac_ofed_dirs="$withval $_x_ac_ofed_dirs" fi { $as_echo "$as_me:${as_lineno-$LINENO}: checking for ofed installation" >&5 $as_echo_n "checking for ofed installation... " >&6; } if ${x_ac_cv_ofed_dir+:} false; then : $as_echo_n "(cached) " >&6 else for d in $_x_ac_ofed_dirs; do test -d "$d" || continue test -d "$d/include/infiniband" || continue test -f "$d/include/infiniband/mad.h" || continue for bit in $_x_ac_ofed_libs; do test -d "$d/$bit" || continue _x_ac_ofed_cppflags_save="$CPPFLAGS" CPPFLAGS="-I$d/include $CPPFLAGS" _x_ac_ofed_libs_save="$LIBS" LIBS="-L$d/$bit -libmad -libumad $LIBS" cat confdefs.h - <<_ACEOF >conftest.$ac_ext /* end confdefs.h. */ /* Override any GCC internal prototype to avoid an error. Use char because int might match the return type of a GCC builtin and then its argument prototype would still apply. */ #ifdef __cplusplus extern "C" #endif char mad_rpc_open_port (); int main () { return mad_rpc_open_port (); ; return 0; } _ACEOF if ac_fn_c_try_link "$LINENO"; then : x_ac_cv_ofed_dir=$d fi rm -f core conftest.err conftest.$ac_objext \ conftest$ac_exeext conftest.$ac_ext cat confdefs.h - <<_ACEOF >conftest.$ac_ext /* end confdefs.h. */ /* Override any GCC internal prototype to avoid an error. Use char because int might match the return type of a GCC builtin and then its argument prototype would still apply. 
*/ #ifdef __cplusplus extern "C" #endif char pma_query_via (); int main () { return pma_query_via (); ; return 0; } _ACEOF if ac_fn_c_try_link "$LINENO"; then : have_pma_query_via=yes else { $as_echo "$as_me:${as_lineno-$LINENO}: result: Using old libmad" >&5 $as_echo "Using old libmad" >&6; } fi rm -f core conftest.err conftest.$ac_objext \ conftest$ac_exeext conftest.$ac_ext CPPFLAGS="$_x_ac_ofed_cppflags_save" LIBS="$_x_ac_ofed_libs_save" test -n "$x_ac_cv_ofed_dir" && break done test -n "$x_ac_cv_ofed_dir" && break done fi { $as_echo "$as_me:${as_lineno-$LINENO}: result: $x_ac_cv_ofed_dir" >&5 $as_echo "$x_ac_cv_ofed_dir" >&6; } if test -z "$x_ac_cv_ofed_dir"; then { $as_echo "$as_me:${as_lineno-$LINENO}: WARNING: unable to locate ofed installation" >&5 $as_echo "$as_me: WARNING: unable to locate ofed installation" >&2;} else OFED_CPPFLAGS="-I$x_ac_cv_ofed_dir/include/infiniband" if test "$ac_with_rpath" = "yes"; then OFED_LDFLAGS="-Wl,-rpath -Wl,$x_ac_cv_ofed_dir/$bit -L$x_ac_cv_ofed_dir/$bit" else OFED_LDFLAGS="-L$x_ac_cv_ofed_dir/$bit" fi OFED_LIBS="-libmad -libumad" $as_echo "#define HAVE_OFED 1" >>confdefs.h if test ! -z "$have_pma_query_via" ; then $as_echo "#define HAVE_OFED_PMA_QUERY_VIA 1" >>confdefs.h fi fi if test -n "$x_ac_cv_ofed_dir"; then BUILD_OFED_TRUE= BUILD_OFED_FALSE='#' else BUILD_OFED_TRUE='#' BUILD_OFED_FALSE= fi if test "" = "" ; then : # Recognized value elif test "" = "serial" ; then : # Recognized value elif test "" = "parallel"; then : # Recognized value else as_fn_error $? " Unrecognized value for AX_LIB_HDF5 within configure.ac. If supplied, argument 1 must be either 'serial' or 'parallel'. " "$LINENO" 5 fi # Check whether --with-hdf5 was given. 
if test "${with_hdf5+set}" = set; then : withval=$with_hdf5; if test "$withval" = "no"; then with_hdf5="no" elif test "$withval" = "yes"; then with_hdf5="yes" else with_hdf5="yes" H5CC="$withval" fi else with_hdf5="yes" fi HDF5_CC="" HDF5_VERSION="" HDF5_CFLAGS="" HDF5_CPPFLAGS="" HDF5_LDFLAGS="" HDF5_LIBS="" HDF5_FC="" HDF5_FFLAGS="" HDF5_FLIBS="" if test "$with_hdf5" = "yes"; then if test -z "$H5CC"; then for ac_prog in h5cc h5pcc do # Extract the first word of "$ac_prog", so it can be a program name with args. set dummy $ac_prog; ac_word=$2 { $as_echo "$as_me:${as_lineno-$LINENO}: checking for $ac_word" >&5 $as_echo_n "checking for $ac_word... " >&6; } if ${ac_cv_path_H5CC+:} false; then : $as_echo_n "(cached) " >&6 else case $H5CC in [\\/]* | ?:[\\/]*) ac_cv_path_H5CC="$H5CC" # Let the user override the test with a path. ;; *) as_save_IFS=$IFS; IFS=$PATH_SEPARATOR for as_dir in $PATH do IFS=$as_save_IFS test -z "$as_dir" && as_dir=. for ac_exec_ext in '' $ac_executable_extensions; do if as_fn_executable_p "$as_dir/$ac_word$ac_exec_ext"; then ac_cv_path_H5CC="$as_dir/$ac_word$ac_exec_ext" $as_echo "$as_me:${as_lineno-$LINENO}: found $as_dir/$ac_word$ac_exec_ext" >&5 break 2 fi done done IFS=$as_save_IFS ;; esac fi H5CC=$ac_cv_path_H5CC if test -n "$H5CC"; then { $as_echo "$as_me:${as_lineno-$LINENO}: result: $H5CC" >&5 $as_echo "$H5CC" >&6; } else { $as_echo "$as_me:${as_lineno-$LINENO}: result: no" >&5 $as_echo "no" >&6; } fi test -n "$H5CC" && break done else { $as_echo "$as_me:${as_lineno-$LINENO}: checking Using provided HDF5 C wrapper" >&5 $as_echo_n "checking Using provided HDF5 C wrapper... " >&6; } { $as_echo "$as_me:${as_lineno-$LINENO}: result: $H5CC" >&5 $as_echo "$H5CC" >&6; } fi { $as_echo "$as_me:${as_lineno-$LINENO}: checking for HDF5 libraries" >&5 $as_echo_n "checking for HDF5 libraries... " >&6; } if test ! -f "$H5CC" || test ! 
-x "$H5CC"; then { $as_echo "$as_me:${as_lineno-$LINENO}: result: no" >&5 $as_echo "no" >&6; } { $as_echo "$as_me:${as_lineno-$LINENO}: WARNING: Unable to locate HDF5 compilation helper scripts 'h5cc' or 'h5pcc'. Please specify --with-hdf5= as the full path to h5cc or h5pcc. HDF5 support is being disabled (equivalent to --with-hdf5=no). " >&5 $as_echo "$as_me: WARNING: Unable to locate HDF5 compilation helper scripts 'h5cc' or 'h5pcc'. Please specify --with-hdf5= as the full path to h5cc or h5pcc. HDF5 support is being disabled (equivalent to --with-hdf5=no). " >&2;} with_hdf5="no" with_hdf5_fortran="no" else HDF5_SHOW=$(eval $H5CC -show) HDF5_CC=$(eval $H5CC -show | $AWK '{print $1}') HDF5_VERSION=$(eval $H5CC -showconfig | $GREP 'HDF5 Version:' \ | $AWK '{print $3}') HDF5_tmp_flags=$(eval $H5CC -showconfig \ | $GREP 'FLAGS\|Extra libraries:' \ | $AWK -F: '{printf("%s "), $2}' ) HDF5_tmp_inst=$(eval $H5CC -showconfig \ | $GREP 'Installation point:' \ | $AWK -F: '{print $2}' ) HDF5_CPPFLAGS="-I${HDF5_tmp_inst}/include" for arg in $HDF5_SHOW $HDF5_tmp_flags ; do case "$arg" in -I*) echo $HDF5_CPPFLAGS | $GREP -e "$arg" 2>&1 >/dev/null \ || HDF5_CPPFLAGS="$arg $HDF5_CPPFLAGS" ;; -L*) echo $HDF5_LDFLAGS | $GREP -e "$arg" 2>&1 >/dev/null \ || HDF5_LDFLAGS="$arg $HDF5_LDFLAGS" ;; -l*) echo $HDF5_LIBS | $GREP -e "$arg" 2>&1 >/dev/null \ || HDF5_LIBS="$arg $HDF5_LIBS" ;; esac done HDF5_LIBS="$HDF5_LIBS -lhdf5" { $as_echo "$as_me:${as_lineno-$LINENO}: result: yes (version $HDF5_VERSION)" >&5 $as_echo "yes (version $HDF5_VERSION)" >&6; } ax_lib_hdf5_save_CC=$CC ax_lib_hdf5_save_CPPFLAGS=$CPPFLAGS ax_lib_hdf5_save_LIBS=$LIBS ax_lib_hdf5_save_LDFLAGS=$LDFLAGS CC=$HDF5_CC CPPFLAGS=$HDF5_CPPFLAGS LIBS=$HDF5_LIBS LDFLAGS=$HDF5_LDFLAGS ac_fn_c_check_header_mongrel "$LINENO" "hdf5.h" "ac_cv_header_hdf5_h" "$ac_includes_default" if test "x$ac_cv_header_hdf5_h" = xyes; then : ac_cv_hadf5_h=yes else ac_cv_hadf5_h=no fi { $as_echo "$as_me:${as_lineno-$LINENO}: checking for H5Fcreate 
in -lhdf5" >&5 $as_echo_n "checking for H5Fcreate in -lhdf5... " >&6; } if ${ac_cv_lib_hdf5_H5Fcreate+:} false; then : $as_echo_n "(cached) " >&6 else ac_check_lib_save_LIBS=$LIBS LIBS="-lhdf5 $LIBS" cat confdefs.h - <<_ACEOF >conftest.$ac_ext /* end confdefs.h. */ /* Override any GCC internal prototype to avoid an error. Use char because int might match the return type of a GCC builtin and then its argument prototype would still apply. */ #ifdef __cplusplus extern "C" #endif char H5Fcreate (); int main () { return H5Fcreate (); ; return 0; } _ACEOF if ac_fn_c_try_link "$LINENO"; then : ac_cv_lib_hdf5_H5Fcreate=yes else ac_cv_lib_hdf5_H5Fcreate=no fi rm -f core conftest.err conftest.$ac_objext \ conftest$ac_exeext conftest.$ac_ext LIBS=$ac_check_lib_save_LIBS fi { $as_echo "$as_me:${as_lineno-$LINENO}: result: $ac_cv_lib_hdf5_H5Fcreate" >&5 $as_echo "$ac_cv_lib_hdf5_H5Fcreate" >&6; } if test "x$ac_cv_lib_hdf5_H5Fcreate" = xyes; then : ac_cv_libhdf5=yes else ac_cv_libhdf5=no fi if test "$ac_cv_hadf5_h" = "no" && test "$ac_cv_libhdf5" = "no" ; then { $as_echo "$as_me:${as_lineno-$LINENO}: WARNING: Unable to compile HDF5 test program" >&5 $as_echo "$as_me: WARNING: Unable to compile HDF5 test program" >&2;} fi { $as_echo "$as_me:${as_lineno-$LINENO}: checking for main in -lhdf5_hl" >&5 $as_echo_n "checking for main in -lhdf5_hl... " >&6; } if ${ac_cv_lib_hdf5_hl_main+:} false; then : $as_echo_n "(cached) " >&6 else ac_check_lib_save_LIBS=$LIBS LIBS="-lhdf5_hl $LIBS" cat confdefs.h - <<_ACEOF >conftest.$ac_ext /* end confdefs.h. 
*/ int main () { return main (); ; return 0; } _ACEOF if ac_fn_c_try_link "$LINENO"; then : ac_cv_lib_hdf5_hl_main=yes else ac_cv_lib_hdf5_hl_main=no fi rm -f core conftest.err conftest.$ac_objext \ conftest$ac_exeext conftest.$ac_ext LIBS=$ac_check_lib_save_LIBS fi { $as_echo "$as_me:${as_lineno-$LINENO}: result: $ac_cv_lib_hdf5_hl_main" >&5 $as_echo "$ac_cv_lib_hdf5_hl_main" >&6; } if test "x$ac_cv_lib_hdf5_hl_main" = xyes; then : HDF5_LIBS="$HDF5_LIBS -lhdf5_hl" fi ac_cv_lib_hdf5_hl=ac_cv_lib_hdf5_hl_main CC=$ax_lib_hdf5_save_CC LIBS=$ax_lib_hdf5_save_LIBS LDFLAGS=$ax_lib_hdf5_save_LDFLAGS { $as_echo "$as_me:${as_lineno-$LINENO}: checking for matching HDF5 Fortran wrapper" >&5 $as_echo_n "checking for matching HDF5 Fortran wrapper... " >&6; } H5FC=$(eval echo -n $H5CC | $SED -n 's/cc$/fc/p') if test -x "$H5FC"; then { $as_echo "$as_me:${as_lineno-$LINENO}: result: $H5FC" >&5 $as_echo "$H5FC" >&6; } with_hdf5_fortran="yes" for arg in `$H5FC -show` do case "$arg" in #( -I*) echo $HDF5_FFLAGS | $GREP -e "$arg" >/dev/null \ || HDF5_FFLAGS="$arg $HDF5_FFLAGS" ;;#( -L*) echo $HDF5_FFLAGS | $GREP -e "$arg" >/dev/null \ || HDF5_FFLAGS="$arg $HDF5_FFLAGS" echo $HDF5_FFLAGS | $GREP -e "-I${arg#-L}" >/dev/null \ || HDF5_FFLAGS="-I${arg#-L} $HDF5_FFLAGS" ;; esac done for arg in $HDF5_LIBS do case "$arg" in #( -lhdf5_hl) HDF5_FLIBS="$HDF5_FLIBS -lhdf5hl_fortran $arg" ;; #( -lhdf5) HDF5_FLIBS="$HDF5_FLIBS -lhdf5_fortran $arg" ;; #( *) HDF5_FLIBS="$HDF5_FLIBS $arg" ;; esac done else { $as_echo "$as_me:${as_lineno-$LINENO}: result: no" >&5 $as_echo "no" >&6; } with_hdf5_fortran="no" fi $as_echo "#define HAVE_HDF5 1" >>confdefs.h fi fi if test "$with_hdf5" = "yes"; then BUILD_HDF5_TRUE= BUILD_HDF5_FALSE='#' else BUILD_HDF5_TRUE='#' BUILD_HDF5_FALSE= fi # Some older systems (Debian/Ubuntu/...) configure HDF5 with # --with-default-api-version=v16 which creates problems for slurm # because slurm uses the 1.8 API. By defining this CPP macro we get # the 1.8 API. 
$as_echo "#define H5_NO_DEPRECATED_SYMBOLS 1" >>confdefs.h _x_ac_hwloc_dirs="/usr /usr/local" _x_ac_hwloc_libs="lib64 lib" x_ac_cv_hwloc_pci="no" # Check whether --with-hwloc was given. if test "${with_hwloc+set}" = set; then : withval=$with_hwloc; _x_ac_hwloc_dirs="$withval $_x_ac_hwloc_dirs" fi { $as_echo "$as_me:${as_lineno-$LINENO}: checking for hwloc installation" >&5 $as_echo_n "checking for hwloc installation... " >&6; } if ${x_ac_cv_hwloc_dir+:} false; then : $as_echo_n "(cached) " >&6 else for d in $_x_ac_hwloc_dirs; do test -d "$d" || continue test -d "$d/include" || continue test -f "$d/include/hwloc.h" || continue for bit in $_x_ac_hwloc_libs; do test -d "$d/$bit" || continue _x_ac_hwloc_cppflags_save="$CPPFLAGS" CPPFLAGS="-I$d/include $CPPFLAGS" _x_ac_hwloc_libs_save="$LIBS" LIBS="-L$d/$bit -lhwloc $LIBS" cat confdefs.h - <<_ACEOF >conftest.$ac_ext /* end confdefs.h. */ /* Override any GCC internal prototype to avoid an error. Use char because int might match the return type of a GCC builtin and then its argument prototype would still apply. */ #ifdef __cplusplus extern "C" #endif char hwloc_topology_init (); int main () { return hwloc_topology_init (); ; return 0; } _ACEOF if ac_fn_c_try_link "$LINENO"; then : x_ac_cv_hwloc_dir=$d fi rm -f core conftest.err conftest.$ac_objext \ conftest$ac_exeext conftest.$ac_ext cat confdefs.h - <<_ACEOF >conftest.$ac_ext /* end confdefs.h. 
*/ #include <hwloc.h> int main () { int i = HWLOC_OBJ_PCI_DEVICE; ; return 0; } _ACEOF if ac_fn_c_try_link "$LINENO"; then : x_ac_cv_hwloc_pci="yes" fi rm -f core conftest.err conftest.$ac_objext \ conftest$ac_exeext conftest.$ac_ext CPPFLAGS="$_x_ac_hwloc_cppflags_save" LIBS="$_x_ac_hwloc_libs_save" test -n "$x_ac_cv_hwloc_dir" && break done test -n "$x_ac_cv_hwloc_dir" && break done fi { $as_echo "$as_me:${as_lineno-$LINENO}: result: $x_ac_cv_hwloc_dir" >&5 $as_echo "$x_ac_cv_hwloc_dir" >&6; } if test -z "$x_ac_cv_hwloc_dir"; then { $as_echo "$as_me:${as_lineno-$LINENO}: WARNING: unable to locate hwloc installation" >&5 $as_echo "$as_me: WARNING: unable to locate hwloc installation" >&2;} else HWLOC_CPPFLAGS="-I$x_ac_cv_hwloc_dir/include" if test "$ac_with_rpath" = "yes"; then HWLOC_LDFLAGS="-Wl,-rpath -Wl,$x_ac_cv_hwloc_dir/$bit -L$x_ac_cv_hwloc_dir/$bit" else HWLOC_LDFLAGS="-L$x_ac_cv_hwloc_dir/$bit" fi HWLOC_LIBS="-lhwloc" $as_echo "#define HAVE_HWLOC 1" >>confdefs.h if test "$x_ac_cv_hwloc_pci" = "yes"; then $as_echo "#define HAVE_HWLOC_PCI 1" >>confdefs.h fi fi _x_ac_freeipmi_dirs="/usr /usr/local" _x_ac_freeipmi_libs="lib64 lib" # Check whether --with-freeipmi was given. if test "${with_freeipmi+set}" = set; then : withval=$with_freeipmi; _x_ac_freeipmi_dirs="$withval $_x_ac_freeipmi_dirs" fi { $as_echo "$as_me:${as_lineno-$LINENO}: checking for freeipmi installation" >&5 $as_echo_n "checking for freeipmi installation... " >&6; } if ${x_ac_cv_freeipmi_dir+:} false; then : $as_echo_n "(cached) " >&6 else for d in $_x_ac_freeipmi_dirs; do test -d "$d" || continue test -d "$d/include" || continue test -f "$d/include/ipmi_monitoring.h" || continue for bit in $_x_ac_freeipmi_libs; do test -d "$d/$bit" || continue _x_ac_freeipmi_cppflags_save="$CPPFLAGS" CPPFLAGS="-I$d/include $CPPFLAGS" _x_ac_freeipmi_libs_save="$LIBS" LIBS="-L$d/$bit -lipmimonitoring $LIBS" cat confdefs.h - <<_ACEOF >conftest.$ac_ext /* end confdefs.h. 
*/ #include <stdio.h> #include <ipmi_monitoring.h> int main () { int err; unsigned int flag = 0; return ipmi_monitoring_init (flag, &err); ; return 0; } _ACEOF if ac_fn_c_try_link "$LINENO"; then : x_ac_cv_freeipmi_dir=$d fi rm -f core conftest.err conftest.$ac_objext \ conftest$ac_exeext conftest.$ac_ext CPPFLAGS="$_x_ac_freeipmi_cppflags_save" LIBS="$_x_ac_freeipmi_libs_save" test -n "$x_ac_cv_freeipmi_dir" && break done test -n "$x_ac_cv_freeipmi_dir" && break done fi { $as_echo "$as_me:${as_lineno-$LINENO}: result: $x_ac_cv_freeipmi_dir" >&5 $as_echo "$x_ac_cv_freeipmi_dir" >&6; } if test -z "$x_ac_cv_freeipmi_dir"; then { $as_echo "$as_me:${as_lineno-$LINENO}: WARNING: unable to locate freeipmi installation" >&5 $as_echo "$as_me: WARNING: unable to locate freeipmi installation" >&2;} else FREEIPMI_CPPFLAGS="-I$x_ac_cv_freeipmi_dir/include" if test "$ac_with_rpath" = "yes"; then FREEIPMI_LDFLAGS="-Wl,-rpath -Wl,$x_ac_cv_freeipmi_dir/$bit -L$x_ac_cv_freeipmi_dir/$bit" else FREEIPMI_LDFLAGS="-L$x_ac_cv_freeipmi_dir/$bit" fi FREEIPMI_LIBS="-lipmimonitoring" $as_echo "#define HAVE_FREEIPMI 1" >>confdefs.h fi if test -n "$x_ac_cv_freeipmi_dir"; then BUILD_IPMI_TRUE= BUILD_IPMI_FALSE='#' else BUILD_IPMI_TRUE='#' BUILD_IPMI_FALSE= fi SEMAPHORE_SOURCES="" SEMAPHORE_LIBS="" { $as_echo "$as_me:${as_lineno-$LINENO}: checking for sem_open in -lposix4" >&5 $as_echo_n "checking for sem_open in -lposix4... " >&6; } if ${ac_cv_lib_posix4_sem_open+:} false; then : $as_echo_n "(cached) " >&6 else ac_check_lib_save_LIBS=$LIBS LIBS="-lposix4 $LIBS" cat confdefs.h - <<_ACEOF >conftest.$ac_ext /* end confdefs.h. */ /* Override any GCC internal prototype to avoid an error. Use char because int might match the return type of a GCC builtin and then its argument prototype would still apply. 
*/ #ifdef __cplusplus extern "C" #endif char sem_open (); int main () { return sem_open (); ; return 0; } _ACEOF if ac_fn_c_try_link "$LINENO"; then : ac_cv_lib_posix4_sem_open=yes else ac_cv_lib_posix4_sem_open=no fi rm -f core conftest.err conftest.$ac_objext \ conftest$ac_exeext conftest.$ac_ext LIBS=$ac_check_lib_save_LIBS fi { $as_echo "$as_me:${as_lineno-$LINENO}: result: $ac_cv_lib_posix4_sem_open" >&5 $as_echo "$ac_cv_lib_posix4_sem_open" >&6; } if test "x$ac_cv_lib_posix4_sem_open" = xyes; then : SEMAPHORE_LIBS="-lposix4"; $as_echo "#define HAVE_POSIX_SEMS 1" >>confdefs.h else SEMAPHORE_SOURCES="semaphore.c" fi _x_ac_rrdtool_dirs="/usr /usr/local" _x_ac_rrdtool_libs="lib64 lib" # Check whether --with-rrdtool was given. if test "${with_rrdtool+set}" = set; then : withval=$with_rrdtool; _x_ac_rrdtool_dirs="$withval $_x_ac_rrdtool_dirs" else with_rrdtool=check fi # echo with rrdtool $with_rrdtool # echo without rrdtool $without_rrdtool if test "x$with_rrdtool" != "xno"; then : { $as_echo "$as_me:${as_lineno-$LINENO}: checking for rrdtool installation" >&5 $as_echo_n "checking for rrdtool installation... " >&6; } if ${x_ac_cv_rrdtool_dir+:} false; then : $as_echo_n "(cached) " >&6 else for d in $_x_ac_rrdtool_dirs; do test -d "$d" || continue test -d "$d/include" || continue test -f "$d/include/rrd.h" || continue for bit in $_x_ac_rrdtool_libs; do test -d "$d/$bit" || continue _x_ac_rrdtool_cppflags_save="$CPPFLAGS" CPPFLAGS="-I$d/include $CPPFLAGS" _x_ac_rrdtool_libs_save="$LIBS" LIBS="-L$d/$bit -lrrd $LIBS" cat confdefs.h - <<_ACEOF >conftest.$ac_ext /* end confdefs.h. 
*/ #include <rrd.h> int main () { rrd_value_t *rrd_data; rrd_info_t *rrd_info; rrd_test_error(); ; return 0; } _ACEOF if ac_fn_c_try_link "$LINENO"; then : x_ac_cv_rrdtool_dir=$d fi rm -f core conftest.err conftest.$ac_objext \ conftest$ac_exeext conftest.$ac_ext CPPFLAGS="$_x_ac_rrdtool_cppflags_save" LIBS="$_x_ac_rrdtool_libs_save" test -n "$x_ac_cv_rrdtool_dir" && break done test -n "$x_ac_cv_rrdtool_dir" && break done fi { $as_echo "$as_me:${as_lineno-$LINENO}: result: $x_ac_cv_rrdtool_dir" >&5 $as_echo "$x_ac_cv_rrdtool_dir" >&6; } fi # echo x_ac_cv_rrdtool_dir $x_ac_cv_rrdtool_dir if test -z "$x_ac_cv_rrdtool_dir"; then { $as_echo "$as_me:${as_lineno-$LINENO}: WARNING: unable to locate rrdtool installation" >&5 $as_echo "$as_me: WARNING: unable to locate rrdtool installation" >&2;} else RRDTOOL_CPPFLAGS="-I$x_ac_cv_rrdtool_dir/include" if test "$ac_with_rpath" = "yes"; then RRDTOOL_LDFLAGS="-Wl,-rpath -Wl,$x_ac_cv_rrdtool_dir/$bit -L$x_ac_cv_rrdtool_dir/$bit" else RRDTOOL_LDFLAGS="-L$x_ac_cv_rrdtool_dir/$bit" fi RRDTOOL_LIBS="-lrrd" fi if test -n "$x_ac_cv_rrdtool_dir"; then BUILD_RRD_TRUE= BUILD_RRD_FALSE='#' else BUILD_RRD_TRUE='#' BUILD_RRD_FALSE= fi { $as_echo "$as_me:${as_lineno-$LINENO}: checking for initscr in -lncurses" >&5 $as_echo_n "checking for initscr in -lncurses... " >&6; } if ${ac_cv_lib_ncurses_initscr+:} false; then : $as_echo_n "(cached) " >&6 else ac_check_lib_save_LIBS=$LIBS LIBS="-lncurses $LIBS" cat confdefs.h - <<_ACEOF >conftest.$ac_ext /* end confdefs.h. */ /* Override any GCC internal prototype to avoid an error. Use char because int might match the return type of a GCC builtin and then its argument prototype would still apply. 
*/ #ifdef __cplusplus extern "C" #endif char initscr (); int main () { return initscr (); ; return 0; } _ACEOF if ac_fn_c_try_link "$LINENO"; then : ac_cv_lib_ncurses_initscr=yes else ac_cv_lib_ncurses_initscr=no fi rm -f core conftest.err conftest.$ac_objext \ conftest$ac_exeext conftest.$ac_ext LIBS=$ac_check_lib_save_LIBS fi { $as_echo "$as_me:${as_lineno-$LINENO}: result: $ac_cv_lib_ncurses_initscr" >&5 $as_echo "$ac_cv_lib_ncurses_initscr" >&6; } if test "x$ac_cv_lib_ncurses_initscr" = xyes; then : ac_have_ncurses=yes fi { $as_echo "$as_me:${as_lineno-$LINENO}: checking for initscr in -lcurses" >&5 $as_echo_n "checking for initscr in -lcurses... " >&6; } if ${ac_cv_lib_curses_initscr+:} false; then : $as_echo_n "(cached) " >&6 else ac_check_lib_save_LIBS=$LIBS LIBS="-lcurses $LIBS" cat confdefs.h - <<_ACEOF >conftest.$ac_ext /* end confdefs.h. */ /* Override any GCC internal prototype to avoid an error. Use char because int might match the return type of a GCC builtin and then its argument prototype would still apply. */ #ifdef __cplusplus extern "C" #endif char initscr (); int main () { return initscr (); ; return 0; } _ACEOF if ac_fn_c_try_link "$LINENO"; then : ac_cv_lib_curses_initscr=yes else ac_cv_lib_curses_initscr=no fi rm -f core conftest.err conftest.$ac_objext \ conftest$ac_exeext conftest.$ac_ext LIBS=$ac_check_lib_save_LIBS fi { $as_echo "$as_me:${as_lineno-$LINENO}: result: $ac_cv_lib_curses_initscr" >&5 $as_echo "$ac_cv_lib_curses_initscr" >&6; } if test "x$ac_cv_lib_curses_initscr" = xyes; then : ac_have_curses=yes fi { $as_echo "$as_me:${as_lineno-$LINENO}: checking for tgetent in -ltinfo" >&5 $as_echo_n "checking for tgetent in -ltinfo... " >&6; } if ${ac_cv_lib_tinfo_tgetent+:} false; then : $as_echo_n "(cached) " >&6 else ac_check_lib_save_LIBS=$LIBS LIBS="-ltinfo $LIBS" cat confdefs.h - <<_ACEOF >conftest.$ac_ext /* end confdefs.h. */ /* Override any GCC internal prototype to avoid an error. 
Use char because int might match the return type of a GCC builtin and then its argument prototype would still apply. */ #ifdef __cplusplus extern "C" #endif char tgetent (); int main () { return tgetent (); ; return 0; } _ACEOF if ac_fn_c_try_link "$LINENO"; then : ac_cv_lib_tinfo_tgetent=yes else ac_cv_lib_tinfo_tgetent=no fi rm -f core conftest.err conftest.$ac_objext \ conftest$ac_exeext conftest.$ac_ext LIBS=$ac_check_lib_save_LIBS fi { $as_echo "$as_me:${as_lineno-$LINENO}: result: $ac_cv_lib_tinfo_tgetent" >&5 $as_echo "$ac_cv_lib_tinfo_tgetent" >&6; } if test "x$ac_cv_lib_tinfo_tgetent" = xyes; then : ac_have_tinfo=yes fi if test "$ac_have_ncurses" = "yes"; then NCURSES="-lncurses" NCURSES_HEADER="ncurses.h" ac_have_some_curses="yes" elif test "$ac_have_curses" = "yes"; then NCURSES="-lcurses" NCURSES_HEADER="curses.h" ac_have_some_curses="yes" fi if test "$ac_have_tinfo" = "yes"; then NCURSES="$NCURSES -ltinfo" fi if test "$ac_have_some_curses" = "yes"; then save_LIBS="$LIBS" LIBS="$NCURSES $save_LIBS" cat confdefs.h - <<_ACEOF >conftest.$ac_ext /* end confdefs.h. */ #include <${NCURSES_HEADER}> int main () { (void)initscr(); (void)endwin(); ; return 0; } _ACEOF if ac_fn_c_try_link "$LINENO"; then : else ac_have_some_curses="no" fi rm -f core conftest.err conftest.$ac_objext \ conftest$ac_exeext conftest.$ac_ext LIBS="$save_LIBS" if test "$ac_have_some_curses" = "yes"; then { $as_echo "$as_me:${as_lineno-$LINENO}: result: NCURSES test program built properly." >&5 $as_echo "NCURSES test program built properly." >&6; } else { $as_echo "$as_me:${as_lineno-$LINENO}: WARNING: *** NCURSES test program execution failed." >&5 $as_echo "$as_me: WARNING: *** NCURSES test program execution failed." 
>&2;} fi else { $as_echo "$as_me:${as_lineno-$LINENO}: WARNING: cannot build smap without curses or ncurses library" >&5 $as_echo "$as_me: WARNING: cannot build smap without curses or ncurses library" >&2;} ac_have_some_curses="no" fi if test "x$ac_have_some_curses" = "xyes"; then HAVE_SOME_CURSES_TRUE= HAVE_SOME_CURSES_FALSE='#' else HAVE_SOME_CURSES_TRUE='#' HAVE_SOME_CURSES_FALSE= fi # # Tests for Check # pkg_failed=no { $as_echo "$as_me:${as_lineno-$LINENO}: checking for CHECK" >&5 $as_echo_n "checking for CHECK... " >&6; } if test -n "$CHECK_CFLAGS"; then pkg_cv_CHECK_CFLAGS="$CHECK_CFLAGS" elif test -n "$PKG_CONFIG"; then if test -n "$PKG_CONFIG" && \ { { $as_echo "$as_me:${as_lineno-$LINENO}: \$PKG_CONFIG --exists --print-errors \"check >= 0.9.8\""; } >&5 ($PKG_CONFIG --exists --print-errors "check >= 0.9.8") 2>&5 ac_status=$? $as_echo "$as_me:${as_lineno-$LINENO}: \$? = $ac_status" >&5 test $ac_status = 0; }; then pkg_cv_CHECK_CFLAGS=`$PKG_CONFIG --cflags "check >= 0.9.8" 2>/dev/null` test "x$?" != "x0" && pkg_failed=yes else pkg_failed=yes fi else pkg_failed=untried fi if test -n "$CHECK_LIBS"; then pkg_cv_CHECK_LIBS="$CHECK_LIBS" elif test -n "$PKG_CONFIG"; then if test -n "$PKG_CONFIG" && \ { { $as_echo "$as_me:${as_lineno-$LINENO}: \$PKG_CONFIG --exists --print-errors \"check >= 0.9.8\""; } >&5 ($PKG_CONFIG --exists --print-errors "check >= 0.9.8") 2>&5 ac_status=$? $as_echo "$as_me:${as_lineno-$LINENO}: \$? = $ac_status" >&5 test $ac_status = 0; }; then pkg_cv_CHECK_LIBS=`$PKG_CONFIG --libs "check >= 0.9.8" 2>/dev/null` test "x$?" 
!= "x0" && pkg_failed=yes else pkg_failed=yes fi else pkg_failed=untried fi if test $pkg_failed = yes; then { $as_echo "$as_me:${as_lineno-$LINENO}: result: no" >&5 $as_echo "no" >&6; } if $PKG_CONFIG --atleast-pkgconfig-version 0.20; then _pkg_short_errors_supported=yes else _pkg_short_errors_supported=no fi if test $_pkg_short_errors_supported = yes; then CHECK_PKG_ERRORS=`$PKG_CONFIG --short-errors --print-errors --cflags --libs "check >= 0.9.8" 2>&1` else CHECK_PKG_ERRORS=`$PKG_CONFIG --print-errors --cflags --libs "check >= 0.9.8" 2>&1` fi # Put the nasty error message in config.log where it belongs echo "$CHECK_PKG_ERRORS" >&5 ac_have_check="no" elif test $pkg_failed = untried; then { $as_echo "$as_me:${as_lineno-$LINENO}: result: no" >&5 $as_echo "no" >&6; } ac_have_check="no" else CHECK_CFLAGS=$pkg_cv_CHECK_CFLAGS CHECK_LIBS=$pkg_cv_CHECK_LIBS { $as_echo "$as_me:${as_lineno-$LINENO}: result: yes" >&5 $as_echo "yes" >&6; } ac_have_check="yes" fi if test "x$ac_have_check" = "xyes"; then HAVE_CHECK_TRUE= HAVE_CHECK_FALSE='#' else HAVE_CHECK_TRUE='#' HAVE_CHECK_FALSE= fi # # Tests for GTK+ # # use the correct libs if running on 64bit if test -d "/usr/lib64/pkgconfig"; then PKG_CONFIG_PATH="/usr/lib64/pkgconfig/:$PKG_CONFIG_PATH" fi if test -d "/opt/gnome/lib64/pkgconfig"; then PKG_CONFIG_PATH="/opt/gnome/lib64/pkgconfig/:$PKG_CONFIG_PATH" fi # Check whether --enable-glibtest was given. if test "${enable_glibtest+set}" = set; then : enableval=$enable_glibtest; else enable_glibtest=yes fi pkg_config_args=glib-2.0 for module in . 
gthread do case "$module" in gmodule) pkg_config_args="$pkg_config_args gmodule-2.0" ;; gmodule-no-export) pkg_config_args="$pkg_config_args gmodule-no-export-2.0" ;; gobject) pkg_config_args="$pkg_config_args gobject-2.0" ;; gthread) pkg_config_args="$pkg_config_args gthread-2.0" ;; gio*) pkg_config_args="$pkg_config_args $module-2.0" ;; esac done if test "x$ac_cv_env_PKG_CONFIG_set" != "xset"; then if test -n "$ac_tool_prefix"; then # Extract the first word of "${ac_tool_prefix}pkg-config", so it can be a program name with args. set dummy ${ac_tool_prefix}pkg-config; ac_word=$2 { $as_echo "$as_me:${as_lineno-$LINENO}: checking for $ac_word" >&5 $as_echo_n "checking for $ac_word... " >&6; } if ${ac_cv_path_PKG_CONFIG+:} false; then : $as_echo_n "(cached) " >&6 else case $PKG_CONFIG in [\\/]* | ?:[\\/]*) ac_cv_path_PKG_CONFIG="$PKG_CONFIG" # Let the user override the test with a path. ;; *) as_save_IFS=$IFS; IFS=$PATH_SEPARATOR for as_dir in $PATH do IFS=$as_save_IFS test -z "$as_dir" && as_dir=. for ac_exec_ext in '' $ac_executable_extensions; do if as_fn_executable_p "$as_dir/$ac_word$ac_exec_ext"; then ac_cv_path_PKG_CONFIG="$as_dir/$ac_word$ac_exec_ext" $as_echo "$as_me:${as_lineno-$LINENO}: found $as_dir/$ac_word$ac_exec_ext" >&5 break 2 fi done done IFS=$as_save_IFS ;; esac fi PKG_CONFIG=$ac_cv_path_PKG_CONFIG if test -n "$PKG_CONFIG"; then { $as_echo "$as_me:${as_lineno-$LINENO}: result: $PKG_CONFIG" >&5 $as_echo "$PKG_CONFIG" >&6; } else { $as_echo "$as_me:${as_lineno-$LINENO}: result: no" >&5 $as_echo "no" >&6; } fi fi if test -z "$ac_cv_path_PKG_CONFIG"; then ac_pt_PKG_CONFIG=$PKG_CONFIG # Extract the first word of "pkg-config", so it can be a program name with args. set dummy pkg-config; ac_word=$2 { $as_echo "$as_me:${as_lineno-$LINENO}: checking for $ac_word" >&5 $as_echo_n "checking for $ac_word... 
" >&6; } if ${ac_cv_path_ac_pt_PKG_CONFIG+:} false; then : $as_echo_n "(cached) " >&6 else case $ac_pt_PKG_CONFIG in [\\/]* | ?:[\\/]*) ac_cv_path_ac_pt_PKG_CONFIG="$ac_pt_PKG_CONFIG" # Let the user override the test with a path. ;; *) as_save_IFS=$IFS; IFS=$PATH_SEPARATOR for as_dir in $PATH do IFS=$as_save_IFS test -z "$as_dir" && as_dir=. for ac_exec_ext in '' $ac_executable_extensions; do if as_fn_executable_p "$as_dir/$ac_word$ac_exec_ext"; then ac_cv_path_ac_pt_PKG_CONFIG="$as_dir/$ac_word$ac_exec_ext" $as_echo "$as_me:${as_lineno-$LINENO}: found $as_dir/$ac_word$ac_exec_ext" >&5 break 2 fi done done IFS=$as_save_IFS ;; esac fi ac_pt_PKG_CONFIG=$ac_cv_path_ac_pt_PKG_CONFIG if test -n "$ac_pt_PKG_CONFIG"; then { $as_echo "$as_me:${as_lineno-$LINENO}: result: $ac_pt_PKG_CONFIG" >&5 $as_echo "$ac_pt_PKG_CONFIG" >&6; } else { $as_echo "$as_me:${as_lineno-$LINENO}: result: no" >&5 $as_echo "no" >&6; } fi if test "x$ac_pt_PKG_CONFIG" = x; then PKG_CONFIG="" else case $cross_compiling:$ac_tool_warned in yes:) { $as_echo "$as_me:${as_lineno-$LINENO}: WARNING: using cross tools not prefixed with host triplet" >&5 $as_echo "$as_me: WARNING: using cross tools not prefixed with host triplet" >&2;} ac_tool_warned=yes ;; esac PKG_CONFIG=$ac_pt_PKG_CONFIG fi else PKG_CONFIG="$ac_cv_path_PKG_CONFIG" fi fi if test -n "$PKG_CONFIG"; then _pkg_min_version=0.16 { $as_echo "$as_me:${as_lineno-$LINENO}: checking pkg-config is at least version $_pkg_min_version" >&5 $as_echo_n "checking pkg-config is at least version $_pkg_min_version... 
" >&6; } if $PKG_CONFIG --atleast-pkgconfig-version $_pkg_min_version; then { $as_echo "$as_me:${as_lineno-$LINENO}: result: yes" >&5 $as_echo "yes" >&6; } else { $as_echo "$as_me:${as_lineno-$LINENO}: result: no" >&5 $as_echo "no" >&6; } PKG_CONFIG="" fi fi no_glib="" if test "x$PKG_CONFIG" = x ; then no_glib=yes PKG_CONFIG=no fi min_glib_version=2.7.1 { $as_echo "$as_me:${as_lineno-$LINENO}: checking for GLIB - version >= $min_glib_version" >&5 $as_echo_n "checking for GLIB - version >= $min_glib_version... " >&6; } if test x$PKG_CONFIG != xno ; then ## don't try to run the test against uninstalled libtool libs if $PKG_CONFIG --uninstalled $pkg_config_args; then echo "Will use uninstalled version of GLib found in PKG_CONFIG_PATH" enable_glibtest=no fi if $PKG_CONFIG --atleast-version $min_glib_version $pkg_config_args; then : else no_glib=yes fi fi if test x"$no_glib" = x ; then GLIB_GENMARSHAL=`$PKG_CONFIG --variable=glib_genmarshal glib-2.0` GOBJECT_QUERY=`$PKG_CONFIG --variable=gobject_query glib-2.0` GLIB_MKENUMS=`$PKG_CONFIG --variable=glib_mkenums glib-2.0` GLIB_COMPILE_RESOURCES=`$PKG_CONFIG --variable=glib_compile_resources gio-2.0` GLIB_CFLAGS=`$PKG_CONFIG --cflags $pkg_config_args` GLIB_LIBS=`$PKG_CONFIG --libs $pkg_config_args` glib_config_major_version=`$PKG_CONFIG --modversion glib-2.0 | \ sed 's/\([0-9]*\).\([0-9]*\).\([0-9]*\)/\1/'` glib_config_minor_version=`$PKG_CONFIG --modversion glib-2.0 | \ sed 's/\([0-9]*\).\([0-9]*\).\([0-9]*\)/\2/'` glib_config_micro_version=`$PKG_CONFIG --modversion glib-2.0 | \ sed 's/\([0-9]*\).\([0-9]*\).\([0-9]*\)/\3/'` if test "x$enable_glibtest" = "xyes" ; then ac_save_CFLAGS="$CFLAGS" ac_save_LIBS="$LIBS" CFLAGS="$CFLAGS $GLIB_CFLAGS" LIBS="$GLIB_LIBS $LIBS" rm -f conf.glibtest if test "$cross_compiling" = yes; then : echo $ac_n "cross compiling; assumed OK... $ac_c" else cat confdefs.h - <<_ACEOF >conftest.$ac_ext /* end confdefs.h. 
*/ #include <glib.h> #include <stdio.h> #include <stdlib.h> int main () { unsigned int major, minor, micro; fclose (fopen ("conf.glibtest", "w")); if (sscanf("$min_glib_version", "%u.%u.%u", &major, &minor, &micro) != 3) { printf("%s, bad version string\n", "$min_glib_version"); exit(1); } if ((glib_major_version != $glib_config_major_version) || (glib_minor_version != $glib_config_minor_version) || (glib_micro_version != $glib_config_micro_version)) { printf("\n*** 'pkg-config --modversion glib-2.0' returned %d.%d.%d, but GLIB (%d.%d.%d)\n", $glib_config_major_version, $glib_config_minor_version, $glib_config_micro_version, glib_major_version, glib_minor_version, glib_micro_version); printf ("*** was found! If pkg-config was correct, then it is best\n"); printf ("*** to remove the old version of GLib. You may also be able to fix the error\n"); printf("*** by modifying your LD_LIBRARY_PATH environment variable, or by editing\n"); printf("*** /etc/ld.so.conf. Make sure you have run ldconfig if that is\n"); printf("*** required on your system.\n"); printf("*** If pkg-config was wrong, set the environment variable PKG_CONFIG_PATH\n"); printf("*** to point to the correct configuration files\n"); } else if ((glib_major_version != GLIB_MAJOR_VERSION) || (glib_minor_version != GLIB_MINOR_VERSION) || (glib_micro_version != GLIB_MICRO_VERSION)) { printf("*** GLIB header files (version %d.%d.%d) do not match\n", GLIB_MAJOR_VERSION, GLIB_MINOR_VERSION, GLIB_MICRO_VERSION); printf("*** library (version %d.%d.%d)\n", glib_major_version, glib_minor_version, glib_micro_version); } else { if ((glib_major_version > major) || ((glib_major_version == major) && (glib_minor_version > minor)) || ((glib_major_version == major) && (glib_minor_version == minor) && (glib_micro_version >= micro))) { return 0; } else { printf("\n*** An old version of GLIB (%u.%u.%u) was found.\n", glib_major_version, glib_minor_version, glib_micro_version); printf("*** You need a version of GLIB newer than %u.%u.%u. 
The latest version of\n", major, minor, micro); printf("*** GLIB is always available from ftp://ftp.gtk.org.\n"); printf("***\n"); printf("*** If you have already installed a sufficiently new version, this error\n"); printf("*** probably means that the wrong copy of the pkg-config shell script is\n"); printf("*** being found. The easiest way to fix this is to remove the old version\n"); printf("*** of GLIB, but you can also set the PKG_CONFIG environment to point to the\n"); printf("*** correct copy of pkg-config. (In this case, you will have to\n"); printf("*** modify your LD_LIBRARY_PATH environment variable, or edit /etc/ld.so.conf\n"); printf("*** so that the correct libraries are found at run-time))\n"); } } return 1; } _ACEOF if ac_fn_c_try_run "$LINENO"; then : else no_glib=yes fi rm -f core *.core core.conftest.* gmon.out bb.out conftest$ac_exeext \ conftest.$ac_objext conftest.beam conftest.$ac_ext fi CFLAGS="$ac_save_CFLAGS" LIBS="$ac_save_LIBS" fi fi if test "x$no_glib" = x ; then { $as_echo "$as_me:${as_lineno-$LINENO}: result: yes (version $glib_config_major_version.$glib_config_minor_version.$glib_config_micro_version)" >&5 $as_echo "yes (version $glib_config_major_version.$glib_config_minor_version.$glib_config_micro_version)" >&6; } ac_glib_test="yes" else { $as_echo "$as_me:${as_lineno-$LINENO}: result: no" >&5 $as_echo "no" >&6; } if test "$PKG_CONFIG" = "no" ; then echo "*** A new enough version of pkg-config was not found." echo "*** See http://www.freedesktop.org/software/pkgconfig/" else if test -f conf.glibtest ; then : else echo "*** Could not run GLIB test program, checking why..." ac_save_CFLAGS="$CFLAGS" ac_save_LIBS="$LIBS" CFLAGS="$CFLAGS $GLIB_CFLAGS" LIBS="$LIBS $GLIB_LIBS" cat confdefs.h - <<_ACEOF >conftest.$ac_ext /* end confdefs.h. 
*/ #include <glib.h> #include <stdio.h> int main () { return ((glib_major_version) || (glib_minor_version) || (glib_micro_version)); ; return 0; } _ACEOF if ac_fn_c_try_link "$LINENO"; then : echo "*** The test program compiled, but did not run. This usually means" echo "*** that the run-time linker is not finding GLIB or finding the wrong" echo "*** version of GLIB. If it is not finding GLIB, you'll need to set your" echo "*** LD_LIBRARY_PATH environment variable, or edit /etc/ld.so.conf to point" echo "*** to the installed location. Also, make sure you have run ldconfig if that" echo "*** is required on your system" echo "***" echo "*** If you have an old version installed, it is best to remove it, although" echo "*** you may also be able to get things to work by modifying LD_LIBRARY_PATH" else echo "*** The test program failed to compile or link. See the file config.log for the" echo "*** exact error that occurred. This usually means GLIB is incorrectly installed." fi rm -f core conftest.err conftest.$ac_objext \ conftest$ac_exeext conftest.$ac_ext CFLAGS="$ac_save_CFLAGS" LIBS="$ac_save_LIBS" fi fi GLIB_CFLAGS="" GLIB_LIBS="" GLIB_GENMARSHAL="" GOBJECT_QUERY="" GLIB_MKENUMS="" GLIB_COMPILE_RESOURCES="" ac_glib_test="no" fi rm -f conf.glibtest if test ${glib_config_minor_version=0} -ge 32 ; then $as_echo "#define GLIB_NEW_THREADS 1" >>confdefs.h fi # Check whether --enable-gtktest was given. if test "${enable_gtktest+set}" = set; then : enableval=$enable_gtktest; else enable_gtktest=yes fi pkg_config_args=gtk+-2.0 for module in . gthread do case "$module" in gthread) pkg_config_args="$pkg_config_args gthread-2.0" ;; esac done no_gtk="" # Extract the first word of "pkg-config", so it can be a program name with args. set dummy pkg-config; ac_word=$2 { $as_echo "$as_me:${as_lineno-$LINENO}: checking for $ac_word" >&5 $as_echo_n "checking for $ac_word... 
" >&6; } if ${ac_cv_path_PKG_CONFIG+:} false; then : $as_echo_n "(cached) " >&6 else case $PKG_CONFIG in [\\/]* | ?:[\\/]*) ac_cv_path_PKG_CONFIG="$PKG_CONFIG" # Let the user override the test with a path. ;; *) as_save_IFS=$IFS; IFS=$PATH_SEPARATOR for as_dir in $PATH do IFS=$as_save_IFS test -z "$as_dir" && as_dir=. for ac_exec_ext in '' $ac_executable_extensions; do if as_fn_executable_p "$as_dir/$ac_word$ac_exec_ext"; then ac_cv_path_PKG_CONFIG="$as_dir/$ac_word$ac_exec_ext" $as_echo "$as_me:${as_lineno-$LINENO}: found $as_dir/$ac_word$ac_exec_ext" >&5 break 2 fi done done IFS=$as_save_IFS test -z "$ac_cv_path_PKG_CONFIG" && ac_cv_path_PKG_CONFIG="no" ;; esac fi PKG_CONFIG=$ac_cv_path_PKG_CONFIG if test -n "$PKG_CONFIG"; then { $as_echo "$as_me:${as_lineno-$LINENO}: result: $PKG_CONFIG" >&5 $as_echo "$PKG_CONFIG" >&6; } else { $as_echo "$as_me:${as_lineno-$LINENO}: result: no" >&5 $as_echo "no" >&6; } fi if test x$PKG_CONFIG != xno ; then if pkg-config --atleast-pkgconfig-version 0.7 ; then : else echo "*** pkg-config too old; version 0.7 or better required." no_gtk=yes PKG_CONFIG=no fi else no_gtk=yes fi min_gtk_version=2.7.1 { $as_echo "$as_me:${as_lineno-$LINENO}: checking for GTK+ - version >= $min_gtk_version" >&5 $as_echo_n "checking for GTK+ - version >= $min_gtk_version... 
" >&6; } if test x$PKG_CONFIG != xno ; then ## don't try to run the test against uninstalled libtool libs if $PKG_CONFIG --uninstalled $pkg_config_args; then echo "Will use uninstalled version of GTK+ found in PKG_CONFIG_PATH" enable_gtktest=no fi if $PKG_CONFIG --atleast-version $min_gtk_version $pkg_config_args; then : else no_gtk=yes fi fi if test x"$no_gtk" = x ; then GTK_CFLAGS=`$PKG_CONFIG $pkg_config_args --cflags` GTK_LIBS=`$PKG_CONFIG $pkg_config_args --libs` gtk_config_major_version=`$PKG_CONFIG --modversion gtk+-2.0 | \ sed 's/\([0-9]*\).\([0-9]*\).\([0-9]*\)/\1/'` gtk_config_minor_version=`$PKG_CONFIG --modversion gtk+-2.0 | \ sed 's/\([0-9]*\).\([0-9]*\).\([0-9]*\)/\2/'` gtk_config_micro_version=`$PKG_CONFIG --modversion gtk+-2.0 | \ sed 's/\([0-9]*\).\([0-9]*\).\([0-9]*\)/\3/'` if test "x$enable_gtktest" = "xyes" ; then ac_save_CFLAGS="$CFLAGS" ac_save_LIBS="$LIBS" CFLAGS="$CFLAGS $GTK_CFLAGS" LIBS="$GTK_LIBS $LIBS" rm -f conf.gtktest if test "$cross_compiling" = yes; then : echo $ac_n "cross compiling; assumed OK... $ac_c" else cat confdefs.h - <<_ACEOF >conftest.$ac_ext /* end confdefs.h. */ #include #include #include int main () { int major, minor, micro; char *tmp_version; fclose (fopen ("conf.gtktest", "w")); /* HP/UX 9 (%@#!) writes to sscanf strings */ tmp_version = g_strdup("$min_gtk_version"); if (sscanf(tmp_version, "%d.%d.%d", &major, &minor, µ) != 3) { printf("%s, bad version string\n", "$min_gtk_version"); exit(1); } if ((gtk_major_version != $gtk_config_major_version) || (gtk_minor_version != $gtk_config_minor_version) || (gtk_micro_version != $gtk_config_micro_version)) { printf("\n*** 'pkg-config --modversion gtk+-2.0' returned %d.%d.%d, but GTK+ (%d.%d.%d)\n", $gtk_config_major_version, $gtk_config_minor_version, $gtk_config_micro_version, gtk_major_version, gtk_minor_version, gtk_micro_version); printf ("*** was found! If pkg-config was correct, then it is best\n"); printf ("*** to remove the old version of GTK+. 
You may also be able to fix the error\n"); printf("*** by modifying your LD_LIBRARY_PATH environment variable, or by editing\n"); printf("*** /etc/ld.so.conf. Make sure you have run ldconfig if that is\n"); printf("*** required on your system.\n"); printf("*** If pkg-config was wrong, set the environment variable PKG_CONFIG_PATH\n"); printf("*** to point to the correct configuration files\n"); } else if ((gtk_major_version != GTK_MAJOR_VERSION) || (gtk_minor_version != GTK_MINOR_VERSION) || (gtk_micro_version != GTK_MICRO_VERSION)) { printf("*** GTK+ header files (version %d.%d.%d) do not match\n", GTK_MAJOR_VERSION, GTK_MINOR_VERSION, GTK_MICRO_VERSION); printf("*** library (version %d.%d.%d)\n", gtk_major_version, gtk_minor_version, gtk_micro_version); } else { if ((gtk_major_version > major) || ((gtk_major_version == major) && (gtk_minor_version > minor)) || ((gtk_major_version == major) && (gtk_minor_version == minor) && (gtk_micro_version >= micro))) { return 0; } else { printf("\n*** An old version of GTK+ (%d.%d.%d) was found.\n", gtk_major_version, gtk_minor_version, gtk_micro_version); printf("*** You need a version of GTK+ newer than %d.%d.%d. The latest version of\n", major, minor, micro); printf("*** GTK+ is always available from ftp://ftp.gtk.org.\n"); printf("***\n"); printf("*** If you have already installed a sufficiently new version, this error\n"); printf("*** probably means that the wrong copy of the pkg-config shell script is\n"); printf("*** being found. The easiest way to fix this is to remove the old version\n"); printf("*** of GTK+, but you can also set the PKG_CONFIG environment to point to the\n"); printf("*** correct copy of pkg-config. 
(In this case, you will have to\n"); printf("*** modify your LD_LIBRARY_PATH environment variable, or edit /etc/ld.so.conf\n"); printf("*** so that the correct libraries are found at run-time))\n"); } } return 1; } _ACEOF if ac_fn_c_try_run "$LINENO"; then : else no_gtk=yes fi rm -f core *.core core.conftest.* gmon.out bb.out conftest$ac_exeext \ conftest.$ac_objext conftest.beam conftest.$ac_ext fi CFLAGS="$ac_save_CFLAGS" LIBS="$ac_save_LIBS" fi fi if test "x$no_gtk" = x ; then { $as_echo "$as_me:${as_lineno-$LINENO}: result: yes (version $gtk_config_major_version.$gtk_config_minor_version.$gtk_config_micro_version)" >&5 $as_echo "yes (version $gtk_config_major_version.$gtk_config_minor_version.$gtk_config_micro_version)" >&6; } ac_gtk_test="yes" else { $as_echo "$as_me:${as_lineno-$LINENO}: result: no" >&5 $as_echo "no" >&6; } if test "$PKG_CONFIG" = "no" ; then echo "*** A new enough version of pkg-config was not found." echo "*** See http://pkgconfig.sourceforge.net" else if test -f conf.gtktest ; then : else echo "*** Could not run GTK+ test program, checking why..." ac_save_CFLAGS="$CFLAGS" ac_save_LIBS="$LIBS" CFLAGS="$CFLAGS $GTK_CFLAGS" LIBS="$LIBS $GTK_LIBS" cat confdefs.h - <<_ACEOF >conftest.$ac_ext /* end confdefs.h. */ #include <gtk/gtk.h> #include <stdio.h> int main () { return ((gtk_major_version) || (gtk_minor_version) || (gtk_micro_version)); ; return 0; } _ACEOF if ac_fn_c_try_link "$LINENO"; then : echo "*** The test program compiled, but did not run. This usually means" echo "*** that the run-time linker is not finding GTK+ or finding the wrong" echo "*** version of GTK+. 
If it is not finding GTK+, you'll need to set your" echo "*** LD_LIBRARY_PATH environment variable, or edit /etc/ld.so.conf to point" echo "*** to the installed location. Also, make sure you have run ldconfig if that" echo "*** is required on your system" echo "***" echo "*** If you have an old version installed, it is best to remove it, although" echo "*** you may also be able to get things to work by modifying LD_LIBRARY_PATH" else echo "*** The test program failed to compile or link. See the file config.log for the" echo "*** exact error that occurred. This usually means GTK+ is incorrectly installed." fi rm -f core conftest.err conftest.$ac_objext \ conftest$ac_exeext conftest.$ac_ext CFLAGS="$ac_save_CFLAGS" LIBS="$ac_save_LIBS" fi fi GTK_CFLAGS="" GTK_LIBS="" ac_gtk_test="no" fi rm -f conf.gtktest if test ${gtk_config_minor_version=0} -ge 10 ; then $as_echo "#define GTK2_USE_RADIO_SET 1" >>confdefs.h fi if test ${gtk_config_minor_version=0} -ge 12 ; then $as_echo "#define GTK2_USE_TOOLTIP 1" >>confdefs.h fi if test ${gtk_config_minor_version=0} -ge 14 ; then $as_echo "#define GTK2_USE_GET_FOCUS 1" >>confdefs.h fi if test "x$ac_glib_test" != "xyes" -o "x$ac_gtk_test" != "xyes"; then { $as_echo "$as_me:${as_lineno-$LINENO}: WARNING: cannot build sview without gtk library" >&5 $as_echo "$as_me: WARNING: cannot build sview without gtk library" >&2;}; fi if test "x$ac_glib_test" = "xyes" && test "x$ac_gtk_test" = "xyes"; then BUILD_SVIEW_TRUE= BUILD_SVIEW_FALSE='#' else BUILD_SVIEW_TRUE='#' BUILD_SVIEW_FALSE= fi #Check for MySQL ac_have_mysql="no" _x_ac_mysql_bin="no" ### Check for mysql_config program # Check whether --with-mysql_config was given. if test "${with_mysql_config+set}" = set; then : withval=$with_mysql_config; _x_ac_mysql_bin="$withval" fi if test x$_x_ac_mysql_bin = xno; then # Extract the first word of "mysql_config", so it can be a program name with args. 
set dummy mysql_config; ac_word=$2 { $as_echo "$as_me:${as_lineno-$LINENO}: checking for $ac_word" >&5 $as_echo_n "checking for $ac_word... " >&6; } if ${ac_cv_path_HAVEMYSQLCONFIG+:} false; then : $as_echo_n "(cached) " >&6 else case $HAVEMYSQLCONFIG in [\\/]* | ?:[\\/]*) ac_cv_path_HAVEMYSQLCONFIG="$HAVEMYSQLCONFIG" # Let the user override the test with a path. ;; *) as_save_IFS=$IFS; IFS=$PATH_SEPARATOR for as_dir in $PATH do IFS=$as_save_IFS test -z "$as_dir" && as_dir=. for ac_exec_ext in '' $ac_executable_extensions; do if as_fn_executable_p "$as_dir/$ac_word$ac_exec_ext"; then ac_cv_path_HAVEMYSQLCONFIG="$as_dir/$ac_word$ac_exec_ext" $as_echo "$as_me:${as_lineno-$LINENO}: found $as_dir/$ac_word$ac_exec_ext" >&5 break 2 fi done done IFS=$as_save_IFS test -z "$ac_cv_path_HAVEMYSQLCONFIG" && ac_cv_path_HAVEMYSQLCONFIG="no" ;; esac fi HAVEMYSQLCONFIG=$ac_cv_path_HAVEMYSQLCONFIG if test -n "$HAVEMYSQLCONFIG"; then { $as_echo "$as_me:${as_lineno-$LINENO}: result: $HAVEMYSQLCONFIG" >&5 $as_echo "$HAVEMYSQLCONFIG" >&6; } else { $as_echo "$as_me:${as_lineno-$LINENO}: result: no" >&5 $as_echo "no" >&6; } fi else # Extract the first word of "mysql_config", so it can be a program name with args. set dummy mysql_config; ac_word=$2 { $as_echo "$as_me:${as_lineno-$LINENO}: checking for $ac_word" >&5 $as_echo_n "checking for $ac_word... " >&6; } if ${ac_cv_path_HAVEMYSQLCONFIG+:} false; then : $as_echo_n "(cached) " >&6 else case $HAVEMYSQLCONFIG in [\\/]* | ?:[\\/]*) ac_cv_path_HAVEMYSQLCONFIG="$HAVEMYSQLCONFIG" # Let the user override the test with a path. ;; *) as_save_IFS=$IFS; IFS=$PATH_SEPARATOR for as_dir in $_x_ac_mysql_bin do IFS=$as_save_IFS test -z "$as_dir" && as_dir=. 
for ac_exec_ext in '' $ac_executable_extensions; do if as_fn_executable_p "$as_dir/$ac_word$ac_exec_ext"; then ac_cv_path_HAVEMYSQLCONFIG="$as_dir/$ac_word$ac_exec_ext" $as_echo "$as_me:${as_lineno-$LINENO}: found $as_dir/$ac_word$ac_exec_ext" >&5 break 2 fi done done IFS=$as_save_IFS test -z "$ac_cv_path_HAVEMYSQLCONFIG" && ac_cv_path_HAVEMYSQLCONFIG="no" ;; esac fi HAVEMYSQLCONFIG=$ac_cv_path_HAVEMYSQLCONFIG if test -n "$HAVEMYSQLCONFIG"; then { $as_echo "$as_me:${as_lineno-$LINENO}: result: $HAVEMYSQLCONFIG" >&5 $as_echo "$HAVEMYSQLCONFIG" >&6; } else { $as_echo "$as_me:${as_lineno-$LINENO}: result: no" >&5 $as_echo "no" >&6; } fi fi if test x$HAVEMYSQLCONFIG = xno; then { $as_echo "$as_me:${as_lineno-$LINENO}: WARNING: *** mysql_config not found. Evidently no MySQL development libs installed on system." >&5 $as_echo "$as_me: WARNING: *** mysql_config not found. Evidently no MySQL development libs installed on system." >&2;} else # check for mysql-5.0.0+ mysql_config_major_version=`$HAVEMYSQLCONFIG --version | \ sed 's/\([0-9]*\).\([0-9]*\).\([a-zA-Z0-9]*\)/\1/'` mysql_config_minor_version=`$HAVEMYSQLCONFIG --version | \ sed 's/\([0-9]*\).\([0-9]*\).\([a-zA-Z0-9]*\)/\2/'` mysql_config_micro_version=`$HAVEMYSQLCONFIG --version | \ sed 's/\([0-9]*\).\([0-9]*\).\([a-zA-Z0-9]*\)/\3/'` if test $mysql_config_major_version -lt 5; then { $as_echo "$as_me:${as_lineno-$LINENO}: WARNING: *** mysql-$mysql_config_major_version.$mysql_config_minor_version.$mysql_config_micro_version available, we need >= mysql-5.0.0 installed for the mysql interface." >&5 $as_echo "$as_me: WARNING: *** mysql-$mysql_config_major_version.$mysql_config_minor_version.$mysql_config_micro_version available, we need >= mysql-5.0.0 installed for the mysql interface." >&2;} ac_have_mysql="no" else # mysql_config puts -I on the front of the dir. We don't # want that so we remove it. 
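For reference, the version split applied to the `mysql_config --version` output above can be exercised in isolation. This is an illustrative sketch only, not part of the generated script; "5.7.30" is a made-up sample value, and the sed expressions are copied from the check above (note the unescaped `.`, which matches any character but is harmless for well-formed version strings).

```shell
# Illustrative only: split a sample version string the same way the
# mysql_config check does ("5.7.30" is a stand-in sample value).
sample_version="5.7.30"
major=`echo "$sample_version" | sed 's/\([0-9]*\).\([0-9]*\).\([a-zA-Z0-9]*\)/\1/'`
minor=`echo "$sample_version" | sed 's/\([0-9]*\).\([0-9]*\).\([a-zA-Z0-9]*\)/\2/'`
micro=`echo "$sample_version" | sed 's/\([0-9]*\).\([0-9]*\).\([a-zA-Z0-9]*\)/\3/'`
echo "$major $minor $micro"
```

With a POSIX sed this prints `5 7 30`; the configure check then compares the major component against 5 to require mysql-5.0.0 or newer.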
MYSQL_CFLAGS=`$HAVEMYSQLCONFIG --include` MYSQL_LIBS=`$HAVEMYSQLCONFIG --libs_r` save_CFLAGS="$CFLAGS" save_LIBS="$LIBS" CFLAGS="$MYSQL_CFLAGS $save_CFLAGS" LIBS="$MYSQL_LIBS $save_LIBS" cat confdefs.h - <<_ACEOF >conftest.$ac_ext /* end confdefs.h. */ #include <mysql.h> int main () { MYSQL mysql; (void) mysql_init(&mysql); (void) mysql_close(&mysql); ; return 0; } _ACEOF if ac_fn_c_try_link "$LINENO"; then : ac_have_mysql="yes" else ac_have_mysql="no" fi rm -f core conftest.err conftest.$ac_objext \ conftest$ac_exeext conftest.$ac_ext CFLAGS="$save_CFLAGS" LIBS="$save_LIBS" if test "$ac_have_mysql" = yes; then { $as_echo "$as_me:${as_lineno-$LINENO}: result: MySQL test program built properly." >&5 $as_echo "MySQL test program built properly." >&6; } $as_echo "#define HAVE_MYSQL 1" >>confdefs.h else MYSQL_CFLAGS=`$HAVEMYSQLCONFIG --include` MYSQL_LIBS=`$HAVEMYSQLCONFIG --libs` save_CFLAGS="$CFLAGS" save_LIBS="$LIBS" CFLAGS="$MYSQL_CFLAGS $save_CFLAGS" LIBS="$MYSQL_LIBS $save_LIBS" cat confdefs.h - <<_ACEOF >conftest.$ac_ext /* end confdefs.h. */ #include <mysql.h> int main () { MYSQL mysql; (void) mysql_init(&mysql); (void) mysql_close(&mysql); ; return 0; } _ACEOF if ac_fn_c_try_link "$LINENO"; then : ac_have_mysql="yes" else ac_have_mysql="no" fi rm -f core conftest.err conftest.$ac_objext \ conftest$ac_exeext conftest.$ac_ext CFLAGS="$save_CFLAGS" LIBS="$save_LIBS" if test "$ac_have_mysql" = yes; then { $as_echo "$as_me:${as_lineno-$LINENO}: result: MySQL (non-threaded) test program built properly." >&5 $as_echo "MySQL (non-threaded) test program built properly." >&6; } $as_echo "#define MYSQL_NOT_THREAD_SAFE 1" >>confdefs.h $as_echo "#define HAVE_MYSQL 1" >>confdefs.h else MYSQL_CFLAGS="" MYSQL_LIBS="" { $as_echo "$as_me:${as_lineno-$LINENO}: WARNING: *** MySQL test program execution failed." >&5 $as_echo "$as_me: WARNING: *** MySQL test program execution failed." 
>&2;} fi fi fi fi if test x"$ac_have_mysql" = x"yes"; then WITH_MYSQL_TRUE= WITH_MYSQL_FALSE='#' else WITH_MYSQL_TRUE='#' WITH_MYSQL_FALSE= fi ac_have_native_cray="no" ac_have_alps_cray="no" ac_have_real_cray="no" ac_have_alps_emulation="no" ac_have_alps_cray_emulation="no" ac_have_cray_network="no" ac_really_no_cray="no" # Check whether --with-alps-emulation was given. if test "${with_alps_emulation+set}" = set; then : withval=$with_alps_emulation; test "$withval" = no || ac_have_alps_emulation=yes else ac_have_alps_emulation=no fi # Check whether --enable-cray-emulation was given. if test "${enable_cray_emulation+set}" = set; then : enableval=$enable_cray_emulation; case "$enableval" in yes) ac_have_alps_cray_emulation="yes" ;; no) ac_have_alps_cray_emulation="no" ;; *) as_fn_error $? "bad value \"$enableval\" for --enable-cray-emulation" "$LINENO" 5 ;; esac fi # Check whether --enable-native-cray was given. if test "${enable_native_cray+set}" = set; then : enableval=$enable_native_cray; case "$enableval" in yes) ac_have_native_cray="yes" ;; no) ac_have_native_cray="no" ;; *) as_fn_error $? "bad value \"$enableval\" for --enable-native-cray" "$LINENO" 5 ;; esac fi # Check whether --enable-cray-network was given. if test "${enable_cray_network+set}" = set; then : enableval=$enable_cray_network; case "$enableval" in yes) ac_have_cray_network="yes" ;; no) ac_have_cray_network="no" ;; *) as_fn_error $? "bad value \"$enableval\" for --enable-cray-network" "$LINENO" 5 ;; esac fi # Check whether --enable-really-no-cray was given. if test "${enable_really_no_cray+set}" = set; then : enableval=$enable_really_no_cray; case "$enableval" in yes) ac_really_no_cray="yes" ;; no) ac_really_no_cray="no" ;; *) as_fn_error $? 
"bad value \"$enableval\" for --enable-really-no-cray" "$LINENO" 5 ;; esac fi if test "$ac_have_alps_emulation" = "yes"; then ac_have_alps_cray="yes" { $as_echo "$as_me:${as_lineno-$LINENO}: Running an ALPS Cray system against an ALPS emulation" >&5 $as_echo "$as_me: Running an ALPS Cray system against an ALPS emulation" >&6;} $as_echo "#define HAVE_ALPS_EMULATION 1" >>confdefs.h elif test "$ac_have_alps_cray_emulation" = "yes"; then ac_have_alps_cray="yes" { $as_echo "$as_me:${as_lineno-$LINENO}: Running in Cray emulation mode" >&5 $as_echo "$as_me: Running in Cray emulation mode" >&6;} $as_echo "#define HAVE_ALPS_CRAY_EMULATION 1" >>confdefs.h elif test "$ac_have_native_cray" = "yes" || test "$ac_have_cray_network" = "yes" ; then _x_ac_cray_job_dir="job/default" _x_ac_cray_alpscomm_dir="alpscomm/default" _x_ac_cray_dirs="/opt/cray" for d in $_x_ac_cray_dirs; do test -d "$d" || continue if test "$ac_have_native_cray" = "yes"; then _test_dir="$d/$_x_ac_cray_job_dir" test -d "$_test_dir" || continue test -d "$_test_dir/include" || continue test -f "$_test_dir/include/job.h" || continue test -d "$_test_dir/lib64" || continue test -f "$_test_dir/lib64/libjob.so" || continue CRAY_JOB_CPPFLAGS="$CRAY_JOB_CPPFLAGS -I$_test_dir/include" CRAY_JOB_LDFLAGS="$CRAY_JOB_LDFLAGS -L$_test_dir/lib64 -ljob" fi _test_dir="$d/$_x_ac_cray_alpscomm_dir" test -d "$_test_dir" || continue test -d "$_test_dir/include" || continue test -f "$_test_dir/include/alpscomm_cn.h" || continue test -f "$_test_dir/include/alpscomm_sn.h" || continue test -d "$_test_dir/lib64" || continue test -f "$_test_dir/lib64/libalpscomm_cn.so" || continue test -f "$_test_dir/lib64/libalpscomm_sn.so" || continue CRAY_ALPSC_CN_CPPFLAGS="$CRAY_ALPSC_CN_CPPFLAGS -I$_test_dir/include" CRAY_ALPSC_SN_CPPFLAGS="$CRAY_ALPSC_SN_CPPFLAGS -I$_test_dir/include" CRAY_ALPSC_CN_LDFLAGS="$CRAY_ALPSC_CN_LDFLAGS -L$_test_dir/lib64 -lalpscomm_cn" CRAY_ALPSC_SN_LDFLAGS="$CRAY_ALPSC_SN_LDFLAGS -L$_test_dir/lib64 -lalpscomm_sn" 
CRAY_SWITCH_CPPFLAGS="$CRAY_SWITCH_CPPFLAGS $CRAY_JOB_CPPFLAGS $CRAY_ALPSC_CN_CPPFLAGS $CRAY_ALPSC_SN_CPPFLAGS" CRAY_SWITCH_LDFLAGS="$CRAY_SWITCH_LDFLAGS $CRAY_JOB_LDFLAGS $CRAY_ALPSC_CN_LDFLAGS $CRAY_ALPSC_SN_LDFLAGS" CRAY_SELECT_CPPFLAGS="$CRAY_SELECT_CPPFLAGS $CRAY_ALPSC_SN_CPPFLAGS" CRAY_SELECT_LDFLAGS="$CRAY_SELECT_LDFLAGS $CRAY_ALPSC_SN_LDFLAGS" if test "$ac_have_native_cray" = "yes"; then CRAY_TASK_CPPFLAGS="$CRAY_TASK_CPPFLAGS $CRAY_ALPSC_CN_CPPFLAGS" CRAY_TASK_LDFLAGS="$CRAY_TASK_LDFLAGS $CRAY_ALPSC_CN_LDFLAGS" fi saved_CPPFLAGS="$CPPFLAGS" saved_LIBS="$LIBS" CPPFLAGS="$CRAY_JOB_CPPFLAGS $CRAY_ALPSC_CN_CPPFLAGS $CRAY_ALPSC_SN_CPPFLAGS $saved_CPPFLAGS" LIBS="$CRAY_JOB_LDFLAGS $CRAY_ALPSC_CN_LDFLAGS $CRAY_ALPSC_SN_LDFLAGS $saved_LIBS" if test "$ac_have_native_cray" = "yes"; then cat confdefs.h - <<_ACEOF >conftest.$ac_ext /* end confdefs.h. */ #include <job.h> #include <alpscomm_cn.h> #include <alpscomm_sn.h> int main () { job_getjidcnt(); alpsc_release_cookies((char **)0, 0, 0); alpsc_flush_lustre((char **)0); ; return 0; } _ACEOF if ac_fn_c_try_link "$LINENO"; then : have_cray_files="yes" else as_fn_error $? "There is a problem linking to the Cray API" "$LINENO" 5 fi rm -f core conftest.err conftest.$ac_objext \ conftest$ac_exeext conftest.$ac_ext # See if we have 5.2UP01 alpscomm functions { $as_echo "$as_me:${as_lineno-$LINENO}: checking for library containing alpsc_pre_suspend" >&5 $as_echo_n "checking for library containing alpsc_pre_suspend... " >&6; } if ${ac_cv_search_alpsc_pre_suspend+:} false; then : $as_echo_n "(cached) " >&6 else ac_func_search_save_LIBS=$LIBS cat confdefs.h - <<_ACEOF >conftest.$ac_ext /* end confdefs.h. */ /* Override any GCC internal prototype to avoid an error. Use char because int might match the return type of a GCC builtin and then its argument prototype would still apply. 
*/ #ifdef __cplusplus extern "C" #endif char alpsc_pre_suspend (); int main () { return alpsc_pre_suspend (); ; return 0; } _ACEOF for ac_lib in '' alpscomm_cn; do if test -z "$ac_lib"; then ac_res="none required" else ac_res=-l$ac_lib LIBS="-l$ac_lib $ac_func_search_save_LIBS" fi if ac_fn_c_try_link "$LINENO"; then : ac_cv_search_alpsc_pre_suspend=$ac_res fi rm -f core conftest.err conftest.$ac_objext \ conftest$ac_exeext if ${ac_cv_search_alpsc_pre_suspend+:} false; then : break fi done if ${ac_cv_search_alpsc_pre_suspend+:} false; then : else ac_cv_search_alpsc_pre_suspend=no fi rm conftest.$ac_ext LIBS=$ac_func_search_save_LIBS fi { $as_echo "$as_me:${as_lineno-$LINENO}: result: $ac_cv_search_alpsc_pre_suspend" >&5 $as_echo "$ac_cv_search_alpsc_pre_suspend" >&6; } ac_res=$ac_cv_search_alpsc_pre_suspend if test "$ac_res" != no; then : test "$ac_res" = "none required" || LIBS="$ac_res $LIBS" $as_echo "#define HAVE_NATIVE_CRAY_GA 1" >>confdefs.h fi elif test "$ac_have_cray_network" = "yes"; then cat confdefs.h - <<_ACEOF >conftest.$ac_ext /* end confdefs.h. */ #include <alpscomm_cn.h> #include <alpscomm_sn.h> int main () { alpsc_release_cookies((char **)0, 0, 0); alpsc_flush_lustre((char **)0); ; return 0; } _ACEOF if ac_fn_c_try_link "$LINENO"; then : have_cray_files="yes" else as_fn_error $? "There is a problem linking to the Cray API" "$LINENO" 5 fi rm -f core conftest.err conftest.$ac_objext \ conftest$ac_exeext conftest.$ac_ext fi LIBS="$saved_LIBS" CPPFLAGS="$saved_CPPFLAGS" break done if test -z "$have_cray_files"; then as_fn_error $? 
"Unable to locate Cray APIs (usually in /opt/cray/alpscomm and /opt/cray/job)" "$LINENO" 5 else if test "$ac_have_native_cray" = "yes"; then { $as_echo "$as_me:${as_lineno-$LINENO}: Running on a Cray system in native mode without ALPS" >&5 $as_echo "$as_me: Running on a Cray system in native mode without ALPS" >&6;} elif test "$ac_have_cray_network" = "yes"; then { $as_echo "$as_me:${as_lineno-$LINENO}: Running on a system with a Cray network" >&5 $as_echo "$as_me: Running on a system with a Cray network" >&6;} fi fi if test "$ac_have_native_cray" = "yes"; then ac_have_real_cray="yes" ac_have_native_cray="yes" $as_echo "#define HAVE_NATIVE_CRAY 1" >>confdefs.h $as_echo "#define HAVE_REAL_CRAY 1" >>confdefs.h elif test "$ac_have_cray_network" = "yes"; then ac_have_cray_network="yes" $as_echo "#define HAVE_3D 1" >>confdefs.h $as_echo "#define SYSTEM_DIMENSIONS 3" >>confdefs.h $as_echo "#define HAVE_CRAY_NETWORK 1" >>confdefs.h fi else # Check for a Cray-specific file: # * older XT systems use an /etc/xtrelease file # * newer XT/XE systems use an /etc/opt/cray/release/xtrelease file # * both have an /etc/xthostname { $as_echo "$as_me:${as_lineno-$LINENO}: checking whether this is a Cray XT or XE system running on ALPS or ALPS simulator" >&5 $as_echo_n "checking whether this is a Cray XT or XE system running on ALPS or ALPS simulator... " >&6; } if test -f /etc/xtrelease || test -d /etc/opt/cray/release; then ac_have_alps_cray="yes" ac_have_real_cray="yes" fi { $as_echo "$as_me:${as_lineno-$LINENO}: result: $ac_have_alps_cray" >&5 $as_echo "$ac_have_alps_cray" >&6; } fi if test "$ac_really_no_cray" = "yes"; then ac_have_alps_cray="no" ac_have_real_cray="no" fi if test "$ac_have_alps_cray" = "yes"; then # libexpat is always required for the XML-RPC interface, but it is only # needed in the select plugin, so set it up here instead of everywhere. 
ac_fn_c_check_header_mongrel "$LINENO" "expat.h" "ac_cv_header_expat_h" "$ac_includes_default" if test "x$ac_cv_header_expat_h" = xyes; then : else as_fn_error $? "Cray BASIL requires expat headers/rpm" "$LINENO" 5 fi { $as_echo "$as_me:${as_lineno-$LINENO}: checking for XML_ParserCreate in -lexpat" >&5 $as_echo_n "checking for XML_ParserCreate in -lexpat... " >&6; } if ${ac_cv_lib_expat_XML_ParserCreate+:} false; then : $as_echo_n "(cached) " >&6 else ac_check_lib_save_LIBS=$LIBS LIBS="-lexpat $LIBS" cat confdefs.h - <<_ACEOF >conftest.$ac_ext /* end confdefs.h. */ /* Override any GCC internal prototype to avoid an error. Use char because int might match the return type of a GCC builtin and then its argument prototype would still apply. */ #ifdef __cplusplus extern "C" #endif char XML_ParserCreate (); int main () { return XML_ParserCreate (); ; return 0; } _ACEOF if ac_fn_c_try_link "$LINENO"; then : ac_cv_lib_expat_XML_ParserCreate=yes else ac_cv_lib_expat_XML_ParserCreate=no fi rm -f core conftest.err conftest.$ac_objext \ conftest$ac_exeext conftest.$ac_ext LIBS=$ac_check_lib_save_LIBS fi { $as_echo "$as_me:${as_lineno-$LINENO}: result: $ac_cv_lib_expat_XML_ParserCreate" >&5 $as_echo "$ac_cv_lib_expat_XML_ParserCreate" >&6; } if test "x$ac_cv_lib_expat_XML_ParserCreate" = xyes; then : CRAY_SELECT_LDFLAGS="$CRAY_SELECT_LDFLAGS -lexpat" else as_fn_error $? "Cray BASIL requires libexpat.so (i.e. libexpat1-dev)" "$LINENO" 5 fi if test "$ac_have_real_cray" = "yes"; then # libjob is needed, but we don't want to put it on the LIBS line here. # If we are on a native system it is handled elsewhere, and on a hybrid # we only need this in libsrun. { $as_echo "$as_me:${as_lineno-$LINENO}: checking for job_getjid in -ljob" >&5 $as_echo_n "checking for job_getjid in -ljob... " >&6; } if ${ac_cv_lib_job_job_getjid+:} false; then : $as_echo_n "(cached) " >&6 else ac_check_lib_save_LIBS=$LIBS LIBS="-ljob $LIBS" cat confdefs.h - <<_ACEOF >conftest.$ac_ext /* end confdefs.h. 
*/ /* Override any GCC internal prototype to avoid an error. Use char because int might match the return type of a GCC builtin and then its argument prototype would still apply. */ #ifdef __cplusplus extern "C" #endif char job_getjid (); int main () { return job_getjid (); ; return 0; } _ACEOF if ac_fn_c_try_link "$LINENO"; then : ac_cv_lib_job_job_getjid=yes else ac_cv_lib_job_job_getjid=no fi rm -f core conftest.err conftest.$ac_objext \ conftest$ac_exeext conftest.$ac_ext LIBS=$ac_check_lib_save_LIBS fi { $as_echo "$as_me:${as_lineno-$LINENO}: result: $ac_cv_lib_job_job_getjid" >&5 $as_echo "$ac_cv_lib_job_job_getjid" >&6; } if test "x$ac_cv_lib_job_job_getjid" = xyes; then : CRAY_JOB_LDFLAGS="$CRAY_JOB_LDFLAGS -ljob" else as_fn_error $? "Need cray-job (usually in /opt/cray/job/default)" "$LINENO" 5 fi $as_echo "#define HAVE_REAL_CRAY 1" >>confdefs.h fi if test -z "$MYSQL_CFLAGS" || test -z "$MYSQL_LIBS"; then as_fn_error $? "Cray BASIL requires the cray-MySQL-devel-enterprise rpm" "$LINENO" 5 fi # Used by X_AC_DEBUG to set default SALLOC_RUN_FOREGROUND value to 1 x_ac_salloc_background=no $as_echo "#define HAVE_3D 1" >>confdefs.h $as_echo "#define SYSTEM_DIMENSIONS 3" >>confdefs.h $as_echo "#define HAVE_FRONT_END 1" >>confdefs.h $as_echo "#define HAVE_ALPS_CRAY 1" >>confdefs.h $as_echo "#define SALLOC_KILL_CMD 1" >>confdefs.h fi if test "$ac_have_native_cray" = "yes"; then HAVE_NATIVE_CRAY_TRUE= HAVE_NATIVE_CRAY_FALSE='#' else HAVE_NATIVE_CRAY_TRUE='#' HAVE_NATIVE_CRAY_FALSE= fi if test "$ac_have_alps_cray" = "yes"; then HAVE_ALPS_CRAY_TRUE= HAVE_ALPS_CRAY_FALSE='#' else HAVE_ALPS_CRAY_TRUE='#' HAVE_ALPS_CRAY_FALSE= fi if test "$ac_have_real_cray" = "yes"; then HAVE_REAL_CRAY_TRUE= HAVE_REAL_CRAY_FALSE='#' else HAVE_REAL_CRAY_TRUE='#' HAVE_REAL_CRAY_FALSE= fi if test "$ac_have_cray_network" = "yes"; then HAVE_CRAY_NETWORK_TRUE= HAVE_CRAY_NETWORK_FALSE='#' else HAVE_CRAY_NETWORK_TRUE='#' HAVE_CRAY_NETWORK_FALSE= fi if test "$ac_have_alps_emulation" = "yes"; then 
HAVE_ALPS_EMULATION_TRUE= HAVE_ALPS_EMULATION_FALSE='#' else HAVE_ALPS_EMULATION_TRUE='#' HAVE_ALPS_EMULATION_FALSE= fi if test "$ac_have_alps_cray_emulation" = "yes"; then HAVE_ALPS_CRAY_EMULATION_TRUE= HAVE_ALPS_CRAY_EMULATION_FALSE='#' else HAVE_ALPS_CRAY_EMULATION_TRUE='#' HAVE_ALPS_CRAY_EMULATION_FALSE= fi _x_ac_datawarp_dirs="/opt/cray/dws/default" _x_ac_datawarp_libs="lib64 lib" # Check whether --with-datawarp was given. if test "${with_datawarp+set}" = set; then : withval=$with_datawarp; _x_ac_datawarp_dirs="$withval $_x_ac_datawarp_dirs" fi { $as_echo "$as_me:${as_lineno-$LINENO}: checking for datawarp installation" >&5 $as_echo_n "checking for datawarp installation... " >&6; } if ${x_ac_cv_datawarp_dir+:} false; then : $as_echo_n "(cached) " >&6 else for d in $_x_ac_datawarp_dirs; do test -d "$d" || continue test -d "$d/include" || continue test -f "$d/include/dws_thin.h" || continue for bit in $_x_ac_datawarp_libs; do test -d "$d/$bit" || continue test -f "$d/$bit/libdws_thin.so" || continue x_ac_cv_datawarp_dir=$d break done test -n "$x_ac_cv_datawarp_dir" && break done fi { $as_echo "$as_me:${as_lineno-$LINENO}: result: $x_ac_cv_datawarp_dir" >&5 $as_echo "$x_ac_cv_datawarp_dir" >&6; } if test -z "$x_ac_cv_datawarp_dir"; then { $as_echo "$as_me:${as_lineno-$LINENO}: WARNING: unable to locate DataWarp installation" >&5 $as_echo "$as_me: WARNING: unable to locate DataWarp installation" >&2;} else DATAWARP_CPPFLAGS="-I$x_ac_cv_datawarp_dir/include" if test "$ac_with_rpath" = "yes"; then DATAWARP_LDFLAGS="-Wl,-rpath -Wl,$x_ac_cv_datawarp_dir/$bit -L$x_ac_cv_datawarp_dir/$bit -ldws_thin" else DATAWARP_LDFLAGS="-L$x_ac_cv_datawarp_dir/$bit -ldws_thin" fi $as_echo "#define HAVE_DATAWARP 1" >>confdefs.h fi # case "$host" in *-*-aix*) $as_echo "#define SETPROCTITLE_STRATEGY PS_USE_CLOBBER_ARGV" >>confdefs.h $as_echo "#define SETPROCTITLE_PS_PADDING '\\0'" >>confdefs.h ;; *-*-hpux*) $as_echo "#define SETPROCTITLE_STRATEGY PS_USE_PSTAT" >>confdefs.h ;; 
*-*-linux*) $as_echo "#define SETPROCTITLE_STRATEGY PS_USE_CLOBBER_ARGV" >>confdefs.h $as_echo "#define SETPROCTITLE_PS_PADDING '\\0'" >>confdefs.h ;; *) $as_echo "#define SETPROCTITLE_STRATEGY PS_USE_NONE" >>confdefs.h $as_echo "#define SETPROCTITLE_PS_PADDING '\\0'" >>confdefs.h ;; esac { $as_echo "$as_me:${as_lineno-$LINENO}: checking for __progname" >&5 $as_echo_n "checking for __progname... " >&6; } cat confdefs.h - <<_ACEOF >conftest.$ac_ext /* end confdefs.h. */ #include <stdio.h> int main () { extern char *__progname; puts(__progname); ; return 0; } _ACEOF if ac_fn_c_try_link "$LINENO"; then : ac_have__progname=yes fi rm -f core conftest.err conftest.$ac_objext \ conftest$ac_exeext conftest.$ac_ext { $as_echo "$as_me:${as_lineno-$LINENO}: result: ${ac_have__progname=no}" >&5 $as_echo "${ac_have__progname=no}" >&6; } if test "$ac_have__progname" = "yes"; then $as_echo "#define HAVE__PROGNAME 1" >>confdefs.h fi { $as_echo "$as_me:${as_lineno-$LINENO}: checking whether or not developer options are enabled" >&5 $as_echo_n "checking whether or not developer options are enabled... " >&6; } # Check whether --enable-developer was given. if test "${enable_developer+set}" = set; then : enableval=$enable_developer; case "$enableval" in yes) x_ac_developer=yes ;; no) x_ac_developer=no ;; *) { $as_echo "$as_me:${as_lineno-$LINENO}: result: doh!" >&5 $as_echo "doh!" >&6; } as_fn_error $? "bad value \"$enableval\" for --enable-developer" "$LINENO" 5 ;; esac fi if test "$x_ac_developer" = yes; then test "$GCC" = yes && CFLAGS="$CFLAGS -Werror" test "$GXX" = yes && CXXFLAGS="$CXXFLAGS -Werror" # automatically turn on --enable-debug if being a developer x_ac_debug=yes else $as_echo "#define NDEBUG 1" >>confdefs.h fi { $as_echo "$as_me:${as_lineno-$LINENO}: result: ${x_ac_developer=no}" >&5 $as_echo "${x_ac_developer=no}" >&6; } { $as_echo "$as_me:${as_lineno-$LINENO}: checking whether debugging is enabled" >&5 $as_echo_n "checking whether debugging is enabled... 
" >&6; } # Check whether --enable-debug was given. if test "${enable_debug+set}" = set; then : enableval=$enable_debug; case "$enableval" in yes) x_ac_debug=yes ;; no) x_ac_debug=no ;; *) { $as_echo "$as_me:${as_lineno-$LINENO}: result: doh!" >&5 $as_echo "doh!" >&6; } as_fn_error $? "bad value \"$enableval\" for --enable-debug" "$LINENO" 5 ;; esac else x_ac_debug=yes fi if test "$x_ac_debug" = yes; then # you will most likely get a -O2 in your compile line, but the last option # is the only one that is looked at. test "$GCC" = yes && CFLAGS="$CFLAGS -Wall -g -O0 -fno-strict-aliasing" test "$GXX" = yes && CXXFLAGS="$CXXFLAGS -Wall -g -O0 -fno-strict-aliasing" fi { $as_echo "$as_me:${as_lineno-$LINENO}: result: ${x_ac_debug=no}" >&5 $as_echo "${x_ac_debug=no}" >&6; } { $as_echo "$as_me:${as_lineno-$LINENO}: checking whether memory leak debugging is enabled" >&5 $as_echo_n "checking whether memory leak debugging is enabled... " >&6; } # Check whether --enable-memory-leak-debug was given. if test "${enable_memory_leak_debug+set}" = set; then : enableval=$enable_memory_leak_debug; case "$enableval" in yes) x_ac_memory_debug=yes ;; no) x_ac_memory_debug=no ;; *) { $as_echo "$as_me:${as_lineno-$LINENO}: result: doh!" >&5 $as_echo "doh!" >&6; } as_fn_error $? "bad value \"$enableval\" for --enable-memory-leak-debug" "$LINENO" 5 ;; esac fi if test "$x_ac_memory_debug" = yes; then $as_echo "#define MEMORY_LEAK_DEBUG 1" >>confdefs.h fi { $as_echo "$as_me:${as_lineno-$LINENO}: result: ${x_ac_memory_debug=no}" >&5 $as_echo "${x_ac_memory_debug=no}" >&6; } { $as_echo "$as_me:${as_lineno-$LINENO}: checking whether to enable slurmd operation on a front-end" >&5 $as_echo_n "checking whether to enable slurmd operation on a front-end... " >&6; } # Check whether --enable-front-end was given. 
if test "${enable_front_end+set}" = set; then : enableval=$enable_front_end; case "$enableval" in yes) x_ac_front_end=yes ;; no) x_ac_front_end=no ;; *) { $as_echo "$as_me:${as_lineno-$LINENO}: result: doh!" >&5 $as_echo "doh!" >&6; } as_fn_error $? "bad value \"$enableval\" for --enable-front-end" "$LINENO" 5 ;; esac fi if test "$x_ac_front_end" = yes; then $as_echo "#define HAVE_FRONT_END 1" >>confdefs.h fi { $as_echo "$as_me:${as_lineno-$LINENO}: result: ${x_ac_front_end=no}" >&5 $as_echo "${x_ac_front_end=no}" >&6; } { $as_echo "$as_me:${as_lineno-$LINENO}: checking whether debugger partial attach enabled" >&5 $as_echo_n "checking whether debugger partial attach enabled... " >&6; } # Check whether --enable-partial-attach was given. if test "${enable_partial_attach+set}" = set; then : enableval=$enable_partial_attach; case "$enableval" in yes) x_ac_partial_attach=yes ;; no) x_ac_partial_attach=no ;; *) { $as_echo "$as_me:${as_lineno-$LINENO}: result: doh!" >&5 $as_echo "doh!" >&6; } as_fn_error $? "bad value \"$enableval\" for --enable-partial-attach" "$LINENO" 5 ;; esac fi if test "$x_ac_partial_attach" != "no"; then $as_echo "#define DEBUGGER_PARTIAL_ATTACH 1" >>confdefs.h fi { $as_echo "$as_me:${as_lineno-$LINENO}: result: ${x_ac_partial_attach=no}" >&5 $as_echo "${x_ac_partial_attach=no}" >&6; } { $as_echo "$as_me:${as_lineno-$LINENO}: checking whether salloc should kill child processes at job termination" >&5 $as_echo_n "checking whether salloc should kill child processes at job termination... " >&6; } # Check whether --enable-salloc-kill-cmd was given. if test "${enable_salloc_kill_cmd+set}" = set; then : enableval=$enable_salloc_kill_cmd; case "$enableval" in yes) x_ac_salloc_kill_cmd=yes ;; no) x_ac_salloc_kill_cmd=no ;; *) { $as_echo "$as_me:${as_lineno-$LINENO}: result: doh!" >&5 $as_echo "doh!" >&6; } as_fn_error $? 
"bad value \"$enableval\" for --enable-salloc-kill-cmd" "$LINENO" 5 ;; esac fi if test "$x_ac_salloc_kill_cmd" = yes; then $as_echo "#define SALLOC_KILL_CMD 1" >>confdefs.h { $as_echo "$as_me:${as_lineno-$LINENO}: result: yes" >&5 $as_echo "yes" >&6; } else { $as_echo "$as_me:${as_lineno-$LINENO}: result: no" >&5 $as_echo "no" >&6; } fi # NOTE: Default value of SALLOC_RUN_FOREGROUND is system dependent # x_ac_salloc_background is set to "no" for Cray systems in x_ac_cray.m4 { $as_echo "$as_me:${as_lineno-$LINENO}: checking whether to disable salloc execution in the background" >&5 $as_echo_n "checking whether to disable salloc execution in the background... " >&6; } # Check whether --enable-salloc-background was given. if test "${enable_salloc_background+set}" = set; then : enableval=$enable_salloc_background; case "$enableval" in yes) x_ac_salloc_background=yes ;; no) x_ac_salloc_background=no ;; *) { $as_echo "$as_me:${as_lineno-$LINENO}: result: doh!" >&5 $as_echo "doh!" >&6; } as_fn_error $? "bad value \"$enableval\" for --disable-salloc-background" "$LINENO" 5 ;; esac fi if test "$x_ac_salloc_background" = no; then $as_echo "#define SALLOC_RUN_FOREGROUND 1" >>confdefs.h { $as_echo "$as_me:${as_lineno-$LINENO}: result: yes" >&5 $as_echo "yes" >&6; } else { $as_echo "$as_me:${as_lineno-$LINENO}: result: no" >&5 $as_echo "no" >&6; } fi { $as_echo "$as_me:${as_lineno-$LINENO}: checking whether to enable slurm simulator" >&5 $as_echo_n "checking whether to enable slurm simulator... " >&6; } # Check whether --enable-simulator was given. if test "${enable_simulator+set}" = set; then : enableval=$enable_simulator; case "$enableval" in yes) x_ac_simulator=yes ;; no) x_ac_simulator=no ;; *) { $as_echo "$as_me:${as_lineno-$LINENO}: result: doh!" >&5 $as_echo "doh!" >&6; } as_fn_error $? 
"bad value \"$enableval\" for --enable-simulator" "$LINENO" 5 ;; esac fi if test "$x_ac_simulator" = yes; then $as_echo "#define SLURM_SIMULATOR 1" >>confdefs.h fi { $as_echo "$as_me:${as_lineno-$LINENO}: result: ${x_ac_simulator=no}" >&5 $as_echo "${x_ac_simulator=no}" >&6; } if test "x$ac_debug" = "xtrue"; then DEBUG_MODULES_TRUE= DEBUG_MODULES_FALSE='#' else DEBUG_MODULES_TRUE='#' DEBUG_MODULES_FALSE= fi { $as_echo "$as_me:${as_lineno-$LINENO}: checking for slurmctld default port" >&5 $as_echo_n "checking for slurmctld default port... " >&6; } # Check whether --with-slurmctld-port was given. if test "${with_slurmctld_port+set}" = set; then : withval=$with_slurmctld_port; if test `expr match "$withval" '[0-9]*$'` -gt 0; then slurmctldport="$withval" fi fi { $as_echo "$as_me:${as_lineno-$LINENO}: result: ${slurmctldport=6817}" >&5 $as_echo "${slurmctldport=6817}" >&6; } cat >>confdefs.h <<_ACEOF #define SLURMCTLD_PORT $slurmctldport _ACEOF { $as_echo "$as_me:${as_lineno-$LINENO}: checking for slurmd default port" >&5 $as_echo_n "checking for slurmd default port... " >&6; } # Check whether --with-slurmd-port was given. if test "${with_slurmd_port+set}" = set; then : withval=$with_slurmd_port; if test `expr match "$withval" '[0-9]*$'` -gt 0; then slurmdport="$withval" fi fi { $as_echo "$as_me:${as_lineno-$LINENO}: result: ${slurmdport=6818}" >&5 $as_echo "${slurmdport=6818}" >&6; } cat >>confdefs.h <<_ACEOF #define SLURMD_PORT $slurmdport _ACEOF { $as_echo "$as_me:${as_lineno-$LINENO}: checking for slurmdbd default port" >&5 $as_echo_n "checking for slurmdbd default port... " >&6; } # Check whether --with-slurmdbd-port was given. 
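The port options here accept a value only when `expr match "$withval" '[0-9]*$'` matches more than zero characters, i.e. when the whole value is digits. A standalone sketch of that validation (illustrative only, not part of the generated script; `is_port_number` is a made-up helper name, and `expr match STRING REGEX` is the GNU spelling of the portable `expr STRING : REGEX`):

```shell
# Illustrative only: expr prints the number of characters matched, and the
# trailing $ anchors the digit run to the end of the value, so any
# non-digit character makes the match length 0 and the test fail.
is_port_number () {
    test `expr match "$1" '[0-9]*$'` -gt 0
}
is_port_number 6819 && echo "6819 accepted"
is_port_number 68x9 || echo "68x9 rejected"
```

When validation fails, the `slurmdbdport`-style variable is simply left unset and the `${slurmdbdport=6819}` expansion below falls back to the default.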
if test "${with_slurmdbd_port+set}" = set; then : withval=$with_slurmdbd_port; if test `expr match "$withval" '[0-9]*$'` -gt 0; then slurmdbdport="$withval" fi fi { $as_echo "$as_me:${as_lineno-$LINENO}: result: ${slurmdbdport=6819}" >&5 $as_echo "${slurmdbdport=6819}" >&6; } cat >>confdefs.h <<_ACEOF #define SLURMDBD_PORT $slurmdbdport _ACEOF { $as_echo "$as_me:${as_lineno-$LINENO}: checking for slurmctld default port count" >&5 $as_echo_n "checking for slurmctld default port count... " >&6; } # Check whether --with-slurmctld-port-count was given. if test "${with_slurmctld_port_count+set}" = set; then : withval=$with_slurmctld_port_count; if test `expr match "$withval" '[0-9]*$'` -gt 0; then slurmctldportcount="$withval" fi fi { $as_echo "$as_me:${as_lineno-$LINENO}: result: ${slurmctldportcount=1}" >&5 $as_echo "${slurmctldportcount=1}" >&6; } cat >>confdefs.h <<_ACEOF #define SLURMCTLD_PORT_COUNT $slurmctldportcount _ACEOF if test "x$prefix" = "xNONE" ; then cat >>confdefs.h <<_ACEOF #define SLURM_PREFIX "/usr/local" _ACEOF else cat >>confdefs.h <<_ACEOF #define SLURM_PREFIX "$prefix" _ACEOF fi nrt_default_dirs="/usr/include" # Check whether --with-nrth was given. if test "${with_nrth+set}" = set; then : withval=$with_nrth; nrt_default_dirs="$withval $nrt_default_dirs" fi { $as_echo "$as_me:${as_lineno-$LINENO}: checking for NRT and PERMAPI header files" >&5 $as_echo_n "checking for NRT and PERMAPI header files... " >&6; } for nrt_dir in $nrt_default_dirs; do # skip dirs that don't exist if test ! -z "$nrt_dir" -a ! 
-d "$nrt_dir" ; then continue; fi # search for required NRT and PERMAPI header files if test -f "$nrt_dir/nrt.h" -a -f "$nrt_dir/permapi.h"; then ac_have_nrt_h="yes" NRT_CPPFLAGS="-I$nrt_dir" $as_echo "#define HAVE_NRT_H 1" >>confdefs.h $as_echo "#define HAVE_PERMAPI_H 1" >>confdefs.h break; fi done if test "x$ac_have_nrt_h" != "xyes" ; then { $as_echo "$as_me:${as_lineno-$LINENO}: result: no" >&5 $as_echo "no" >&6; } { $as_echo "$as_me:${as_lineno-$LINENO}: Cannot support IBM NRT without nrt.h and permapi.h" >&5 $as_echo "$as_me: Cannot support IBM NRT without nrt.h and permapi.h" >&6;} else { $as_echo "$as_me:${as_lineno-$LINENO}: result: yes" >&5 $as_echo "yes" >&6; } fi nrt_default_dirs="/usr/lib64 /usr/lib" # Check whether --with-libnrt was given. if test "${with_libnrt+set}" = set; then : withval=$with_libnrt; nrt_default_dirs="$withval $nrt_default_dirs" fi { $as_echo "$as_me:${as_lineno-$LINENO}: checking whether to enable IBM NRT support" >&5 $as_echo_n "checking whether to enable IBM NRT support... " >&6; } for nrt_dir in $nrt_default_dirs; do # skip dirs that don't exist if test ! -z "$nrt_dir" -a ! -d "$nrt_dir" ; then continue; fi # search for required NRT API libraries if test -f "$nrt_dir/libnrt.so"; then cat >>confdefs.h <<_ACEOF #define LIBNRT_SO "$nrt_dir/libnrt.so" _ACEOF ac_have_libnrt="yes" break; fi done if test "x$ac_have_libnrt" != "xyes" ; then { $as_echo "$as_me:${as_lineno-$LINENO}: result: no" >&5 $as_echo "no" >&6; } else { $as_echo "$as_me:${as_lineno-$LINENO}: result: yes" >&5 $as_echo "yes" >&6; } fi if test "x$ac_have_nrt_h" = "xyes"; then ac_have_nrt="yes" fi if test "x$ac_have_nrt" = "xyes"; then HAVE_NRT_TRUE= HAVE_NRT_FALSE='#' else HAVE_NRT_TRUE='#' HAVE_NRT_FALSE= fi { $as_echo "$as_me:${as_lineno-$LINENO}: checking for job_attachpid in -ljob" >&5 $as_echo_n "checking for job_attachpid in -ljob... 
" >&6; } if ${ac_cv_lib_job_job_attachpid+:} false; then : $as_echo_n "(cached) " >&6 else ac_check_lib_save_LIBS=$LIBS LIBS="-ljob $LIBS" cat confdefs.h - <<_ACEOF >conftest.$ac_ext /* end confdefs.h. */ /* Override any GCC internal prototype to avoid an error. Use char because int might match the return type of a GCC builtin and then its argument prototype would still apply. */ #ifdef __cplusplus extern "C" #endif char job_attachpid (); int main () { return job_attachpid (); ; return 0; } _ACEOF if ac_fn_c_try_link "$LINENO"; then : ac_cv_lib_job_job_attachpid=yes else ac_cv_lib_job_job_attachpid=no fi rm -f core conftest.err conftest.$ac_objext \ conftest$ac_exeext conftest.$ac_ext LIBS=$ac_check_lib_save_LIBS fi { $as_echo "$as_me:${as_lineno-$LINENO}: result: $ac_cv_lib_job_job_attachpid" >&5 $as_echo "$ac_cv_lib_job_job_attachpid" >&6; } if test "x$ac_cv_lib_job_job_attachpid" = xyes; then : ac_have_sgi_job="yes" fi { $as_echo "$as_me:${as_lineno-$LINENO}: checking for SGI job container support" >&5 $as_echo_n "checking for SGI job container support... " >&6; } { $as_echo "$as_me:${as_lineno-$LINENO}: result: ${ac_have_sgi_job=no}" >&5 $as_echo "${ac_have_sgi_job=no}" >&6; } if test "x$ac_have_sgi_job" = "xyes"; then HAVE_SGI_JOB_TRUE= HAVE_SGI_JOB_FALSE='#' else HAVE_SGI_JOB_TRUE='#' HAVE_SGI_JOB_FALSE= fi _x_ac_netloc_dirs="/usr /usr/local" _x_ac_netloc_libs="lib64 lib" x_ac_cv_netloc_nosub="no" # Check whether --with-netloc was given. if test "${with_netloc+set}" = set; then : withval=$with_netloc; _x_ac_netloc_dirs="$withval $_x_ac_netloc_dirs" fi { $as_echo "$as_me:${as_lineno-$LINENO}: checking for netloc installation" >&5 $as_echo_n "checking for netloc installation... 
" >&6; } if ${x_ac_cv_netloc_dir+:} false; then : $as_echo_n "(cached) " >&6 else for d in $_x_ac_netloc_dirs; do test -d "$d" || continue test -d "$d/include" || continue test -f "$d/include/netloc.h" || continue for bit in $_x_ac_netloc_libs; do test -d "$d/$bit" || continue _x_ac_netloc_cppflags_save="$CPPFLAGS" CPPFLAGS="-I$d/include $CPPFLAGS" _x_ac_netloc_libs_save="$LIBS" LIBS="-L$d/$bit -lnetloc $LIBS" cat confdefs.h - <<_ACEOF >conftest.$ac_ext /* end confdefs.h. */ #include #include int main () { netloc_map_t map; netloc_map_create(&map); ; return 0; } _ACEOF if ac_fn_c_try_link "$LINENO"; then : x_ac_cv_netloc_dir=$d fi rm -f core conftest.err conftest.$ac_objext \ conftest$ac_exeext conftest.$ac_ext cat confdefs.h - <<_ACEOF >conftest.$ac_ext /* end confdefs.h. */ #include #include int main () { netloc_map_t map; netloc_map_create(&map) ; return 0; } _ACEOF if ac_fn_c_try_link "$LINENO"; then : x_ac_cv_netloc_dir=$d x_ac_cv_netloc_nosub="yes" fi rm -f core conftest.err conftest.$ac_objext \ conftest$ac_exeext conftest.$ac_ext CPPFLAGS="$_x_ac_netloc_cppflags_save" LIBS="$_x_ac_netloc_libs_save" test -n "$x_ac_cv_netloc_dir" && break done test -n "$x_ac_cv_netloc_dir" && break done fi { $as_echo "$as_me:${as_lineno-$LINENO}: result: $x_ac_cv_netloc_dir" >&5 $as_echo "$x_ac_cv_netloc_dir" >&6; } if test -z "$x_ac_cv_netloc_dir"; then { $as_echo "$as_me:${as_lineno-$LINENO}: WARNING: unable to locate netloc installation" >&5 $as_echo "$as_me: WARNING: unable to locate netloc installation" >&2;} else NETLOC_CPPFLAGS="-I$x_ac_cv_netloc_dir/include" if test "$ac_with_rpath" = "yes"; then NETLOC_LDFLAGS="-Wl,-rpath -Wl,$x_ac_cv_netloc_dir/$bit -L$x_ac_cv_netloc_dir/$bit" else NETLOC_LDFLAGS="-L$x_ac_cv_netloc_dir/$bit" fi NETLOC_LIBS="-lnetloc" $as_echo "#define HAVE_NETLOC 1" >>confdefs.h if test "$x_ac_cv_netloc_nosub" = "yes"; then $as_echo "#define HAVE_NETLOC_NOSUB 1" >>confdefs.h fi fi if test -n "$x_ac_cv_netloc_dir"; then HAVE_NETLOC_TRUE= 
HAVE_NETLOC_FALSE='#' else HAVE_NETLOC_TRUE='#' HAVE_NETLOC_FALSE= fi x_ac_lua_pkg_name="lua" #check for 5.2 if that fails check for 5.1 if test -n "$PKG_CONFIG" && \ { { $as_echo "$as_me:${as_lineno-$LINENO}: \$PKG_CONFIG --exists --print-errors \"lua5.2\""; } >&5 ($PKG_CONFIG --exists --print-errors "lua5.2") 2>&5 ac_status=$? $as_echo "$as_me:${as_lineno-$LINENO}: \$? = $ac_status" >&5 test $ac_status = 0; }; then x_ac_lua_pkg_name=lua5.2 else if test -n "$PKG_CONFIG" && \ { { $as_echo "$as_me:${as_lineno-$LINENO}: \$PKG_CONFIG --exists --print-errors \"lua5.1\""; } >&5 ($PKG_CONFIG --exists --print-errors "lua5.1") 2>&5 ac_status=$? $as_echo "$as_me:${as_lineno-$LINENO}: \$? = $ac_status" >&5 test $ac_status = 0; }; then x_ac_lua_pkg_name=lua5.1 fi fi pkg_failed=no { $as_echo "$as_me:${as_lineno-$LINENO}: checking for lua" >&5 $as_echo_n "checking for lua... " >&6; } if test -n "$lua_CFLAGS"; then pkg_cv_lua_CFLAGS="$lua_CFLAGS" elif test -n "$PKG_CONFIG"; then if test -n "$PKG_CONFIG" && \ { { $as_echo "$as_me:${as_lineno-$LINENO}: \$PKG_CONFIG --exists --print-errors \"\${x_ac_lua_pkg_name}\""; } >&5 ($PKG_CONFIG --exists --print-errors "${x_ac_lua_pkg_name}") 2>&5 ac_status=$? $as_echo "$as_me:${as_lineno-$LINENO}: \$? = $ac_status" >&5 test $ac_status = 0; }; then pkg_cv_lua_CFLAGS=`$PKG_CONFIG --cflags "${x_ac_lua_pkg_name}" 2>/dev/null` test "x$?" != "x0" && pkg_failed=yes else pkg_failed=yes fi else pkg_failed=untried fi if test -n "$lua_LIBS"; then pkg_cv_lua_LIBS="$lua_LIBS" elif test -n "$PKG_CONFIG"; then if test -n "$PKG_CONFIG" && \ { { $as_echo "$as_me:${as_lineno-$LINENO}: \$PKG_CONFIG --exists --print-errors \"\${x_ac_lua_pkg_name}\""; } >&5 ($PKG_CONFIG --exists --print-errors "${x_ac_lua_pkg_name}") 2>&5 ac_status=$? $as_echo "$as_me:${as_lineno-$LINENO}: \$? = $ac_status" >&5 test $ac_status = 0; }; then pkg_cv_lua_LIBS=`$PKG_CONFIG --libs "${x_ac_lua_pkg_name}" 2>/dev/null` test "x$?" 
!= "x0" && pkg_failed=yes else pkg_failed=yes fi else pkg_failed=untried fi if test $pkg_failed = yes; then { $as_echo "$as_me:${as_lineno-$LINENO}: result: no" >&5 $as_echo "no" >&6; } if $PKG_CONFIG --atleast-pkgconfig-version 0.20; then _pkg_short_errors_supported=yes else _pkg_short_errors_supported=no fi if test $_pkg_short_errors_supported = yes; then lua_PKG_ERRORS=`$PKG_CONFIG --short-errors --print-errors --cflags --libs "${x_ac_lua_pkg_name}" 2>&1` else lua_PKG_ERRORS=`$PKG_CONFIG --print-errors --cflags --libs "${x_ac_lua_pkg_name}" 2>&1` fi # Put the nasty error message in config.log where it belongs echo "$lua_PKG_ERRORS" >&5 x_ac_have_lua="no" elif test $pkg_failed = untried; then { $as_echo "$as_me:${as_lineno-$LINENO}: result: no" >&5 $as_echo "no" >&6; } x_ac_have_lua="no" else lua_CFLAGS=$pkg_cv_lua_CFLAGS lua_LIBS=$pkg_cv_lua_LIBS { $as_echo "$as_me:${as_lineno-$LINENO}: result: yes" >&5 $as_echo "yes" >&6; } x_ac_have_lua="yes" fi if test "x$x_ac_have_lua" = "xyes"; then saved_CFLAGS="$CFLAGS" saved_LIBS="$LIBS" # -DLUA_COMPAT_ALL is needed to support lua 5.2 lua_CFLAGS="$lua_CFLAGS -DLUA_COMPAT_ALL" CFLAGS="$CFLAGS $lua_CFLAGS" LIBS="$LIBS $lua_LIBS" { $as_echo "$as_me:${as_lineno-$LINENO}: checking for whether we can link to liblua" >&5 $as_echo_n "checking for whether we can link to liblua... " >&6; } cat confdefs.h - <<_ACEOF >conftest.$ac_ext /* end confdefs.h. 
*/ #include <lua.h> #include <lauxlib.h> #include <lualib.h> int main () { lua_State *L = luaL_newstate (); luaL_openlibs(L); ; return 0; } _ACEOF if ac_fn_c_try_link "$LINENO"; then : else x_ac_have_lua="no" fi rm -f core conftest.err conftest.$ac_objext \ conftest$ac_exeext conftest.$ac_ext { $as_echo "$as_me:${as_lineno-$LINENO}: result: $x_ac_have_lua $x_ac_lua_pkg_name" >&5 $as_echo "$x_ac_have_lua $x_ac_lua_pkg_name" >&6; } if test "x$x_ac_have_lua" = "xno"; then { $as_echo "$as_me:${as_lineno-$LINENO}: WARNING: unable to link against lua libraries" >&5 $as_echo "$as_me: WARNING: unable to link against lua libraries" >&2;} fi CFLAGS="$saved_CFLAGS" LIBS="$saved_LIBS" else { $as_echo "$as_me:${as_lineno-$LINENO}: WARNING: unable to locate lua package" >&5 $as_echo "$as_me: WARNING: unable to locate lua package" >&2;} fi if test "x$x_ac_have_lua" = "xyes"; then HAVE_LUA_TRUE= HAVE_LUA_FALSE='#' else HAVE_LUA_TRUE='#' HAVE_LUA_FALSE= fi if test "x$x_ac_have_lua" = "xyes" ; then if test "x$x_ac_lua_pkg_name" = "xlua5.2" ; then $as_echo "#define HAVE_LUA_5_2 1" >>confdefs.h elif test "x$x_ac_lua_pkg_name" = "xlua5.1"; then $as_echo "#define HAVE_LUA_5_1 1" >>confdefs.h fi fi { $as_echo "$as_me:${as_lineno-$LINENO}: checking whether man2html is available" >&5 $as_echo_n "checking whether man2html is available... " >&6; } # Extract the first word of "man2html", so it can be a program name with args. set dummy man2html; ac_word=$2 { $as_echo "$as_me:${as_lineno-$LINENO}: checking for $ac_word" >&5 $as_echo_n "checking for $ac_word... " >&6; } if ${ac_cv_prog_ac_have_man2html+:} false; then : $as_echo_n "(cached) " >&6 else if test -n "$ac_have_man2html"; then ac_cv_prog_ac_have_man2html="$ac_have_man2html" # Let the user override the test. else as_save_IFS=$IFS; IFS=$PATH_SEPARATOR as_dummy="$bindir:/usr/bin:/usr/local/bin" for as_dir in $as_dummy do IFS=$as_save_IFS test -z "$as_dir" && as_dir=.
for ac_exec_ext in '' $ac_executable_extensions; do if as_fn_executable_p "$as_dir/$ac_word$ac_exec_ext"; then ac_cv_prog_ac_have_man2html="yes" $as_echo "$as_me:${as_lineno-$LINENO}: found $as_dir/$ac_word$ac_exec_ext" >&5 break 2 fi done done IFS=$as_save_IFS test -z "$ac_cv_prog_ac_have_man2html" && ac_cv_prog_ac_have_man2html="no" fi fi ac_have_man2html=$ac_cv_prog_ac_have_man2html if test -n "$ac_have_man2html"; then { $as_echo "$as_me:${as_lineno-$LINENO}: result: $ac_have_man2html" >&5 $as_echo "$ac_have_man2html" >&6; } else { $as_echo "$as_me:${as_lineno-$LINENO}: result: no" >&5 $as_echo "no" >&6; } fi if test "x$ac_have_man2html" = "xyes"; then HAVE_MAN2HTML_TRUE= HAVE_MAN2HTML_FALSE='#' else HAVE_MAN2HTML_TRUE='#' HAVE_MAN2HTML_FALSE= fi if test "x$ac_have_man2html" != "xyes" ; then { $as_echo "$as_me:${as_lineno-$LINENO}: WARNING: unable to build man page html files without man2html" >&5 $as_echo "$as_me: WARNING: unable to build man page html files without man2html" >&2;} fi { $as_echo "$as_me:${as_lineno-$LINENO}: checking for support of printf(\"%s\", NULL)" >&5 $as_echo_n "checking for support of printf(\"%s\", NULL)... " >&6; } if test "$cross_compiling" = yes; then : printf_null_ok=yes else cat confdefs.h - <<_ACEOF >conftest.$ac_ext /* end confdefs.h. */ #include <stdio.h> #include <stdlib.h> int main () { char tmp[8]; char *n=NULL; snprintf(tmp,8,"%s",n); exit(0); ; return 0; } _ACEOF if ac_fn_c_try_run "$LINENO"; then : printf_null_ok=yes else printf_null_ok=no fi rm -f core *.core core.conftest.* gmon.out bb.out conftest$ac_exeext \ conftest.$ac_objext conftest.beam conftest.$ac_ext fi case "$host" in *solaris*) have_solaris=yes ;; *) have_solaris=no ;; esac if test "$printf_null_ok" = "no" -a "$have_solaris" = "yes" -a -d /usr/lib64/0@0.so.1; then as_fn_error $?
"printf(\"%s\", NULL) results in abort, upgrade to OpenSolaris release 119 or set LD_PRELOAD=/usr/lib64/0@0.so.1" "$LINENO" 5 elif test "$printf_null_ok" = "no" -a "$have_solaris" = "yes" -a -d /usr/lib/0@0.so.1; then as_fn_error $? "printf(\"%s\", NULL) results in abort, upgrade to OpenSolaris release 119 or set LD_PRELOAD=/usr/lib/0@0.so.1" "$LINENO" 5 elif test "$printf_null_ok" = "no" -a "$have_solaris" = "yes"; then as_fn_error $? "printf(\"%s\", NULL) results in abort, upgrade to OpenSolaris release 119" "$LINENO" 5 elif test "$printf_null_ok" = "no"; then as_fn_error $? "printf(\"%s\", NULL) results in abort" "$LINENO" 5 else { $as_echo "$as_me:${as_lineno-$LINENO}: result: yes" >&5 $as_echo "yes" >&6; } fi { $as_echo "$as_me:${as_lineno-$LINENO}: checking whether to include readline support" >&5 $as_echo_n "checking whether to include readline support... " >&6; } # Check whether --with-readline was given. if test "${with_readline+set}" = set; then : withval=$with_readline; case "$withval" in yes) ac_with_readline=yes ;; no) ac_with_readline=no ;; *) { $as_echo "$as_me:${as_lineno-$LINENO}: result: doh!" >&5 $as_echo "doh!" >&6; } as_fn_error $? "bad value \"$withval\" for --without-readline" "$LINENO" 5 ;; esac fi { $as_echo "$as_me:${as_lineno-$LINENO}: result: ${ac_with_readline=yes}" >&5 $as_echo "${ac_with_readline=yes}" >&6; } if test "$ac_with_readline" = "yes"; then saved_LIBS="$LIBS" READLINE_LIBS="-lreadline -lhistory $NCURSES" LIBS="$saved_LIBS $READLINE_LIBS" cat confdefs.h - <<_ACEOF >conftest.$ac_ext /* end confdefs.h.
*/ #include <stdio.h> #include <readline/readline.h> #include <readline/history.h> int main () { readline("in:"); ; return 0; } _ACEOF if ac_fn_c_try_link "$LINENO"; then : $as_echo "#define HAVE_READLINE 1" >>confdefs.h else READLINE_LIBS="" fi rm -f core conftest.err conftest.$ac_objext \ conftest$ac_exeext conftest.$ac_ext LIBS="$saved_LIBS" if test "$READLINE_LIBS" = ""; then { $as_echo "$as_me:${as_lineno-$LINENO}: WARNING: configured for readline support, but couldn't find libraries" >&5 $as_echo "$as_me: WARNING: configured for readline support, but couldn't find libraries" >&2;}; fi fi ssl_default_dirs="/usr/local/openssl64 /usr/local/openssl /usr/lib/openssl \ /usr/local/ssl /usr/lib/ssl /usr/local \ /usr/pkg /opt /opt/openssl /usr" SSL_LIB_TEST="-lcrypto" # Check whether --with-ssl was given. if test "${with_ssl+set}" = set; then : withval=$with_ssl; tryssldir=$withval # Hack around a libtool bug on AIX. # libcrypto is in a non-standard library path on AIX (/opt/freeware # which is specified with --with-ssl), and libtool is not setting # the correct runtime library path in the binaries. if test "x$ac_have_aix" = "xyes"; then SSL_LIB_TEST="-lcrypto-static" elif test "x$ac_have_nrt" = "xyes"; then # it appears on p7 machines the openssl doesn't # link correctly so we need to add -ldl SSL_LIB_TEST="$SSL_LIB_TEST -ldl" fi fi saved_LIBS="$LIBS" saved_LDFLAGS="$LDFLAGS" saved_CPPFLAGS="$CPPFLAGS" if test "x$prefix" != "xNONE" ; then tryssldir="$tryssldir $prefix" fi if test "x$tryssldir" != "xno" ; then { $as_echo "$as_me:${as_lineno-$LINENO}: checking for OpenSSL directory" >&5 $as_echo_n "checking for OpenSSL directory... " >&6; } if ${ac_cv_openssldir+:} false; then : $as_echo_n "(cached) " >&6 else for ssldir in $tryssldir "" $ssl_default_dirs; do CPPFLAGS="$saved_CPPFLAGS" LDFLAGS="$saved_LDFLAGS" LIBS="$saved_LIBS $SSL_LIB_TEST" # Skip directories if they don't exist if test ! -z "$ssldir" -a ! -d "$ssldir" ; then continue; fi sslincludedir="$ssldir" if test !
-z "$ssldir"; then # Try to use $ssldir/lib if it exists, otherwise # $ssldir if test -d "$ssldir/lib" ; then LDFLAGS="-L$ssldir/lib $saved_LDFLAGS" if test ! -z "$need_dash_r" ; then LDFLAGS="-R$ssldir/lib $LDFLAGS" fi else LDFLAGS="-L$ssldir $saved_LDFLAGS" if test ! -z "$need_dash_r" ; then LDFLAGS="-R$ssldir $LDFLAGS" fi fi # Try to use $ssldir/include if it exists, otherwise # $ssldir if test -d "$ssldir/include" ; then sslincludedir="$ssldir/include" CPPFLAGS="-I$ssldir/include $saved_CPPFLAGS" else CPPFLAGS="-I$ssldir $saved_CPPFLAGS" fi fi test -f "$sslincludedir/openssl/rand.h" || continue test -f "$sslincludedir/openssl/hmac.h" || continue test -f "$sslincludedir/openssl/sha.h" || continue # Basic test to check for compatible version and correct linking if test "$cross_compiling" = yes; then : { { $as_echo "$as_me:${as_lineno-$LINENO}: error: in \`$ac_pwd':" >&5 $as_echo "$as_me: error: in \`$ac_pwd':" >&2;} as_fn_error $? "cannot run test program while cross compiling See \`config.log' for more details" "$LINENO" 5; } else cat confdefs.h - <<_ACEOF >conftest.$ac_ext /* end confdefs.h. */ #include #include #include #include #define SIZE 8 int main(void) { int a[SIZE], i; for (i=0; i&5 $as_echo "$ac_cv_openssldir" >&6; } fi if test ! -z "$ac_have_openssl" ; then SSL_LIBS="$SSL_LIB_TEST" $as_echo "#define HAVE_OPENSSL 1" >>confdefs.h if (test ! -z "$ac_cv_openssldir") ; then ssldir=$ac_cv_openssldir if test ! -z "$ssldir" -a "x$ssldir" != "x/usr"; then # Try to use $ssldir/lib if it exists, otherwise # $ssldir if test -d "$ssldir/lib" ; then SSL_LDFLAGS="-L$ssldir/lib" else SSL_LDFLAGS="-L$ssldir" fi # Try to use $ssldir/include if it exists, otherwise # $ssldir if test -d "$ssldir/include" ; then SSL_CPPFLAGS="-I$ssldir/include" else SSL_CPPFLAGS="-I$ssldir" fi fi fi cat confdefs.h - <<_ACEOF >conftest.$ac_ext /* end confdefs.h. 
*/ #include <openssl/evp.h> int main () { EVP_MD_CTX_cleanup(NULL); ; return 0; } _ACEOF if ac_fn_c_try_link "$LINENO"; then : $as_echo "#define HAVE_EVP_MD_CTX_CLEANUP 1" >>confdefs.h fi rm -f core conftest.err conftest.$ac_objext \ conftest$ac_exeext conftest.$ac_ext else SSL_LIBS="" { $as_echo "$as_me:${as_lineno-$LINENO}: WARNING: could not find working OpenSSL library" >&5 $as_echo "$as_me: WARNING: could not find working OpenSSL library" >&2;} fi LIBS="$saved_LIBS" CPPFLAGS="$saved_CPPFLAGS" LDFLAGS="$saved_LDFLAGS" if test "x$ac_have_openssl" = "xyes"; then HAVE_OPENSSL_TRUE= HAVE_OPENSSL_FALSE='#' else HAVE_OPENSSL_TRUE='#' HAVE_OPENSSL_FALSE= fi _x_ac_munge_dirs="/usr /usr/local /opt/freeware /opt/munge" _x_ac_munge_libs="lib64 lib" # Check whether --with-munge was given. if test "${with_munge+set}" = set; then : withval=$with_munge; _x_ac_munge_dirs="$withval $_x_ac_munge_dirs" fi { $as_echo "$as_me:${as_lineno-$LINENO}: checking for munge installation" >&5 $as_echo_n "checking for munge installation... " >&6; } if ${x_ac_cv_munge_dir+:} false; then : $as_echo_n "(cached) " >&6 else for d in $_x_ac_munge_dirs; do test -d "$d" || continue test -d "$d/include" || continue test -f "$d/include/munge.h" || continue for bit in $_x_ac_munge_libs; do test -d "$d/$bit" || continue _x_ac_munge_libs_save="$LIBS" LIBS="-L$d/$bit -lmunge $LIBS" cat confdefs.h - <<_ACEOF >conftest.$ac_ext /* end confdefs.h. */ /* Override any GCC internal prototype to avoid an error. Use char because int might match the return type of a GCC builtin and then its argument prototype would still apply.
*/ #ifdef __cplusplus extern "C" #endif char munge_encode (); int main () { return munge_encode (); ; return 0; } _ACEOF if ac_fn_c_try_link "$LINENO"; then : x_ac_cv_munge_dir=$d fi rm -f core conftest.err conftest.$ac_objext \ conftest$ac_exeext conftest.$ac_ext LIBS="$_x_ac_munge_libs_save" test -n "$x_ac_cv_munge_dir" && break done test -n "$x_ac_cv_munge_dir" && break done fi { $as_echo "$as_me:${as_lineno-$LINENO}: result: $x_ac_cv_munge_dir" >&5 $as_echo "$x_ac_cv_munge_dir" >&6; } if test -z "$x_ac_cv_munge_dir"; then { $as_echo "$as_me:${as_lineno-$LINENO}: WARNING: unable to locate munge installation" >&5 $as_echo "$as_me: WARNING: unable to locate munge installation" >&2;} else MUNGE_LIBS="-lmunge" MUNGE_CPPFLAGS="-I$x_ac_cv_munge_dir/include" MUNGE_DIR="$x_ac_cv_munge_dir" if test "$ac_with_rpath" = "yes"; then MUNGE_LDFLAGS="-Wl,-rpath -Wl,$x_ac_cv_munge_dir/$bit -L$x_ac_cv_munge_dir/$bit" else MUNGE_LDFLAGS="-L$x_ac_cv_munge_dir/$bit" fi fi if test -n "$x_ac_cv_munge_dir"; then WITH_MUNGE_TRUE= WITH_MUNGE_FALSE='#' else WITH_MUNGE_TRUE='#' WITH_MUNGE_FALSE= fi { $as_echo "$as_me:${as_lineno-$LINENO}: checking whether to enable multiple-slurmd support" >&5 $as_echo_n "checking whether to enable multiple-slurmd support... " >&6; } # Check whether --enable-multiple-slurmd was given. if test "${enable_multiple_slurmd+set}" = set; then : enableval=$enable_multiple_slurmd; case "$enableval" in yes) multiple_slurmd=yes ;; no) multiple_slurmd=no ;; *) as_fn_error $? 
"bad value \"$enableval\" for --enable-multiple-slurmd" "$LINENO" 5;; esac fi if test "x$multiple_slurmd" = "xyes"; then $as_echo "#define MULTIPLE_SLURMD 1" >>confdefs.h { $as_echo "$as_me:${as_lineno-$LINENO}: result: yes" >&5 $as_echo "yes" >&6; } else { $as_echo "$as_me:${as_lineno-$LINENO}: result: no" >&5 $as_echo "no" >&6; } fi AUTHD_LIBS="-lauth -le" savedLIBS="$LIBS" savedCFLAGS="$CFLAGS" LIBS="$SSL_LIBS $AUTHD_LIBS $LIBS" CFLAGS="$SSL_CPPFLAGS $CFLAGS" { $as_echo "$as_me:${as_lineno-$LINENO}: checking for auth_init_credentials in -lauth" >&5 $as_echo_n "checking for auth_init_credentials in -lauth... " >&6; } if ${ac_cv_lib_auth_auth_init_credentials+:} false; then : $as_echo_n "(cached) " >&6 else ac_check_lib_save_LIBS=$LIBS LIBS="-lauth $LIBS" cat confdefs.h - <<_ACEOF >conftest.$ac_ext /* end confdefs.h. */ /* Override any GCC internal prototype to avoid an error. Use char because int might match the return type of a GCC builtin and then its argument prototype would still apply. */ #ifdef __cplusplus extern "C" #endif char auth_init_credentials (); int main () { return auth_init_credentials (); ; return 0; } _ACEOF if ac_fn_c_try_link "$LINENO"; then : ac_cv_lib_auth_auth_init_credentials=yes else ac_cv_lib_auth_auth_init_credentials=no fi rm -f core conftest.err conftest.$ac_objext \ conftest$ac_exeext conftest.$ac_ext LIBS=$ac_check_lib_save_LIBS fi { $as_echo "$as_me:${as_lineno-$LINENO}: result: $ac_cv_lib_auth_auth_init_credentials" >&5 $as_echo "$ac_cv_lib_auth_auth_init_credentials" >&6; } if test "x$ac_cv_lib_auth_auth_init_credentials" = xyes; then : have_authd=yes else have_authd=no fi if test "x$have_authd" = "xyes"; then WITH_AUTHD_TRUE= WITH_AUTHD_FALSE='#' else WITH_AUTHD_TRUE='#' WITH_AUTHD_FALSE= fi LIBS="$savedLIBS" CFLAGS="$savedCFLAGS" savedLIBS="$LIBS" LIBS="-lutil $LIBS" { $as_echo "$as_me:${as_lineno-$LINENO}: checking for openpty in -lutil" >&5 $as_echo_n "checking for openpty in -lutil... 
" >&6; } if ${ac_cv_lib_util_openpty+:} false; then : $as_echo_n "(cached) " >&6 else ac_check_lib_save_LIBS=$LIBS LIBS="-lutil $LIBS" cat confdefs.h - <<_ACEOF >conftest.$ac_ext /* end confdefs.h. */ /* Override any GCC internal prototype to avoid an error. Use char because int might match the return type of a GCC builtin and then its argument prototype would still apply. */ #ifdef __cplusplus extern "C" #endif char openpty (); int main () { return openpty (); ; return 0; } _ACEOF if ac_fn_c_try_link "$LINENO"; then : ac_cv_lib_util_openpty=yes else ac_cv_lib_util_openpty=no fi rm -f core conftest.err conftest.$ac_objext \ conftest$ac_exeext conftest.$ac_ext LIBS=$ac_check_lib_save_LIBS fi { $as_echo "$as_me:${as_lineno-$LINENO}: result: $ac_cv_lib_util_openpty" >&5 $as_echo "$ac_cv_lib_util_openpty" >&6; } if test "x$ac_cv_lib_util_openpty" = xyes; then : UTIL_LIBS="-lutil" fi LIBS="$savedLIBS" _x_ac_blcr_dirs="/usr /usr/local /opt/freeware /opt/blcr" _x_ac_blcr_libs="lib64 lib" # Check whether --with-blcr was given. if test "${with_blcr+set}" = set; then : withval=$with_blcr; _x_ac_blcr_dirs="$withval $_x_ac_blcr_dirs" fi { $as_echo "$as_me:${as_lineno-$LINENO}: checking for blcr installation" >&5 $as_echo_n "checking for blcr installation... " >&6; } if ${x_ac_cv_blcr_dir+:} false; then : $as_echo_n "(cached) " >&6 else for d in $_x_ac_blcr_dirs; do test -d "$d" || continue test -d "$d/include" || continue test -f "$d/include/libcr.h" || continue for bit in $_x_ac_blcr_libs; do test -d "$d/$bit" || continue _x_ac_blcr_libs_save="$LIBS" LIBS="-L$d/$bit -lcr $LIBS" cat confdefs.h - <<_ACEOF >conftest.$ac_ext /* end confdefs.h. */ /* Override any GCC internal prototype to avoid an error. Use char because int might match the return type of a GCC builtin and then its argument prototype would still apply. 
*/ #ifdef __cplusplus extern "C" #endif char cr_get_restart_info (); int main () { return cr_get_restart_info (); ; return 0; } _ACEOF if ac_fn_c_try_link "$LINENO"; then : x_ac_cv_blcr_dir=$d fi rm -f core conftest.err conftest.$ac_objext \ conftest$ac_exeext conftest.$ac_ext LIBS="$_x_ac_blcr_libs_save" test -n "$x_ac_cv_blcr_dir" && break done test -n "$x_ac_cv_blcr_dir" && break done fi { $as_echo "$as_me:${as_lineno-$LINENO}: result: $x_ac_cv_blcr_dir" >&5 $as_echo "$x_ac_cv_blcr_dir" >&6; } if test -z "$x_ac_cv_blcr_dir"; then { $as_echo "$as_me:${as_lineno-$LINENO}: WARNING: unable to locate blcr installation" >&5 $as_echo "$as_me: WARNING: unable to locate blcr installation" >&2;} else BLCR_HOME="$x_ac_cv_blcr_dir" BLCR_LIBS="-lcr" BLCR_CPPFLAGS="-I$x_ac_cv_blcr_dir/include" BLCR_LDFLAGS="-L$x_ac_cv_blcr_dir/$bit" fi cat >>confdefs.h <<_ACEOF #define BLCR_HOME "$x_ac_cv_blcr_dir" _ACEOF if test -n "$x_ac_cv_blcr_dir"; then WITH_BLCR_TRUE= WITH_BLCR_FALSE='#' else WITH_BLCR_TRUE='#' WITH_BLCR_FALSE= fi # Check whether --with-libcurl was given. if test "${with_libcurl+set}" = set; then : withval=$with_libcurl; _libcurl_with=$withval else _libcurl_with=yes fi if test "$_libcurl_with" != "no" ; then for ac_prog in gawk mawk nawk awk do # Extract the first word of "$ac_prog", so it can be a program name with args. set dummy $ac_prog; ac_word=$2 { $as_echo "$as_me:${as_lineno-$LINENO}: checking for $ac_word" >&5 $as_echo_n "checking for $ac_word... " >&6; } if ${ac_cv_prog_AWK+:} false; then : $as_echo_n "(cached) " >&6 else if test -n "$AWK"; then ac_cv_prog_AWK="$AWK" # Let the user override the test. else as_save_IFS=$IFS; IFS=$PATH_SEPARATOR for as_dir in $PATH do IFS=$as_save_IFS test -z "$as_dir" && as_dir=. 
for ac_exec_ext in '' $ac_executable_extensions; do if as_fn_executable_p "$as_dir/$ac_word$ac_exec_ext"; then ac_cv_prog_AWK="$ac_prog" $as_echo "$as_me:${as_lineno-$LINENO}: found $as_dir/$ac_word$ac_exec_ext" >&5 break 2 fi done done IFS=$as_save_IFS fi fi AWK=$ac_cv_prog_AWK if test -n "$AWK"; then { $as_echo "$as_me:${as_lineno-$LINENO}: result: $AWK" >&5 $as_echo "$AWK" >&6; } else { $as_echo "$as_me:${as_lineno-$LINENO}: result: no" >&5 $as_echo "no" >&6; } fi test -n "$AWK" && break done _libcurl_version_parse="eval $AWK '{split(\$NF,A,\".\"); X=256*256*A[1]+256*A[2]+A[3]; print X;}'" _libcurl_try_link=yes if test -d "$_libcurl_with" ; then LIBCURL_CPPFLAGS="-I$withval/include" _libcurl_ldflags="-L$withval/lib" # Extract the first word of "curl-config", so it can be a program name with args. set dummy curl-config; ac_word=$2 { $as_echo "$as_me:${as_lineno-$LINENO}: checking for $ac_word" >&5 $as_echo_n "checking for $ac_word... " >&6; } if ${ac_cv_path__libcurl_config+:} false; then : $as_echo_n "(cached) " >&6 else case $_libcurl_config in [\\/]* | ?:[\\/]*) ac_cv_path__libcurl_config="$_libcurl_config" # Let the user override the test with a path. ;; *) as_save_IFS=$IFS; IFS=$PATH_SEPARATOR for as_dir in "$withval/bin" do IFS=$as_save_IFS test -z "$as_dir" && as_dir=. for ac_exec_ext in '' $ac_executable_extensions; do if as_fn_executable_p "$as_dir/$ac_word$ac_exec_ext"; then ac_cv_path__libcurl_config="$as_dir/$ac_word$ac_exec_ext" $as_echo "$as_me:${as_lineno-$LINENO}: found $as_dir/$ac_word$ac_exec_ext" >&5 break 2 fi done done IFS=$as_save_IFS ;; esac fi _libcurl_config=$ac_cv_path__libcurl_config if test -n "$_libcurl_config"; then { $as_echo "$as_me:${as_lineno-$LINENO}: result: $_libcurl_config" >&5 $as_echo "$_libcurl_config" >&6; } else { $as_echo "$as_me:${as_lineno-$LINENO}: result: no" >&5 $as_echo "no" >&6; } fi else # Extract the first word of "curl-config", so it can be a program name with args. 
set dummy curl-config; ac_word=$2 { $as_echo "$as_me:${as_lineno-$LINENO}: checking for $ac_word" >&5 $as_echo_n "checking for $ac_word... " >&6; } if ${ac_cv_path__libcurl_config+:} false; then : $as_echo_n "(cached) " >&6 else case $_libcurl_config in [\\/]* | ?:[\\/]*) ac_cv_path__libcurl_config="$_libcurl_config" # Let the user override the test with a path. ;; *) as_save_IFS=$IFS; IFS=$PATH_SEPARATOR for as_dir in $PATH do IFS=$as_save_IFS test -z "$as_dir" && as_dir=. for ac_exec_ext in '' $ac_executable_extensions; do if as_fn_executable_p "$as_dir/$ac_word$ac_exec_ext"; then ac_cv_path__libcurl_config="$as_dir/$ac_word$ac_exec_ext" $as_echo "$as_me:${as_lineno-$LINENO}: found $as_dir/$ac_word$ac_exec_ext" >&5 break 2 fi done done IFS=$as_save_IFS ;; esac fi _libcurl_config=$ac_cv_path__libcurl_config if test -n "$_libcurl_config"; then { $as_echo "$as_me:${as_lineno-$LINENO}: result: $_libcurl_config" >&5 $as_echo "$_libcurl_config" >&6; } else { $as_echo "$as_me:${as_lineno-$LINENO}: result: no" >&5 $as_echo "no" >&6; } fi fi if test x$_libcurl_config != "x" ; then { $as_echo "$as_me:${as_lineno-$LINENO}: checking for the version of libcurl" >&5 $as_echo_n "checking for the version of libcurl... " >&6; } if ${libcurl_cv_lib_curl_version+:} false; then : $as_echo_n "(cached) " >&6 else libcurl_cv_lib_curl_version=`$_libcurl_config --version | $AWK '{print $2}'` fi { $as_echo "$as_me:${as_lineno-$LINENO}: result: $libcurl_cv_lib_curl_version" >&5 $as_echo "$libcurl_cv_lib_curl_version" >&6; } _libcurl_version=`echo $libcurl_cv_lib_curl_version | $_libcurl_version_parse` _libcurl_wanted=`echo 0 | $_libcurl_version_parse` if test $_libcurl_wanted -gt 0 ; then { $as_echo "$as_me:${as_lineno-$LINENO}: checking for libcurl >= version " >&5 $as_echo_n "checking for libcurl >= version ... 
" >&6; } if ${libcurl_cv_lib_version_ok+:} false; then : $as_echo_n "(cached) " >&6 else if test $_libcurl_version -ge $_libcurl_wanted ; then libcurl_cv_lib_version_ok=yes else libcurl_cv_lib_version_ok=no fi fi { $as_echo "$as_me:${as_lineno-$LINENO}: result: $libcurl_cv_lib_version_ok" >&5 $as_echo "$libcurl_cv_lib_version_ok" >&6; } fi if test $_libcurl_wanted -eq 0 || test x$libcurl_cv_lib_version_ok = xyes ; then if test x"$LIBCURL_CPPFLAGS" = "x" ; then LIBCURL_CPPFLAGS=`$_libcurl_config --cflags` fi if test x"$LIBCURL" = "x" ; then LIBCURL=`$_libcurl_config --libs` # This is so silly, but Apple actually has a bug in their # curl-config script. Fixed in Tiger, but there are still # lots of Panther installs around. case "${host}" in powerpc-apple-darwin7*) LIBCURL=`echo $LIBCURL | sed -e 's|-arch i386||g'` ;; esac fi # All curl-config scripts support --feature _libcurl_features=`$_libcurl_config --feature` # Is it modern enough to have --protocols? (7.12.4) if test $_libcurl_version -ge 461828 ; then _libcurl_protocols=`$_libcurl_config --protocols` fi else _libcurl_try_link=no fi unset _libcurl_wanted fi if test $_libcurl_try_link = yes ; then # we didn't find curl-config, so let's see if the user-supplied # link line (or failing that, "-lcurl") is enough. LIBCURL=${LIBCURL-"$_libcurl_ldflags -lcurl"} { $as_echo "$as_me:${as_lineno-$LINENO}: checking whether libcurl is usable" >&5 $as_echo_n "checking whether libcurl is usable... " >&6; } if ${libcurl_cv_lib_curl_usable+:} false; then : $as_echo_n "(cached) " >&6 else _libcurl_save_cppflags=$CPPFLAGS CPPFLAGS="$LIBCURL_CPPFLAGS $CPPFLAGS" _libcurl_save_libs=$LIBS LIBS="$LIBCURL $LIBS" cat confdefs.h - <<_ACEOF >conftest.$ac_ext /* end confdefs.h. */ #include <curl/curl.h> int main () { /* Try and use a few common options to force a failure if we are missing symbols or can't link.
 */
int x;
curl_easy_setopt(NULL,CURLOPT_URL,NULL);
x=CURL_ERROR_SIZE;
x=CURLOPT_WRITEFUNCTION;
x=CURLOPT_WRITEDATA;
x=CURLOPT_ERRORBUFFER;
x=CURLOPT_STDERR;
x=CURLOPT_VERBOSE;
if (x) ;

  ;
  return 0;
}
_ACEOF
if ac_fn_c_try_link "$LINENO"; then :
  libcurl_cv_lib_curl_usable=yes
else
  libcurl_cv_lib_curl_usable=no
fi
rm -f core conftest.err conftest.$ac_objext \
    conftest$ac_exeext conftest.$ac_ext

        CPPFLAGS=$_libcurl_save_cppflags
        LIBS=$_libcurl_save_libs
        unset _libcurl_save_cppflags
        unset _libcurl_save_libs

fi
{ $as_echo "$as_me:${as_lineno-$LINENO}: result: $libcurl_cv_lib_curl_usable" >&5
$as_echo "$libcurl_cv_lib_curl_usable" >&6; }

     if test $libcurl_cv_lib_curl_usable = yes ; then

        # Does curl_free() exist in this version of libcurl?
        # If not, fake it with free()

        _libcurl_save_cppflags=$CPPFLAGS
        CPPFLAGS="$CPPFLAGS $LIBCURL_CPPFLAGS"
        _libcurl_save_libs=$LIBS
        LIBS="$LIBS $LIBCURL"

        ac_fn_c_check_func "$LINENO" "curl_free" "ac_cv_func_curl_free"
if test "x$ac_cv_func_curl_free" = xyes; then :

else

           $as_echo "#define curl_free free" >>confdefs.h

fi

        CPPFLAGS=$_libcurl_save_cppflags
        LIBS=$_libcurl_save_libs
        unset _libcurl_save_cppflags
        unset _libcurl_save_libs

$as_echo "#define HAVE_LIBCURL 1" >>confdefs.h

        for _libcurl_feature in $_libcurl_features ; do

cat >>confdefs.h <<_ACEOF
#define `$as_echo "libcurl_feature_$_libcurl_feature" | $as_tr_cpp` 1
_ACEOF

           eval `$as_echo "libcurl_feature_$_libcurl_feature" | $as_tr_sh`=yes
        done

        if test "x$_libcurl_protocols" = "x" ; then

           # We don't have --protocols, so just assume that all
           # protocols are available
           _libcurl_protocols="HTTP FTP FILE TELNET LDAP DICT TFTP"

           if test x$libcurl_feature_SSL = xyes ; then
              _libcurl_protocols="$_libcurl_protocols HTTPS"

              # FTPS wasn't standards-compliant until version
              # 7.11.0 (0x070b00 == 461568)
              if test $_libcurl_version -ge 461568; then
                 _libcurl_protocols="$_libcurl_protocols FTPS"
              fi
           fi

           # RTSP, IMAP, POP3 and SMTP were added in
           # 7.20.0 (0x071400 == 463872)
           if test $_libcurl_version -ge 463872; then
_libcurl_protocols="$_libcurl_protocols RTSP IMAP POP3 SMTP" fi fi for _libcurl_protocol in $_libcurl_protocols ; do cat >>confdefs.h <<_ACEOF #define `$as_echo "libcurl_protocol_$_libcurl_protocol" | $as_tr_cpp` 1 _ACEOF eval `$as_echo "libcurl_protocol_$_libcurl_protocol" | $as_tr_sh`=yes done else unset LIBCURL unset LIBCURL_CPPFLAGS fi fi unset _libcurl_try_link unset _libcurl_version_parse unset _libcurl_config unset _libcurl_feature unset _libcurl_features unset _libcurl_protocol unset _libcurl_protocols unset _libcurl_version unset _libcurl_ldflags fi if test x$_libcurl_with = xno || test x$libcurl_cv_lib_curl_usable != xyes ; then # This is the IF-NO path : else # This is the IF-YES path : fi if test x$_libcurl_with = xyes && test x$libcurl_cv_lib_curl_usable = xyes; then WITH_CURL_TRUE= WITH_CURL_FALSE='#' else WITH_CURL_TRUE='#' WITH_CURL_FALSE= fi unset _libcurl_with ac_build_smap="no" if test "x$ac_have_some_curses" = "xyes" ; then ac_build_smap="yes" fi if test "x$ac_build_smap" = "xyes"; then BUILD_SMAP_TRUE= BUILD_SMAP_FALSE='#' else BUILD_SMAP_TRUE='#' BUILD_SMAP_FALSE= fi ac_config_files="$ac_config_files Makefile config.xml auxdir/Makefile contribs/Makefile contribs/cray/Makefile contribs/cray/csm/Makefile contribs/lua/Makefile contribs/mic/Makefile contribs/pam/Makefile contribs/pam_slurm_adopt/Makefile contribs/perlapi/Makefile contribs/perlapi/libslurm/Makefile contribs/perlapi/libslurm/perl/Makefile.PL contribs/perlapi/libslurmdb/Makefile contribs/perlapi/libslurmdb/perl/Makefile.PL contribs/torque/Makefile contribs/phpext/Makefile contribs/phpext/slurm_php/config.m4 contribs/sgather/Makefile contribs/sgi/Makefile contribs/sjobexit/Makefile contribs/slurmdb-direct/Makefile contribs/pmi2/Makefile doc/Makefile doc/man/Makefile doc/man/man1/Makefile doc/man/man3/Makefile doc/man/man5/Makefile doc/man/man8/Makefile doc/html/Makefile doc/html/configurator.html doc/html/configurator.easy.html etc/cgroup.release_common.example etc/init.d.slurm 
etc/init.d.slurmdbd etc/slurmctld.service etc/slurmd.service etc/slurmdbd.service src/Makefile src/api/Makefile src/common/Makefile src/db_api/Makefile src/layouts/Makefile src/layouts/power/Makefile src/layouts/unit/Makefile src/database/Makefile src/sacct/Makefile src/sacctmgr/Makefile src/sreport/Makefile src/salloc/Makefile src/sbatch/Makefile src/sbcast/Makefile src/sattach/Makefile src/scancel/Makefile src/scontrol/Makefile src/sdiag/Makefile src/sinfo/Makefile src/slurmctld/Makefile src/slurmd/Makefile src/slurmd/common/Makefile src/slurmd/slurmd/Makefile src/slurmd/slurmstepd/Makefile src/slurmdbd/Makefile src/smap/Makefile src/smd/Makefile src/sprio/Makefile src/squeue/Makefile src/srun/Makefile src/srun/libsrun/Makefile src/srun_cr/Makefile src/sshare/Makefile src/sstat/Makefile src/strigger/Makefile src/sview/Makefile src/plugins/Makefile src/plugins/accounting_storage/Makefile src/plugins/accounting_storage/common/Makefile src/plugins/accounting_storage/filetxt/Makefile src/plugins/accounting_storage/mysql/Makefile src/plugins/accounting_storage/none/Makefile src/plugins/accounting_storage/slurmdbd/Makefile src/plugins/acct_gather_energy/Makefile src/plugins/acct_gather_energy/cray/Makefile src/plugins/acct_gather_energy/rapl/Makefile src/plugins/acct_gather_energy/ibmaem/Makefile src/plugins/acct_gather_energy/ipmi/Makefile src/plugins/acct_gather_energy/none/Makefile src/plugins/acct_gather_infiniband/Makefile src/plugins/acct_gather_infiniband/ofed/Makefile src/plugins/acct_gather_infiniband/none/Makefile src/plugins/acct_gather_filesystem/Makefile src/plugins/acct_gather_filesystem/lustre/Makefile src/plugins/acct_gather_filesystem/none/Makefile src/plugins/acct_gather_profile/Makefile src/plugins/acct_gather_profile/hdf5/Makefile src/plugins/acct_gather_profile/hdf5/sh5util/Makefile src/plugins/acct_gather_profile/hdf5/sh5util/libsh5util_old/Makefile src/plugins/acct_gather_profile/none/Makefile src/plugins/auth/Makefile 
src/plugins/auth/authd/Makefile src/plugins/auth/munge/Makefile src/plugins/auth/none/Makefile src/plugins/burst_buffer/Makefile src/plugins/burst_buffer/common/Makefile src/plugins/burst_buffer/cray/Makefile src/plugins/burst_buffer/generic/Makefile src/plugins/checkpoint/Makefile src/plugins/checkpoint/aix/Makefile src/plugins/checkpoint/blcr/Makefile src/plugins/checkpoint/blcr/cr_checkpoint.sh src/plugins/checkpoint/blcr/cr_restart.sh src/plugins/checkpoint/none/Makefile src/plugins/checkpoint/ompi/Makefile src/plugins/checkpoint/poe/Makefile src/plugins/core_spec/Makefile src/plugins/core_spec/cray/Makefile src/plugins/core_spec/none/Makefile src/plugins/crypto/Makefile src/plugins/crypto/munge/Makefile src/plugins/crypto/openssl/Makefile src/plugins/ext_sensors/Makefile src/plugins/ext_sensors/rrd/Makefile src/plugins/ext_sensors/none/Makefile src/plugins/gres/Makefile src/plugins/gres/gpu/Makefile src/plugins/gres/nic/Makefile src/plugins/gres/mic/Makefile src/plugins/jobacct_gather/Makefile src/plugins/jobacct_gather/common/Makefile src/plugins/jobacct_gather/linux/Makefile src/plugins/jobacct_gather/aix/Makefile src/plugins/jobacct_gather/cgroup/Makefile src/plugins/jobacct_gather/none/Makefile src/plugins/jobcomp/Makefile src/plugins/jobcomp/elasticsearch/Makefile src/plugins/jobcomp/filetxt/Makefile src/plugins/jobcomp/none/Makefile src/plugins/jobcomp/script/Makefile src/plugins/jobcomp/mysql/Makefile src/plugins/job_container/Makefile src/plugins/job_container/cncu/Makefile src/plugins/job_container/none/Makefile src/plugins/job_submit/Makefile src/plugins/job_submit/all_partitions/Makefile src/plugins/job_submit/cnode/Makefile src/plugins/job_submit/cray/Makefile src/plugins/job_submit/defaults/Makefile src/plugins/job_submit/logging/Makefile src/plugins/job_submit/lua/Makefile src/plugins/job_submit/partition/Makefile src/plugins/job_submit/pbs/Makefile src/plugins/job_submit/require_timelimit/Makefile src/plugins/job_submit/throttle/Makefile 
src/plugins/launch/Makefile src/plugins/launch/aprun/Makefile src/plugins/launch/poe/Makefile src/plugins/launch/runjob/Makefile src/plugins/launch/slurm/Makefile src/plugins/power/Makefile src/plugins/power/common/Makefile src/plugins/power/cray/Makefile src/plugins/power/none/Makefile src/plugins/preempt/Makefile src/plugins/preempt/job_prio/Makefile src/plugins/preempt/none/Makefile src/plugins/preempt/partition_prio/Makefile src/plugins/preempt/qos/Makefile src/plugins/priority/Makefile src/plugins/priority/basic/Makefile src/plugins/priority/multifactor/Makefile src/plugins/proctrack/Makefile src/plugins/proctrack/aix/Makefile src/plugins/proctrack/cray/Makefile src/plugins/proctrack/cgroup/Makefile src/plugins/proctrack/pgid/Makefile src/plugins/proctrack/linuxproc/Makefile src/plugins/proctrack/sgi_job/Makefile src/plugins/proctrack/lua/Makefile src/plugins/route/Makefile src/plugins/route/default/Makefile src/plugins/route/topology/Makefile src/plugins/sched/Makefile src/plugins/sched/backfill/Makefile src/plugins/sched/builtin/Makefile src/plugins/sched/hold/Makefile src/plugins/sched/wiki/Makefile src/plugins/sched/wiki2/Makefile src/plugins/select/Makefile src/plugins/select/alps/Makefile src/plugins/select/alps/libalps/Makefile src/plugins/select/alps/libemulate/Makefile src/plugins/select/bluegene/Makefile src/plugins/select/bluegene/ba/Makefile src/plugins/select/bluegene/ba_bgq/Makefile src/plugins/select/bluegene/bl/Makefile src/plugins/select/bluegene/bl_bgq/Makefile src/plugins/select/bluegene/sfree/Makefile src/plugins/select/cons_res/Makefile src/plugins/select/cray/Makefile src/plugins/select/linear/Makefile src/plugins/select/other/Makefile src/plugins/select/serial/Makefile src/plugins/slurmctld/Makefile src/plugins/slurmctld/nonstop/Makefile src/plugins/slurmd/Makefile src/plugins/switch/Makefile src/plugins/switch/cray/Makefile src/plugins/switch/generic/Makefile src/plugins/switch/none/Makefile src/plugins/switch/nrt/Makefile 
src/plugins/switch/nrt/libpermapi/Makefile src/plugins/mpi/Makefile src/plugins/mpi/mpich1_p4/Makefile src/plugins/mpi/mpich1_shmem/Makefile src/plugins/mpi/mpichgm/Makefile src/plugins/mpi/mpichmx/Makefile src/plugins/mpi/mvapich/Makefile src/plugins/mpi/lam/Makefile src/plugins/mpi/none/Makefile src/plugins/mpi/openmpi/Makefile src/plugins/mpi/pmi2/Makefile src/plugins/task/Makefile src/plugins/task/affinity/Makefile src/plugins/task/cgroup/Makefile src/plugins/task/cray/Makefile src/plugins/task/none/Makefile src/plugins/topology/Makefile src/plugins/topology/3d_torus/Makefile src/plugins/topology/hypercube/Makefile src/plugins/topology/node_rank/Makefile src/plugins/topology/none/Makefile src/plugins/topology/tree/Makefile testsuite/Makefile testsuite/expect/Makefile testsuite/slurm_unit/Makefile testsuite/slurm_unit/api/Makefile testsuite/slurm_unit/api/manual/Makefile testsuite/slurm_unit/common/Makefile" cat >confcache <<\_ACEOF # This file is a shell script that caches the results of configure # tests run on this system so they can be shared between configure # scripts and configure runs, see configure's option --config-cache. # It is not useful on other systems. If it contains results you don't # want to keep, you may remove or edit it. # # config.status only pays attention to the cache file if you give it # the --recheck option to rerun configure. # # `ac_cv_env_foo' variables (set or unset) will be overridden when # loading this file, other *unset* `ac_cv_foo' will be assigned the # following values. _ACEOF # The following way of writing the cache mishandles newlines in values, # but we know of no workaround that is simple, portable, and efficient. # So, we kill variables containing newlines. # Ultrix sh set writes to stderr and can't be redirected directly, # and sets the high bit in the cache file unless we assign to the vars. 
( for ac_var in `(set) 2>&1 | sed -n 's/^\([a-zA-Z_][a-zA-Z0-9_]*\)=.*/\1/p'`; do eval ac_val=\$$ac_var case $ac_val in #( *${as_nl}*) case $ac_var in #( *_cv_*) { $as_echo "$as_me:${as_lineno-$LINENO}: WARNING: cache variable $ac_var contains a newline" >&5 $as_echo "$as_me: WARNING: cache variable $ac_var contains a newline" >&2;} ;; esac case $ac_var in #( _ | IFS | as_nl) ;; #( BASH_ARGV | BASH_SOURCE) eval $ac_var= ;; #( *) { eval $ac_var=; unset $ac_var;} ;; esac ;; esac done (set) 2>&1 | case $as_nl`(ac_space=' '; set) 2>&1` in #( *${as_nl}ac_space=\ *) # `set' does not quote correctly, so add quotes: double-quote # substitution turns \\\\ into \\, and sed turns \\ into \. sed -n \ "s/'/'\\\\''/g; s/^\\([_$as_cr_alnum]*_cv_[_$as_cr_alnum]*\\)=\\(.*\\)/\\1='\\2'/p" ;; #( *) # `set' quotes correctly as required by POSIX, so do not add quotes. sed -n "/^[_$as_cr_alnum]*_cv_[_$as_cr_alnum]*=/p" ;; esac | sort ) | sed ' /^ac_cv_env_/b end t clear :clear s/^\([^=]*\)=\(.*[{}].*\)$/test "${\1+set}" = set || &/ t end s/^\([^=]*\)=\(.*\)$/\1=${\1=\2}/ :end' >>confcache if diff "$cache_file" confcache >/dev/null 2>&1; then :; else if test -w "$cache_file"; then if test "x$cache_file" != "x/dev/null"; then { $as_echo "$as_me:${as_lineno-$LINENO}: updating cache $cache_file" >&5 $as_echo "$as_me: updating cache $cache_file" >&6;} if test ! -f "$cache_file" || test -h "$cache_file"; then cat confcache >"$cache_file" else case $cache_file in #( */* | ?:*) mv -f confcache "$cache_file"$$ && mv -f "$cache_file"$$ "$cache_file" ;; #( *) mv -f confcache "$cache_file" ;; esac fi fi else { $as_echo "$as_me:${as_lineno-$LINENO}: not updating unwritable cache $cache_file" >&5 $as_echo "$as_me: not updating unwritable cache $cache_file" >&6;} fi fi rm -f confcache test "x$prefix" = xNONE && prefix=$ac_default_prefix # Let make expand exec_prefix. 
test "x$exec_prefix" = xNONE && exec_prefix='${prefix}' DEFS=-DHAVE_CONFIG_H ac_libobjs= ac_ltlibobjs= U= for ac_i in : $LIBOBJS; do test "x$ac_i" = x: && continue # 1. Remove the extension, and $U if already installed. ac_script='s/\$U\././;s/\.o$//;s/\.obj$//' ac_i=`$as_echo "$ac_i" | sed "$ac_script"` # 2. Prepend LIBOBJDIR. When used with automake>=1.10 LIBOBJDIR # will be set to the directory where LIBOBJS objects are built. as_fn_append ac_libobjs " \${LIBOBJDIR}$ac_i\$U.$ac_objext" as_fn_append ac_ltlibobjs " \${LIBOBJDIR}$ac_i"'$U.lo' done LIBOBJS=$ac_libobjs LTLIBOBJS=$ac_ltlibobjs if test -z "${DONT_BUILD_TRUE}" && test -z "${DONT_BUILD_FALSE}"; then as_fn_error $? "conditional \"DONT_BUILD\" was never defined. Usually this means the macro was only invoked conditionally." "$LINENO" 5 fi { $as_echo "$as_me:${as_lineno-$LINENO}: checking that generated files are newer than configure" >&5 $as_echo_n "checking that generated files are newer than configure... " >&6; } if test -n "$am_sleep_pid"; then # Hide warnings about reused PIDs. wait $am_sleep_pid 2>/dev/null fi { $as_echo "$as_me:${as_lineno-$LINENO}: result: done" >&5 $as_echo "done" >&6; } if test -n "$EXEEXT"; then am__EXEEXT_TRUE= am__EXEEXT_FALSE='#' else am__EXEEXT_TRUE='#' am__EXEEXT_FALSE= fi if test -z "${MAINTAINER_MODE_TRUE}" && test -z "${MAINTAINER_MODE_FALSE}"; then as_fn_error $? "conditional \"MAINTAINER_MODE\" was never defined. Usually this means the macro was only invoked conditionally." "$LINENO" 5 fi if test -z "${AMDEP_TRUE}" && test -z "${AMDEP_FALSE}"; then as_fn_error $? "conditional \"AMDEP\" was never defined. Usually this means the macro was only invoked conditionally." "$LINENO" 5 fi if test -z "${am__fastdepCC_TRUE}" && test -z "${am__fastdepCC_FALSE}"; then as_fn_error $? "conditional \"am__fastdepCC\" was never defined. Usually this means the macro was only invoked conditionally." 
"$LINENO" 5 fi if test -z "${BGL_LOADED_TRUE}" && test -z "${BGL_LOADED_FALSE}"; then as_fn_error $? "conditional \"BGL_LOADED\" was never defined. Usually this means the macro was only invoked conditionally." "$LINENO" 5 fi if test -z "${BG_L_P_LOADED_TRUE}" && test -z "${BG_L_P_LOADED_FALSE}"; then as_fn_error $? "conditional \"BG_L_P_LOADED\" was never defined. Usually this means the macro was only invoked conditionally." "$LINENO" 5 fi if test -z "${REAL_BG_L_P_LOADED_TRUE}" && test -z "${REAL_BG_L_P_LOADED_FALSE}"; then as_fn_error $? "conditional \"REAL_BG_L_P_LOADED\" was never defined. Usually this means the macro was only invoked conditionally." "$LINENO" 5 fi if test -z "${am__fastdepCXX_TRUE}" && test -z "${am__fastdepCXX_FALSE}"; then as_fn_error $? "conditional \"am__fastdepCXX\" was never defined. Usually this means the macro was only invoked conditionally." "$LINENO" 5 fi if test -z "${BGQ_LOADED_TRUE}" && test -z "${BGQ_LOADED_FALSE}"; then as_fn_error $? "conditional \"BGQ_LOADED\" was never defined. Usually this means the macro was only invoked conditionally." "$LINENO" 5 fi if test -z "${REAL_BGQ_LOADED_TRUE}" && test -z "${REAL_BGQ_LOADED_FALSE}"; then as_fn_error $? "conditional \"REAL_BGQ_LOADED\" was never defined. Usually this means the macro was only invoked conditionally." "$LINENO" 5 fi if test -z "${BLUEGENE_LOADED_TRUE}" && test -z "${BLUEGENE_LOADED_FALSE}"; then as_fn_error $? "conditional \"BLUEGENE_LOADED\" was never defined. Usually this means the macro was only invoked conditionally." "$LINENO" 5 fi if test -z "${HAVE_AIX_TRUE}" && test -z "${HAVE_AIX_FALSE}"; then as_fn_error $? "conditional \"HAVE_AIX\" was never defined. Usually this means the macro was only invoked conditionally." "$LINENO" 5 fi if test -z "${HAVE_AIX_PROCTRACK_TRUE}" && test -z "${HAVE_AIX_PROCTRACK_FALSE}"; then as_fn_error $? "conditional \"HAVE_AIX_PROCTRACK\" was never defined. Usually this means the macro was only invoked conditionally." 
"$LINENO" 5 fi if test -z "${WITH_CYGWIN_TRUE}" && test -z "${WITH_CYGWIN_FALSE}"; then as_fn_error $? "conditional \"WITH_CYGWIN\" was never defined. Usually this means the macro was only invoked conditionally." "$LINENO" 5 fi if test -z "${am__fastdepCC_TRUE}" && test -z "${am__fastdepCC_FALSE}"; then as_fn_error $? "conditional \"am__fastdepCC\" was never defined. Usually this means the macro was only invoked conditionally." "$LINENO" 5 fi if test -z "${am__fastdepCXX_TRUE}" && test -z "${am__fastdepCXX_FALSE}"; then as_fn_error $? "conditional \"am__fastdepCXX\" was never defined. Usually this means the macro was only invoked conditionally." "$LINENO" 5 fi if test -z "${WITH_CXX_TRUE}" && test -z "${WITH_CXX_FALSE}"; then as_fn_error $? "conditional \"WITH_CXX\" was never defined. Usually this means the macro was only invoked conditionally." "$LINENO" 5 fi if test -z "${WITH_GNU_LD_TRUE}" && test -z "${WITH_GNU_LD_FALSE}"; then as_fn_error $? "conditional \"WITH_GNU_LD\" was never defined. Usually this means the macro was only invoked conditionally." "$LINENO" 5 fi if test -z "${HAVE_NUMA_TRUE}" && test -z "${HAVE_NUMA_FALSE}"; then as_fn_error $? "conditional \"HAVE_NUMA\" was never defined. Usually this means the macro was only invoked conditionally." "$LINENO" 5 fi if test -z "${HAVE_SCHED_SETAFFINITY_TRUE}" && test -z "${HAVE_SCHED_SETAFFINITY_FALSE}"; then as_fn_error $? "conditional \"HAVE_SCHED_SETAFFINITY\" was never defined. Usually this means the macro was only invoked conditionally." "$LINENO" 5 fi if test -z "${HAVE_PAM_TRUE}" && test -z "${HAVE_PAM_FALSE}"; then as_fn_error $? "conditional \"HAVE_PAM\" was never defined. Usually this means the macro was only invoked conditionally." "$LINENO" 5 fi if test -z "${WITH_JSON_PARSER_TRUE}" && test -z "${WITH_JSON_PARSER_FALSE}"; then as_fn_error $? "conditional \"WITH_JSON_PARSER\" was never defined. Usually this means the macro was only invoked conditionally." 
"$LINENO" 5 fi if test -z "${HAVE_UNSETENV_TRUE}" && test -z "${HAVE_UNSETENV_FALSE}"; then as_fn_error $? "conditional \"HAVE_UNSETENV\" was never defined. Usually this means the macro was only invoked conditionally." "$LINENO" 5 fi if test -z "${BUILD_OFED_TRUE}" && test -z "${BUILD_OFED_FALSE}"; then as_fn_error $? "conditional \"BUILD_OFED\" was never defined. Usually this means the macro was only invoked conditionally." "$LINENO" 5 fi if test -z "${BUILD_HDF5_TRUE}" && test -z "${BUILD_HDF5_FALSE}"; then as_fn_error $? "conditional \"BUILD_HDF5\" was never defined. Usually this means the macro was only invoked conditionally." "$LINENO" 5 fi if test -z "${BUILD_IPMI_TRUE}" && test -z "${BUILD_IPMI_FALSE}"; then as_fn_error $? "conditional \"BUILD_IPMI\" was never defined. Usually this means the macro was only invoked conditionally." "$LINENO" 5 fi if test -z "${BUILD_RRD_TRUE}" && test -z "${BUILD_RRD_FALSE}"; then as_fn_error $? "conditional \"BUILD_RRD\" was never defined. Usually this means the macro was only invoked conditionally." "$LINENO" 5 fi if test -z "${HAVE_SOME_CURSES_TRUE}" && test -z "${HAVE_SOME_CURSES_FALSE}"; then as_fn_error $? "conditional \"HAVE_SOME_CURSES\" was never defined. Usually this means the macro was only invoked conditionally." "$LINENO" 5 fi if test -z "${HAVE_CHECK_TRUE}" && test -z "${HAVE_CHECK_FALSE}"; then as_fn_error $? "conditional \"HAVE_CHECK\" was never defined. Usually this means the macro was only invoked conditionally." "$LINENO" 5 fi if test -z "${BUILD_SVIEW_TRUE}" && test -z "${BUILD_SVIEW_FALSE}"; then as_fn_error $? "conditional \"BUILD_SVIEW\" was never defined. Usually this means the macro was only invoked conditionally." "$LINENO" 5 fi if test -z "${WITH_MYSQL_TRUE}" && test -z "${WITH_MYSQL_FALSE}"; then as_fn_error $? "conditional \"WITH_MYSQL\" was never defined. Usually this means the macro was only invoked conditionally." 
"$LINENO" 5 fi if test -z "${HAVE_NATIVE_CRAY_TRUE}" && test -z "${HAVE_NATIVE_CRAY_FALSE}"; then as_fn_error $? "conditional \"HAVE_NATIVE_CRAY\" was never defined. Usually this means the macro was only invoked conditionally." "$LINENO" 5 fi if test -z "${HAVE_ALPS_CRAY_TRUE}" && test -z "${HAVE_ALPS_CRAY_FALSE}"; then as_fn_error $? "conditional \"HAVE_ALPS_CRAY\" was never defined. Usually this means the macro was only invoked conditionally." "$LINENO" 5 fi if test -z "${HAVE_REAL_CRAY_TRUE}" && test -z "${HAVE_REAL_CRAY_FALSE}"; then as_fn_error $? "conditional \"HAVE_REAL_CRAY\" was never defined. Usually this means the macro was only invoked conditionally." "$LINENO" 5 fi if test -z "${HAVE_CRAY_NETWORK_TRUE}" && test -z "${HAVE_CRAY_NETWORK_FALSE}"; then as_fn_error $? "conditional \"HAVE_CRAY_NETWORK\" was never defined. Usually this means the macro was only invoked conditionally." "$LINENO" 5 fi if test -z "${HAVE_ALPS_EMULATION_TRUE}" && test -z "${HAVE_ALPS_EMULATION_FALSE}"; then as_fn_error $? "conditional \"HAVE_ALPS_EMULATION\" was never defined. Usually this means the macro was only invoked conditionally." "$LINENO" 5 fi if test -z "${HAVE_ALPS_CRAY_EMULATION_TRUE}" && test -z "${HAVE_ALPS_CRAY_EMULATION_FALSE}"; then as_fn_error $? "conditional \"HAVE_ALPS_CRAY_EMULATION\" was never defined. Usually this means the macro was only invoked conditionally." "$LINENO" 5 fi if test -z "${DEBUG_MODULES_TRUE}" && test -z "${DEBUG_MODULES_FALSE}"; then as_fn_error $? "conditional \"DEBUG_MODULES\" was never defined. Usually this means the macro was only invoked conditionally." "$LINENO" 5 fi if test -z "${HAVE_NRT_TRUE}" && test -z "${HAVE_NRT_FALSE}"; then as_fn_error $? "conditional \"HAVE_NRT\" was never defined. Usually this means the macro was only invoked conditionally." "$LINENO" 5 fi if test -z "${HAVE_SGI_JOB_TRUE}" && test -z "${HAVE_SGI_JOB_FALSE}"; then as_fn_error $? "conditional \"HAVE_SGI_JOB\" was never defined. 
Usually this means the macro was only invoked conditionally." "$LINENO" 5 fi if test -z "${HAVE_NETLOC_TRUE}" && test -z "${HAVE_NETLOC_FALSE}"; then as_fn_error $? "conditional \"HAVE_NETLOC\" was never defined. Usually this means the macro was only invoked conditionally." "$LINENO" 5 fi if test -z "${HAVE_LUA_TRUE}" && test -z "${HAVE_LUA_FALSE}"; then as_fn_error $? "conditional \"HAVE_LUA\" was never defined. Usually this means the macro was only invoked conditionally." "$LINENO" 5 fi if test -z "${HAVE_MAN2HTML_TRUE}" && test -z "${HAVE_MAN2HTML_FALSE}"; then as_fn_error $? "conditional \"HAVE_MAN2HTML\" was never defined. Usually this means the macro was only invoked conditionally." "$LINENO" 5 fi if test -z "${HAVE_MAN2HTML_TRUE}" && test -z "${HAVE_MAN2HTML_FALSE}"; then as_fn_error $? "conditional \"HAVE_MAN2HTML\" was never defined. Usually this means the macro was only invoked conditionally." "$LINENO" 5 fi if test -z "${HAVE_OPENSSL_TRUE}" && test -z "${HAVE_OPENSSL_FALSE}"; then as_fn_error $? "conditional \"HAVE_OPENSSL\" was never defined. Usually this means the macro was only invoked conditionally." "$LINENO" 5 fi if test -z "${WITH_MUNGE_TRUE}" && test -z "${WITH_MUNGE_FALSE}"; then as_fn_error $? "conditional \"WITH_MUNGE\" was never defined. Usually this means the macro was only invoked conditionally." "$LINENO" 5 fi if test -z "${WITH_AUTHD_TRUE}" && test -z "${WITH_AUTHD_FALSE}"; then as_fn_error $? "conditional \"WITH_AUTHD\" was never defined. Usually this means the macro was only invoked conditionally." "$LINENO" 5 fi if test -z "${WITH_BLCR_TRUE}" && test -z "${WITH_BLCR_FALSE}"; then as_fn_error $? "conditional \"WITH_BLCR\" was never defined. Usually this means the macro was only invoked conditionally." "$LINENO" 5 fi if test -z "${WITH_CURL_TRUE}" && test -z "${WITH_CURL_FALSE}"; then as_fn_error $? "conditional \"WITH_CURL\" was never defined. Usually this means the macro was only invoked conditionally." 
"$LINENO" 5 fi if test -z "${BUILD_SMAP_TRUE}" && test -z "${BUILD_SMAP_FALSE}"; then as_fn_error $? "conditional \"BUILD_SMAP\" was never defined. Usually this means the macro was only invoked conditionally." "$LINENO" 5 fi : "${CONFIG_STATUS=./config.status}" ac_write_fail=0 ac_clean_files_save=$ac_clean_files ac_clean_files="$ac_clean_files $CONFIG_STATUS" { $as_echo "$as_me:${as_lineno-$LINENO}: creating $CONFIG_STATUS" >&5 $as_echo "$as_me: creating $CONFIG_STATUS" >&6;} as_write_fail=0 cat >$CONFIG_STATUS <<_ASEOF || as_write_fail=1 #! $SHELL # Generated by $as_me. # Run this file to recreate the current configuration. # Compiler output produced by configure, useful for debugging # configure, is in config.log if it exists. debug=false ac_cs_recheck=false ac_cs_silent=false SHELL=\${CONFIG_SHELL-$SHELL} export SHELL _ASEOF cat >>$CONFIG_STATUS <<\_ASEOF || as_write_fail=1 ## -------------------- ## ## M4sh Initialization. ## ## -------------------- ## # Be more Bourne compatible DUALCASE=1; export DUALCASE # for MKS sh if test -n "${ZSH_VERSION+set}" && (emulate sh) >/dev/null 2>&1; then : emulate sh NULLCMD=: # Pre-4.2 versions of Zsh do word splitting on ${1+"$@"}, which # is contrary to our usage. Disable this feature. alias -g '${1+"$@"}'='"$@"' setopt NO_GLOB_SUBST else case `(set -o) 2>/dev/null` in #( *posix*) : set -o posix ;; #( *) : ;; esac fi as_nl=' ' export as_nl # Printing a long string crashes Solaris 7 /usr/bin/printf. as_echo='\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\' as_echo=$as_echo$as_echo$as_echo$as_echo$as_echo as_echo=$as_echo$as_echo$as_echo$as_echo$as_echo$as_echo # Prefer a ksh shell builtin over an external printf program on Solaris, # but without wasting forks for bash or zsh. 
if test -z "$BASH_VERSION$ZSH_VERSION" \ && (test "X`print -r -- $as_echo`" = "X$as_echo") 2>/dev/null; then as_echo='print -r --' as_echo_n='print -rn --' elif (test "X`printf %s $as_echo`" = "X$as_echo") 2>/dev/null; then as_echo='printf %s\n' as_echo_n='printf %s' else if test "X`(/usr/ucb/echo -n -n $as_echo) 2>/dev/null`" = "X-n $as_echo"; then as_echo_body='eval /usr/ucb/echo -n "$1$as_nl"' as_echo_n='/usr/ucb/echo -n' else as_echo_body='eval expr "X$1" : "X\\(.*\\)"' as_echo_n_body='eval arg=$1; case $arg in #( *"$as_nl"*) expr "X$arg" : "X\\(.*\\)$as_nl"; arg=`expr "X$arg" : ".*$as_nl\\(.*\\)"`;; esac; expr "X$arg" : "X\\(.*\\)" | tr -d "$as_nl" ' export as_echo_n_body as_echo_n='sh -c $as_echo_n_body as_echo' fi export as_echo_body as_echo='sh -c $as_echo_body as_echo' fi # The user is always right. if test "${PATH_SEPARATOR+set}" != set; then PATH_SEPARATOR=: (PATH='/bin;/bin'; FPATH=$PATH; sh -c :) >/dev/null 2>&1 && { (PATH='/bin:/bin'; FPATH=$PATH; sh -c :) >/dev/null 2>&1 || PATH_SEPARATOR=';' } fi # IFS # We need space, tab and new line, in precisely that order. Quoting is # there to prevent editors from complaining about space-tab. # (If _AS_PATH_WALK were called with IFS unset, it would disable word # splitting by setting IFS to empty value.) IFS=" "" $as_nl" # Find who we are. Look in the path if we contain no directory separator. as_myself= case $0 in #(( *[\\/]* ) as_myself=$0 ;; *) as_save_IFS=$IFS; IFS=$PATH_SEPARATOR for as_dir in $PATH do IFS=$as_save_IFS test -z "$as_dir" && as_dir=. test -r "$as_dir/$0" && as_myself=$as_dir/$0 && break done IFS=$as_save_IFS ;; esac # We did not find ourselves, most probably we were run as `sh COMMAND' # in which case we are not to be found in the path. if test "x$as_myself" = x; then as_myself=$0 fi if test ! -f "$as_myself"; then $as_echo "$as_myself: error: cannot find myself; rerun with an absolute file name" >&2 exit 1 fi # Unset variables that we do not need and which cause bugs (e.g. 
in # pre-3.0 UWIN ksh). But do not cause bugs in bash 2.01; the "|| exit 1" # suppresses any "Segmentation fault" message there. '((' could # trigger a bug in pdksh 5.2.14. for as_var in BASH_ENV ENV MAIL MAILPATH do eval test x\${$as_var+set} = xset \ && ( (unset $as_var) || exit 1) >/dev/null 2>&1 && unset $as_var || : done PS1='$ ' PS2='> ' PS4='+ ' # NLS nuisances. LC_ALL=C export LC_ALL LANGUAGE=C export LANGUAGE # CDPATH. (unset CDPATH) >/dev/null 2>&1 && unset CDPATH # as_fn_error STATUS ERROR [LINENO LOG_FD] # ---------------------------------------- # Output "`basename $0`: error: ERROR" to stderr. If LINENO and LOG_FD are # provided, also output the error to LOG_FD, referencing LINENO. Then exit the # script with STATUS, using 1 if that was 0. as_fn_error () { as_status=$1; test $as_status -eq 0 && as_status=1 if test "$4"; then as_lineno=${as_lineno-"$3"} as_lineno_stack=as_lineno_stack=$as_lineno_stack $as_echo "$as_me:${as_lineno-$LINENO}: error: $2" >&$4 fi $as_echo "$as_me: error: $2" >&2 as_fn_exit $as_status } # as_fn_error # as_fn_set_status STATUS # ----------------------- # Set $? to STATUS, without forking. as_fn_set_status () { return $1 } # as_fn_set_status # as_fn_exit STATUS # ----------------- # Exit the shell with STATUS, even in a "trap 0" or "set -e" context. as_fn_exit () { set +e as_fn_set_status $1 exit $1 } # as_fn_exit # as_fn_unset VAR # --------------- # Portably unset VAR. as_fn_unset () { { eval $1=; unset $1;} } as_unset=as_fn_unset # as_fn_append VAR VALUE # ---------------------- # Append the text in VALUE to the end of the definition contained in VAR. Take # advantage of any shell optimizations that allow amortized linear growth over # repeated appends, instead of the typical quadratic growth present in naive # implementations. if (eval "as_var=1; as_var+=2; test x\$as_var = x12") 2>/dev/null; then : eval 'as_fn_append () { eval $1+=\$2 }' else as_fn_append () { eval $1=\$$1\$2 } fi # as_fn_append # as_fn_arith ARG... 
# ------------------ # Perform arithmetic evaluation on the ARGs, and store the result in the # global $as_val. Take advantage of shells that can avoid forks. The arguments # must be portable across $(()) and expr. if (eval "test \$(( 1 + 1 )) = 2") 2>/dev/null; then : eval 'as_fn_arith () { as_val=$(( $* )) }' else as_fn_arith () { as_val=`expr "$@" || test $? -eq 1` } fi # as_fn_arith if expr a : '\(a\)' >/dev/null 2>&1 && test "X`expr 00001 : '.*\(...\)'`" = X001; then as_expr=expr else as_expr=false fi if (basename -- /) >/dev/null 2>&1 && test "X`basename -- / 2>&1`" = "X/"; then as_basename=basename else as_basename=false fi if (as_dir=`dirname -- /` && test "X$as_dir" = X/) >/dev/null 2>&1; then as_dirname=dirname else as_dirname=false fi as_me=`$as_basename -- "$0" || $as_expr X/"$0" : '.*/\([^/][^/]*\)/*$' \| \ X"$0" : 'X\(//\)$' \| \ X"$0" : 'X\(/\)' \| . 2>/dev/null || $as_echo X/"$0" | sed '/^.*\/\([^/][^/]*\)\/*$/{ s//\1/ q } /^X\/\(\/\/\)$/{ s//\1/ q } /^X\/\(\/\).*/{ s//\1/ q } s/.*/./; q'` # Avoid depending upon Character Ranges. as_cr_letters='abcdefghijklmnopqrstuvwxyz' as_cr_LETTERS='ABCDEFGHIJKLMNOPQRSTUVWXYZ' as_cr_Letters=$as_cr_letters$as_cr_LETTERS as_cr_digits='0123456789' as_cr_alnum=$as_cr_Letters$as_cr_digits ECHO_C= ECHO_N= ECHO_T= case `echo -n x` in #((((( -n*) case `echo 'xy\c'` in *c*) ECHO_T=' ';; # ECHO_T is single tab character. xy) ECHO_C='\c';; *) echo `echo ksh88 bug on AIX 6.1` > /dev/null ECHO_T=' ';; esac;; *) ECHO_N='-n';; esac rm -f conf$$ conf$$.exe conf$$.file if test -d conf$$.dir; then rm -f conf$$.dir/conf$$.file else rm -f conf$$.dir mkdir conf$$.dir 2>/dev/null fi if (echo >conf$$.file) 2>/dev/null; then if ln -s conf$$.file conf$$ 2>/dev/null; then as_ln_s='ln -s' # ... but there are two gotchas: # 1) On MSYS, both `ln -s file dir' and `ln file dir' fail. # 2) DJGPP < 2.04 has no symlinks; `ln -s' creates a wrapper executable. # In both cases, we have to default to `cp -pR'. 
ln -s conf$$.file conf$$.dir 2>/dev/null && test ! -f conf$$.exe || as_ln_s='cp -pR' elif ln conf$$.file conf$$ 2>/dev/null; then as_ln_s=ln else as_ln_s='cp -pR' fi else as_ln_s='cp -pR' fi rm -f conf$$ conf$$.exe conf$$.dir/conf$$.file conf$$.file rmdir conf$$.dir 2>/dev/null # as_fn_mkdir_p # ------------- # Create "$as_dir" as a directory, including parents if necessary. as_fn_mkdir_p () { case $as_dir in #( -*) as_dir=./$as_dir;; esac test -d "$as_dir" || eval $as_mkdir_p || { as_dirs= while :; do case $as_dir in #( *\'*) as_qdir=`$as_echo "$as_dir" | sed "s/'/'\\\\\\\\''/g"`;; #'( *) as_qdir=$as_dir;; esac as_dirs="'$as_qdir' $as_dirs" as_dir=`$as_dirname -- "$as_dir" || $as_expr X"$as_dir" : 'X\(.*[^/]\)//*[^/][^/]*/*$' \| \ X"$as_dir" : 'X\(//\)[^/]' \| \ X"$as_dir" : 'X\(//\)$' \| \ X"$as_dir" : 'X\(/\)' \| . 2>/dev/null || $as_echo X"$as_dir" | sed '/^X\(.*[^/]\)\/\/*[^/][^/]*\/*$/{ s//\1/ q } /^X\(\/\/\)[^/].*/{ s//\1/ q } /^X\(\/\/\)$/{ s//\1/ q } /^X\(\/\).*/{ s//\1/ q } s/.*/./; q'` test -d "$as_dir" && break done test -z "$as_dirs" || eval "mkdir $as_dirs" } || test -d "$as_dir" || as_fn_error $? "cannot create directory $as_dir" } # as_fn_mkdir_p if mkdir -p . 2>/dev/null; then as_mkdir_p='mkdir -p "$as_dir"' else test -d ./-p && rmdir ./-p as_mkdir_p=false fi # as_fn_executable_p FILE # ----------------------- # Test if FILE is an executable regular file. as_fn_executable_p () { test -f "$1" && test -x "$1" } # as_fn_executable_p as_test_x='test -x' as_executable_p=as_fn_executable_p # Sed expression to map a string onto a valid CPP name. as_tr_cpp="eval sed 'y%*$as_cr_letters%P$as_cr_LETTERS%;s%[^_$as_cr_alnum]%_%g'" # Sed expression to map a string onto a valid variable name. as_tr_sh="eval sed 'y%*+%pp%;s%[^_$as_cr_alnum]%_%g'" exec 6>&1 ## ----------------------------------- ## ## Main body of $CONFIG_STATUS script. 
## ## ----------------------------------- ## _ASEOF test $as_write_fail = 0 && chmod +x $CONFIG_STATUS || ac_write_fail=1 cat >>$CONFIG_STATUS <<\_ACEOF || ac_write_fail=1 # Save the log message, to keep $0 and so on meaningful, and to # report actual input values of CONFIG_FILES etc. instead of their # values after options handling. ac_log=" This file was extended by slurm $as_me 15.08, which was generated by GNU Autoconf 2.69. Invocation command line was CONFIG_FILES = $CONFIG_FILES CONFIG_HEADERS = $CONFIG_HEADERS CONFIG_LINKS = $CONFIG_LINKS CONFIG_COMMANDS = $CONFIG_COMMANDS $ $0 $@ on `(hostname || uname -n) 2>/dev/null | sed 1q` " _ACEOF case $ac_config_files in *" "*) set x $ac_config_files; shift; ac_config_files=$*;; esac case $ac_config_headers in *" "*) set x $ac_config_headers; shift; ac_config_headers=$*;; esac cat >>$CONFIG_STATUS <<_ACEOF || ac_write_fail=1 # Files that config.status was made for. config_files="$ac_config_files" config_headers="$ac_config_headers" config_commands="$ac_config_commands" _ACEOF cat >>$CONFIG_STATUS <<\_ACEOF || ac_write_fail=1 ac_cs_usage="\ \`$as_me' instantiates files and other configuration actions from templates according to the current configuration. Unless the files and actions are specified as TAGs, all are instantiated by default. Usage: $0 [OPTION]... [TAG]... -h, --help print this help, then exit -V, --version print version number and configuration settings, then exit --config print configuration, then exit -q, --quiet, --silent do not print progress messages -d, --debug don't remove temporary files --recheck update $as_me by reconfiguring in the same conditions --file=FILE[:TEMPLATE] instantiate the configuration file FILE --header=FILE[:TEMPLATE] instantiate the configuration header FILE Configuration files: $config_files Configuration headers: $config_headers Configuration commands: $config_commands Report bugs to . slurm home page: ." 
_ACEOF cat >>$CONFIG_STATUS <<_ACEOF || ac_write_fail=1 ac_cs_config="`$as_echo "$ac_configure_args" | sed 's/^ //; s/[\\""\`\$]/\\\\&/g'`" ac_cs_version="\\ slurm config.status 15.08 configured by $0, generated by GNU Autoconf 2.69, with options \\"\$ac_cs_config\\" Copyright (C) 2012 Free Software Foundation, Inc. This config.status script is free software; the Free Software Foundation gives unlimited permission to copy, distribute and modify it." ac_pwd='$ac_pwd' srcdir='$srcdir' INSTALL='$INSTALL' MKDIR_P='$MKDIR_P' AWK='$AWK' test -n "\$AWK" || AWK=awk _ACEOF cat >>$CONFIG_STATUS <<\_ACEOF || ac_write_fail=1 # The default lists apply if the user does not specify any file. ac_need_defaults=: while test $# != 0 do case $1 in --*=?*) ac_option=`expr "X$1" : 'X\([^=]*\)='` ac_optarg=`expr "X$1" : 'X[^=]*=\(.*\)'` ac_shift=: ;; --*=) ac_option=`expr "X$1" : 'X\([^=]*\)='` ac_optarg= ac_shift=: ;; *) ac_option=$1 ac_optarg=$2 ac_shift=shift ;; esac case $ac_option in # Handling of the options. -recheck | --recheck | --rechec | --reche | --rech | --rec | --re | --r) ac_cs_recheck=: ;; --version | --versio | --versi | --vers | --ver | --ve | --v | -V ) $as_echo "$ac_cs_version"; exit ;; --config | --confi | --conf | --con | --co | --c ) $as_echo "$ac_cs_config"; exit ;; --debug | --debu | --deb | --de | --d | -d ) debug=: ;; --file | --fil | --fi | --f ) $ac_shift case $ac_optarg in *\'*) ac_optarg=`$as_echo "$ac_optarg" | sed "s/'/'\\\\\\\\''/g"` ;; '') as_fn_error $? "missing file argument" ;; esac as_fn_append CONFIG_FILES " '$ac_optarg'" ac_need_defaults=false;; --header | --heade | --head | --hea ) $ac_shift case $ac_optarg in *\'*) ac_optarg=`$as_echo "$ac_optarg" | sed "s/'/'\\\\\\\\''/g"` ;; esac as_fn_append CONFIG_HEADERS " '$ac_optarg'" ac_need_defaults=false;; --he | --h) # Conflict between --help and --header as_fn_error $? 
"ambiguous option: \`$1' Try \`$0 --help' for more information.";; --help | --hel | -h ) $as_echo "$ac_cs_usage"; exit ;; -q | -quiet | --quiet | --quie | --qui | --qu | --q \ | -silent | --silent | --silen | --sile | --sil | --si | --s) ac_cs_silent=: ;; # This is an error. -*) as_fn_error $? "unrecognized option: \`$1' Try \`$0 --help' for more information." ;; *) as_fn_append ac_config_targets " $1" ac_need_defaults=false ;; esac shift done ac_configure_extra_args= if $ac_cs_silent; then exec 6>/dev/null ac_configure_extra_args="$ac_configure_extra_args --silent" fi _ACEOF cat >>$CONFIG_STATUS <<_ACEOF || ac_write_fail=1 if \$ac_cs_recheck; then set X $SHELL '$0' $ac_configure_args \$ac_configure_extra_args --no-create --no-recursion shift \$as_echo "running CONFIG_SHELL=$SHELL \$*" >&6 CONFIG_SHELL='$SHELL' export CONFIG_SHELL exec "\$@" fi _ACEOF cat >>$CONFIG_STATUS <<\_ACEOF || ac_write_fail=1 exec 5>>config.log { echo sed 'h;s/./-/g;s/^.../## /;s/...$/ ##/;p;x;p;x' <<_ASBOX ## Running $as_me. ## _ASBOX $as_echo "$ac_log" } >&5 _ACEOF cat >>$CONFIG_STATUS <<_ACEOF || ac_write_fail=1 # # INIT-COMMANDS # AMDEP_TRUE="$AMDEP_TRUE" ac_aux_dir="$ac_aux_dir" # The HP-UX ksh and POSIX shell print the target directory to stdout # if CDPATH is set. 
(unset CDPATH) >/dev/null 2>&1 && unset CDPATH sed_quote_subst='$sed_quote_subst' double_quote_subst='$double_quote_subst' delay_variable_subst='$delay_variable_subst' macro_version='`$ECHO "$macro_version" | $SED "$delay_single_quote_subst"`' macro_revision='`$ECHO "$macro_revision" | $SED "$delay_single_quote_subst"`' enable_shared='`$ECHO "$enable_shared" | $SED "$delay_single_quote_subst"`' enable_static='`$ECHO "$enable_static" | $SED "$delay_single_quote_subst"`' pic_mode='`$ECHO "$pic_mode" | $SED "$delay_single_quote_subst"`' enable_fast_install='`$ECHO "$enable_fast_install" | $SED "$delay_single_quote_subst"`' SHELL='`$ECHO "$SHELL" | $SED "$delay_single_quote_subst"`' ECHO='`$ECHO "$ECHO" | $SED "$delay_single_quote_subst"`' PATH_SEPARATOR='`$ECHO "$PATH_SEPARATOR" | $SED "$delay_single_quote_subst"`' host_alias='`$ECHO "$host_alias" | $SED "$delay_single_quote_subst"`' host='`$ECHO "$host" | $SED "$delay_single_quote_subst"`' host_os='`$ECHO "$host_os" | $SED "$delay_single_quote_subst"`' build_alias='`$ECHO "$build_alias" | $SED "$delay_single_quote_subst"`' build='`$ECHO "$build" | $SED "$delay_single_quote_subst"`' build_os='`$ECHO "$build_os" | $SED "$delay_single_quote_subst"`' SED='`$ECHO "$SED" | $SED "$delay_single_quote_subst"`' Xsed='`$ECHO "$Xsed" | $SED "$delay_single_quote_subst"`' GREP='`$ECHO "$GREP" | $SED "$delay_single_quote_subst"`' EGREP='`$ECHO "$EGREP" | $SED "$delay_single_quote_subst"`' FGREP='`$ECHO "$FGREP" | $SED "$delay_single_quote_subst"`' LD='`$ECHO "$LD" | $SED "$delay_single_quote_subst"`' NM='`$ECHO "$NM" | $SED "$delay_single_quote_subst"`' LN_S='`$ECHO "$LN_S" | $SED "$delay_single_quote_subst"`' max_cmd_len='`$ECHO "$max_cmd_len" | $SED "$delay_single_quote_subst"`' ac_objext='`$ECHO "$ac_objext" | $SED "$delay_single_quote_subst"`' exeext='`$ECHO "$exeext" | $SED "$delay_single_quote_subst"`' lt_unset='`$ECHO "$lt_unset" | $SED "$delay_single_quote_subst"`' lt_SP2NL='`$ECHO "$lt_SP2NL" | $SED 
"$delay_single_quote_subst"`' lt_NL2SP='`$ECHO "$lt_NL2SP" | $SED "$delay_single_quote_subst"`' lt_cv_to_host_file_cmd='`$ECHO "$lt_cv_to_host_file_cmd" | $SED "$delay_single_quote_subst"`' lt_cv_to_tool_file_cmd='`$ECHO "$lt_cv_to_tool_file_cmd" | $SED "$delay_single_quote_subst"`' reload_flag='`$ECHO "$reload_flag" | $SED "$delay_single_quote_subst"`' reload_cmds='`$ECHO "$reload_cmds" | $SED "$delay_single_quote_subst"`' OBJDUMP='`$ECHO "$OBJDUMP" | $SED "$delay_single_quote_subst"`' deplibs_check_method='`$ECHO "$deplibs_check_method" | $SED "$delay_single_quote_subst"`' file_magic_cmd='`$ECHO "$file_magic_cmd" | $SED "$delay_single_quote_subst"`' file_magic_glob='`$ECHO "$file_magic_glob" | $SED "$delay_single_quote_subst"`' want_nocaseglob='`$ECHO "$want_nocaseglob" | $SED "$delay_single_quote_subst"`' DLLTOOL='`$ECHO "$DLLTOOL" | $SED "$delay_single_quote_subst"`' sharedlib_from_linklib_cmd='`$ECHO "$sharedlib_from_linklib_cmd" | $SED "$delay_single_quote_subst"`' AR='`$ECHO "$AR" | $SED "$delay_single_quote_subst"`' AR_FLAGS='`$ECHO "$AR_FLAGS" | $SED "$delay_single_quote_subst"`' archiver_list_spec='`$ECHO "$archiver_list_spec" | $SED "$delay_single_quote_subst"`' STRIP='`$ECHO "$STRIP" | $SED "$delay_single_quote_subst"`' RANLIB='`$ECHO "$RANLIB" | $SED "$delay_single_quote_subst"`' old_postinstall_cmds='`$ECHO "$old_postinstall_cmds" | $SED "$delay_single_quote_subst"`' old_postuninstall_cmds='`$ECHO "$old_postuninstall_cmds" | $SED "$delay_single_quote_subst"`' old_archive_cmds='`$ECHO "$old_archive_cmds" | $SED "$delay_single_quote_subst"`' lock_old_archive_extraction='`$ECHO "$lock_old_archive_extraction" | $SED "$delay_single_quote_subst"`' CC='`$ECHO "$CC" | $SED "$delay_single_quote_subst"`' CFLAGS='`$ECHO "$CFLAGS" | $SED "$delay_single_quote_subst"`' compiler='`$ECHO "$compiler" | $SED "$delay_single_quote_subst"`' GCC='`$ECHO "$GCC" | $SED "$delay_single_quote_subst"`' lt_cv_sys_global_symbol_pipe='`$ECHO "$lt_cv_sys_global_symbol_pipe" | $SED 
"$delay_single_quote_subst"`' lt_cv_sys_global_symbol_to_cdecl='`$ECHO "$lt_cv_sys_global_symbol_to_cdecl" | $SED "$delay_single_quote_subst"`' lt_cv_sys_global_symbol_to_c_name_address='`$ECHO "$lt_cv_sys_global_symbol_to_c_name_address" | $SED "$delay_single_quote_subst"`' lt_cv_sys_global_symbol_to_c_name_address_lib_prefix='`$ECHO "$lt_cv_sys_global_symbol_to_c_name_address_lib_prefix" | $SED "$delay_single_quote_subst"`' nm_file_list_spec='`$ECHO "$nm_file_list_spec" | $SED "$delay_single_quote_subst"`' lt_sysroot='`$ECHO "$lt_sysroot" | $SED "$delay_single_quote_subst"`' objdir='`$ECHO "$objdir" | $SED "$delay_single_quote_subst"`' MAGIC_CMD='`$ECHO "$MAGIC_CMD" | $SED "$delay_single_quote_subst"`' lt_prog_compiler_no_builtin_flag='`$ECHO "$lt_prog_compiler_no_builtin_flag" | $SED "$delay_single_quote_subst"`' lt_prog_compiler_pic='`$ECHO "$lt_prog_compiler_pic" | $SED "$delay_single_quote_subst"`' lt_prog_compiler_wl='`$ECHO "$lt_prog_compiler_wl" | $SED "$delay_single_quote_subst"`' lt_prog_compiler_static='`$ECHO "$lt_prog_compiler_static" | $SED "$delay_single_quote_subst"`' lt_cv_prog_compiler_c_o='`$ECHO "$lt_cv_prog_compiler_c_o" | $SED "$delay_single_quote_subst"`' need_locks='`$ECHO "$need_locks" | $SED "$delay_single_quote_subst"`' MANIFEST_TOOL='`$ECHO "$MANIFEST_TOOL" | $SED "$delay_single_quote_subst"`' DSYMUTIL='`$ECHO "$DSYMUTIL" | $SED "$delay_single_quote_subst"`' NMEDIT='`$ECHO "$NMEDIT" | $SED "$delay_single_quote_subst"`' LIPO='`$ECHO "$LIPO" | $SED "$delay_single_quote_subst"`' OTOOL='`$ECHO "$OTOOL" | $SED "$delay_single_quote_subst"`' OTOOL64='`$ECHO "$OTOOL64" | $SED "$delay_single_quote_subst"`' libext='`$ECHO "$libext" | $SED "$delay_single_quote_subst"`' shrext_cmds='`$ECHO "$shrext_cmds" | $SED "$delay_single_quote_subst"`' extract_expsyms_cmds='`$ECHO "$extract_expsyms_cmds" | $SED "$delay_single_quote_subst"`' archive_cmds_need_lc='`$ECHO "$archive_cmds_need_lc" | $SED "$delay_single_quote_subst"`' 
enable_shared_with_static_runtimes='`$ECHO "$enable_shared_with_static_runtimes" | $SED "$delay_single_quote_subst"`' export_dynamic_flag_spec='`$ECHO "$export_dynamic_flag_spec" | $SED "$delay_single_quote_subst"`' whole_archive_flag_spec='`$ECHO "$whole_archive_flag_spec" | $SED "$delay_single_quote_subst"`' compiler_needs_object='`$ECHO "$compiler_needs_object" | $SED "$delay_single_quote_subst"`' old_archive_from_new_cmds='`$ECHO "$old_archive_from_new_cmds" | $SED "$delay_single_quote_subst"`' old_archive_from_expsyms_cmds='`$ECHO "$old_archive_from_expsyms_cmds" | $SED "$delay_single_quote_subst"`' archive_cmds='`$ECHO "$archive_cmds" | $SED "$delay_single_quote_subst"`' archive_expsym_cmds='`$ECHO "$archive_expsym_cmds" | $SED "$delay_single_quote_subst"`' module_cmds='`$ECHO "$module_cmds" | $SED "$delay_single_quote_subst"`' module_expsym_cmds='`$ECHO "$module_expsym_cmds" | $SED "$delay_single_quote_subst"`' with_gnu_ld='`$ECHO "$with_gnu_ld" | $SED "$delay_single_quote_subst"`' allow_undefined_flag='`$ECHO "$allow_undefined_flag" | $SED "$delay_single_quote_subst"`' no_undefined_flag='`$ECHO "$no_undefined_flag" | $SED "$delay_single_quote_subst"`' hardcode_libdir_flag_spec='`$ECHO "$hardcode_libdir_flag_spec" | $SED "$delay_single_quote_subst"`' hardcode_libdir_separator='`$ECHO "$hardcode_libdir_separator" | $SED "$delay_single_quote_subst"`' hardcode_direct='`$ECHO "$hardcode_direct" | $SED "$delay_single_quote_subst"`' hardcode_direct_absolute='`$ECHO "$hardcode_direct_absolute" | $SED "$delay_single_quote_subst"`' hardcode_minus_L='`$ECHO "$hardcode_minus_L" | $SED "$delay_single_quote_subst"`' hardcode_shlibpath_var='`$ECHO "$hardcode_shlibpath_var" | $SED "$delay_single_quote_subst"`' hardcode_automatic='`$ECHO "$hardcode_automatic" | $SED "$delay_single_quote_subst"`' inherit_rpath='`$ECHO "$inherit_rpath" | $SED "$delay_single_quote_subst"`' link_all_deplibs='`$ECHO "$link_all_deplibs" | $SED "$delay_single_quote_subst"`' 
always_export_symbols='`$ECHO "$always_export_symbols" | $SED "$delay_single_quote_subst"`' export_symbols_cmds='`$ECHO "$export_symbols_cmds" | $SED "$delay_single_quote_subst"`' exclude_expsyms='`$ECHO "$exclude_expsyms" | $SED "$delay_single_quote_subst"`' include_expsyms='`$ECHO "$include_expsyms" | $SED "$delay_single_quote_subst"`' prelink_cmds='`$ECHO "$prelink_cmds" | $SED "$delay_single_quote_subst"`' postlink_cmds='`$ECHO "$postlink_cmds" | $SED "$delay_single_quote_subst"`' file_list_spec='`$ECHO "$file_list_spec" | $SED "$delay_single_quote_subst"`' variables_saved_for_relink='`$ECHO "$variables_saved_for_relink" | $SED "$delay_single_quote_subst"`' need_lib_prefix='`$ECHO "$need_lib_prefix" | $SED "$delay_single_quote_subst"`' need_version='`$ECHO "$need_version" | $SED "$delay_single_quote_subst"`' version_type='`$ECHO "$version_type" | $SED "$delay_single_quote_subst"`' runpath_var='`$ECHO "$runpath_var" | $SED "$delay_single_quote_subst"`' shlibpath_var='`$ECHO "$shlibpath_var" | $SED "$delay_single_quote_subst"`' shlibpath_overrides_runpath='`$ECHO "$shlibpath_overrides_runpath" | $SED "$delay_single_quote_subst"`' libname_spec='`$ECHO "$libname_spec" | $SED "$delay_single_quote_subst"`' library_names_spec='`$ECHO "$library_names_spec" | $SED "$delay_single_quote_subst"`' soname_spec='`$ECHO "$soname_spec" | $SED "$delay_single_quote_subst"`' install_override_mode='`$ECHO "$install_override_mode" | $SED "$delay_single_quote_subst"`' postinstall_cmds='`$ECHO "$postinstall_cmds" | $SED "$delay_single_quote_subst"`' postuninstall_cmds='`$ECHO "$postuninstall_cmds" | $SED "$delay_single_quote_subst"`' finish_cmds='`$ECHO "$finish_cmds" | $SED "$delay_single_quote_subst"`' finish_eval='`$ECHO "$finish_eval" | $SED "$delay_single_quote_subst"`' hardcode_into_libs='`$ECHO "$hardcode_into_libs" | $SED "$delay_single_quote_subst"`' sys_lib_search_path_spec='`$ECHO "$sys_lib_search_path_spec" | $SED "$delay_single_quote_subst"`' 
sys_lib_dlsearch_path_spec='`$ECHO "$sys_lib_dlsearch_path_spec" | $SED "$delay_single_quote_subst"`' hardcode_action='`$ECHO "$hardcode_action" | $SED "$delay_single_quote_subst"`' enable_dlopen='`$ECHO "$enable_dlopen" | $SED "$delay_single_quote_subst"`' enable_dlopen_self='`$ECHO "$enable_dlopen_self" | $SED "$delay_single_quote_subst"`' enable_dlopen_self_static='`$ECHO "$enable_dlopen_self_static" | $SED "$delay_single_quote_subst"`' old_striplib='`$ECHO "$old_striplib" | $SED "$delay_single_quote_subst"`' striplib='`$ECHO "$striplib" | $SED "$delay_single_quote_subst"`' compiler_lib_search_dirs='`$ECHO "$compiler_lib_search_dirs" | $SED "$delay_single_quote_subst"`' predep_objects='`$ECHO "$predep_objects" | $SED "$delay_single_quote_subst"`' postdep_objects='`$ECHO "$postdep_objects" | $SED "$delay_single_quote_subst"`' predeps='`$ECHO "$predeps" | $SED "$delay_single_quote_subst"`' postdeps='`$ECHO "$postdeps" | $SED "$delay_single_quote_subst"`' compiler_lib_search_path='`$ECHO "$compiler_lib_search_path" | $SED "$delay_single_quote_subst"`' LD_CXX='`$ECHO "$LD_CXX" | $SED "$delay_single_quote_subst"`' reload_flag_CXX='`$ECHO "$reload_flag_CXX" | $SED "$delay_single_quote_subst"`' reload_cmds_CXX='`$ECHO "$reload_cmds_CXX" | $SED "$delay_single_quote_subst"`' old_archive_cmds_CXX='`$ECHO "$old_archive_cmds_CXX" | $SED "$delay_single_quote_subst"`' compiler_CXX='`$ECHO "$compiler_CXX" | $SED "$delay_single_quote_subst"`' GCC_CXX='`$ECHO "$GCC_CXX" | $SED "$delay_single_quote_subst"`' lt_prog_compiler_no_builtin_flag_CXX='`$ECHO "$lt_prog_compiler_no_builtin_flag_CXX" | $SED "$delay_single_quote_subst"`' lt_prog_compiler_pic_CXX='`$ECHO "$lt_prog_compiler_pic_CXX" | $SED "$delay_single_quote_subst"`' lt_prog_compiler_wl_CXX='`$ECHO "$lt_prog_compiler_wl_CXX" | $SED "$delay_single_quote_subst"`' lt_prog_compiler_static_CXX='`$ECHO "$lt_prog_compiler_static_CXX" | $SED "$delay_single_quote_subst"`' lt_cv_prog_compiler_c_o_CXX='`$ECHO 
"$lt_cv_prog_compiler_c_o_CXX" | $SED "$delay_single_quote_subst"`' archive_cmds_need_lc_CXX='`$ECHO "$archive_cmds_need_lc_CXX" | $SED "$delay_single_quote_subst"`' enable_shared_with_static_runtimes_CXX='`$ECHO "$enable_shared_with_static_runtimes_CXX" | $SED "$delay_single_quote_subst"`' export_dynamic_flag_spec_CXX='`$ECHO "$export_dynamic_flag_spec_CXX" | $SED "$delay_single_quote_subst"`' whole_archive_flag_spec_CXX='`$ECHO "$whole_archive_flag_spec_CXX" | $SED "$delay_single_quote_subst"`' compiler_needs_object_CXX='`$ECHO "$compiler_needs_object_CXX" | $SED "$delay_single_quote_subst"`' old_archive_from_new_cmds_CXX='`$ECHO "$old_archive_from_new_cmds_CXX" | $SED "$delay_single_quote_subst"`' old_archive_from_expsyms_cmds_CXX='`$ECHO "$old_archive_from_expsyms_cmds_CXX" | $SED "$delay_single_quote_subst"`' archive_cmds_CXX='`$ECHO "$archive_cmds_CXX" | $SED "$delay_single_quote_subst"`' archive_expsym_cmds_CXX='`$ECHO "$archive_expsym_cmds_CXX" | $SED "$delay_single_quote_subst"`' module_cmds_CXX='`$ECHO "$module_cmds_CXX" | $SED "$delay_single_quote_subst"`' module_expsym_cmds_CXX='`$ECHO "$module_expsym_cmds_CXX" | $SED "$delay_single_quote_subst"`' with_gnu_ld_CXX='`$ECHO "$with_gnu_ld_CXX" | $SED "$delay_single_quote_subst"`' allow_undefined_flag_CXX='`$ECHO "$allow_undefined_flag_CXX" | $SED "$delay_single_quote_subst"`' no_undefined_flag_CXX='`$ECHO "$no_undefined_flag_CXX" | $SED "$delay_single_quote_subst"`' hardcode_libdir_flag_spec_CXX='`$ECHO "$hardcode_libdir_flag_spec_CXX" | $SED "$delay_single_quote_subst"`' hardcode_libdir_separator_CXX='`$ECHO "$hardcode_libdir_separator_CXX" | $SED "$delay_single_quote_subst"`' hardcode_direct_CXX='`$ECHO "$hardcode_direct_CXX" | $SED "$delay_single_quote_subst"`' hardcode_direct_absolute_CXX='`$ECHO "$hardcode_direct_absolute_CXX" | $SED "$delay_single_quote_subst"`' hardcode_minus_L_CXX='`$ECHO "$hardcode_minus_L_CXX" | $SED "$delay_single_quote_subst"`' hardcode_shlibpath_var_CXX='`$ECHO 
"$hardcode_shlibpath_var_CXX" | $SED "$delay_single_quote_subst"`' hardcode_automatic_CXX='`$ECHO "$hardcode_automatic_CXX" | $SED "$delay_single_quote_subst"`' inherit_rpath_CXX='`$ECHO "$inherit_rpath_CXX" | $SED "$delay_single_quote_subst"`' link_all_deplibs_CXX='`$ECHO "$link_all_deplibs_CXX" | $SED "$delay_single_quote_subst"`' always_export_symbols_CXX='`$ECHO "$always_export_symbols_CXX" | $SED "$delay_single_quote_subst"`' export_symbols_cmds_CXX='`$ECHO "$export_symbols_cmds_CXX" | $SED "$delay_single_quote_subst"`' exclude_expsyms_CXX='`$ECHO "$exclude_expsyms_CXX" | $SED "$delay_single_quote_subst"`' include_expsyms_CXX='`$ECHO "$include_expsyms_CXX" | $SED "$delay_single_quote_subst"`' prelink_cmds_CXX='`$ECHO "$prelink_cmds_CXX" | $SED "$delay_single_quote_subst"`' postlink_cmds_CXX='`$ECHO "$postlink_cmds_CXX" | $SED "$delay_single_quote_subst"`' file_list_spec_CXX='`$ECHO "$file_list_spec_CXX" | $SED "$delay_single_quote_subst"`' hardcode_action_CXX='`$ECHO "$hardcode_action_CXX" | $SED "$delay_single_quote_subst"`' compiler_lib_search_dirs_CXX='`$ECHO "$compiler_lib_search_dirs_CXX" | $SED "$delay_single_quote_subst"`' predep_objects_CXX='`$ECHO "$predep_objects_CXX" | $SED "$delay_single_quote_subst"`' postdep_objects_CXX='`$ECHO "$postdep_objects_CXX" | $SED "$delay_single_quote_subst"`' predeps_CXX='`$ECHO "$predeps_CXX" | $SED "$delay_single_quote_subst"`' postdeps_CXX='`$ECHO "$postdeps_CXX" | $SED "$delay_single_quote_subst"`' compiler_lib_search_path_CXX='`$ECHO "$compiler_lib_search_path_CXX" | $SED "$delay_single_quote_subst"`' LTCC='$LTCC' LTCFLAGS='$LTCFLAGS' compiler='$compiler_DEFAULT' # A function that is used when there is no print builtin or printf. func_fallback_echo () { eval 'cat <<_LTECHO_EOF \$1 _LTECHO_EOF' } # Quote evaled strings. 
for var in SHELL \ ECHO \ PATH_SEPARATOR \ SED \ GREP \ EGREP \ FGREP \ LD \ NM \ LN_S \ lt_SP2NL \ lt_NL2SP \ reload_flag \ OBJDUMP \ deplibs_check_method \ file_magic_cmd \ file_magic_glob \ want_nocaseglob \ DLLTOOL \ sharedlib_from_linklib_cmd \ AR \ AR_FLAGS \ archiver_list_spec \ STRIP \ RANLIB \ CC \ CFLAGS \ compiler \ lt_cv_sys_global_symbol_pipe \ lt_cv_sys_global_symbol_to_cdecl \ lt_cv_sys_global_symbol_to_c_name_address \ lt_cv_sys_global_symbol_to_c_name_address_lib_prefix \ nm_file_list_spec \ lt_prog_compiler_no_builtin_flag \ lt_prog_compiler_pic \ lt_prog_compiler_wl \ lt_prog_compiler_static \ lt_cv_prog_compiler_c_o \ need_locks \ MANIFEST_TOOL \ DSYMUTIL \ NMEDIT \ LIPO \ OTOOL \ OTOOL64 \ shrext_cmds \ export_dynamic_flag_spec \ whole_archive_flag_spec \ compiler_needs_object \ with_gnu_ld \ allow_undefined_flag \ no_undefined_flag \ hardcode_libdir_flag_spec \ hardcode_libdir_separator \ exclude_expsyms \ include_expsyms \ file_list_spec \ variables_saved_for_relink \ libname_spec \ library_names_spec \ soname_spec \ install_override_mode \ finish_eval \ old_striplib \ striplib \ compiler_lib_search_dirs \ predep_objects \ postdep_objects \ predeps \ postdeps \ compiler_lib_search_path \ LD_CXX \ reload_flag_CXX \ compiler_CXX \ lt_prog_compiler_no_builtin_flag_CXX \ lt_prog_compiler_pic_CXX \ lt_prog_compiler_wl_CXX \ lt_prog_compiler_static_CXX \ lt_cv_prog_compiler_c_o_CXX \ export_dynamic_flag_spec_CXX \ whole_archive_flag_spec_CXX \ compiler_needs_object_CXX \ with_gnu_ld_CXX \ allow_undefined_flag_CXX \ no_undefined_flag_CXX \ hardcode_libdir_flag_spec_CXX \ hardcode_libdir_separator_CXX \ exclude_expsyms_CXX \ include_expsyms_CXX \ file_list_spec_CXX \ compiler_lib_search_dirs_CXX \ predep_objects_CXX \ postdep_objects_CXX \ predeps_CXX \ postdeps_CXX \ compiler_lib_search_path_CXX; do case \`eval \\\\\$ECHO \\\\""\\\\\$\$var"\\\\"\` in *[\\\\\\\`\\"\\\$]*) eval "lt_\$var=\\\\\\"\\\`\\\$ECHO \\"\\\$\$var\\" | \\\$SED 
\\"\\\$sed_quote_subst\\"\\\`\\\\\\"" ;; *) eval "lt_\$var=\\\\\\"\\\$\$var\\\\\\"" ;; esac done # Double-quote double-evaled strings. for var in reload_cmds \ old_postinstall_cmds \ old_postuninstall_cmds \ old_archive_cmds \ extract_expsyms_cmds \ old_archive_from_new_cmds \ old_archive_from_expsyms_cmds \ archive_cmds \ archive_expsym_cmds \ module_cmds \ module_expsym_cmds \ export_symbols_cmds \ prelink_cmds \ postlink_cmds \ postinstall_cmds \ postuninstall_cmds \ finish_cmds \ sys_lib_search_path_spec \ sys_lib_dlsearch_path_spec \ reload_cmds_CXX \ old_archive_cmds_CXX \ old_archive_from_new_cmds_CXX \ old_archive_from_expsyms_cmds_CXX \ archive_cmds_CXX \ archive_expsym_cmds_CXX \ module_cmds_CXX \ module_expsym_cmds_CXX \ export_symbols_cmds_CXX \ prelink_cmds_CXX \ postlink_cmds_CXX; do case \`eval \\\\\$ECHO \\\\""\\\\\$\$var"\\\\"\` in *[\\\\\\\`\\"\\\$]*) eval "lt_\$var=\\\\\\"\\\`\\\$ECHO \\"\\\$\$var\\" | \\\$SED -e \\"\\\$double_quote_subst\\" -e \\"\\\$sed_quote_subst\\" -e \\"\\\$delay_variable_subst\\"\\\`\\\\\\"" ;; *) eval "lt_\$var=\\\\\\"\\\$\$var\\\\\\"" ;; esac done ac_aux_dir='$ac_aux_dir' xsi_shell='$xsi_shell' lt_shell_append='$lt_shell_append' # See if we are running on zsh, and set the options which allow our # commands through without removal of \ escapes INIT. if test -n "\${ZSH_VERSION+set}" ; then setopt NO_GLOB_SUBST fi PACKAGE='$PACKAGE' VERSION='$VERSION' TIMESTAMP='$TIMESTAMP' RM='$RM' ofile='$ofile' _ACEOF cat >>$CONFIG_STATUS <<\_ACEOF || ac_write_fail=1 # Handling of arguments. 
for ac_config_target in $ac_config_targets do case $ac_config_target in "config.h") CONFIG_HEADERS="$CONFIG_HEADERS config.h" ;; "slurm/slurm.h") CONFIG_HEADERS="$CONFIG_HEADERS slurm/slurm.h" ;; "depfiles") CONFIG_COMMANDS="$CONFIG_COMMANDS depfiles" ;; "libtool") CONFIG_COMMANDS="$CONFIG_COMMANDS libtool" ;; "Makefile") CONFIG_FILES="$CONFIG_FILES Makefile" ;; "config.xml") CONFIG_FILES="$CONFIG_FILES config.xml" ;; "auxdir/Makefile") CONFIG_FILES="$CONFIG_FILES auxdir/Makefile" ;; "contribs/Makefile") CONFIG_FILES="$CONFIG_FILES contribs/Makefile" ;; "contribs/cray/Makefile") CONFIG_FILES="$CONFIG_FILES contribs/cray/Makefile" ;; "contribs/cray/csm/Makefile") CONFIG_FILES="$CONFIG_FILES contribs/cray/csm/Makefile" ;; "contribs/lua/Makefile") CONFIG_FILES="$CONFIG_FILES contribs/lua/Makefile" ;; "contribs/mic/Makefile") CONFIG_FILES="$CONFIG_FILES contribs/mic/Makefile" ;; "contribs/pam/Makefile") CONFIG_FILES="$CONFIG_FILES contribs/pam/Makefile" ;; "contribs/pam_slurm_adopt/Makefile") CONFIG_FILES="$CONFIG_FILES contribs/pam_slurm_adopt/Makefile" ;; "contribs/perlapi/Makefile") CONFIG_FILES="$CONFIG_FILES contribs/perlapi/Makefile" ;; "contribs/perlapi/libslurm/Makefile") CONFIG_FILES="$CONFIG_FILES contribs/perlapi/libslurm/Makefile" ;; "contribs/perlapi/libslurm/perl/Makefile.PL") CONFIG_FILES="$CONFIG_FILES contribs/perlapi/libslurm/perl/Makefile.PL" ;; "contribs/perlapi/libslurmdb/Makefile") CONFIG_FILES="$CONFIG_FILES contribs/perlapi/libslurmdb/Makefile" ;; "contribs/perlapi/libslurmdb/perl/Makefile.PL") CONFIG_FILES="$CONFIG_FILES contribs/perlapi/libslurmdb/perl/Makefile.PL" ;; "contribs/torque/Makefile") CONFIG_FILES="$CONFIG_FILES contribs/torque/Makefile" ;; "contribs/phpext/Makefile") CONFIG_FILES="$CONFIG_FILES contribs/phpext/Makefile" ;; "contribs/phpext/slurm_php/config.m4") CONFIG_FILES="$CONFIG_FILES contribs/phpext/slurm_php/config.m4" ;; "contribs/sgather/Makefile") CONFIG_FILES="$CONFIG_FILES contribs/sgather/Makefile" ;; 
"contribs/sgi/Makefile") CONFIG_FILES="$CONFIG_FILES contribs/sgi/Makefile" ;; "contribs/sjobexit/Makefile") CONFIG_FILES="$CONFIG_FILES contribs/sjobexit/Makefile" ;; "contribs/slurmdb-direct/Makefile") CONFIG_FILES="$CONFIG_FILES contribs/slurmdb-direct/Makefile" ;; "contribs/pmi2/Makefile") CONFIG_FILES="$CONFIG_FILES contribs/pmi2/Makefile" ;; "doc/Makefile") CONFIG_FILES="$CONFIG_FILES doc/Makefile" ;; "doc/man/Makefile") CONFIG_FILES="$CONFIG_FILES doc/man/Makefile" ;; "doc/man/man1/Makefile") CONFIG_FILES="$CONFIG_FILES doc/man/man1/Makefile" ;; "doc/man/man3/Makefile") CONFIG_FILES="$CONFIG_FILES doc/man/man3/Makefile" ;; "doc/man/man5/Makefile") CONFIG_FILES="$CONFIG_FILES doc/man/man5/Makefile" ;; "doc/man/man8/Makefile") CONFIG_FILES="$CONFIG_FILES doc/man/man8/Makefile" ;; "doc/html/Makefile") CONFIG_FILES="$CONFIG_FILES doc/html/Makefile" ;; "doc/html/configurator.html") CONFIG_FILES="$CONFIG_FILES doc/html/configurator.html" ;; "doc/html/configurator.easy.html") CONFIG_FILES="$CONFIG_FILES doc/html/configurator.easy.html" ;; "etc/cgroup.release_common.example") CONFIG_FILES="$CONFIG_FILES etc/cgroup.release_common.example" ;; "etc/init.d.slurm") CONFIG_FILES="$CONFIG_FILES etc/init.d.slurm" ;; "etc/init.d.slurmdbd") CONFIG_FILES="$CONFIG_FILES etc/init.d.slurmdbd" ;; "etc/slurmctld.service") CONFIG_FILES="$CONFIG_FILES etc/slurmctld.service" ;; "etc/slurmd.service") CONFIG_FILES="$CONFIG_FILES etc/slurmd.service" ;; "etc/slurmdbd.service") CONFIG_FILES="$CONFIG_FILES etc/slurmdbd.service" ;; "src/Makefile") CONFIG_FILES="$CONFIG_FILES src/Makefile" ;; "src/api/Makefile") CONFIG_FILES="$CONFIG_FILES src/api/Makefile" ;; "src/common/Makefile") CONFIG_FILES="$CONFIG_FILES src/common/Makefile" ;; "src/db_api/Makefile") CONFIG_FILES="$CONFIG_FILES src/db_api/Makefile" ;; "src/layouts/Makefile") CONFIG_FILES="$CONFIG_FILES src/layouts/Makefile" ;; "src/layouts/power/Makefile") CONFIG_FILES="$CONFIG_FILES src/layouts/power/Makefile" ;; 
"src/layouts/unit/Makefile") CONFIG_FILES="$CONFIG_FILES src/layouts/unit/Makefile" ;; "src/database/Makefile") CONFIG_FILES="$CONFIG_FILES src/database/Makefile" ;; "src/sacct/Makefile") CONFIG_FILES="$CONFIG_FILES src/sacct/Makefile" ;; "src/sacctmgr/Makefile") CONFIG_FILES="$CONFIG_FILES src/sacctmgr/Makefile" ;; "src/sreport/Makefile") CONFIG_FILES="$CONFIG_FILES src/sreport/Makefile" ;; "src/salloc/Makefile") CONFIG_FILES="$CONFIG_FILES src/salloc/Makefile" ;; "src/sbatch/Makefile") CONFIG_FILES="$CONFIG_FILES src/sbatch/Makefile" ;; "src/sbcast/Makefile") CONFIG_FILES="$CONFIG_FILES src/sbcast/Makefile" ;; "src/sattach/Makefile") CONFIG_FILES="$CONFIG_FILES src/sattach/Makefile" ;; "src/scancel/Makefile") CONFIG_FILES="$CONFIG_FILES src/scancel/Makefile" ;; "src/scontrol/Makefile") CONFIG_FILES="$CONFIG_FILES src/scontrol/Makefile" ;; "src/sdiag/Makefile") CONFIG_FILES="$CONFIG_FILES src/sdiag/Makefile" ;; "src/sinfo/Makefile") CONFIG_FILES="$CONFIG_FILES src/sinfo/Makefile" ;; "src/slurmctld/Makefile") CONFIG_FILES="$CONFIG_FILES src/slurmctld/Makefile" ;; "src/slurmd/Makefile") CONFIG_FILES="$CONFIG_FILES src/slurmd/Makefile" ;; "src/slurmd/common/Makefile") CONFIG_FILES="$CONFIG_FILES src/slurmd/common/Makefile" ;; "src/slurmd/slurmd/Makefile") CONFIG_FILES="$CONFIG_FILES src/slurmd/slurmd/Makefile" ;; "src/slurmd/slurmstepd/Makefile") CONFIG_FILES="$CONFIG_FILES src/slurmd/slurmstepd/Makefile" ;; "src/slurmdbd/Makefile") CONFIG_FILES="$CONFIG_FILES src/slurmdbd/Makefile" ;; "src/smap/Makefile") CONFIG_FILES="$CONFIG_FILES src/smap/Makefile" ;; "src/smd/Makefile") CONFIG_FILES="$CONFIG_FILES src/smd/Makefile" ;; "src/sprio/Makefile") CONFIG_FILES="$CONFIG_FILES src/sprio/Makefile" ;; "src/squeue/Makefile") CONFIG_FILES="$CONFIG_FILES src/squeue/Makefile" ;; "src/srun/Makefile") CONFIG_FILES="$CONFIG_FILES src/srun/Makefile" ;; "src/srun/libsrun/Makefile") CONFIG_FILES="$CONFIG_FILES src/srun/libsrun/Makefile" ;; "src/srun_cr/Makefile") 
CONFIG_FILES="$CONFIG_FILES src/srun_cr/Makefile" ;; "src/sshare/Makefile") CONFIG_FILES="$CONFIG_FILES src/sshare/Makefile" ;; "src/sstat/Makefile") CONFIG_FILES="$CONFIG_FILES src/sstat/Makefile" ;; "src/strigger/Makefile") CONFIG_FILES="$CONFIG_FILES src/strigger/Makefile" ;; "src/sview/Makefile") CONFIG_FILES="$CONFIG_FILES src/sview/Makefile" ;; "src/plugins/Makefile") CONFIG_FILES="$CONFIG_FILES src/plugins/Makefile" ;; "src/plugins/accounting_storage/Makefile") CONFIG_FILES="$CONFIG_FILES src/plugins/accounting_storage/Makefile" ;; "src/plugins/accounting_storage/common/Makefile") CONFIG_FILES="$CONFIG_FILES src/plugins/accounting_storage/common/Makefile" ;; "src/plugins/accounting_storage/filetxt/Makefile") CONFIG_FILES="$CONFIG_FILES src/plugins/accounting_storage/filetxt/Makefile" ;; "src/plugins/accounting_storage/mysql/Makefile") CONFIG_FILES="$CONFIG_FILES src/plugins/accounting_storage/mysql/Makefile" ;; "src/plugins/accounting_storage/none/Makefile") CONFIG_FILES="$CONFIG_FILES src/plugins/accounting_storage/none/Makefile" ;; "src/plugins/accounting_storage/slurmdbd/Makefile") CONFIG_FILES="$CONFIG_FILES src/plugins/accounting_storage/slurmdbd/Makefile" ;; "src/plugins/acct_gather_energy/Makefile") CONFIG_FILES="$CONFIG_FILES src/plugins/acct_gather_energy/Makefile" ;; "src/plugins/acct_gather_energy/cray/Makefile") CONFIG_FILES="$CONFIG_FILES src/plugins/acct_gather_energy/cray/Makefile" ;; "src/plugins/acct_gather_energy/rapl/Makefile") CONFIG_FILES="$CONFIG_FILES src/plugins/acct_gather_energy/rapl/Makefile" ;; "src/plugins/acct_gather_energy/ibmaem/Makefile") CONFIG_FILES="$CONFIG_FILES src/plugins/acct_gather_energy/ibmaem/Makefile" ;; "src/plugins/acct_gather_energy/ipmi/Makefile") CONFIG_FILES="$CONFIG_FILES src/plugins/acct_gather_energy/ipmi/Makefile" ;; "src/plugins/acct_gather_energy/none/Makefile") CONFIG_FILES="$CONFIG_FILES src/plugins/acct_gather_energy/none/Makefile" ;; "src/plugins/acct_gather_infiniband/Makefile") 
CONFIG_FILES="$CONFIG_FILES src/plugins/acct_gather_infiniband/Makefile" ;; "src/plugins/acct_gather_infiniband/ofed/Makefile") CONFIG_FILES="$CONFIG_FILES src/plugins/acct_gather_infiniband/ofed/Makefile" ;; "src/plugins/acct_gather_infiniband/none/Makefile") CONFIG_FILES="$CONFIG_FILES src/plugins/acct_gather_infiniband/none/Makefile" ;; "src/plugins/acct_gather_filesystem/Makefile") CONFIG_FILES="$CONFIG_FILES src/plugins/acct_gather_filesystem/Makefile" ;; "src/plugins/acct_gather_filesystem/lustre/Makefile") CONFIG_FILES="$CONFIG_FILES src/plugins/acct_gather_filesystem/lustre/Makefile" ;; "src/plugins/acct_gather_filesystem/none/Makefile") CONFIG_FILES="$CONFIG_FILES src/plugins/acct_gather_filesystem/none/Makefile" ;; "src/plugins/acct_gather_profile/Makefile") CONFIG_FILES="$CONFIG_FILES src/plugins/acct_gather_profile/Makefile" ;; "src/plugins/acct_gather_profile/hdf5/Makefile") CONFIG_FILES="$CONFIG_FILES src/plugins/acct_gather_profile/hdf5/Makefile" ;; "src/plugins/acct_gather_profile/hdf5/sh5util/Makefile") CONFIG_FILES="$CONFIG_FILES src/plugins/acct_gather_profile/hdf5/sh5util/Makefile" ;; "src/plugins/acct_gather_profile/hdf5/sh5util/libsh5util_old/Makefile") CONFIG_FILES="$CONFIG_FILES src/plugins/acct_gather_profile/hdf5/sh5util/libsh5util_old/Makefile" ;; "src/plugins/acct_gather_profile/none/Makefile") CONFIG_FILES="$CONFIG_FILES src/plugins/acct_gather_profile/none/Makefile" ;; "src/plugins/auth/Makefile") CONFIG_FILES="$CONFIG_FILES src/plugins/auth/Makefile" ;; "src/plugins/auth/authd/Makefile") CONFIG_FILES="$CONFIG_FILES src/plugins/auth/authd/Makefile" ;; "src/plugins/auth/munge/Makefile") CONFIG_FILES="$CONFIG_FILES src/plugins/auth/munge/Makefile" ;; "src/plugins/auth/none/Makefile") CONFIG_FILES="$CONFIG_FILES src/plugins/auth/none/Makefile" ;; "src/plugins/burst_buffer/Makefile") CONFIG_FILES="$CONFIG_FILES src/plugins/burst_buffer/Makefile" ;; "src/plugins/burst_buffer/common/Makefile") CONFIG_FILES="$CONFIG_FILES 
src/plugins/burst_buffer/common/Makefile" ;; "src/plugins/burst_buffer/cray/Makefile") CONFIG_FILES="$CONFIG_FILES src/plugins/burst_buffer/cray/Makefile" ;; "src/plugins/burst_buffer/generic/Makefile") CONFIG_FILES="$CONFIG_FILES src/plugins/burst_buffer/generic/Makefile" ;; "src/plugins/checkpoint/Makefile") CONFIG_FILES="$CONFIG_FILES src/plugins/checkpoint/Makefile" ;; "src/plugins/checkpoint/aix/Makefile") CONFIG_FILES="$CONFIG_FILES src/plugins/checkpoint/aix/Makefile" ;; "src/plugins/checkpoint/blcr/Makefile") CONFIG_FILES="$CONFIG_FILES src/plugins/checkpoint/blcr/Makefile" ;; "src/plugins/checkpoint/blcr/cr_checkpoint.sh") CONFIG_FILES="$CONFIG_FILES src/plugins/checkpoint/blcr/cr_checkpoint.sh" ;; "src/plugins/checkpoint/blcr/cr_restart.sh") CONFIG_FILES="$CONFIG_FILES src/plugins/checkpoint/blcr/cr_restart.sh" ;; "src/plugins/checkpoint/none/Makefile") CONFIG_FILES="$CONFIG_FILES src/plugins/checkpoint/none/Makefile" ;; "src/plugins/checkpoint/ompi/Makefile") CONFIG_FILES="$CONFIG_FILES src/plugins/checkpoint/ompi/Makefile" ;; "src/plugins/checkpoint/poe/Makefile") CONFIG_FILES="$CONFIG_FILES src/plugins/checkpoint/poe/Makefile" ;; "src/plugins/core_spec/Makefile") CONFIG_FILES="$CONFIG_FILES src/plugins/core_spec/Makefile" ;; "src/plugins/core_spec/cray/Makefile") CONFIG_FILES="$CONFIG_FILES src/plugins/core_spec/cray/Makefile" ;; "src/plugins/core_spec/none/Makefile") CONFIG_FILES="$CONFIG_FILES src/plugins/core_spec/none/Makefile" ;; "src/plugins/crypto/Makefile") CONFIG_FILES="$CONFIG_FILES src/plugins/crypto/Makefile" ;; "src/plugins/crypto/munge/Makefile") CONFIG_FILES="$CONFIG_FILES src/plugins/crypto/munge/Makefile" ;; "src/plugins/crypto/openssl/Makefile") CONFIG_FILES="$CONFIG_FILES src/plugins/crypto/openssl/Makefile" ;; "src/plugins/ext_sensors/Makefile") CONFIG_FILES="$CONFIG_FILES src/plugins/ext_sensors/Makefile" ;; "src/plugins/ext_sensors/rrd/Makefile") CONFIG_FILES="$CONFIG_FILES src/plugins/ext_sensors/rrd/Makefile" ;; 
"src/plugins/ext_sensors/none/Makefile") CONFIG_FILES="$CONFIG_FILES src/plugins/ext_sensors/none/Makefile" ;; "src/plugins/gres/Makefile") CONFIG_FILES="$CONFIG_FILES src/plugins/gres/Makefile" ;; "src/plugins/gres/gpu/Makefile") CONFIG_FILES="$CONFIG_FILES src/plugins/gres/gpu/Makefile" ;; "src/plugins/gres/nic/Makefile") CONFIG_FILES="$CONFIG_FILES src/plugins/gres/nic/Makefile" ;; "src/plugins/gres/mic/Makefile") CONFIG_FILES="$CONFIG_FILES src/plugins/gres/mic/Makefile" ;; "src/plugins/jobacct_gather/Makefile") CONFIG_FILES="$CONFIG_FILES src/plugins/jobacct_gather/Makefile" ;; "src/plugins/jobacct_gather/common/Makefile") CONFIG_FILES="$CONFIG_FILES src/plugins/jobacct_gather/common/Makefile" ;; "src/plugins/jobacct_gather/linux/Makefile") CONFIG_FILES="$CONFIG_FILES src/plugins/jobacct_gather/linux/Makefile" ;; "src/plugins/jobacct_gather/aix/Makefile") CONFIG_FILES="$CONFIG_FILES src/plugins/jobacct_gather/aix/Makefile" ;; "src/plugins/jobacct_gather/cgroup/Makefile") CONFIG_FILES="$CONFIG_FILES src/plugins/jobacct_gather/cgroup/Makefile" ;; "src/plugins/jobacct_gather/none/Makefile") CONFIG_FILES="$CONFIG_FILES src/plugins/jobacct_gather/none/Makefile" ;; "src/plugins/jobcomp/Makefile") CONFIG_FILES="$CONFIG_FILES src/plugins/jobcomp/Makefile" ;; "src/plugins/jobcomp/elasticsearch/Makefile") CONFIG_FILES="$CONFIG_FILES src/plugins/jobcomp/elasticsearch/Makefile" ;; "src/plugins/jobcomp/filetxt/Makefile") CONFIG_FILES="$CONFIG_FILES src/plugins/jobcomp/filetxt/Makefile" ;; "src/plugins/jobcomp/none/Makefile") CONFIG_FILES="$CONFIG_FILES src/plugins/jobcomp/none/Makefile" ;; "src/plugins/jobcomp/script/Makefile") CONFIG_FILES="$CONFIG_FILES src/plugins/jobcomp/script/Makefile" ;; "src/plugins/jobcomp/mysql/Makefile") CONFIG_FILES="$CONFIG_FILES src/plugins/jobcomp/mysql/Makefile" ;; "src/plugins/job_container/Makefile") CONFIG_FILES="$CONFIG_FILES src/plugins/job_container/Makefile" ;; "src/plugins/job_container/cncu/Makefile") CONFIG_FILES="$CONFIG_FILES 
src/plugins/job_container/cncu/Makefile" ;; "src/plugins/job_container/none/Makefile") CONFIG_FILES="$CONFIG_FILES src/plugins/job_container/none/Makefile" ;; "src/plugins/job_submit/Makefile") CONFIG_FILES="$CONFIG_FILES src/plugins/job_submit/Makefile" ;; "src/plugins/job_submit/all_partitions/Makefile") CONFIG_FILES="$CONFIG_FILES src/plugins/job_submit/all_partitions/Makefile" ;; "src/plugins/job_submit/cnode/Makefile") CONFIG_FILES="$CONFIG_FILES src/plugins/job_submit/cnode/Makefile" ;; "src/plugins/job_submit/cray/Makefile") CONFIG_FILES="$CONFIG_FILES src/plugins/job_submit/cray/Makefile" ;; "src/plugins/job_submit/defaults/Makefile") CONFIG_FILES="$CONFIG_FILES src/plugins/job_submit/defaults/Makefile" ;; "src/plugins/job_submit/logging/Makefile") CONFIG_FILES="$CONFIG_FILES src/plugins/job_submit/logging/Makefile" ;; "src/plugins/job_submit/lua/Makefile") CONFIG_FILES="$CONFIG_FILES src/plugins/job_submit/lua/Makefile" ;; "src/plugins/job_submit/partition/Makefile") CONFIG_FILES="$CONFIG_FILES src/plugins/job_submit/partition/Makefile" ;; "src/plugins/job_submit/pbs/Makefile") CONFIG_FILES="$CONFIG_FILES src/plugins/job_submit/pbs/Makefile" ;; "src/plugins/job_submit/require_timelimit/Makefile") CONFIG_FILES="$CONFIG_FILES src/plugins/job_submit/require_timelimit/Makefile" ;; "src/plugins/job_submit/throttle/Makefile") CONFIG_FILES="$CONFIG_FILES src/plugins/job_submit/throttle/Makefile" ;; "src/plugins/launch/Makefile") CONFIG_FILES="$CONFIG_FILES src/plugins/launch/Makefile" ;; "src/plugins/launch/aprun/Makefile") CONFIG_FILES="$CONFIG_FILES src/plugins/launch/aprun/Makefile" ;; "src/plugins/launch/poe/Makefile") CONFIG_FILES="$CONFIG_FILES src/plugins/launch/poe/Makefile" ;; "src/plugins/launch/runjob/Makefile") CONFIG_FILES="$CONFIG_FILES src/plugins/launch/runjob/Makefile" ;; "src/plugins/launch/slurm/Makefile") CONFIG_FILES="$CONFIG_FILES src/plugins/launch/slurm/Makefile" ;; "src/plugins/power/Makefile") CONFIG_FILES="$CONFIG_FILES 
src/plugins/power/Makefile" ;; "src/plugins/power/common/Makefile") CONFIG_FILES="$CONFIG_FILES src/plugins/power/common/Makefile" ;; "src/plugins/power/cray/Makefile") CONFIG_FILES="$CONFIG_FILES src/plugins/power/cray/Makefile" ;; "src/plugins/power/none/Makefile") CONFIG_FILES="$CONFIG_FILES src/plugins/power/none/Makefile" ;; "src/plugins/preempt/Makefile") CONFIG_FILES="$CONFIG_FILES src/plugins/preempt/Makefile" ;; "src/plugins/preempt/job_prio/Makefile") CONFIG_FILES="$CONFIG_FILES src/plugins/preempt/job_prio/Makefile" ;; "src/plugins/preempt/none/Makefile") CONFIG_FILES="$CONFIG_FILES src/plugins/preempt/none/Makefile" ;; "src/plugins/preempt/partition_prio/Makefile") CONFIG_FILES="$CONFIG_FILES src/plugins/preempt/partition_prio/Makefile" ;; "src/plugins/preempt/qos/Makefile") CONFIG_FILES="$CONFIG_FILES src/plugins/preempt/qos/Makefile" ;; "src/plugins/priority/Makefile") CONFIG_FILES="$CONFIG_FILES src/plugins/priority/Makefile" ;; "src/plugins/priority/basic/Makefile") CONFIG_FILES="$CONFIG_FILES src/plugins/priority/basic/Makefile" ;; "src/plugins/priority/multifactor/Makefile") CONFIG_FILES="$CONFIG_FILES src/plugins/priority/multifactor/Makefile" ;; "src/plugins/proctrack/Makefile") CONFIG_FILES="$CONFIG_FILES src/plugins/proctrack/Makefile" ;; "src/plugins/proctrack/aix/Makefile") CONFIG_FILES="$CONFIG_FILES src/plugins/proctrack/aix/Makefile" ;; "src/plugins/proctrack/cray/Makefile") CONFIG_FILES="$CONFIG_FILES src/plugins/proctrack/cray/Makefile" ;; "src/plugins/proctrack/cgroup/Makefile") CONFIG_FILES="$CONFIG_FILES src/plugins/proctrack/cgroup/Makefile" ;; "src/plugins/proctrack/pgid/Makefile") CONFIG_FILES="$CONFIG_FILES src/plugins/proctrack/pgid/Makefile" ;; "src/plugins/proctrack/linuxproc/Makefile") CONFIG_FILES="$CONFIG_FILES src/plugins/proctrack/linuxproc/Makefile" ;; "src/plugins/proctrack/sgi_job/Makefile") CONFIG_FILES="$CONFIG_FILES src/plugins/proctrack/sgi_job/Makefile" ;; "src/plugins/proctrack/lua/Makefile") 
CONFIG_FILES="$CONFIG_FILES src/plugins/proctrack/lua/Makefile" ;; "src/plugins/route/Makefile") CONFIG_FILES="$CONFIG_FILES src/plugins/route/Makefile" ;; "src/plugins/route/default/Makefile") CONFIG_FILES="$CONFIG_FILES src/plugins/route/default/Makefile" ;; "src/plugins/route/topology/Makefile") CONFIG_FILES="$CONFIG_FILES src/plugins/route/topology/Makefile" ;; "src/plugins/sched/Makefile") CONFIG_FILES="$CONFIG_FILES src/plugins/sched/Makefile" ;; "src/plugins/sched/backfill/Makefile") CONFIG_FILES="$CONFIG_FILES src/plugins/sched/backfill/Makefile" ;; "src/plugins/sched/builtin/Makefile") CONFIG_FILES="$CONFIG_FILES src/plugins/sched/builtin/Makefile" ;; "src/plugins/sched/hold/Makefile") CONFIG_FILES="$CONFIG_FILES src/plugins/sched/hold/Makefile" ;; "src/plugins/sched/wiki/Makefile") CONFIG_FILES="$CONFIG_FILES src/plugins/sched/wiki/Makefile" ;; "src/plugins/sched/wiki2/Makefile") CONFIG_FILES="$CONFIG_FILES src/plugins/sched/wiki2/Makefile" ;; "src/plugins/select/Makefile") CONFIG_FILES="$CONFIG_FILES src/plugins/select/Makefile" ;; "src/plugins/select/alps/Makefile") CONFIG_FILES="$CONFIG_FILES src/plugins/select/alps/Makefile" ;; "src/plugins/select/alps/libalps/Makefile") CONFIG_FILES="$CONFIG_FILES src/plugins/select/alps/libalps/Makefile" ;; "src/plugins/select/alps/libemulate/Makefile") CONFIG_FILES="$CONFIG_FILES src/plugins/select/alps/libemulate/Makefile" ;; "src/plugins/select/bluegene/Makefile") CONFIG_FILES="$CONFIG_FILES src/plugins/select/bluegene/Makefile" ;; "src/plugins/select/bluegene/ba/Makefile") CONFIG_FILES="$CONFIG_FILES src/plugins/select/bluegene/ba/Makefile" ;; "src/plugins/select/bluegene/ba_bgq/Makefile") CONFIG_FILES="$CONFIG_FILES src/plugins/select/bluegene/ba_bgq/Makefile" ;; "src/plugins/select/bluegene/bl/Makefile") CONFIG_FILES="$CONFIG_FILES src/plugins/select/bluegene/bl/Makefile" ;; "src/plugins/select/bluegene/bl_bgq/Makefile") CONFIG_FILES="$CONFIG_FILES src/plugins/select/bluegene/bl_bgq/Makefile" ;; 
"src/plugins/select/bluegene/sfree/Makefile") CONFIG_FILES="$CONFIG_FILES src/plugins/select/bluegene/sfree/Makefile" ;; "src/plugins/select/cons_res/Makefile") CONFIG_FILES="$CONFIG_FILES src/plugins/select/cons_res/Makefile" ;; "src/plugins/select/cray/Makefile") CONFIG_FILES="$CONFIG_FILES src/plugins/select/cray/Makefile" ;; "src/plugins/select/linear/Makefile") CONFIG_FILES="$CONFIG_FILES src/plugins/select/linear/Makefile" ;; "src/plugins/select/other/Makefile") CONFIG_FILES="$CONFIG_FILES src/plugins/select/other/Makefile" ;; "src/plugins/select/serial/Makefile") CONFIG_FILES="$CONFIG_FILES src/plugins/select/serial/Makefile" ;; "src/plugins/slurmctld/Makefile") CONFIG_FILES="$CONFIG_FILES src/plugins/slurmctld/Makefile" ;; "src/plugins/slurmctld/nonstop/Makefile") CONFIG_FILES="$CONFIG_FILES src/plugins/slurmctld/nonstop/Makefile" ;; "src/plugins/slurmd/Makefile") CONFIG_FILES="$CONFIG_FILES src/plugins/slurmd/Makefile" ;; "src/plugins/switch/Makefile") CONFIG_FILES="$CONFIG_FILES src/plugins/switch/Makefile" ;; "src/plugins/switch/cray/Makefile") CONFIG_FILES="$CONFIG_FILES src/plugins/switch/cray/Makefile" ;; "src/plugins/switch/generic/Makefile") CONFIG_FILES="$CONFIG_FILES src/plugins/switch/generic/Makefile" ;; "src/plugins/switch/none/Makefile") CONFIG_FILES="$CONFIG_FILES src/plugins/switch/none/Makefile" ;; "src/plugins/switch/nrt/Makefile") CONFIG_FILES="$CONFIG_FILES src/plugins/switch/nrt/Makefile" ;; "src/plugins/switch/nrt/libpermapi/Makefile") CONFIG_FILES="$CONFIG_FILES src/plugins/switch/nrt/libpermapi/Makefile" ;; "src/plugins/mpi/Makefile") CONFIG_FILES="$CONFIG_FILES src/plugins/mpi/Makefile" ;; "src/plugins/mpi/mpich1_p4/Makefile") CONFIG_FILES="$CONFIG_FILES src/plugins/mpi/mpich1_p4/Makefile" ;; "src/plugins/mpi/mpich1_shmem/Makefile") CONFIG_FILES="$CONFIG_FILES src/plugins/mpi/mpich1_shmem/Makefile" ;; "src/plugins/mpi/mpichgm/Makefile") CONFIG_FILES="$CONFIG_FILES src/plugins/mpi/mpichgm/Makefile" ;; 
"src/plugins/mpi/mpichmx/Makefile") CONFIG_FILES="$CONFIG_FILES src/plugins/mpi/mpichmx/Makefile" ;; "src/plugins/mpi/mvapich/Makefile") CONFIG_FILES="$CONFIG_FILES src/plugins/mpi/mvapich/Makefile" ;; "src/plugins/mpi/lam/Makefile") CONFIG_FILES="$CONFIG_FILES src/plugins/mpi/lam/Makefile" ;; "src/plugins/mpi/none/Makefile") CONFIG_FILES="$CONFIG_FILES src/plugins/mpi/none/Makefile" ;; "src/plugins/mpi/openmpi/Makefile") CONFIG_FILES="$CONFIG_FILES src/plugins/mpi/openmpi/Makefile" ;; "src/plugins/mpi/pmi2/Makefile") CONFIG_FILES="$CONFIG_FILES src/plugins/mpi/pmi2/Makefile" ;; "src/plugins/task/Makefile") CONFIG_FILES="$CONFIG_FILES src/plugins/task/Makefile" ;; "src/plugins/task/affinity/Makefile") CONFIG_FILES="$CONFIG_FILES src/plugins/task/affinity/Makefile" ;; "src/plugins/task/cgroup/Makefile") CONFIG_FILES="$CONFIG_FILES src/plugins/task/cgroup/Makefile" ;; "src/plugins/task/cray/Makefile") CONFIG_FILES="$CONFIG_FILES src/plugins/task/cray/Makefile" ;; "src/plugins/task/none/Makefile") CONFIG_FILES="$CONFIG_FILES src/plugins/task/none/Makefile" ;; "src/plugins/topology/Makefile") CONFIG_FILES="$CONFIG_FILES src/plugins/topology/Makefile" ;; "src/plugins/topology/3d_torus/Makefile") CONFIG_FILES="$CONFIG_FILES src/plugins/topology/3d_torus/Makefile" ;; "src/plugins/topology/hypercube/Makefile") CONFIG_FILES="$CONFIG_FILES src/plugins/topology/hypercube/Makefile" ;; "src/plugins/topology/node_rank/Makefile") CONFIG_FILES="$CONFIG_FILES src/plugins/topology/node_rank/Makefile" ;; "src/plugins/topology/none/Makefile") CONFIG_FILES="$CONFIG_FILES src/plugins/topology/none/Makefile" ;; "src/plugins/topology/tree/Makefile") CONFIG_FILES="$CONFIG_FILES src/plugins/topology/tree/Makefile" ;; "testsuite/Makefile") CONFIG_FILES="$CONFIG_FILES testsuite/Makefile" ;; "testsuite/expect/Makefile") CONFIG_FILES="$CONFIG_FILES testsuite/expect/Makefile" ;; "testsuite/slurm_unit/Makefile") CONFIG_FILES="$CONFIG_FILES testsuite/slurm_unit/Makefile" ;; 
"testsuite/slurm_unit/api/Makefile") CONFIG_FILES="$CONFIG_FILES testsuite/slurm_unit/api/Makefile" ;; "testsuite/slurm_unit/api/manual/Makefile") CONFIG_FILES="$CONFIG_FILES testsuite/slurm_unit/api/manual/Makefile" ;; "testsuite/slurm_unit/common/Makefile") CONFIG_FILES="$CONFIG_FILES testsuite/slurm_unit/common/Makefile" ;; *) as_fn_error $? "invalid argument: \`$ac_config_target'" "$LINENO" 5;; esac done # If the user did not use the arguments to specify the items to instantiate, # then the envvar interface is used. Set only those that are not. # We use the long form for the default assignment because of an extremely # bizarre bug on SunOS 4.1.3. if $ac_need_defaults; then test "${CONFIG_FILES+set}" = set || CONFIG_FILES=$config_files test "${CONFIG_HEADERS+set}" = set || CONFIG_HEADERS=$config_headers test "${CONFIG_COMMANDS+set}" = set || CONFIG_COMMANDS=$config_commands fi # Have a temporary directory for convenience. Make it in the build tree # simply because there is no reason against having it here, and in addition, # creating and moving files from /tmp can sometimes cause problems. # Hook for its removal unless debugging. # Note that there is a small window in which the directory will not be cleaned: # after its creation but before its name has been assigned to `$tmp'. $debug || { tmp= ac_tmp= trap 'exit_status=$? : "${ac_tmp:=$tmp}" { test ! -d "$ac_tmp" || rm -fr "$ac_tmp"; } && exit $exit_status ' 0 trap 'as_fn_exit 1' 1 2 13 15 } # Create a (secure) tmp directory for tmp files. { tmp=`(umask 077 && mktemp -d "./confXXXXXX") 2>/dev/null` && test -d "$tmp" } || { tmp=./conf$$-$RANDOM (umask 077 && mkdir "$tmp") } || as_fn_error $? "cannot create a temporary directory in ." "$LINENO" 5 ac_tmp=$tmp # Set up the scripts for CONFIG_FILES section. # No need to generate them if there are no CONFIG_FILES. # This happens for instance with `./config.status config.h'. 
if test -n "$CONFIG_FILES"; then ac_cr=`echo X | tr X '\015'` # On cygwin, bash can eat \r inside `` if the user requested igncr. # But we know of no other shell where ac_cr would be empty at this # point, so we can use a bashism as a fallback. if test "x$ac_cr" = x; then eval ac_cr=\$\'\\r\' fi ac_cs_awk_cr=`$AWK 'BEGIN { print "a\rb" }' /dev/null` if test "$ac_cs_awk_cr" = "a${ac_cr}b"; then ac_cs_awk_cr='\\r' else ac_cs_awk_cr=$ac_cr fi echo 'BEGIN {' >"$ac_tmp/subs1.awk" && _ACEOF { echo "cat >conf$$subs.awk <<_ACEOF" && echo "$ac_subst_vars" | sed 's/.*/&!$&$ac_delim/' && echo "_ACEOF" } >conf$$subs.sh || as_fn_error $? "could not make $CONFIG_STATUS" "$LINENO" 5 ac_delim_num=`echo "$ac_subst_vars" | grep -c '^'` ac_delim='%!_!# ' for ac_last_try in false false false false false :; do . ./conf$$subs.sh || as_fn_error $? "could not make $CONFIG_STATUS" "$LINENO" 5 ac_delim_n=`sed -n "s/.*$ac_delim\$/X/p" conf$$subs.awk | grep -c X` if test $ac_delim_n = $ac_delim_num; then break elif $ac_last_try; then as_fn_error $? "could not make $CONFIG_STATUS" "$LINENO" 5 else ac_delim="$ac_delim!$ac_delim _$ac_delim!! 
" fi done rm -f conf$$subs.sh cat >>$CONFIG_STATUS <<_ACEOF || ac_write_fail=1 cat >>"\$ac_tmp/subs1.awk" <<\\_ACAWK && _ACEOF sed -n ' h s/^/S["/; s/!.*/"]=/ p g s/^[^!]*!// :repl t repl s/'"$ac_delim"'$// t delim :nl h s/\(.\{148\}\)..*/\1/ t more1 s/["\\]/\\&/g; s/^/"/; s/$/\\n"\\/ p n b repl :more1 s/["\\]/\\&/g; s/^/"/; s/$/"\\/ p g s/.\{148\}// t nl :delim h s/\(.\{148\}\)..*/\1/ t more2 s/["\\]/\\&/g; s/^/"/; s/$/"/ p b :more2 s/["\\]/\\&/g; s/^/"/; s/$/"\\/ p g s/.\{148\}// t delim ' >$CONFIG_STATUS || ac_write_fail=1 rm -f conf$$subs.awk cat >>$CONFIG_STATUS <<_ACEOF || ac_write_fail=1 _ACAWK cat >>"\$ac_tmp/subs1.awk" <<_ACAWK && for (key in S) S_is_set[key] = 1 FS = "" } { line = $ 0 nfields = split(line, field, "@") substed = 0 len = length(field[1]) for (i = 2; i < nfields; i++) { key = field[i] keylen = length(key) if (S_is_set[key]) { value = S[key] line = substr(line, 1, len) "" value "" substr(line, len + keylen + 3) len += length(value) + length(field[++i]) substed = 1 } else len += 1 + keylen } print line } _ACAWK _ACEOF cat >>$CONFIG_STATUS <<\_ACEOF || ac_write_fail=1 if sed "s/$ac_cr//" < /dev/null > /dev/null 2>&1; then sed "s/$ac_cr\$//; s/$ac_cr/$ac_cs_awk_cr/g" else cat fi < "$ac_tmp/subs1.awk" > "$ac_tmp/subs.awk" \ || as_fn_error $? "could not setup config files machinery" "$LINENO" 5 _ACEOF # VPATH may cause trouble with some makes, so we remove sole $(srcdir), # ${srcdir} and @srcdir@ entries from VPATH if srcdir is ".", strip leading and # trailing colons and then remove the whole line if VPATH becomes empty # (actually we leave an empty line to preserve line numbers). if test "x$srcdir" = x.; then ac_vpsub='/^[ ]*VPATH[ ]*=[ ]*/{ h s/// s/^/:/ s/[ ]*$/:/ s/:\$(srcdir):/:/g s/:\${srcdir}:/:/g s/:@srcdir@:/:/g s/^:*// s/:*$// x s/\(=[ ]*\).*/\1/ G s/\n// s/^[^=]*=[ ]*$// }' fi cat >>$CONFIG_STATUS <<\_ACEOF || ac_write_fail=1 fi # test -n "$CONFIG_FILES" # Set up the scripts for CONFIG_HEADERS section. 
# No need to generate them if there are no CONFIG_HEADERS. # This happens for instance with `./config.status Makefile'. if test -n "$CONFIG_HEADERS"; then cat >"$ac_tmp/defines.awk" <<\_ACAWK || BEGIN { _ACEOF # Transform confdefs.h into an awk script `defines.awk', embedded as # here-document in config.status, that substitutes the proper values into # config.h.in to produce config.h. # Create a delimiter string that does not exist in confdefs.h, to ease # handling of long lines. ac_delim='%!_!# ' for ac_last_try in false false :; do ac_tt=`sed -n "/$ac_delim/p" confdefs.h` if test -z "$ac_tt"; then break elif $ac_last_try; then as_fn_error $? "could not make $CONFIG_HEADERS" "$LINENO" 5 else ac_delim="$ac_delim!$ac_delim _$ac_delim!! " fi done # For the awk script, D is an array of macro values keyed by name, # likewise P contains macro parameters if any. Preserve backslash # newline sequences. ac_word_re=[_$as_cr_Letters][_$as_cr_alnum]* sed -n ' s/.\{148\}/&'"$ac_delim"'/g t rset :rset s/^[ ]*#[ ]*define[ ][ ]*/ / t def d :def s/\\$// t bsnl s/["\\]/\\&/g s/^ \('"$ac_word_re"'\)\(([^()]*)\)[ ]*\(.*\)/P["\1"]="\2"\ D["\1"]=" \3"/p s/^ \('"$ac_word_re"'\)[ ]*\(.*\)/D["\1"]=" \2"/p d :bsnl s/["\\]/\\&/g s/^ \('"$ac_word_re"'\)\(([^()]*)\)[ ]*\(.*\)/P["\1"]="\2"\ D["\1"]=" \3\\\\\\n"\\/p t cont s/^ \('"$ac_word_re"'\)[ ]*\(.*\)/D["\1"]=" \2\\\\\\n"\\/p t cont d :cont n s/.\{148\}/&'"$ac_delim"'/g t clear :clear s/\\$// t bsnlc s/["\\]/\\&/g; s/^/"/; s/$/"/p d :bsnlc s/["\\]/\\&/g; s/^/"/; s/$/\\\\\\n"\\/p b cont ' >$CONFIG_STATUS || ac_write_fail=1 cat >>$CONFIG_STATUS <<_ACEOF || ac_write_fail=1 for (key in D) D_is_set[key] = 1 FS = "" } /^[\t ]*#[\t ]*(define|undef)[\t ]+$ac_word_re([\t (]|\$)/ { line = \$ 0 split(line, arg, " ") if (arg[1] == "#") { defundef = arg[2] mac1 = arg[3] } else { defundef = substr(arg[1], 2) mac1 = arg[2] } split(mac1, mac2, "(") #) macro = mac2[1] prefix = substr(line, 1, index(line, defundef) - 1) if (D_is_set[macro]) { # Preserve the 
white space surrounding the "#". print prefix "define", macro P[macro] D[macro] next } else { # Replace #undef with comments. This is necessary, for example, # in the case of _POSIX_SOURCE, which is predefined and required # on some systems where configure will not decide to define it. if (defundef == "undef") { print "/*", prefix defundef, macro, "*/" next } } } { print } _ACAWK _ACEOF cat >>$CONFIG_STATUS <<\_ACEOF || ac_write_fail=1 as_fn_error $? "could not setup config headers machinery" "$LINENO" 5 fi # test -n "$CONFIG_HEADERS" eval set X " :F $CONFIG_FILES :H $CONFIG_HEADERS :C $CONFIG_COMMANDS" shift for ac_tag do case $ac_tag in :[FHLC]) ac_mode=$ac_tag; continue;; esac case $ac_mode$ac_tag in :[FHL]*:*);; :L* | :C*:*) as_fn_error $? "invalid tag \`$ac_tag'" "$LINENO" 5;; :[FH]-) ac_tag=-:-;; :[FH]*) ac_tag=$ac_tag:$ac_tag.in;; esac ac_save_IFS=$IFS IFS=: set x $ac_tag IFS=$ac_save_IFS shift ac_file=$1 shift case $ac_mode in :L) ac_source=$1;; :[FH]) ac_file_inputs= for ac_f do case $ac_f in -) ac_f="$ac_tmp/stdin";; *) # Look for the file first in the build tree, then in the source tree # (if the path is not absolute). The absolute path cannot be DOS-style, # because $ac_f cannot contain `:'. test -f "$ac_f" || case $ac_f in [\\/$]*) false;; *) test -f "$srcdir/$ac_f" && ac_f="$srcdir/$ac_f";; esac || as_fn_error 1 "cannot find input file: \`$ac_f'" "$LINENO" 5;; esac case $ac_f in *\'*) ac_f=`$as_echo "$ac_f" | sed "s/'/'\\\\\\\\''/g"`;; esac as_fn_append ac_file_inputs " '$ac_f'" done # Let's still pretend it is `configure' which instantiates (i.e., don't # use $as_me), people would be surprised to read: # /* config.h. Generated by config.status. */ configure_input='Generated from '` $as_echo "$*" | sed 's|^[^:]*/||;s|:[^:]*/|, |g' `' by configure.' if test x"$ac_file" != x-; then configure_input="$ac_file. 
$configure_input" { $as_echo "$as_me:${as_lineno-$LINENO}: creating $ac_file" >&5 $as_echo "$as_me: creating $ac_file" >&6;} fi # Neutralize special characters interpreted by sed in replacement strings. case $configure_input in #( *\&* | *\|* | *\\* ) ac_sed_conf_input=`$as_echo "$configure_input" | sed 's/[\\\\&|]/\\\\&/g'`;; #( *) ac_sed_conf_input=$configure_input;; esac case $ac_tag in *:-:* | *:-) cat >"$ac_tmp/stdin" \ || as_fn_error $? "could not create $ac_file" "$LINENO" 5 ;; esac ;; esac ac_dir=`$as_dirname -- "$ac_file" || $as_expr X"$ac_file" : 'X\(.*[^/]\)//*[^/][^/]*/*$' \| \ X"$ac_file" : 'X\(//\)[^/]' \| \ X"$ac_file" : 'X\(//\)$' \| \ X"$ac_file" : 'X\(/\)' \| . 2>/dev/null || $as_echo X"$ac_file" | sed '/^X\(.*[^/]\)\/\/*[^/][^/]*\/*$/{ s//\1/ q } /^X\(\/\/\)[^/].*/{ s//\1/ q } /^X\(\/\/\)$/{ s//\1/ q } /^X\(\/\).*/{ s//\1/ q } s/.*/./; q'` as_dir="$ac_dir"; as_fn_mkdir_p ac_builddir=. case "$ac_dir" in .) ac_dir_suffix= ac_top_builddir_sub=. ac_top_build_prefix= ;; *) ac_dir_suffix=/`$as_echo "$ac_dir" | sed 's|^\.[\\/]||'` # A ".." for each directory in $ac_dir_suffix. ac_top_builddir_sub=`$as_echo "$ac_dir_suffix" | sed 's|/[^\\/]*|/..|g;s|/||'` case $ac_top_builddir_sub in "") ac_top_builddir_sub=. ac_top_build_prefix= ;; *) ac_top_build_prefix=$ac_top_builddir_sub/ ;; esac ;; esac ac_abs_top_builddir=$ac_pwd ac_abs_builddir=$ac_pwd$ac_dir_suffix # for backward compatibility: ac_top_builddir=$ac_top_build_prefix case $srcdir in .) # We are building in place. ac_srcdir=. ac_top_srcdir=$ac_top_builddir_sub ac_abs_top_srcdir=$ac_pwd ;; [\\/]* | ?:[\\/]* ) # Absolute name. ac_srcdir=$srcdir$ac_dir_suffix; ac_top_srcdir=$srcdir ac_abs_top_srcdir=$srcdir ;; *) # Relative name. 
ac_srcdir=$ac_top_build_prefix$srcdir$ac_dir_suffix ac_top_srcdir=$ac_top_build_prefix$srcdir ac_abs_top_srcdir=$ac_pwd/$srcdir ;; esac ac_abs_srcdir=$ac_abs_top_srcdir$ac_dir_suffix case $ac_mode in :F) # # CONFIG_FILE # case $INSTALL in [\\/$]* | ?:[\\/]* ) ac_INSTALL=$INSTALL ;; *) ac_INSTALL=$ac_top_build_prefix$INSTALL ;; esac ac_MKDIR_P=$MKDIR_P case $MKDIR_P in [\\/$]* | ?:[\\/]* ) ;; */*) ac_MKDIR_P=$ac_top_build_prefix$MKDIR_P ;; esac _ACEOF cat >>$CONFIG_STATUS <<\_ACEOF || ac_write_fail=1 # If the template does not know about datarootdir, expand it. # FIXME: This hack should be removed a few years after 2.60. ac_datarootdir_hack=; ac_datarootdir_seen= ac_sed_dataroot=' /datarootdir/ { p q } /@datadir@/p /@docdir@/p /@infodir@/p /@localedir@/p /@mandir@/p' case `eval "sed -n \"\$ac_sed_dataroot\" $ac_file_inputs"` in *datarootdir*) ac_datarootdir_seen=yes;; *@datadir@*|*@docdir@*|*@infodir@*|*@localedir@*|*@mandir@*) { $as_echo "$as_me:${as_lineno-$LINENO}: WARNING: $ac_file_inputs seems to ignore the --datarootdir setting" >&5 $as_echo "$as_me: WARNING: $ac_file_inputs seems to ignore the --datarootdir setting" >&2;} _ACEOF cat >>$CONFIG_STATUS <<_ACEOF || ac_write_fail=1 ac_datarootdir_hack=' s&@datadir@&$datadir&g s&@docdir@&$docdir&g s&@infodir@&$infodir&g s&@localedir@&$localedir&g s&@mandir@&$mandir&g s&\\\${datarootdir}&$datarootdir&g' ;; esac _ACEOF # Neutralize VPATH when `$srcdir' = `.'. # Shell code in configure.ac might set extrasub. # FIXME: do we really want to maintain this feature? 
cat >>$CONFIG_STATUS <<_ACEOF || ac_write_fail=1 ac_sed_extra="$ac_vpsub $extrasub _ACEOF cat >>$CONFIG_STATUS <<\_ACEOF || ac_write_fail=1 :t /@[a-zA-Z_][a-zA-Z_0-9]*@/!b s|@configure_input@|$ac_sed_conf_input|;t t s&@top_builddir@&$ac_top_builddir_sub&;t t s&@top_build_prefix@&$ac_top_build_prefix&;t t s&@srcdir@&$ac_srcdir&;t t s&@abs_srcdir@&$ac_abs_srcdir&;t t s&@top_srcdir@&$ac_top_srcdir&;t t s&@abs_top_srcdir@&$ac_abs_top_srcdir&;t t s&@builddir@&$ac_builddir&;t t s&@abs_builddir@&$ac_abs_builddir&;t t s&@abs_top_builddir@&$ac_abs_top_builddir&;t t s&@INSTALL@&$ac_INSTALL&;t t s&@MKDIR_P@&$ac_MKDIR_P&;t t $ac_datarootdir_hack " eval sed \"\$ac_sed_extra\" "$ac_file_inputs" | $AWK -f "$ac_tmp/subs.awk" \ >$ac_tmp/out || as_fn_error $? "could not create $ac_file" "$LINENO" 5 test -z "$ac_datarootdir_hack$ac_datarootdir_seen" && { ac_out=`sed -n '/\${datarootdir}/p' "$ac_tmp/out"`; test -n "$ac_out"; } && { ac_out=`sed -n '/^[ ]*datarootdir[ ]*:*=/p' \ "$ac_tmp/out"`; test -z "$ac_out"; } && { $as_echo "$as_me:${as_lineno-$LINENO}: WARNING: $ac_file contains a reference to the variable \`datarootdir' which seems to be undefined. Please make sure it is defined" >&5 $as_echo "$as_me: WARNING: $ac_file contains a reference to the variable \`datarootdir' which seems to be undefined. Please make sure it is defined" >&2;} rm -f "$ac_tmp/stdin" case $ac_file in -) cat "$ac_tmp/out" && rm -f "$ac_tmp/out";; *) rm -f "$ac_file" && mv "$ac_tmp/out" "$ac_file";; esac \ || as_fn_error $? "could not create $ac_file" "$LINENO" 5 ;; :H) # # CONFIG_HEADER # if test x"$ac_file" != x-; then { $as_echo "/* $configure_input */" \ && eval '$AWK -f "$ac_tmp/defines.awk"' "$ac_file_inputs" } >"$ac_tmp/config.h" \ || as_fn_error $? 
"could not create $ac_file" "$LINENO" 5 if diff "$ac_file" "$ac_tmp/config.h" >/dev/null 2>&1; then { $as_echo "$as_me:${as_lineno-$LINENO}: $ac_file is unchanged" >&5 $as_echo "$as_me: $ac_file is unchanged" >&6;} else rm -f "$ac_file" mv "$ac_tmp/config.h" "$ac_file" \ || as_fn_error $? "could not create $ac_file" "$LINENO" 5 fi else $as_echo "/* $configure_input */" \ && eval '$AWK -f "$ac_tmp/defines.awk"' "$ac_file_inputs" \ || as_fn_error $? "could not create -" "$LINENO" 5 fi # Compute "$ac_file"'s index in $config_headers. _am_arg="$ac_file" _am_stamp_count=1 for _am_header in $config_headers :; do case $_am_header in $_am_arg | $_am_arg:* ) break ;; * ) _am_stamp_count=`expr $_am_stamp_count + 1` ;; esac done echo "timestamp for $_am_arg" >`$as_dirname -- "$_am_arg" || $as_expr X"$_am_arg" : 'X\(.*[^/]\)//*[^/][^/]*/*$' \| \ X"$_am_arg" : 'X\(//\)[^/]' \| \ X"$_am_arg" : 'X\(//\)$' \| \ X"$_am_arg" : 'X\(/\)' \| . 2>/dev/null || $as_echo X"$_am_arg" | sed '/^X\(.*[^/]\)\/\/*[^/][^/]*\/*$/{ s//\1/ q } /^X\(\/\/\)[^/].*/{ s//\1/ q } /^X\(\/\/\)$/{ s//\1/ q } /^X\(\/\).*/{ s//\1/ q } s/.*/./; q'`/stamp-h$_am_stamp_count ;; :C) { $as_echo "$as_me:${as_lineno-$LINENO}: executing $ac_file commands" >&5 $as_echo "$as_me: executing $ac_file commands" >&6;} ;; esac case $ac_file$ac_mode in "depfiles":C) test x"$AMDEP_TRUE" != x"" || { # Older Autoconf quotes --file arguments for eval, but not when files # are listed without --file. Let's play safe and only enable the eval # if we detect the quoting. case $CONFIG_FILES in *\'*) eval set x "$CONFIG_FILES" ;; *) set x $CONFIG_FILES ;; esac shift for mf do # Strip MF so we end up with the name of the file. mf=`echo "$mf" | sed -e 's/:.*$//'` # Check whether this is an Automake generated Makefile or not. # We used to match only the files named 'Makefile.in', but # some people rename them; so instead we look at the file content. 
# Grep'ing the first line is not enough: some people post-process # each Makefile.in and add a new line on top of each file to say so. # Grep'ing the whole file is not good either: AIX grep has a line # limit of 2048, but all sed's we know have understand at least 4000. if sed -n 's,^#.*generated by automake.*,X,p' "$mf" | grep X >/dev/null 2>&1; then dirpart=`$as_dirname -- "$mf" || $as_expr X"$mf" : 'X\(.*[^/]\)//*[^/][^/]*/*$' \| \ X"$mf" : 'X\(//\)[^/]' \| \ X"$mf" : 'X\(//\)$' \| \ X"$mf" : 'X\(/\)' \| . 2>/dev/null || $as_echo X"$mf" | sed '/^X\(.*[^/]\)\/\/*[^/][^/]*\/*$/{ s//\1/ q } /^X\(\/\/\)[^/].*/{ s//\1/ q } /^X\(\/\/\)$/{ s//\1/ q } /^X\(\/\).*/{ s//\1/ q } s/.*/./; q'` else continue fi # Extract the definition of DEPDIR, am__include, and am__quote # from the Makefile without running 'make'. DEPDIR=`sed -n 's/^DEPDIR = //p' < "$mf"` test -z "$DEPDIR" && continue am__include=`sed -n 's/^am__include = //p' < "$mf"` test -z "$am__include" && continue am__quote=`sed -n 's/^am__quote = //p' < "$mf"` # Find all dependency output files, they are included files with # $(DEPDIR) in their names. We invoke sed twice because it is the # simplest approach to changing $(DEPDIR) to its actual value in the # expansion. for file in `sed -n " s/^$am__include $am__quote\(.*(DEPDIR).*\)$am__quote"'$/\1/p' <"$mf" | \ sed -e 's/\$(DEPDIR)/'"$DEPDIR"'/g'`; do # Make sure the directory exists. test -f "$dirpart/$file" && continue fdir=`$as_dirname -- "$file" || $as_expr X"$file" : 'X\(.*[^/]\)//*[^/][^/]*/*$' \| \ X"$file" : 'X\(//\)[^/]' \| \ X"$file" : 'X\(//\)$' \| \ X"$file" : 'X\(/\)' \| . 
2>/dev/null || $as_echo X"$file" | sed '/^X\(.*[^/]\)\/\/*[^/][^/]*\/*$/{ s//\1/ q } /^X\(\/\/\)[^/].*/{ s//\1/ q } /^X\(\/\/\)$/{ s//\1/ q } /^X\(\/\).*/{ s//\1/ q } s/.*/./; q'` as_dir=$dirpart/$fdir; as_fn_mkdir_p # echo "creating $dirpart/$file" echo '# dummy' > "$dirpart/$file" done done } ;; "libtool":C) # See if we are running on zsh, and set the options which allow our # commands through without removal of \ escapes. if test -n "${ZSH_VERSION+set}" ; then setopt NO_GLOB_SUBST fi cfgfile="${ofile}T" trap "$RM \"$cfgfile\"; exit 1" 1 2 15 $RM "$cfgfile" cat <<_LT_EOF >> "$cfgfile" #! $SHELL # `$ECHO "$ofile" | sed 's%^.*/%%'` - Provide generalized library-building support services. # Generated automatically by $as_me ($PACKAGE$TIMESTAMP) $VERSION # Libtool was configured on host `(hostname || uname -n) 2>/dev/null | sed 1q`: # NOTE: Changes made to this file will be lost: look at ltmain.sh. # # Copyright (C) 1996, 1997, 1998, 1999, 2000, 2001, 2003, 2004, 2005, # 2006, 2007, 2008, 2009, 2010, 2011 Free Software # Foundation, Inc. # Written by Gordon Matzigkeit, 1996 # # This file is part of GNU Libtool. # # GNU Libtool is free software; you can redistribute it and/or # modify it under the terms of the GNU General Public License as # published by the Free Software Foundation; either version 2 of # the License, or (at your option) any later version. # # As a special exception to the GNU General Public License, # if you distribute this file as part of a program or library that # is built using GNU Libtool, you may include this file under the # same distribution terms that you use for the rest of that program. # # GNU Libtool is distributed in the hope that it will be useful, # but WITHOUT ANY WARRANTY; without even the implied warranty of # MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the # GNU General Public License for more details. # # You should have received a copy of the GNU General Public License # along with GNU Libtool; see the file COPYING. 
If not, a copy # can be downloaded from http://www.gnu.org/licenses/gpl.html, or # obtained by writing to the Free Software Foundation, Inc., # 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301, USA. # The names of the tagged configurations supported by this script. available_tags="CXX " # ### BEGIN LIBTOOL CONFIG # Which release of libtool.m4 was used? macro_version=$macro_version macro_revision=$macro_revision # Whether or not to build shared libraries. build_libtool_libs=$enable_shared # Whether or not to build static libraries. build_old_libs=$enable_static # What type of objects to build. pic_mode=$pic_mode # Whether or not to optimize for fast installation. fast_install=$enable_fast_install # Shell to use when invoking shell scripts. SHELL=$lt_SHELL # An echo program that protects backslashes. ECHO=$lt_ECHO # The PATH separator for the build system. PATH_SEPARATOR=$lt_PATH_SEPARATOR # The host system. host_alias=$host_alias host=$host host_os=$host_os # The build system. build_alias=$build_alias build=$build build_os=$build_os # A sed program that does not truncate output. SED=$lt_SED # Sed that helps us avoid accidentally triggering echo(1) options like -n. Xsed="\$SED -e 1s/^X//" # A grep program that handles long lines. GREP=$lt_GREP # An ERE matcher. EGREP=$lt_EGREP # A literal string matcher. FGREP=$lt_FGREP # A BSD- or MS-compatible name lister. NM=$lt_NM # Whether we need soft or hard links. LN_S=$lt_LN_S # What is the maximum length of a command? max_cmd_len=$max_cmd_len # Object file suffix (normally "o"). objext=$ac_objext # Executable file suffix (normally ""). exeext=$exeext # whether the shell understands "unset". lt_unset=$lt_unset # turn spaces into newlines. SP2NL=$lt_lt_SP2NL # turn newlines into spaces. NL2SP=$lt_lt_NL2SP # convert \$build file names to \$host format. to_host_file_cmd=$lt_cv_to_host_file_cmd # convert \$build files to toolchain format. to_tool_file_cmd=$lt_cv_to_tool_file_cmd # An object symbol dumper. 
OBJDUMP=$lt_OBJDUMP # Method to check whether dependent libraries are shared objects. deplibs_check_method=$lt_deplibs_check_method # Command to use when deplibs_check_method = "file_magic". file_magic_cmd=$lt_file_magic_cmd # How to find potential files when deplibs_check_method = "file_magic". file_magic_glob=$lt_file_magic_glob # Find potential files using nocaseglob when deplibs_check_method = "file_magic". want_nocaseglob=$lt_want_nocaseglob # DLL creation program. DLLTOOL=$lt_DLLTOOL # Command to associate shared and link libraries. sharedlib_from_linklib_cmd=$lt_sharedlib_from_linklib_cmd # The archiver. AR=$lt_AR # Flags to create an archive. AR_FLAGS=$lt_AR_FLAGS # How to feed a file listing to the archiver. archiver_list_spec=$lt_archiver_list_spec # A symbol stripping program. STRIP=$lt_STRIP # Commands used to install an old-style archive. RANLIB=$lt_RANLIB old_postinstall_cmds=$lt_old_postinstall_cmds old_postuninstall_cmds=$lt_old_postuninstall_cmds # Whether to use a lock for old archive extraction. lock_old_archive_extraction=$lock_old_archive_extraction # A C compiler. LTCC=$lt_CC # LTCC compiler flags. LTCFLAGS=$lt_CFLAGS # Take the output of nm and produce a listing of raw symbols and C names. global_symbol_pipe=$lt_lt_cv_sys_global_symbol_pipe # Transform the output of nm in a proper C declaration. global_symbol_to_cdecl=$lt_lt_cv_sys_global_symbol_to_cdecl # Transform the output of nm in a C name address pair. global_symbol_to_c_name_address=$lt_lt_cv_sys_global_symbol_to_c_name_address # Transform the output of nm in a C name address pair when lib prefix is needed. global_symbol_to_c_name_address_lib_prefix=$lt_lt_cv_sys_global_symbol_to_c_name_address_lib_prefix # Specify filename containing input files for \$NM. nm_file_list_spec=$lt_nm_file_list_spec # The root where to search for dependent libraries,and in which our libraries should be installed. lt_sysroot=$lt_sysroot # The name of the directory that contains temporary libtool files. 
objdir=$objdir # Used to examine libraries when file_magic_cmd begins with "file". MAGIC_CMD=$MAGIC_CMD # Must we lock files when doing compilation? need_locks=$lt_need_locks # Manifest tool. MANIFEST_TOOL=$lt_MANIFEST_TOOL # Tool to manipulate archived DWARF debug symbol files on Mac OS X. DSYMUTIL=$lt_DSYMUTIL # Tool to change global to local symbols on Mac OS X. NMEDIT=$lt_NMEDIT # Tool to manipulate fat objects and archives on Mac OS X. LIPO=$lt_LIPO # ldd/readelf like tool for Mach-O binaries on Mac OS X. OTOOL=$lt_OTOOL # ldd/readelf like tool for 64 bit Mach-O binaries on Mac OS X 10.4. OTOOL64=$lt_OTOOL64 # Old archive suffix (normally "a"). libext=$libext # Shared library suffix (normally ".so"). shrext_cmds=$lt_shrext_cmds # The commands to extract the exported symbol list from a shared archive. extract_expsyms_cmds=$lt_extract_expsyms_cmds # Variables whose values should be saved in libtool wrapper scripts and # restored at link time. variables_saved_for_relink=$lt_variables_saved_for_relink # Do we need the "lib" prefix for modules? need_lib_prefix=$need_lib_prefix # Do we need a version for libraries? need_version=$need_version # Library versioning type. version_type=$version_type # Shared library runtime path variable. runpath_var=$runpath_var # Shared library path variable. shlibpath_var=$shlibpath_var # Is shlibpath searched before the hard-coded library search path? shlibpath_overrides_runpath=$shlibpath_overrides_runpath # Format of library name prefix. libname_spec=$lt_libname_spec # List of archive names. First name is the real one, the rest are links. # The last name is the one that the linker finds with -lNAME library_names_spec=$lt_library_names_spec # The coded name of the library, if different from the real name. soname_spec=$lt_soname_spec # Permission mode override for installation of shared libraries. install_override_mode=$lt_install_override_mode # Command to use after installation of a shared archive. 
postinstall_cmds=$lt_postinstall_cmds # Command to use after uninstallation of a shared archive. postuninstall_cmds=$lt_postuninstall_cmds # Commands used to finish a libtool library installation in a directory. finish_cmds=$lt_finish_cmds # As "finish_cmds", except a single script fragment to be evaled but # not shown. finish_eval=$lt_finish_eval # Whether we should hardcode library paths into libraries. hardcode_into_libs=$hardcode_into_libs # Compile-time system search path for libraries. sys_lib_search_path_spec=$lt_sys_lib_search_path_spec # Run-time system search path for libraries. sys_lib_dlsearch_path_spec=$lt_sys_lib_dlsearch_path_spec # Whether dlopen is supported. dlopen_support=$enable_dlopen # Whether dlopen of programs is supported. dlopen_self=$enable_dlopen_self # Whether dlopen of statically linked programs is supported. dlopen_self_static=$enable_dlopen_self_static # Commands to strip libraries. old_striplib=$lt_old_striplib striplib=$lt_striplib # The linker used to build libraries. LD=$lt_LD # How to create reloadable object files. reload_flag=$lt_reload_flag reload_cmds=$lt_reload_cmds # Commands used to build an old-style archive. old_archive_cmds=$lt_old_archive_cmds # A language specific compiler. CC=$lt_compiler # Is the compiler the GNU compiler? with_gcc=$GCC # Compiler flag to turn off builtin functions. no_builtin_flag=$lt_lt_prog_compiler_no_builtin_flag # Additional compiler flags for building library objects. pic_flag=$lt_lt_prog_compiler_pic # How to pass a linker flag through the compiler. wl=$lt_lt_prog_compiler_wl # Compiler flag to prevent dynamic linking. link_static_flag=$lt_lt_prog_compiler_static # Does compiler simultaneously support -c and -o options? compiler_c_o=$lt_lt_cv_prog_compiler_c_o # Whether or not to add -lc for building shared libraries. build_libtool_need_lc=$archive_cmds_need_lc # Whether or not to disallow shared libs when runtime libs are static. 
allow_libtool_libs_with_static_runtimes=$enable_shared_with_static_runtimes # Compiler flag to allow reflexive dlopens. export_dynamic_flag_spec=$lt_export_dynamic_flag_spec # Compiler flag to generate shared objects directly from archives. whole_archive_flag_spec=$lt_whole_archive_flag_spec # Whether the compiler copes with passing no objects directly. compiler_needs_object=$lt_compiler_needs_object # Create an old-style archive from a shared archive. old_archive_from_new_cmds=$lt_old_archive_from_new_cmds # Create a temporary old-style archive to link instead of a shared archive. old_archive_from_expsyms_cmds=$lt_old_archive_from_expsyms_cmds # Commands used to build a shared archive. archive_cmds=$lt_archive_cmds archive_expsym_cmds=$lt_archive_expsym_cmds # Commands used to build a loadable module if different from building # a shared archive. module_cmds=$lt_module_cmds module_expsym_cmds=$lt_module_expsym_cmds # Whether we are building with GNU ld or not. with_gnu_ld=$lt_with_gnu_ld # Flag that allows shared libraries with undefined symbols to be built. allow_undefined_flag=$lt_allow_undefined_flag # Flag that enforces no undefined symbols. no_undefined_flag=$lt_no_undefined_flag # Flag to hardcode \$libdir into a binary during linking. # This must work even if \$libdir does not exist hardcode_libdir_flag_spec=$lt_hardcode_libdir_flag_spec # Whether we need a single "-rpath" flag with a separated argument. hardcode_libdir_separator=$lt_hardcode_libdir_separator # Set to "yes" if using DIR/libNAME\${shared_ext} during linking hardcodes # DIR into the resulting binary. hardcode_direct=$hardcode_direct # Set to "yes" if using DIR/libNAME\${shared_ext} during linking hardcodes # DIR into the resulting binary and the resulting library dependency is # "absolute",i.e impossible to change by setting \${shlibpath_var} if the # library is relocated. 
hardcode_direct_absolute=$hardcode_direct_absolute # Set to "yes" if using the -LDIR flag during linking hardcodes DIR # into the resulting binary. hardcode_minus_L=$hardcode_minus_L # Set to "yes" if using SHLIBPATH_VAR=DIR during linking hardcodes DIR # into the resulting binary. hardcode_shlibpath_var=$hardcode_shlibpath_var # Set to "yes" if building a shared library automatically hardcodes DIR # into the library and all subsequent libraries and executables linked # against it. hardcode_automatic=$hardcode_automatic # Set to yes if linker adds runtime paths of dependent libraries # to runtime path list. inherit_rpath=$inherit_rpath # Whether libtool must link a program against all its dependency libraries. link_all_deplibs=$link_all_deplibs # Set to "yes" if exported symbols are required. always_export_symbols=$always_export_symbols # The commands to list exported symbols. export_symbols_cmds=$lt_export_symbols_cmds # Symbols that should not be listed in the preloaded symbols. exclude_expsyms=$lt_exclude_expsyms # Symbols that must always be exported. include_expsyms=$lt_include_expsyms # Commands necessary for linking programs (against libraries) with templates. prelink_cmds=$lt_prelink_cmds # Commands necessary for finishing linking programs. postlink_cmds=$lt_postlink_cmds # Specify filename containing input files. file_list_spec=$lt_file_list_spec # How to hardcode a shared library path into an executable. hardcode_action=$hardcode_action # The directories searched by this compiler when creating a shared library. compiler_lib_search_dirs=$lt_compiler_lib_search_dirs # Dependencies to place before and after the objects being linked to # create a shared library. predep_objects=$lt_predep_objects postdep_objects=$lt_postdep_objects predeps=$lt_predeps postdeps=$lt_postdeps # The library search path used internally by the compiler when linking # a shared library. 
compiler_lib_search_path=$lt_compiler_lib_search_path # ### END LIBTOOL CONFIG _LT_EOF case $host_os in aix3*) cat <<\_LT_EOF >> "$cfgfile" # AIX sometimes has problems with the GCC collect2 program. For some # reason, if we set the COLLECT_NAMES environment variable, the problems # vanish in a puff of smoke. if test "X${COLLECT_NAMES+set}" != Xset; then COLLECT_NAMES= export COLLECT_NAMES fi _LT_EOF ;; esac ltmain="$ac_aux_dir/ltmain.sh" # We use sed instead of cat because bash on DJGPP gets confused if # if finds mixed CR/LF and LF-only lines. Since sed operates in # text mode, it properly converts lines to CR/LF. This bash problem # is reportedly fixed, but why not run on old versions too? sed '$q' "$ltmain" >> "$cfgfile" \ || (rm -f "$cfgfile"; exit 1) if test x"$xsi_shell" = xyes; then sed -e '/^func_dirname ()$/,/^} # func_dirname /c\ func_dirname ()\ {\ \ case ${1} in\ \ */*) func_dirname_result="${1%/*}${2}" ;;\ \ * ) func_dirname_result="${3}" ;;\ \ esac\ } # Extended-shell func_dirname implementation' "$cfgfile" > $cfgfile.tmp \ && mv -f "$cfgfile.tmp" "$cfgfile" \ || (rm -f "$cfgfile" && cp "$cfgfile.tmp" "$cfgfile" && rm -f "$cfgfile.tmp") test 0 -eq $? || _lt_function_replace_fail=: sed -e '/^func_basename ()$/,/^} # func_basename /c\ func_basename ()\ {\ \ func_basename_result="${1##*/}"\ } # Extended-shell func_basename implementation' "$cfgfile" > $cfgfile.tmp \ && mv -f "$cfgfile.tmp" "$cfgfile" \ || (rm -f "$cfgfile" && cp "$cfgfile.tmp" "$cfgfile" && rm -f "$cfgfile.tmp") test 0 -eq $? 
|| _lt_function_replace_fail=: sed -e '/^func_dirname_and_basename ()$/,/^} # func_dirname_and_basename /c\ func_dirname_and_basename ()\ {\ \ case ${1} in\ \ */*) func_dirname_result="${1%/*}${2}" ;;\ \ * ) func_dirname_result="${3}" ;;\ \ esac\ \ func_basename_result="${1##*/}"\ } # Extended-shell func_dirname_and_basename implementation' "$cfgfile" > $cfgfile.tmp \ && mv -f "$cfgfile.tmp" "$cfgfile" \ || (rm -f "$cfgfile" && cp "$cfgfile.tmp" "$cfgfile" && rm -f "$cfgfile.tmp") test 0 -eq $? || _lt_function_replace_fail=: sed -e '/^func_stripname ()$/,/^} # func_stripname /c\ func_stripname ()\ {\ \ # pdksh 5.2.14 does not do ${X%$Y} correctly if both X and Y are\ \ # positional parameters, so assign one to ordinary parameter first.\ \ func_stripname_result=${3}\ \ func_stripname_result=${func_stripname_result#"${1}"}\ \ func_stripname_result=${func_stripname_result%"${2}"}\ } # Extended-shell func_stripname implementation' "$cfgfile" > $cfgfile.tmp \ && mv -f "$cfgfile.tmp" "$cfgfile" \ || (rm -f "$cfgfile" && cp "$cfgfile.tmp" "$cfgfile" && rm -f "$cfgfile.tmp") test 0 -eq $? || _lt_function_replace_fail=: sed -e '/^func_split_long_opt ()$/,/^} # func_split_long_opt /c\ func_split_long_opt ()\ {\ \ func_split_long_opt_name=${1%%=*}\ \ func_split_long_opt_arg=${1#*=}\ } # Extended-shell func_split_long_opt implementation' "$cfgfile" > $cfgfile.tmp \ && mv -f "$cfgfile.tmp" "$cfgfile" \ || (rm -f "$cfgfile" && cp "$cfgfile.tmp" "$cfgfile" && rm -f "$cfgfile.tmp") test 0 -eq $? || _lt_function_replace_fail=: sed -e '/^func_split_short_opt ()$/,/^} # func_split_short_opt /c\ func_split_short_opt ()\ {\ \ func_split_short_opt_arg=${1#??}\ \ func_split_short_opt_name=${1%"$func_split_short_opt_arg"}\ } # Extended-shell func_split_short_opt implementation' "$cfgfile" > $cfgfile.tmp \ && mv -f "$cfgfile.tmp" "$cfgfile" \ || (rm -f "$cfgfile" && cp "$cfgfile.tmp" "$cfgfile" && rm -f "$cfgfile.tmp") test 0 -eq $? 
|| _lt_function_replace_fail=: sed -e '/^func_lo2o ()$/,/^} # func_lo2o /c\ func_lo2o ()\ {\ \ case ${1} in\ \ *.lo) func_lo2o_result=${1%.lo}.${objext} ;;\ \ *) func_lo2o_result=${1} ;;\ \ esac\ } # Extended-shell func_lo2o implementation' "$cfgfile" > $cfgfile.tmp \ && mv -f "$cfgfile.tmp" "$cfgfile" \ || (rm -f "$cfgfile" && cp "$cfgfile.tmp" "$cfgfile" && rm -f "$cfgfile.tmp") test 0 -eq $? || _lt_function_replace_fail=: sed -e '/^func_xform ()$/,/^} # func_xform /c\ func_xform ()\ {\ func_xform_result=${1%.*}.lo\ } # Extended-shell func_xform implementation' "$cfgfile" > $cfgfile.tmp \ && mv -f "$cfgfile.tmp" "$cfgfile" \ || (rm -f "$cfgfile" && cp "$cfgfile.tmp" "$cfgfile" && rm -f "$cfgfile.tmp") test 0 -eq $? || _lt_function_replace_fail=: sed -e '/^func_arith ()$/,/^} # func_arith /c\ func_arith ()\ {\ func_arith_result=$(( $* ))\ } # Extended-shell func_arith implementation' "$cfgfile" > $cfgfile.tmp \ && mv -f "$cfgfile.tmp" "$cfgfile" \ || (rm -f "$cfgfile" && cp "$cfgfile.tmp" "$cfgfile" && rm -f "$cfgfile.tmp") test 0 -eq $? || _lt_function_replace_fail=: sed -e '/^func_len ()$/,/^} # func_len /c\ func_len ()\ {\ func_len_result=${#1}\ } # Extended-shell func_len implementation' "$cfgfile" > $cfgfile.tmp \ && mv -f "$cfgfile.tmp" "$cfgfile" \ || (rm -f "$cfgfile" && cp "$cfgfile.tmp" "$cfgfile" && rm -f "$cfgfile.tmp") test 0 -eq $? || _lt_function_replace_fail=: fi if test x"$lt_shell_append" = xyes; then sed -e '/^func_append ()$/,/^} # func_append /c\ func_append ()\ {\ eval "${1}+=\\${2}"\ } # Extended-shell func_append implementation' "$cfgfile" > $cfgfile.tmp \ && mv -f "$cfgfile.tmp" "$cfgfile" \ || (rm -f "$cfgfile" && cp "$cfgfile.tmp" "$cfgfile" && rm -f "$cfgfile.tmp") test 0 -eq $? 
|| _lt_function_replace_fail=: sed -e '/^func_append_quoted ()$/,/^} # func_append_quoted /c\ func_append_quoted ()\ {\ \ func_quote_for_eval "${2}"\ \ eval "${1}+=\\\\ \\$func_quote_for_eval_result"\ } # Extended-shell func_append_quoted implementation' "$cfgfile" > $cfgfile.tmp \ && mv -f "$cfgfile.tmp" "$cfgfile" \ || (rm -f "$cfgfile" && cp "$cfgfile.tmp" "$cfgfile" && rm -f "$cfgfile.tmp") test 0 -eq $? || _lt_function_replace_fail=: # Save a `func_append' function call where possible by direct use of '+=' sed -e 's%func_append \([a-zA-Z_]\{1,\}\) "%\1+="%g' $cfgfile > $cfgfile.tmp \ && mv -f "$cfgfile.tmp" "$cfgfile" \ || (rm -f "$cfgfile" && cp "$cfgfile.tmp" "$cfgfile" && rm -f "$cfgfile.tmp") test 0 -eq $? || _lt_function_replace_fail=: else # Save a `func_append' function call even when '+=' is not available sed -e 's%func_append \([a-zA-Z_]\{1,\}\) "%\1="$\1%g' $cfgfile > $cfgfile.tmp \ && mv -f "$cfgfile.tmp" "$cfgfile" \ || (rm -f "$cfgfile" && cp "$cfgfile.tmp" "$cfgfile" && rm -f "$cfgfile.tmp") test 0 -eq $? || _lt_function_replace_fail=: fi if test x"$_lt_function_replace_fail" = x":"; then { $as_echo "$as_me:${as_lineno-$LINENO}: WARNING: Unable to substitute extended shell functions in $ofile" >&5 $as_echo "$as_me: WARNING: Unable to substitute extended shell functions in $ofile" >&2;} fi mv -f "$cfgfile" "$ofile" || (rm -f "$ofile" && cp "$cfgfile" "$ofile" && rm -f "$cfgfile") chmod +x "$ofile" cat <<_LT_EOF >> "$ofile" # ### BEGIN LIBTOOL TAG CONFIG: CXX # The linker used to build libraries. LD=$lt_LD_CXX # How to create reloadable object files. reload_flag=$lt_reload_flag_CXX reload_cmds=$lt_reload_cmds_CXX # Commands used to build an old-style archive. old_archive_cmds=$lt_old_archive_cmds_CXX # A language specific compiler. CC=$lt_compiler_CXX # Is the compiler the GNU compiler? with_gcc=$GCC_CXX # Compiler flag to turn off builtin functions. 
no_builtin_flag=$lt_lt_prog_compiler_no_builtin_flag_CXX # Additional compiler flags for building library objects. pic_flag=$lt_lt_prog_compiler_pic_CXX # How to pass a linker flag through the compiler. wl=$lt_lt_prog_compiler_wl_CXX # Compiler flag to prevent dynamic linking. link_static_flag=$lt_lt_prog_compiler_static_CXX # Does compiler simultaneously support -c and -o options? compiler_c_o=$lt_lt_cv_prog_compiler_c_o_CXX # Whether or not to add -lc for building shared libraries. build_libtool_need_lc=$archive_cmds_need_lc_CXX # Whether or not to disallow shared libs when runtime libs are static. allow_libtool_libs_with_static_runtimes=$enable_shared_with_static_runtimes_CXX # Compiler flag to allow reflexive dlopens. export_dynamic_flag_spec=$lt_export_dynamic_flag_spec_CXX # Compiler flag to generate shared objects directly from archives. whole_archive_flag_spec=$lt_whole_archive_flag_spec_CXX # Whether the compiler copes with passing no objects directly. compiler_needs_object=$lt_compiler_needs_object_CXX # Create an old-style archive from a shared archive. old_archive_from_new_cmds=$lt_old_archive_from_new_cmds_CXX # Create a temporary old-style archive to link instead of a shared archive. old_archive_from_expsyms_cmds=$lt_old_archive_from_expsyms_cmds_CXX # Commands used to build a shared archive. archive_cmds=$lt_archive_cmds_CXX archive_expsym_cmds=$lt_archive_expsym_cmds_CXX # Commands used to build a loadable module if different from building # a shared archive. module_cmds=$lt_module_cmds_CXX module_expsym_cmds=$lt_module_expsym_cmds_CXX # Whether we are building with GNU ld or not. with_gnu_ld=$lt_with_gnu_ld_CXX # Flag that allows shared libraries with undefined symbols to be built. allow_undefined_flag=$lt_allow_undefined_flag_CXX # Flag that enforces no undefined symbols. no_undefined_flag=$lt_no_undefined_flag_CXX # Flag to hardcode \$libdir into a binary during linking. 
# This must work even if \$libdir does not exist hardcode_libdir_flag_spec=$lt_hardcode_libdir_flag_spec_CXX # Whether we need a single "-rpath" flag with a separated argument. hardcode_libdir_separator=$lt_hardcode_libdir_separator_CXX # Set to "yes" if using DIR/libNAME\${shared_ext} during linking hardcodes # DIR into the resulting binary. hardcode_direct=$hardcode_direct_CXX # Set to "yes" if using DIR/libNAME\${shared_ext} during linking hardcodes # DIR into the resulting binary and the resulting library dependency is # "absolute",i.e impossible to change by setting \${shlibpath_var} if the # library is relocated. hardcode_direct_absolute=$hardcode_direct_absolute_CXX # Set to "yes" if using the -LDIR flag during linking hardcodes DIR # into the resulting binary. hardcode_minus_L=$hardcode_minus_L_CXX # Set to "yes" if using SHLIBPATH_VAR=DIR during linking hardcodes DIR # into the resulting binary. hardcode_shlibpath_var=$hardcode_shlibpath_var_CXX # Set to "yes" if building a shared library automatically hardcodes DIR # into the library and all subsequent libraries and executables linked # against it. hardcode_automatic=$hardcode_automatic_CXX # Set to yes if linker adds runtime paths of dependent libraries # to runtime path list. inherit_rpath=$inherit_rpath_CXX # Whether libtool must link a program against all its dependency libraries. link_all_deplibs=$link_all_deplibs_CXX # Set to "yes" if exported symbols are required. always_export_symbols=$always_export_symbols_CXX # The commands to list exported symbols. export_symbols_cmds=$lt_export_symbols_cmds_CXX # Symbols that should not be listed in the preloaded symbols. exclude_expsyms=$lt_exclude_expsyms_CXX # Symbols that must always be exported. include_expsyms=$lt_include_expsyms_CXX # Commands necessary for linking programs (against libraries) with templates. prelink_cmds=$lt_prelink_cmds_CXX # Commands necessary for finishing linking programs. 
postlink_cmds=$lt_postlink_cmds_CXX # Specify filename containing input files. file_list_spec=$lt_file_list_spec_CXX # How to hardcode a shared library path into an executable. hardcode_action=$hardcode_action_CXX # The directories searched by this compiler when creating a shared library. compiler_lib_search_dirs=$lt_compiler_lib_search_dirs_CXX # Dependencies to place before and after the objects being linked to # create a shared library. predep_objects=$lt_predep_objects_CXX postdep_objects=$lt_postdep_objects_CXX predeps=$lt_predeps_CXX postdeps=$lt_postdeps_CXX # The library search path used internally by the compiler when linking # a shared library. compiler_lib_search_path=$lt_compiler_lib_search_path_CXX # ### END LIBTOOL TAG CONFIG: CXX _LT_EOF ;; esac done # for ac_tag as_fn_exit 0 _ACEOF ac_clean_files=$ac_clean_files_save test $ac_write_fail = 0 || as_fn_error $? "write failure creating $CONFIG_STATUS" "$LINENO" 5 # configure is writing to config.log, and then calls config.status. # config.status does its own redirection, appending to config.log. # Unfortunately, on DOS this fails, as config.log is still kept open # by configure, so config.status won't be able to write to it; its # output is simply discarded. So we exec the FD to /dev/null, # effectively closing config.log, so it can be properly (re)opened and # appended to by config.status. When coming back to configure, we # need to make the FD available again. if test "$no_create" != yes; then ac_cs_success=: ac_config_status_args= test "$silent" = yes && ac_config_status_args="$ac_config_status_args --quiet" exec 5>/dev/null $SHELL $CONFIG_STATUS $ac_config_status_args || ac_cs_success=false exec 5>>config.log # Use ||, not &&, to avoid exiting from the if with $? = 1, which # would make configure fail if this is the last instruction. 
$ac_cs_success || as_fn_exit 1 fi if test -n "$ac_unrecognized_opts" && test "$enable_option_checking" != no; then { $as_echo "$as_me:${as_lineno-$LINENO}: WARNING: unrecognized options: $ac_unrecognized_opts" >&5 $as_echo "$as_me: WARNING: unrecognized options: $ac_unrecognized_opts" >&2;} fi slurm-slurm-15-08-7-1/configure.ac # $Id$ # This file is to be processed with autoconf to generate a configure script dnl Prologue dnl AC_INIT(slurm, m4_esyscmd([perl -ne 'print,exit if s/^\s*VERSION:\s*(\d*.\d*).\S*/\1/i' ./META | sed 's/^v//' | tr '-' '_' | tr -d '\n']), [slurm-dev@schedmd.com], [], [http://slurm.schedmd.com]) AC_PREREQ(2.59) AC_CONFIG_SRCDIR([configure.ac]) AC_CONFIG_AUX_DIR([auxdir]) AC_CANONICAL_TARGET([]) dnl this is a generic flag to avoid building things AM_CONDITIONAL(DONT_BUILD, test "1" = "0") X_AC_GPL_LICENSED # Determine project/version from META file. # Sets PACKAGE, VERSION, SLURM_VERSION X_AC_SLURM_VERSION dnl Initialize Automake dnl dnl If you ever change to use AM_INIT_AUTOMAKE(subdir-objects) edit dnl auxdir/slurm.m4 to not define VERSION dnl AM_INIT_AUTOMAKE(no-define) AM_MAINTAINER_MODE AC_CONFIG_HEADERS([config.h]) AC_CONFIG_HEADERS([slurm/slurm.h]) dnl This needs to be close to the front to set CFLAGS=-m64 X_AC_RPATH X_AC_BGL dnl we need to know if this is a bgl in the Makefile.am to do dnl some things differently AM_CONDITIONAL(BGL_LOADED, test "x$ac_bluegene_loaded" = "xyes") AC_SUBST(BGL_LOADED) X_AC_BGP dnl ok now check if we have an L or P system; Q is handled differently dnl so handle it later. AM_CONDITIONAL(BG_L_P_LOADED, test "x$ac_bluegene_loaded" = "xyes") AC_SUBST(BG_L_P_LOADED) dnl ok now check if we are on a real L or P system (test whether to build srun dnl or not). If we are emulating things we should build it.
AM_CONDITIONAL(REAL_BG_L_P_LOADED, test "x$ac_real_bluegene_loaded" = "xyes") AC_SUBST(REAL_BG_L_P_LOADED) X_AC_BGQ dnl We need to know if this is a Q system AM_CONDITIONAL(BGQ_LOADED, test "x$ac_bgq_loaded" = "xyes") AC_SUBST(BGQ_LOADED) dnl ok now check if We are on a real L or P system, (test if to build srun dnl or not. If we are emulating things we should build it. AM_CONDITIONAL(REAL_BGQ_LOADED, test "x$ac_real_bluegene_loaded" = "xyes") AC_SUBST(REAL_BGQ_LOADED) dnl ok now check if any bluegene was loaded. AM_CONDITIONAL(BLUEGENE_LOADED, test "x$ac_bluegene_loaded" = "xyes") AC_SUBST(BLUEGENE_LOADED) X_AC_AIX dnl dnl Check to see if this architecture should use slurm_* prefix function dnl aliases for plugins. dnl case "$host" in *-*-aix*) AC_DEFINE(USE_ALIAS, 0, [Define slurm_ prefix function aliases for plugins]) ;; *darwin*) AC_DEFINE(USE_ALIAS, 0, [Define slurm_ prefix function aliases for plugins]) ;; *) AC_DEFINE(USE_ALIAS, 1, [Define slurm_ prefix function aliases for plugins]) ;; esac ac_have_cygwin=no dnl dnl add some flags for Solaris and cygwin dnl case "$host" in *cygwin) LDFLAGS="$LDFLAGS -no-undefined" SO_LDFLAGS="$SO_LDFLAGS \$(top_builddir)/src/api/libslurmhelper.la" AC_SUBST(SO_LDFLAGS) ac_have_cygwin=yes ;; *solaris*) CC="/usr/sfw/bin/gcc" CFLAGS="$CFLAGS -D_POSIX_PTHREAD_SEMANTICS -I/usr/sfw/include" LDFLAGS="$LDFLAGS -L/usr/sfw/lib" ;; esac AM_CONDITIONAL(WITH_CYGWIN, test x"$ac_have_cygwin" = x"yes") dnl Checks for programs. 
dnl AC_PROG_CC AC_PROG_CXX AC_PROG_MAKE_SET AC_PROG_LIBTOOL PKG_PROG_PKG_CONFIG([0.9.0]) AM_CONDITIONAL(WITH_CXX, test -n "$ac_ct_CXX") AM_CONDITIONAL(WITH_GNU_LD, test "$with_gnu_ld" = "yes") AC_PATH_PROG([SLEEP_CMD], [sleep], [/bin/sleep]) AC_DEFINE_UNQUOTED([SLEEP_CMD], ["$SLEEP_CMD"], [Define path to sleep command]) AC_PATH_PROG([SUCMD], [su], [/bin/su]) AC_DEFINE_UNQUOTED([SUCMD], ["$SUCMD"], [Define path to su command]) dnl Checks for libraries dnl AC_SEARCH_LIBS([socket], [socket]) AC_SEARCH_LIBS([gethostbyname], [nsl]) AC_SEARCH_LIBS([hstrerror], [resolv]) AC_SEARCH_LIBS([kstat_open], [kstat]) dnl Checks for header files. dnl AC_CHECK_HEADERS(mcheck.h values.h socket.h sys/socket.h \ stdbool.h sys/ipc.h sys/shm.h sys/sem.h errno.h \ stdlib.h dirent.h pthread.h sys/prctl.h \ sysint.h inttypes.h termcap.h netdb.h sys/socket.h \ sys/systemcfg.h ncurses.h curses.h sys/dr.h sys/vfs.h \ pam/pam_appl.h security/pam_appl.h sys/sysctl.h \ pty.h utmp.h \ sys/syslog.h linux/sched.h \ kstat.h paths.h limits.h sys/statfs.h sys/ptrace.h \ sys/termios.h float.h sys/statvfs.h ) AC_HEADER_SYS_WAIT AC_HEADER_TIME AC_HEADER_STDC dnl Checks for structures. dnl X_AC__SYSTEM_CONFIGURATION dnl Check for dlfcn dnl X_AC_DLFCN dnl check to see if glibc's program_invocation_name is available: dnl X_AC_SLURM_PROGRAM_INVOCATION_NAME dnl Check if ptrace takes four or five arguments dnl X_AC_PTRACE dnl Check if setpgrp takes zero or two arguments dnl X_AC_SETPGRP dnl Check of sched_getaffinity exists and it's argument count dnl X_AC_AFFINITY dnl dnl Check for PAM module support X_AC_PAM dnl dnl Check for ISO compliance X_AC_ISO dnl dnl Check if we want to load .login with sbatch --get-user-env option X_AC_ENV_LOGIC dnl Checks for types. dnl X_AC_SLURM_BIGENDIAN dnl Check for JSON parser X_AC_JSON dnl Checks for compiler characteristics. dnl AC_PROG_GCC_TRADITIONAL([]) dnl checks for library functions. 
dnl AC_FUNC_MALLOC AC_FUNC_STRERROR_R AC_CHECK_FUNCS( \ fdatasync \ hstrerror \ strerror \ mtrace \ strndup \ strlcpy \ strsignal \ inet_aton \ inet_ntop \ inet_pton \ setproctitle \ sysctlbyname \ cfmakeraw \ setresuid \ get_current_dir_name \ faccessat \ eaccess \ statvfs \ statfs \ ) AC_CHECK_DECLS([hstrerror, strsignal, sys_siglist]) AC_CHECK_FUNCS(unsetenv, [have_unsetenv=yes]) AM_CONDITIONAL(HAVE_UNSETENV, test "x$have_unsetenv" = "xyes") ACX_PTHREAD([], AC_MSG_ERROR([Error: Cannot figure out how to use pthreads!])) # Always define WITH_PTHREADS if we make it this far AC_DEFINE(WITH_PTHREADS,1,[Define if you have pthreads.]) LDFLAGS="$LDFLAGS " CFLAGS="$CFLAGS $PTHREAD_CFLAGS" LIBS="$PTHREAD_LIBS $LIBS" X_AC_SUN_CONST X_AC_DIMENSIONS X_AC_CFLAGS X_AC_OFED AX_LIB_HDF5() AM_CONDITIONAL(BUILD_HDF5, test "$with_hdf5" = "yes") # Some older systems (Debian/Ubuntu/...) configure HDF5 with # --with-default-api-version=v16 which creates problems for slurm # because slurm uses the 1.8 API. By defining this CPP macro we get # the 1.8 API. 
AC_DEFINE([H5_NO_DEPRECATED_SYMBOLS], [1], [Make sure we get the 1.8 HDF5 API]) X_AC_HWLOC X_AC_FREEIPMI X_AC_SLURM_SEMAPHORE X_AC_RRDTOOL X_AC_NCURSES AM_CONDITIONAL(HAVE_SOME_CURSES, test "x$ac_have_some_curses" = "xyes") AC_SUBST(HAVE_SOME_CURSES) # # Tests for Check # PKG_CHECK_MODULES([CHECK], [check >= 0.9.8], [ac_have_check="yes"], [ac_have_check="no"]) AM_CONDITIONAL(HAVE_CHECK, test "x$ac_have_check" = "xyes") # # Tests for GTK+ # # use the correct libs if running on 64bit if test -d "/usr/lib64/pkgconfig"; then PKG_CONFIG_PATH="/usr/lib64/pkgconfig/:$PKG_CONFIG_PATH" fi if test -d "/opt/gnome/lib64/pkgconfig"; then PKG_CONFIG_PATH="/opt/gnome/lib64/pkgconfig/:$PKG_CONFIG_PATH" fi AM_PATH_GLIB_2_0([2.7.1], [ac_glib_test="yes"], [ac_glib_test="no"], [gthread]) if test ${glib_config_minor_version=0} -ge 32 ; then AC_DEFINE([GLIB_NEW_THREADS], 1, [Define to 1 if using glib-2.32.0 or higher]) fi AM_PATH_GTK_2_0([2.7.1], [ac_gtk_test="yes"], [ac_gtk_test="no"], [gthread]) if test ${gtk_config_minor_version=0} -ge 10 ; then AC_DEFINE([GTK2_USE_RADIO_SET], 1, [Define to 1 if using gtk+-2.10.0 or higher]) fi if test ${gtk_config_minor_version=0} -ge 12 ; then AC_DEFINE([GTK2_USE_TOOLTIP], 1, [Define to 1 if using gtk+-2.12.0 or higher]) fi if test ${gtk_config_minor_version=0} -ge 14 ; then AC_DEFINE([GTK2_USE_GET_FOCUS], 1, [Define to 1 if using gtk+-2.14.0 or higher]) fi if test "x$ac_glib_test" != "xyes" -o "x$ac_gtk_test" != "xyes"; then AC_MSG_WARN([cannot build sview without gtk library]); fi AM_CONDITIONAL(BUILD_SVIEW, [test "x$ac_glib_test" = "xyes"] && [test "x$ac_gtk_test" = "xyes"]) X_AC_DATABASES dnl Cray ALPS/Basil support depends on mySQL X_AC_CRAY dnl checks for system services. dnl dnl checks for system-specific stuff. 
dnl dnl check for how to emulate setproctitle dnl X_AC_SETPROCTITLE dnl check for debug compilation, must follow X_AC_CRAY dnl X_AC_DEBUG AM_CONDITIONAL(DEBUG_MODULES, test "x$ac_debug" = "xtrue") dnl check for slurmctld, slurmd and slurmdbd default ports, dnl and default number of slurmctld ports dnl X_AC_SLURM_PORTS([6817], [6818], [6819], [1]) dnl add SLURM_PREFIX to config.h dnl if test "x$prefix" = "xNONE" ; then AC_DEFINE_UNQUOTED(SLURM_PREFIX, "/usr/local", [Define Slurm installation prefix]) else AC_DEFINE_UNQUOTED(SLURM_PREFIX, "$prefix", [Define Slurm installation prefix]) fi AC_SUBST(SLURM_PREFIX) dnl check for whether to include IBM NRT (Network Resource Table) support dnl X_AC_NRT dnl check for SGI job container support dnl X_AC_SGI_JOB dnl check for netloc library dnl X_AC_NETLOC dnl check for lua library dnl X_AC_LUA dnl check for presence of the man2html command dnl X_AC_MAN2HTML AM_CONDITIONAL(HAVE_MAN2HTML, test "x$ac_have_man2html" = "xyes") AC_SUBST(HAVE_MAN2HTML) dnl check if we can use standard printf functions dnl X_AC_PRINTF_NULL dnl Check for whether to include readline support dnl X_AC_READLINE dnl dnl X_AC_SLURM_WITH_SSL AM_CONDITIONAL(HAVE_OPENSSL, test "x$ac_have_openssl" = "xyes") AC_SUBST(HAVE_OPENSSL) dnl dnl Check for compilation of SLURM auth modules: dnl X_AC_MUNGE dnl dnl Check if multiple-slurmd support is requested and define MULTIPLE_SLURMD dnl if it is. 
dnl AC_MSG_CHECKING(whether to enable multiple-slurmd support) AC_ARG_ENABLE([multiple-slurmd], AS_HELP_STRING(--enable-multiple-slurmd,enable multiple-slurmd support), [ case "$enableval" in yes) multiple_slurmd=yes ;; no) multiple_slurmd=no ;; *) AC_MSG_ERROR([bad value "$enableval" for --enable-multiple-slurmd]);; esac ] ) if test "x$multiple_slurmd" = "xyes"; then AC_DEFINE([MULTIPLE_SLURMD], [1], [Enable multiple slurmd on one node]) AC_MSG_RESULT([yes]) else AC_MSG_RESULT([no]) fi AUTHD_LIBS="-lauth -le" savedLIBS="$LIBS" savedCFLAGS="$CFLAGS" LIBS="$SSL_LIBS $AUTHD_LIBS $LIBS" CFLAGS="$SSL_CPPFLAGS $CFLAGS" AC_CHECK_LIB(auth, auth_init_credentials, [have_authd=yes], [have_authd=no]) AC_SUBST(AUTHD_LIBS) AC_SUBST(AUTHD_CFLAGS) AM_CONDITIONAL(WITH_AUTHD, test "x$have_authd" = "xyes") LIBS="$savedLIBS" CFLAGS="$savedCFLAGS" savedLIBS="$LIBS" LIBS="-lutil $LIBS" AC_CHECK_LIB(util, openpty, [UTIL_LIBS="-lutil"], []) AC_SUBST(UTIL_LIBS) LIBS="$savedLIBS" dnl dnl Check for compilation of SLURM with BLCR support: dnl X_AC_BLCR dnl dnl Check for compilation of SLURM with CURL support: dnl LIBCURL_CHECK_CONFIG dnl dnl Set some configuration based upon multiple configuration parameters dnl ac_build_smap="no" if test "x$ac_have_some_curses" = "xyes" ; then ac_build_smap="yes" fi AM_CONDITIONAL(BUILD_SMAP, test "x$ac_build_smap" = "xyes") dnl All slurm Makefiles: AC_CONFIG_FILES([Makefile config.xml auxdir/Makefile contribs/Makefile contribs/cray/Makefile contribs/cray/csm/Makefile contribs/lua/Makefile contribs/mic/Makefile contribs/pam/Makefile contribs/pam_slurm_adopt/Makefile contribs/perlapi/Makefile contribs/perlapi/libslurm/Makefile contribs/perlapi/libslurm/perl/Makefile.PL contribs/perlapi/libslurmdb/Makefile contribs/perlapi/libslurmdb/perl/Makefile.PL contribs/torque/Makefile contribs/phpext/Makefile contribs/phpext/slurm_php/config.m4 contribs/sgather/Makefile contribs/sgi/Makefile contribs/sjobexit/Makefile contribs/slurmdb-direct/Makefile 
contribs/pmi2/Makefile doc/Makefile doc/man/Makefile doc/man/man1/Makefile doc/man/man3/Makefile doc/man/man5/Makefile doc/man/man8/Makefile doc/html/Makefile doc/html/configurator.html doc/html/configurator.easy.html etc/cgroup.release_common.example etc/init.d.slurm etc/init.d.slurmdbd etc/slurmctld.service etc/slurmd.service etc/slurmdbd.service src/Makefile src/api/Makefile src/common/Makefile src/db_api/Makefile src/layouts/Makefile src/layouts/power/Makefile src/layouts/unit/Makefile src/database/Makefile src/sacct/Makefile src/sacctmgr/Makefile src/sreport/Makefile src/salloc/Makefile src/sbatch/Makefile src/sbcast/Makefile src/sattach/Makefile src/scancel/Makefile src/scontrol/Makefile src/sdiag/Makefile src/sinfo/Makefile src/slurmctld/Makefile src/slurmd/Makefile src/slurmd/common/Makefile src/slurmd/slurmd/Makefile src/slurmd/slurmstepd/Makefile src/slurmdbd/Makefile src/smap/Makefile src/smd/Makefile src/sprio/Makefile src/squeue/Makefile src/srun/Makefile src/srun/libsrun/Makefile src/srun_cr/Makefile src/sshare/Makefile src/sstat/Makefile src/strigger/Makefile src/sview/Makefile src/plugins/Makefile src/plugins/accounting_storage/Makefile src/plugins/accounting_storage/common/Makefile src/plugins/accounting_storage/filetxt/Makefile src/plugins/accounting_storage/mysql/Makefile src/plugins/accounting_storage/none/Makefile src/plugins/accounting_storage/slurmdbd/Makefile src/plugins/acct_gather_energy/Makefile src/plugins/acct_gather_energy/cray/Makefile src/plugins/acct_gather_energy/rapl/Makefile src/plugins/acct_gather_energy/ibmaem/Makefile src/plugins/acct_gather_energy/ipmi/Makefile src/plugins/acct_gather_energy/none/Makefile src/plugins/acct_gather_infiniband/Makefile src/plugins/acct_gather_infiniband/ofed/Makefile src/plugins/acct_gather_infiniband/none/Makefile src/plugins/acct_gather_filesystem/Makefile src/plugins/acct_gather_filesystem/lustre/Makefile src/plugins/acct_gather_filesystem/none/Makefile src/plugins/acct_gather_profile/Makefile 
src/plugins/acct_gather_profile/hdf5/Makefile src/plugins/acct_gather_profile/hdf5/sh5util/Makefile src/plugins/acct_gather_profile/hdf5/sh5util/libsh5util_old/Makefile src/plugins/acct_gather_profile/none/Makefile src/plugins/auth/Makefile src/plugins/auth/authd/Makefile src/plugins/auth/munge/Makefile src/plugins/auth/none/Makefile src/plugins/burst_buffer/Makefile src/plugins/burst_buffer/common/Makefile src/plugins/burst_buffer/cray/Makefile src/plugins/burst_buffer/generic/Makefile src/plugins/checkpoint/Makefile src/plugins/checkpoint/aix/Makefile src/plugins/checkpoint/blcr/Makefile src/plugins/checkpoint/blcr/cr_checkpoint.sh src/plugins/checkpoint/blcr/cr_restart.sh src/plugins/checkpoint/none/Makefile src/plugins/checkpoint/ompi/Makefile src/plugins/checkpoint/poe/Makefile src/plugins/core_spec/Makefile src/plugins/core_spec/cray/Makefile src/plugins/core_spec/none/Makefile src/plugins/crypto/Makefile src/plugins/crypto/munge/Makefile src/plugins/crypto/openssl/Makefile src/plugins/ext_sensors/Makefile src/plugins/ext_sensors/rrd/Makefile src/plugins/ext_sensors/none/Makefile src/plugins/gres/Makefile src/plugins/gres/gpu/Makefile src/plugins/gres/nic/Makefile src/plugins/gres/mic/Makefile src/plugins/jobacct_gather/Makefile src/plugins/jobacct_gather/common/Makefile src/plugins/jobacct_gather/linux/Makefile src/plugins/jobacct_gather/aix/Makefile src/plugins/jobacct_gather/cgroup/Makefile src/plugins/jobacct_gather/none/Makefile src/plugins/jobcomp/Makefile src/plugins/jobcomp/elasticsearch/Makefile src/plugins/jobcomp/filetxt/Makefile src/plugins/jobcomp/none/Makefile src/plugins/jobcomp/script/Makefile src/plugins/jobcomp/mysql/Makefile src/plugins/job_container/Makefile src/plugins/job_container/cncu/Makefile src/plugins/job_container/none/Makefile src/plugins/job_submit/Makefile src/plugins/job_submit/all_partitions/Makefile src/plugins/job_submit/cnode/Makefile src/plugins/job_submit/cray/Makefile src/plugins/job_submit/defaults/Makefile 
src/plugins/job_submit/logging/Makefile src/plugins/job_submit/lua/Makefile src/plugins/job_submit/partition/Makefile src/plugins/job_submit/pbs/Makefile src/plugins/job_submit/require_timelimit/Makefile src/plugins/job_submit/throttle/Makefile src/plugins/launch/Makefile src/plugins/launch/aprun/Makefile src/plugins/launch/poe/Makefile src/plugins/launch/runjob/Makefile src/plugins/launch/slurm/Makefile src/plugins/power/Makefile src/plugins/power/common/Makefile src/plugins/power/cray/Makefile src/plugins/power/none/Makefile src/plugins/preempt/Makefile src/plugins/preempt/job_prio/Makefile src/plugins/preempt/none/Makefile src/plugins/preempt/partition_prio/Makefile src/plugins/preempt/qos/Makefile src/plugins/priority/Makefile src/plugins/priority/basic/Makefile src/plugins/priority/multifactor/Makefile src/plugins/proctrack/Makefile src/plugins/proctrack/aix/Makefile src/plugins/proctrack/cray/Makefile src/plugins/proctrack/cgroup/Makefile src/plugins/proctrack/pgid/Makefile src/plugins/proctrack/linuxproc/Makefile src/plugins/proctrack/sgi_job/Makefile src/plugins/proctrack/lua/Makefile src/plugins/route/Makefile src/plugins/route/default/Makefile src/plugins/route/topology/Makefile src/plugins/sched/Makefile src/plugins/sched/backfill/Makefile src/plugins/sched/builtin/Makefile src/plugins/sched/hold/Makefile src/plugins/sched/wiki/Makefile src/plugins/sched/wiki2/Makefile src/plugins/select/Makefile src/plugins/select/alps/Makefile src/plugins/select/alps/libalps/Makefile src/plugins/select/alps/libemulate/Makefile src/plugins/select/bluegene/Makefile src/plugins/select/bluegene/ba/Makefile src/plugins/select/bluegene/ba_bgq/Makefile src/plugins/select/bluegene/bl/Makefile src/plugins/select/bluegene/bl_bgq/Makefile src/plugins/select/bluegene/sfree/Makefile src/plugins/select/cons_res/Makefile src/plugins/select/cray/Makefile src/plugins/select/linear/Makefile src/plugins/select/other/Makefile src/plugins/select/serial/Makefile 
src/plugins/slurmctld/Makefile src/plugins/slurmctld/nonstop/Makefile src/plugins/slurmd/Makefile src/plugins/switch/Makefile src/plugins/switch/cray/Makefile src/plugins/switch/generic/Makefile src/plugins/switch/none/Makefile src/plugins/switch/nrt/Makefile src/plugins/switch/nrt/libpermapi/Makefile src/plugins/mpi/Makefile src/plugins/mpi/mpich1_p4/Makefile src/plugins/mpi/mpich1_shmem/Makefile src/plugins/mpi/mpichgm/Makefile src/plugins/mpi/mpichmx/Makefile src/plugins/mpi/mvapich/Makefile src/plugins/mpi/lam/Makefile src/plugins/mpi/none/Makefile src/plugins/mpi/openmpi/Makefile src/plugins/mpi/pmi2/Makefile src/plugins/task/Makefile src/plugins/task/affinity/Makefile src/plugins/task/cgroup/Makefile src/plugins/task/cray/Makefile src/plugins/task/none/Makefile src/plugins/topology/Makefile src/plugins/topology/3d_torus/Makefile src/plugins/topology/hypercube/Makefile src/plugins/topology/node_rank/Makefile src/plugins/topology/none/Makefile src/plugins/topology/tree/Makefile testsuite/Makefile testsuite/expect/Makefile testsuite/slurm_unit/Makefile testsuite/slurm_unit/api/Makefile testsuite/slurm_unit/api/manual/Makefile testsuite/slurm_unit/common/Makefile ] ) AC_OUTPUT slurm-slurm-15-08-7-1/contribs/ slurm-slurm-15-08-7-1/contribs/Makefile.am SUBDIRS = cray lua pam pam_slurm_adopt perlapi torque sgather sgi sjobexit slurmdb-direct pmi2 mic EXTRA_DIST = \ env_cache_builder.c \ make-3.81.slurm.patch \ make-4.0.slurm.patch \ mpich1.slurm.patch \ ptrace.patch \ sgather \ skilling.c \ sjstat \ spank_core.c \ time_login.c \ README slurm-slurm-15-08-7-1/contribs/Makefile.in # Makefile.in generated by automake 1.14.1 from Makefile.am. # @configure_input@ # Copyright (C) 1994-2013 Free Software Foundation, Inc. 
# This Makefile.in is free software; the Free Software Foundation # gives unlimited permission to copy and/or distribute it, # with or without modifications, as long as this notice is preserved. # This program is distributed in the hope that it will be useful, # but WITHOUT ANY WARRANTY, to the extent permitted by law; without # even the implied warranty of MERCHANTABILITY or FITNESS FOR A # PARTICULAR PURPOSE. @SET_MAKE@ VPATH = @srcdir@ am__is_gnu_make = test -n '$(MAKEFILE_LIST)' && test -n '$(MAKELEVEL)' am__make_running_with_option = \ case $${target_option-} in \ ?) ;; \ *) echo "am__make_running_with_option: internal error: invalid" \ "target option '$${target_option-}' specified" >&2; \ exit 1;; \ esac; \ has_opt=no; \ sane_makeflags=$$MAKEFLAGS; \ if $(am__is_gnu_make); then \ sane_makeflags=$$MFLAGS; \ else \ case $$MAKEFLAGS in \ *\\[\ \ ]*) \ bs=\\; \ sane_makeflags=`printf '%s\n' "$$MAKEFLAGS" \ | sed "s/$$bs$$bs[$$bs $$bs ]*//g"`;; \ esac; \ fi; \ skip_next=no; \ strip_trailopt () \ { \ flg=`printf '%s\n' "$$flg" | sed "s/$$1.*$$//"`; \ }; \ for flg in $$sane_makeflags; do \ test $$skip_next = yes && { skip_next=no; continue; }; \ case $$flg in \ *=*|--*) continue;; \ -*I) strip_trailopt 'I'; skip_next=yes;; \ -*I?*) strip_trailopt 'I';; \ -*O) strip_trailopt 'O'; skip_next=yes;; \ -*O?*) strip_trailopt 'O';; \ -*l) strip_trailopt 'l'; skip_next=yes;; \ -*l?*) strip_trailopt 'l';; \ -[dEDm]) skip_next=yes;; \ -[JT]) skip_next=yes;; \ esac; \ case $$flg in \ *$$target_option*) has_opt=yes; break;; \ esac; \ done; \ test $$has_opt = yes am__make_dryrun = (target_option=n; $(am__make_running_with_option)) am__make_keepgoing = (target_option=k; $(am__make_running_with_option)) pkgdatadir = $(datadir)/@PACKAGE@ pkgincludedir = $(includedir)/@PACKAGE@ pkglibdir = $(libdir)/@PACKAGE@ pkglibexecdir = $(libexecdir)/@PACKAGE@ am__cd = CDPATH="$${ZSH_VERSION+.}$(PATH_SEPARATOR)" && cd install_sh_DATA = $(install_sh) -c -m 644 install_sh_PROGRAM = $(install_sh) 
-c install_sh_SCRIPT = $(install_sh) -c INSTALL_HEADER = $(INSTALL_DATA) transform = $(program_transform_name) NORMAL_INSTALL = : PRE_INSTALL = : POST_INSTALL = : NORMAL_UNINSTALL = : PRE_UNINSTALL = : POST_UNINSTALL = : build_triplet = @build@ host_triplet = @host@ target_triplet = @target@ subdir = contribs DIST_COMMON = $(srcdir)/Makefile.in $(srcdir)/Makefile.am README ACLOCAL_M4 = $(top_srcdir)/aclocal.m4 am__aclocal_m4_deps = $(top_srcdir)/auxdir/ax_lib_hdf5.m4 \ $(top_srcdir)/auxdir/ax_pthread.m4 \ $(top_srcdir)/auxdir/libtool.m4 \ $(top_srcdir)/auxdir/ltoptions.m4 \ $(top_srcdir)/auxdir/ltsugar.m4 \ $(top_srcdir)/auxdir/ltversion.m4 \ $(top_srcdir)/auxdir/lt~obsolete.m4 \ $(top_srcdir)/auxdir/slurm.m4 \ $(top_srcdir)/auxdir/x_ac__system_configuration.m4 \ $(top_srcdir)/auxdir/x_ac_affinity.m4 \ $(top_srcdir)/auxdir/x_ac_aix.m4 \ $(top_srcdir)/auxdir/x_ac_blcr.m4 \ $(top_srcdir)/auxdir/x_ac_bluegene.m4 \ $(top_srcdir)/auxdir/x_ac_cflags.m4 \ $(top_srcdir)/auxdir/x_ac_cray.m4 \ $(top_srcdir)/auxdir/x_ac_curl.m4 \ $(top_srcdir)/auxdir/x_ac_databases.m4 \ $(top_srcdir)/auxdir/x_ac_debug.m4 \ $(top_srcdir)/auxdir/x_ac_dlfcn.m4 \ $(top_srcdir)/auxdir/x_ac_env.m4 \ $(top_srcdir)/auxdir/x_ac_freeipmi.m4 \ $(top_srcdir)/auxdir/x_ac_gpl_licensed.m4 \ $(top_srcdir)/auxdir/x_ac_hwloc.m4 \ $(top_srcdir)/auxdir/x_ac_iso.m4 \ $(top_srcdir)/auxdir/x_ac_json.m4 \ $(top_srcdir)/auxdir/x_ac_lua.m4 \ $(top_srcdir)/auxdir/x_ac_man2html.m4 \ $(top_srcdir)/auxdir/x_ac_munge.m4 \ $(top_srcdir)/auxdir/x_ac_ncurses.m4 \ $(top_srcdir)/auxdir/x_ac_netloc.m4 \ $(top_srcdir)/auxdir/x_ac_nrt.m4 \ $(top_srcdir)/auxdir/x_ac_ofed.m4 \ $(top_srcdir)/auxdir/x_ac_pam.m4 \ $(top_srcdir)/auxdir/x_ac_printf_null.m4 \ $(top_srcdir)/auxdir/x_ac_ptrace.m4 \ $(top_srcdir)/auxdir/x_ac_readline.m4 \ $(top_srcdir)/auxdir/x_ac_rrdtool.m4 \ $(top_srcdir)/auxdir/x_ac_setpgrp.m4 \ $(top_srcdir)/auxdir/x_ac_setproctitle.m4 \ $(top_srcdir)/auxdir/x_ac_sgi_job.m4 \ $(top_srcdir)/auxdir/x_ac_slurm_ssl.m4 \ 
$(top_srcdir)/auxdir/x_ac_sun_const.m4 \ $(top_srcdir)/configure.ac am__configure_deps = $(am__aclocal_m4_deps) $(CONFIGURE_DEPENDENCIES) \ $(ACLOCAL_M4) mkinstalldirs = $(install_sh) -d CONFIG_HEADER = $(top_builddir)/config.h $(top_builddir)/slurm/slurm.h CONFIG_CLEAN_FILES = CONFIG_CLEAN_VPATH_FILES = AM_V_P = $(am__v_P_@AM_V@) am__v_P_ = $(am__v_P_@AM_DEFAULT_V@) am__v_P_0 = false am__v_P_1 = : AM_V_GEN = $(am__v_GEN_@AM_V@) am__v_GEN_ = $(am__v_GEN_@AM_DEFAULT_V@) am__v_GEN_0 = @echo " GEN " $@; am__v_GEN_1 = AM_V_at = $(am__v_at_@AM_V@) am__v_at_ = $(am__v_at_@AM_DEFAULT_V@) am__v_at_0 = @ am__v_at_1 = SOURCES = DIST_SOURCES = RECURSIVE_TARGETS = all-recursive check-recursive cscopelist-recursive \ ctags-recursive dvi-recursive html-recursive info-recursive \ install-data-recursive install-dvi-recursive \ install-exec-recursive install-html-recursive \ install-info-recursive install-pdf-recursive \ install-ps-recursive install-recursive installcheck-recursive \ installdirs-recursive pdf-recursive ps-recursive \ tags-recursive uninstall-recursive am__can_run_installinfo = \ case $$AM_UPDATE_INFO_DIR in \ n|no|NO) false;; \ *) (install-info --version) >/dev/null 2>&1;; \ esac RECURSIVE_CLEAN_TARGETS = mostlyclean-recursive clean-recursive \ distclean-recursive maintainer-clean-recursive am__recursive_targets = \ $(RECURSIVE_TARGETS) \ $(RECURSIVE_CLEAN_TARGETS) \ $(am__extra_recursive_targets) AM_RECURSIVE_TARGETS = $(am__recursive_targets:-recursive=) TAGS CTAGS \ distdir am__tagged_files = $(HEADERS) $(SOURCES) $(TAGS_FILES) $(LISP) # Read a list of newline-separated strings from the standard input, # and print each of them once, without duplicates. Input order is # *not* preserved. am__uniquify_input = $(AWK) '\ BEGIN { nonempty = 0; } \ { items[$$0] = 1; nonempty = 1; } \ END { if (nonempty) { for (i in items) print i; }; } \ ' # Make sure the list of sources is unique. 
This is necessary because, # e.g., the same source file might be shared among _SOURCES variables # for different programs/libraries. am__define_uniq_tagged_files = \ list='$(am__tagged_files)'; \ unique=`for i in $$list; do \ if test -f "$$i"; then echo $$i; else echo $(srcdir)/$$i; fi; \ done | $(am__uniquify_input)` ETAGS = etags CTAGS = ctags DIST_SUBDIRS = $(SUBDIRS) DISTFILES = $(DIST_COMMON) $(DIST_SOURCES) $(TEXINFOS) $(EXTRA_DIST) am__relativize = \ dir0=`pwd`; \ sed_first='s,^\([^/]*\)/.*$$,\1,'; \ sed_rest='s,^[^/]*/*,,'; \ sed_last='s,^.*/\([^/]*\)$$,\1,'; \ sed_butlast='s,/*[^/]*$$,,'; \ while test -n "$$dir1"; do \ first=`echo "$$dir1" | sed -e "$$sed_first"`; \ if test "$$first" != "."; then \ if test "$$first" = ".."; then \ dir2=`echo "$$dir0" | sed -e "$$sed_last"`/"$$dir2"; \ dir0=`echo "$$dir0" | sed -e "$$sed_butlast"`; \ else \ first2=`echo "$$dir2" | sed -e "$$sed_first"`; \ if test "$$first2" = "$$first"; then \ dir2=`echo "$$dir2" | sed -e "$$sed_rest"`; \ else \ dir2="../$$dir2"; \ fi; \ dir0="$$dir0"/"$$first"; \ fi; \ fi; \ dir1=`echo "$$dir1" | sed -e "$$sed_rest"`; \ done; \ reldir="$$dir2" ACLOCAL = @ACLOCAL@ AMTAR = @AMTAR@ AM_DEFAULT_VERBOSITY = @AM_DEFAULT_VERBOSITY@ AR = @AR@ AUTHD_CFLAGS = @AUTHD_CFLAGS@ AUTHD_LIBS = @AUTHD_LIBS@ AUTOCONF = @AUTOCONF@ AUTOHEADER = @AUTOHEADER@ AUTOMAKE = @AUTOMAKE@ AWK = @AWK@ BGL_LOADED = @BGL_LOADED@ BGQ_LOADED = @BGQ_LOADED@ BG_INCLUDES = @BG_INCLUDES@ BG_LDFLAGS = @BG_LDFLAGS@ BG_L_P_LOADED = @BG_L_P_LOADED@ BLCR_CPPFLAGS = @BLCR_CPPFLAGS@ BLCR_HOME = @BLCR_HOME@ BLCR_LDFLAGS = @BLCR_LDFLAGS@ BLCR_LIBS = @BLCR_LIBS@ BLUEGENE_LOADED = @BLUEGENE_LOADED@ CC = @CC@ CCDEPMODE = @CCDEPMODE@ CFLAGS = @CFLAGS@ CHECK_CFLAGS = @CHECK_CFLAGS@ CHECK_LIBS = @CHECK_LIBS@ CMD_LDFLAGS = @CMD_LDFLAGS@ CPP = @CPP@ CPPFLAGS = @CPPFLAGS@ CRAY_JOB_CPPFLAGS = @CRAY_JOB_CPPFLAGS@ CRAY_JOB_LDFLAGS = @CRAY_JOB_LDFLAGS@ CRAY_SELECT_CPPFLAGS = @CRAY_SELECT_CPPFLAGS@ CRAY_SELECT_LDFLAGS = @CRAY_SELECT_LDFLAGS@ 
CRAY_SWITCH_CPPFLAGS = @CRAY_SWITCH_CPPFLAGS@ CRAY_SWITCH_LDFLAGS = @CRAY_SWITCH_LDFLAGS@ CRAY_TASK_CPPFLAGS = @CRAY_TASK_CPPFLAGS@ CRAY_TASK_LDFLAGS = @CRAY_TASK_LDFLAGS@ CXX = @CXX@ CXXCPP = @CXXCPP@ CXXDEPMODE = @CXXDEPMODE@ CXXFLAGS = @CXXFLAGS@ CYGPATH_W = @CYGPATH_W@ DATAWARP_CPPFLAGS = @DATAWARP_CPPFLAGS@ DATAWARP_LDFLAGS = @DATAWARP_LDFLAGS@ DEFS = @DEFS@ DEPDIR = @DEPDIR@ DLLTOOL = @DLLTOOL@ DL_LIBS = @DL_LIBS@ DSYMUTIL = @DSYMUTIL@ DUMPBIN = @DUMPBIN@ ECHO_C = @ECHO_C@ ECHO_N = @ECHO_N@ ECHO_T = @ECHO_T@ EGREP = @EGREP@ EXEEXT = @EXEEXT@ FGREP = @FGREP@ FREEIPMI_CPPFLAGS = @FREEIPMI_CPPFLAGS@ FREEIPMI_LDFLAGS = @FREEIPMI_LDFLAGS@ FREEIPMI_LIBS = @FREEIPMI_LIBS@ GLIB_CFLAGS = @GLIB_CFLAGS@ GLIB_COMPILE_RESOURCES = @GLIB_COMPILE_RESOURCES@ GLIB_GENMARSHAL = @GLIB_GENMARSHAL@ GLIB_LIBS = @GLIB_LIBS@ GLIB_MKENUMS = @GLIB_MKENUMS@ GOBJECT_QUERY = @GOBJECT_QUERY@ GREP = @GREP@ GTK_CFLAGS = @GTK_CFLAGS@ GTK_LIBS = @GTK_LIBS@ H5CC = @H5CC@ H5FC = @H5FC@ HAVEMYSQLCONFIG = @HAVEMYSQLCONFIG@ HAVE_AIX = @HAVE_AIX@ HAVE_MAN2HTML = @HAVE_MAN2HTML@ HAVE_NRT = @HAVE_NRT@ HAVE_OPENSSL = @HAVE_OPENSSL@ HAVE_SOME_CURSES = @HAVE_SOME_CURSES@ HDF5_CC = @HDF5_CC@ HDF5_CFLAGS = @HDF5_CFLAGS@ HDF5_CPPFLAGS = @HDF5_CPPFLAGS@ HDF5_FC = @HDF5_FC@ HDF5_FFLAGS = @HDF5_FFLAGS@ HDF5_FLIBS = @HDF5_FLIBS@ HDF5_LDFLAGS = @HDF5_LDFLAGS@ HDF5_LIBS = @HDF5_LIBS@ HDF5_VERSION = @HDF5_VERSION@ HWLOC_CPPFLAGS = @HWLOC_CPPFLAGS@ HWLOC_LDFLAGS = @HWLOC_LDFLAGS@ HWLOC_LIBS = @HWLOC_LIBS@ INSTALL = @INSTALL@ INSTALL_DATA = @INSTALL_DATA@ INSTALL_PROGRAM = @INSTALL_PROGRAM@ INSTALL_SCRIPT = @INSTALL_SCRIPT@ INSTALL_STRIP_PROGRAM = @INSTALL_STRIP_PROGRAM@ JSON_CPPFLAGS = @JSON_CPPFLAGS@ JSON_LDFLAGS = @JSON_LDFLAGS@ LD = @LD@ LDFLAGS = @LDFLAGS@ LIBCURL = @LIBCURL@ LIBCURL_CPPFLAGS = @LIBCURL_CPPFLAGS@ LIBOBJS = @LIBOBJS@ LIBS = @LIBS@ LIBTOOL = @LIBTOOL@ LIB_LDFLAGS = @LIB_LDFLAGS@ LIPO = @LIPO@ LN_S = @LN_S@ LTLIBOBJS = @LTLIBOBJS@ MAINT = @MAINT@ MAKEINFO = @MAKEINFO@ MANIFEST_TOOL = 
@MANIFEST_TOOL@ MKDIR_P = @MKDIR_P@ MUNGE_CPPFLAGS = @MUNGE_CPPFLAGS@ MUNGE_DIR = @MUNGE_DIR@ MUNGE_LDFLAGS = @MUNGE_LDFLAGS@ MUNGE_LIBS = @MUNGE_LIBS@ MYSQL_CFLAGS = @MYSQL_CFLAGS@ MYSQL_LIBS = @MYSQL_LIBS@ NCURSES = @NCURSES@ NETLOC_CPPFLAGS = @NETLOC_CPPFLAGS@ NETLOC_LDFLAGS = @NETLOC_LDFLAGS@ NETLOC_LIBS = @NETLOC_LIBS@ NM = @NM@ NMEDIT = @NMEDIT@ NRT_CPPFLAGS = @NRT_CPPFLAGS@ NUMA_LIBS = @NUMA_LIBS@ OBJDUMP = @OBJDUMP@ OBJEXT = @OBJEXT@ OFED_CPPFLAGS = @OFED_CPPFLAGS@ OFED_LDFLAGS = @OFED_LDFLAGS@ OFED_LIBS = @OFED_LIBS@ OTOOL = @OTOOL@ OTOOL64 = @OTOOL64@ PACKAGE = @PACKAGE@ PACKAGE_BUGREPORT = @PACKAGE_BUGREPORT@ PACKAGE_NAME = @PACKAGE_NAME@ PACKAGE_STRING = @PACKAGE_STRING@ PACKAGE_TARNAME = @PACKAGE_TARNAME@ PACKAGE_URL = @PACKAGE_URL@ PACKAGE_VERSION = @PACKAGE_VERSION@ PAM_DIR = @PAM_DIR@ PAM_LIBS = @PAM_LIBS@ PATH_SEPARATOR = @PATH_SEPARATOR@ PKG_CONFIG = @PKG_CONFIG@ PKG_CONFIG_LIBDIR = @PKG_CONFIG_LIBDIR@ PKG_CONFIG_PATH = @PKG_CONFIG_PATH@ PROCTRACKDIR = @PROCTRACKDIR@ PROJECT = @PROJECT@ PTHREAD_CC = @PTHREAD_CC@ PTHREAD_CFLAGS = @PTHREAD_CFLAGS@ PTHREAD_LIBS = @PTHREAD_LIBS@ RANLIB = @RANLIB@ READLINE_LIBS = @READLINE_LIBS@ REAL_BGQ_LOADED = @REAL_BGQ_LOADED@ REAL_BG_L_P_LOADED = @REAL_BG_L_P_LOADED@ RELEASE = @RELEASE@ RRDTOOL_CPPFLAGS = @RRDTOOL_CPPFLAGS@ RRDTOOL_LDFLAGS = @RRDTOOL_LDFLAGS@ RRDTOOL_LIBS = @RRDTOOL_LIBS@ RUNJOB_LDFLAGS = @RUNJOB_LDFLAGS@ SED = @SED@ SEMAPHORE_LIBS = @SEMAPHORE_LIBS@ SEMAPHORE_SOURCES = @SEMAPHORE_SOURCES@ SET_MAKE = @SET_MAKE@ SHELL = @SHELL@ SLEEP_CMD = @SLEEP_CMD@ SLURMCTLD_PORT = @SLURMCTLD_PORT@ SLURMCTLD_PORT_COUNT = @SLURMCTLD_PORT_COUNT@ SLURMDBD_PORT = @SLURMDBD_PORT@ SLURMD_PORT = @SLURMD_PORT@ SLURM_API_AGE = @SLURM_API_AGE@ SLURM_API_CURRENT = @SLURM_API_CURRENT@ SLURM_API_MAJOR = @SLURM_API_MAJOR@ SLURM_API_REVISION = @SLURM_API_REVISION@ SLURM_API_VERSION = @SLURM_API_VERSION@ SLURM_MAJOR = @SLURM_MAJOR@ SLURM_MICRO = @SLURM_MICRO@ SLURM_MINOR = @SLURM_MINOR@ SLURM_PREFIX = @SLURM_PREFIX@ 
SLURM_VERSION_NUMBER = @SLURM_VERSION_NUMBER@ SLURM_VERSION_STRING = @SLURM_VERSION_STRING@ SO_LDFLAGS = @SO_LDFLAGS@ SSL_CPPFLAGS = @SSL_CPPFLAGS@ SSL_LDFLAGS = @SSL_LDFLAGS@ SSL_LIBS = @SSL_LIBS@ STRIP = @STRIP@ SUCMD = @SUCMD@ UTIL_LIBS = @UTIL_LIBS@ VERSION = @VERSION@ _libcurl_config = @_libcurl_config@ abs_builddir = @abs_builddir@ abs_srcdir = @abs_srcdir@ abs_top_builddir = @abs_top_builddir@ abs_top_srcdir = @abs_top_srcdir@ ac_ct_AR = @ac_ct_AR@ ac_ct_CC = @ac_ct_CC@ ac_ct_CXX = @ac_ct_CXX@ ac_ct_DUMPBIN = @ac_ct_DUMPBIN@ ac_have_man2html = @ac_have_man2html@ am__include = @am__include@ am__leading_dot = @am__leading_dot@ am__quote = @am__quote@ am__tar = @am__tar@ am__untar = @am__untar@ ax_pthread_config = @ax_pthread_config@ bindir = @bindir@ build = @build@ build_alias = @build_alias@ build_cpu = @build_cpu@ build_os = @build_os@ build_vendor = @build_vendor@ builddir = @builddir@ datadir = @datadir@ datarootdir = @datarootdir@ docdir = @docdir@ dvidir = @dvidir@ exec_prefix = @exec_prefix@ host = @host@ host_alias = @host_alias@ host_cpu = @host_cpu@ host_os = @host_os@ host_vendor = @host_vendor@ htmldir = @htmldir@ includedir = @includedir@ infodir = @infodir@ install_sh = @install_sh@ libdir = @libdir@ libexecdir = @libexecdir@ localedir = @localedir@ localstatedir = @localstatedir@ lua_CFLAGS = @lua_CFLAGS@ lua_LIBS = @lua_LIBS@ mandir = @mandir@ mkdir_p = @mkdir_p@ oldincludedir = @oldincludedir@ pdfdir = @pdfdir@ prefix = @prefix@ program_transform_name = @program_transform_name@ psdir = @psdir@ sbindir = @sbindir@ sharedstatedir = @sharedstatedir@ srcdir = @srcdir@ sysconfdir = @sysconfdir@ target = @target@ target_alias = @target_alias@ target_cpu = @target_cpu@ target_os = @target_os@ target_vendor = @target_vendor@ top_build_prefix = @top_build_prefix@ top_builddir = @top_builddir@ top_srcdir = @top_srcdir@ SUBDIRS = cray lua pam pam_slurm_adopt perlapi torque sgather sgi sjobexit slurmdb-direct pmi2 mic EXTRA_DIST = \ env_cache_builder.c \ 
make-3.81.slurm.patch \ make-4.0.slurm.patch \ mpich1.slurm.patch \ ptrace.patch \ sgather \ skilling.c \ sjstat \ spank_core.c \ time_login.c \ README all: all-recursive .SUFFIXES: $(srcdir)/Makefile.in: @MAINTAINER_MODE_TRUE@ $(srcdir)/Makefile.am $(am__configure_deps) @for dep in $?; do \ case '$(am__configure_deps)' in \ *$$dep*) \ ( cd $(top_builddir) && $(MAKE) $(AM_MAKEFLAGS) am--refresh ) \ && { if test -f $@; then exit 0; else break; fi; }; \ exit 1;; \ esac; \ done; \ echo ' cd $(top_srcdir) && $(AUTOMAKE) --gnu contribs/Makefile'; \ $(am__cd) $(top_srcdir) && \ $(AUTOMAKE) --gnu contribs/Makefile .PRECIOUS: Makefile Makefile: $(srcdir)/Makefile.in $(top_builddir)/config.status @case '$?' in \ *config.status*) \ cd $(top_builddir) && $(MAKE) $(AM_MAKEFLAGS) am--refresh;; \ *) \ echo ' cd $(top_builddir) && $(SHELL) ./config.status $(subdir)/$@ $(am__depfiles_maybe)'; \ cd $(top_builddir) && $(SHELL) ./config.status $(subdir)/$@ $(am__depfiles_maybe);; \ esac; $(top_builddir)/config.status: $(top_srcdir)/configure $(CONFIG_STATUS_DEPENDENCIES) cd $(top_builddir) && $(MAKE) $(AM_MAKEFLAGS) am--refresh $(top_srcdir)/configure: @MAINTAINER_MODE_TRUE@ $(am__configure_deps) cd $(top_builddir) && $(MAKE) $(AM_MAKEFLAGS) am--refresh $(ACLOCAL_M4): @MAINTAINER_MODE_TRUE@ $(am__aclocal_m4_deps) cd $(top_builddir) && $(MAKE) $(AM_MAKEFLAGS) am--refresh $(am__aclocal_m4_deps): mostlyclean-libtool: -rm -f *.lo clean-libtool: -rm -rf .libs _libs # This directory's subdirectories are mostly independent; you can cd # into them and run 'make' without going through this Makefile. # To change the values of 'make' variables: instead of editing Makefiles, # (1) if the variable is set in 'config.status', edit 'config.status' # (which will cause the Makefiles to be regenerated when you run 'make'); # (2) otherwise, pass the desired values on the 'make' command line. 
$(am__recursive_targets): @fail=; \ if $(am__make_keepgoing); then \ failcom='fail=yes'; \ else \ failcom='exit 1'; \ fi; \ dot_seen=no; \ target=`echo $@ | sed s/-recursive//`; \ case "$@" in \ distclean-* | maintainer-clean-*) list='$(DIST_SUBDIRS)' ;; \ *) list='$(SUBDIRS)' ;; \ esac; \ for subdir in $$list; do \ echo "Making $$target in $$subdir"; \ if test "$$subdir" = "."; then \ dot_seen=yes; \ local_target="$$target-am"; \ else \ local_target="$$target"; \ fi; \ ($(am__cd) $$subdir && $(MAKE) $(AM_MAKEFLAGS) $$local_target) \ || eval $$failcom; \ done; \ if test "$$dot_seen" = "no"; then \ $(MAKE) $(AM_MAKEFLAGS) "$$target-am" || exit 1; \ fi; test -z "$$fail" ID: $(am__tagged_files) $(am__define_uniq_tagged_files); mkid -fID $$unique tags: tags-recursive TAGS: tags tags-am: $(TAGS_DEPENDENCIES) $(am__tagged_files) set x; \ here=`pwd`; \ if ($(ETAGS) --etags-include --version) >/dev/null 2>&1; then \ include_option=--etags-include; \ empty_fix=.; \ else \ include_option=--include; \ empty_fix=; \ fi; \ list='$(SUBDIRS)'; for subdir in $$list; do \ if test "$$subdir" = .; then :; else \ test ! 
-f $$subdir/TAGS || \ set "$$@" "$$include_option=$$here/$$subdir/TAGS"; \ fi; \ done; \ $(am__define_uniq_tagged_files); \ shift; \ if test -z "$(ETAGS_ARGS)$$*$$unique"; then :; else \ test -n "$$unique" || unique=$$empty_fix; \ if test $$# -gt 0; then \ $(ETAGS) $(ETAGSFLAGS) $(AM_ETAGSFLAGS) $(ETAGS_ARGS) \ "$$@" $$unique; \ else \ $(ETAGS) $(ETAGSFLAGS) $(AM_ETAGSFLAGS) $(ETAGS_ARGS) \ $$unique; \ fi; \ fi ctags: ctags-recursive CTAGS: ctags ctags-am: $(TAGS_DEPENDENCIES) $(am__tagged_files) $(am__define_uniq_tagged_files); \ test -z "$(CTAGS_ARGS)$$unique" \ || $(CTAGS) $(CTAGSFLAGS) $(AM_CTAGSFLAGS) $(CTAGS_ARGS) \ $$unique GTAGS: here=`$(am__cd) $(top_builddir) && pwd` \ && $(am__cd) $(top_srcdir) \ && gtags -i $(GTAGS_ARGS) "$$here" cscopelist: cscopelist-recursive cscopelist-am: $(am__tagged_files) list='$(am__tagged_files)'; \ case "$(srcdir)" in \ [\\/]* | ?:[\\/]*) sdir="$(srcdir)" ;; \ *) sdir=$(subdir)/$(srcdir) ;; \ esac; \ for i in $$list; do \ if test -f "$$i"; then \ echo "$(subdir)/$$i"; \ else \ echo "$$sdir/$$i"; \ fi; \ done >> $(top_builddir)/cscope.files distclean-tags: -rm -f TAGS ID GTAGS GRTAGS GSYMS GPATH tags distdir: $(DISTFILES) @srcdirstrip=`echo "$(srcdir)" | sed 's/[].[^$$\\*]/\\\\&/g'`; \ topsrcdirstrip=`echo "$(top_srcdir)" | sed 's/[].[^$$\\*]/\\\\&/g'`; \ list='$(DISTFILES)'; \ dist_files=`for file in $$list; do echo $$file; done | \ sed -e "s|^$$srcdirstrip/||;t" \ -e "s|^$$topsrcdirstrip/|$(top_builddir)/|;t"`; \ case $$dist_files in \ */*) $(MKDIR_P) `echo "$$dist_files" | \ sed '/\//!d;s|^|$(distdir)/|;s,/[^/]*$$,,' | \ sort -u` ;; \ esac; \ for file in $$dist_files; do \ if test -f $$file || test -d $$file; then d=.; else d=$(srcdir); fi; \ if test -d $$d/$$file; then \ dir=`echo "/$$file" | sed -e 's,/[^/]*$$,,'`; \ if test -d "$(distdir)/$$file"; then \ find "$(distdir)/$$file" -type d ! 
-perm -700 -exec chmod u+rwx {} \;; \ fi; \ if test -d $(srcdir)/$$file && test $$d != $(srcdir); then \ cp -fpR $(srcdir)/$$file "$(distdir)$$dir" || exit 1; \ find "$(distdir)/$$file" -type d ! -perm -700 -exec chmod u+rwx {} \;; \ fi; \ cp -fpR $$d/$$file "$(distdir)$$dir" || exit 1; \ else \ test -f "$(distdir)/$$file" \ || cp -p $$d/$$file "$(distdir)/$$file" \ || exit 1; \ fi; \ done @list='$(DIST_SUBDIRS)'; for subdir in $$list; do \ if test "$$subdir" = .; then :; else \ $(am__make_dryrun) \ || test -d "$(distdir)/$$subdir" \ || $(MKDIR_P) "$(distdir)/$$subdir" \ || exit 1; \ dir1=$$subdir; dir2="$(distdir)/$$subdir"; \ $(am__relativize); \ new_distdir=$$reldir; \ dir1=$$subdir; dir2="$(top_distdir)"; \ $(am__relativize); \ new_top_distdir=$$reldir; \ echo " (cd $$subdir && $(MAKE) $(AM_MAKEFLAGS) top_distdir="$$new_top_distdir" distdir="$$new_distdir" \\"; \ echo " am__remove_distdir=: am__skip_length_check=: am__skip_mode_fix=: distdir)"; \ ($(am__cd) $$subdir && \ $(MAKE) $(AM_MAKEFLAGS) \ top_distdir="$$new_top_distdir" \ distdir="$$new_distdir" \ am__remove_distdir=: \ am__skip_length_check=: \ am__skip_mode_fix=: \ distdir) \ || exit 1; \ fi; \ done check-am: all-am check: check-recursive all-am: Makefile installdirs: installdirs-recursive installdirs-am: install: install-recursive install-exec: install-exec-recursive install-data: install-data-recursive uninstall: uninstall-recursive install-am: all-am @$(MAKE) $(AM_MAKEFLAGS) install-exec-am install-data-am installcheck: installcheck-recursive install-strip: if test -z '$(STRIP)'; then \ $(MAKE) $(AM_MAKEFLAGS) INSTALL_PROGRAM="$(INSTALL_STRIP_PROGRAM)" \ install_sh_PROGRAM="$(INSTALL_STRIP_PROGRAM)" INSTALL_STRIP_FLAG=-s \ install; \ else \ $(MAKE) $(AM_MAKEFLAGS) INSTALL_PROGRAM="$(INSTALL_STRIP_PROGRAM)" \ install_sh_PROGRAM="$(INSTALL_STRIP_PROGRAM)" INSTALL_STRIP_FLAG=-s \ "INSTALL_PROGRAM_ENV=STRIPPROG='$(STRIP)'" install; \ fi mostlyclean-generic: clean-generic: distclean-generic: -test -z 
"$(CONFIG_CLEAN_FILES)" || rm -f $(CONFIG_CLEAN_FILES) -test . = "$(srcdir)" || test -z "$(CONFIG_CLEAN_VPATH_FILES)" || rm -f $(CONFIG_CLEAN_VPATH_FILES) maintainer-clean-generic: @echo "This command is intended for maintainers to use" @echo "it deletes files that may require special tools to rebuild." clean: clean-recursive clean-am: clean-generic clean-libtool mostlyclean-am distclean: distclean-recursive -rm -f Makefile distclean-am: clean-am distclean-generic distclean-tags dvi: dvi-recursive dvi-am: html: html-recursive html-am: info: info-recursive info-am: install-data-am: install-dvi: install-dvi-recursive install-dvi-am: install-exec-am: install-html: install-html-recursive install-html-am: install-info: install-info-recursive install-info-am: install-man: install-pdf: install-pdf-recursive install-pdf-am: install-ps: install-ps-recursive install-ps-am: installcheck-am: maintainer-clean: maintainer-clean-recursive -rm -f Makefile maintainer-clean-am: distclean-am maintainer-clean-generic mostlyclean: mostlyclean-recursive mostlyclean-am: mostlyclean-generic mostlyclean-libtool pdf: pdf-recursive pdf-am: ps: ps-recursive ps-am: uninstall-am: .MAKE: $(am__recursive_targets) install-am install-strip .PHONY: $(am__recursive_targets) CTAGS GTAGS TAGS all all-am check \ check-am clean clean-generic clean-libtool cscopelist-am ctags \ ctags-am distclean distclean-generic distclean-libtool \ distclean-tags distdir dvi dvi-am html html-am info info-am \ install install-am install-data install-data-am install-dvi \ install-dvi-am install-exec install-exec-am install-html \ install-html-am install-info install-info-am install-man \ install-pdf install-pdf-am install-ps install-ps-am \ install-strip installcheck installcheck-am installdirs \ installdirs-am maintainer-clean maintainer-clean-generic \ mostlyclean mostlyclean-generic mostlyclean-libtool pdf pdf-am \ ps ps-am tags tags-am uninstall uninstall-am # Tell versions [3.59,3.63) of GNU make to not export all 
variables. # Otherwise a system limit (for SysV at least) may be exceeded.
.NOEXPORT:

slurm-slurm-15-08-7-1/contribs/README

This is the contribs dir for Slurm.

SOURCE DISTRIBUTION HIERARCHY
-----------------------------

Subdirectories contain the source code for the various contributions to
Slurm as well as their documentation. A quick description of the
subdirectories of the Slurm contribs distribution follows:

 cray            [Tools for use on Cray systems]
   etc_sysconfig_slurm - /etc/sysconfig/slurm for Cray XT/XE systems
   libalps_test_programs.tar.gz - set of tools to verify ALPS/BASIL
      support logic. Note that this currently requires:
      * hardcoding in libsdb/basil_mysql_routines.c:
        mysql_real_connect(handle, "localhost", NULL, NULL, "XT5istanbul"
      * suitable /etc/my.cnf, containing at least the lines
        [client]
        user=basic
        password=basic
      * setting the APBASIL in the libalps/Makefile, e.g.
        APBASIL := slurm/alps_simulator/apbasil.sh
      To use, extract the files then:
      > cd libasil/
      > make -C alps_tests all  # runs basil parser tests
      > make -C sdb_tests all   # checks if database routines work
      A tool named tuxadmin is also included. When executed with the -s or
      --slurm.conf option, this contacts the SDB to generate
      system-specific information needed in slurm.conf (e.g.
      "NodeName=nid..." and "PartitionName= Nodes=nid... MaxNodes=...").
   opt_modulefiles_slurm - enables use of Munge as soon as built
   pam_job.c - Less verbose version of the default Cray job service.

 env_cache_builder.c [C program]
   This program will build an environment variable cache file for specific
   users or all users on the system. This can be used to prevent the
   aborting of jobs submitted by Moab using the srun/sbatch --get-user-env
   option. Build with "make -f /dev/null env_cache_builder" and execute as
   user root on the node where the moab daemon runs.

 lua             [ LUA scripts ]
   Example LUA scripts that can serve as Slurm plugins.
   job_submit.lua - job_submit plugin that can set a job's default
      partition using a very simple algorithm
   job_submit_license.lua - job_submit plugin that can set a job's use of
      system licenses
   proctrack.lua - proctrack (process tracking) plugin that implements a
      very simple job step container using CPUSETs

 make-3.81.slurm.patch [ Patch to "make" command for parallel build ]
 make-4.0.slurm.patch  [ Patch to "make" command for parallel build ]
   This patch will use Slurm to launch tasks across a job's current
   resource allocation. Depending upon the size of modules to be compiled,
   this may or may not improve performance. If most modules are thousands
   of lines long, the use of additional resources should more than
   compensate for the overhead of Slurm's task launch. Use with make's
   "-j" option within an existing Slurm allocation. Outside of a Slurm
   allocation, make's behavior will be unchanged. Designed for GNU
   make-3.81 or make-4.0.

 mic             [Tools for use on Intel MIC processors]

 mpich1.slurm.patch [ Patch to mpich1/p4 library for Slurm job task launch ]
   For Slurm-based job initiations (from the srun command), get the
   parameters from environment variables as needed. This allows for a
   truly parallel job launch using the existing "execer" mode of operation
   with slight modification.

 pam             [ PAM (Pluggable Authentication Module) for Slurm ]
   This PAM module will restrict who can log in to a node to users who
   have been allocated resources on the node and user root.

 pam_slurm_adopt [ Plugin for PAM to place incoming connections into an
                   existing Slurm job container ]
   This Slurm plugin provides a mechanism for new incoming connections to
   be placed into existing Slurm job containers so that they can be
   accounted for and killed at job termination. See the README file in the
   subdirectory for more details.

 perlapi/        [ Perl API to Slurm source ]
   API to Slurm using perl, making available all Slurm commands that exist
   in the Slurm proper API.

 phpext          [ PHP API to Slurm source ]
   API to Slurm using php.
   Not a complete API, but offers quite a few interfaces to existing Slurm
   proper APIs.

 pmi2            [ PMI2 client library ]
   User applications can link with this library to use Slurm's mpi/pmi2
   plugin.

 ptrace.patch    [ Linux Kernel patch required for TotalView use ]
   0. This has been fixed on most recent Linux kernels. Older versions of
      Linux may need this patch to support TotalView.
   1. gdb and other tools cannot attach to a stopped process. The wait
      that follows the PTRACE_ATTACH will block indefinitely.
   2. It is not possible to use PTRACE_DETACH to leave a process stopped,
      because ptrace ignores SIGSTOPs sent by the tracing process.

 sgather         [ shell script ]
   Gather remote files from a job into a central location. Reverse of the
   sbcast command.

 sgi/            [Tools for use on SGI systems]
   netloc_to_topology.c [ C program ]
      Used to construct a Slurm topology.conf file based upon SGI network
      APIs.
   README.txt [Documentation]

 sjobexit/       [ Perl programs ]
   Tools for managing job exit code records

 sjstat          [ Perl program ]
   Lists attributes of jobs under Slurm control

 skilling.c      [ C program ]
   This program can be used to order the hostnames in a 2+ dimensional
   architecture for use in the slurm.conf file. It is used to generate the
   Hilbert number based upon a node's physical location in the computer.
   Nodes close together in their Hilbert number will also be physically
   close in 2-D or 3-D space, so we can reduce the 2-D or 3-D job
   placement problem to a 1-D problem that Slurm can easily handle by
   defining the node names in the slurm.conf file in order of their
   Hilbert number. If the computer is not a perfect square or cube with
   power of two size, then collapse the node list maintaining the numeric
   order based upon the Hilbert number.

 slurm_completion_help [shell script, vim file]
   Scripts to help in option completion when using slurm commands.

 slurmdb-direct  [ Perl program ]
   Program that permits writing directly to SlurmDBD (Slurm DataBase
   Daemon).
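The Hilbert ordering that skilling.c described above relies on can be sketched briefly. The snippet below is a minimal 2-D distance-to-coordinate conversion in Python, a simplified sketch rather than the multi-dimensional algorithm the C program implements; the helper name hilbert_d2xy is invented for illustration. Cells whose distances along the curve are close are also physically close on the grid, which is why listing node names in Hilbert order reduces placement to a 1-D problem.

```python
def hilbert_d2xy(n, d):
    """Map distance d along a Hilbert curve to (x, y) on an n x n grid.

    n must be a power of two. Illustrative helper only; skilling.c
    implements the general multi-dimensional ordering in C.
    """
    x = y = 0
    t = d
    s = 1
    while s < n:
        rx = 1 & (t // 2)
        ry = 1 & (t ^ rx)
        if ry == 0:              # rotate the quadrant when needed
            if rx == 1:
                x = s - 1 - x
                y = s - 1 - y
            x, y = y, x          # swap x and y
        x += s * rx
        y += s * ry
        t //= 4
        s *= 2
    return x, y

# Visit the 16 cells of a 4x4 grid in Hilbert order; consecutive
# cells are always physically adjacent on the grid.
order = [hilbert_d2xy(4, d) for d in range(16)]
```

Listing hostnames in `order` sequence means any contiguous run of entries occupies a compact region of the machine, which is the property Slurm exploits when packing jobs.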
 spank_core.c    [ SPANK plugin, C program ]
   A Slurm SPANK plugin that can be used to permit users to generate
   light-weight core files rather than full core files.

 time_login.c    [ C program ]
   This program will report how long a pseudo-login will take for specific
   users or all users on the system. Users identified by this program will
   not have their environment properly set for jobs submitted through
   Moab. Build with "make -f /dev/null time_login" and execute as user
   root.

 torque/         [ Wrapper Scripts for Torque migration to Slurm ]
   Helpful scripts to make the transition from PBS or Torque to Slurm
   easier. These scripts are easily updatable if there is functionality
   missing.
   NOTE: For the showq command, see https://github.com/pedmon/slurm_showq

slurm-slurm-15-08-7-1/contribs/cray/

slurm-slurm-15-08-7-1/contribs/cray/Makefile.am

#
# Makefile for cray scripts
#

SUBDIRS = csm

AUTOMAKE_OPTIONS = foreign

EXTRA_DIST = \
	etc_sysconfig_slurm \
	libalps_test_programs.tar.gz \
	opt_modulefiles_slurm.in \
	pam_job.c \
	plugstack.conf.template \
	slurm.conf.template \
	slurmconfgen.py.in

if HAVE_NATIVE_CRAY
sbin_SCRIPTS = slurmconfgen.py
endif

if HAVE_REAL_CRAY
noinst_DATA = opt_modulefiles_slurm
endif

# Don't rely on autoconf to replace variables outside of makefiles
opt_modulefiles_slurm: opt_modulefiles_slurm.in Makefile
	sed -e 's|@prefix[@]|$(prefix)|g' \
	    -e 's|@MUNGE_DIR[@]|$(MUNGE_DIR)|g' \
	    -e 's|@libdir[@]|$(libdir)|g' \
	    ${abs_srcdir}/opt_modulefiles_slurm.in >opt_modulefiles_slurm

slurmconfgen.py: slurmconfgen.py.in Makefile
	sed -e 's|@sysconfdir[@]|$(sysconfdir)|g' \
	    ${abs_srcdir}/slurmconfgen.py.in >slurmconfgen.py

clean-generic:
	rm -f opt_modulefiles_slurm slurmconfgen.py

slurm-slurm-15-08-7-1/contribs/cray/Makefile.in

# Makefile.in generated by
automake 1.14.1 from Makefile.am. # @configure_input@ # Copyright (C) 1994-2013 Free Software Foundation, Inc. # This Makefile.in is free software; the Free Software Foundation # gives unlimited permission to copy and/or distribute it, # with or without modifications, as long as this notice is preserved. # This program is distributed in the hope that it will be useful, # but WITHOUT ANY WARRANTY, to the extent permitted by law; without # even the implied warranty of MERCHANTABILITY or FITNESS FOR A # PARTICULAR PURPOSE. @SET_MAKE@ # # Makefile for cray scripts # VPATH = @srcdir@ am__is_gnu_make = test -n '$(MAKEFILE_LIST)' && test -n '$(MAKELEVEL)' am__make_running_with_option = \ case $${target_option-} in \ ?) ;; \ *) echo "am__make_running_with_option: internal error: invalid" \ "target option '$${target_option-}' specified" >&2; \ exit 1;; \ esac; \ has_opt=no; \ sane_makeflags=$$MAKEFLAGS; \ if $(am__is_gnu_make); then \ sane_makeflags=$$MFLAGS; \ else \ case $$MAKEFLAGS in \ *\\[\ \ ]*) \ bs=\\; \ sane_makeflags=`printf '%s\n' "$$MAKEFLAGS" \ | sed "s/$$bs$$bs[$$bs $$bs ]*//g"`;; \ esac; \ fi; \ skip_next=no; \ strip_trailopt () \ { \ flg=`printf '%s\n' "$$flg" | sed "s/$$1.*$$//"`; \ }; \ for flg in $$sane_makeflags; do \ test $$skip_next = yes && { skip_next=no; continue; }; \ case $$flg in \ *=*|--*) continue;; \ -*I) strip_trailopt 'I'; skip_next=yes;; \ -*I?*) strip_trailopt 'I';; \ -*O) strip_trailopt 'O'; skip_next=yes;; \ -*O?*) strip_trailopt 'O';; \ -*l) strip_trailopt 'l'; skip_next=yes;; \ -*l?*) strip_trailopt 'l';; \ -[dEDm]) skip_next=yes;; \ -[JT]) skip_next=yes;; \ esac; \ case $$flg in \ *$$target_option*) has_opt=yes; break;; \ esac; \ done; \ test $$has_opt = yes am__make_dryrun = (target_option=n; $(am__make_running_with_option)) am__make_keepgoing = (target_option=k; $(am__make_running_with_option)) pkgdatadir = $(datadir)/@PACKAGE@ pkgincludedir = $(includedir)/@PACKAGE@ pkglibdir = $(libdir)/@PACKAGE@ pkglibexecdir = 
$(libexecdir)/@PACKAGE@ am__cd = CDPATH="$${ZSH_VERSION+.}$(PATH_SEPARATOR)" && cd install_sh_DATA = $(install_sh) -c -m 644 install_sh_PROGRAM = $(install_sh) -c install_sh_SCRIPT = $(install_sh) -c INSTALL_HEADER = $(INSTALL_DATA) transform = $(program_transform_name) NORMAL_INSTALL = : PRE_INSTALL = : POST_INSTALL = : NORMAL_UNINSTALL = : PRE_UNINSTALL = : POST_UNINSTALL = : build_triplet = @build@ host_triplet = @host@ target_triplet = @target@ subdir = contribs/cray DIST_COMMON = $(srcdir)/Makefile.in $(srcdir)/Makefile.am ACLOCAL_M4 = $(top_srcdir)/aclocal.m4 am__aclocal_m4_deps = $(top_srcdir)/auxdir/ax_lib_hdf5.m4 \ $(top_srcdir)/auxdir/ax_pthread.m4 \ $(top_srcdir)/auxdir/libtool.m4 \ $(top_srcdir)/auxdir/ltoptions.m4 \ $(top_srcdir)/auxdir/ltsugar.m4 \ $(top_srcdir)/auxdir/ltversion.m4 \ $(top_srcdir)/auxdir/lt~obsolete.m4 \ $(top_srcdir)/auxdir/slurm.m4 \ $(top_srcdir)/auxdir/x_ac__system_configuration.m4 \ $(top_srcdir)/auxdir/x_ac_affinity.m4 \ $(top_srcdir)/auxdir/x_ac_aix.m4 \ $(top_srcdir)/auxdir/x_ac_blcr.m4 \ $(top_srcdir)/auxdir/x_ac_bluegene.m4 \ $(top_srcdir)/auxdir/x_ac_cflags.m4 \ $(top_srcdir)/auxdir/x_ac_cray.m4 \ $(top_srcdir)/auxdir/x_ac_curl.m4 \ $(top_srcdir)/auxdir/x_ac_databases.m4 \ $(top_srcdir)/auxdir/x_ac_debug.m4 \ $(top_srcdir)/auxdir/x_ac_dlfcn.m4 \ $(top_srcdir)/auxdir/x_ac_env.m4 \ $(top_srcdir)/auxdir/x_ac_freeipmi.m4 \ $(top_srcdir)/auxdir/x_ac_gpl_licensed.m4 \ $(top_srcdir)/auxdir/x_ac_hwloc.m4 \ $(top_srcdir)/auxdir/x_ac_iso.m4 \ $(top_srcdir)/auxdir/x_ac_json.m4 \ $(top_srcdir)/auxdir/x_ac_lua.m4 \ $(top_srcdir)/auxdir/x_ac_man2html.m4 \ $(top_srcdir)/auxdir/x_ac_munge.m4 \ $(top_srcdir)/auxdir/x_ac_ncurses.m4 \ $(top_srcdir)/auxdir/x_ac_netloc.m4 \ $(top_srcdir)/auxdir/x_ac_nrt.m4 \ $(top_srcdir)/auxdir/x_ac_ofed.m4 \ $(top_srcdir)/auxdir/x_ac_pam.m4 \ $(top_srcdir)/auxdir/x_ac_printf_null.m4 \ $(top_srcdir)/auxdir/x_ac_ptrace.m4 \ $(top_srcdir)/auxdir/x_ac_readline.m4 \ $(top_srcdir)/auxdir/x_ac_rrdtool.m4 \ 
$(top_srcdir)/auxdir/x_ac_setpgrp.m4 \ $(top_srcdir)/auxdir/x_ac_setproctitle.m4 \ $(top_srcdir)/auxdir/x_ac_sgi_job.m4 \ $(top_srcdir)/auxdir/x_ac_slurm_ssl.m4 \ $(top_srcdir)/auxdir/x_ac_sun_const.m4 \ $(top_srcdir)/configure.ac am__configure_deps = $(am__aclocal_m4_deps) $(CONFIGURE_DEPENDENCIES) \ $(ACLOCAL_M4) mkinstalldirs = $(install_sh) -d CONFIG_HEADER = $(top_builddir)/config.h $(top_builddir)/slurm/slurm.h CONFIG_CLEAN_FILES = CONFIG_CLEAN_VPATH_FILES = am__vpath_adj_setup = srcdirstrip=`echo "$(srcdir)" | sed 's|.|.|g'`; am__vpath_adj = case $$p in \ $(srcdir)/*) f=`echo "$$p" | sed "s|^$$srcdirstrip/||"`;; \ *) f=$$p;; \ esac; am__strip_dir = f=`echo $$p | sed -e 's|^.*/||'`; am__install_max = 40 am__nobase_strip_setup = \ srcdirstrip=`echo "$(srcdir)" | sed 's/[].[^$$\\*|]/\\\\&/g'` am__nobase_strip = \ for p in $$list; do echo "$$p"; done | sed -e "s|$$srcdirstrip/||" am__nobase_list = $(am__nobase_strip_setup); \ for p in $$list; do echo "$$p $$p"; done | \ sed "s| $$srcdirstrip/| |;"' / .*\//!s/ .*/ ./; s,\( .*\)/[^/]*$$,\1,' | \ $(AWK) 'BEGIN { files["."] = "" } { files[$$2] = files[$$2] " " $$1; \ if (++n[$$2] == $(am__install_max)) \ { print $$2, files[$$2]; n[$$2] = 0; files[$$2] = "" } } \ END { for (dir in files) print dir, files[dir] }' am__base_list = \ sed '$$!N;$$!N;$$!N;$$!N;$$!N;$$!N;$$!N;s/\n/ /g' | \ sed '$$!N;$$!N;$$!N;$$!N;s/\n/ /g' am__uninstall_files_from_dir = { \ test -z "$$files" \ || { test ! -d "$$dir" && test ! -f "$$dir" && test ! 
-r "$$dir"; } \ || { echo " ( cd '$$dir' && rm -f" $$files ")"; \ $(am__cd) "$$dir" && rm -f $$files; }; \ } am__installdirs = "$(DESTDIR)$(sbindir)" SCRIPTS = $(sbin_SCRIPTS) AM_V_P = $(am__v_P_@AM_V@) am__v_P_ = $(am__v_P_@AM_DEFAULT_V@) am__v_P_0 = false am__v_P_1 = : AM_V_GEN = $(am__v_GEN_@AM_V@) am__v_GEN_ = $(am__v_GEN_@AM_DEFAULT_V@) am__v_GEN_0 = @echo " GEN " $@; am__v_GEN_1 = AM_V_at = $(am__v_at_@AM_V@) am__v_at_ = $(am__v_at_@AM_DEFAULT_V@) am__v_at_0 = @ am__v_at_1 = SOURCES = DIST_SOURCES = RECURSIVE_TARGETS = all-recursive check-recursive cscopelist-recursive \ ctags-recursive dvi-recursive html-recursive info-recursive \ install-data-recursive install-dvi-recursive \ install-exec-recursive install-html-recursive \ install-info-recursive install-pdf-recursive \ install-ps-recursive install-recursive installcheck-recursive \ installdirs-recursive pdf-recursive ps-recursive \ tags-recursive uninstall-recursive am__can_run_installinfo = \ case $$AM_UPDATE_INFO_DIR in \ n|no|NO) false;; \ *) (install-info --version) >/dev/null 2>&1;; \ esac DATA = $(noinst_DATA) RECURSIVE_CLEAN_TARGETS = mostlyclean-recursive clean-recursive \ distclean-recursive maintainer-clean-recursive am__recursive_targets = \ $(RECURSIVE_TARGETS) \ $(RECURSIVE_CLEAN_TARGETS) \ $(am__extra_recursive_targets) AM_RECURSIVE_TARGETS = $(am__recursive_targets:-recursive=) TAGS CTAGS \ distdir am__tagged_files = $(HEADERS) $(SOURCES) $(TAGS_FILES) $(LISP) # Read a list of newline-separated strings from the standard input, # and print each of them once, without duplicates. Input order is # *not* preserved. am__uniquify_input = $(AWK) '\ BEGIN { nonempty = 0; } \ { items[$$0] = 1; nonempty = 1; } \ END { if (nonempty) { for (i in items) print i; }; } \ ' # Make sure the list of sources is unique. This is necessary because, # e.g., the same source file might be shared among _SOURCES variables # for different programs/libraries. 
am__define_uniq_tagged_files = \ list='$(am__tagged_files)'; \ unique=`for i in $$list; do \ if test -f "$$i"; then echo $$i; else echo $(srcdir)/$$i; fi; \ done | $(am__uniquify_input)` ETAGS = etags CTAGS = ctags DIST_SUBDIRS = $(SUBDIRS) DISTFILES = $(DIST_COMMON) $(DIST_SOURCES) $(TEXINFOS) $(EXTRA_DIST) am__relativize = \ dir0=`pwd`; \ sed_first='s,^\([^/]*\)/.*$$,\1,'; \ sed_rest='s,^[^/]*/*,,'; \ sed_last='s,^.*/\([^/]*\)$$,\1,'; \ sed_butlast='s,/*[^/]*$$,,'; \ while test -n "$$dir1"; do \ first=`echo "$$dir1" | sed -e "$$sed_first"`; \ if test "$$first" != "."; then \ if test "$$first" = ".."; then \ dir2=`echo "$$dir0" | sed -e "$$sed_last"`/"$$dir2"; \ dir0=`echo "$$dir0" | sed -e "$$sed_butlast"`; \ else \ first2=`echo "$$dir2" | sed -e "$$sed_first"`; \ if test "$$first2" = "$$first"; then \ dir2=`echo "$$dir2" | sed -e "$$sed_rest"`; \ else \ dir2="../$$dir2"; \ fi; \ dir0="$$dir0"/"$$first"; \ fi; \ fi; \ dir1=`echo "$$dir1" | sed -e "$$sed_rest"`; \ done; \ reldir="$$dir2" ACLOCAL = @ACLOCAL@ AMTAR = @AMTAR@ AM_DEFAULT_VERBOSITY = @AM_DEFAULT_VERBOSITY@ AR = @AR@ AUTHD_CFLAGS = @AUTHD_CFLAGS@ AUTHD_LIBS = @AUTHD_LIBS@ AUTOCONF = @AUTOCONF@ AUTOHEADER = @AUTOHEADER@ AUTOMAKE = @AUTOMAKE@ AWK = @AWK@ BGL_LOADED = @BGL_LOADED@ BGQ_LOADED = @BGQ_LOADED@ BG_INCLUDES = @BG_INCLUDES@ BG_LDFLAGS = @BG_LDFLAGS@ BG_L_P_LOADED = @BG_L_P_LOADED@ BLCR_CPPFLAGS = @BLCR_CPPFLAGS@ BLCR_HOME = @BLCR_HOME@ BLCR_LDFLAGS = @BLCR_LDFLAGS@ BLCR_LIBS = @BLCR_LIBS@ BLUEGENE_LOADED = @BLUEGENE_LOADED@ CC = @CC@ CCDEPMODE = @CCDEPMODE@ CFLAGS = @CFLAGS@ CHECK_CFLAGS = @CHECK_CFLAGS@ CHECK_LIBS = @CHECK_LIBS@ CMD_LDFLAGS = @CMD_LDFLAGS@ CPP = @CPP@ CPPFLAGS = @CPPFLAGS@ CRAY_JOB_CPPFLAGS = @CRAY_JOB_CPPFLAGS@ CRAY_JOB_LDFLAGS = @CRAY_JOB_LDFLAGS@ CRAY_SELECT_CPPFLAGS = @CRAY_SELECT_CPPFLAGS@ CRAY_SELECT_LDFLAGS = @CRAY_SELECT_LDFLAGS@ CRAY_SWITCH_CPPFLAGS = @CRAY_SWITCH_CPPFLAGS@ CRAY_SWITCH_LDFLAGS = @CRAY_SWITCH_LDFLAGS@ CRAY_TASK_CPPFLAGS = @CRAY_TASK_CPPFLAGS@ 
CRAY_TASK_LDFLAGS = @CRAY_TASK_LDFLAGS@ CXX = @CXX@ CXXCPP = @CXXCPP@ CXXDEPMODE = @CXXDEPMODE@ CXXFLAGS = @CXXFLAGS@ CYGPATH_W = @CYGPATH_W@ DATAWARP_CPPFLAGS = @DATAWARP_CPPFLAGS@ DATAWARP_LDFLAGS = @DATAWARP_LDFLAGS@ DEFS = @DEFS@ DEPDIR = @DEPDIR@ DLLTOOL = @DLLTOOL@ DL_LIBS = @DL_LIBS@ DSYMUTIL = @DSYMUTIL@ DUMPBIN = @DUMPBIN@ ECHO_C = @ECHO_C@ ECHO_N = @ECHO_N@ ECHO_T = @ECHO_T@ EGREP = @EGREP@ EXEEXT = @EXEEXT@ FGREP = @FGREP@ FREEIPMI_CPPFLAGS = @FREEIPMI_CPPFLAGS@ FREEIPMI_LDFLAGS = @FREEIPMI_LDFLAGS@ FREEIPMI_LIBS = @FREEIPMI_LIBS@ GLIB_CFLAGS = @GLIB_CFLAGS@ GLIB_COMPILE_RESOURCES = @GLIB_COMPILE_RESOURCES@ GLIB_GENMARSHAL = @GLIB_GENMARSHAL@ GLIB_LIBS = @GLIB_LIBS@ GLIB_MKENUMS = @GLIB_MKENUMS@ GOBJECT_QUERY = @GOBJECT_QUERY@ GREP = @GREP@ GTK_CFLAGS = @GTK_CFLAGS@ GTK_LIBS = @GTK_LIBS@ H5CC = @H5CC@ H5FC = @H5FC@ HAVEMYSQLCONFIG = @HAVEMYSQLCONFIG@ HAVE_AIX = @HAVE_AIX@ HAVE_MAN2HTML = @HAVE_MAN2HTML@ HAVE_NRT = @HAVE_NRT@ HAVE_OPENSSL = @HAVE_OPENSSL@ HAVE_SOME_CURSES = @HAVE_SOME_CURSES@ HDF5_CC = @HDF5_CC@ HDF5_CFLAGS = @HDF5_CFLAGS@ HDF5_CPPFLAGS = @HDF5_CPPFLAGS@ HDF5_FC = @HDF5_FC@ HDF5_FFLAGS = @HDF5_FFLAGS@ HDF5_FLIBS = @HDF5_FLIBS@ HDF5_LDFLAGS = @HDF5_LDFLAGS@ HDF5_LIBS = @HDF5_LIBS@ HDF5_VERSION = @HDF5_VERSION@ HWLOC_CPPFLAGS = @HWLOC_CPPFLAGS@ HWLOC_LDFLAGS = @HWLOC_LDFLAGS@ HWLOC_LIBS = @HWLOC_LIBS@ INSTALL = @INSTALL@ INSTALL_DATA = @INSTALL_DATA@ INSTALL_PROGRAM = @INSTALL_PROGRAM@ INSTALL_SCRIPT = @INSTALL_SCRIPT@ INSTALL_STRIP_PROGRAM = @INSTALL_STRIP_PROGRAM@ JSON_CPPFLAGS = @JSON_CPPFLAGS@ JSON_LDFLAGS = @JSON_LDFLAGS@ LD = @LD@ LDFLAGS = @LDFLAGS@ LIBCURL = @LIBCURL@ LIBCURL_CPPFLAGS = @LIBCURL_CPPFLAGS@ LIBOBJS = @LIBOBJS@ LIBS = @LIBS@ LIBTOOL = @LIBTOOL@ LIB_LDFLAGS = @LIB_LDFLAGS@ LIPO = @LIPO@ LN_S = @LN_S@ LTLIBOBJS = @LTLIBOBJS@ MAINT = @MAINT@ MAKEINFO = @MAKEINFO@ MANIFEST_TOOL = @MANIFEST_TOOL@ MKDIR_P = @MKDIR_P@ MUNGE_CPPFLAGS = @MUNGE_CPPFLAGS@ MUNGE_DIR = @MUNGE_DIR@ MUNGE_LDFLAGS = @MUNGE_LDFLAGS@ MUNGE_LIBS = 
@MUNGE_LIBS@ MYSQL_CFLAGS = @MYSQL_CFLAGS@ MYSQL_LIBS = @MYSQL_LIBS@ NCURSES = @NCURSES@ NETLOC_CPPFLAGS = @NETLOC_CPPFLAGS@ NETLOC_LDFLAGS = @NETLOC_LDFLAGS@ NETLOC_LIBS = @NETLOC_LIBS@ NM = @NM@ NMEDIT = @NMEDIT@ NRT_CPPFLAGS = @NRT_CPPFLAGS@ NUMA_LIBS = @NUMA_LIBS@ OBJDUMP = @OBJDUMP@ OBJEXT = @OBJEXT@ OFED_CPPFLAGS = @OFED_CPPFLAGS@ OFED_LDFLAGS = @OFED_LDFLAGS@ OFED_LIBS = @OFED_LIBS@ OTOOL = @OTOOL@ OTOOL64 = @OTOOL64@ PACKAGE = @PACKAGE@ PACKAGE_BUGREPORT = @PACKAGE_BUGREPORT@ PACKAGE_NAME = @PACKAGE_NAME@ PACKAGE_STRING = @PACKAGE_STRING@ PACKAGE_TARNAME = @PACKAGE_TARNAME@ PACKAGE_URL = @PACKAGE_URL@ PACKAGE_VERSION = @PACKAGE_VERSION@ PAM_DIR = @PAM_DIR@ PAM_LIBS = @PAM_LIBS@ PATH_SEPARATOR = @PATH_SEPARATOR@ PKG_CONFIG = @PKG_CONFIG@ PKG_CONFIG_LIBDIR = @PKG_CONFIG_LIBDIR@ PKG_CONFIG_PATH = @PKG_CONFIG_PATH@ PROCTRACKDIR = @PROCTRACKDIR@ PROJECT = @PROJECT@ PTHREAD_CC = @PTHREAD_CC@ PTHREAD_CFLAGS = @PTHREAD_CFLAGS@ PTHREAD_LIBS = @PTHREAD_LIBS@ RANLIB = @RANLIB@ READLINE_LIBS = @READLINE_LIBS@ REAL_BGQ_LOADED = @REAL_BGQ_LOADED@ REAL_BG_L_P_LOADED = @REAL_BG_L_P_LOADED@ RELEASE = @RELEASE@ RRDTOOL_CPPFLAGS = @RRDTOOL_CPPFLAGS@ RRDTOOL_LDFLAGS = @RRDTOOL_LDFLAGS@ RRDTOOL_LIBS = @RRDTOOL_LIBS@ RUNJOB_LDFLAGS = @RUNJOB_LDFLAGS@ SED = @SED@ SEMAPHORE_LIBS = @SEMAPHORE_LIBS@ SEMAPHORE_SOURCES = @SEMAPHORE_SOURCES@ SET_MAKE = @SET_MAKE@ SHELL = @SHELL@ SLEEP_CMD = @SLEEP_CMD@ SLURMCTLD_PORT = @SLURMCTLD_PORT@ SLURMCTLD_PORT_COUNT = @SLURMCTLD_PORT_COUNT@ SLURMDBD_PORT = @SLURMDBD_PORT@ SLURMD_PORT = @SLURMD_PORT@ SLURM_API_AGE = @SLURM_API_AGE@ SLURM_API_CURRENT = @SLURM_API_CURRENT@ SLURM_API_MAJOR = @SLURM_API_MAJOR@ SLURM_API_REVISION = @SLURM_API_REVISION@ SLURM_API_VERSION = @SLURM_API_VERSION@ SLURM_MAJOR = @SLURM_MAJOR@ SLURM_MICRO = @SLURM_MICRO@ SLURM_MINOR = @SLURM_MINOR@ SLURM_PREFIX = @SLURM_PREFIX@ SLURM_VERSION_NUMBER = @SLURM_VERSION_NUMBER@ SLURM_VERSION_STRING = @SLURM_VERSION_STRING@ SO_LDFLAGS = @SO_LDFLAGS@ SSL_CPPFLAGS = @SSL_CPPFLAGS@ 
SSL_LDFLAGS = @SSL_LDFLAGS@ SSL_LIBS = @SSL_LIBS@ STRIP = @STRIP@ SUCMD = @SUCMD@ UTIL_LIBS = @UTIL_LIBS@ VERSION = @VERSION@ _libcurl_config = @_libcurl_config@ abs_builddir = @abs_builddir@ abs_srcdir = @abs_srcdir@ abs_top_builddir = @abs_top_builddir@ abs_top_srcdir = @abs_top_srcdir@ ac_ct_AR = @ac_ct_AR@ ac_ct_CC = @ac_ct_CC@ ac_ct_CXX = @ac_ct_CXX@ ac_ct_DUMPBIN = @ac_ct_DUMPBIN@ ac_have_man2html = @ac_have_man2html@ am__include = @am__include@ am__leading_dot = @am__leading_dot@ am__quote = @am__quote@ am__tar = @am__tar@ am__untar = @am__untar@ ax_pthread_config = @ax_pthread_config@ bindir = @bindir@ build = @build@ build_alias = @build_alias@ build_cpu = @build_cpu@ build_os = @build_os@ build_vendor = @build_vendor@ builddir = @builddir@ datadir = @datadir@ datarootdir = @datarootdir@ docdir = @docdir@ dvidir = @dvidir@ exec_prefix = @exec_prefix@ host = @host@ host_alias = @host_alias@ host_cpu = @host_cpu@ host_os = @host_os@ host_vendor = @host_vendor@ htmldir = @htmldir@ includedir = @includedir@ infodir = @infodir@ install_sh = @install_sh@ libdir = @libdir@ libexecdir = @libexecdir@ localedir = @localedir@ localstatedir = @localstatedir@ lua_CFLAGS = @lua_CFLAGS@ lua_LIBS = @lua_LIBS@ mandir = @mandir@ mkdir_p = @mkdir_p@ oldincludedir = @oldincludedir@ pdfdir = @pdfdir@ prefix = @prefix@ program_transform_name = @program_transform_name@ psdir = @psdir@ sbindir = @sbindir@ sharedstatedir = @sharedstatedir@ srcdir = @srcdir@ sysconfdir = @sysconfdir@ target = @target@ target_alias = @target_alias@ target_cpu = @target_cpu@ target_os = @target_os@ target_vendor = @target_vendor@ top_build_prefix = @top_build_prefix@ top_builddir = @top_builddir@ top_srcdir = @top_srcdir@ SUBDIRS = csm AUTOMAKE_OPTIONS = foreign EXTRA_DIST = \ etc_sysconfig_slurm \ libalps_test_programs.tar.gz \ opt_modulefiles_slurm.in \ pam_job.c \ plugstack.conf.template \ slurm.conf.template \ slurmconfgen.py.in @HAVE_NATIVE_CRAY_TRUE@sbin_SCRIPTS = slurmconfgen.py 
@HAVE_REAL_CRAY_TRUE@noinst_DATA = opt_modulefiles_slurm all: all-recursive .SUFFIXES: $(srcdir)/Makefile.in: @MAINTAINER_MODE_TRUE@ $(srcdir)/Makefile.am $(am__configure_deps) @for dep in $?; do \ case '$(am__configure_deps)' in \ *$$dep*) \ ( cd $(top_builddir) && $(MAKE) $(AM_MAKEFLAGS) am--refresh ) \ && { if test -f $@; then exit 0; else break; fi; }; \ exit 1;; \ esac; \ done; \ echo ' cd $(top_srcdir) && $(AUTOMAKE) --foreign contribs/cray/Makefile'; \ $(am__cd) $(top_srcdir) && \ $(AUTOMAKE) --foreign contribs/cray/Makefile .PRECIOUS: Makefile Makefile: $(srcdir)/Makefile.in $(top_builddir)/config.status @case '$?' in \ *config.status*) \ cd $(top_builddir) && $(MAKE) $(AM_MAKEFLAGS) am--refresh;; \ *) \ echo ' cd $(top_builddir) && $(SHELL) ./config.status $(subdir)/$@ $(am__depfiles_maybe)'; \ cd $(top_builddir) && $(SHELL) ./config.status $(subdir)/$@ $(am__depfiles_maybe);; \ esac; $(top_builddir)/config.status: $(top_srcdir)/configure $(CONFIG_STATUS_DEPENDENCIES) cd $(top_builddir) && $(MAKE) $(AM_MAKEFLAGS) am--refresh $(top_srcdir)/configure: @MAINTAINER_MODE_TRUE@ $(am__configure_deps) cd $(top_builddir) && $(MAKE) $(AM_MAKEFLAGS) am--refresh $(ACLOCAL_M4): @MAINTAINER_MODE_TRUE@ $(am__aclocal_m4_deps) cd $(top_builddir) && $(MAKE) $(AM_MAKEFLAGS) am--refresh $(am__aclocal_m4_deps): install-sbinSCRIPTS: $(sbin_SCRIPTS) @$(NORMAL_INSTALL) @list='$(sbin_SCRIPTS)'; test -n "$(sbindir)" || list=; \ if test -n "$$list"; then \ echo " $(MKDIR_P) '$(DESTDIR)$(sbindir)'"; \ $(MKDIR_P) "$(DESTDIR)$(sbindir)" || exit 1; \ fi; \ for p in $$list; do \ if test -f "$$p"; then d=; else d="$(srcdir)/"; fi; \ if test -f "$$d$$p"; then echo "$$d$$p"; echo "$$p"; else :; fi; \ done | \ sed -e 'p;s,.*/,,;n' \ -e 'h;s|.*|.|' \ -e 'p;x;s,.*/,,;$(transform)' | sed 'N;N;N;s,\n, ,g' | \ $(AWK) 'BEGIN { files["."] = ""; dirs["."] = 1; } \ { d=$$3; if (dirs[d] != 1) { print "d", d; dirs[d] = 1 } \ if ($$2 == $$4) { files[d] = files[d] " " $$1; \ if (++n[d] == 
$(am__install_max)) { \ print "f", d, files[d]; n[d] = 0; files[d] = "" } } \ else { print "f", d "/" $$4, $$1 } } \ END { for (d in files) print "f", d, files[d] }' | \ while read type dir files; do \ if test "$$dir" = .; then dir=; else dir=/$$dir; fi; \ test -z "$$files" || { \ echo " $(INSTALL_SCRIPT) $$files '$(DESTDIR)$(sbindir)$$dir'"; \ $(INSTALL_SCRIPT) $$files "$(DESTDIR)$(sbindir)$$dir" || exit $$?; \ } \ ; done uninstall-sbinSCRIPTS: @$(NORMAL_UNINSTALL) @list='$(sbin_SCRIPTS)'; test -n "$(sbindir)" || exit 0; \ files=`for p in $$list; do echo "$$p"; done | \ sed -e 's,.*/,,;$(transform)'`; \ dir='$(DESTDIR)$(sbindir)'; $(am__uninstall_files_from_dir) mostlyclean-libtool: -rm -f *.lo clean-libtool: -rm -rf .libs _libs # This directory's subdirectories are mostly independent; you can cd # into them and run 'make' without going through this Makefile. # To change the values of 'make' variables: instead of editing Makefiles, # (1) if the variable is set in 'config.status', edit 'config.status' # (which will cause the Makefiles to be regenerated when you run 'make'); # (2) otherwise, pass the desired values on the 'make' command line. 
$(am__recursive_targets): @fail=; \ if $(am__make_keepgoing); then \ failcom='fail=yes'; \ else \ failcom='exit 1'; \ fi; \ dot_seen=no; \ target=`echo $@ | sed s/-recursive//`; \ case "$@" in \ distclean-* | maintainer-clean-*) list='$(DIST_SUBDIRS)' ;; \ *) list='$(SUBDIRS)' ;; \ esac; \ for subdir in $$list; do \ echo "Making $$target in $$subdir"; \ if test "$$subdir" = "."; then \ dot_seen=yes; \ local_target="$$target-am"; \ else \ local_target="$$target"; \ fi; \ ($(am__cd) $$subdir && $(MAKE) $(AM_MAKEFLAGS) $$local_target) \ || eval $$failcom; \ done; \ if test "$$dot_seen" = "no"; then \ $(MAKE) $(AM_MAKEFLAGS) "$$target-am" || exit 1; \ fi; test -z "$$fail" ID: $(am__tagged_files) $(am__define_uniq_tagged_files); mkid -fID $$unique tags: tags-recursive TAGS: tags tags-am: $(TAGS_DEPENDENCIES) $(am__tagged_files) set x; \ here=`pwd`; \ if ($(ETAGS) --etags-include --version) >/dev/null 2>&1; then \ include_option=--etags-include; \ empty_fix=.; \ else \ include_option=--include; \ empty_fix=; \ fi; \ list='$(SUBDIRS)'; for subdir in $$list; do \ if test "$$subdir" = .; then :; else \ test ! 
-f $$subdir/TAGS || \ set "$$@" "$$include_option=$$here/$$subdir/TAGS"; \ fi; \ done; \ $(am__define_uniq_tagged_files); \ shift; \ if test -z "$(ETAGS_ARGS)$$*$$unique"; then :; else \ test -n "$$unique" || unique=$$empty_fix; \ if test $$# -gt 0; then \ $(ETAGS) $(ETAGSFLAGS) $(AM_ETAGSFLAGS) $(ETAGS_ARGS) \ "$$@" $$unique; \ else \ $(ETAGS) $(ETAGSFLAGS) $(AM_ETAGSFLAGS) $(ETAGS_ARGS) \ $$unique; \ fi; \ fi ctags: ctags-recursive CTAGS: ctags ctags-am: $(TAGS_DEPENDENCIES) $(am__tagged_files) $(am__define_uniq_tagged_files); \ test -z "$(CTAGS_ARGS)$$unique" \ || $(CTAGS) $(CTAGSFLAGS) $(AM_CTAGSFLAGS) $(CTAGS_ARGS) \ $$unique GTAGS: here=`$(am__cd) $(top_builddir) && pwd` \ && $(am__cd) $(top_srcdir) \ && gtags -i $(GTAGS_ARGS) "$$here" cscopelist: cscopelist-recursive cscopelist-am: $(am__tagged_files) list='$(am__tagged_files)'; \ case "$(srcdir)" in \ [\\/]* | ?:[\\/]*) sdir="$(srcdir)" ;; \ *) sdir=$(subdir)/$(srcdir) ;; \ esac; \ for i in $$list; do \ if test -f "$$i"; then \ echo "$(subdir)/$$i"; \ else \ echo "$$sdir/$$i"; \ fi; \ done >> $(top_builddir)/cscope.files distclean-tags: -rm -f TAGS ID GTAGS GRTAGS GSYMS GPATH tags distdir: $(DISTFILES) @srcdirstrip=`echo "$(srcdir)" | sed 's/[].[^$$\\*]/\\\\&/g'`; \ topsrcdirstrip=`echo "$(top_srcdir)" | sed 's/[].[^$$\\*]/\\\\&/g'`; \ list='$(DISTFILES)'; \ dist_files=`for file in $$list; do echo $$file; done | \ sed -e "s|^$$srcdirstrip/||;t" \ -e "s|^$$topsrcdirstrip/|$(top_builddir)/|;t"`; \ case $$dist_files in \ */*) $(MKDIR_P) `echo "$$dist_files" | \ sed '/\//!d;s|^|$(distdir)/|;s,/[^/]*$$,,' | \ sort -u` ;; \ esac; \ for file in $$dist_files; do \ if test -f $$file || test -d $$file; then d=.; else d=$(srcdir); fi; \ if test -d $$d/$$file; then \ dir=`echo "/$$file" | sed -e 's,/[^/]*$$,,'`; \ if test -d "$(distdir)/$$file"; then \ find "$(distdir)/$$file" -type d ! 
-perm -700 -exec chmod u+rwx {} \;; \ fi; \ if test -d $(srcdir)/$$file && test $$d != $(srcdir); then \ cp -fpR $(srcdir)/$$file "$(distdir)$$dir" || exit 1; \ find "$(distdir)/$$file" -type d ! -perm -700 -exec chmod u+rwx {} \;; \ fi; \ cp -fpR $$d/$$file "$(distdir)$$dir" || exit 1; \ else \ test -f "$(distdir)/$$file" \ || cp -p $$d/$$file "$(distdir)/$$file" \ || exit 1; \ fi; \ done @list='$(DIST_SUBDIRS)'; for subdir in $$list; do \ if test "$$subdir" = .; then :; else \ $(am__make_dryrun) \ || test -d "$(distdir)/$$subdir" \ || $(MKDIR_P) "$(distdir)/$$subdir" \ || exit 1; \ dir1=$$subdir; dir2="$(distdir)/$$subdir"; \ $(am__relativize); \ new_distdir=$$reldir; \ dir1=$$subdir; dir2="$(top_distdir)"; \ $(am__relativize); \ new_top_distdir=$$reldir; \ echo " (cd $$subdir && $(MAKE) $(AM_MAKEFLAGS) top_distdir="$$new_top_distdir" distdir="$$new_distdir" \\"; \ echo " am__remove_distdir=: am__skip_length_check=: am__skip_mode_fix=: distdir)"; \ ($(am__cd) $$subdir && \ $(MAKE) $(AM_MAKEFLAGS) \ top_distdir="$$new_top_distdir" \ distdir="$$new_distdir" \ am__remove_distdir=: \ am__skip_length_check=: \ am__skip_mode_fix=: \ distdir) \ || exit 1; \ fi; \ done check-am: all-am check: check-recursive all-am: Makefile $(SCRIPTS) $(DATA) installdirs: installdirs-recursive installdirs-am: for dir in "$(DESTDIR)$(sbindir)"; do \ test -z "$$dir" || $(MKDIR_P) "$$dir"; \ done install: install-recursive install-exec: install-exec-recursive install-data: install-data-recursive uninstall: uninstall-recursive install-am: all-am @$(MAKE) $(AM_MAKEFLAGS) install-exec-am install-data-am installcheck: installcheck-recursive install-strip: if test -z '$(STRIP)'; then \ $(MAKE) $(AM_MAKEFLAGS) INSTALL_PROGRAM="$(INSTALL_STRIP_PROGRAM)" \ install_sh_PROGRAM="$(INSTALL_STRIP_PROGRAM)" INSTALL_STRIP_FLAG=-s \ install; \ else \ $(MAKE) $(AM_MAKEFLAGS) INSTALL_PROGRAM="$(INSTALL_STRIP_PROGRAM)" \ install_sh_PROGRAM="$(INSTALL_STRIP_PROGRAM)" INSTALL_STRIP_FLAG=-s \ 
"INSTALL_PROGRAM_ENV=STRIPPROG='$(STRIP)'" install; \ fi mostlyclean-generic: distclean-generic: -test -z "$(CONFIG_CLEAN_FILES)" || rm -f $(CONFIG_CLEAN_FILES) -test . = "$(srcdir)" || test -z "$(CONFIG_CLEAN_VPATH_FILES)" || rm -f $(CONFIG_CLEAN_VPATH_FILES) maintainer-clean-generic: @echo "This command is intended for maintainers to use" @echo "it deletes files that may require special tools to rebuild." clean: clean-recursive clean-am: clean-generic clean-libtool mostlyclean-am distclean: distclean-recursive -rm -f Makefile distclean-am: clean-am distclean-generic distclean-tags dvi: dvi-recursive dvi-am: html: html-recursive html-am: info: info-recursive info-am: install-data-am: install-dvi: install-dvi-recursive install-dvi-am: install-exec-am: install-sbinSCRIPTS install-html: install-html-recursive install-html-am: install-info: install-info-recursive install-info-am: install-man: install-pdf: install-pdf-recursive install-pdf-am: install-ps: install-ps-recursive install-ps-am: installcheck-am: maintainer-clean: maintainer-clean-recursive -rm -f Makefile maintainer-clean-am: distclean-am maintainer-clean-generic mostlyclean: mostlyclean-recursive mostlyclean-am: mostlyclean-generic mostlyclean-libtool pdf: pdf-recursive pdf-am: ps: ps-recursive ps-am: uninstall-am: uninstall-sbinSCRIPTS .MAKE: $(am__recursive_targets) install-am install-strip .PHONY: $(am__recursive_targets) CTAGS GTAGS TAGS all all-am check \ check-am clean clean-generic clean-libtool cscopelist-am ctags \ ctags-am distclean distclean-generic distclean-libtool \ distclean-tags distdir dvi dvi-am html html-am info info-am \ install install-am install-data install-data-am install-dvi \ install-dvi-am install-exec install-exec-am install-html \ install-html-am install-info install-info-am install-man \ install-pdf install-pdf-am install-ps install-ps-am \ install-sbinSCRIPTS install-strip installcheck installcheck-am \ installdirs installdirs-am maintainer-clean \ maintainer-clean-generic 
mostlyclean mostlyclean-generic \ mostlyclean-libtool pdf pdf-am ps ps-am tags tags-am uninstall \ uninstall-am uninstall-sbinSCRIPTS # Don't rely on autoconf to replace variables outside of makefiles opt_modulefiles_slurm: opt_modulefiles_slurm.in Makefile sed -e 's|@prefix[@]|$(prefix)|g' \ -e 's|@MUNGE_DIR[@]|$(MUNGE_DIR)|g' \ -e 's|@libdir[@]|$(libdir)|g' \ ${abs_srcdir}/opt_modulefiles_slurm.in >opt_modulefiles_slurm slurmconfgen.py: slurmconfgen.py.in Makefile sed -e 's|@sysconfdir[@]|$(sysconfdir)|g' \ ${abs_srcdir}/slurmconfgen.py.in >slurmconfgen.py clean-generic: rm -f opt_modulefiles_slurm slurmconfgen.py # Tell versions [3.59,3.63) of GNU make to not export all variables. # Otherwise a system limit (for SysV at least) may be exceeded. .NOEXPORT: slurm-slurm-15-08-7-1/contribs/cray/csm/000077500000000000000000000000001265000126300176705ustar00rootroot00000000000000slurm-slurm-15-08-7-1/contribs/cray/csm/Makefile.am000066400000000000000000000001561265000126300217260ustar00rootroot00000000000000# # Makefile for cray/csm scripts # EXTRA_DIST = \ gres.conf.j2 \ slurm.conf.j2 \ slurmconfgen_smw.py slurm-slurm-15-08-7-1/contribs/cray/csm/Makefile.in000066400000000000000000000420441265000126300217410ustar00rootroot00000000000000# Makefile.in generated by automake 1.14.1 from Makefile.am. # @configure_input@ # Copyright (C) 1994-2013 Free Software Foundation, Inc. # This Makefile.in is free software; the Free Software Foundation # gives unlimited permission to copy and/or distribute it, # with or without modifications, as long as this notice is preserved. # This program is distributed in the hope that it will be useful, # but WITHOUT ANY WARRANTY, to the extent permitted by law; without # even the implied warranty of MERCHANTABILITY or FITNESS FOR A # PARTICULAR PURPOSE. 
@SET_MAKE@ # # Makefile for cray/csm scripts # VPATH = @srcdir@ am__is_gnu_make = test -n '$(MAKEFILE_LIST)' && test -n '$(MAKELEVEL)' am__make_running_with_option = \ case $${target_option-} in \ ?) ;; \ *) echo "am__make_running_with_option: internal error: invalid" \ "target option '$${target_option-}' specified" >&2; \ exit 1;; \ esac; \ has_opt=no; \ sane_makeflags=$$MAKEFLAGS; \ if $(am__is_gnu_make); then \ sane_makeflags=$$MFLAGS; \ else \ case $$MAKEFLAGS in \ *\\[\ \ ]*) \ bs=\\; \ sane_makeflags=`printf '%s\n' "$$MAKEFLAGS" \ | sed "s/$$bs$$bs[$$bs $$bs ]*//g"`;; \ esac; \ fi; \ skip_next=no; \ strip_trailopt () \ { \ flg=`printf '%s\n' "$$flg" | sed "s/$$1.*$$//"`; \ }; \ for flg in $$sane_makeflags; do \ test $$skip_next = yes && { skip_next=no; continue; }; \ case $$flg in \ *=*|--*) continue;; \ -*I) strip_trailopt 'I'; skip_next=yes;; \ -*I?*) strip_trailopt 'I';; \ -*O) strip_trailopt 'O'; skip_next=yes;; \ -*O?*) strip_trailopt 'O';; \ -*l) strip_trailopt 'l'; skip_next=yes;; \ -*l?*) strip_trailopt 'l';; \ -[dEDm]) skip_next=yes;; \ -[JT]) skip_next=yes;; \ esac; \ case $$flg in \ *$$target_option*) has_opt=yes; break;; \ esac; \ done; \ test $$has_opt = yes am__make_dryrun = (target_option=n; $(am__make_running_with_option)) am__make_keepgoing = (target_option=k; $(am__make_running_with_option)) pkgdatadir = $(datadir)/@PACKAGE@ pkgincludedir = $(includedir)/@PACKAGE@ pkglibdir = $(libdir)/@PACKAGE@ pkglibexecdir = $(libexecdir)/@PACKAGE@ am__cd = CDPATH="$${ZSH_VERSION+.}$(PATH_SEPARATOR)" && cd install_sh_DATA = $(install_sh) -c -m 644 install_sh_PROGRAM = $(install_sh) -c install_sh_SCRIPT = $(install_sh) -c INSTALL_HEADER = $(INSTALL_DATA) transform = $(program_transform_name) NORMAL_INSTALL = : PRE_INSTALL = : POST_INSTALL = : NORMAL_UNINSTALL = : PRE_UNINSTALL = : POST_UNINSTALL = : build_triplet = @build@ host_triplet = @host@ target_triplet = @target@ subdir = contribs/cray/csm DIST_COMMON = $(srcdir)/Makefile.in $(srcdir)/Makefile.am 
ACLOCAL_M4 = $(top_srcdir)/aclocal.m4 am__aclocal_m4_deps = $(top_srcdir)/auxdir/ax_lib_hdf5.m4 \ $(top_srcdir)/auxdir/ax_pthread.m4 \ $(top_srcdir)/auxdir/libtool.m4 \ $(top_srcdir)/auxdir/ltoptions.m4 \ $(top_srcdir)/auxdir/ltsugar.m4 \ $(top_srcdir)/auxdir/ltversion.m4 \ $(top_srcdir)/auxdir/lt~obsolete.m4 \ $(top_srcdir)/auxdir/slurm.m4 \ $(top_srcdir)/auxdir/x_ac__system_configuration.m4 \ $(top_srcdir)/auxdir/x_ac_affinity.m4 \ $(top_srcdir)/auxdir/x_ac_aix.m4 \ $(top_srcdir)/auxdir/x_ac_blcr.m4 \ $(top_srcdir)/auxdir/x_ac_bluegene.m4 \ $(top_srcdir)/auxdir/x_ac_cflags.m4 \ $(top_srcdir)/auxdir/x_ac_cray.m4 \ $(top_srcdir)/auxdir/x_ac_curl.m4 \ $(top_srcdir)/auxdir/x_ac_databases.m4 \ $(top_srcdir)/auxdir/x_ac_debug.m4 \ $(top_srcdir)/auxdir/x_ac_dlfcn.m4 \ $(top_srcdir)/auxdir/x_ac_env.m4 \ $(top_srcdir)/auxdir/x_ac_freeipmi.m4 \ $(top_srcdir)/auxdir/x_ac_gpl_licensed.m4 \ $(top_srcdir)/auxdir/x_ac_hwloc.m4 \ $(top_srcdir)/auxdir/x_ac_iso.m4 \ $(top_srcdir)/auxdir/x_ac_json.m4 \ $(top_srcdir)/auxdir/x_ac_lua.m4 \ $(top_srcdir)/auxdir/x_ac_man2html.m4 \ $(top_srcdir)/auxdir/x_ac_munge.m4 \ $(top_srcdir)/auxdir/x_ac_ncurses.m4 \ $(top_srcdir)/auxdir/x_ac_netloc.m4 \ $(top_srcdir)/auxdir/x_ac_nrt.m4 \ $(top_srcdir)/auxdir/x_ac_ofed.m4 \ $(top_srcdir)/auxdir/x_ac_pam.m4 \ $(top_srcdir)/auxdir/x_ac_printf_null.m4 \ $(top_srcdir)/auxdir/x_ac_ptrace.m4 \ $(top_srcdir)/auxdir/x_ac_readline.m4 \ $(top_srcdir)/auxdir/x_ac_rrdtool.m4 \ $(top_srcdir)/auxdir/x_ac_setpgrp.m4 \ $(top_srcdir)/auxdir/x_ac_setproctitle.m4 \ $(top_srcdir)/auxdir/x_ac_sgi_job.m4 \ $(top_srcdir)/auxdir/x_ac_slurm_ssl.m4 \ $(top_srcdir)/auxdir/x_ac_sun_const.m4 \ $(top_srcdir)/configure.ac am__configure_deps = $(am__aclocal_m4_deps) $(CONFIGURE_DEPENDENCIES) \ $(ACLOCAL_M4) mkinstalldirs = $(install_sh) -d CONFIG_HEADER = $(top_builddir)/config.h $(top_builddir)/slurm/slurm.h CONFIG_CLEAN_FILES = CONFIG_CLEAN_VPATH_FILES = AM_V_P = $(am__v_P_@AM_V@) am__v_P_ = $(am__v_P_@AM_DEFAULT_V@) am__v_P_0 
= false am__v_P_1 = : AM_V_GEN = $(am__v_GEN_@AM_V@) am__v_GEN_ = $(am__v_GEN_@AM_DEFAULT_V@) am__v_GEN_0 = @echo " GEN " $@; am__v_GEN_1 = AM_V_at = $(am__v_at_@AM_V@) am__v_at_ = $(am__v_at_@AM_DEFAULT_V@) am__v_at_0 = @ am__v_at_1 = SOURCES = DIST_SOURCES = am__can_run_installinfo = \ case $$AM_UPDATE_INFO_DIR in \ n|no|NO) false;; \ *) (install-info --version) >/dev/null 2>&1;; \ esac am__tagged_files = $(HEADERS) $(SOURCES) $(TAGS_FILES) $(LISP) DISTFILES = $(DIST_COMMON) $(DIST_SOURCES) $(TEXINFOS) $(EXTRA_DIST) ACLOCAL = @ACLOCAL@ AMTAR = @AMTAR@ AM_DEFAULT_VERBOSITY = @AM_DEFAULT_VERBOSITY@ AR = @AR@ AUTHD_CFLAGS = @AUTHD_CFLAGS@ AUTHD_LIBS = @AUTHD_LIBS@ AUTOCONF = @AUTOCONF@ AUTOHEADER = @AUTOHEADER@ AUTOMAKE = @AUTOMAKE@ AWK = @AWK@ BGL_LOADED = @BGL_LOADED@ BGQ_LOADED = @BGQ_LOADED@ BG_INCLUDES = @BG_INCLUDES@ BG_LDFLAGS = @BG_LDFLAGS@ BG_L_P_LOADED = @BG_L_P_LOADED@ BLCR_CPPFLAGS = @BLCR_CPPFLAGS@ BLCR_HOME = @BLCR_HOME@ BLCR_LDFLAGS = @BLCR_LDFLAGS@ BLCR_LIBS = @BLCR_LIBS@ BLUEGENE_LOADED = @BLUEGENE_LOADED@ CC = @CC@ CCDEPMODE = @CCDEPMODE@ CFLAGS = @CFLAGS@ CHECK_CFLAGS = @CHECK_CFLAGS@ CHECK_LIBS = @CHECK_LIBS@ CMD_LDFLAGS = @CMD_LDFLAGS@ CPP = @CPP@ CPPFLAGS = @CPPFLAGS@ CRAY_JOB_CPPFLAGS = @CRAY_JOB_CPPFLAGS@ CRAY_JOB_LDFLAGS = @CRAY_JOB_LDFLAGS@ CRAY_SELECT_CPPFLAGS = @CRAY_SELECT_CPPFLAGS@ CRAY_SELECT_LDFLAGS = @CRAY_SELECT_LDFLAGS@ CRAY_SWITCH_CPPFLAGS = @CRAY_SWITCH_CPPFLAGS@ CRAY_SWITCH_LDFLAGS = @CRAY_SWITCH_LDFLAGS@ CRAY_TASK_CPPFLAGS = @CRAY_TASK_CPPFLAGS@ CRAY_TASK_LDFLAGS = @CRAY_TASK_LDFLAGS@ CXX = @CXX@ CXXCPP = @CXXCPP@ CXXDEPMODE = @CXXDEPMODE@ CXXFLAGS = @CXXFLAGS@ CYGPATH_W = @CYGPATH_W@ DATAWARP_CPPFLAGS = @DATAWARP_CPPFLAGS@ DATAWARP_LDFLAGS = @DATAWARP_LDFLAGS@ DEFS = @DEFS@ DEPDIR = @DEPDIR@ DLLTOOL = @DLLTOOL@ DL_LIBS = @DL_LIBS@ DSYMUTIL = @DSYMUTIL@ DUMPBIN = @DUMPBIN@ ECHO_C = @ECHO_C@ ECHO_N = @ECHO_N@ ECHO_T = @ECHO_T@ EGREP = @EGREP@ EXEEXT = @EXEEXT@ FGREP = @FGREP@ FREEIPMI_CPPFLAGS = @FREEIPMI_CPPFLAGS@ 
FREEIPMI_LDFLAGS = @FREEIPMI_LDFLAGS@ FREEIPMI_LIBS = @FREEIPMI_LIBS@ GLIB_CFLAGS = @GLIB_CFLAGS@ GLIB_COMPILE_RESOURCES = @GLIB_COMPILE_RESOURCES@ GLIB_GENMARSHAL = @GLIB_GENMARSHAL@ GLIB_LIBS = @GLIB_LIBS@ GLIB_MKENUMS = @GLIB_MKENUMS@ GOBJECT_QUERY = @GOBJECT_QUERY@ GREP = @GREP@ GTK_CFLAGS = @GTK_CFLAGS@ GTK_LIBS = @GTK_LIBS@ H5CC = @H5CC@ H5FC = @H5FC@ HAVEMYSQLCONFIG = @HAVEMYSQLCONFIG@ HAVE_AIX = @HAVE_AIX@ HAVE_MAN2HTML = @HAVE_MAN2HTML@ HAVE_NRT = @HAVE_NRT@ HAVE_OPENSSL = @HAVE_OPENSSL@ HAVE_SOME_CURSES = @HAVE_SOME_CURSES@ HDF5_CC = @HDF5_CC@ HDF5_CFLAGS = @HDF5_CFLAGS@ HDF5_CPPFLAGS = @HDF5_CPPFLAGS@ HDF5_FC = @HDF5_FC@ HDF5_FFLAGS = @HDF5_FFLAGS@ HDF5_FLIBS = @HDF5_FLIBS@ HDF5_LDFLAGS = @HDF5_LDFLAGS@ HDF5_LIBS = @HDF5_LIBS@ HDF5_VERSION = @HDF5_VERSION@ HWLOC_CPPFLAGS = @HWLOC_CPPFLAGS@ HWLOC_LDFLAGS = @HWLOC_LDFLAGS@ HWLOC_LIBS = @HWLOC_LIBS@ INSTALL = @INSTALL@ INSTALL_DATA = @INSTALL_DATA@ INSTALL_PROGRAM = @INSTALL_PROGRAM@ INSTALL_SCRIPT = @INSTALL_SCRIPT@ INSTALL_STRIP_PROGRAM = @INSTALL_STRIP_PROGRAM@ JSON_CPPFLAGS = @JSON_CPPFLAGS@ JSON_LDFLAGS = @JSON_LDFLAGS@ LD = @LD@ LDFLAGS = @LDFLAGS@ LIBCURL = @LIBCURL@ LIBCURL_CPPFLAGS = @LIBCURL_CPPFLAGS@ LIBOBJS = @LIBOBJS@ LIBS = @LIBS@ LIBTOOL = @LIBTOOL@ LIB_LDFLAGS = @LIB_LDFLAGS@ LIPO = @LIPO@ LN_S = @LN_S@ LTLIBOBJS = @LTLIBOBJS@ MAINT = @MAINT@ MAKEINFO = @MAKEINFO@ MANIFEST_TOOL = @MANIFEST_TOOL@ MKDIR_P = @MKDIR_P@ MUNGE_CPPFLAGS = @MUNGE_CPPFLAGS@ MUNGE_DIR = @MUNGE_DIR@ MUNGE_LDFLAGS = @MUNGE_LDFLAGS@ MUNGE_LIBS = @MUNGE_LIBS@ MYSQL_CFLAGS = @MYSQL_CFLAGS@ MYSQL_LIBS = @MYSQL_LIBS@ NCURSES = @NCURSES@ NETLOC_CPPFLAGS = @NETLOC_CPPFLAGS@ NETLOC_LDFLAGS = @NETLOC_LDFLAGS@ NETLOC_LIBS = @NETLOC_LIBS@ NM = @NM@ NMEDIT = @NMEDIT@ NRT_CPPFLAGS = @NRT_CPPFLAGS@ NUMA_LIBS = @NUMA_LIBS@ OBJDUMP = @OBJDUMP@ OBJEXT = @OBJEXT@ OFED_CPPFLAGS = @OFED_CPPFLAGS@ OFED_LDFLAGS = @OFED_LDFLAGS@ OFED_LIBS = @OFED_LIBS@ OTOOL = @OTOOL@ OTOOL64 = @OTOOL64@ PACKAGE = @PACKAGE@ PACKAGE_BUGREPORT = 
@PACKAGE_BUGREPORT@ PACKAGE_NAME = @PACKAGE_NAME@ PACKAGE_STRING = @PACKAGE_STRING@ PACKAGE_TARNAME = @PACKAGE_TARNAME@ PACKAGE_URL = @PACKAGE_URL@ PACKAGE_VERSION = @PACKAGE_VERSION@ PAM_DIR = @PAM_DIR@ PAM_LIBS = @PAM_LIBS@ PATH_SEPARATOR = @PATH_SEPARATOR@ PKG_CONFIG = @PKG_CONFIG@ PKG_CONFIG_LIBDIR = @PKG_CONFIG_LIBDIR@ PKG_CONFIG_PATH = @PKG_CONFIG_PATH@ PROCTRACKDIR = @PROCTRACKDIR@ PROJECT = @PROJECT@ PTHREAD_CC = @PTHREAD_CC@ PTHREAD_CFLAGS = @PTHREAD_CFLAGS@ PTHREAD_LIBS = @PTHREAD_LIBS@ RANLIB = @RANLIB@ READLINE_LIBS = @READLINE_LIBS@ REAL_BGQ_LOADED = @REAL_BGQ_LOADED@ REAL_BG_L_P_LOADED = @REAL_BG_L_P_LOADED@ RELEASE = @RELEASE@ RRDTOOL_CPPFLAGS = @RRDTOOL_CPPFLAGS@ RRDTOOL_LDFLAGS = @RRDTOOL_LDFLAGS@ RRDTOOL_LIBS = @RRDTOOL_LIBS@ RUNJOB_LDFLAGS = @RUNJOB_LDFLAGS@ SED = @SED@ SEMAPHORE_LIBS = @SEMAPHORE_LIBS@ SEMAPHORE_SOURCES = @SEMAPHORE_SOURCES@ SET_MAKE = @SET_MAKE@ SHELL = @SHELL@ SLEEP_CMD = @SLEEP_CMD@ SLURMCTLD_PORT = @SLURMCTLD_PORT@ SLURMCTLD_PORT_COUNT = @SLURMCTLD_PORT_COUNT@ SLURMDBD_PORT = @SLURMDBD_PORT@ SLURMD_PORT = @SLURMD_PORT@ SLURM_API_AGE = @SLURM_API_AGE@ SLURM_API_CURRENT = @SLURM_API_CURRENT@ SLURM_API_MAJOR = @SLURM_API_MAJOR@ SLURM_API_REVISION = @SLURM_API_REVISION@ SLURM_API_VERSION = @SLURM_API_VERSION@ SLURM_MAJOR = @SLURM_MAJOR@ SLURM_MICRO = @SLURM_MICRO@ SLURM_MINOR = @SLURM_MINOR@ SLURM_PREFIX = @SLURM_PREFIX@ SLURM_VERSION_NUMBER = @SLURM_VERSION_NUMBER@ SLURM_VERSION_STRING = @SLURM_VERSION_STRING@ SO_LDFLAGS = @SO_LDFLAGS@ SSL_CPPFLAGS = @SSL_CPPFLAGS@ SSL_LDFLAGS = @SSL_LDFLAGS@ SSL_LIBS = @SSL_LIBS@ STRIP = @STRIP@ SUCMD = @SUCMD@ UTIL_LIBS = @UTIL_LIBS@ VERSION = @VERSION@ _libcurl_config = @_libcurl_config@ abs_builddir = @abs_builddir@ abs_srcdir = @abs_srcdir@ abs_top_builddir = @abs_top_builddir@ abs_top_srcdir = @abs_top_srcdir@ ac_ct_AR = @ac_ct_AR@ ac_ct_CC = @ac_ct_CC@ ac_ct_CXX = @ac_ct_CXX@ ac_ct_DUMPBIN = @ac_ct_DUMPBIN@ ac_have_man2html = @ac_have_man2html@ am__include = @am__include@ am__leading_dot 
= @am__leading_dot@ am__quote = @am__quote@ am__tar = @am__tar@ am__untar = @am__untar@ ax_pthread_config = @ax_pthread_config@ bindir = @bindir@ build = @build@ build_alias = @build_alias@ build_cpu = @build_cpu@ build_os = @build_os@ build_vendor = @build_vendor@ builddir = @builddir@ datadir = @datadir@ datarootdir = @datarootdir@ docdir = @docdir@ dvidir = @dvidir@ exec_prefix = @exec_prefix@ host = @host@ host_alias = @host_alias@ host_cpu = @host_cpu@ host_os = @host_os@ host_vendor = @host_vendor@ htmldir = @htmldir@ includedir = @includedir@ infodir = @infodir@ install_sh = @install_sh@ libdir = @libdir@ libexecdir = @libexecdir@ localedir = @localedir@ localstatedir = @localstatedir@ lua_CFLAGS = @lua_CFLAGS@ lua_LIBS = @lua_LIBS@ mandir = @mandir@ mkdir_p = @mkdir_p@ oldincludedir = @oldincludedir@ pdfdir = @pdfdir@ prefix = @prefix@ program_transform_name = @program_transform_name@ psdir = @psdir@ sbindir = @sbindir@ sharedstatedir = @sharedstatedir@ srcdir = @srcdir@ sysconfdir = @sysconfdir@ target = @target@ target_alias = @target_alias@ target_cpu = @target_cpu@ target_os = @target_os@ target_vendor = @target_vendor@ top_build_prefix = @top_build_prefix@ top_builddir = @top_builddir@ top_srcdir = @top_srcdir@ EXTRA_DIST = \ gres.conf.j2 \ slurm.conf.j2 \ slurmconfgen_smw.py all: all-am .SUFFIXES: $(srcdir)/Makefile.in: @MAINTAINER_MODE_TRUE@ $(srcdir)/Makefile.am $(am__configure_deps) @for dep in $?; do \ case '$(am__configure_deps)' in \ *$$dep*) \ ( cd $(top_builddir) && $(MAKE) $(AM_MAKEFLAGS) am--refresh ) \ && { if test -f $@; then exit 0; else break; fi; }; \ exit 1;; \ esac; \ done; \ echo ' cd $(top_srcdir) && $(AUTOMAKE) --gnu contribs/cray/csm/Makefile'; \ $(am__cd) $(top_srcdir) && \ $(AUTOMAKE) --gnu contribs/cray/csm/Makefile .PRECIOUS: Makefile Makefile: $(srcdir)/Makefile.in $(top_builddir)/config.status @case '$?' 
in \ *config.status*) \ cd $(top_builddir) && $(MAKE) $(AM_MAKEFLAGS) am--refresh;; \ *) \ echo ' cd $(top_builddir) && $(SHELL) ./config.status $(subdir)/$@ $(am__depfiles_maybe)'; \ cd $(top_builddir) && $(SHELL) ./config.status $(subdir)/$@ $(am__depfiles_maybe);; \ esac; $(top_builddir)/config.status: $(top_srcdir)/configure $(CONFIG_STATUS_DEPENDENCIES) cd $(top_builddir) && $(MAKE) $(AM_MAKEFLAGS) am--refresh $(top_srcdir)/configure: @MAINTAINER_MODE_TRUE@ $(am__configure_deps) cd $(top_builddir) && $(MAKE) $(AM_MAKEFLAGS) am--refresh $(ACLOCAL_M4): @MAINTAINER_MODE_TRUE@ $(am__aclocal_m4_deps) cd $(top_builddir) && $(MAKE) $(AM_MAKEFLAGS) am--refresh $(am__aclocal_m4_deps): mostlyclean-libtool: -rm -f *.lo clean-libtool: -rm -rf .libs _libs tags TAGS: ctags CTAGS: cscope cscopelist: distdir: $(DISTFILES) @srcdirstrip=`echo "$(srcdir)" | sed 's/[].[^$$\\*]/\\\\&/g'`; \ topsrcdirstrip=`echo "$(top_srcdir)" | sed 's/[].[^$$\\*]/\\\\&/g'`; \ list='$(DISTFILES)'; \ dist_files=`for file in $$list; do echo $$file; done | \ sed -e "s|^$$srcdirstrip/||;t" \ -e "s|^$$topsrcdirstrip/|$(top_builddir)/|;t"`; \ case $$dist_files in \ */*) $(MKDIR_P) `echo "$$dist_files" | \ sed '/\//!d;s|^|$(distdir)/|;s,/[^/]*$$,,' | \ sort -u` ;; \ esac; \ for file in $$dist_files; do \ if test -f $$file || test -d $$file; then d=.; else d=$(srcdir); fi; \ if test -d $$d/$$file; then \ dir=`echo "/$$file" | sed -e 's,/[^/]*$$,,'`; \ if test -d "$(distdir)/$$file"; then \ find "$(distdir)/$$file" -type d ! -perm -700 -exec chmod u+rwx {} \;; \ fi; \ if test -d $(srcdir)/$$file && test $$d != $(srcdir); then \ cp -fpR $(srcdir)/$$file "$(distdir)$$dir" || exit 1; \ find "$(distdir)/$$file" -type d ! 
-perm -700 -exec chmod u+rwx {} \;; \ fi; \ cp -fpR $$d/$$file "$(distdir)$$dir" || exit 1; \ else \ test -f "$(distdir)/$$file" \ || cp -p $$d/$$file "$(distdir)/$$file" \ || exit 1; \ fi; \ done check-am: all-am check: check-am all-am: Makefile installdirs: install: install-am install-exec: install-exec-am install-data: install-data-am uninstall: uninstall-am install-am: all-am @$(MAKE) $(AM_MAKEFLAGS) install-exec-am install-data-am installcheck: installcheck-am install-strip: if test -z '$(STRIP)'; then \ $(MAKE) $(AM_MAKEFLAGS) INSTALL_PROGRAM="$(INSTALL_STRIP_PROGRAM)" \ install_sh_PROGRAM="$(INSTALL_STRIP_PROGRAM)" INSTALL_STRIP_FLAG=-s \ install; \ else \ $(MAKE) $(AM_MAKEFLAGS) INSTALL_PROGRAM="$(INSTALL_STRIP_PROGRAM)" \ install_sh_PROGRAM="$(INSTALL_STRIP_PROGRAM)" INSTALL_STRIP_FLAG=-s \ "INSTALL_PROGRAM_ENV=STRIPPROG='$(STRIP)'" install; \ fi mostlyclean-generic: clean-generic: distclean-generic: -test -z "$(CONFIG_CLEAN_FILES)" || rm -f $(CONFIG_CLEAN_FILES) -test . = "$(srcdir)" || test -z "$(CONFIG_CLEAN_VPATH_FILES)" || rm -f $(CONFIG_CLEAN_VPATH_FILES) maintainer-clean-generic: @echo "This command is intended for maintainers to use" @echo "it deletes files that may require special tools to rebuild." 
clean: clean-am clean-am: clean-generic clean-libtool mostlyclean-am distclean: distclean-am -rm -f Makefile distclean-am: clean-am distclean-generic dvi: dvi-am dvi-am: html: html-am html-am: info: info-am info-am: install-data-am: install-dvi: install-dvi-am install-dvi-am: install-exec-am: install-html: install-html-am install-html-am: install-info: install-info-am install-info-am: install-man: install-pdf: install-pdf-am install-pdf-am: install-ps: install-ps-am install-ps-am: installcheck-am: maintainer-clean: maintainer-clean-am -rm -f Makefile maintainer-clean-am: distclean-am maintainer-clean-generic mostlyclean: mostlyclean-am mostlyclean-am: mostlyclean-generic mostlyclean-libtool pdf: pdf-am pdf-am: ps: ps-am ps-am: uninstall-am: .MAKE: install-am install-strip .PHONY: all all-am check check-am clean clean-generic clean-libtool \ cscopelist-am ctags-am distclean distclean-generic \ distclean-libtool distdir dvi dvi-am html html-am info info-am \ install install-am install-data install-data-am install-dvi \ install-dvi-am install-exec install-exec-am install-html \ install-html-am install-info install-info-am install-man \ install-pdf install-pdf-am install-ps install-ps-am \ install-strip installcheck installcheck-am installdirs \ maintainer-clean maintainer-clean-generic mostlyclean \ mostlyclean-generic mostlyclean-libtool pdf pdf-am ps ps-am \ tags-am uninstall uninstall-am # Tell versions [3.59,3.63) of GNU make to not export all variables. # Otherwise a system limit (for SysV at least) may be exceeded. .NOEXPORT: slurm-slurm-15-08-7-1/contribs/cray/csm/gres.conf.j2000066400000000000000000000005751265000126300220200ustar00rootroot00000000000000# # (c) Copyright 2015 Cray Inc. All Rights Reserved. # # This file was generated by {{script}} on {{date}}. # # See the gres.conf man page for more information. 
# {% for node in nodes.values() %}{% for gres in node.Gres %}NodeName={{ node.NodeName }} Name={{ gres.Name }} {% if gres.File %}File={{ gres.File }}{% else %}Count={{ gres.Count }}{% endif %} {% endfor %}{% endfor %} slurm-slurm-15-08-7-1/contribs/cray/csm/slurm.conf.j2000066400000000000000000000033531265000126300222170ustar00rootroot00000000000000# # (c) Copyright 2015 Cray Inc. All Rights Reserved. # # This file was generated by {{ script }} on {{ date }}. # # See the slurm.conf man page for more information. # ControlMachine={{ controlmachine }} AuthType=auth/munge CoreSpecPlugin=cray CryptoType=crypto/munge GresTypes={{ grestypes|join(',') }} JobContainerType=job_container/cncu JobSubmitPlugins=cray KillOnBadExit=1 MpiParams=ports=20000-32767 ProctrackType=proctrack/cray # Some programming models require unlimited virtual memory PropagateResourceLimitsExcept=AS # ReturnToService 2 will let rebooted nodes come back up immediately ReturnToService=2 SlurmctldPidFile=/var/spool/slurm/slurmctld.pid SlurmdPidFile=/var/spool/slurmd/slurmd.pid SlurmdSpoolDir=/var/spool/slurmd SlurmUser=root StateSaveLocation=/var/spool/slurm SwitchType=switch/cray TaskPlugin=task/affinity,task/cgroup,task/cray # # # SCHEDULING DefMemPerCPU={{ defmem }} FastSchedule=0 MaxMemPerCPU={{ maxmem }} SchedulerType=sched/backfill SelectType=select/cray SelectTypeParameters=CR_CORE_Memory,other_cons_res # # # LOGGING AND ACCOUNTING JobCompType=jobcomp/none JobAcctGatherFrequency=30 JobAcctGatherType=jobacct_gather/linux SlurmctldDebug=info SlurmctldLogFile=/var/spool/slurm/slurmctld.log SlurmdDebug=info SlurmdLogFile=/var/spool/slurmd/%h.log # # # POWER SAVE SUPPORT FOR IDLE NODES (optional) CpuFreqDef=performance # # # COMPUTE NODES {% for node in nodes.values() %}NodeName={{ node.NodeName }} Sockets={{ node.Sockets }} CoresPerSocket={{ node.CoresPerSocket }} ThreadsPerCore={{ node.ThreadsPerCore }} Gres={{ node.Gres|join(',') }} # RealMemory={{ node.RealMemory }} {% endfor %}# # # PARTITIONS 
PartitionName=workq Nodes={{ nodelist }} Shared=EXCLUSIVE Priority=1 Default=YES DefaultTime=60 MaxTime=24:00:00 State=UP slurm-slurm-15-08-7-1/contribs/cray/csm/slurmconfgen_smw.py000066400000000000000000000217431265000126300236410ustar00rootroot00000000000000#!/usr/bin/env python # # Copyright 2015 Cray Inc. All Rights Reserved """ A script to generate slurm.conf and gres.conf for a Cray system on the smw """ import argparse import os import subprocess import sys import time import xml.etree.ElementTree from jinja2 import Environment, FileSystemLoader NAME = 'slurmconfgen_smw.py' class Gres(object): """ A class for generic resources """ def __init__(self, name, count): """ Initialize a gres with the given name and count """ self.Name = name self.Count = count if name == 'gpu': if count == 1: self.File = '/dev/nvidia0' else: self.File = '/dev/nvidia[0-{0}]'.format(count - 1) elif name == 'mic': if count == 1: self.File = '/dev/mic0' else: self.File = '/dev/mic[0-{0}]'.format(count - 1) else: self.File = None def __eq__(self, other): """ Check if two gres are equal """ return (self.Name == other.Name and self.Count == other.Count and self.File == other.File) def __str__(self): """ Return a gres string suitable for slurm.conf """ if self.Count == 1: return self.Name else: return '{0}:{1}'.format(self.Name, self.Count) def parse_args(): """ Parse arguments """ parser = argparse.ArgumentParser( description='Generate slurm.conf and gres.conf on a Cray smw') parser.add_argument('controlmachine', help='Hostname of the node to run slurmctld') parser.add_argument('partition', help='Partition to generate slurm.conf for') parser.add_argument('-t', '--templatedir', help='Directory containing j2 templates', default='.') parser.add_argument('-o', '--output', help='Output directory for slurm.conf and gres.conf', default='.') return parser.parse_args() def get_inventory(partition): """ Gets a hardware inventory for the given partition. 
Returns the node dictionary """ print 'Gathering hardware inventory...' nodes = {} # Get an inventory and parse the XML xthwinv = subprocess.Popen(['/opt/cray/hss/default/bin/xthwinv', '-X', partition], stdout=subprocess.PIPE) inventory, _ = xthwinv.communicate() inventoryxml = xml.etree.ElementTree.fromstring(inventory) # Loop through all modules for modulexml in inventoryxml.findall('module_list/module'): # Skip service nodes board_type = modulexml.find('board_type').text if board_type == '10': continue elif board_type != '13': print 'WARNING: board type {} unknown'.format(board_type) # Loop through nodes in this module for nodexml in modulexml.findall('node_list/node'): nid = int(nodexml.find('nic').text) cores = int(nodexml.find('cores').text) sockets = int(nodexml.find('sockets').text) memory = int(nodexml.find('memory/sizeGB').text) * 1024 node = {'CoresPerSocket': cores / sockets, 'RealMemory': memory, 'Sockets': sockets, 'ThreadsPerCore': int(nodexml.find('hyper_threads').text)} # Determine the generic resources craynetwork = 4 gpu = 0 mic = 0 for accelxml in nodexml.findall( 'accelerator_list/accelerator/type'): if accelxml.text == 'GPU': gpu += 1 elif accelxml.text == 'MIC': mic += 1 craynetwork = 2 else: print ('WARNING: accelerator type {0} unknown' .format(accelxml.text)) node['Gres'] = [Gres('craynetwork', craynetwork)] if gpu > 0: node['Gres'].append(Gres('gpu', gpu)) if mic > 0: node['Gres'].append(Gres('mic', mic)) # Add to output data structures nodes[nid] = node return nodes def compact_nodes(nodes): """ Compacts nodes when possible into single entries """ basenode = None toremove = [] print 'Compacting node configuration...' 
for curnid in sorted(nodes): if basenode is None: basenode = nodes[curnid] nidlist = [int(curnid)] continue curnode = nodes[curnid] if (curnode['CoresPerSocket'] == basenode['CoresPerSocket'] and curnode['Gres'] == basenode['Gres'] and curnode['RealMemory'] == basenode['RealMemory'] and curnode['Sockets'] == basenode['Sockets'] and curnode['ThreadsPerCore'] == basenode['ThreadsPerCore']): # Append this nid to the nidlist nidlist.append(int(curnid)) toremove.append(curnid) else: # We can't consolidate, move on basenode['NodeName'] = rli_compress(nidlist) basenode = curnode nidlist = [int(curnid)] basenode['NodeName'] = rli_compress(nidlist) # Remove nodes we've consolidated for nid in toremove: del nodes[nid] def scale_mem(mem): """ Scale memory values back since available memory is lower than total memory """ return mem * 98 / 100 def get_mem_per_cpu(nodes): """ Given the node configuration, determine the default memory per cpu (mem)/(cores) and max memory per cpu, returned as a tuple """ defmem = 0 maxmem = 0 for node in nodes.values(): if node['RealMemory'] > maxmem: maxmem = node['RealMemory'] mem_per_thread = (node['RealMemory'] / node['Sockets'] / node['CoresPerSocket'] / node['ThreadsPerCore']) if defmem == 0 or mem_per_thread < defmem: defmem = mem_per_thread return (scale_mem(defmem), scale_mem(maxmem)) def range_str(range_start, range_end, field_width): """ Returns a string representation of the given range using the given field width """ if range_end < range_start: raise Exception('Range end before range start') elif range_start == range_end: return '{0:0{1}d}'.format(range_end, field_width) elif range_start + 1 == range_end: return '{0:0{2}d},{1:0{2}d}'.format(range_start, range_end, field_width) return '{0:0{2}d}-{1:0{2}d}'.format(range_start, range_end, field_width) def rli_compress(nidlist): """ Given a list of node ids, rli compress them into a slurm hostlist (ex. 
list [1,2,3,5] becomes string nid0000[1-3,5]) """ # Determine number of digits in the highest nid number numdigits = len(str(max(nidlist))) if numdigits > 5: raise Exception('Nid number too high') range_start = nidlist[0] range_end = nidlist[0] ranges = [] for nid in nidlist: # If nid too large, append to rli and start fresh if nid > range_end + 1 or nid < range_end: ranges.append(range_str(range_start, range_end, numdigits)) range_start = nid range_end = nid # Append the last range ranges.append(range_str(range_start, range_end, numdigits)) return 'nid{0}[{1}]'.format('0' * (5 - numdigits), ','.join(ranges)) def get_gres_types(nodes): """ Get a set of gres types """ grestypes = set() for node in nodes.values(): grestypes.update([gres.Name for gres in node['Gres']]) return grestypes def main(): """ Get hardware info, format it, and write to slurm.conf and gres.conf """ args = parse_args() # Get info from xthwinv and xtcli nodes = get_inventory(args.partition) nodelist = rli_compress([int(nid) for nid in nodes]) compact_nodes(nodes) defmem, maxmem = get_mem_per_cpu(nodes) # Write files from templates jinjaenv = Environment(loader=FileSystemLoader(args.templatedir)) conffile = os.path.join(args.output, 'slurm.conf') print 'Writing Slurm configuration to {0}...'.format(conffile) with open(conffile, 'w') as outfile: outfile.write(jinjaenv.get_template('slurm.conf.j2').render( script=sys.argv[0], date=time.asctime(), controlmachine=args.controlmachine, grestypes=get_gres_types(nodes), defmem=defmem, maxmem=maxmem, nodes=nodes, nodelist=nodelist)) gresfilename = os.path.join(args.output, 'gres.conf') print 'Writing gres configuration to {0}...'.format(gresfilename) with open(gresfilename, 'w') as gresfile: gresfile.write(jinjaenv.get_template('gres.conf.j2').render( script=sys.argv[0], date=time.asctime(), nodes=nodes)) print 'Done.' 
if __name__ == "__main__": main() slurm-slurm-15-08-7-1/contribs/cray/etc_sysconfig_slurm000066400000000000000000000023531265000126300231150ustar00rootroot00000000000000# # /etc/sysconfig/slurm for Cray XT/XE systems # # Cray is SuSe-based, which means that ulimits from /etc/security/limits.conf # will get picked up any time slurm is restarted e.g. via pdsh/ssh. Since slurm # respects configured limits, this can mean that for instance batch jobs get # killed as a result of configuring CPU time limits. Set sane start limits here. # # Values were taken from pam-1.1.2 Debian package ulimit -t unlimited # max amount of CPU time in seconds ulimit -d unlimited # max size of a process's data segment in KB ulimit -l 64 # max memory size (KB) that may be locked into memory ulimit -m unlimited # max RSS size in KB ulimit -u unlimited # max number of processes ulimit -f unlimited # max size of files written by process and children ulimit -x unlimited # max number of file locks ulimit -i 16382 # max number of pending signals ulimit -q 819200 # max number of bytes in POSIX message queues ulimit -Sc 0 # max size of core files (soft limit) ulimit -Hc unlimited # max size of core files (hard limit) ulimit -Ss 8192 # max stack size (soft limit) ulimit -Hs unlimited # max stack size (hard limit) ulimit -n 1024 # max number of open file descriptors ulimit -v unlimited # max size of virtual memory (address space) in KB slurm-slurm-15-08-7-1/contribs/cray/libalps_test_programs.tar.gz000066400000000000000000012417121265000126300246430ustar00rootroot00000000000000
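For reference, the nid-to-hostlist compression performed by `rli_compress()` in `slurmconfgen_smw.py` above can be sketched as a standalone function. This is a simplified re-implementation for illustration only, not the script's exact code; the function name `compress_nids` is an assumption of this sketch.

```python
def compress_nids(nidlist):
    """Compress a sorted list of nids into a Slurm-style hostlist,
    e.g. [1, 2, 3, 5] -> 'nid0000[1-3,5]' (mirrors rli_compress above)."""
    width = len(str(max(nidlist)))          # digits in the largest nid
    fmt = '{0:0%dd}' % width                # zero-pad every nid to that width
    ranges = []
    start = end = nidlist[0]
    for nid in nidlist[1:] + [None]:        # trailing None flushes the last range
        if nid is not None and nid == end + 1:
            end = nid                       # extend the current run
            continue
        ranges.append(fmt.format(start) if start == end
                      else fmt.format(start) + '-' + fmt.format(end))
        start = end = nid
    # Pad the 'nid' prefix so nid numbers are always 5 digits in total
    return 'nid{0}[{1}]'.format('0' * (5 - width), ','.join(ranges))

print(compress_nids([1, 2, 3, 5]))          # nid0000[1-3,5]
```

Slurm commands such as `scontrol show hostnames` accept hostlists in this bracketed form, which is why the generator emits `NodeName=` and `Nodes=` values this way.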
MSj)DK<MLC}PF) $HFRH.]oA:aEr:y9yƑ͙ > \vvSja06h(1@mǵp*퍘{r̹7:MWS]ݷ[Aݿk_?5\^~_dg^\aM=0tP/2h3xgVV%Τ&'mC' Ac0?Ztλ&kوzĚ0f3χt /kSӚ6=Vڈ<Gpmζeť J&ߧụg989bbua |}4QRiLvaf&ŽVQ, Xg7.᭫ry?-;4a*O<{oA߃-mi58V"a>]L(F~Fh!jcVj$cY<1p|elW1dv;8,S52LVF*m`"oBA݂遾E?q.ݖ3uqLjcyDwI ,=5c4pUb:UF@ƦnQzSxPHu)Vͬ!%V/VPQNeeZru\핳:زWHF5ʙW VǷ8c!ucQD` _1z\;_ZZބ[KGO/onQ*$eUDqܲ^-=;*-Rңm?Ep|D&N>ƟWy {Q*9d{PQ;rvZx9WG:ĉZ@ +˓T`:%{ҩ-_~?6 .IF,q:8iXcM[dm6h'FtNFVϹY cxޭcc `^|-QwE.߃Ȱdz}/`.1ێZ\=>KL*>MԔ,`h!nA-Su ϯ.)S,Hph;|j H,#xS> X•'Șvs,7(Ïݞژ蓄J2I1SshJGํ'q; o 5KG| ^^,BN[Ҷ\d2Y/Y*V(cTbؙVm\Yp au;13-?en3'dPPȞ_kNkˊnص~euUcdB܀Txa`hH"+ƛ1,5Pqi5=&aBnpyk+.e pr3rU$q'bZd X>EF複0sIZ<~xN{Y޵[& 7^\SS|qӀRG_N̶+; Q* # w, iLL" %D{> ūDVܤ|| OUK@FEiS,H{=< -ň7XhIJ|sɍܥH 0 $c w_;$Zas1e$ |\Gu0~W^Fpȃٲޒ%"B8(ݕjWö'%BB _QJЯR/!I!vy a K]QϐI29(N 5 }M} D2ckjp6EU`)|NrM6.!}%Ų WA\pnNטWkrph'43+o!9V_LkHG6o]?ͪrpj-8Uf/iBub`ɷKc@6ɫIJhpHKW"AVBČNBx~3ρݱLxVRǃ o'@Kj_ƒ*Gw ᧠RP܃&_*BS_VvZðaX,zAAЬ@hƯiD0AWvJZBE2naZYC--|Gb't J:p1P^xUJ]ŜX4F5a &'-Plb`XjU;mvMM7HMun3/%5 G7:1ubW EC}sPh=Cַq15yi@h-ZV-ƍ!Yd.阆`CӅ̓tZUUծmkfgk&Z *e ͌ue>ǹ,u\`KL-g*hK37H+zn<3nY~U7}R{gwqX׭\ܾ9};ҟn6KG^|{bDҮ҆ݐk!-U44+BqկM*BJXvYKш+f%fuxXXOAW xm-RouSѭF.ۙA 1?#vƤ孭Ν9?'+5ȨS{?xdoRJC\FQm;Gw,dwݰ3mHv6ÙݡTQ twVX"q nR6%6ީ1;B QGev=%nOb4Ɔx0BRBbSW)EluBx6TVU@`PMK;խaWLa>/$.)&2xɕ\l1&iMn<I| AЊC TCj/Zz),3I6C'ca <2GpQC# ApܚaIS0FGKŏJ4'Na4Xm 9,*/S哮 7 JB`M+xЄ}Z Av,e]N1;--UIha4+2c÷Ƌhuf[Sj>t>9DUsǗ4Bl* 8k䈕AɂE,!L1R E2 P@'[Q[@h|[P<[<Po@)mX;kjLh8~8$Qj58R "me΂6uL8~4tQOxkge|d?Z?V߭kAYO&ް*_R\qW]‡6 :쵙-tA<=Єgf A3W<64CJ)N6GRkfߢ bVqw vst<mz0M42-FBo1++NfMTݽ{nY5sۮt m#Ķ {k58jIR}?lpxDB=zZOs_:OLG6-mm:+?+ dV* >qx@[2ơXߓ9A|8!?z{/NLO~vBԂ%oq]X<4ķeq4ڗKৢL|<NDl.MD)| Cfh\.kY#Pz=糦J\T= W[i2߹Fȓwh\([U`G @]=b(!līյkV^m,5^3Z@N`tV)tBeb bJ:#r?:_,^|\ri?_]޶WhnmjYs9yPVA4{f$k!t H>L&:m7lYgHDǀkYFC +{-k1rıd =MD`аZ +>It:ɺ0>$|2K,4ˣJE:ON"x2b%5B0>M:Xq%dё!`f!N:RnpzsHE$b4)TD;m6勆C(Ŷ&FjWaFD4 Z>Ȇ[nU~m* pa.!G zC PjK9>X2M)"Ȏ p(N$8*f/MGw]^~VDw~B$[`b>YEx?.ZG4ژJ !{ d/MڠL [{-y ^qZLJEԚJ[uIe+׽&ЬQHx+M Mht y4 _yN`^tҀɇZ Fcqͫਝc*_TOr8L

huO^MzKQT(Vb~B[7&xT2oI' eu\&dX!\0$:HfsA@~B;{F 0mlr!ermr#NriuO؁1^B,TGvml%Ԫ|H&^݈裢1Kŭ3]X=6 d'"VӬ^Ż5Sj=zګxV-o$$@oQUψhD AXgsg:۵S[ FTRV(OAE2#r͊D,Obl,ace*f(djg4b+BԙMo,D|2p FBC )P_'96fݏ yvuccqqǭIxv)*Vfi3e2*ƀ#@=:|xrs, qC 6ĵ#APh'ȗ^+Drp\ƒIi _iVUu(zVUe[C2R8179嗛H* 'Ƈ)J6C} e~bFB]gYa@{IR6 _7$z"-"3]Yht4GDzeSל8؞@V:~h ʚc1;> #W(gjYd\J !(3𽆮win`53UǞ b%Нbm͒_A64h=wV~g^vk )R vN'@#ȹFіa-^*߃7 cсWcpoPT3Ivw7|O5x(6'i{=^GA0aH>1L=wmm:7A6o$ݡT ],ntx`uȷ5 dU巈6#7mWnnhh(-.K,~ŋC{`G*t\ 3ʄ;8m #Z;2?j=W9ߎDA)]A %JVAeh)YSq;l%"kOO>mv_&:|Fw֏R5oi ʌӋHaʘ[ 4] z:Mf/ u)` A ,,`ΨQr`$$f.SlԋrU2NӖ D1O_FOSLT0NEŖ+," 9g.Y;^'2 O t8uFOAR]dM#!*"H=8WUw),UϺ06Rk,sL"bb{\+M95K<]=m3 V>sgPsGƂMɃ(smtc_^};vxz)d%_+yâYJN"4:ۙe݊`,F-֠B]b H\ qpnJA$%$SŃD.H{,"kcD_~DAXqc(8Ҫ b^&A"h.'=V&]A=*[MFstt, 猕Yk=,뇚^j*qUg^W%x=˕ F C. 8_<|kIKգ&j5;8i tM(amq ž1DV?oW `UUZaňjb$C@HđZ^]`YPWKƂbșs^ŵ 5cRl)'Ӏmhd@R-6vB/[ES"zE2T3X.M2M.RSJ3I^M-oiUu[+jB|=2p܌jCknTT@qJ,H/E5^טJ1HU^,ǂ\جyW^Dcr<jg!xUb7ݺt)tc2_FKڬPcn "9ANph+*kv4jdXxLd8MzT"8lQО<sc,r^MXP] B .Y>NDA&r*v ZS j1tUv$@mվjiT\ϝNηUfuv%ȉtyڥ 2T(Ct*:B#dJzjI.p%Z:r{26!h0w<\tvt̻LP9=4,NsATϩco-X |hR[y8KH1u3o; DF&`iLi*dԖܤ˶5U {f{W+[N5ubW߾ֿEoSӠUVh)y)Dl`?~t,םH+E;43*;`?n}uO1pC<` Ii ^*|ɘk(JU5`Y[S V J?{+2$^CIu9w,:S횊Wis*b빴%N葖]Zh-dζQ|]tc_6a99bLɣdT"q l0 3Ha2zOBM".6Uu 5L[ #(KaAQ;/A*UwvxsuYҖOMTԖSpG157<֑PQ>Z.2W<:1)ppdODA>%{59WR{1alr8OY n qr,U'SXs1ŪgZ9hvɓHf- WM2EP::DV*qGٴ9:ksM%ƫ4iuXn.;HkɢU*/o1 (b!QdRhiywZuX*3B!:b /!8[ˬ8q1nvp ݬ΃n˗k*;XN§ܳkǷ'NqS{.hm3[zf4XQ捘 a(ZaЩY3x|7 )o$yťì֍zمZDaʣ>2(]?7~TR&1Ū#IuBc`4*P$hݢWÑl)U#Pp֣ 3ɨzsnD}x/o 7)z4'Rz ,ѽЎf;q:n%[TtEf\bE)l׸bBW Xr׀'[捨CS< ݾhk p$5p'Ҩv8ْM %YΫuuhDZ{LLmᨷkQ[gɍjw"ZN1nj *iRޏcN#V /~i/1˳n(fP K]/!ڍJl8r^29/Wa-S +w3_b%խ<.]QUlF14Ia@Bu8rc|Zuk+֘5a5uݣ6.i\`"ᖁHWg֌Ԙuz DZZ+i#:)A O5Yg qUK2L"jIs$8P-E3:e-*ɞR%HEgG%Tcfi';F϶'#{Uu!3\aT^u K( 1EՈ57]&#@,E \e[0]\梕CgxVz^EQU/{kڪ/So~KRטoy\`Rs]\htX:1, &os۲mr'C9Ug;h01'⁑qnf`_b[Wml]kgep<8u348vGh-J݄~c#/zN;+cQcCu{tc]jTSRƦ̍%:M˩( KֽӏqtYQ5*A:C%n]uŎ(:'gY7M3Hhun1-m5gY@a[D7Q NbK7*.D:42K]U~z5M1=u0 HN tXzsǮm zWqe1i̊pJf0t )]gh|֧VD|PFx}T.F#N`"k6q[";nV(|J@Kmњ .H1 be* m֝հ.i$P^5z\fW,Ք,6)@}_?fK*YÝ4 
vNO;'4.&!PQ^%AÀz/,)B"ͦ`s,}{3S,-m˛n=otAe4ˑv9[єu7M!m&Ϸ!H&3ظ{lC21jnbnj&#q>$%c9Xȑu>Bgvp$aoO룙L,J-.*?N|P_90cl=jVmAa6-kk싿~ yi{zfXf::d sz{ U^q 0g@"j/̼=̪E]1L܎O757MKze;9sk*\Z>ɇڍay-s5*OhJLk=5'w`60 [Գ^6YiڃW̹aKa.fo :WJ=i :Zdi=2j-R H)*ϻ[-jk|-_k."Ϋ&_=JST[ނk -6ͫOw PWgT-kv¦!"JIionT TKY,SQ>{xT;+JG{rwǨfGr&F u+o0=`l3N=]Ƴ[^w \P"מq{Mk9 T $QT;L0 Aaw b(Px=ǦkHa(?@vaݮ`a0(tm}Y0onbإu1W+YeL=lz 6>| RC!|~d63̠zJk(휢]&Unֵ[[Kg]vR妀婨Ȍ-t괺u}w zqH9h͉(Чn 3{rL2yI\AE]>(hĶ]RZΫ;QDž8,TdFq֩mE:1:)׃)* A<]_7\3EySnZX-`B#E3UnhU!Lɐu5]RbMǞİ@uʝXڋ^Şwy$޾'nbKoa?8LXƭio©zOs%ЌGwi^";Qd}y_^ǾMSe,Xi`Tӭ|Z1t`p' E|i,e?'vk[>g⭆stDd#֍&fNJRx{УV w_ P#jCtۛkckAɂE,!,vbvo;Dtw{?X$Ǥ" p-xx&OYCoW,*uCq5&v26д TG:~s#RJPP,!*Ɓ7KFLJAna:yWy)?EI*+1H"fc `k6)*%WPl&PO%zٺ3BPT" gӑ!a%`oimk@o\FeZ'/[VCUd'wt_IdJ*4KT4_>MXTJAG4M#A >EG.4V%EIJǣ154]~rjm07N/5ٴ1LGӈqވcЧbQ9da5Za$\gC2m u2o>NOۿ{w$[MQlXQ,#: Ab8EYh"fw(S[uú^o#畜uVDuӣFӝb?LiM--B9lۊ\+ZdGcG3PҧzT=pEhF5_#"sC`]K15]V(YîZZ.c=]q[  FzC$B" ,G;IQ U)-rxm|@Bk) ͚tM%t ߟ4, ŊB||j)fM A%"T;& !D#[Ȏb)5j]삽*_5D2 t€$𜱄9:4Z[k!M2Ήz hh(5qf \ϵBɇ@Qq|Qpa.]jc!Bկk oF18$jWMPq[aFzv2yv[ҥR[L"4Sݤ99E"^URt8o3 MoQj6 0k ȱqY4GY[Tp|lj/2Isuo UX#[ #u$ZcVʕON'hP㗛@;PD]:-t֥:(]S9JĪlw1.&HÌ @ϭu>:-o*eQ~mʦ/0,^Vx!>>/ Ub+Q|2bմ$Nӯ^Pǵ i<#O&灜ܸ] &РΒT$1qɖI"YgZm. F:I8 NYf ]G=}fV.I닍FUiCY(. `-CDӑpO_,O$^Da9%aՅrҵMO%ɝx0,@၄7r(XsIh[|9lEa.;g0aQ@1ۚ/iin99 b;G2xtu}KSs=4vn= '09_wN6  ͞X8@l"eM}=Z /l]׿s؛ʦͣ > 7H,A&\l8LG`* aNvmHBY}[6<ذA7]==m%a0㺭] ! =U{fLs5ݱx$JEa4PAnL@2wCg&,9[7o NvaX4J[W=G2[qC_ז*!Le q(s<DfHj{$1dGfmDw5:`:P){3*OZUf$هHҬB#}5VאpѢ@ @Q@%T9;jK-x`}\]иh5$E{GM/`VcQTiP2AI{@jG&0Qa9j(t6m]AΛoDTl뫩#=. ~^LrnU5_m/f-5}`p[5Im,N*$Dz@_-> ;EuoY` r[ \"6-9u09GL Qҥ8r(˜H$ij*͖zy2[#m\3=$0N4A' zu&fمM.kv w6~؜\x3X&t]gɟ5sNi̓%c#ɚzܪAݴ  [:' rL/cBJ(`-glݸ#Q8mUA 9M"ٝL9 `\7u]?n DC0inS/ī:~m*[l+;'H8LƝx^Fr܎2ō.#6Xjh@V2\@BuxP\3u\trEt5gKp6Z*R]d0HԌf>`lVآ=?9msEǭ3hg 㝭Բ*\6dS4kei6K1 46γ".mEp]zZ'fy ͭ˖;m˖Əu|Ώ -//B0+ ՜BhG2[/7Fy&*dF0 T4e~"p )QN-U=d'3p&q1 qٶJrN+:aV?lSzƖziKJ9hnQY(.O'[꥔#. 
ORI=BEg?67//B^)" pnEXe䃺TK>6[ Siӄwϴbf&i{5۳`{ڳlqE]QSn3ٞ _,]?A偟|~-5MfdB e`dcT%NRH1tM!cNS1U~qW | N4w@,RG'.,ΣDlЯ;M.ȥ8r+Q#ya=#bgUl6ExU(Ya⦘U ,R:ûpMmtfѢ!j'A{5- c/1)kΥb\yGߨ 8k%0baǯҒ#b(b%g|U6; ~V DB+#U²vӋёIۣ;huSj@lfhGλ1 %7nTvjLkvnm;eY$ڄ]vVtA+cxKA X`-Ê= ?gaH=hd2#(R PeALLP4o^wӼ oOM8IZfwT~#QW#ò5B@6M%FX0/JGfPvM7?NKQv!0Җq{xKȟ"bD~l 5ҎwR6{A4Clዎ{4?ݠD2]֦0Y4߱iG#i$I.'tk[k33hFK~3ZiFbͭXC\-$udJi(n`cHӍ IҚ-ҶaيVsko:P?`Ciomi 5ͤ AZQB$ UŌA7؛*#elnR[]2|aO-ckې EԨQ(a1rMqHr4^0*w ); 83he-O.5vOVc8#k] E̞}a6s!C?]A(6=2" ջx+Xleӊ.Jwo>j-f3f My[镨Tz]vKIu08 Bk CҥVRtڑZk,x3F`**i@+1O~DPq a _A4C(x7Y$]'`UeDmzyrTm0g{%(2[]kz0_ M?±P@Riד.dzlp{C $ufz^дȌ+u!jX}*S@01Hj ͦ=6Š!I`R{j]$AfaJO_[g\} H@*VJJˆQ!ݭf*[Zӽ}yW0ٶmEJ6mJׇ\ dzDX誑E#n.\eAonL 'M@5\RD`b졪I?ʀl窋(c?ښZm+ZZ_rϳ[xUc6{@>E)t 轗zshom'ª߾q,ꎴ(Slnh5ƦMfsS{ۊM1 f1(ˆ3pim\ #Ҽ(f "C#:3dai}۷h:'PjEf`e֬+wp˄o͓H d eDfElhx2HABxuG -24dNt]fKI,AZ_,m-f(v ٜ0"х 6RQqkhY-J XGͪ xz$g2k0CMk":~%~ *ۍӻec'1UC͍F"XqsH*:uj Ga GBi Azy WG#:焱^D0RcKD Ęz8L3'Be3\ӌ.?=BPEbi'fJυs:3Wy_[KsKkk2|g7?+i^Ѵyٿ565j NBm}fh&)_ʇMPoEa'kx|\p<6@HlxجOhA^J{AE~ g)#C$ϭuAN|b(ǫt߁U/x&/0q QpommkYXWX~{[gϵ%>k "I+M( 90:l M",,鰹til[ :lw60*U>q~}6wkW"-Cgs>*ڷUKHgj|/#KwH~9;;lte7@9F?>\ޖ5#dY~y[C:BK6n1np>_zVmuhb[K)!rfy.EWA EZo1~j]1ˣ~>|L|_ }u)3 ZUv9ݎWdx#jQϷS_AJ+Þ}#\E/ '8ޑbv9³b?QC>wџ騏\Fﵧ_/?uI"[83(' k.Krjp)l6of:t(޶RצQ|HLHf6<ug> ?d nWږf Ǐ55 Dh n=206o;ˌ"F:"H'Xn 0>cr?njDc-(0cɱhX؅( >pg p6*Y3#)BCQ( PmbL3Ḧ́!*-CT .~xw*={A wxϻd {At * L}8H;,f0 %"&7 4e%ꗈo"}=ާ4˧AE- ՅE e_+Cx5"#<w𹆝9h9-\矎k~-nek>W KZxnjt]|V^u>C wiZx^ou:u~>_ik{K;hwk {K{pO L @ \ P B D ~P 7gp>/kዴ#Zxkǵ-nAp])—hZR-|^Z.j~I oWj-ZxުwiO\OXin(I_/\A)e?Si>,G_я$! 
H$WЏ$4OI{~1#I>Hjvя$h߅~$-GMHRVv#)9j?~$!G[_I'SF_@'?/迈O还Ov_B' _C'/ RZO _N' %?GGoR}_E'_E'WпOϡOOпOE Zj?GO ~ u~~ߎj?So|/?&ߵ}[9h*Jg3}=ۺyg K) egםBX@ۏ |UA“Ǧ&G0ǻ鼯flN>ZvVl-="ٽ{)~q#{Rv.~]5:xqNLu\>ykjSwD_SuMeOt{BWEݓ]ȿc?> \_2N@MXTꄑ`\jg?- ]ʽBk.1lZ3$"nXPPP( ByPP~u>݅(a~ c2͋1F03wp!CC;[@_k$ O?-|h;MXL$gr?6aI~,}B,9N+{~Vϊȗ+L{03'h!VUÜSN *vbd- ބHڳ w$^{*sWaqXM]"|'ܓzG>YDUĻ<IMzya˳  q4n!.TzUpJ/=9#8館܋܃7jiP_VZ꾷-< F|WE'}#.:D'~Ӕ+M5cHB*r=5= '@ÏoWļ%qw\&mdoQɮԒdR~ Lcq9{/8{TB 5յ]\݃lKn@^梧xw; K<~.gpw2hс³*{_?g:oD ,D%*(F~Lo@vm?ܾLTsϾ8XD㲅Q 'Om~qcP?\|ĄURHkJfWyl*L8WOLu6"mOOpʟ:Bd)V[T?צb?{L+=Nk"tzP:]@p.%(9gTߋU[ ikxS%1G~ }_)ъQ_9N`܁ͷ <ExOH3s/ut䓏`|{s33p,)JB9ߍc#ψÒ9;}iflv 3ڙ1Aٞ<[}'fg._ HGYP|;1[JT#]/AO0B2׋>1~ȓ=E?qJXy:?o3+މZS?eu'8X[ w$vL$kk#e^>% Ժ~౎#S4+`o#w5uߵo; ZLU vHb)~M—L@+|RQS[U@>CyFjZ|52|$3Gr3W}>?eHdݜ< ~Gr{PBz/ʂGˊ W@gxQu??k17ʣYՊ/K?..z &cs??<3" P5NvO-zk v w]@w=u~nNx="&݆H@<8ngۤ86ދ2="=G2}d^S IkB7 gZS0ZזF_1{ax9v~>> fѾ9V?&Hޔͯsy̑o˹~A`>32gtU9Ї633 nyxQxr~nOOGo>#-;\mɼ o(ӷxoygX g{Vm -2bfץ b`M C&NiFT !Uu}UUݭ1FŝU<\V;JW>YWۨx*do[n`+je Xb&^B簳$Ay_ -|O! [' 12 k+zG>GlJ׺,Go7RdK/Nu~Rh[ ֝C'<֋)ԭGvw/.@;6j^7QP5ꑟ~0T<;>V-kgM$ ^Ud\zW}'NfrS!ˎnbZ{Vu2?OhTVě~`8ЦLQ:%8ЭҾ/CˁojHε"{W f\!*D.4ozThͮ2ݮhJSز9B%/V5/wU*j%>ݱkFÒHPk <;o[U^GD|6:Iwaa3w)V/lMO J ulIG IW`0\O`߯}~#:i'ɓ e\Td0{O≈U/쾘 *GSv?~AlL̴w>0./։V iH0ʢTXЋ{ԐNVhXeD֊w1W<]ehs8uzbcتȮy݇z|vǹv'r//wA9\H`u]gs笹.a)&|Xa'^\$RW<,'@j*l+ rd3y(ib{f|D~3N5W=EA7M>h t/z`U] v^g=`5Mgy< |яۤL0j%ON@rg8|r鶗'j~Œ(LkJyo crS˩_$z﯂N`v#G~/G~*ցb<^?.w>*x-썽`̈W䲏jiLW( I*L";2VԶ|pW5~U1͘:O7k@ih\Wul!Ϣ-zv>5 xx$-칀*c <:K/pgFy -!]>emRe}ӕe:ogN`!5#90̀sOØD~jwy*{"3YT3wQS׾Ϭ{x/"H{wqqq_ zD7>/?inymwoF#;fpx8Πg hnp36'OWփHQF#N=aevunǬ̭7u[ 6CCfT/Ϛ W1fJ:Maȷ\#Q(sޙ Nxhmhnh2ZCmu=&t nѾP*4. 
ӄA|GrMUu0c l"X2!XF"DX$3HrL}'i4 s(U#x-(Nn|%Fw{BCѸ-T=T 0 P@d$Ս/@Y_5vGߜ39fB6ŧ cK~s=?Z%$P).kCck#{8͌$#ZsDP|bS _ᘖmnqlmݬNcW_g4nbTW]כedGKowGw1䐅1ڹ!?I&oJ)" c#uׯY:w˶ݽm}wݽFcԼRw4Q~;]ЀbA/Rd(Iņh܈eyL0Gju&5wBkciE+$\w0̦Q Ԉ 0{a9ǣ;Sx i%!BܖepK~NJV[IGIQUH#N7L*7E8gtGHZi@4?Qv~-15fm$$p@ · ܮ#U26-~-TP s _qak9%$8:HmC!rzGX 3@ma1GƑ04>PM^8N pBcc\Ft&~w ?-`|t Pس0T'%0{YJJx ΍P"bgg χ7fG#s-AVOeك`{B5p^nH 7-ܽ½ 9mT4ܙY5̸ %d* jNJWgǒtN[[<͌h4 =Yb#k6O[(:IG,kcMhhOl4;JaGPI9M|-?q@ȕ&buHo2QO+!JaaKà.pi.P:`xźwi*ۉ l`.Do1jiv3;b-r8\!0vnzrW"k޶JؐL R,R6P">r3)e?N2JC"'OQ;`-w_珼p?E|mq%"}K#\=GK=-KRv7-6o9|upV:kpnjcpڗ?$MpF1?mcL{* 0cp?%yDs~=!"9#\ίyw1wίHoP˷OZg_w&88r~Efw%D=?_]8ܾg];-Pٲa]gzp0*t\jPKX꟔䀬 =J,gג~vcO:<k'CQ6Z=e  V"9sd2){jW2N-9N#@WY{{7#|n/8;Dxwғ:."}wN&8:HfI?g޿s-P;D~>~~ߣ?u__{Կl\a%҃@S$}ln~\DGx,IB&T7-AǨj̺XU `"Q'!w٬UxCNe;>td?7n'O7ɓwɓρ{] n9tցup{ܫ}7o~ ]|? wp7u~]>wpQ_1p/gKց ? {?C~p}2 =@~t>I}' ?L> 3ӐB~p? ܎ ?[]qp>9pO{  w@~p*w~A~p1pO|8po&w7!?s|9p{}7Ǘp'O.gw5}u$on7|{*}-o#'O{vUYy יo#S'm8_[ G.0>`%ƄGtmAGe/Z!;eCuW!..Co//~﹢X~h?oW!mWyz⯬0ӫX[!uy;ػaSoc|M^36~[o3jW,y~=ʿ|6?}Gق~ϫG'g"g3=_V]k: GCQ/<~׾Yy|Lwk?x.9;oCo0ztu Ww{C? =?93j?}~y.oj?k߽;R#y=^/ej|UO<^/eaj?𛟂^o蠷]CA۽{:{<'!zykow`}7x; w/x?_pc7v~&w/yƇf*g= -yϬ79GwRp_|`em`)a|~KV8oRO~P.QG|FuGy-VBWrdq/)o}?pgJ6wp{&&VGDŽp( '}E. Ak+7 7!ܷ =p?&DžE~W?+w(_ ]+^I U1>./ :ƫoopۅC7*Qf;p}"NW½_P?? 
ws؞Ua{92~Liuw w{) + #|dBzGw +~D_#iÆvvSﶠb45hjn^fl4{bI7x%x.ӹLg5οyÐQ7._q^OC='9PϗQL_[%ӣ K js'_W(}2=BsLr+}ezi'`e _A4<)s//b=32$o8-L$̦!D!`(>fFU~4tCh8C"|8kÐ=P.Na*CNE"d_#rI6<'k+E*vC_X M8EH2p)w|c @ACF:q)dcِDdߠfY2|{+YA4{ `ミ_%î0^oybP] oҚ/5&7ְ+.2u{ ݬ+ M]de6~yN@ԡQ݌񼄱[xdzRC2ݍ/ v 0>oak"I~OP Ϗi(JbH6Hoք26⳵Ũ*Oc4\;n &%%56 []>y̐ ڿWI\`elg2$iz0s3iz˻D|yb lM$l }s?!W5䟚+Cnimz7<xP7\SOe^iCL/oc`h h(F.P[dY_[f=N=&ұ <7JA#TT%Z2~[9"#٢A1/{GtN)n=9UAtg"$-[::+/ DNcA8o`rx8ayya󒰼Ǔa섿hRҒ祢;ӡ]Q{xQ"Z0{3nr#?[AZ1e.:$ +fdڂyWxvAvVVG1F%B)}ob_|Ɯҕz3V9K% I>ﯞz$͆?Ze̚'7͒0qA~8ދ06el:Bv| c<adfG4kUaf$;L>`˄ cD+~inUsc:ZB^yBtd~aȇˠ5Ty{"蓎ЧV힭ql@nU}3%(Yַz =6`u>܁A:Za|w`vb7<:|tВ+Rj"WB~t?oK[~ٸo(ĚQxb>ksCo;-!O EªuXl}' bۿYۿY Wq~dk3X6=fy0sfcH cچcLYԚ'OK$HAyXֱOC$H?| ta)JSQJ\cr cz\I W%{SID!~a;rI"~>h!_ݡ7K e^WjFw#J {$[tRoxM ,*R WqWm o|yVuWr~|(.JEi YP2Yo4uoY+D>҇0&6Qg,B_!s^Cj_[q r)pA_+ŘM Fr ȂW$[٦:=%bzBc|gѧ`]qn&4-o|o*gJ\L _ɜv忕;VTþLq">As T7N߿KϷ'~vCΠ)fPBTO1F1_˝`ӾySBHFЧjfE;}X]ЌGͳtMiw $[{y?)^)bHDHwdޕ> 9U 1ڸ`=ԩ Q/La!;__/|C;q>ORUX'-Ve G|SMI;ߗ1dž}1D; G}O)j):i\JSm𡱫bhW.[#7}'& vCwh5Ccr<0.eJjvCkVҫg h.,}-#_йdxD .:K_3J.BY(z% .!ƣk}[( AmH 77XmGM뻕ʓp| e>olRWeOF4O )6L5EXYN,W,jb]51KWM,M,jI5qoYqg99h~g7PWB~gDRTlZB2̿QB 5s.,І),Idܒ˥\.%r\nI罽UR.y_%)KI.)$r\nI}MR.`%)[r%)$rKR>o,7IrKR.I嚤\n-I7MR.rKR.r%)$rMR.II&)[y~IrKR.$rMR.I喤|ޠER.yܒmr.)$rKR>~I喤\5I&)[yΗK\nII\uI&)[rnrw.F飳$Iɬ,$\.yZRuJ4I䇏DTI$w`6K fS%KKH%/rEx:()d;UR.y!D/`֢&)$Rd4_q) 'ΑM_1!)NEi4K~UbH,y9w׼yr,_oK%)qV(Id+\Y$RQ.()W(V% %)J\^QkK `/IS(V QaI%>y H% %)l8,:ftKR.%D^^K%EYK%7EFM̒sxF+sfV-YkOY`l1 I/c,#d[6Ʋlp$#\* 60R].gB"rPOo &uN͍]7(DYʀ$.o&&NOL'`ĵ!nS0&R~ SI0n{q~]meap@I"`AeK+YUK 0TMvKE.(u{`8Jjw;wD`q&?4^s{j gҺ0G?14~_;/.[zN]pп$ 0i)*ƣЙrp_ұXTGZFM&b[\=зfHdsSݷ-ԇGHG+D ;mZI]8bЅs[WsNu9MGk :w}+ש>U]]QDq66yEzݢ2A +nlȍlڹ"1 I$8Qmŗl]??܇u ҷb?L1]ڥĩ~ID 'SpōT%5_UzRx\*+JUZN_ٷ&&3zQ7rOå"#w&;{ 6zLYB`C%/dB 6*­upX7 Q`oF,A?eYAoFoFooѪ2U"(jQTKr8(j`(jA1D+QTyoQ ADQE "(D5QAEorQTPQP AE b5(AEG2U>D5(j`Q FQe(|:(jMQE "ߒ,CQADQX5Q X*E J(j`QAES(|g׸ԗQTy$k|tHEorש|kĥ٢ $栺.4A"3#2UGBG` o򃑔 ČRTQV7hU"FQ'&S]ħR "?pliTǶ=*K!d-]TL^*`OI{ڪnʌ|$FUVuC*%` )t:*<M"nU,L,q#*=LT;%F~F՗tc)8N$ 
)ǺZHfNnw]7"eST.1 Y#Ȁd# &pU&qU|3SNd '7x(u&g)lPTyR d!/i3a|e(5 sz*|5*U@.NQ\ tc{wa ߇398'#ςrl'w-InHQyZ$>(9) O1kz_FQbnXQTy;^*RTm sIhofڦ[lӂkSTѓ\K9zZȿ'D_ʕR s E}B]~9*n븵 s -W G|mp)aUi*F-?VFu|/,E7s# {CQ\ע6xۻn;QVjLQZ5,Sy?"cõo$?mB>zM?2\y{׋>h|䃀=5BLB޺NbQ8<,_f_kVotp{&sdGsR^DMwW40\B=0.SBBWP~#|ިD[]-"fP1zXR%zTU:k\KVVɆцhԳ#';V9Lh1 ^{B= kkD [=ɥ ݛ[ o:=:%J"m-z7E"s\ó*؞|`RQ=.x%v/pY(~Qd}WF0j n<`bM Rc:"^#}QOʁFkɹ I>)V,CIQ9i!3qS!1vl㔇Qh6|SOt_px>pZ_x2DUlY!M}gCZo*gT2hr&D܋?KkUmYSL\kU" U2uGSRPsTdK6a -%KR-%5Zrz:; nrm0 wђ5 v[L\/T;d>fHOĦe̴L0˚S(̷f$w=-r)JNnNLL BL;0fZ3 1gc*T< SyqLL0UgۼN5\G|$0p].[ Тt% aE^RStiL] WT#H6-/ $n!9)$vt\PdHt'3{q:/XAGaX\ڑ7j\μq)Z ]8Աd|(hwsE(Xhi+ht=9,fȂ[b!^qm^vڛ ^ pVb4* `7ZE\X-_*ṆopDrE33ӋN/B9FuzuzqPC^:sVCν^:uzbӋPg,txbӳPC^:se9J*+$3Vj4U9V|΀ ΢3΢9*׹ Mj+ON\$l9mݼ(Jb(J%Q)O)AN(|Q7By(:ʨ8DE qmGQB2ō:u`p0Sq: EYXMd(xg4q07>?G_ 3nNQ[2tәy#5#mQ࿀f3J}D`>¼n>:ftә :yz}|Ѫs&.+f}%}D1$_Y3³t,}ts4>F1Y'se観(5_6%c>IM Q/iy#YNM^y鬄~-or& )?Les%䍏(G&]̭Qo%t}@b2;F =mOAԜdE#t~~&+&bEL66W4զfue^dcU&!>`zgτMW]3܅AUTߎ-\!*.%t5^޷),>w1QsܞIm*GvMwk 7UGu.!|<&kݯMNJx8Vģ">,)bCmTu͚xu).I*xhW@ xSMyY1 xq>K]%Q7]pӑHup(P*#^#ljX80?YaS>1 &Zu{.|kVW&~w&)x ]h`7l)|8f?R'=[WBL1tIf14K|IS? 
0nëHy+ӴlQߧp^m+;QYL?^cN:R_FYN#xvVܽ/$j/[u,[`KשKslӠ]NrGBP#ђk&4 f?]1`}ܾ٣}_V O.k ^/#6\OMg[B#.ᦕRb[B))˺;sֽQmbb{6=׈5=$sfk,Kϒެr ״Hje뙶`-{^Ɨ= ?> `Źe$?ÒX3,͜Q_`xZ)gQ3[m#k{xojΆU7˷PeoԄߎ7wV杲s4oF.F`CNt0;ڸ\JlSK[+l' \5pw/:8 %>-|NQ09`ViSϾOŠzlMס'6u/f4 ozsfw$W4ku΁ RW:T6 fv4U:`ZQیBG~*HSw%@VizW/EDnVwAAiRn.Ycugw֍ c%xEh :p9ElI$x5Wqs8{hP?(8F-=\vF*LJ2ڻVtv.U=jk[h*X3t=r\^Cl<uj ڎu>gfcך5j5[Mۧ sv7?k헤%Siĭq'gFתc:{{o29VDPRt˃.={oTh7 D=L^&!U/S  U/S=7K ⏢iBs '"n^T97/"n0y ' nyBs+,ppVar7OX*b0p` 'b\ /1NX 8a2ŀ1p^&p' p"f%⦸p' 0pb}&Ās__qTp 8r~VȑNL`*j\*`Z7T&>'܃;8ɖw͇N_?OY7Ȃs#q0uDL\@U.>=Y -.ӬnVW2)y,D ,Uw9nea+21'?_`i]BK\Mq 8 MqVsDz|[f¹]6 6$(Z\`jP :Kc < CS\aֲ 7&H?G䓙pn #q >L&ia,-%PK `E2Va#\(C_II ZZ`+5!-#0(7XB$-6{K 6Νu{5B֔̕iwBA3k -i7<$Å@ Mi{ 0>E&>xp82;z№ZJ_ d8w Mi0GnI tLZ۴1,aa *】0q O$.jiBCZ7rOW~*p, d$-YCap6/L?:}+p,_w^i)"qaEP\` ٢q}T1ͥr1.UEdT1'p/P_G;tXԇǒ>uzG~`:j#kBD=dAgP~}<,~f`ɨ!c}jG} UG図gXG^n`?Go:,>>J7>ܐ.Kccؾ$0 ؐ7WJﯜM` mԇ<`tE II^B!kLKQRW2\ %%y[ II^Cq0kٴ<>>|^`@/ᑤ$oփaʇ~$(ɇR JaT#^* q${PϿd,ɔQ Jl5F})QNƲX5xrLWGI^8˖@InSQ[ȶ_<Q"dp VK AI~&5S.(R$?G%USIH]`X։á>3e_ J&-䦠$O=cԇ_jq %(Pڂ|!ۂR*AI~9>?VnQ%y7-Ku^6z\ J+kAI>xD?rcr|M$_OQb%6xVɨuрQ~=x$(}owR$T`('U$= $1ˆ$'Š r^!LD}BϿnMm4-w}lCsWGh`_8Տ 2]_[ 2PWAUt=hdzT*tDL«R 2ݸCVĄ@LJ/1eQϯUPqdz,v5ǼB}kU$ԧl tZ j[Jm^H8,7^rnf,7sW ʒBN5nml"NL+\DgN5%#$Vj׫|j499deܺfC@r҆G>թG_W~ ¯fOW' ~:+x%~~kjǯc_)_~+ů:1~BƯF ~8"J^1~l´_c_j?vJį:JUh+ſ~:_'+_uW I$@n^X`oY#W m7d8oԏY6T5O"ǟhUk Vi/OOF=aR/e7ŭSIQcE=FO7ی5?K+Ԭ[Al 3v\+s(cQQmD=aS+m^{ =݃We>&U)S\̴}]oUlUK&aF "|_e7ntɴ˿ UêVUݘJlH=}R6-갞iZ'*gUG7FiܪhzUbCo_Gm1 *F; QPXRȦo0w蝅Un_ӧ?Zm`ЄWK6PGw1'S{P͎ޑN/\mנ RŶ 7rU&O`=k{,WaT3PfPQ]@E 7b`oX*lUk[؛ވU_WU?H0<嫁AG[*X܈}{ p#:a{.S iK{SۃTT>ES,,=CQɵ@ ۹FZK+&r&ti"ifs=ۙ 4^N;qox_JUd%=0-b+{ye* ]ݬ ]5ȬRUbkb*XŏnAhZ'p n(NcJ6ZU:S+T  Szv+ގ X;;H/lXWXS:v)fᝰg|kw4zh׿!<Q?ޜ&4 g /Aü91-AGrkؤPӛۿfWBGzGh yMC k7-TVqVwlj7Ri$؃6 w:l-|WԬ/@ۊ5T;wȂc훗nϙ A~Zz!b[-_6X?.amvv=;ց(!fj/BiA]"lթtpI]g|mY*ҥ0,0e]yw\՘7%)a 1:k/QDZ|o]|ކQDGzwZBzX1e\PGGidZ2bYR)gHTlX!=I8Nv '9!jF#1+z ' sט'%tv0('rռV^KϿx֜i k殮k֮[շwݚyGjV3uްIMOo:RќͼTA.#tO%YzV )(g5; 䃎[EY.-H3]IpjCKWѼ\7[9[ٻ(< &LHx*aDTO/y5@IB.d`w +{7L|쪫ߞy;oW6ֻ㪺]U } 
a׿կ="r.sy˙r.g!\Αsy!]ʙ7VμAyb^BA)TʯP}sA\Gٕ{4=ٞ%g{9H[%$@.J`\aĊ/VbB\2!FaAi ؠHWēPpq|J*.̚Gvt%#[.\Ơ\Br2V-abʗ*5k\>ݙe ;^r]ErNY1Mnlr;-vLbpcO5 >c}lwfdU)YOukî*49≞. ϐ冞qKKhfF}<q$j,.Qr/vCpdv=?RN??ʙ{q%rkϔyȳsF>Fn!犍|r#y>yǩ٘QȨqvFzRZ7Au@Ɲz_'PCe2 uD%4eX'8Crʧ>V׉cq˗s97{X,G;{؞(EQT?Hf]/Obe.ﰂx;+làۺ>|J&@pF`ڒ%%fe?|aasEyVXt0>JY$RR! [YX__z2vN{ackt-eTц-v#75Fq;֭kXOR~m̥ E[ZaEhniOHik母6oܲ!yHJ >Ry*pZK:OV񧹬U[K͖--'H>Kl)%eDMe*]N/S{pƒ$Ŷ}VX.Ȕ_εw\&;.vH}}F;/CnE[i0>)$H'6Μ&$E$z5sə$F^K*I,M#/"sz \%~0&%߷Uoa,$BZbHx%I2o4Oͫb_1S'GnmoFͭk:6w4`~4NȳkkaE_^VBͩ@EL`hA:Hy-)!t(`vi0+(a6gU !SIS̰9'R2;s^>U(ZiۏW ~/QAh }ZII /s~~4 Ӫ$+]Q9Lʩ,ϣxZ-)+:4Y6#G@L;a9a*Ϝ[q*Ƞu,(LB*UYa[JpfLL5EH7K%+XObqj8L T( ]*s^` )}u%悼>[ z;N#V3kvsvvn!NKqih"5T/KAw2!א$7޸=fi~!]YKI2ARX뫱J_%XAlh~X 86p ?JuI+E/Z)J "̀)k^'#ϲL>v5ЇwzYt,xd7?v:hm.mlmW@mRc/I'hIԽCgNJnwsC?:D≶}6_ ?ȶ/kVc2E翄_1'3Õvn0].=[KlW}gSm^>ym2n*Of۫5;gz ͳlkN*oر czM|ޮcX Ts|?Ǝoر9>V&>رRsƎՒرX MH" c܎lg+ "OEF>.8?f ;aq~̎5@zF%Ǝwyof^x2v_;/0;*w+JxxƎG~5xWBox<=Et=kK:dXK5[ ~ lXa]~M='O(x2._bEm݅_z{U]I,}C+tILeLOgNN椕VFNzWZ^CNJ?.>Cp^ `C[k` ;abBG}`\A Al?r#.~G } Zh3rҵ`Vnh[kA 8k96'Lp9>Z+hW Z|`k-}`{F~NG ?G&xh\\.A VA }->?>m<' |`[(h/who_9Wlct G c3Zᄉ\.|Hlw:abx˅cDp~åkV,hOт-U.G -{->}!l[B_ħ^*h<ӧ6Kzu vт|Lnׂ #׺\۳B{N O{NBkv #].*KەNs-n C8s!l9abx˅kqp˅`[䄉.0O01˅k C/rn-Z&`脉V Dlp m^ 013 f^ CN~B/Mk3q{åkL^r!g|`{VN Z|4ئ c,aL~G W<4v1c 8~ҵ`{GgM9>pZ ZO\.] 
NQIlp0._ uLтmA =A jy!lu¼x0/\v Z->s9|[w8qo Z-yOΧa[-#pP[}`A Zߵ;rxO9– srSZqX /тm|G |l-Lkf 0k:-V Z|`W3}`ChGO>SeDڷlXn;mri9s[ˢ-6"MkڛȺ--TvƖh?q;uְa Qvf~gom~44խo[i]gD;6ڛk۬\Om4stbo܍kXe/}d/jnkt-EonY vm[C'68wa6էeWUrgYͷ~>쳻]#+x|򭽯?7Ë?ㇶ{W_6CB H׫R?>x}<]JKKX)b%=q-/ߜ+W`ibeJS~߆VүkW[mWj?BYI6d5z>lMg M|<ϵs E+j|>ŷ!2(>j_|O|JMg$Ag#μFG8?řHg~!ę7(g> q^3[泜qY泝VsqTǸćqOsL:̇GqEB q@9:̏wcD O5 _K4T ?MO38>3O|/s5|HóI9^-wR<`sKOY|z}-ޭWްpѪwx KFN1%6kF(+/6zq 7v~\ΫhtR<*UΫ~_p^_$ͫ~9+yOq!U8 63Y91r>uT?ݦ]oWyL^˚tvyt\G냾w1-c~c~2鿀8o2WrFǴ%8oK;8;ΏCIN羠=|ebS>_ Ϋ~gp^7;y,akmVp]'+.vxՏb%~G}P5ӣaAGWoU?8WyկMΫTkkW~k޲1Kz1Qgt!UZΫ~Cg3}n|GʸeOքetU~~wowGg_WF"EUrΫ~UW/V/F%<6^7 OϡV~~IEF~^~q\ůW>ϔbF1R)i~S8)xd3^ͷm5 &e޸U OG|?އp:\yGuf|g'&DGl}CLR9 %~TE\m9:_zo:jvfy'W!궳>,;;]NWUv;ymgwCc;:qD83VC`"f +` `BL/ gy>+ڕ_>qb/y9ON|q87SOntk_\_Wǿyxu_7__~E?xxp佫ܼ/ׯ7y7Ɵb՟߼:>o,^z^7n|՟{7enz2wBOnW>?Z?Wn=?:O>f_\}f/ȿ^_Z~ׯ>f_]x?]On|xͭ{!s/σ8 ~o/(w/85xp(ǫ}3}g_|1>'?٧ݾ=OO>? >xzq,Lr/<|:a;~]6}^~ӗ?w=Ih_߹?_;t//|wt|}Gw/./t=}ݽh>ök:<=;? L_s_(x'Oۯ̕si;izj/_ޞC<~v_hB_(*[?銫/9ɗ=t/~kœ.Λľ۟}x{iM5}z{߾=s{&?ɟ?vNNz/kQ1_~w> >=|!-*?ww\rk듿Oxu?=޾ߜoo~spkmt_yg4\o|/<^G0/۷?~s ?qIgOL_ﴇ͋1ur Lo;o}~sgtӀ}?.=Qi^tȋڋ /Xz0 f6_<~-syгgo"դ]sލwҝ'w^i<_v;~ )"dtyɺcr\~ gKFᇁη'\xyӫӿ^Oo^f{wNrOg}Ixv@/}}ZL]>|+ӏ?_?;{W?_+O?>|ޣ<G?o&_'g'/|O~SӗO?/_^7"l-r/}ه~pDo.^qC_Pczgo^}>zk-o?~^;_lf柞.|??_>O?֛~_/~g꟟O/Gz}k{_ϵsៜ^wgW>ӏ8oM_)rOtg?Ggx^{v~{eyyׯ9{?< O~{e7j{.O.>{ÿgӫ>?onPr^s3f~_?3E[ϧWw{g5&}>/ ?<ܿ8D;QOq7`?_}x??ߞ][nwEߔ(> /zzݷ"(˳Gg~sosg}pGݝ/4Ggo~5M߽x84 S7;>SΓpOgQ]~ub>ȭOgzgܺ?P3?k~^Ͽsԯ[O?ߞ@?-W W'?8^}i=_XQ5}=\g?·~ǏbY9?pqO=_ŷn}un_=:?|G:~ۻ.t]siJO?N?9g)|f'?k̫[n[﫿4rOyx9_x''׾wy04~q:5oam'|z/|ًuz\૳ᅨ+7|OsiUk]4[sO~pZOoyY ^.5/Mk̫ o\Now;ӧ={vp'pg.9|zo>}o~p؛qˋy7_95-}_٭w{|xֻ:Gѳȧ&og{o\38{~Q7_}_͟^ݏϗϮӟR5sEqy[]v ?;-m_8q8!_7_mrOᳫY8DoL#_y)WZ(*GiV9JQUүraVz&z=7ػ??lZl%??^>dbş$G5G>yhCCE*  CR"h  CrPPo CP9qTeq)_s8Urz/Rw]u;\ m,' =YO}z? 
.4yyo}]gy; {)YXkaXgYޗm݋}_adޒ{K/-Ydޒ{K/-Y7ͺP7vGq>g}; {)YXkaXgYWuъ=w㭗x{x;kq>np7{[- wmn܍WVC){>% =ޫwߏ{8YS=ýz}#==ܫY}EՉd{Oɂx~{Kv,~zYdkK׮ů]/=_^zv=ì 5x3MXa-жmgGm;uщd{Oɂx>N,xx}_=~mgzYد~/pq=ܧd{{V}/.l>gp6[9U>={,,YާdayayQ}Oy6zYx6޶llxy6^U~޳g}=,0ﳬ﫺xq=ާd{{V}O9|,{s>0|~Y ~޳g}=,03}_d{Op jbҾ/=h_cE{S'MY7eqޔI{S'MY7eqޔI{۬ 5x3MPnܵ4rwe-Ydޒŋ{K/-Ydޒŋ{K/M.4ycq?ݺk];xxe{OɂZ} {ײ=ާd{- g*bѵxZ,x?T͢kgqg!,nyo!,nyouobge{{- }J}Xts)[׎![׎!,n];,n];,n];,n];,n];YjfnܵS؃86Q7eqޔI{S'MY7eqޔI{S'MYͺP7 !];ځ{>% =];٥\ڽ{o=vnPثη{>% =];٩{Z{_- o={ѵx' }J{׻vZѵKٯbm=V~mgޞe-vv={YX߳O^ >~(JѵO ý%,^[xqoŽ%,^[xqoŽiօ&o,w4kG ٜ^6ogsjY؜ٜ6%\ڽ{o=vnPD.d>'ݳp2='9jPg8_}!vv={YX߳O^ >*Jt=ɂxAt=ɂxNt{_1 o=ޯ{{wt}Y٥eYYߛ8yo),Nޛ8yo),Nf]iv뮝Jt+poŽ%,^[xqoŽ%,^[xqouo9];^͂hh6%[1 {6垽z+fa޶ܳWoO{v];8U,^{,YYxճdګgWϒk%^=KzYjf?ݼk];={,Yߧda}a}e}_C-vd{Oɂxz(vd{OɂxzN/vd{OɂxzN+vv}4qndzdϒv% =+_k];դxYA(ڛ=ސ,^7Y=ސ,n7dq{!ۃ-.4yCiqN-v=ޯ{{½sϽ;E~gy; {)YXkaXgYޗSE~g}; {)YXkaXgYWeY-v^˂x0x];ޯo=Ͻ_1 Ͻ{{ޗC-vv=~ۮ~UÄ>i];xxe{OɂZPWr{.{gsVtڣ=ڣ>- gѾ{ѵK˾SkWϒk%^=Kz,^{,YYxճdګguoCͻvFѵx' }J{׻vJѵK9|,{Ns:0ӵS}zuU-Y7dqۛo7ߐmo!|C,n{-.4ycy?ݼkg];={,Yߧda}a}e}_m/v=ޯ{7^sjY~=nZѵOY߳}JZYC]8YS=ýֵӉڣ=ڣ>- gD>gozY؛-_/ {7枽ٛ_c!v={YX޳O^ >D.[o,tm=z+f[o[Q^U٥|{o|>W ,];}!v=9z|&Yvѵx' }J{׻v:ѵOV([xyo%,^[xyo%,^ޛf]iPdqN#vv}4qYxm׳dڮg]ϒk%v=Kz,^,Yf]iv㮝^tYhhfA{O YoQt{s= 7k\_7L(s\/Rt{s; k\_5L,﷿v_vE-vd{OɂxzN+vv=|z{.߯&.Wu_8YS=޵38YS=޵Ӌ8YS=ڵӊ]rx+fvMvp;޶s;ގnknq}qY>]p;[ٝ^vo=;,aq~gy; {)YXkaXiibq==kYSx_m,x)Y^ yaq==kYSnq~gzYد[/ 6z;گW z+fMpsjq==kYSpߖ^|+=Wzr{ޯ&9(ߧu{K,nW Yܮ]7dqzovސ!{ˬ 5x3MXa7w{ウg}}kq=},x)Y^~w{뭘zzv]o[ٮze?{8YS=~}+ߥU;vQ7eqޔ{S'MY7eqޔ{S'MYͺP7 !w{8YS=ýz ==ܫY}ˢ.+faw޳;,{vhw~6>N,xx}߈w=9z|&WuYw}]wCMYYY,~M,,~M,,~5̺P7 !w}z|{g|s>}Љ}7Y[xyo%,^[xyo%,^ޛf]i~>N,pp}?ŰeY7Q7eҵdҵdqޔūKגūKגūKגūKגūK4B L7}_w=Vnm[/ 6^޳[oGڦw=ɂx[q=},x)Y^w=ɂxGq.gފYد[1 z{گW ~Ӷbv]Ӷ^v6^Ӷo0[Ow:?=,~ӳOw:?=,~ .4ycq?ݖe^1i_Nڋڣ=ڣ>- gѾmE~* ,nz,nz,^[UUUUUYf]i~uNW,v^˂x0x];xxe{OɂZ}U ͢k,x)Y^ ybѵxZ,x% ka>u-v^˂x0xh];ppe{OZs~|;{{{wY{Q'о*0[xio⥽%,^[xio⥽%,^ڛf]i~u(={YXܳO^ >V+E},x)Y^֫E},'{~('K}('{('x('v('t([r0'1˺kƱU1*m{oCm6 Yk,n5C!z͐mf^̺P7 vRT(ޛ8yo),Nޛ8yo),Nf]ivQT{ݳg}}}YJ{>% =ޫUiJ{>% 
Җ_U}5CZ5֢l5]5K/,YXdb͒ŋ5K/,YXdZZxbM.4ycq?ݸl`G{G{5 ڣ}ZϢ}l]s֖,^[xqoŽ%,^[xqoŽ%Yjfn^ VJz?>=YO}z?ӳy,~ާg0B L7`ۧ% {- ,vD},p)Y^kD},x)Y^D>0[MYdޒ{K/-Ydޒ{K/M.4ycy?ݼZoz1,nO3dq{2!׭x,nO3dq{2!ۓ Yܞgd<ˬ 5x3MPn^׈j}z_Yߛxyo%,^[xyo%,^ޛf]i|}_]%d{Oɂxj^[j}z_wC-Yg;|~zYg;|~zYjfn]7j=8YS=ýZ׉j=8YS=ýVWj=1w=Y/ V{^<&g?iZT=pjVT=ɂxj^T=ɂxjQT=ɂxjVT=ɂxjZT=ɂxjRT=ɂxjQT=pj^T{B޶S^6jTU().uq>|i*G9٣G9YȣG9ٶG9YG9ٓG9YG9y=noV5VʢIs%2N,xxߦԋ۔x)CNrpw ŝAFJu<Ŕkg%HK,^;#-YvFZx팴di34B L7v;!Jq'},x)Y^bwBK/-Ydޒ{K/-Ydޒ{Ӭ 5x3MXa{q>gy; {)YXkaXgY޷U%!=瞍Rl 6˺kƱr<;[{HoܰͣV£6]mk1,^[xyo%,^[xyo%Yjfn~7u%{>% =ޫwSzSb 7P.pq7.gފYخzea{g^[nj1,pia> eQ{8YS=~}/ߩ} Yܼ7dqސ{C7 Yܼ7dqސ{C7-.4yyg~w=ɂx[q=},x)Y^w.P!,nyo!,nyoeօ&o0ﻭj,i}o%,^[xyo%,^[xyouo9wߏ{8YS=ý}Z/'bm Yܴ7dqސM{C7 Yܴ7dqސM{C7-.4yi? mN[-vv˽́8qoĽ),Nܛ8qoĽ),Nf]iv뮝k|k,x)Y^ yo_{{- }J}b];xxe{OɂZ}4zϓpW“p7'ᮘ'n=Oѓpˡ];1,^-Y[xmϷdڞo=ߒk{%|KYjf?*];xzxevv܎7âk,p)Y^ YƗ=s{޻gs }M)w^͂hh߶'}Wޒŋ{K/-Ydޒŋ{K/-Yd4B L7sͫZQx' }J{׫zQx' }J{׫FQx' }J{׫ZQx' }J{׫jQx' }J{׫JQx' }J{WFQKpoלů9?=,~Yӳ5gkOלouoCحzQp' }J{ת*Q'A{G{W=ڧA,ڷm#d{Opz^'d{Oɂxz v}Yak5ߒkk%|K,^[-Y[xmͷdښouoCحBT=ɂxjNT=ɂxjFT=ɂxjJT(ޛK_/=~,~ҳK_/=~,~ .4yCyq^Yj=8YS=ýZ7j}ra曲xqoŽ%,^[xqoŽ%,^ܛf]i~q^-ĬG{G{W=ڧA,ڷm+d{Opz^/}7vadޒ{K/-Ydޒ{K/-Y7ͺP7vjQT{n[1 m=⭘[[vt+^UۥT篘mT篗T:^T=ɂxjVT=ɂxjZT=rǎrGrGrGrkGrHGr%s|󻬻f+=J.8ېm|oC!r͐mf\3dq[-,.4yq?O6JEU>8[xqoŽ%,^[xqoŽ%,^ܛf]i~-}J&bڳgq=,,0,,ۡYT=ý>% ka> e1\JۯmQޒ{K/-Ydޒ{K/-Yd4B L7s؍aQxZ,x,WeU,=9z|&}Jۯl_/ eas޳9?ˡkUixxe{OɂZ}Vz|{g|s>XT=ý>% ka> u5,^p0pz;hhfA{O YoVTjpoĽ),Nܛ8qoĽ),Nܛ8qouoCͫzQx' }J{׫FQx' }J{׫JQx' }J{׫zQKY1 U;zOΊY{vT3tۧl_/ ea{޳=?˺E},x)Y^+E./뱍½),Nܛ8qoĽ),Nܛ8qoĽmօ&o(aE.gފYح)[1 z;ڭ7ZOYܳwϲѣ,ѣlѣÜ<οŁ$ Ǫhf$iNԶ풒(]6dq,lvYؐ!eaC†,n Y. 
[f]ivڶAԶ{ 9 6m`l[3L0ݕm{>% =mm{>% =mm{>% =mm{>% =mm{>% =ޫmm{>% =kmu!jd{OpZm[+jĬG{G{W=ڧA,ڷC/d{Opz(}[Tadޒ{K/-Ydޒ{K/-Y7ͺP7vjRT(ޛx=Œ),,Nޛx=Œ,^XdzĪ%#VM.4yCyy^-v={YX߳O^ >QT*xo%,^[xyo%,^[xyouo9z{>% =zۧ}g%,^[xyo%,^[xyoiօ&o,[WբZ>N,ppV땢Zoܳ;,ߔ{v篘rofQS\^pp_u(٭l{o=g9nPg8_Ͱ({{- }J}﫢+e;P篗<?U]vz~('K}('{('x('v('t([r0'1˺kƱP4oQB95QB95QSZVOS}ݍadq[ YX^zmzyo^bQ}m5-YƷ%r͒kf\dZYx-,YkYjfn|yejPm-T[j덹z?mՊ2 a2M c,al=e*E.&|KW,n Y6_m2dq|eʐm!+ˬ 5x3MPn^12 8YS=^Q2 8YS=^ы2}zOzY({.BSvr(ZQx' }J{SjQKٟbo=WmgC)vv=|.|UÄ>ne;pq=ܧd{{le;]p{.{gs}E=8YS=Zl>N,xxTl>N,xx l>N,xxtldG_2deddَddMd ~('㯺^ɯ^IuPF4z`' )YXD`6zTcܛ(,^[xqoŽ%,^[xqoŽiօ&o,_# OYݳ}JVZVYVmՋF1w={VYXݳO^ >,Q4z{,ウgy}FR4z=ɂxFZ4z{ȱ^ȱ:,nc Yܮ]5dqkv=֐z!XC뱖Yjfn]R}rU-Yܖ,n{C!ސmyoⶼ7dq[--.4ycq?ݺ<)pq=ܧd{{VbҾ/]hϹ|{g\s.s۶[T{n^- Ro=R[[ws+uYtâ* ,x)Y^ yobQc9|,{Ns:0t~Uբ*m޳g}=,0ﳬ﫺UiTZxyo%,^[xyo%,^ޛf]i~zxzxeff܌WUۯ|wo=9jXo>(E~gy??v| = O>OXTw|w]֖,nWc Yܮ]5dqkv5֐j!XCYjfn\֊4@{G{W=ڧA,ڷm/d{OpzU(d{OɂxzUZ)v}YMMY7eqޔ{S'MY7eqޔ{S'm.4yCyyUZ-v=V[]skzYj=V֪nFQKY߳w}JZYP* >N,xxW* >N,xxVբ* >N,ppV땢Z>N,ppUuZOhhfA{O YoAT{n[1 m=⭘[[vt+^YۧT뭗jmZo,Tm=z֫D},x)Y^kD>|>p>[9U>G^[j}zwS°s>QS>,nyo!,nyo!Yjfn^׉jz{}~('K}('{('x('v('t([r0'c뵲qP6*mm0ےk|[xoK%r͒kf\dZYx-L.4ycq?ݺ*UiWmgV w_5J~uŽ%,^[xio⥽%,^[xio⥽iօ&o,[W* >N,ppW*mE{K,n' YN7dq;ov2ߐ-i˯Uo U_8ۧkcYzga:6% X- ,ت[Q x' }J{+JQ O9oyk,{[sZ}5ꫝUe檍]ղpsXnUe] >N,xxV*m܏q.-Yܸ7dqސō{C7 Yܸ7dqސō{C7-.4yq?n$EU>8,YdgW%,~+͒AqvY="Ю4SL7smEUZ5i/^hhjG0hEvUii0[xqoŽ%,^[xqoŽ%,^ܛf]i~Vrx* ,x)Y^ yjQ!xZ,x,We,*^˂x0xz(rxxe{OɂZs>E~g{zY؞l_/ 7ٞ_m{{- }J}YT=ý>% ka> u]-=Wzr{ޯ&_{Q'hhjG0hEvE},p)Y^+E},x)Y^֫E},x)Y^kE},x)Y^+E},x)Y^E},x)Y^E>kg,tl=];ekgcOдZ>N,ppVբZ>N,ppU ZOhhfA{O YoBT{ꭗzrVUoc٪zeQj=8YS=^׈j}zߏu-Ydޒ{K/-Ydޒ{K/M.4ycy?ݼZz;*xo!,nyo!,nyouo)z{>% =ޫz}!d{Oɂxz d{Oɂxj^'d{Opj^#d{OpZ(ĬG{G{W=ڧA,ڷc)d{Opz^-d{Oɂxz^+d{Oɂxz^/d{Oɂxz^-d{Oɂxz^)d{Oɂxz(d{Oɂxj^/d{Opj^+d{Opxjy(^hhjG0hEzppe{OZϳ/bQxZ,x<޷m{{- }J}b_{{- }J}XT=㽖>% ka>zi0[xyo%,^[xyo%,^ޛf]i~-}ٌadޒ{K/-Ydޒ{K/-Y7ͺP7v[YT=ý>% ka> U_-=Wzr{ޯ&ZO|'v={YXܳO^ >⾭FQO:ޒŋ{K/-Ydޒŋ{K/-Yd4B L7sͫJQx' }J{׫jQx' }J{׫ZQO|{,\{.s~0ϰ7JQx' }J{׫FQO9|,{s>0p{Qx' }J{WZQp' }J{WjQp' 
}J{Wދj=ڣ=ڣ>- gѾ QK˲\S'MY7eqޔʼn{S'MY7eqޔʼn{۬ 5x3MPn^Wj]z_w{S%ޒkyo⵼dZ[x--Y,^{Ӭ 5x3MPn^׈j=8YS=^׉j=8YS=^Wj=8YS=㽺9,D.gފYد[1 z{گWۥu_Źϒk}o⵾dZ[x-Y,^{K%4B L7[WuZomI9eI9sϓr󤜪oD>lg,l=e;elgc)ON9j=1w={YXܳO^ >⾭KQp' }J{׫jQKٚbo=[Wmgkm+d{Oɂxz^/d{Oɂxz^-d{Oɂxz^)v=V~MgފYد~jE},x)Y^E},p)Y^kE},p)Y^z夽xY=ڣ=ګY }jXT{^-<o,<ocy*~W}{{- }J}ۡYT=㽖>% ka>ՔrQxZ,xN,ppwԢkgs2Y8dabyk];={,Yߧda}a}e}_c/v}6adޒ{K/-Ydޒ{K/-Y7ͺP7vZt{s>; _5L,t픢k>N,xxw팢k>N,xxvk>N,pp}_ڣ=ڣ>- ѾD},p)Y^iD},x)Y^D>pszY~c~G v={YX߳O^ >Ft=ɂxJt=ɂxծ];xq=ާd{{kg];pq=ܧd{{RtYhhfA{O vjѵp' }J{׻vZѵx' }J{׻vzѵx' }J{׻vFѵx' }J{׻vZѵx' }J{׻vjѵx' }J{׻vJѵx' }J{WvFѵp' }J{׸/];դxY=ڣ=ګY }Zt=ý>% ka>ϵh];xxe{OɂZ}Wv{ײ=ާd{- E=kYS3u ͢k,x)Y^ vjѵxZ,xL];]{{- }J}aѵpZ,p,%==ýsϳ/EN=ڣ=ګY }۱zpq=ܧd{{Zzxq=ާd{{Zzxq=ާd{{Zozxq=ާd{{Zzxq=ާd{{Zzxq=ާd{{Zzxq=ާd{{Zozpq=ܧd{{Zzpq=ܧd{{mu_j=ڣ=ڣ>- gѾQp' }J{׫:Qx' }J{׫Qx' }J{WBT=ɂxjNT=ɂxjFT=ɂxjJT=ɂxﵛBT=pjAT=p}-ĬG{G{W=ڧA,ڷc+d{Opz^/d{Oɂxz(d{Oɂxz^)d{Oɂxz^/d{Oɂxz^+d{Oɂxz^-d{Oɂxj^)d{Opj(d{OpnbQW\^͂hhբZ,p)Y^ yM{{- }J}۶[T=㽖>% ka>nXT=㽖>% ka>u,^˂x0x|PVj={ײ=ާd{- .E=kYSxâZ,p)Y^ y.ݢZ,p)Y^ y/EN=ڣ=ګY }ۺzpq=ܧd{{Zzxq=ާd{{Zzxq=ާd{{Zozxq=ާd{{Zzxq=ާd{{Zzxq=ާd{{Zzxq=ާd{{Zozpq=ܧd{{Zzpq=ܧd{{JT w,hia>m݈j=8YS=ý^׉j=8YS=^7j=8YS=ZZ>N,xxWuZ>N,xxW5Z>N,xxWUZ>N,xxU덅փ{>% =ܫzփ{>% =ܫDhhjG0hEnE},p)Y^E},x)Y^E},x)Y^+E},x)Y^E},x)Y^kE},x)Y^֫E},x)Y^+E},p)Y^E},p)Y^+۩z夽xY=ڣ=ګY }/j={ײ=ܧd{- ٪WLG^xZ,x<޷ð{{- }J}zeQ,^˂x0xzxxe{OɂZs>hE=kYSx_բZ,x)Y^ YbQpZ,p,Űփ{{- }J}ZTڣ=ڣ>- ѾkE},p)Y^E},x)Y^E},x)Y^+E},x)Y^E},x)Y^kE},x)Y^֫E},x)Y^+E},p)Y^E},p)Y^+iDp^͂hhuZ>N,ppW Z>N,xxV덅{>% =z{>% =z{>% =z{>% =z{>% =ޫzփ{>% =kze!d{OpZN+ĬG{G{W=ڧA% =z{>% =z{>% =z{>% =z{>% =z{>% =z{>% =ޫzփ{>% =ܫzփ{>% =kfQWMڋڣ=ڣ>- gѾE=kYSl+aQxZ,x% ka>SN-v^˂x0xkh];xxe{OɂZ}?V{ײ=ܧd{- |?ܝ_===;<ܷZO|',hia>m݋j=8YS=ý^7j=8YS=^Wj=8YS=^Wj=8YS=^7j=8YS=^׋j=8YS=^׊j=8YS=ZWj=8YS=ýZWj=8YS=ý}';hhjG0hED},p)Y^k Qx' }J{׫*Qx' }J{׫Qx' }J{WBT=ɂxjAT=ɂxjNT=ɂxjFT=pjJT=p5{Q'f=ڣ=ڣ>- gѾGQp' }J{׫JQx' }J{׫jQx' }J{׫ZQx' }J{׫JQx' }J{׫FQx' }J{׫zQx' }J{WZQp' }J{WjQp' }J{vE^9i/^hhjG0hIaQpZ,p<[XT=㽖>% ka>X-^˂x0xg^Y4j={ײ=ާd{- E=kYSaQxZ,x% ka> uY-^p0plz;hhfA{O yFQp' }J{׫JQx' 
slurm-slurm-15-08-7-1/contribs/cray/opt_modulefiles_slurm.in
#%Module1.0#####################################################################
# slurm/munge support module
# Put into /opt/modulefiles/slurm or some other part of $MODULEPATH
################################################################################

# SUBROUTINES
proc ModulesHelp { } {
    puts stderr "\tThis is Slurm $::version.\n"
    puts stderr "\tPlease consult http://slurm.schedmd.com/cray.html"
}

# CONFIGURATION
conflict xt-pbs pbs torque

set slurmdir "@prefix@"
set mungedir "@MUNGE_DIR@"
set perldir [exec perl -e "use Config;
\$T=\$Config{installsitearch}; \$P=\$Config{installprefix}; \$P1=\"\$P/local\"; \$T =~ s/\$P1//; \$T =~ s/\$P//; print \$T;"] set version "UNKNOWN" if {![catch {exec $slurmdir/bin/sbatch --version} out]} { set version [lindex $out 1] } set helptext "Support for the Slurm Workload Manager $version" # SCRIPT PROPER module-whatis $helptext prepend-path PATH "$slurmdir/bin" prepend-path PATH "$mungedir/bin" prepend-path MANPATH "$slurmdir/share/man" prepend-path MANPATH "$mungedir/share/man" prepend-path PKG_CONFIG_PATH "@libdir@/pkgconfig" prepend-path PERL5LIB "$slurmdir/$perldir" # other useful environment variables setenv SINFO_FORMAT {%9P %5a %8s %.10l %.6c %.6z %.7D %10T %N} setenv SQUEUE_FORMAT {%.8i %.8u %.7a %.14j %.3t %9r %19S %.10M %.10L %.5D %.4C} setenv SQUEUE_ALL {yes} ;# show hidden partitions, too setenv SQUEUE_SORT {-t,e,S} # logfile aliases set-alias sd_log {tail -f "/ufs/slurm/var/log/slurmd.log"} set-alias sc_log {tail -f "/ufs/slurm/var/log/slurmctld.log"} if {[exec id -u] == 0} { prepend-path PATH "$slurmdir/sbin" prepend-path PATH "$mungedir/sbin" set-alias sdown {scontrol shutdown} } slurm-slurm-15-08-7-1/contribs/cray/pam_job.c000066400000000000000000000100701265000126300206570ustar00rootroot00000000000000/* * pam_job.so module to create SGI PAGG container on user login. * Needed on Cray systems to enable PAGG support in interactive salloc sessions. * * 1. install the pam-devel-xxx.rpm corresponding to your pam-xxx.rpm * 2. compile with gcc -fPIC -DPIC -shared pam_job.c -o pam_job.so * 3. install on boot:/rr/current/lib64/security/pam_job.so * 4. in xtopview -c login, add the following line to /etc/pam.d/common-session: * session optional pam_job.so */ /* * Copyright (c) 2000-2006 Silicon Graphics, Inc. * All Rights Reserved. 
* Copyright (c) 2011 Centro Svizzero di Calcolo Scientifico * * This program is free software; you can redistribute it and/or modify * it under the terms of the GNU Lesser General Public License as published by * the Free Software Foundation; either version 2.1 of the License, or * (at your option) any later version. * * This program is distributed in the hope that it will be useful, * but WITHOUT ANY WARRANTY; without even the implied warranty of * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the * GNU Lesser General Public License for more details. * * You should have received a copy of the GNU Lesser General Public License * along with this program; if not, write to the Free Software * Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA. */ #include #include #include #include #include #include #include #include #define error(fmt, args...) syslog(LOG_CRIT, "pam_job: " fmt, ##args); #define PAM_SM_ACCOUNT #define PAM_SM_SESSION #include #include /* * Unroll job.h/jobctl.h header declarations. The rationale is that not all * systems will have the required kernel header (job.h, jobctl.h, paggctl.h). * On early 2.4/2.5 kernels there was a paggctl() system call which was then * replaced by the /proc/job ioctl, which this implementation tests for. All * patches from ftp://oss.sgi.com/projects/pagg/download that use /proc/job * for ioctl have the same ioctl declarations and identical ioctl parameters. * Comparing these patches shows that, when using a 2.6 kernel, there are no * differences at all in the 23 ioctl calls (last patch was for 2.6.16.21). 
*/ #define JOB_CREATE _IOWR('A', 1, void *) struct job_create { uint64_t r_jid; /* Return value of JID */ uint64_t jid; /* Jid value requested */ int user; /* UID of user associated with job */ int options; /* creation options - unused */ }; PAM_EXTERN int pam_sm_open_session(pam_handle_t * pamh, int flags, int argc, const char **argv) { struct job_create jcreate = {0}; struct passwd *passwd; char *username; int job_ioctl_fd; if (pam_get_item(pamh, PAM_USER, (void *)&username) != PAM_SUCCESS || username == NULL) { error("error recovering username"); return PAM_SESSION_ERR; } passwd = getpwnam(username); if (!passwd) { error("error getting passwd entry for %s", username); return PAM_SESSION_ERR; } jcreate.user = passwd->pw_uid; /* uid associated with job */ if ((job_ioctl_fd = open("/proc/job", 0)) < 0) { error("can not open /proc/job: %s", strerror(errno)); return PAM_SESSION_ERR; } else if (ioctl(job_ioctl_fd, JOB_CREATE, (void *)&jcreate) != 0) { error("job_create failed (no container): %s", strerror(errno)); close(job_ioctl_fd); return PAM_SESSION_ERR; } close(job_ioctl_fd); if (jcreate.r_jid == 0) error("WARNING - job containers disabled, no PAGG IDs created"); return PAM_SUCCESS; } /* * Not all PAMified apps invoke session management modules. So, we supply * this account management function for such cases. Whenever possible, it * is still better to use the session management version. 
*/ PAM_EXTERN int pam_sm_acct_mgmt(pam_handle_t *pamh, int flags, int argc, const char **argv) { if (pam_sm_open_session(pamh, flags, argc, argv) != PAM_SUCCESS) return PAM_AUTH_ERR; return PAM_SUCCESS; } PAM_EXTERN int pam_sm_close_session(pam_handle_t *pamh, int flags, int argc, const char **argv) { return PAM_SUCCESS; } slurm-slurm-15-08-7-1/contribs/cray/plugstack.conf.template000066400000000000000000000001641265000126300235650ustar00rootroot00000000000000# # SPANK config file # # required|optional path_to_plugin args # include plugstack.conf.d/* slurm-slurm-15-08-7-1/contribs/cray/slurm.conf.template000066400000000000000000000071621265000126300227370ustar00rootroot00000000000000# # (c) Copyright 2013 Cray Inc. All Rights Reserved. # # This file was generated by slurmconfgen.py. Rather than editing this file # directly, you may want to edit slurm.conf.template and run slurmconfgen.py # again. # # See the slurm.conf man page for more information. # ControlMachine=sdb #ControlAddr= #BackupController= #BackupAddr= # AuthType=auth/munge CacheGroups=0 #CheckpointType=checkpoint/none CoreSpecPlugin=cray CryptoType=crypto/munge #DisableRootJobs=NO #EnforcePartLimits=NO #Epilog= #EpilogSlurmctld= #FirstJobId=1 #MaxJobId=999999 GresTypes={grestypes} #GroupUpdateForce=0 #GroupUpdateTime=600 #JobCheckpointDir=/var/slurm/checkpoint JobContainerType=job_container/cncu #JobCredentialPrivateKey= #JobCredentialPublicCertificate= #JobFileAppend=0 #JobRequeue=1 JobSubmitPlugins=cray KillOnBadExit=1 #LaunchType=launch/slurm #Licenses=foo*4,bar #MailProg=/bin/mail #MaxJobCount=5000 #MaxStepCount=40000 #MaxTasksPerNode=128 MpiDefault=none MpiParams=ports=20000-32767 #PluginDir= #PlugStackConfig= #PrivateData=jobs ProctrackType=proctrack/cray #Prolog= #PrologSlurmctld= #PropagatePrioProcess=0 #PropagateResourceLimits=ALL # Some programming models require unlimited virtual memory PropagateResourceLimitsExcept=AS #RebootProgram= # ReturnToService 2 will let rebooted nodes come back up 
immediately ReturnToService=2 #SallocDefaultCommand= SlurmctldPidFile=/var/spool/slurm/slurmctld.pid SlurmctldPort=6817 SlurmdPidFile=/var/spool/slurmd/slurmd.pid SlurmdPort=6818 SlurmdSpoolDir=/var/spool/slurmd SlurmUser=root #SlurmdUser=root #SrunEpilog= #SrunProlog= StateSaveLocation=/var/spool/slurm SwitchType=switch/cray #TaskEpilog= TaskPlugin=task/affinity,task/cgroup,task/cray #TaskPluginParam= #TaskProlog= TopologyPlugin=topology/none #TmpFS=/tmp #TrackWCKey=no #TreeWidth= #UnkillableStepProgram= #UsePAM=0 # # # TIMERS #BatchStartTimeout=10 #CompleteWait=0 #EpilogMsgTime=2000 #GetEnvTimeout=2 #HealthCheckInterval=0 #HealthCheckProgram= InactiveLimit=0 KillWait=30 MessageTimeout=10 #ResvOverRun=0 MinJobAge=300 #OverTimeLimit=0 SlurmctldTimeout=120 SlurmdTimeout=300 #UnkillableStepTimeout=60 #VSizeFactor=0 Waittime=0 # # # SCHEDULING DefMemPerCPU={defmem} FastSchedule=0 MaxMemPerCPU={maxmem} #SchedulerRootFilter=1 #SchedulerTimeSlice=30 SchedulerType=sched/backfill SchedulerPort=7321 SelectType=select/cray SelectTypeParameters=CR_CORE_Memory,other_cons_res # # # JOB PRIORITY #PriorityFlags= #PriorityType=priority/basic #PriorityDecayHalfLife= #PriorityCalcPeriod= #PriorityFavorSmall= #PriorityMaxAge= #PriorityUsageResetPeriod= #PriorityWeightAge= #PriorityWeightFairshare= #PriorityWeightJobSize= #PriorityWeightPartition= #PriorityWeightQOS= # # # LOGGING AND ACCOUNTING #AccountingStorageEnforce=0 #AccountingStorageHost=sdb #AccountingStorageLoc=/var/log/slurm/accounting #AccountingStoragePass= #AccountingStoragePort= #AccountingStorageType=accounting_storage/filetxt #AccountingStorageUser= #AccountingStoreJobComment=YES ClusterName={clustername} #DebugFlags= #JobCompHost= #JobCompLoc= #JobCompPass= #JobCompPort= JobCompType=jobcomp/none #JobCompUser= JobAcctGatherFrequency=30 JobAcctGatherType=jobacct_gather/linux SlurmctldDebug=info SlurmctldLogFile=/var/spool/slurm/slurmctld.log SlurmdDebug=info SlurmdLogFile=/var/spool/slurmd/%h.log #SlurmSchedLogFile= 
#SlurmSchedLogLevel= # # # POWER SAVE SUPPORT FOR IDLE NODES (optional) CpuFreqDef=performance #SuspendProgram= #ResumeProgram= #SuspendTimeout= #ResumeTimeout= #ResumeRate= #SuspendExcNodes= #SuspendExcParts= #SuspendRate= #SuspendTime= # # # COMPUTE NODES {computenodes} PartitionName=workq Nodes={nodelist} Shared=EXCLUSIVE Priority=1 Default=YES DefaultTime=60 MaxTime=24:00:00 State=UP slurm-slurm-15-08-7-1/contribs/cray/slurmconfgen.py.in000066400000000000000000000201451265000126300225710ustar00rootroot00000000000000#!/usr/bin/python # # (c) Copyright 2013 Cray Inc. All Rights Reserved. # # slurmconfgen.py # # A script to generate a slurm configuration file automatically. Should be # run from a service node on the system to be configured. Reads a template # file from the Slurm configuration directory (/etc/opt/slurm) and # writes the filled-in template to stdout. import subprocess, os, shutil, sys, datetime, tempfile, stat, re, time, socket ####################################### # sdb_query ####################################### def sdb_query(query): """ Query the SDB. Returns the results, space separated. """ # Set correct path for isql os.environ['ODBCSYSINI']='/etc/opt/cray/sysadm/' # Call isql isql = subprocess.Popen(["isql", "XTAdmin", "-b", "-v", "-x0x20"], stdin=subprocess.PIPE, stdout=subprocess.PIPE, stderr=subprocess.PIPE) # Execute the query (out, err) = isql.communicate(query) if len(err) > 0: raise Exception(err) return out ####################################### # get_nodes ####################################### def get_nodes(): """ Get the nodes from the SDB. 
Returns a list of tuples with entries for nodeid, memory, cores, sockets, compute units, gpus, mics """ # Query the SDB for the info out = sdb_query("SELECT nodeid,availmem,numcores,sockets,cu, \ IF(type='GPU',avail,0) AS gpu, \ IF(type='MIC',avail,0) AS mic \ FROM attributes LEFT JOIN processor ON nodeid=processor_id \ LEFT JOIN gpus ON nodeid=node_id \ WHERE processor_type='compute' ORDER BY nodeid;") # Now out should contain all the compute node information nodes = [] for line in out.splitlines(): fields = line.split() nodes.append(tuple([int(x) for x in fields])) return nodes ####################################### # split_nodes ####################################### """ Test data from opal-p2: [(24,32768,40,2,20,0,0), (25,32768,40,2,20,0,0), (26,32768,40,2,20,0,0), (27,32768,40,2,20,0,0), (32,32768,16,1,8,1,0), (33,32768,16,1,8,1,0), (34,32768,16,1,8,1,0), (35,32768,16,1,8,1,0), (48,65536,32,2,16,0,0), (49,65536,32,2,16,0,0), (50,65536,32,2,16,0,0), (51,65536,32,2,16,0,0)] """ def split_nodes(nodelist): """ Given a list of nodes as returned by get_nodes, returns a tuple of equivalence class representative list, equivalence class nid list. 
""" class_reps = [] class_nodes = [] for node in nodelist: # Check if this matches an existing representative i = 0 match = False for rep in class_reps: if (node[1:] == rep[1:]): # It matches, add to the nodes for this class class_nodes[i].append(node[0]) match = True break i += 1 # We didn't find a matching equivalence class, make a new one if not match: class_reps.append(node) class_nodes.append([node[0]]) return class_reps, class_nodes ###################################### # range_str ###################################### def range_str(range_start, range_end, field_width): """ Returns a string representation of the given range using the given field width """ if range_end < range_start: raise Exception('Range end before range start') elif range_start == range_end: return "{0:0{1}d}".format(range_end, field_width) elif range_start + 1 == range_end: return "{0:0{2}d},{1:0{2}d}".format(range_start, range_end, field_width) return "{0:0{2}d}-{1:0{2}d}".format(range_start, range_end, field_width) ###################################### # rli_compress ###################################### def rli_compress(nidlist): """ Given a list of node ids, rli compress them into a slurm hostlist (ex. 
list [1,2,3,5] becomes string nid0000[1-3,5]) """ # Determine number of digits in the highest nid number numdigits = len(str(max(nidlist))) if numdigits > 5: raise Exception('Nid number too high') # Create start of the hostlist rli = "nid" + ('0' * (5 - numdigits)) + '[' range_start = nidlist[0] range_end = nidlist[0] add_comma = False for nid in nidlist: # If nid too large, append to rli and start fresh if nid > range_end + 1 or nid < range_end: rli += ("," if add_comma else "") + \ range_str(range_start, range_end, numdigits) add_comma = True range_start = nid range_end = nid # Append the last range rli += ("," if add_comma else "") \ + range_str(range_start, range_end, numdigits) + ']' return rli ####################################### # scale_mem ####################################### def scale_mem(mem): """ Since the SDB memory value differs from /proc/meminfo on the compute nodes, we must scale all memory values for FastSchedule 1 to work """ return mem * 98 / 100 ####################################### # format_nodes ####################################### def format_nodes(class_reps, class_nodes): """ Given a list of class representatives and lists of nodes in those classes, formats a string in slurm.conf format (ex. 
NodeName=nid00[024-027] Sockets=2 CoresPerSocket=10 ThreadsPerCore=2 RealMemory=32768)
	"""
	i = 0
	nodestr = ""
	for rep in class_reps:
		nodeid, memory, cores, sockets, cu, gpu, mic = rep

		# KNC nodes only have half network resources
		if mic > 0:
			gres = "craynetwork:2,mic:{0}".format(mic)
		else:
			gres = "craynetwork:4"
		if gpu > 0:
			gres += ",gpu:{0}".format(gpu)

		nodestr += "NodeName={0} Sockets={1} CoresPerSocket={2} \
ThreadsPerCore={3} Gres={5} # RealMemory={4} (set by FastSchedule 0)\n".format(
			rli_compress(class_nodes[i]), sockets, cu/sockets,
			cores/cu, memory, gres)
		i += 1

	return nodestr

#######################################
# get_gres_types
#######################################
def get_gres_types(class_reps):
	""" Searches the class representatives for different gres (gpu and mic)
	    and returns them in a comma-separated string. """
	gpu = max([rep[5] for rep in class_reps])
	mic = max([rep[6] for rep in class_reps])

	gres = "craynetwork"
	if gpu > 0:
		gres += ",gpu"
	if mic > 0:
		gres += ",mic"
	return gres

#######################################
# get_mem_per_cpu
#######################################
def get_mem_per_cpu(nodes):
	""" Given a list of nodes formatted as in get_nodes, determines the
	    default memory per cpu (mem)/(cores) and max memory per cpu,
	    returned as a tuple """
	defmem = 0
	maxmem = 0
	for node in nodes:
		if node[1] > maxmem:
			maxmem = node[1]
		mem_per_thread = node[1] / node[2]
		if defmem == 0 or mem_per_thread < defmem:
			defmem = mem_per_thread

	return (scale_mem(defmem), scale_mem(maxmem))

#######################################
# cluster_name
#######################################
def get_cluster_name():
	""" Gets the cluster name from /etc/xthostname """
	with open("/etc/xthostname", "r") as xthostname:
		return xthostname.readline().rstrip()

#######################################
# write_slurm_conf
#######################################
def get_slurm_conf(infile, replace):
	""" Reads from infile, replaces following the given dictionary,
	    and returns the result
as a string """ with open(infile, "r") as template: text = template.read() # Using replace is less elegant than using string.format, # but avoids KeyErrors if the user removes keys from # the template file for i, j in replace.iteritems(): text = text.replace(i, j) return text ####################################### # main ####################################### if __name__ == "__main__": # Some constant file names sysconfdir = "@sysconfdir@" slurmconf = "/slurm.conf" slurmconf_template = slurmconf + ".template" # Get nodes using isql nodes = get_nodes() # Split them into equivalence classes class_reps, class_nodes = split_nodes(nodes) # Determine the min and max memory per cpu defmem, maxmem = get_mem_per_cpu(nodes) # Create the replacement dictionary replace = { '{sysconfdir}' : sysconfdir, '{defmem}' : str(defmem), '{maxmem}' : str(maxmem), '{clustername}' : get_cluster_name(), '{computenodes}' : format_nodes(class_reps, class_nodes), '{nodelist}' : rli_compress([node[0] for node in nodes]), '{grestypes}' : get_gres_types(class_reps) } # Read and format the template text = get_slurm_conf(sysconfdir + slurmconf_template, replace) # Just print to stdout print text slurm-slurm-15-08-7-1/contribs/env_cache_builder.c000066400000000000000000000317231265000126300217430ustar00rootroot00000000000000/*****************************************************************************\ * On the cluster's control host as user root, execute: * make -f /dev/null env_cache_builder * ./env_cache_builder ***************************************************************************** * This program is used to build an environment variable cache file for use * with the srun/sbatch --get-user-env option, which is used by Moab to launch * user jobs. srun/sbatch will first attempt to load the user's current * environment by executing "su - -c env". 
If that fails to complete * in a relatively short period of time (currently 3 seconds), srun/sbatch * will attempt to load the user's environment from a cache file located * in the directory StateSaveLocation with a name of the sort "env_". * If that fails, then abort the job request. * * This program can accept a space delimited list of individual users to have * cache files created (e.g. "cache_build alice bob chuck"). If no argument * is given, cache files will be created for all users in the "/etc/passwd" * file. If you see "ERROR" in the output, it means that the user's * environment could not be loaded automatically, typically because their * dot files spawn some other shell. You must explicitly login as the user, * execute "env" and write the output to a file having the same name as the * user in a subdirectory of the configured StateSaveLocation named "env_cache" * (e.g. "/tmp/slurm/env_cache/alice"). The file is only needed on the node * where the Moab daemon executes, typically the control host. ***************************************************************************** * Copyright (C) 2007 The Regents of the University of California. * Copyright (C) 2008 Lawrence Livermore National Security. * Produced at Lawrence Livermore National Laboratory (cf, DISCLAIMER). * Written by Morris Jette . * CODE-OCEC-09-009. All rights reserved. * * This file is part of SLURM, a resource management program. * For details, see . * Please also read the included file: DISCLAIMER. * * SLURM is free software; you can redistribute it and/or modify it under * the terms of the GNU General Public License as published by the Free * Software Foundation; either version 2 of the License, or (at your option) * any later version. 
* * In addition, as a special exception, the copyright holders give permission * to link the code of portions of this program with the OpenSSL library under * certain conditions as described in each individual source file, and * distribute linked combinations including the two. You must obey the GNU * General Public License in all respects for all of the code used other than * OpenSSL. If you modify file(s) with this exception, you may extend this * exception to your version of the file(s), but you are not obligated to do * so. If you do not wish to do so, delete this exception statement from your * version. If you delete this exception statement from all source files in * the program, then also delete it here. * * SLURM is distributed in the hope that it will be useful, but WITHOUT ANY * WARRANTY; without even the implied warranty of MERCHANTABILITY or FITNESS * FOR A PARTICULAR PURPOSE. See the GNU General Public License for more * details. * * You should have received a copy of the GNU General Public License along * with SLURM; if not, write to the Free Software Foundation, Inc., * 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA. 
\*****************************************************************************/
#include #include #include #include #include #include #include #include #include #include #include

#define _DEBUG 0
#define SU_WAIT_MSEC 8000

static long int _build_cache(char *user_name, char *cache_dir);
static int _get_cache_dir(char *buffer, int buf_size);
static void _log_failures(int failures, char *cache_dir);
static int _parse_line(char *in_line, char **user_name, int *user_id);

char *env_loc = NULL;

main (int argc, char **argv)
{
	FILE *passwd_fd;
	char cache_dir[256], in_line[256], *user_name;
	int i, failures = 0, user_cnt = 0, user_id;
	long int delta_t;
	struct stat buf;

	if (geteuid() != (uid_t) 0) {
		printf("Need to run as user root\n");
		exit(1);
	}

	if (_get_cache_dir(cache_dir, sizeof(cache_dir)))
		exit(1);
	strncat(cache_dir, "/env_cache",
		sizeof(cache_dir) - strlen(cache_dir) - 1);
	if (mkdir(cache_dir, 0500) && (errno != EEXIST)) {
		printf("Could not create cache directory %s: %s\n",
		       cache_dir, strerror(errno));
		exit(1);
	}

	if (stat("/bgl", &buf) == 0) {
		printf("BlueGene Note: Execute only on a front-end node, "
		       "not the service node\n");
		printf("               User logins to the service node are "
		       "disabled\n\n");
	}

	if (stat("/bin/su", &buf)) {
		printf("Could not locate command: /bin/su\n");
		exit(1);
	}
	if (stat("/bin/echo", &buf)) {
		printf("Could not locate command: /bin/echo\n");
		exit(1);
	}
	if (stat("/bin/env", &buf) == 0)
		env_loc = "/bin/env";
	else if (stat("/usr/bin/env", &buf) == 0)
		env_loc = "/usr/bin/env";
	else {
		printf("Could not locate command: env\n");
		exit(1);
	}

	printf("Building user environment cache files for Moab/Slurm.\n");
	printf("This will take a while.\n\n");

	for (i=1; i 1) {
		_log_failures(failures, cache_dir);
		exit(0);
	}

	passwd_fd = fopen("/etc/passwd", "r");
	if (!passwd_fd) {
		perror("fopen(/etc/passwd)");
		exit(1);
	}
	while (fgets(in_line, sizeof(in_line), passwd_fd)) {
		if (_parse_line(in_line, &user_name, &user_id) < 0)
			continue;
		if (user_id <= 100)
			continue;
		delta_t = _build_cache(user_name,
cache_dir); if (delta_t == -1) failures++; user_cnt++; if ((user_cnt % 100) == 0) printf("Processed %d users...\n", user_cnt); if (delta_t < ((SU_WAIT_MSEC * 0.8) * 1000)) continue; printf("WARNING: user %-8s time %ld usec\n", user_name, delta_t); } fclose(passwd_fd); _log_failures(failures, cache_dir); } static void _log_failures(int failures, char *cache_dir) { if (failures) { printf("\n"); printf("Some user environments could not be loaded.\n"); printf("Manually run 'env' for those %d users.\n", failures); printf("Write the output to a file with the same name as " "the user in the\n %s directory\n", cache_dir); } else { printf("\n"); printf("All user environments successfully loaded.\n"); printf("Files written to the %s directory\n", cache_dir); } } /* Given a line from /etc/passwd, sets the user_name and user_id * RET -1 if user can't login, 0 otherwise */ static int _parse_line(char *in_line, char **user_name, int *user_id) { char *tok, *shell; /* user name */ *user_name = strtok(in_line, ":"); (void) strtok(NULL, ":"); /* uid */ tok = strtok(NULL, ":"); if (tok) *user_id = atoi(tok); else { printf("ERROR: parsing /etc/passwd: %s\n", in_line); *user_id = 0; } (void) strtok(NULL, ":"); /* gid */ (void) strtok(NULL, ":"); /* name */ (void) strtok(NULL, ":"); /* home */ shell = strtok(NULL, ":"); if (shell) { tok = strchr(shell, '\n'); if (tok) tok[0] = '\0'; if ((strcmp(shell, "/sbin/nologin") == 0) || (strcmp(shell, "/bin/false") == 0)) return -1; } return 0; } /* For a given user_name, get his environment variable by executing * "su - -c env" and store the result in * cache_dir/env_ * Returns time to perform the operation in usec or -1 on error */ static long int _build_cache(char *user_name, char *cache_dir) { FILE *cache; char *line, *last, out_file[BUFSIZ], buffer[64 * 1024]; char *starttoken = "XXXXSLURMSTARTPARSINGHEREXXXX"; char *stoptoken = "XXXXSLURMSTOPPARSINGHEREXXXXX"; int fildes[2], found, fval, len, rc, timeleft; int buf_read, buf_rem; pid_t 
child;
	struct timeval begin, now;
	struct pollfd ufds;
	long int delta_t;

	gettimeofday(&begin, NULL);
	if (pipe(fildes) < 0) {
		perror("pipe");
		return -1;
	}

	child = fork();
	if (child == -1) {
		perror("fork");
		return -1;
	}
	if (child == 0) {
		close(0);
		open("/dev/null", O_RDONLY);
		dup2(fildes[1], 1);
		close(2);
		open("/dev/null", O_WRONLY);
		snprintf(buffer, sizeof(buffer),
			 "/bin/echo; /bin/echo; /bin/echo; "
			 "/bin/echo %s; %s; /bin/echo %s",
			 starttoken, env_loc, stoptoken);
#ifdef LOAD_ENV_NO_LOGIN
		execl("/bin/su", "su", user_name, "-c", buffer, NULL);
#else
		execl("/bin/su", "su", "-", user_name, "-c", buffer, NULL);
#endif
		exit(1);
	}

	close(fildes[1]);
	if ((fval = fcntl(fildes[0], F_GETFL, 0)) >= 0)
		fcntl(fildes[0], F_SETFL, fval | O_NONBLOCK);
	ufds.fd = fildes[0];
	ufds.events = POLLIN;
	ufds.revents = 0;

	/* Read all of the output from /bin/su into buffer */
	found = 0;
	buf_read = 0;
	bzero(buffer, sizeof(buffer));
	while (1) {
		gettimeofday(&now, NULL);
		timeleft = SU_WAIT_MSEC * 10;
		timeleft -= (now.tv_sec - begin.tv_sec) * 1000;
		timeleft -= (now.tv_usec - begin.tv_usec) / 1000;
		if (timeleft <= 0) {
#if _DEBUG
			printf("timeout1 for %s\n", user_name);
#endif
			break;
		}
		if ((rc = poll(&ufds, 1, timeleft)) <= 0) {
			if (rc == 0) {
#if _DEBUG
				printf("timeout2 for %s\n", user_name);
#endif
				break;
			}
			if ((errno == EINTR) || (errno == EAGAIN))
				continue;
			perror("poll");
			break;
		}
		if (!(ufds.revents & POLLIN)) {
			if (ufds.revents & POLLHUP) {	/* EOF */
#if _DEBUG
				printf("POLLHUP for %s\n", user_name);
#endif
				found = 1;	/* success */
			} else if (ufds.revents & POLLERR) {
				printf("ERROR: POLLERR for %s\n", user_name);
			} else {
				printf("ERROR: poll() revents=%d for %s\n",
				       ufds.revents, user_name);
			}
			break;
		}
		buf_rem = sizeof(buffer) - buf_read;
		if (buf_rem == 0) {
			printf("ERROR: buffer overflow for %s\n", user_name);
			break;
		}
		rc = read(fildes[0], &buffer[buf_read], buf_rem);
		if (rc > 0)
			buf_read += rc;
		else if (rc == 0) {	/* EOF */
#if _DEBUG
			printf("EOF for %s\n", user_name);
#endif
			found = 1;	/*
success */ break; } else { /* error */ perror("read"); break; } } close(fildes[0]); if (!found) { printf("***ERROR: Failed to load current user environment " "variables for %s\n", user_name); return -1; } /* First look for the start token in the output */ len = strlen(starttoken); found = 0; line = strtok_r(buffer, "\n", &last); while (!found && line) { if (!strncmp(line, starttoken, len)) { found = 1; break; } line = strtok_r(NULL, "\n", &last); } if (!found) { printf("***ERROR: Failed to get current user environment " "variables for %s\n", user_name); return -1; } snprintf(out_file, sizeof(out_file), "%s/%s", cache_dir, user_name); cache = fopen(out_file, "w"); if (!cache) { printf("ERROR: Could not create cache file %s for %s: %s\n", out_file, user_name, strerror(errno)); return -1; } chmod(out_file, 0600); /* Process environment variables until we find the stop token */ len = strlen(stoptoken); found = 0; line = strtok_r(NULL, "\n", &last); while (!found && line) { if (!strncmp(line, stoptoken, len)) { found = 1; break; } if (fprintf(cache, "%s\n",line) < 0) { printf("ERROR: Could not write cache file %s " "for %s: %s\n", out_file, user_name, strerror(errno)); found = 1; /* quit now */ } line = strtok_r(NULL, "\n", &last); } fclose(cache); waitpid(-1, NULL, WNOHANG); gettimeofday(&now, NULL); delta_t = (now.tv_sec - begin.tv_sec) * 1000000; delta_t += now.tv_usec - begin.tv_usec; if (!found) { printf("***ERROR: Failed to write all user environment " "variables for %s\n", user_name); if (delta_t < (SU_WAIT_MSEC * 1000)) return (SU_WAIT_MSEC * 1000); } return delta_t; } /* Get configured StateSaveLocation. * User environment variable caches get created there. * Returns 0 on success, -1 on error. 
*/ static int _get_cache_dir(char *buffer, int buf_size) { FILE *scontrol; int fildes[2]; pid_t child, fval; char line[BUFSIZ], *fname; if (pipe(fildes) < 0) { perror("pipe"); return -1; } child = fork(); if (child == -1) { perror("fork"); return -1; } if (child == 0) { close(0); open("/dev/null", O_RDONLY); dup2(fildes[1], 1); close(2); open("/dev/null", O_WRONLY); execlp("scontrol", "scontrol", "show", "config", NULL); perror("execl(scontrol)"); return -1; } close(fildes[1]); scontrol = fdopen(fildes[0], "r"); buffer[0] = '\0'; while (fgets(line, BUFSIZ, scontrol)) { if (strncmp(line, "StateSaveLocation", 17)) continue; fname = strchr(line, '\n'); if (fname) fname[0] = '\0'; fname = strchr(line, '/'); if (fname) strncpy(buffer, fname, buf_size); break; } close(fildes[0]); if (!buffer[0]) { printf("ERROR: Failed to get StateSaveLocation\n"); close(fildes[0]); return -1; } waitpid(-1, NULL, WNOHANG); return 0; } slurm-slurm-15-08-7-1/contribs/lua/000077500000000000000000000000001265000126300167315ustar00rootroot00000000000000slurm-slurm-15-08-7-1/contribs/lua/Makefile.am000066400000000000000000000001131265000126300207600ustar00rootroot00000000000000EXTRA_DIST = \ job_submit.license.lua \ job_submit.lua \ proctrack.lua slurm-slurm-15-08-7-1/contribs/lua/Makefile.in000066400000000000000000000417621265000126300210100ustar00rootroot00000000000000# Makefile.in generated by automake 1.14.1 from Makefile.am. # @configure_input@ # Copyright (C) 1994-2013 Free Software Foundation, Inc. # This Makefile.in is free software; the Free Software Foundation # gives unlimited permission to copy and/or distribute it, # with or without modifications, as long as this notice is preserved. # This program is distributed in the hope that it will be useful, # but WITHOUT ANY WARRANTY, to the extent permitted by law; without # even the implied warranty of MERCHANTABILITY or FITNESS FOR A # PARTICULAR PURPOSE. 
@SET_MAKE@ VPATH = @srcdir@ am__is_gnu_make = test -n '$(MAKEFILE_LIST)' && test -n '$(MAKELEVEL)' am__make_running_with_option = \ case $${target_option-} in \ ?) ;; \ *) echo "am__make_running_with_option: internal error: invalid" \ "target option '$${target_option-}' specified" >&2; \ exit 1;; \ esac; \ has_opt=no; \ sane_makeflags=$$MAKEFLAGS; \ if $(am__is_gnu_make); then \ sane_makeflags=$$MFLAGS; \ else \ case $$MAKEFLAGS in \ *\\[\ \ ]*) \ bs=\\; \ sane_makeflags=`printf '%s\n' "$$MAKEFLAGS" \ | sed "s/$$bs$$bs[$$bs $$bs ]*//g"`;; \ esac; \ fi; \ skip_next=no; \ strip_trailopt () \ { \ flg=`printf '%s\n' "$$flg" | sed "s/$$1.*$$//"`; \ }; \ for flg in $$sane_makeflags; do \ test $$skip_next = yes && { skip_next=no; continue; }; \ case $$flg in \ *=*|--*) continue;; \ -*I) strip_trailopt 'I'; skip_next=yes;; \ -*I?*) strip_trailopt 'I';; \ -*O) strip_trailopt 'O'; skip_next=yes;; \ -*O?*) strip_trailopt 'O';; \ -*l) strip_trailopt 'l'; skip_next=yes;; \ -*l?*) strip_trailopt 'l';; \ -[dEDm]) skip_next=yes;; \ -[JT]) skip_next=yes;; \ esac; \ case $$flg in \ *$$target_option*) has_opt=yes; break;; \ esac; \ done; \ test $$has_opt = yes am__make_dryrun = (target_option=n; $(am__make_running_with_option)) am__make_keepgoing = (target_option=k; $(am__make_running_with_option)) pkgdatadir = $(datadir)/@PACKAGE@ pkgincludedir = $(includedir)/@PACKAGE@ pkglibdir = $(libdir)/@PACKAGE@ pkglibexecdir = $(libexecdir)/@PACKAGE@ am__cd = CDPATH="$${ZSH_VERSION+.}$(PATH_SEPARATOR)" && cd install_sh_DATA = $(install_sh) -c -m 644 install_sh_PROGRAM = $(install_sh) -c install_sh_SCRIPT = $(install_sh) -c INSTALL_HEADER = $(INSTALL_DATA) transform = $(program_transform_name) NORMAL_INSTALL = : PRE_INSTALL = : POST_INSTALL = : NORMAL_UNINSTALL = : PRE_UNINSTALL = : POST_UNINSTALL = : build_triplet = @build@ host_triplet = @host@ target_triplet = @target@ subdir = contribs/lua DIST_COMMON = $(srcdir)/Makefile.in $(srcdir)/Makefile.am ACLOCAL_M4 = $(top_srcdir)/aclocal.m4 
am__aclocal_m4_deps = $(top_srcdir)/auxdir/ax_lib_hdf5.m4 \ $(top_srcdir)/auxdir/ax_pthread.m4 \ $(top_srcdir)/auxdir/libtool.m4 \ $(top_srcdir)/auxdir/ltoptions.m4 \ $(top_srcdir)/auxdir/ltsugar.m4 \ $(top_srcdir)/auxdir/ltversion.m4 \ $(top_srcdir)/auxdir/lt~obsolete.m4 \ $(top_srcdir)/auxdir/slurm.m4 \ $(top_srcdir)/auxdir/x_ac__system_configuration.m4 \ $(top_srcdir)/auxdir/x_ac_affinity.m4 \ $(top_srcdir)/auxdir/x_ac_aix.m4 \ $(top_srcdir)/auxdir/x_ac_blcr.m4 \ $(top_srcdir)/auxdir/x_ac_bluegene.m4 \ $(top_srcdir)/auxdir/x_ac_cflags.m4 \ $(top_srcdir)/auxdir/x_ac_cray.m4 \ $(top_srcdir)/auxdir/x_ac_curl.m4 \ $(top_srcdir)/auxdir/x_ac_databases.m4 \ $(top_srcdir)/auxdir/x_ac_debug.m4 \ $(top_srcdir)/auxdir/x_ac_dlfcn.m4 \ $(top_srcdir)/auxdir/x_ac_env.m4 \ $(top_srcdir)/auxdir/x_ac_freeipmi.m4 \ $(top_srcdir)/auxdir/x_ac_gpl_licensed.m4 \ $(top_srcdir)/auxdir/x_ac_hwloc.m4 \ $(top_srcdir)/auxdir/x_ac_iso.m4 \ $(top_srcdir)/auxdir/x_ac_json.m4 \ $(top_srcdir)/auxdir/x_ac_lua.m4 \ $(top_srcdir)/auxdir/x_ac_man2html.m4 \ $(top_srcdir)/auxdir/x_ac_munge.m4 \ $(top_srcdir)/auxdir/x_ac_ncurses.m4 \ $(top_srcdir)/auxdir/x_ac_netloc.m4 \ $(top_srcdir)/auxdir/x_ac_nrt.m4 \ $(top_srcdir)/auxdir/x_ac_ofed.m4 \ $(top_srcdir)/auxdir/x_ac_pam.m4 \ $(top_srcdir)/auxdir/x_ac_printf_null.m4 \ $(top_srcdir)/auxdir/x_ac_ptrace.m4 \ $(top_srcdir)/auxdir/x_ac_readline.m4 \ $(top_srcdir)/auxdir/x_ac_rrdtool.m4 \ $(top_srcdir)/auxdir/x_ac_setpgrp.m4 \ $(top_srcdir)/auxdir/x_ac_setproctitle.m4 \ $(top_srcdir)/auxdir/x_ac_sgi_job.m4 \ $(top_srcdir)/auxdir/x_ac_slurm_ssl.m4 \ $(top_srcdir)/auxdir/x_ac_sun_const.m4 \ $(top_srcdir)/configure.ac am__configure_deps = $(am__aclocal_m4_deps) $(CONFIGURE_DEPENDENCIES) \ $(ACLOCAL_M4) mkinstalldirs = $(install_sh) -d CONFIG_HEADER = $(top_builddir)/config.h $(top_builddir)/slurm/slurm.h CONFIG_CLEAN_FILES = CONFIG_CLEAN_VPATH_FILES = AM_V_P = $(am__v_P_@AM_V@) am__v_P_ = $(am__v_P_@AM_DEFAULT_V@) am__v_P_0 = false am__v_P_1 = : AM_V_GEN = 
$(am__v_GEN_@AM_V@) am__v_GEN_ = $(am__v_GEN_@AM_DEFAULT_V@) am__v_GEN_0 = @echo " GEN " $@; am__v_GEN_1 = AM_V_at = $(am__v_at_@AM_V@) am__v_at_ = $(am__v_at_@AM_DEFAULT_V@) am__v_at_0 = @ am__v_at_1 = SOURCES = DIST_SOURCES = am__can_run_installinfo = \ case $$AM_UPDATE_INFO_DIR in \ n|no|NO) false;; \ *) (install-info --version) >/dev/null 2>&1;; \ esac am__tagged_files = $(HEADERS) $(SOURCES) $(TAGS_FILES) $(LISP) DISTFILES = $(DIST_COMMON) $(DIST_SOURCES) $(TEXINFOS) $(EXTRA_DIST) ACLOCAL = @ACLOCAL@ AMTAR = @AMTAR@ AM_DEFAULT_VERBOSITY = @AM_DEFAULT_VERBOSITY@ AR = @AR@ AUTHD_CFLAGS = @AUTHD_CFLAGS@ AUTHD_LIBS = @AUTHD_LIBS@ AUTOCONF = @AUTOCONF@ AUTOHEADER = @AUTOHEADER@ AUTOMAKE = @AUTOMAKE@ AWK = @AWK@ BGL_LOADED = @BGL_LOADED@ BGQ_LOADED = @BGQ_LOADED@ BG_INCLUDES = @BG_INCLUDES@ BG_LDFLAGS = @BG_LDFLAGS@ BG_L_P_LOADED = @BG_L_P_LOADED@ BLCR_CPPFLAGS = @BLCR_CPPFLAGS@ BLCR_HOME = @BLCR_HOME@ BLCR_LDFLAGS = @BLCR_LDFLAGS@ BLCR_LIBS = @BLCR_LIBS@ BLUEGENE_LOADED = @BLUEGENE_LOADED@ CC = @CC@ CCDEPMODE = @CCDEPMODE@ CFLAGS = @CFLAGS@ CHECK_CFLAGS = @CHECK_CFLAGS@ CHECK_LIBS = @CHECK_LIBS@ CMD_LDFLAGS = @CMD_LDFLAGS@ CPP = @CPP@ CPPFLAGS = @CPPFLAGS@ CRAY_JOB_CPPFLAGS = @CRAY_JOB_CPPFLAGS@ CRAY_JOB_LDFLAGS = @CRAY_JOB_LDFLAGS@ CRAY_SELECT_CPPFLAGS = @CRAY_SELECT_CPPFLAGS@ CRAY_SELECT_LDFLAGS = @CRAY_SELECT_LDFLAGS@ CRAY_SWITCH_CPPFLAGS = @CRAY_SWITCH_CPPFLAGS@ CRAY_SWITCH_LDFLAGS = @CRAY_SWITCH_LDFLAGS@ CRAY_TASK_CPPFLAGS = @CRAY_TASK_CPPFLAGS@ CRAY_TASK_LDFLAGS = @CRAY_TASK_LDFLAGS@ CXX = @CXX@ CXXCPP = @CXXCPP@ CXXDEPMODE = @CXXDEPMODE@ CXXFLAGS = @CXXFLAGS@ CYGPATH_W = @CYGPATH_W@ DATAWARP_CPPFLAGS = @DATAWARP_CPPFLAGS@ DATAWARP_LDFLAGS = @DATAWARP_LDFLAGS@ DEFS = @DEFS@ DEPDIR = @DEPDIR@ DLLTOOL = @DLLTOOL@ DL_LIBS = @DL_LIBS@ DSYMUTIL = @DSYMUTIL@ DUMPBIN = @DUMPBIN@ ECHO_C = @ECHO_C@ ECHO_N = @ECHO_N@ ECHO_T = @ECHO_T@ EGREP = @EGREP@ EXEEXT = @EXEEXT@ FGREP = @FGREP@ FREEIPMI_CPPFLAGS = @FREEIPMI_CPPFLAGS@ FREEIPMI_LDFLAGS = @FREEIPMI_LDFLAGS@ 
FREEIPMI_LIBS = @FREEIPMI_LIBS@ GLIB_CFLAGS = @GLIB_CFLAGS@ GLIB_COMPILE_RESOURCES = @GLIB_COMPILE_RESOURCES@ GLIB_GENMARSHAL = @GLIB_GENMARSHAL@ GLIB_LIBS = @GLIB_LIBS@ GLIB_MKENUMS = @GLIB_MKENUMS@ GOBJECT_QUERY = @GOBJECT_QUERY@ GREP = @GREP@ GTK_CFLAGS = @GTK_CFLAGS@ GTK_LIBS = @GTK_LIBS@ H5CC = @H5CC@ H5FC = @H5FC@ HAVEMYSQLCONFIG = @HAVEMYSQLCONFIG@ HAVE_AIX = @HAVE_AIX@ HAVE_MAN2HTML = @HAVE_MAN2HTML@ HAVE_NRT = @HAVE_NRT@ HAVE_OPENSSL = @HAVE_OPENSSL@ HAVE_SOME_CURSES = @HAVE_SOME_CURSES@ HDF5_CC = @HDF5_CC@ HDF5_CFLAGS = @HDF5_CFLAGS@ HDF5_CPPFLAGS = @HDF5_CPPFLAGS@ HDF5_FC = @HDF5_FC@ HDF5_FFLAGS = @HDF5_FFLAGS@ HDF5_FLIBS = @HDF5_FLIBS@ HDF5_LDFLAGS = @HDF5_LDFLAGS@ HDF5_LIBS = @HDF5_LIBS@ HDF5_VERSION = @HDF5_VERSION@ HWLOC_CPPFLAGS = @HWLOC_CPPFLAGS@ HWLOC_LDFLAGS = @HWLOC_LDFLAGS@ HWLOC_LIBS = @HWLOC_LIBS@ INSTALL = @INSTALL@ INSTALL_DATA = @INSTALL_DATA@ INSTALL_PROGRAM = @INSTALL_PROGRAM@ INSTALL_SCRIPT = @INSTALL_SCRIPT@ INSTALL_STRIP_PROGRAM = @INSTALL_STRIP_PROGRAM@ JSON_CPPFLAGS = @JSON_CPPFLAGS@ JSON_LDFLAGS = @JSON_LDFLAGS@ LD = @LD@ LDFLAGS = @LDFLAGS@ LIBCURL = @LIBCURL@ LIBCURL_CPPFLAGS = @LIBCURL_CPPFLAGS@ LIBOBJS = @LIBOBJS@ LIBS = @LIBS@ LIBTOOL = @LIBTOOL@ LIB_LDFLAGS = @LIB_LDFLAGS@ LIPO = @LIPO@ LN_S = @LN_S@ LTLIBOBJS = @LTLIBOBJS@ MAINT = @MAINT@ MAKEINFO = @MAKEINFO@ MANIFEST_TOOL = @MANIFEST_TOOL@ MKDIR_P = @MKDIR_P@ MUNGE_CPPFLAGS = @MUNGE_CPPFLAGS@ MUNGE_DIR = @MUNGE_DIR@ MUNGE_LDFLAGS = @MUNGE_LDFLAGS@ MUNGE_LIBS = @MUNGE_LIBS@ MYSQL_CFLAGS = @MYSQL_CFLAGS@ MYSQL_LIBS = @MYSQL_LIBS@ NCURSES = @NCURSES@ NETLOC_CPPFLAGS = @NETLOC_CPPFLAGS@ NETLOC_LDFLAGS = @NETLOC_LDFLAGS@ NETLOC_LIBS = @NETLOC_LIBS@ NM = @NM@ NMEDIT = @NMEDIT@ NRT_CPPFLAGS = @NRT_CPPFLAGS@ NUMA_LIBS = @NUMA_LIBS@ OBJDUMP = @OBJDUMP@ OBJEXT = @OBJEXT@ OFED_CPPFLAGS = @OFED_CPPFLAGS@ OFED_LDFLAGS = @OFED_LDFLAGS@ OFED_LIBS = @OFED_LIBS@ OTOOL = @OTOOL@ OTOOL64 = @OTOOL64@ PACKAGE = @PACKAGE@ PACKAGE_BUGREPORT = @PACKAGE_BUGREPORT@ PACKAGE_NAME = @PACKAGE_NAME@ 
PACKAGE_STRING = @PACKAGE_STRING@ PACKAGE_TARNAME = @PACKAGE_TARNAME@ PACKAGE_URL = @PACKAGE_URL@ PACKAGE_VERSION = @PACKAGE_VERSION@ PAM_DIR = @PAM_DIR@ PAM_LIBS = @PAM_LIBS@ PATH_SEPARATOR = @PATH_SEPARATOR@ PKG_CONFIG = @PKG_CONFIG@ PKG_CONFIG_LIBDIR = @PKG_CONFIG_LIBDIR@ PKG_CONFIG_PATH = @PKG_CONFIG_PATH@ PROCTRACKDIR = @PROCTRACKDIR@ PROJECT = @PROJECT@ PTHREAD_CC = @PTHREAD_CC@ PTHREAD_CFLAGS = @PTHREAD_CFLAGS@ PTHREAD_LIBS = @PTHREAD_LIBS@ RANLIB = @RANLIB@ READLINE_LIBS = @READLINE_LIBS@ REAL_BGQ_LOADED = @REAL_BGQ_LOADED@ REAL_BG_L_P_LOADED = @REAL_BG_L_P_LOADED@ RELEASE = @RELEASE@ RRDTOOL_CPPFLAGS = @RRDTOOL_CPPFLAGS@ RRDTOOL_LDFLAGS = @RRDTOOL_LDFLAGS@ RRDTOOL_LIBS = @RRDTOOL_LIBS@ RUNJOB_LDFLAGS = @RUNJOB_LDFLAGS@ SED = @SED@ SEMAPHORE_LIBS = @SEMAPHORE_LIBS@ SEMAPHORE_SOURCES = @SEMAPHORE_SOURCES@ SET_MAKE = @SET_MAKE@ SHELL = @SHELL@ SLEEP_CMD = @SLEEP_CMD@ SLURMCTLD_PORT = @SLURMCTLD_PORT@ SLURMCTLD_PORT_COUNT = @SLURMCTLD_PORT_COUNT@ SLURMDBD_PORT = @SLURMDBD_PORT@ SLURMD_PORT = @SLURMD_PORT@ SLURM_API_AGE = @SLURM_API_AGE@ SLURM_API_CURRENT = @SLURM_API_CURRENT@ SLURM_API_MAJOR = @SLURM_API_MAJOR@ SLURM_API_REVISION = @SLURM_API_REVISION@ SLURM_API_VERSION = @SLURM_API_VERSION@ SLURM_MAJOR = @SLURM_MAJOR@ SLURM_MICRO = @SLURM_MICRO@ SLURM_MINOR = @SLURM_MINOR@ SLURM_PREFIX = @SLURM_PREFIX@ SLURM_VERSION_NUMBER = @SLURM_VERSION_NUMBER@ SLURM_VERSION_STRING = @SLURM_VERSION_STRING@ SO_LDFLAGS = @SO_LDFLAGS@ SSL_CPPFLAGS = @SSL_CPPFLAGS@ SSL_LDFLAGS = @SSL_LDFLAGS@ SSL_LIBS = @SSL_LIBS@ STRIP = @STRIP@ SUCMD = @SUCMD@ UTIL_LIBS = @UTIL_LIBS@ VERSION = @VERSION@ _libcurl_config = @_libcurl_config@ abs_builddir = @abs_builddir@ abs_srcdir = @abs_srcdir@ abs_top_builddir = @abs_top_builddir@ abs_top_srcdir = @abs_top_srcdir@ ac_ct_AR = @ac_ct_AR@ ac_ct_CC = @ac_ct_CC@ ac_ct_CXX = @ac_ct_CXX@ ac_ct_DUMPBIN = @ac_ct_DUMPBIN@ ac_have_man2html = @ac_have_man2html@ am__include = @am__include@ am__leading_dot = @am__leading_dot@ am__quote = @am__quote@ 
am__tar = @am__tar@ am__untar = @am__untar@ ax_pthread_config = @ax_pthread_config@ bindir = @bindir@ build = @build@ build_alias = @build_alias@ build_cpu = @build_cpu@ build_os = @build_os@ build_vendor = @build_vendor@ builddir = @builddir@ datadir = @datadir@ datarootdir = @datarootdir@ docdir = @docdir@ dvidir = @dvidir@ exec_prefix = @exec_prefix@ host = @host@ host_alias = @host_alias@ host_cpu = @host_cpu@ host_os = @host_os@ host_vendor = @host_vendor@ htmldir = @htmldir@ includedir = @includedir@ infodir = @infodir@ install_sh = @install_sh@ libdir = @libdir@ libexecdir = @libexecdir@ localedir = @localedir@ localstatedir = @localstatedir@ lua_CFLAGS = @lua_CFLAGS@ lua_LIBS = @lua_LIBS@ mandir = @mandir@ mkdir_p = @mkdir_p@ oldincludedir = @oldincludedir@ pdfdir = @pdfdir@ prefix = @prefix@ program_transform_name = @program_transform_name@ psdir = @psdir@ sbindir = @sbindir@ sharedstatedir = @sharedstatedir@ srcdir = @srcdir@ sysconfdir = @sysconfdir@ target = @target@ target_alias = @target_alias@ target_cpu = @target_cpu@ target_os = @target_os@ target_vendor = @target_vendor@ top_build_prefix = @top_build_prefix@ top_builddir = @top_builddir@ top_srcdir = @top_srcdir@ EXTRA_DIST = \ job_submit.license.lua \ job_submit.lua \ proctrack.lua all: all-am .SUFFIXES: $(srcdir)/Makefile.in: @MAINTAINER_MODE_TRUE@ $(srcdir)/Makefile.am $(am__configure_deps) @for dep in $?; do \ case '$(am__configure_deps)' in \ *$$dep*) \ ( cd $(top_builddir) && $(MAKE) $(AM_MAKEFLAGS) am--refresh ) \ && { if test -f $@; then exit 0; else break; fi; }; \ exit 1;; \ esac; \ done; \ echo ' cd $(top_srcdir) && $(AUTOMAKE) --gnu contribs/lua/Makefile'; \ $(am__cd) $(top_srcdir) && \ $(AUTOMAKE) --gnu contribs/lua/Makefile .PRECIOUS: Makefile Makefile: $(srcdir)/Makefile.in $(top_builddir)/config.status @case '$?' 
in \ *config.status*) \ cd $(top_builddir) && $(MAKE) $(AM_MAKEFLAGS) am--refresh;; \ *) \ echo ' cd $(top_builddir) && $(SHELL) ./config.status $(subdir)/$@ $(am__depfiles_maybe)'; \ cd $(top_builddir) && $(SHELL) ./config.status $(subdir)/$@ $(am__depfiles_maybe);; \ esac; $(top_builddir)/config.status: $(top_srcdir)/configure $(CONFIG_STATUS_DEPENDENCIES) cd $(top_builddir) && $(MAKE) $(AM_MAKEFLAGS) am--refresh $(top_srcdir)/configure: @MAINTAINER_MODE_TRUE@ $(am__configure_deps) cd $(top_builddir) && $(MAKE) $(AM_MAKEFLAGS) am--refresh $(ACLOCAL_M4): @MAINTAINER_MODE_TRUE@ $(am__aclocal_m4_deps) cd $(top_builddir) && $(MAKE) $(AM_MAKEFLAGS) am--refresh $(am__aclocal_m4_deps): mostlyclean-libtool: -rm -f *.lo clean-libtool: -rm -rf .libs _libs tags TAGS: ctags CTAGS: cscope cscopelist: distdir: $(DISTFILES) @srcdirstrip=`echo "$(srcdir)" | sed 's/[].[^$$\\*]/\\\\&/g'`; \ topsrcdirstrip=`echo "$(top_srcdir)" | sed 's/[].[^$$\\*]/\\\\&/g'`; \ list='$(DISTFILES)'; \ dist_files=`for file in $$list; do echo $$file; done | \ sed -e "s|^$$srcdirstrip/||;t" \ -e "s|^$$topsrcdirstrip/|$(top_builddir)/|;t"`; \ case $$dist_files in \ */*) $(MKDIR_P) `echo "$$dist_files" | \ sed '/\//!d;s|^|$(distdir)/|;s,/[^/]*$$,,' | \ sort -u` ;; \ esac; \ for file in $$dist_files; do \ if test -f $$file || test -d $$file; then d=.; else d=$(srcdir); fi; \ if test -d $$d/$$file; then \ dir=`echo "/$$file" | sed -e 's,/[^/]*$$,,'`; \ if test -d "$(distdir)/$$file"; then \ find "$(distdir)/$$file" -type d ! -perm -700 -exec chmod u+rwx {} \;; \ fi; \ if test -d $(srcdir)/$$file && test $$d != $(srcdir); then \ cp -fpR $(srcdir)/$$file "$(distdir)$$dir" || exit 1; \ find "$(distdir)/$$file" -type d ! 
-perm -700 -exec chmod u+rwx {} \;; \ fi; \ cp -fpR $$d/$$file "$(distdir)$$dir" || exit 1; \ else \ test -f "$(distdir)/$$file" \ || cp -p $$d/$$file "$(distdir)/$$file" \ || exit 1; \ fi; \ done check-am: all-am check: check-am all-am: Makefile installdirs: install: install-am install-exec: install-exec-am install-data: install-data-am uninstall: uninstall-am install-am: all-am @$(MAKE) $(AM_MAKEFLAGS) install-exec-am install-data-am installcheck: installcheck-am install-strip: if test -z '$(STRIP)'; then \ $(MAKE) $(AM_MAKEFLAGS) INSTALL_PROGRAM="$(INSTALL_STRIP_PROGRAM)" \ install_sh_PROGRAM="$(INSTALL_STRIP_PROGRAM)" INSTALL_STRIP_FLAG=-s \ install; \ else \ $(MAKE) $(AM_MAKEFLAGS) INSTALL_PROGRAM="$(INSTALL_STRIP_PROGRAM)" \ install_sh_PROGRAM="$(INSTALL_STRIP_PROGRAM)" INSTALL_STRIP_FLAG=-s \ "INSTALL_PROGRAM_ENV=STRIPPROG='$(STRIP)'" install; \ fi mostlyclean-generic: clean-generic: distclean-generic: -test -z "$(CONFIG_CLEAN_FILES)" || rm -f $(CONFIG_CLEAN_FILES) -test . = "$(srcdir)" || test -z "$(CONFIG_CLEAN_VPATH_FILES)" || rm -f $(CONFIG_CLEAN_VPATH_FILES) maintainer-clean-generic: @echo "This command is intended for maintainers to use" @echo "it deletes files that may require special tools to rebuild." 
clean: clean-am clean-am: clean-generic clean-libtool mostlyclean-am distclean: distclean-am -rm -f Makefile distclean-am: clean-am distclean-generic dvi: dvi-am dvi-am: html: html-am html-am: info: info-am info-am: install-data-am: install-dvi: install-dvi-am install-dvi-am: install-exec-am: install-html: install-html-am install-html-am: install-info: install-info-am install-info-am: install-man: install-pdf: install-pdf-am install-pdf-am: install-ps: install-ps-am install-ps-am: installcheck-am: maintainer-clean: maintainer-clean-am -rm -f Makefile maintainer-clean-am: distclean-am maintainer-clean-generic mostlyclean: mostlyclean-am mostlyclean-am: mostlyclean-generic mostlyclean-libtool pdf: pdf-am pdf-am: ps: ps-am ps-am: uninstall-am: .MAKE: install-am install-strip .PHONY: all all-am check check-am clean clean-generic clean-libtool \ cscopelist-am ctags-am distclean distclean-generic \ distclean-libtool distdir dvi dvi-am html html-am info info-am \ install install-am install-data install-data-am install-dvi \ install-dvi-am install-exec install-exec-am install-html \ install-html-am install-info install-info-am install-man \ install-pdf install-pdf-am install-ps install-ps-am \ install-strip installcheck installcheck-am installdirs \ maintainer-clean maintainer-clean-generic mostlyclean \ mostlyclean-generic mostlyclean-libtool pdf pdf-am ps ps-am \ tags-am uninstall uninstall-am # Tell versions [3.59,3.63) of GNU make to not export all variables. # Otherwise a system limit (for SysV at least) may be exceeded. .NOEXPORT: slurm-slurm-15-08-7-1/contribs/lua/job_submit.license.lua000066400000000000000000000040041265000126300232100ustar00rootroot00000000000000--[[ Example lua script demonstrating the SLURM job_submit/lua interface. This is only an example, not meant for use in its current form. For use, this script should be copied into a file name "job_submit.lua" in the same directory as the SLURM configuration file, slurm.conf. 
--]] function _limit_license_cnt(orig_string, license_name, max_count) local i = 0 local j = 0 local val = 0 if orig_string == nil then return 0 end i, j, val = string.find(orig_string, license_name .. "%:(%d)") if val ~= nil then slurm.log_info("name:%s count:%s", license_name, val) end if val ~= nil and val + 0 > max_count then return 1 end return 0 end --########################################################################-- -- -- SLURM job_submit/lua interface: -- --########################################################################-- function slurm_job_submit ( job_desc, part_list, submit_uid ) local bad_license_count = 0 bad_license_count = _limit_license_cnt(job_desc.licenses, "lscratcha", 1) bad_license_count = _limit_license_cnt(job_desc.licenses, "lscratchb", 1) + bad_license_count bad_license_count = _limit_license_cnt(job_desc.licenses, "lscratchc", 1) + bad_license_count if bad_license_count > 0 then slurm.log_info("slurm_job_submit: for user %u, invalid licenses value: %s", job_desc.user_id, job_desc.licenses) return slurm.ESLURM_INVALID_LICENSES end return 0 end function slurm_job_modify ( job_desc, job_rec, part_list, modify_uid ) local bad_license_count = 0 bad_license_count = _limit_license_cnt(job_desc.licenses, "lscratcha", 1) bad_license_count = _limit_license_cnt(job_desc.licenses, "lscratchb", 1) + bad_license_count bad_license_count = _limit_license_cnt(job_desc.licenses, "lscratchc", 1) + bad_license_count if bad_license_count > 0 then slurm.log_info("slurm_job_modify: for job %u, invalid licenses value: %s", job_rec.job_id, job_desc.licenses) return slurm.ESLURM_INVALID_LICENSES end return 0 end slurm.log_info("initialized") return slurm.SUCCESS slurm-slurm-15-08-7-1/contribs/lua/job_submit.lua000066400000000000000000000036531265000126300216000ustar00rootroot00000000000000--[[ Example lua script demonstrating the SLURM job_submit/lua interface. This is only an example, not meant for use in its current form. 
Leave the function names, arguments, local variables and setmetatable set up logic in each function unchanged. Change only the logic after the line containing "*** YOUR LOGIC GOES BELOW ***". For use, this script should be copied into a file named "job_submit.lua" in the same directory as the SLURM configuration file, slurm.conf. --]] function slurm_job_submit(job_desc, part_list, submit_uid) if job_desc.account == nil then local account = "***TEST_ACCOUNT***" slurm.log_info("slurm_job_submit: job from uid %u, setting default account value: %s", submit_uid, account) job_desc.account = account end -- If no default partition, set the partition to the highest -- priority partition this user has access to if job_desc.partition == nil then local new_partition = nil local top_priority = -1 local last_priority = -1 local inx = 0 for name, part in pairs(part_list) do slurm.log_info("part name[%d]:%s", inx, part.name) inx = inx + 1 if part.flag_default ~= 0 then top_priority = -1 break end last_priority = part.priority if last_priority > top_priority then top_priority = last_priority new_partition = part.name end end if top_priority >= 0 then slurm.log_info("slurm_job_submit: job from uid %u, setting default partition value: %s", job_desc.user_id, new_partition) job_desc.partition = new_partition end end return slurm.SUCCESS end function slurm_job_modify(job_desc, job_rec, part_list, modify_uid) if job_desc.comment == nil then local comment = "***TEST_COMMENT***" slurm.log_info("slurm_job_modify: for job %u from uid %u, setting default comment value: %s", job_rec.job_id, modify_uid, comment) job_desc.comment = comment end return slurm.SUCCESS end slurm.log_info("initialized") return slurm.SUCCESS slurm-slurm-15-08-7-1/contribs/lua/proctrack.lua000066400000000000000000000233121265000126300214250ustar00rootroot00000000000000--[[ Example lua script demonstrating the SLURM proctrack/lua interface. This script implements a very simple job step container using CPUSETs. 
--]] require "posix" --########################################################################-- -- -- SLURM proctrack/lua interface: -- --########################################################################-- local use_release_agent = false function slurm_container_create (job) local id = cpuset_id_create (job) local cpu_list = cpumap:convert_ids (job.JobCPUs) log_verbose ("slurm_container_create: job=%u.%u CPUs=%s (%s) cpuset=%d", job.jobid, job.stepid, job.JobCPUs, cpu_list, id) if not cpuset_create (id, cpu_list) then return nil end return id end function slurm_container_add (job, id, pid) log_verbose ("slurm_container_add(%d, %d)\n", id, pid) return cpuset_add_pid (id, pid) end function slurm_container_signal (id, signo) log_verbose ("slurm_container_signal(%d, %d)\n", id, signo) cpuset_kill (id, signo) return slurm.SUCCESS end function slurm_container_destroy (id) log_verbose ("slurm_container_destroy (id=%d)\n", id) return (cpuset_destroy (id)) and 0 or -1 end function slurm_container_find (pid) log_verbose ("slurm_container_find (pid=%d)\n", pid) for i, id in ipairs (posix.dir (cpuset_dir)) do path = string.format ("%s/%s", cpuset_dir, id) st = posix.stat (path) if st.type == "directory" and cpuset_has_pid (id) then return id end end return slurm.FAILURE end function slurm_container_has_pid (id, pid) log_verbose ("slurm_container_has_pid (id=%d, pid=%d)\n", id, pid) return cpuset_has_pid (id, pid) end function slurm_container_wait (id) local s = 1 if not cpuset_exists (id) then return 0 end log_verbose ("slurm_container_wait (id=%d)\n", id) while not cpuset_destroy (id) do cpuset_kill (id, 9) log_debug ("Waiting %ds for cpuset id=%d\n", s, id) posix.sleep (s) s = (2*s <= 30) and 2*s or 30 -- Wait a max of 30s end return slurm.SUCCESS end function slurm_container_get_pids (id) log_debug ("slurm_container_get_pids (id=%d)\n", id) return cpuset_pids (id) end --########################################################################-- -- -- Internal lua 
functions: -- --########################################################################-- root_cpuset = {} function split (line) local t = {} for word in line:gmatch ('%S+') do table.insert(t, word) end return t end function get_cpuset_dir () for line in io.lines ("/proc/mounts") do local t = split (line) if t[3] == "cpuset" then return t[2] end end return nil end function cpuset_exists (id) local path = cpuset_dir .. "/" .. id local s = posix.stat (path) return (s ~= nil and s.type == "directory") end -- Set cpus function cpuset_set_f (path, name, val) local f, msg = io.open (path .."/".. name, "w") if f == nil then log_err ("open (%s/%s): %s\n", path, name, msg) return nil end -- -- Write value to cpuset if [val] was passed to this function, -- if not use the value from the root cpuset: -- f:write (val or root_cpuset[name]) f:close () end function cpuset_create (name, cpus) local mask = posix.umask() local path = cpuset_dir .. "/" .. name posix.umask ("077") local d, s = posix.mkdir (path) if (d == nil and s ~= "File exists") then log_err ("cpuset_create: %s: %s\n", path, s or "msg") return false end posix.umask (mask) cpuset_set_f (path, "cpus", cpus) cpuset_set_f (path, "mems") if (use_release_agent == true) then cpuset_set_f (path, "notify_on_release", 1) end return true end function cpuset_destroy (name) if (not cpuset_exists (name)) then return true end local path = cpuset_dir .. "/" .. name return (posix.rmdir (path) ~= 0) or false; end function cpuset_add_pid (name, pid) if (not cpuset_exists (name)) then return -1 end local path = cpuset_dir .. "/" .. 
name local f = io.open (path.."/tasks", "w") f:write (pid) f:close() log_debug ("Added pid=%d to cpuset %s", pid, name) end function cpuset_kill (name, signo) if (not cpuset_exists (name)) then return end local path = string.format ("%s/%s/tasks", cpuset_dir, name) local path_fh = io.open(path) if path_fh then while true do local pid = path_fh:read() if pid == nil then break end log_debug ("Sending signal %d to pid %d", signo, pid) posix.kill (pid, signo) end end end function cpuset_read (path, name) local f = assert (io.open (path .. "/" .. name)) val = f:read("*all") f:close() return val end -- -- lua doesn't have bitwise operators, fun. -- function truncate_to_n_bits (n, bits) local result = 0 for i = 1, bits do local l = math.mod (n, 2) if (l == 1) then result = result + (2 ^ (i-1)) end n = (n - l) / 2 end return result end -- -- Create a unique identifier from the job step in [job] -- to be used as the name of the resulting cpuset -- function cpuset_id_create (job) local id = job.jobid -- Simulate a left shift by 16 (I think): for i = 0, 16 do id = id*2 end -- Add the lower 16 bits of the stepid: id = id + truncate_to_n_bits (job.stepid, 16) -- Must truncate result to 32bits until SLURM's job container -- id is no longer represented by uint32_t : return truncate_to_n_bits (id, 32) end function cpuset_has_pid (id, process_id) if (not cpuset_exists (id)) then return false end local path = string.format ("%s/%s/tasks", cpuset_dir, id) -- Force pid to be a number local pid = tonumber (process_id) for task in io.lines (path) do -- again, ensure task is represented as a lua number for comparison: if tonumber(task) == pid then return true end end return false end function pid_is_thread (process_id) local pid_status_path = string.format ("/proc/%d/status",process_id) local pid_status_fh = io.open(pid_status_path) if pid_status_fh then while true do local pid_status_line=pid_status_fh:read() if pid_status_line == nil then break end if 
string.match(pid_status_line,'^Tgid:%s+' .. process_id .. '$') then return false end end end return true end function cpuset_pids (id) local pids = {} if (cpuset_exists (id)) then local path = string.format ("%s/%s/tasks", cpuset_dir, id) local path_fh = io.open(path) if path_fh then while true do local task=path_fh:read() if task == nil then break end if not ( pid_is_thread(task) ) then table.insert (pids, task) end end end end return pids end -- -- cpumap_create() creates an object whose sole purpose is to -- convert a list of physical CPU ids, which are given relative -- to physical location, back to the logical cpu d map of the -- current host. -- function cpumap_create () function cpuset_list_create (s) local cpus = {} for c in s:gmatch ('[^,]+') do local s, e = c:match ('([%d]+)-?([%d]*)') if e == "" then e = s end for cpu = s, e do table.insert (cpus, cpu) end end return cpus end function read_cpu_topology_member (id, name) local val local cpudir = "/sys/devices/system/cpu" local path = string.format ("%s/cpu%d/topology/%s", cpudir, id, name) local f, err = io.open (path, "r") if f == nil then print (err) return f, err end val = f:read ("*all") f:close() return val end function cpu_info_create (id) local cpuinfo = {} local cpudir = "/sys/devices/system/cpu" cpuinfo.id = id cpuinfo.pkgid = read_cpu_topology_member (id, "physical_package_id") cpuinfo.coreid = read_cpu_topology_member (id, "core_id") return cpuinfo end function list_id (self, i) return self.cpu_list[i+1].id end local function cmp_cpu_info (a,b) if a.pkgid == b.pkgid then return a.coreid < b.coreid else return a.pkgid < b.pkgid end end local function convert_cpu_ids (self, s) local l = {} for i, id in ipairs (cpuset_list_create (s)) do table.insert (l, list_id (self, id)) end return table.concat (l, ",") end local cpu_map = { cpu_list = {}, ncpus = 0, get_id = list_id, convert_ids = convert_cpu_ids } for i, dir in ipairs (posix.dir ("/sys/devices/system/cpu")) do local id = string.match (dir, 
'cpu([%d]+)') if id then table.insert (cpu_map.cpu_list, cpu_info_create (id)) end end cpu_map.ncpus = #cpu_map.cpu_list table.sort (cpu_map.cpu_list, cmp_cpu_info) return cpu_map end --########################################################################-- -- -- Initialization code: -- --########################################################################-- log_msg = slurm.log_info log_verbose = slurm.log_verbose log_debug = slurm.log_debug log_err = slurm.error cpuset_dir = get_cpuset_dir () if cpuset_dir == nil then print "cpuset must be mounted" return 0 end root_cpuset.cpus = cpuset_read (cpuset_dir, "cpus") root_cpuset.mems = cpuset_read (cpuset_dir, "mems") cpumap = cpumap_create () log_msg ("initialized: root cpuset = %s\n", cpuset_dir) return slurm.SUCCESS -- vi: filetype=lua ts=4 sw=4 expandtab slurm-slurm-15-08-7-1/contribs/make-3.81.slurm.patch000066400000000000000000000026151265000126300216420ustar00rootroot00000000000000Index: README =================================================================== --- README (revision 321) +++ README (working copy) @@ -145,6 +145,18 @@ force make to treat them properly. See the manual for details. +SLURM +----- + +This patch will use SLURM to launch tasks across a job's current resource +allocation. Depending upon the size of modules to be compiled, this may +or may not improve performance. If most modules are thousands of lines +long, the use of additional resources should more than compensate for the +overhead of SLURM's task launch. Use with make's "-j" option within an +existing SLURM allocation. Outside of a SLURM allocation, make's behavior +will be unchanged. Designed for GNU make-3.81. 
+ + Ports ----- Index: job.c =================================================================== --- job.c (revision 321) +++ job.c (working copy) @@ -1959,6 +1959,22 @@ void child_execute_job (int stdin_fd, int stdout_fd, char **argv, char **envp) { +/* PARALLEL JOB LAUNCH VIA SLURM */ + if (getenv("SLURM_JOB_ID")) { + int i; + static char *argx[128]; + argx[0] = "srun"; + argx[1] = "-N1"; + argx[2] = "-n1"; + for (i=0; ((i<124)&&(argv[i])); i++) { + argx[i+3] = argv[i]; + } + if (i<124) { + argx[i+3] = NULL; + argv = argx; + } + } +/* END OF SLURM PATCH */ if (stdin_fd != 0) (void) dup2 (stdin_fd, 0); if (stdout_fd != 1) slurm-slurm-15-08-7-1/contribs/make-4.0.slurm.patch000066400000000000000000000033671265000126300215570ustar00rootroot00000000000000diff --git a/README b/README index 8b10c42..9e52eb8 100644 --- a/README +++ b/README @@ -136,6 +136,18 @@ system contains rules that depend on proper behavior of tools like "cp force make to treat them properly. See the manual for details. +SLURM +----- + +This patch will use SLURM to launch tasks across a job's current resource +allocation. Depending upon the size of modules to be compiled, this may +or may not improve performance. If most modules are thousands of lines +long, the use of additional resources should more than compensate for the +overhead of SLURM's task launch. Use with make's "-j" option within an +existing SLURM allocation. Outside of a SLURM allocation, make's behavior +will be unchanged. Designed for GNU make-4.0. 
+
+
 Ports
 -----
diff --git a/job.c b/job.c
index febfac0..eea50f1 100644
--- a/job.c
+++ b/job.c
@@ -2269,6 +2269,23 @@ void
 child_execute_job (int stdin_fd, int stdout_fd, int stderr_fd,
                    char **argv, char **envp)
 {
+  char** argx=NULL;
+  /* PARALLEL JOB LAUNCH VIA SLURM */
+  if (getenv("SLURM_JOB_ID")) {
+    unsigned int i, argc=4;
+    for (i=0; argv[i] != NULL ; i++) argc++;
+    argx = (char**) malloc( sizeof(char*)*( argc ));
+    argx[0] = "srun";
+    argx[1] = "-N1";
+    argx[2] = "-n1";
+    for (i=0; argv[i] != NULL ; i++) {
+      argx[i+3] = argv[i];
+    }
+    argx[ argc -1 ] = NULL;
+    argv = argx;
+  }
+/* END OF SLURM PATCH */
+
   /* For any redirected FD, dup2() it to the standard FD then close it. */
   if (stdin_fd != FD_STDIN)
     {
@@ -2288,6 +2305,8 @@ child_execute_job (int stdin_fd, int stdout_fd, int stderr_fd,
   /* Run the command.  */
   exec_command (argv, envp);
+
+  free(argx);
 }
 #endif /* !AMIGA && !__MSDOS__ && !VMS */
 #endif /* !WINDOWS32 */
slurm-slurm-15-08-7-1/contribs/mic/
slurm-slurm-15-08-7-1/contribs/mic/Makefile.am
EXTRA_DIST = \
	mpirun-mic
slurm-slurm-15-08-7-1/contribs/mic/Makefile.in
# Makefile.in generated by automake 1.14.1 from Makefile.am.
# @configure_input@

# Copyright (C) 1994-2013 Free Software Foundation, Inc.

# This Makefile.in is free software; the Free Software Foundation
# gives unlimited permission to copy and/or distribute it,
# with or without modifications, as long as this notice is preserved.

# This program is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY, to the extent permitted by law; without
# even the implied warranty of MERCHANTABILITY or FITNESS FOR A
# PARTICULAR PURPOSE.
@SET_MAKE@ VPATH = @srcdir@ am__is_gnu_make = test -n '$(MAKEFILE_LIST)' && test -n '$(MAKELEVEL)' am__make_running_with_option = \ case $${target_option-} in \ ?) ;; \ *) echo "am__make_running_with_option: internal error: invalid" \ "target option '$${target_option-}' specified" >&2; \ exit 1;; \ esac; \ has_opt=no; \ sane_makeflags=$$MAKEFLAGS; \ if $(am__is_gnu_make); then \ sane_makeflags=$$MFLAGS; \ else \ case $$MAKEFLAGS in \ *\\[\ \ ]*) \ bs=\\; \ sane_makeflags=`printf '%s\n' "$$MAKEFLAGS" \ | sed "s/$$bs$$bs[$$bs $$bs ]*//g"`;; \ esac; \ fi; \ skip_next=no; \ strip_trailopt () \ { \ flg=`printf '%s\n' "$$flg" | sed "s/$$1.*$$//"`; \ }; \ for flg in $$sane_makeflags; do \ test $$skip_next = yes && { skip_next=no; continue; }; \ case $$flg in \ *=*|--*) continue;; \ -*I) strip_trailopt 'I'; skip_next=yes;; \ -*I?*) strip_trailopt 'I';; \ -*O) strip_trailopt 'O'; skip_next=yes;; \ -*O?*) strip_trailopt 'O';; \ -*l) strip_trailopt 'l'; skip_next=yes;; \ -*l?*) strip_trailopt 'l';; \ -[dEDm]) skip_next=yes;; \ -[JT]) skip_next=yes;; \ esac; \ case $$flg in \ *$$target_option*) has_opt=yes; break;; \ esac; \ done; \ test $$has_opt = yes am__make_dryrun = (target_option=n; $(am__make_running_with_option)) am__make_keepgoing = (target_option=k; $(am__make_running_with_option)) pkgdatadir = $(datadir)/@PACKAGE@ pkgincludedir = $(includedir)/@PACKAGE@ pkglibdir = $(libdir)/@PACKAGE@ pkglibexecdir = $(libexecdir)/@PACKAGE@ am__cd = CDPATH="$${ZSH_VERSION+.}$(PATH_SEPARATOR)" && cd install_sh_DATA = $(install_sh) -c -m 644 install_sh_PROGRAM = $(install_sh) -c install_sh_SCRIPT = $(install_sh) -c INSTALL_HEADER = $(INSTALL_DATA) transform = $(program_transform_name) NORMAL_INSTALL = : PRE_INSTALL = : POST_INSTALL = : NORMAL_UNINSTALL = : PRE_UNINSTALL = : POST_UNINSTALL = : build_triplet = @build@ host_triplet = @host@ target_triplet = @target@ subdir = contribs/mic DIST_COMMON = $(srcdir)/Makefile.in $(srcdir)/Makefile.am ACLOCAL_M4 = $(top_srcdir)/aclocal.m4 
am__aclocal_m4_deps = $(top_srcdir)/auxdir/ax_lib_hdf5.m4 \ $(top_srcdir)/auxdir/ax_pthread.m4 \ $(top_srcdir)/auxdir/libtool.m4 \ $(top_srcdir)/auxdir/ltoptions.m4 \ $(top_srcdir)/auxdir/ltsugar.m4 \ $(top_srcdir)/auxdir/ltversion.m4 \ $(top_srcdir)/auxdir/lt~obsolete.m4 \ $(top_srcdir)/auxdir/slurm.m4 \ $(top_srcdir)/auxdir/x_ac__system_configuration.m4 \ $(top_srcdir)/auxdir/x_ac_affinity.m4 \ $(top_srcdir)/auxdir/x_ac_aix.m4 \ $(top_srcdir)/auxdir/x_ac_blcr.m4 \ $(top_srcdir)/auxdir/x_ac_bluegene.m4 \ $(top_srcdir)/auxdir/x_ac_cflags.m4 \ $(top_srcdir)/auxdir/x_ac_cray.m4 \ $(top_srcdir)/auxdir/x_ac_curl.m4 \ $(top_srcdir)/auxdir/x_ac_databases.m4 \ $(top_srcdir)/auxdir/x_ac_debug.m4 \ $(top_srcdir)/auxdir/x_ac_dlfcn.m4 \ $(top_srcdir)/auxdir/x_ac_env.m4 \ $(top_srcdir)/auxdir/x_ac_freeipmi.m4 \ $(top_srcdir)/auxdir/x_ac_gpl_licensed.m4 \ $(top_srcdir)/auxdir/x_ac_hwloc.m4 \ $(top_srcdir)/auxdir/x_ac_iso.m4 \ $(top_srcdir)/auxdir/x_ac_json.m4 \ $(top_srcdir)/auxdir/x_ac_lua.m4 \ $(top_srcdir)/auxdir/x_ac_man2html.m4 \ $(top_srcdir)/auxdir/x_ac_munge.m4 \ $(top_srcdir)/auxdir/x_ac_ncurses.m4 \ $(top_srcdir)/auxdir/x_ac_netloc.m4 \ $(top_srcdir)/auxdir/x_ac_nrt.m4 \ $(top_srcdir)/auxdir/x_ac_ofed.m4 \ $(top_srcdir)/auxdir/x_ac_pam.m4 \ $(top_srcdir)/auxdir/x_ac_printf_null.m4 \ $(top_srcdir)/auxdir/x_ac_ptrace.m4 \ $(top_srcdir)/auxdir/x_ac_readline.m4 \ $(top_srcdir)/auxdir/x_ac_rrdtool.m4 \ $(top_srcdir)/auxdir/x_ac_setpgrp.m4 \ $(top_srcdir)/auxdir/x_ac_setproctitle.m4 \ $(top_srcdir)/auxdir/x_ac_sgi_job.m4 \ $(top_srcdir)/auxdir/x_ac_slurm_ssl.m4 \ $(top_srcdir)/auxdir/x_ac_sun_const.m4 \ $(top_srcdir)/configure.ac am__configure_deps = $(am__aclocal_m4_deps) $(CONFIGURE_DEPENDENCIES) \ $(ACLOCAL_M4) mkinstalldirs = $(install_sh) -d CONFIG_HEADER = $(top_builddir)/config.h $(top_builddir)/slurm/slurm.h CONFIG_CLEAN_FILES = CONFIG_CLEAN_VPATH_FILES = AM_V_P = $(am__v_P_@AM_V@) am__v_P_ = $(am__v_P_@AM_DEFAULT_V@) am__v_P_0 = false am__v_P_1 = : AM_V_GEN = 
$(am__v_GEN_@AM_V@) am__v_GEN_ = $(am__v_GEN_@AM_DEFAULT_V@) am__v_GEN_0 = @echo " GEN " $@; am__v_GEN_1 = AM_V_at = $(am__v_at_@AM_V@) am__v_at_ = $(am__v_at_@AM_DEFAULT_V@) am__v_at_0 = @ am__v_at_1 = SOURCES = DIST_SOURCES = am__can_run_installinfo = \ case $$AM_UPDATE_INFO_DIR in \ n|no|NO) false;; \ *) (install-info --version) >/dev/null 2>&1;; \ esac am__tagged_files = $(HEADERS) $(SOURCES) $(TAGS_FILES) $(LISP) DISTFILES = $(DIST_COMMON) $(DIST_SOURCES) $(TEXINFOS) $(EXTRA_DIST) ACLOCAL = @ACLOCAL@ AMTAR = @AMTAR@ AM_DEFAULT_VERBOSITY = @AM_DEFAULT_VERBOSITY@ AR = @AR@ AUTHD_CFLAGS = @AUTHD_CFLAGS@ AUTHD_LIBS = @AUTHD_LIBS@ AUTOCONF = @AUTOCONF@ AUTOHEADER = @AUTOHEADER@ AUTOMAKE = @AUTOMAKE@ AWK = @AWK@ BGL_LOADED = @BGL_LOADED@ BGQ_LOADED = @BGQ_LOADED@ BG_INCLUDES = @BG_INCLUDES@ BG_LDFLAGS = @BG_LDFLAGS@ BG_L_P_LOADED = @BG_L_P_LOADED@ BLCR_CPPFLAGS = @BLCR_CPPFLAGS@ BLCR_HOME = @BLCR_HOME@ BLCR_LDFLAGS = @BLCR_LDFLAGS@ BLCR_LIBS = @BLCR_LIBS@ BLUEGENE_LOADED = @BLUEGENE_LOADED@ CC = @CC@ CCDEPMODE = @CCDEPMODE@ CFLAGS = @CFLAGS@ CHECK_CFLAGS = @CHECK_CFLAGS@ CHECK_LIBS = @CHECK_LIBS@ CMD_LDFLAGS = @CMD_LDFLAGS@ CPP = @CPP@ CPPFLAGS = @CPPFLAGS@ CRAY_JOB_CPPFLAGS = @CRAY_JOB_CPPFLAGS@ CRAY_JOB_LDFLAGS = @CRAY_JOB_LDFLAGS@ CRAY_SELECT_CPPFLAGS = @CRAY_SELECT_CPPFLAGS@ CRAY_SELECT_LDFLAGS = @CRAY_SELECT_LDFLAGS@ CRAY_SWITCH_CPPFLAGS = @CRAY_SWITCH_CPPFLAGS@ CRAY_SWITCH_LDFLAGS = @CRAY_SWITCH_LDFLAGS@ CRAY_TASK_CPPFLAGS = @CRAY_TASK_CPPFLAGS@ CRAY_TASK_LDFLAGS = @CRAY_TASK_LDFLAGS@ CXX = @CXX@ CXXCPP = @CXXCPP@ CXXDEPMODE = @CXXDEPMODE@ CXXFLAGS = @CXXFLAGS@ CYGPATH_W = @CYGPATH_W@ DATAWARP_CPPFLAGS = @DATAWARP_CPPFLAGS@ DATAWARP_LDFLAGS = @DATAWARP_LDFLAGS@ DEFS = @DEFS@ DEPDIR = @DEPDIR@ DLLTOOL = @DLLTOOL@ DL_LIBS = @DL_LIBS@ DSYMUTIL = @DSYMUTIL@ DUMPBIN = @DUMPBIN@ ECHO_C = @ECHO_C@ ECHO_N = @ECHO_N@ ECHO_T = @ECHO_T@ EGREP = @EGREP@ EXEEXT = @EXEEXT@ FGREP = @FGREP@ FREEIPMI_CPPFLAGS = @FREEIPMI_CPPFLAGS@ FREEIPMI_LDFLAGS = @FREEIPMI_LDFLAGS@ 
FREEIPMI_LIBS = @FREEIPMI_LIBS@ GLIB_CFLAGS = @GLIB_CFLAGS@ GLIB_COMPILE_RESOURCES = @GLIB_COMPILE_RESOURCES@ GLIB_GENMARSHAL = @GLIB_GENMARSHAL@ GLIB_LIBS = @GLIB_LIBS@ GLIB_MKENUMS = @GLIB_MKENUMS@ GOBJECT_QUERY = @GOBJECT_QUERY@ GREP = @GREP@ GTK_CFLAGS = @GTK_CFLAGS@ GTK_LIBS = @GTK_LIBS@ H5CC = @H5CC@ H5FC = @H5FC@ HAVEMYSQLCONFIG = @HAVEMYSQLCONFIG@ HAVE_AIX = @HAVE_AIX@ HAVE_MAN2HTML = @HAVE_MAN2HTML@ HAVE_NRT = @HAVE_NRT@ HAVE_OPENSSL = @HAVE_OPENSSL@ HAVE_SOME_CURSES = @HAVE_SOME_CURSES@ HDF5_CC = @HDF5_CC@ HDF5_CFLAGS = @HDF5_CFLAGS@ HDF5_CPPFLAGS = @HDF5_CPPFLAGS@ HDF5_FC = @HDF5_FC@ HDF5_FFLAGS = @HDF5_FFLAGS@ HDF5_FLIBS = @HDF5_FLIBS@ HDF5_LDFLAGS = @HDF5_LDFLAGS@ HDF5_LIBS = @HDF5_LIBS@ HDF5_VERSION = @HDF5_VERSION@ HWLOC_CPPFLAGS = @HWLOC_CPPFLAGS@ HWLOC_LDFLAGS = @HWLOC_LDFLAGS@ HWLOC_LIBS = @HWLOC_LIBS@ INSTALL = @INSTALL@ INSTALL_DATA = @INSTALL_DATA@ INSTALL_PROGRAM = @INSTALL_PROGRAM@ INSTALL_SCRIPT = @INSTALL_SCRIPT@ INSTALL_STRIP_PROGRAM = @INSTALL_STRIP_PROGRAM@ JSON_CPPFLAGS = @JSON_CPPFLAGS@ JSON_LDFLAGS = @JSON_LDFLAGS@ LD = @LD@ LDFLAGS = @LDFLAGS@ LIBCURL = @LIBCURL@ LIBCURL_CPPFLAGS = @LIBCURL_CPPFLAGS@ LIBOBJS = @LIBOBJS@ LIBS = @LIBS@ LIBTOOL = @LIBTOOL@ LIB_LDFLAGS = @LIB_LDFLAGS@ LIPO = @LIPO@ LN_S = @LN_S@ LTLIBOBJS = @LTLIBOBJS@ MAINT = @MAINT@ MAKEINFO = @MAKEINFO@ MANIFEST_TOOL = @MANIFEST_TOOL@ MKDIR_P = @MKDIR_P@ MUNGE_CPPFLAGS = @MUNGE_CPPFLAGS@ MUNGE_DIR = @MUNGE_DIR@ MUNGE_LDFLAGS = @MUNGE_LDFLAGS@ MUNGE_LIBS = @MUNGE_LIBS@ MYSQL_CFLAGS = @MYSQL_CFLAGS@ MYSQL_LIBS = @MYSQL_LIBS@ NCURSES = @NCURSES@ NETLOC_CPPFLAGS = @NETLOC_CPPFLAGS@ NETLOC_LDFLAGS = @NETLOC_LDFLAGS@ NETLOC_LIBS = @NETLOC_LIBS@ NM = @NM@ NMEDIT = @NMEDIT@ NRT_CPPFLAGS = @NRT_CPPFLAGS@ NUMA_LIBS = @NUMA_LIBS@ OBJDUMP = @OBJDUMP@ OBJEXT = @OBJEXT@ OFED_CPPFLAGS = @OFED_CPPFLAGS@ OFED_LDFLAGS = @OFED_LDFLAGS@ OFED_LIBS = @OFED_LIBS@ OTOOL = @OTOOL@ OTOOL64 = @OTOOL64@ PACKAGE = @PACKAGE@ PACKAGE_BUGREPORT = @PACKAGE_BUGREPORT@ PACKAGE_NAME = @PACKAGE_NAME@ 
PACKAGE_STRING = @PACKAGE_STRING@ PACKAGE_TARNAME = @PACKAGE_TARNAME@ PACKAGE_URL = @PACKAGE_URL@ PACKAGE_VERSION = @PACKAGE_VERSION@ PAM_DIR = @PAM_DIR@ PAM_LIBS = @PAM_LIBS@ PATH_SEPARATOR = @PATH_SEPARATOR@ PKG_CONFIG = @PKG_CONFIG@ PKG_CONFIG_LIBDIR = @PKG_CONFIG_LIBDIR@ PKG_CONFIG_PATH = @PKG_CONFIG_PATH@ PROCTRACKDIR = @PROCTRACKDIR@ PROJECT = @PROJECT@ PTHREAD_CC = @PTHREAD_CC@ PTHREAD_CFLAGS = @PTHREAD_CFLAGS@ PTHREAD_LIBS = @PTHREAD_LIBS@ RANLIB = @RANLIB@ READLINE_LIBS = @READLINE_LIBS@ REAL_BGQ_LOADED = @REAL_BGQ_LOADED@ REAL_BG_L_P_LOADED = @REAL_BG_L_P_LOADED@ RELEASE = @RELEASE@ RRDTOOL_CPPFLAGS = @RRDTOOL_CPPFLAGS@ RRDTOOL_LDFLAGS = @RRDTOOL_LDFLAGS@ RRDTOOL_LIBS = @RRDTOOL_LIBS@ RUNJOB_LDFLAGS = @RUNJOB_LDFLAGS@ SED = @SED@ SEMAPHORE_LIBS = @SEMAPHORE_LIBS@ SEMAPHORE_SOURCES = @SEMAPHORE_SOURCES@ SET_MAKE = @SET_MAKE@ SHELL = @SHELL@ SLEEP_CMD = @SLEEP_CMD@ SLURMCTLD_PORT = @SLURMCTLD_PORT@ SLURMCTLD_PORT_COUNT = @SLURMCTLD_PORT_COUNT@ SLURMDBD_PORT = @SLURMDBD_PORT@ SLURMD_PORT = @SLURMD_PORT@ SLURM_API_AGE = @SLURM_API_AGE@ SLURM_API_CURRENT = @SLURM_API_CURRENT@ SLURM_API_MAJOR = @SLURM_API_MAJOR@ SLURM_API_REVISION = @SLURM_API_REVISION@ SLURM_API_VERSION = @SLURM_API_VERSION@ SLURM_MAJOR = @SLURM_MAJOR@ SLURM_MICRO = @SLURM_MICRO@ SLURM_MINOR = @SLURM_MINOR@ SLURM_PREFIX = @SLURM_PREFIX@ SLURM_VERSION_NUMBER = @SLURM_VERSION_NUMBER@ SLURM_VERSION_STRING = @SLURM_VERSION_STRING@ SO_LDFLAGS = @SO_LDFLAGS@ SSL_CPPFLAGS = @SSL_CPPFLAGS@ SSL_LDFLAGS = @SSL_LDFLAGS@ SSL_LIBS = @SSL_LIBS@ STRIP = @STRIP@ SUCMD = @SUCMD@ UTIL_LIBS = @UTIL_LIBS@ VERSION = @VERSION@ _libcurl_config = @_libcurl_config@ abs_builddir = @abs_builddir@ abs_srcdir = @abs_srcdir@ abs_top_builddir = @abs_top_builddir@ abs_top_srcdir = @abs_top_srcdir@ ac_ct_AR = @ac_ct_AR@ ac_ct_CC = @ac_ct_CC@ ac_ct_CXX = @ac_ct_CXX@ ac_ct_DUMPBIN = @ac_ct_DUMPBIN@ ac_have_man2html = @ac_have_man2html@ am__include = @am__include@ am__leading_dot = @am__leading_dot@ am__quote = @am__quote@ 
am__tar = @am__tar@ am__untar = @am__untar@ ax_pthread_config = @ax_pthread_config@ bindir = @bindir@ build = @build@ build_alias = @build_alias@ build_cpu = @build_cpu@ build_os = @build_os@ build_vendor = @build_vendor@ builddir = @builddir@ datadir = @datadir@ datarootdir = @datarootdir@ docdir = @docdir@ dvidir = @dvidir@ exec_prefix = @exec_prefix@ host = @host@ host_alias = @host_alias@ host_cpu = @host_cpu@ host_os = @host_os@ host_vendor = @host_vendor@ htmldir = @htmldir@ includedir = @includedir@ infodir = @infodir@ install_sh = @install_sh@ libdir = @libdir@ libexecdir = @libexecdir@ localedir = @localedir@ localstatedir = @localstatedir@ lua_CFLAGS = @lua_CFLAGS@ lua_LIBS = @lua_LIBS@ mandir = @mandir@ mkdir_p = @mkdir_p@ oldincludedir = @oldincludedir@ pdfdir = @pdfdir@ prefix = @prefix@ program_transform_name = @program_transform_name@ psdir = @psdir@ sbindir = @sbindir@ sharedstatedir = @sharedstatedir@ srcdir = @srcdir@ sysconfdir = @sysconfdir@ target = @target@ target_alias = @target_alias@ target_cpu = @target_cpu@ target_os = @target_os@ target_vendor = @target_vendor@ top_build_prefix = @top_build_prefix@ top_builddir = @top_builddir@ top_srcdir = @top_srcdir@ EXTRA_DIST = \ mpirun-mic all: all-am .SUFFIXES: $(srcdir)/Makefile.in: @MAINTAINER_MODE_TRUE@ $(srcdir)/Makefile.am $(am__configure_deps) @for dep in $?; do \ case '$(am__configure_deps)' in \ *$$dep*) \ ( cd $(top_builddir) && $(MAKE) $(AM_MAKEFLAGS) am--refresh ) \ && { if test -f $@; then exit 0; else break; fi; }; \ exit 1;; \ esac; \ done; \ echo ' cd $(top_srcdir) && $(AUTOMAKE) --gnu contribs/mic/Makefile'; \ $(am__cd) $(top_srcdir) && \ $(AUTOMAKE) --gnu contribs/mic/Makefile .PRECIOUS: Makefile Makefile: $(srcdir)/Makefile.in $(top_builddir)/config.status @case '$?' 
in \ *config.status*) \ cd $(top_builddir) && $(MAKE) $(AM_MAKEFLAGS) am--refresh;; \ *) \ echo ' cd $(top_builddir) && $(SHELL) ./config.status $(subdir)/$@ $(am__depfiles_maybe)'; \ cd $(top_builddir) && $(SHELL) ./config.status $(subdir)/$@ $(am__depfiles_maybe);; \ esac; $(top_builddir)/config.status: $(top_srcdir)/configure $(CONFIG_STATUS_DEPENDENCIES) cd $(top_builddir) && $(MAKE) $(AM_MAKEFLAGS) am--refresh $(top_srcdir)/configure: @MAINTAINER_MODE_TRUE@ $(am__configure_deps) cd $(top_builddir) && $(MAKE) $(AM_MAKEFLAGS) am--refresh $(ACLOCAL_M4): @MAINTAINER_MODE_TRUE@ $(am__aclocal_m4_deps) cd $(top_builddir) && $(MAKE) $(AM_MAKEFLAGS) am--refresh $(am__aclocal_m4_deps): mostlyclean-libtool: -rm -f *.lo clean-libtool: -rm -rf .libs _libs tags TAGS: ctags CTAGS: cscope cscopelist: distdir: $(DISTFILES) @srcdirstrip=`echo "$(srcdir)" | sed 's/[].[^$$\\*]/\\\\&/g'`; \ topsrcdirstrip=`echo "$(top_srcdir)" | sed 's/[].[^$$\\*]/\\\\&/g'`; \ list='$(DISTFILES)'; \ dist_files=`for file in $$list; do echo $$file; done | \ sed -e "s|^$$srcdirstrip/||;t" \ -e "s|^$$topsrcdirstrip/|$(top_builddir)/|;t"`; \ case $$dist_files in \ */*) $(MKDIR_P) `echo "$$dist_files" | \ sed '/\//!d;s|^|$(distdir)/|;s,/[^/]*$$,,' | \ sort -u` ;; \ esac; \ for file in $$dist_files; do \ if test -f $$file || test -d $$file; then d=.; else d=$(srcdir); fi; \ if test -d $$d/$$file; then \ dir=`echo "/$$file" | sed -e 's,/[^/]*$$,,'`; \ if test -d "$(distdir)/$$file"; then \ find "$(distdir)/$$file" -type d ! -perm -700 -exec chmod u+rwx {} \;; \ fi; \ if test -d $(srcdir)/$$file && test $$d != $(srcdir); then \ cp -fpR $(srcdir)/$$file "$(distdir)$$dir" || exit 1; \ find "$(distdir)/$$file" -type d ! 
-perm -700 -exec chmod u+rwx {} \;; \ fi; \ cp -fpR $$d/$$file "$(distdir)$$dir" || exit 1; \ else \ test -f "$(distdir)/$$file" \ || cp -p $$d/$$file "$(distdir)/$$file" \ || exit 1; \ fi; \ done check-am: all-am check: check-am all-am: Makefile installdirs: install: install-am install-exec: install-exec-am install-data: install-data-am uninstall: uninstall-am install-am: all-am @$(MAKE) $(AM_MAKEFLAGS) install-exec-am install-data-am installcheck: installcheck-am install-strip: if test -z '$(STRIP)'; then \ $(MAKE) $(AM_MAKEFLAGS) INSTALL_PROGRAM="$(INSTALL_STRIP_PROGRAM)" \ install_sh_PROGRAM="$(INSTALL_STRIP_PROGRAM)" INSTALL_STRIP_FLAG=-s \ install; \ else \ $(MAKE) $(AM_MAKEFLAGS) INSTALL_PROGRAM="$(INSTALL_STRIP_PROGRAM)" \ install_sh_PROGRAM="$(INSTALL_STRIP_PROGRAM)" INSTALL_STRIP_FLAG=-s \ "INSTALL_PROGRAM_ENV=STRIPPROG='$(STRIP)'" install; \ fi mostlyclean-generic: clean-generic: distclean-generic: -test -z "$(CONFIG_CLEAN_FILES)" || rm -f $(CONFIG_CLEAN_FILES) -test . = "$(srcdir)" || test -z "$(CONFIG_CLEAN_VPATH_FILES)" || rm -f $(CONFIG_CLEAN_VPATH_FILES) maintainer-clean-generic: @echo "This command is intended for maintainers to use" @echo "it deletes files that may require special tools to rebuild." 
clean: clean-am clean-am: clean-generic clean-libtool mostlyclean-am distclean: distclean-am -rm -f Makefile distclean-am: clean-am distclean-generic dvi: dvi-am dvi-am: html: html-am html-am: info: info-am info-am: install-data-am: install-dvi: install-dvi-am install-dvi-am: install-exec-am: install-html: install-html-am install-html-am: install-info: install-info-am install-info-am: install-man: install-pdf: install-pdf-am install-pdf-am: install-ps: install-ps-am install-ps-am: installcheck-am: maintainer-clean: maintainer-clean-am -rm -f Makefile maintainer-clean-am: distclean-am maintainer-clean-generic mostlyclean: mostlyclean-am mostlyclean-am: mostlyclean-generic mostlyclean-libtool pdf: pdf-am pdf-am: ps: ps-am ps-am: uninstall-am: .MAKE: install-am install-strip .PHONY: all all-am check check-am clean clean-generic clean-libtool \ cscopelist-am ctags-am distclean distclean-generic \ distclean-libtool distdir dvi dvi-am html html-am info info-am \ install install-am install-data install-data-am install-dvi \ install-dvi-am install-exec install-exec-am install-html \ install-html-am install-info install-info-am install-man \ install-pdf install-pdf-am install-ps install-ps-am \ install-strip installcheck installcheck-am installdirs \ maintainer-clean maintainer-clean-generic mostlyclean \ mostlyclean-generic mostlyclean-libtool pdf pdf-am ps ps-am \ tags-am uninstall uninstall-am # Tell versions [3.59,3.63) of GNU make to not export all variables. # Otherwise a system limit (for SysV at least) may be exceeded. .NOEXPORT: slurm-slurm-15-08-7-1/contribs/mic/mpirun-mic000066400000000000000000000207211265000126300207250ustar00rootroot00000000000000#!/bin/bash # ************************************************************************** # Function: Wrapper that helps launching Intel MPI jobs within SLURM # using MICs in native mode. 
# mpiexec.hydra needs passwordless ssh access to all involved nodes
# Version: 0.4
#---------------------------------------------------------------------------
# 11.10.2013 Created by Chrysovalantis Paschoulas, Juelich Supercomputing Centre - Forschungszentrum Juelich
# Initial Script by (C) Olli-Pekka Lehto - CSC IT Center for Science Ltd.
# **************************************************************************

# Usage message
USAGE="
USAGE
  $(basename "$0") [ [-h] | [-v] [-x -c ] [-z -m ] ]

OPTIONS
  -h       Print this message.
  -c       Binary that will run on host nodes. If it is not set then only
           the MICs will be used.
  -m       Binary that will run inside the MICs.
  -x       Number of tasks (MPI ranks) for the host nodes. Default value is 1.
  -z       Number of tasks (MPI ranks) for the MICs. Default value is 1.
  -v       Show more info for this script.
  --tv     Run using TotalView (equivalent to export MPIEXEC_PREFIX=\"totalview -args\").
  --tvcli  Run using TotalView cli (equivalent to export MPIEXEC_PREFIX=\"totalviewcli -args\")

MORE INFO
  The user MUST export the following environment variables:
    MIC_NUM_PER_HOST     Number of MICs on each host that will be used by
                         mpiexec. Available options: 0, 1, 2. Default 2.
    OMP_NUM_THREADS      OpenMP threads number per task on hosts. This MUST
                         be exported when OpenMP is used!
    MIC_OMP_NUM_THREADS  OpenMP threads number per task on MICs. If not
                         defined then is set same as OMP_NUM_THREADS.

  Also the user MAY pass additional flags to mpiexec exporting the
  following env vars:
    MPIEXEC_PREFIX       Wrap the execution of mpiexec with another tool
                         (e.g. totalview).
    MPIEXEC_FLAGS_HOST   Flags that will be passed to the hosts.
    MPIEXEC_FLAGS_MIC    Flags that will be passed to the MICs.
-- Examples: export MPIEXEC_PREFIX=\"totalview -args\" export MPIEXEC_PREFIX=\"totalviewcli -args\" export MPIEXEC_FLAGS_HOST=\"-env VAR VALUE\" export MPIEXEC_FLAGS_MIC=\"-envlist VAR1,VAR2\" EXAMPLES Batch Script1 - Only hosts: --- #!/bin/bash #SBATCH -J TestJobMICNativeHybrid #SBATCH -N 4 #SBATCH -p q_mics #SBATCH -o TestJob-%j.out #SBATCH -e TestJob-%j.err #SBATCH --time=30 module purge module load impi intel/13.1.3 export MIC_NUM_PER_HOST=0 export OMP_NUM_THREADS=32 mpirun-mic -x 1 -c ./impi_native_hybrid --- Batch Script2 - Only mics: --- #!/bin/bash #SBATCH -J TestJobMICNativeHybrid #SBATCH -N 4 #SBATCH -p q_mics #SBATCH -o TestJob-%j.out #SBATCH -e TestJob-%j.err #SBATCH --time=30 module purge module load impi intel/13.1.3 export MIC_NUM_PER_HOST=2 export MIC_OMP_NUM_THREADS=240 mpirun-mic -z 1 -m ./impi_native_hybrid.mic --- Batch Script3 - Hosts and MICs: --- #!/bin/bash #SBATCH -J TestJobMICNativeHybrid #SBATCH -N 2 #SBATCH -p q_mics #SBATCH -o TestJob-%j.out #SBATCH -e TestJob-%j.err #SBATCH --time=30 module purge module load impi intel/13.1.3 export MIC_NUM_PER_HOST=2 export OMP_NUM_THREADS=2 export MIC_OMP_NUM_THREADS=4 mpirun-mic -v -x 16 -c ./impi_native_hybrid -z 60 -m ./impi_native_hybrid.mic --- "; # check script arguments if [ $# -lt 1 ] ; then echo "$USAGE" >&2 exit 1 fi # get script arguments while getopts "vhc:m:x:z:-:" OPTION do case $OPTION in c) HOST_BINARY=$OPTARG ;; h) echo "$USAGE"; exit 0; ;; m) MIC_BINARY=$OPTARG ;; v) MPIRUN_MIC_VERBOSE=1 ;; x) HOST_PPN=$OPTARG ;; z) MIC_PPN=$OPTARG ;; -) case $OPTARG in tv) MPIEXEC_PREFIX="totalview -args" ;; tvcli) MPIEXEC_PREFIX="totalviewcli -args" ;; \?) echo $USAGE >&2 exit 1 ;; esac ;; \?) echo "$USAGE"; exit 1; ;; esac done ### prepare the environment # If not under SLURM just run on the local system, but still we must be on a compute node.. 
if [[ -z "$SLURM_PROCID" ]] ; then SLURM_PROCID=0 fi if [[ -z "$SLURM_NODELIST" ]] ; then SLURM_NODELIST=`hostname` fi # give default values if [[ -z "$MIC_PPN" ]] ; then MIC_PPN=1 fi if [[ -z "$HOST_PPN" ]] ; then HOST_PPN=1 fi if [[ -z "$MIC_NUM_PER_HOST" ]] ; then MIC_NUM_PER_HOST=2 fi # We will use OMP_NUM_THREADS to decide if the user will run a Hybrid MPI+OpenMP job # Here set default value for MIC_OMP_NUM_THREADS if [[ -n "$OMP_NUM_THREADS" ]] ; then if [[ -z "$MIC_OMP_NUM_THREADS" ]] ; then MIC_OMP_NUM_THREADS=$OMP_NUM_THREADS fi fi # check the important values if [[ -z "$HOST_BINARY" ]] && [[ -z "$MIC_BINARY" ]] ; then echo "$USAGE" >&2 exit 1; fi # create the command line #MPI_EXEC=mpirun MPI_EXEC=mpiexec.hydra EXEC_ARGS="" # create the list of the nodes that are configured to have MICs LLIST_HOSTS_WITH_MICS=""; SLIST_HOSTS_WITH_MICS=`sinfo -h -o "%N %G" | grep mic | awk '{ print $1; }'`; for host in `scontrol show hostname $SLIST_HOSTS_WITH_MICS` ; do LLIST_HOSTS_WITH_MICS="${LLIST_HOSTS_WITH_MICS} ${host}"; done # create the lists of HOSTS AND MICS! HOST_NODELIST=""; MIC_NODELIST=""; for host in `scontrol show hostname $SLURM_NODELIST` ; do echo $LLIST_HOSTS_WITH_MICS | grep $host &> /dev/null if [ $? 
-eq 0 ] ; then if [ $MIC_NUM_PER_HOST -eq 1 ] ; then MIC_NODELIST="${MIC_NODELIST} ${host}-mic0"; elif [ $MIC_NUM_PER_HOST -eq 2 ] ; then MIC_NODELIST="${MIC_NODELIST} ${host}-mic0 ${host}-mic1"; fi fi HOST_NODELIST="${HOST_NODELIST} ${host}"; done # create the arguments # args for hosts here # run job on hosts if host binary is not null if [[ -n "$HOST_BINARY" ]] ; then if [[ -n "$HOST_NODELIST" ]] ; then for n in $HOST_NODELIST ; do if [[ -n "$OMP_NUM_THREADS" ]] ; then # with OpenMP EXEC_ARGS="${EXEC_ARGS} : -env OMP_NUM_THREADS $OMP_NUM_THREADS $MPIEXEC_FLAGS_HOST -n $HOST_PPN -host $n $HOST_BINARY"; else # without OpenMP EXEC_ARGS="${EXEC_ARGS} : $MPIEXEC_FLAGS_HOST -n $HOST_PPN -host $n $HOST_BINARY"; fi done fi fi # args for mics here # run job on mics if mic binary is not null and MIC_NUM_PER_HOST is 1 or 2 if [[ -n "$MIC_NODELIST" ]] ; then for n in $MIC_NODELIST ; do if [[ -n "$MIC_OMP_NUM_THREADS" ]] ; then # with OpenMP EXEC_ARGS="${EXEC_ARGS} : -env OMP_NUM_THREADS $MIC_OMP_NUM_THREADS -env LD_LIBRARY_PATH $MIC_LD_LIBRARY_PATH:$LD_LIBRARY_PATH $MPIEXEC_FLAGS_MIC -n $MIC_PPN -host $n $MIC_BINARY"; #EXEC_ARGS="${EXEC_ARGS} : -env OMP_NUM_THREADS $MIC_OMP_NUM_THREADS $MPIEXEC_FLAGS_MIC -n $MIC_PPN -host $n $MIC_BINARY"; else # NO OpenMP EXEC_ARGS="${EXEC_ARGS} : -env LD_LIBRARY_PATH $MIC_LD_LIBRARY_PATH:$LD_LIBRARY_PATH $MPIEXEC_FLAGS_MIC -n $MIC_PPN -host $n $MIC_BINARY"; #EXEC_ARGS="${EXEC_ARGS} : $MPIEXEC_FLAGS_MIC -n $MIC_PPN -host $n $MIC_BINARY"; fi done fi RUNCMD="$MPI_EXEC $EXEC_ARGS"; if [[ -n "$MPIEXEC_PREFIX" ]] ; then RUNCMD="$MPIEXEC_PREFIX $RUNCMD"; fi # extra important env (Local System depended) #export LD_LIBRARY_PATH="$MIC_LD_LIBRARY_PATH:$LD_LIBRARY_PATH" export I_MPI_MIC=1 export I_MPI_DAPL_PROVIDER_LIST=ofa-v2-mlx4_0-1 unset I_MPI_DEVICE unset I_MPI_PMI_LIBRARY # start the job if [ $SLURM_PROCID -eq 0 ] ; then if [[ -n "$MPIRUN_MIC_VERBOSE" ]] ; then echo echo "########################################################################" 
echo "MPI Tasks per host: $HOST_PPN" echo "Threads per host MPI task: $OMP_NUM_THREADS" echo "Binary for the hosts: $HOST_BINARY" echo "MPI Tasks per MIC: $MIC_PPN" echo "Threads per MIC MPI task: $MIC_OMP_NUM_THREADS" echo "Binary for the mics: $MIC_BINARY" echo "MIC_NUM_PER_HOST: $MIC_NUM_PER_HOST" echo echo "MPIEXEC_PREFIX: $MPIEXEC_PREFIX" echo "MPIEXEC_FLAGS_HOST: $MPIEXEC_FLAGS_HOST" echo "MPIEXEC_FLAGS_MIC: $MPIEXEC_FLAGS_MIC" echo "" echo "Run command: " echo "$RUNCMD" echo "########################################################################" echo fi $RUNCMD fi slurm-slurm-15-08-7-1/contribs/mpich1.slurm.patch000066400000000000000000000267221265000126300215240ustar00rootroot00000000000000This work was produced at the University of California, Lawrence Livermore National Laboratory (UC LLNL) under contract no. W-7405-ENG-48 (Contract 48) between the U.S. Department of Energy (DOE) and The Regents of the University of California (University) for the operation of UC LLNL. The rights of the Federal Government are reserved under Contract 48 subject to the restrictions agreed upon by the DOE and Universiity as allowed under DOE Acquisition Letter 97-1. DISCLAIMER This work was prepared as an account of work sponsored by an agency of the United States Government. Neither the United States Government nor the University of California nor any of their employees, makes any warranty, express or implied, or assumes any liability or responsibility for the accuracy, completeness, or usefulness of any information, apparatus, product, or process disclosed, or represented that its use would not infringe privately-owned rights. Reference herein to any specific commercial products, process, or service by trade name, trademark, manufacturer or otherwise does not necessarily constitute or imply its endorsement, recommendation, or favoring by the United States Government or the University of California. 
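The mpirun-mic wrapper above accumulates one `-host` argument group per host or MIC into EXEC_ARGS, joining the groups with `:` before handing the whole line to mpiexec.hydra. A stripped-down sketch of that assembly is below; the function name, host names, and binary name are made up for illustration, and the real script additionally inserts `-env OMP_NUM_THREADS` and `LD_LIBRARY_PATH` settings into each group:

```shell
# Minimal sketch of how mpirun-mic builds the mpiexec.hydra command line:
# one "-n <ppn> -host <node> <binary>" group per node, groups joined by ":".
build_exec_args() {
    binary=$1
    ppn=$2
    shift 2
    args=""
    for n in "$@"; do
        # Each group is appended with a leading " : ", mirroring how
        # EXEC_ARGS is accumulated in the script above.
        args="${args} : -n ${ppn} -host ${n} ${binary}"
    done
    echo "mpiexec.hydra${args}"
}

build_exec_args ./hello.mic 4 tux0-mic0 tux0-mic1
# -> mpiexec.hydra : -n 4 -host tux0-mic0 ./hello.mic : -n 4 -host tux0-mic1 ./hello.mic
```

The leading `:` before the first group matches the script's own output, since EXEC_ARGS starts empty and every append begins with `" : "`.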
The views and opinions of authors expressed herein do not necessarily state or reflect those of the United States Government or the University of California, and shall not be used for advertising or product endorsement purposes. USE OF THIS PATCH This patch makes use of SLURM's srun command to launch all tasks. IMPORTANT: In order to launch more than one task per mode, shared memory is used for communications. You must explicitly enable shared memory when building MPICH with the following configure line: ./configure --with-device=ch_p4 --with-comm=shared Applications must be rebuilt with this new library to function with SLURM launch. The "--mpi=mpich1_p4" srun option MUST be used to launch the tasks (it sets a bunch of environment variables and launches only one task per node, the MPICH library launches the other tasks on the node). Here is a sample execute line: srun --mpi=mpich1_p4 [srun_options...] [options...] IDENTIFICATION: UCRL-CODE-234229 Index: mpid/ch_p4/p4/lib/p4_args.c =================================================================== --- mpid/ch_p4/p4/lib/p4_args.c (revision 11616) +++ mpid/ch_p4/p4/lib/p4_args.c (working copy) @@ -5,6 +5,10 @@ */ #include "p4.h" #include "p4_sys.h" +#include +#include +#include +#include /* Macro used to see if an arg is not following the correct format. */ #define bad_arg(a) ( ((a)==NULL) || ((*(a)) == '-') ) @@ -54,6 +58,213 @@ execer_mastport = 0; execer_pg = NULL; + /* + * For SLURM based job initiations (from srun command), get the + * parameters from environment variables as needed. This allows + * for a truly parallel job launch using the existing "execer" + * mode of operation with slight modification. 
+ */ + if (getenv("SLURM_JOB_ID")) { + int i; + char *tmp, *hostlist, *host2, *tasks_per_node, *task2; + + execer_starting_remotes = P4_TRUE; + strcpy(execer_id, "mpiexec"); + + if ((tmp = getenv("SLURMD_NODENAME"))) + strcpy(execer_myhost, tmp); + else { + printf("SLURMD_NODENAME environment variable missing\n"); + exit(-1); + } + + if ((tmp = getenv("SLURM_NODEID"))) + execer_mynodenum = atoi(tmp); + else { + printf("SLURM_NODEID environment variable missing\n"); + exit(-1); + } + + if ((tmp = getenv("SLURM_NNODES"))) + execer_numtotnodes = atoi(tmp); + else { + printf("SLURM_NNODES environment variable missing\n"); + exit(-1); + } + + if (!(tmp = getenv("SLURM_MPICH_NODELIST"))) { + printf("SLURM_MPICH_NODELIST environment variable missing\n"); + printf(" SLURM's mpich1_p4 plugin likely not in use\n"); + exit(-1); + } + i = strlen(tmp) + 1; + hostlist = malloc(i); + bcopy(tmp, hostlist, i); + tmp = strtok_r(hostlist, ",", &host2); + if (!tmp) { + printf("SLURM_MPICH_NODELIST environment variable invalid\n"); + exit(-1); + } + strcpy(execer_masthost, tmp); + + if (!(tmp = getenv("SLURM_MPICH_TASKS"))) { + printf("SLURM_MPICH_TASKS environment variable missing\n"); + exit(-1); + } + i = strlen(tmp) + 1; + tasks_per_node = malloc(i); + bcopy(tmp, tasks_per_node, i); + tmp = strtok_r(tasks_per_node, ",", &task2); + if (!tmp) { + printf("SLURM_MPICH_TASKS environment variable invalid\n"); + exit(-1); + } + execer_mynumprocs = atoi(tmp); + + if (execer_mynodenum == 0) { + if ((tmp = getenv("SLURM_MPICH_PORT1"))) + execer_mastport = atoi(tmp); + else { + printf("SLURM_MPICH_PORT1 environment variable missing\n"); + exit(-1); + } + execer_pg = p4_alloc_procgroup(); + pe = execer_pg->entries; + strcpy(pe->host_name, execer_myhost); + pe->numslaves_in_group = execer_mynumprocs - 1; + strcpy(pe->slave_full_pathname, argv[0]); + pe->username[0] = '\0'; /* unused */ + execer_pg->num_entries++; + for (i=0; i<(execer_numtotnodes-1); i++) { + pe++; + tmp = strtok_r(NULL, ",", 
&host2); + if (!tmp) { + printf("SLURM_MPICH_NODELIST environment variable invalid\n"); + exit(-1); + } + strcpy(pe->host_name, tmp); + tmp = strtok_r(NULL, ",", &task2); + if (!tmp) { + printf("SLURM_MPICH_TASKS environment variable invalid\n"); + exit(-1); + } + pe->numslaves_in_group = atoi(tmp); +#if 0 + printf("host[%d] name:%s tasks:%d\n", + i, pe->host_name, pe->numslaves_in_group); +#endif + *pe->slave_full_pathname = 0; + pe->username[0] = '\0'; /* unused */ + execer_pg->num_entries++; + } + } else { + int p4_fd2, cc; + short new_port; + struct sockaddr_in serv_addr; + socklen_t serv_len; + struct pollfd ufds; + + if (strcmp(execer_myhost, execer_masthost)) { + /* look up correct task count */ + for (i=0; i [options...] + +The only real anomaly is that all output from all spawned tasks +on a node appear to slurm as coming from the one task that it +launched. If the srun --label option is used, the task ID labels +will be misleading. + +DETAILS: The srun command opens two socket connections and passes +their ports to all tasks via the SLURM_MPICH1_P4_PORT1 and +SLURM_MPICH1_P4_PORT2 environment variables. Task zero connects to +SLURM_MPICH1_P4_PORT1 and writes its port number. The other tasks connect to +SLURM_MPICH1_P4_PORT2 and read that port number. This avoid the requirement +of having task zero launch all subsequent tasks and also launches +all tasks under the direct control of SLURM (for process management +and accounting). SLURM only launches one task per node and that +launches additional MPI tasks as needed using shared memory for +communications. 
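The SLURM_MPICH_NODELIST and SLURM_MPICH_TASKS variables that the patched p4_args.c walks with strtok_r are parallel comma-separated lists: the Nth task count belongs to the Nth host. That pairing can be sketched in plain shell; the function name and the host names and counts below are hypothetical, chosen only to illustrate the format:

```shell
# Pair up the two comma-separated lists the mpich1_p4 plugin exports,
# the same walk the patched p4_args.c performs with strtok_r().
pair_mpich_lists() {
    nodelist=$1
    taskslist=$2
    out=""
    old_ifs=$IFS
    IFS=','
    # Positional parameters become the per-node task counts.
    set -- $taskslist
    for host in $nodelist; do
        out="${out}${host}:$1 "
        shift
    done
    IFS=$old_ifs
    # Trim the trailing space.
    echo "${out% }"
}

pair_mpich_lists "tux0,tux1,tux2" "4,4,2"
# -> tux0:4 tux1:4 tux2:2
```

Node zero uses the first pair for itself and fills one procgroup entry from each remaining pair.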
slurm-slurm-15-08-7-1/contribs/pam/000077500000000000000000000000001265000126300167255ustar00rootroot00000000000000slurm-slurm-15-08-7-1/contribs/pam/Makefile.am000066400000000000000000000014471265000126300207670ustar00rootroot00000000000000# # Makefile for SLURM PAM library # AUTOMAKE_OPTIONS = foreign AM_CPPFLAGS = -fPIC -I$(top_srcdir) -I$(top_srcdir)/src/common PLUGIN_FLAGS = -module --export-dynamic -avoid-version pkglibdir = $(PAM_DIR) if HAVE_PAM pam_lib = pam_slurm.la else pam_lib = endif pkglib_LTLIBRARIES = $(pam_lib) if HAVE_PAM current = $(SLURM_API_CURRENT) age = $(SLURM_API_AGE) rev = $(SLURM_API_REVISION) pam_slurm_la_SOURCES = pam_slurm.c pam_slurm_la_LIBADD = $(top_builddir)/src/api/libslurm.la pam_slurm_la_LDFLAGS = $(SO_LDFLAGS) $(PLUGIN_FLAGS) $(LIB_LDFLAGS) force: $(pam_slurm_la_LIBADD) : force @cd `dirname $@` && $(MAKE) # Don't specify basename or version.map files in src/api will not be built # @cd `dirname $@` && $(MAKE) `basename $@` else EXTRA_pam_slurm_la_SOURCES = pam_slurm.c endif slurm-slurm-15-08-7-1/contribs/pam/Makefile.in000066400000000000000000000641421265000126300210010ustar00rootroot00000000000000# Makefile.in generated by automake 1.14.1 from Makefile.am. # @configure_input@ # Copyright (C) 1994-2013 Free Software Foundation, Inc. # This Makefile.in is free software; the Free Software Foundation # gives unlimited permission to copy and/or distribute it, # with or without modifications, as long as this notice is preserved. # This program is distributed in the hope that it will be useful, # but WITHOUT ANY WARRANTY, to the extent permitted by law; without # even the implied warranty of MERCHANTABILITY or FITNESS FOR A # PARTICULAR PURPOSE. @SET_MAKE@ # # Makefile for SLURM PAM library # VPATH = @srcdir@ am__is_gnu_make = test -n '$(MAKEFILE_LIST)' && test -n '$(MAKELEVEL)' am__make_running_with_option = \ case $${target_option-} in \ ?) 
;; \ *) echo "am__make_running_with_option: internal error: invalid" \ "target option '$${target_option-}' specified" >&2; \ exit 1;; \ esac; \ has_opt=no; \ sane_makeflags=$$MAKEFLAGS; \ if $(am__is_gnu_make); then \ sane_makeflags=$$MFLAGS; \ else \ case $$MAKEFLAGS in \ *\\[\ \ ]*) \ bs=\\; \ sane_makeflags=`printf '%s\n' "$$MAKEFLAGS" \ | sed "s/$$bs$$bs[$$bs $$bs ]*//g"`;; \ esac; \ fi; \ skip_next=no; \ strip_trailopt () \ { \ flg=`printf '%s\n' "$$flg" | sed "s/$$1.*$$//"`; \ }; \ for flg in $$sane_makeflags; do \ test $$skip_next = yes && { skip_next=no; continue; }; \ case $$flg in \ *=*|--*) continue;; \ -*I) strip_trailopt 'I'; skip_next=yes;; \ -*I?*) strip_trailopt 'I';; \ -*O) strip_trailopt 'O'; skip_next=yes;; \ -*O?*) strip_trailopt 'O';; \ -*l) strip_trailopt 'l'; skip_next=yes;; \ -*l?*) strip_trailopt 'l';; \ -[dEDm]) skip_next=yes;; \ -[JT]) skip_next=yes;; \ esac; \ case $$flg in \ *$$target_option*) has_opt=yes; break;; \ esac; \ done; \ test $$has_opt = yes am__make_dryrun = (target_option=n; $(am__make_running_with_option)) am__make_keepgoing = (target_option=k; $(am__make_running_with_option)) pkgdatadir = $(datadir)/@PACKAGE@ pkgincludedir = $(includedir)/@PACKAGE@ pkglibexecdir = $(libexecdir)/@PACKAGE@ am__cd = CDPATH="$${ZSH_VERSION+.}$(PATH_SEPARATOR)" && cd install_sh_DATA = $(install_sh) -c -m 644 install_sh_PROGRAM = $(install_sh) -c install_sh_SCRIPT = $(install_sh) -c INSTALL_HEADER = $(INSTALL_DATA) transform = $(program_transform_name) NORMAL_INSTALL = : PRE_INSTALL = : POST_INSTALL = : NORMAL_UNINSTALL = : PRE_UNINSTALL = : POST_UNINSTALL = : build_triplet = @build@ host_triplet = @host@ target_triplet = @target@ subdir = contribs/pam DIST_COMMON = $(srcdir)/Makefile.in $(srcdir)/Makefile.am \ $(top_srcdir)/auxdir/depcomp README ACLOCAL_M4 = $(top_srcdir)/aclocal.m4 am__aclocal_m4_deps = $(top_srcdir)/auxdir/ax_lib_hdf5.m4 \ $(top_srcdir)/auxdir/ax_pthread.m4 \ $(top_srcdir)/auxdir/libtool.m4 \ 
$(top_srcdir)/auxdir/ltoptions.m4 \ $(top_srcdir)/auxdir/ltsugar.m4 \ $(top_srcdir)/auxdir/ltversion.m4 \ $(top_srcdir)/auxdir/lt~obsolete.m4 \ $(top_srcdir)/auxdir/slurm.m4 \ $(top_srcdir)/auxdir/x_ac__system_configuration.m4 \ $(top_srcdir)/auxdir/x_ac_affinity.m4 \ $(top_srcdir)/auxdir/x_ac_aix.m4 \ $(top_srcdir)/auxdir/x_ac_blcr.m4 \ $(top_srcdir)/auxdir/x_ac_bluegene.m4 \ $(top_srcdir)/auxdir/x_ac_cflags.m4 \ $(top_srcdir)/auxdir/x_ac_cray.m4 \ $(top_srcdir)/auxdir/x_ac_curl.m4 \ $(top_srcdir)/auxdir/x_ac_databases.m4 \ $(top_srcdir)/auxdir/x_ac_debug.m4 \ $(top_srcdir)/auxdir/x_ac_dlfcn.m4 \ $(top_srcdir)/auxdir/x_ac_env.m4 \ $(top_srcdir)/auxdir/x_ac_freeipmi.m4 \ $(top_srcdir)/auxdir/x_ac_gpl_licensed.m4 \ $(top_srcdir)/auxdir/x_ac_hwloc.m4 \ $(top_srcdir)/auxdir/x_ac_iso.m4 \ $(top_srcdir)/auxdir/x_ac_json.m4 \ $(top_srcdir)/auxdir/x_ac_lua.m4 \ $(top_srcdir)/auxdir/x_ac_man2html.m4 \ $(top_srcdir)/auxdir/x_ac_munge.m4 \ $(top_srcdir)/auxdir/x_ac_ncurses.m4 \ $(top_srcdir)/auxdir/x_ac_netloc.m4 \ $(top_srcdir)/auxdir/x_ac_nrt.m4 \ $(top_srcdir)/auxdir/x_ac_ofed.m4 \ $(top_srcdir)/auxdir/x_ac_pam.m4 \ $(top_srcdir)/auxdir/x_ac_printf_null.m4 \ $(top_srcdir)/auxdir/x_ac_ptrace.m4 \ $(top_srcdir)/auxdir/x_ac_readline.m4 \ $(top_srcdir)/auxdir/x_ac_rrdtool.m4 \ $(top_srcdir)/auxdir/x_ac_setpgrp.m4 \ $(top_srcdir)/auxdir/x_ac_setproctitle.m4 \ $(top_srcdir)/auxdir/x_ac_sgi_job.m4 \ $(top_srcdir)/auxdir/x_ac_slurm_ssl.m4 \ $(top_srcdir)/auxdir/x_ac_sun_const.m4 \ $(top_srcdir)/configure.ac am__configure_deps = $(am__aclocal_m4_deps) $(CONFIGURE_DEPENDENCIES) \ $(ACLOCAL_M4) mkinstalldirs = $(install_sh) -d CONFIG_HEADER = $(top_builddir)/config.h $(top_builddir)/slurm/slurm.h CONFIG_CLEAN_FILES = CONFIG_CLEAN_VPATH_FILES = am__vpath_adj_setup = srcdirstrip=`echo "$(srcdir)" | sed 's|.|.|g'`; am__vpath_adj = case $$p in \ $(srcdir)/*) f=`echo "$$p" | sed "s|^$$srcdirstrip/||"`;; \ *) f=$$p;; \ esac; am__strip_dir = f=`echo $$p | sed -e 's|^.*/||'`; 
am__install_max = 40 am__nobase_strip_setup = \ srcdirstrip=`echo "$(srcdir)" | sed 's/[].[^$$\\*|]/\\\\&/g'` am__nobase_strip = \ for p in $$list; do echo "$$p"; done | sed -e "s|$$srcdirstrip/||" am__nobase_list = $(am__nobase_strip_setup); \ for p in $$list; do echo "$$p $$p"; done | \ sed "s| $$srcdirstrip/| |;"' / .*\//!s/ .*/ ./; s,\( .*\)/[^/]*$$,\1,' | \ $(AWK) 'BEGIN { files["."] = "" } { files[$$2] = files[$$2] " " $$1; \ if (++n[$$2] == $(am__install_max)) \ { print $$2, files[$$2]; n[$$2] = 0; files[$$2] = "" } } \ END { for (dir in files) print dir, files[dir] }' am__base_list = \ sed '$$!N;$$!N;$$!N;$$!N;$$!N;$$!N;$$!N;s/\n/ /g' | \ sed '$$!N;$$!N;$$!N;$$!N;s/\n/ /g' am__uninstall_files_from_dir = { \ test -z "$$files" \ || { test ! -d "$$dir" && test ! -f "$$dir" && test ! -r "$$dir"; } \ || { echo " ( cd '$$dir' && rm -f" $$files ")"; \ $(am__cd) "$$dir" && rm -f $$files; }; \ } am__installdirs = "$(DESTDIR)$(pkglibdir)" LTLIBRARIES = $(pkglib_LTLIBRARIES) @HAVE_PAM_TRUE@pam_slurm_la_DEPENDENCIES = \ @HAVE_PAM_TRUE@ $(top_builddir)/src/api/libslurm.la am__pam_slurm_la_SOURCES_DIST = pam_slurm.c @HAVE_PAM_TRUE@am_pam_slurm_la_OBJECTS = pam_slurm.lo am__EXTRA_pam_slurm_la_SOURCES_DIST = pam_slurm.c pam_slurm_la_OBJECTS = $(am_pam_slurm_la_OBJECTS) AM_V_lt = $(am__v_lt_@AM_V@) am__v_lt_ = $(am__v_lt_@AM_DEFAULT_V@) am__v_lt_0 = --silent am__v_lt_1 = pam_slurm_la_LINK = $(LIBTOOL) $(AM_V_lt) --tag=CC $(AM_LIBTOOLFLAGS) \ $(LIBTOOLFLAGS) --mode=link $(CCLD) $(AM_CFLAGS) $(CFLAGS) \ $(pam_slurm_la_LDFLAGS) $(LDFLAGS) -o $@ @HAVE_PAM_TRUE@am_pam_slurm_la_rpath = -rpath $(pkglibdir) AM_V_P = $(am__v_P_@AM_V@) am__v_P_ = $(am__v_P_@AM_DEFAULT_V@) am__v_P_0 = false am__v_P_1 = : AM_V_GEN = $(am__v_GEN_@AM_V@) am__v_GEN_ = $(am__v_GEN_@AM_DEFAULT_V@) am__v_GEN_0 = @echo " GEN " $@; am__v_GEN_1 = AM_V_at = $(am__v_at_@AM_V@) am__v_at_ = $(am__v_at_@AM_DEFAULT_V@) am__v_at_0 = @ am__v_at_1 = DEFAULT_INCLUDES = -I.@am__isrc@ -I$(top_builddir) 
-I$(top_builddir)/slurm depcomp = $(SHELL) $(top_srcdir)/auxdir/depcomp am__depfiles_maybe = depfiles am__mv = mv -f COMPILE = $(CC) $(DEFS) $(DEFAULT_INCLUDES) $(INCLUDES) $(AM_CPPFLAGS) \ $(CPPFLAGS) $(AM_CFLAGS) $(CFLAGS) LTCOMPILE = $(LIBTOOL) $(AM_V_lt) --tag=CC $(AM_LIBTOOLFLAGS) \ $(LIBTOOLFLAGS) --mode=compile $(CC) $(DEFS) \ $(DEFAULT_INCLUDES) $(INCLUDES) $(AM_CPPFLAGS) $(CPPFLAGS) \ $(AM_CFLAGS) $(CFLAGS) AM_V_CC = $(am__v_CC_@AM_V@) am__v_CC_ = $(am__v_CC_@AM_DEFAULT_V@) am__v_CC_0 = @echo " CC " $@; am__v_CC_1 = CCLD = $(CC) LINK = $(LIBTOOL) $(AM_V_lt) --tag=CC $(AM_LIBTOOLFLAGS) \ $(LIBTOOLFLAGS) --mode=link $(CCLD) $(AM_CFLAGS) $(CFLAGS) \ $(AM_LDFLAGS) $(LDFLAGS) -o $@ AM_V_CCLD = $(am__v_CCLD_@AM_V@) am__v_CCLD_ = $(am__v_CCLD_@AM_DEFAULT_V@) am__v_CCLD_0 = @echo " CCLD " $@; am__v_CCLD_1 = SOURCES = $(pam_slurm_la_SOURCES) $(EXTRA_pam_slurm_la_SOURCES) DIST_SOURCES = $(am__pam_slurm_la_SOURCES_DIST) \ $(am__EXTRA_pam_slurm_la_SOURCES_DIST) am__can_run_installinfo = \ case $$AM_UPDATE_INFO_DIR in \ n|no|NO) false;; \ *) (install-info --version) >/dev/null 2>&1;; \ esac am__tagged_files = $(HEADERS) $(SOURCES) $(TAGS_FILES) $(LISP) # Read a list of newline-separated strings from the standard input, # and print each of them once, without duplicates. Input order is # *not* preserved. am__uniquify_input = $(AWK) '\ BEGIN { nonempty = 0; } \ { items[$$0] = 1; nonempty = 1; } \ END { if (nonempty) { for (i in items) print i; }; } \ ' # Make sure the list of sources is unique. This is necessary because, # e.g., the same source file might be shared among _SOURCES variables # for different programs/libraries. 
am__define_uniq_tagged_files = \ list='$(am__tagged_files)'; \ unique=`for i in $$list; do \ if test -f "$$i"; then echo $$i; else echo $(srcdir)/$$i; fi; \ done | $(am__uniquify_input)` ETAGS = etags CTAGS = ctags DISTFILES = $(DIST_COMMON) $(DIST_SOURCES) $(TEXINFOS) $(EXTRA_DIST) pkglibdir = $(PAM_DIR) ACLOCAL = @ACLOCAL@ AMTAR = @AMTAR@ AM_DEFAULT_VERBOSITY = @AM_DEFAULT_VERBOSITY@ AR = @AR@ AUTHD_CFLAGS = @AUTHD_CFLAGS@ AUTHD_LIBS = @AUTHD_LIBS@ AUTOCONF = @AUTOCONF@ AUTOHEADER = @AUTOHEADER@ AUTOMAKE = @AUTOMAKE@ AWK = @AWK@ BGL_LOADED = @BGL_LOADED@ BGQ_LOADED = @BGQ_LOADED@ BG_INCLUDES = @BG_INCLUDES@ BG_LDFLAGS = @BG_LDFLAGS@ BG_L_P_LOADED = @BG_L_P_LOADED@ BLCR_CPPFLAGS = @BLCR_CPPFLAGS@ BLCR_HOME = @BLCR_HOME@ BLCR_LDFLAGS = @BLCR_LDFLAGS@ BLCR_LIBS = @BLCR_LIBS@ BLUEGENE_LOADED = @BLUEGENE_LOADED@ CC = @CC@ CCDEPMODE = @CCDEPMODE@ CFLAGS = @CFLAGS@ CHECK_CFLAGS = @CHECK_CFLAGS@ CHECK_LIBS = @CHECK_LIBS@ CMD_LDFLAGS = @CMD_LDFLAGS@ CPP = @CPP@ CPPFLAGS = @CPPFLAGS@ CRAY_JOB_CPPFLAGS = @CRAY_JOB_CPPFLAGS@ CRAY_JOB_LDFLAGS = @CRAY_JOB_LDFLAGS@ CRAY_SELECT_CPPFLAGS = @CRAY_SELECT_CPPFLAGS@ CRAY_SELECT_LDFLAGS = @CRAY_SELECT_LDFLAGS@ CRAY_SWITCH_CPPFLAGS = @CRAY_SWITCH_CPPFLAGS@ CRAY_SWITCH_LDFLAGS = @CRAY_SWITCH_LDFLAGS@ CRAY_TASK_CPPFLAGS = @CRAY_TASK_CPPFLAGS@ CRAY_TASK_LDFLAGS = @CRAY_TASK_LDFLAGS@ CXX = @CXX@ CXXCPP = @CXXCPP@ CXXDEPMODE = @CXXDEPMODE@ CXXFLAGS = @CXXFLAGS@ CYGPATH_W = @CYGPATH_W@ DATAWARP_CPPFLAGS = @DATAWARP_CPPFLAGS@ DATAWARP_LDFLAGS = @DATAWARP_LDFLAGS@ DEFS = @DEFS@ DEPDIR = @DEPDIR@ DLLTOOL = @DLLTOOL@ DL_LIBS = @DL_LIBS@ DSYMUTIL = @DSYMUTIL@ DUMPBIN = @DUMPBIN@ ECHO_C = @ECHO_C@ ECHO_N = @ECHO_N@ ECHO_T = @ECHO_T@ EGREP = @EGREP@ EXEEXT = @EXEEXT@ FGREP = @FGREP@ FREEIPMI_CPPFLAGS = @FREEIPMI_CPPFLAGS@ FREEIPMI_LDFLAGS = @FREEIPMI_LDFLAGS@ FREEIPMI_LIBS = @FREEIPMI_LIBS@ GLIB_CFLAGS = @GLIB_CFLAGS@ GLIB_COMPILE_RESOURCES = @GLIB_COMPILE_RESOURCES@ GLIB_GENMARSHAL = @GLIB_GENMARSHAL@ GLIB_LIBS = @GLIB_LIBS@ GLIB_MKENUMS = 
@GLIB_MKENUMS@ GOBJECT_QUERY = @GOBJECT_QUERY@ GREP = @GREP@ GTK_CFLAGS = @GTK_CFLAGS@ GTK_LIBS = @GTK_LIBS@ H5CC = @H5CC@ H5FC = @H5FC@ HAVEMYSQLCONFIG = @HAVEMYSQLCONFIG@ HAVE_AIX = @HAVE_AIX@ HAVE_MAN2HTML = @HAVE_MAN2HTML@ HAVE_NRT = @HAVE_NRT@ HAVE_OPENSSL = @HAVE_OPENSSL@ HAVE_SOME_CURSES = @HAVE_SOME_CURSES@ HDF5_CC = @HDF5_CC@ HDF5_CFLAGS = @HDF5_CFLAGS@ HDF5_CPPFLAGS = @HDF5_CPPFLAGS@ HDF5_FC = @HDF5_FC@ HDF5_FFLAGS = @HDF5_FFLAGS@ HDF5_FLIBS = @HDF5_FLIBS@ HDF5_LDFLAGS = @HDF5_LDFLAGS@ HDF5_LIBS = @HDF5_LIBS@ HDF5_VERSION = @HDF5_VERSION@ HWLOC_CPPFLAGS = @HWLOC_CPPFLAGS@ HWLOC_LDFLAGS = @HWLOC_LDFLAGS@ HWLOC_LIBS = @HWLOC_LIBS@ INSTALL = @INSTALL@ INSTALL_DATA = @INSTALL_DATA@ INSTALL_PROGRAM = @INSTALL_PROGRAM@ INSTALL_SCRIPT = @INSTALL_SCRIPT@ INSTALL_STRIP_PROGRAM = @INSTALL_STRIP_PROGRAM@ JSON_CPPFLAGS = @JSON_CPPFLAGS@ JSON_LDFLAGS = @JSON_LDFLAGS@ LD = @LD@ LDFLAGS = @LDFLAGS@ LIBCURL = @LIBCURL@ LIBCURL_CPPFLAGS = @LIBCURL_CPPFLAGS@ LIBOBJS = @LIBOBJS@ LIBS = @LIBS@ LIBTOOL = @LIBTOOL@ LIB_LDFLAGS = @LIB_LDFLAGS@ LIPO = @LIPO@ LN_S = @LN_S@ LTLIBOBJS = @LTLIBOBJS@ MAINT = @MAINT@ MAKEINFO = @MAKEINFO@ MANIFEST_TOOL = @MANIFEST_TOOL@ MKDIR_P = @MKDIR_P@ MUNGE_CPPFLAGS = @MUNGE_CPPFLAGS@ MUNGE_DIR = @MUNGE_DIR@ MUNGE_LDFLAGS = @MUNGE_LDFLAGS@ MUNGE_LIBS = @MUNGE_LIBS@ MYSQL_CFLAGS = @MYSQL_CFLAGS@ MYSQL_LIBS = @MYSQL_LIBS@ NCURSES = @NCURSES@ NETLOC_CPPFLAGS = @NETLOC_CPPFLAGS@ NETLOC_LDFLAGS = @NETLOC_LDFLAGS@ NETLOC_LIBS = @NETLOC_LIBS@ NM = @NM@ NMEDIT = @NMEDIT@ NRT_CPPFLAGS = @NRT_CPPFLAGS@ NUMA_LIBS = @NUMA_LIBS@ OBJDUMP = @OBJDUMP@ OBJEXT = @OBJEXT@ OFED_CPPFLAGS = @OFED_CPPFLAGS@ OFED_LDFLAGS = @OFED_LDFLAGS@ OFED_LIBS = @OFED_LIBS@ OTOOL = @OTOOL@ OTOOL64 = @OTOOL64@ PACKAGE = @PACKAGE@ PACKAGE_BUGREPORT = @PACKAGE_BUGREPORT@ PACKAGE_NAME = @PACKAGE_NAME@ PACKAGE_STRING = @PACKAGE_STRING@ PACKAGE_TARNAME = @PACKAGE_TARNAME@ PACKAGE_URL = @PACKAGE_URL@ PACKAGE_VERSION = @PACKAGE_VERSION@ PAM_DIR = @PAM_DIR@ PAM_LIBS = @PAM_LIBS@ 
PATH_SEPARATOR = @PATH_SEPARATOR@ PKG_CONFIG = @PKG_CONFIG@ PKG_CONFIG_LIBDIR = @PKG_CONFIG_LIBDIR@ PKG_CONFIG_PATH = @PKG_CONFIG_PATH@ PROCTRACKDIR = @PROCTRACKDIR@ PROJECT = @PROJECT@ PTHREAD_CC = @PTHREAD_CC@ PTHREAD_CFLAGS = @PTHREAD_CFLAGS@ PTHREAD_LIBS = @PTHREAD_LIBS@ RANLIB = @RANLIB@ READLINE_LIBS = @READLINE_LIBS@ REAL_BGQ_LOADED = @REAL_BGQ_LOADED@ REAL_BG_L_P_LOADED = @REAL_BG_L_P_LOADED@ RELEASE = @RELEASE@ RRDTOOL_CPPFLAGS = @RRDTOOL_CPPFLAGS@ RRDTOOL_LDFLAGS = @RRDTOOL_LDFLAGS@ RRDTOOL_LIBS = @RRDTOOL_LIBS@ RUNJOB_LDFLAGS = @RUNJOB_LDFLAGS@ SED = @SED@ SEMAPHORE_LIBS = @SEMAPHORE_LIBS@ SEMAPHORE_SOURCES = @SEMAPHORE_SOURCES@ SET_MAKE = @SET_MAKE@ SHELL = @SHELL@ SLEEP_CMD = @SLEEP_CMD@ SLURMCTLD_PORT = @SLURMCTLD_PORT@ SLURMCTLD_PORT_COUNT = @SLURMCTLD_PORT_COUNT@ SLURMDBD_PORT = @SLURMDBD_PORT@ SLURMD_PORT = @SLURMD_PORT@ SLURM_API_AGE = @SLURM_API_AGE@ SLURM_API_CURRENT = @SLURM_API_CURRENT@ SLURM_API_MAJOR = @SLURM_API_MAJOR@ SLURM_API_REVISION = @SLURM_API_REVISION@ SLURM_API_VERSION = @SLURM_API_VERSION@ SLURM_MAJOR = @SLURM_MAJOR@ SLURM_MICRO = @SLURM_MICRO@ SLURM_MINOR = @SLURM_MINOR@ SLURM_PREFIX = @SLURM_PREFIX@ SLURM_VERSION_NUMBER = @SLURM_VERSION_NUMBER@ SLURM_VERSION_STRING = @SLURM_VERSION_STRING@ SO_LDFLAGS = @SO_LDFLAGS@ SSL_CPPFLAGS = @SSL_CPPFLAGS@ SSL_LDFLAGS = @SSL_LDFLAGS@ SSL_LIBS = @SSL_LIBS@ STRIP = @STRIP@ SUCMD = @SUCMD@ UTIL_LIBS = @UTIL_LIBS@ VERSION = @VERSION@ _libcurl_config = @_libcurl_config@ abs_builddir = @abs_builddir@ abs_srcdir = @abs_srcdir@ abs_top_builddir = @abs_top_builddir@ abs_top_srcdir = @abs_top_srcdir@ ac_ct_AR = @ac_ct_AR@ ac_ct_CC = @ac_ct_CC@ ac_ct_CXX = @ac_ct_CXX@ ac_ct_DUMPBIN = @ac_ct_DUMPBIN@ ac_have_man2html = @ac_have_man2html@ am__include = @am__include@ am__leading_dot = @am__leading_dot@ am__quote = @am__quote@ am__tar = @am__tar@ am__untar = @am__untar@ ax_pthread_config = @ax_pthread_config@ bindir = @bindir@ build = @build@ build_alias = @build_alias@ build_cpu = @build_cpu@ build_os = 
@build_os@ build_vendor = @build_vendor@ builddir = @builddir@ datadir = @datadir@ datarootdir = @datarootdir@ docdir = @docdir@ dvidir = @dvidir@ exec_prefix = @exec_prefix@ host = @host@ host_alias = @host_alias@ host_cpu = @host_cpu@ host_os = @host_os@ host_vendor = @host_vendor@ htmldir = @htmldir@ includedir = @includedir@ infodir = @infodir@ install_sh = @install_sh@ libdir = @libdir@ libexecdir = @libexecdir@ localedir = @localedir@ localstatedir = @localstatedir@ lua_CFLAGS = @lua_CFLAGS@ lua_LIBS = @lua_LIBS@ mandir = @mandir@ mkdir_p = @mkdir_p@ oldincludedir = @oldincludedir@ pdfdir = @pdfdir@ prefix = @prefix@ program_transform_name = @program_transform_name@ psdir = @psdir@ sbindir = @sbindir@ sharedstatedir = @sharedstatedir@ srcdir = @srcdir@ sysconfdir = @sysconfdir@ target = @target@ target_alias = @target_alias@ target_cpu = @target_cpu@ target_os = @target_os@ target_vendor = @target_vendor@ top_build_prefix = @top_build_prefix@ top_builddir = @top_builddir@ top_srcdir = @top_srcdir@ AUTOMAKE_OPTIONS = foreign AM_CPPFLAGS = -fPIC -I$(top_srcdir) -I$(top_srcdir)/src/common PLUGIN_FLAGS = -module --export-dynamic -avoid-version @HAVE_PAM_FALSE@pam_lib = @HAVE_PAM_TRUE@pam_lib = pam_slurm.la pkglib_LTLIBRARIES = $(pam_lib) @HAVE_PAM_TRUE@current = $(SLURM_API_CURRENT) @HAVE_PAM_TRUE@age = $(SLURM_API_AGE) @HAVE_PAM_TRUE@rev = $(SLURM_API_REVISION) @HAVE_PAM_TRUE@pam_slurm_la_SOURCES = pam_slurm.c @HAVE_PAM_TRUE@pam_slurm_la_LIBADD = $(top_builddir)/src/api/libslurm.la @HAVE_PAM_TRUE@pam_slurm_la_LDFLAGS = $(SO_LDFLAGS) $(PLUGIN_FLAGS) $(LIB_LDFLAGS) # Don't specify basename or version.map files in src/api will not be built # @cd `dirname $@` && $(MAKE) `basename $@` @HAVE_PAM_FALSE@EXTRA_pam_slurm_la_SOURCES = pam_slurm.c all: all-am .SUFFIXES: .SUFFIXES: .c .lo .o .obj $(srcdir)/Makefile.in: @MAINTAINER_MODE_TRUE@ $(srcdir)/Makefile.am $(am__configure_deps) @for dep in $?; do \ case '$(am__configure_deps)' in \ *$$dep*) \ ( cd $(top_builddir) && 
$(MAKE) $(AM_MAKEFLAGS) am--refresh ) \ && { if test -f $@; then exit 0; else break; fi; }; \ exit 1;; \ esac; \ done; \ echo ' cd $(top_srcdir) && $(AUTOMAKE) --foreign contribs/pam/Makefile'; \ $(am__cd) $(top_srcdir) && \ $(AUTOMAKE) --foreign contribs/pam/Makefile .PRECIOUS: Makefile Makefile: $(srcdir)/Makefile.in $(top_builddir)/config.status @case '$?' in \ *config.status*) \ cd $(top_builddir) && $(MAKE) $(AM_MAKEFLAGS) am--refresh;; \ *) \ echo ' cd $(top_builddir) && $(SHELL) ./config.status $(subdir)/$@ $(am__depfiles_maybe)'; \ cd $(top_builddir) && $(SHELL) ./config.status $(subdir)/$@ $(am__depfiles_maybe);; \ esac; $(top_builddir)/config.status: $(top_srcdir)/configure $(CONFIG_STATUS_DEPENDENCIES) cd $(top_builddir) && $(MAKE) $(AM_MAKEFLAGS) am--refresh $(top_srcdir)/configure: @MAINTAINER_MODE_TRUE@ $(am__configure_deps) cd $(top_builddir) && $(MAKE) $(AM_MAKEFLAGS) am--refresh $(ACLOCAL_M4): @MAINTAINER_MODE_TRUE@ $(am__aclocal_m4_deps) cd $(top_builddir) && $(MAKE) $(AM_MAKEFLAGS) am--refresh $(am__aclocal_m4_deps): install-pkglibLTLIBRARIES: $(pkglib_LTLIBRARIES) @$(NORMAL_INSTALL) @list='$(pkglib_LTLIBRARIES)'; test -n "$(pkglibdir)" || list=; \ list2=; for p in $$list; do \ if test -f $$p; then \ list2="$$list2 $$p"; \ else :; fi; \ done; \ test -z "$$list2" || { \ echo " $(MKDIR_P) '$(DESTDIR)$(pkglibdir)'"; \ $(MKDIR_P) "$(DESTDIR)$(pkglibdir)" || exit 1; \ echo " $(LIBTOOL) $(AM_LIBTOOLFLAGS) $(LIBTOOLFLAGS) --mode=install $(INSTALL) $(INSTALL_STRIP_FLAG) $$list2 '$(DESTDIR)$(pkglibdir)'"; \ $(LIBTOOL) $(AM_LIBTOOLFLAGS) $(LIBTOOLFLAGS) --mode=install $(INSTALL) $(INSTALL_STRIP_FLAG) $$list2 "$(DESTDIR)$(pkglibdir)"; \ } uninstall-pkglibLTLIBRARIES: @$(NORMAL_UNINSTALL) @list='$(pkglib_LTLIBRARIES)'; test -n "$(pkglibdir)" || list=; \ for p in $$list; do \ $(am__strip_dir) \ echo " $(LIBTOOL) $(AM_LIBTOOLFLAGS) $(LIBTOOLFLAGS) --mode=uninstall rm -f '$(DESTDIR)$(pkglibdir)/$$f'"; \ $(LIBTOOL) $(AM_LIBTOOLFLAGS) $(LIBTOOLFLAGS) 
--mode=uninstall rm -f "$(DESTDIR)$(pkglibdir)/$$f"; \ done clean-pkglibLTLIBRARIES: -test -z "$(pkglib_LTLIBRARIES)" || rm -f $(pkglib_LTLIBRARIES) @list='$(pkglib_LTLIBRARIES)'; \ locs=`for p in $$list; do echo $$p; done | \ sed 's|^[^/]*$$|.|; s|/[^/]*$$||; s|$$|/so_locations|' | \ sort -u`; \ test -z "$$locs" || { \ echo rm -f $${locs}; \ rm -f $${locs}; \ } pam_slurm.la: $(pam_slurm_la_OBJECTS) $(pam_slurm_la_DEPENDENCIES) $(EXTRA_pam_slurm_la_DEPENDENCIES) $(AM_V_CCLD)$(pam_slurm_la_LINK) $(am_pam_slurm_la_rpath) $(pam_slurm_la_OBJECTS) $(pam_slurm_la_LIBADD) $(LIBS) mostlyclean-compile: -rm -f *.$(OBJEXT) distclean-compile: -rm -f *.tab.c @AMDEP_TRUE@@am__include@ @am__quote@./$(DEPDIR)/pam_slurm.Plo@am__quote@ .c.o: @am__fastdepCC_TRUE@ $(AM_V_CC)$(COMPILE) -MT $@ -MD -MP -MF $(DEPDIR)/$*.Tpo -c -o $@ $< @am__fastdepCC_TRUE@ $(AM_V_at)$(am__mv) $(DEPDIR)/$*.Tpo $(DEPDIR)/$*.Po @AMDEP_TRUE@@am__fastdepCC_FALSE@ $(AM_V_CC)source='$<' object='$@' libtool=no @AMDEPBACKSLASH@ @AMDEP_TRUE@@am__fastdepCC_FALSE@ DEPDIR=$(DEPDIR) $(CCDEPMODE) $(depcomp) @AMDEPBACKSLASH@ @am__fastdepCC_FALSE@ $(AM_V_CC@am__nodep@)$(COMPILE) -c -o $@ $< .c.obj: @am__fastdepCC_TRUE@ $(AM_V_CC)$(COMPILE) -MT $@ -MD -MP -MF $(DEPDIR)/$*.Tpo -c -o $@ `$(CYGPATH_W) '$<'` @am__fastdepCC_TRUE@ $(AM_V_at)$(am__mv) $(DEPDIR)/$*.Tpo $(DEPDIR)/$*.Po @AMDEP_TRUE@@am__fastdepCC_FALSE@ $(AM_V_CC)source='$<' object='$@' libtool=no @AMDEPBACKSLASH@ @AMDEP_TRUE@@am__fastdepCC_FALSE@ DEPDIR=$(DEPDIR) $(CCDEPMODE) $(depcomp) @AMDEPBACKSLASH@ @am__fastdepCC_FALSE@ $(AM_V_CC@am__nodep@)$(COMPILE) -c -o $@ `$(CYGPATH_W) '$<'` .c.lo: @am__fastdepCC_TRUE@ $(AM_V_CC)$(LTCOMPILE) -MT $@ -MD -MP -MF $(DEPDIR)/$*.Tpo -c -o $@ $< @am__fastdepCC_TRUE@ $(AM_V_at)$(am__mv) $(DEPDIR)/$*.Tpo $(DEPDIR)/$*.Plo @AMDEP_TRUE@@am__fastdepCC_FALSE@ $(AM_V_CC)source='$<' object='$@' libtool=yes @AMDEPBACKSLASH@ @AMDEP_TRUE@@am__fastdepCC_FALSE@ DEPDIR=$(DEPDIR) $(CCDEPMODE) $(depcomp) @AMDEPBACKSLASH@ @am__fastdepCC_FALSE@ 
$(AM_V_CC@am__nodep@)$(LTCOMPILE) -c -o $@ $< mostlyclean-libtool: -rm -f *.lo clean-libtool: -rm -rf .libs _libs ID: $(am__tagged_files) $(am__define_uniq_tagged_files); mkid -fID $$unique tags: tags-am TAGS: tags tags-am: $(TAGS_DEPENDENCIES) $(am__tagged_files) set x; \ here=`pwd`; \ $(am__define_uniq_tagged_files); \ shift; \ if test -z "$(ETAGS_ARGS)$$*$$unique"; then :; else \ test -n "$$unique" || unique=$$empty_fix; \ if test $$# -gt 0; then \ $(ETAGS) $(ETAGSFLAGS) $(AM_ETAGSFLAGS) $(ETAGS_ARGS) \ "$$@" $$unique; \ else \ $(ETAGS) $(ETAGSFLAGS) $(AM_ETAGSFLAGS) $(ETAGS_ARGS) \ $$unique; \ fi; \ fi ctags: ctags-am CTAGS: ctags ctags-am: $(TAGS_DEPENDENCIES) $(am__tagged_files) $(am__define_uniq_tagged_files); \ test -z "$(CTAGS_ARGS)$$unique" \ || $(CTAGS) $(CTAGSFLAGS) $(AM_CTAGSFLAGS) $(CTAGS_ARGS) \ $$unique GTAGS: here=`$(am__cd) $(top_builddir) && pwd` \ && $(am__cd) $(top_srcdir) \ && gtags -i $(GTAGS_ARGS) "$$here" cscopelist: cscopelist-am cscopelist-am: $(am__tagged_files) list='$(am__tagged_files)'; \ case "$(srcdir)" in \ [\\/]* | ?:[\\/]*) sdir="$(srcdir)" ;; \ *) sdir=$(subdir)/$(srcdir) ;; \ esac; \ for i in $$list; do \ if test -f "$$i"; then \ echo "$(subdir)/$$i"; \ else \ echo "$$sdir/$$i"; \ fi; \ done >> $(top_builddir)/cscope.files distclean-tags: -rm -f TAGS ID GTAGS GRTAGS GSYMS GPATH tags distdir: $(DISTFILES) @srcdirstrip=`echo "$(srcdir)" | sed 's/[].[^$$\\*]/\\\\&/g'`; \ topsrcdirstrip=`echo "$(top_srcdir)" | sed 's/[].[^$$\\*]/\\\\&/g'`; \ list='$(DISTFILES)'; \ dist_files=`for file in $$list; do echo $$file; done | \ sed -e "s|^$$srcdirstrip/||;t" \ -e "s|^$$topsrcdirstrip/|$(top_builddir)/|;t"`; \ case $$dist_files in \ */*) $(MKDIR_P) `echo "$$dist_files" | \ sed '/\//!d;s|^|$(distdir)/|;s,/[^/]*$$,,' | \ sort -u` ;; \ esac; \ for file in $$dist_files; do \ if test -f $$file || test -d $$file; then d=.; else d=$(srcdir); fi; \ if test -d $$d/$$file; then \ dir=`echo "/$$file" | sed -e 's,/[^/]*$$,,'`; \ if test -d 
"$(distdir)/$$file"; then \ find "$(distdir)/$$file" -type d ! -perm -700 -exec chmod u+rwx {} \;; \ fi; \ if test -d $(srcdir)/$$file && test $$d != $(srcdir); then \ cp -fpR $(srcdir)/$$file "$(distdir)$$dir" || exit 1; \ find "$(distdir)/$$file" -type d ! -perm -700 -exec chmod u+rwx {} \;; \ fi; \ cp -fpR $$d/$$file "$(distdir)$$dir" || exit 1; \ else \ test -f "$(distdir)/$$file" \ || cp -p $$d/$$file "$(distdir)/$$file" \ || exit 1; \ fi; \ done check-am: all-am check: check-am all-am: Makefile $(LTLIBRARIES) installdirs: for dir in "$(DESTDIR)$(pkglibdir)"; do \ test -z "$$dir" || $(MKDIR_P) "$$dir"; \ done install: install-am install-exec: install-exec-am install-data: install-data-am uninstall: uninstall-am install-am: all-am @$(MAKE) $(AM_MAKEFLAGS) install-exec-am install-data-am installcheck: installcheck-am install-strip: if test -z '$(STRIP)'; then \ $(MAKE) $(AM_MAKEFLAGS) INSTALL_PROGRAM="$(INSTALL_STRIP_PROGRAM)" \ install_sh_PROGRAM="$(INSTALL_STRIP_PROGRAM)" INSTALL_STRIP_FLAG=-s \ install; \ else \ $(MAKE) $(AM_MAKEFLAGS) INSTALL_PROGRAM="$(INSTALL_STRIP_PROGRAM)" \ install_sh_PROGRAM="$(INSTALL_STRIP_PROGRAM)" INSTALL_STRIP_FLAG=-s \ "INSTALL_PROGRAM_ENV=STRIPPROG='$(STRIP)'" install; \ fi mostlyclean-generic: clean-generic: distclean-generic: -test -z "$(CONFIG_CLEAN_FILES)" || rm -f $(CONFIG_CLEAN_FILES) -test . = "$(srcdir)" || test -z "$(CONFIG_CLEAN_VPATH_FILES)" || rm -f $(CONFIG_CLEAN_VPATH_FILES) maintainer-clean-generic: @echo "This command is intended for maintainers to use" @echo "it deletes files that may require special tools to rebuild." 
clean: clean-am clean-am: clean-generic clean-libtool clean-pkglibLTLIBRARIES \ mostlyclean-am distclean: distclean-am -rm -rf ./$(DEPDIR) -rm -f Makefile distclean-am: clean-am distclean-compile distclean-generic \ distclean-tags dvi: dvi-am dvi-am: html: html-am html-am: info: info-am info-am: install-data-am: install-dvi: install-dvi-am install-dvi-am: install-exec-am: install-pkglibLTLIBRARIES install-html: install-html-am install-html-am: install-info: install-info-am install-info-am: install-man: install-pdf: install-pdf-am install-pdf-am: install-ps: install-ps-am install-ps-am: installcheck-am: maintainer-clean: maintainer-clean-am -rm -rf ./$(DEPDIR) -rm -f Makefile maintainer-clean-am: distclean-am maintainer-clean-generic mostlyclean: mostlyclean-am mostlyclean-am: mostlyclean-compile mostlyclean-generic \ mostlyclean-libtool pdf: pdf-am pdf-am: ps: ps-am ps-am: uninstall-am: uninstall-pkglibLTLIBRARIES .MAKE: install-am install-strip .PHONY: CTAGS GTAGS TAGS all all-am check check-am clean clean-generic \ clean-libtool clean-pkglibLTLIBRARIES cscopelist-am ctags \ ctags-am distclean distclean-compile distclean-generic \ distclean-libtool distclean-tags distdir dvi dvi-am html \ html-am info info-am install install-am install-data \ install-data-am install-dvi install-dvi-am install-exec \ install-exec-am install-html install-html-am install-info \ install-info-am install-man install-pdf install-pdf-am \ install-pkglibLTLIBRARIES install-ps install-ps-am \ install-strip installcheck installcheck-am installdirs \ maintainer-clean maintainer-clean-generic mostlyclean \ mostlyclean-compile mostlyclean-generic mostlyclean-libtool \ pdf pdf-am ps ps-am tags tags-am uninstall uninstall-am \ uninstall-pkglibLTLIBRARIES @HAVE_PAM_TRUE@force: @HAVE_PAM_TRUE@$(pam_slurm_la_LIBADD) : force @HAVE_PAM_TRUE@ @cd `dirname $@` && $(MAKE) # Tell versions [3.59,3.63) of GNU make to not export all variables. # Otherwise a system limit (for SysV at least) may be exceeded. 
.NOEXPORT: slurm-slurm-15-08-7-1/contribs/pam/README000066400000000000000000000071051265000126300176100ustar00rootroot00000000000000Module Name: pam_slurm Authors: Chris Dunlap Jim Garlick Moe Jette Management Groups Provided: account System Dependencies: libslurm.so Overview: Restricts access to compute nodes in a cluster using SLURM. Recognized Arguments: debug; no_sys_info; no_warn; rsh_kludge; rlogin_kludge Description: This module restricts access to compute nodes in a cluster where Simple Linux Utility for Resource Management (SLURM) is in use. Access is granted to root, any user with an SLURM-launched job currently running on the node, or any user who has allocated resources on the node according to the SLURM database. The behavior of this module can be modified with the following flags: debug - log debugging information to the system log file no_sys_info - suppress system logging of "access granted for user ...", access denied and other errors will still be logged no_warn - suppress warning messages to the application rsh_kludge - prevent truncation of first char from rsh error msg rlogin_kludge - prevent "staircase-effect" following rlogin error msg Notes: This module will not work on systems where the hostname returned by gethostname() differs from the SLURM node name. This includes front-end configurations (IBM BlueGene or Cray systems) or systems configured in SLURM using the NodeHostName parameter. rsh_kludge - The rsh service under RH71 (rsh-0.17-2.5) truncates the first character of this message. The rsh client sends 3 NUL-terminated ASCII strings: client-user-name, server-user-name, and command string. The server then validates the user. If the user is valid, it responds with a 1-byte zero; otherwise, it responds with a 1-byte one followed by an ASCII error message and a newline. RH's server is using the default PAM conversation function which doesn't prepend the message with a single-byte error code. 
As a result, the client receives a string, interprets the first byte as a non-zero status, and treats the remaining string as an error message. The rsh_kludge prepends a newline which will be interpreted by the rsh client as an error status. rlogin_kludge - The rlogin service under RH71 (rsh-0.17-2.5) does not perform a carriage-return after the PAM error message is displayed which results in the "staircase-effect" of the next message. The rlogin_kludge appends a carriage-return to prevent this. Examples / Suggested Usage: Use of this module is recommended on any compute node where you want to limit access to just those users who are currently scheduled to run jobs. For /etc/pam.d/ style configurations where modules live in /lib/security/, add the following line to the PAM configuration file for the appropriate service(s) (eg, /etc/pam.d/system-auth): account required /lib/security/pam_slurm.so If you always want to allow access for an administrative group (eg, wheel), stack the pam_access module ahead of pam_slurm: account sufficient /lib/security/pam_access.so account required /lib/security/pam_slurm.so Then edit the pam_access configuration file (/etc/security/access.conf): +:wheel:ALL -:ALL:ALL When access is denied because the user does not have an active job running on the node, an error message is returned to the application: Access denied: user foo (uid=1313) has no active jobs. This message can be suppressed by specifying the "no_warn" argument in the PAM configuration file. slurm-slurm-15-08-7-1/contribs/pam/pam_slurm.c000066400000000000000000000314371265000126300211000ustar00rootroot00000000000000/*****************************************************************************\ * $Id$ ***************************************************************************** * Copyright (C) 2002-2007 The Regents of the University of California. * Copyright (C) 2008-2009 Lawrence Livermore National Security. 
* Produced at Lawrence Livermore National Laboratory (cf, DISCLAIMER). * UCRL-CODE-2002-040. * * Written by Chris Dunlap * and Jim Garlick * modified for SLURM by Moe Jette . * * This file is part of pam_slurm, a PAM module for restricting access to * the compute nodes within a cluster based on information obtained from * Simple Linux Utility for Resource Management (SLURM). For details, see * . * * pam_slurm is free software; you can redistribute it and/or modify it * under the terms of the GNU General Public License as published by the * Free Software Foundation; either version 2 of the License, or (at your * option) any later version. * * pam_slurm is distributed in the hope that it will be useful, but WITHOUT * ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or * FITNESS FOR A PARTICULAR PURPOSE. See the GNU General Public License * for more details. * * You should have received a copy of the GNU General Public License along * with pam_slurm; if not, write to the Free Software Foundation, Inc., * 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA. \*****************************************************************************/ #if HAVE_CONFIG_H # include "config.h" #endif #include #include #include #include #include #include #include #include #include #include #include #include #include #include "slurm/slurm.h" #include "src/common/xmalloc.h" #include "src/common/read_config.h" /* Define the externally visible functions in this file. */ #define PAM_SM_ACCOUNT #include #include struct _options { int disable_sys_info; int enable_debug; int enable_silence; const char *msg_prefix; const char *msg_suffix; }; /* Define the functions to be called before and after load since _init * and _fini are obsolete, and their use can lead to unpredictable * results. 
*/ void __attribute__ ((constructor)) libpam_slurm_init(void); void __attribute__ ((destructor)) libpam_slurm_fini(void); /* * Handle for libslurm.so * * We open libslurm.so via dlopen () in order to pass the * flag RTLD_GLOBAL so that subsequently loaded modules have * access to libslurm symbols. This is pretty much only needed * for dynamically loaded modules that would otherwise be * linked against libslurm. * */ static void * slurm_h = NULL; static int pam_debug = 0; static void _log_msg(int level, const char *format, ...); static void _parse_args(struct _options *opts, int argc, const char **argv); static int _hostrange_member(char *hostname, char *str); static int _slurm_match_allocation(uid_t uid); static void _send_denial_msg(pam_handle_t *pamh, struct _options *opts, const char *user, uid_t uid); #define DBG(msg,args...) \ do { \ if (pam_debug) \ _log_msg(LOG_INFO, msg, ##args); \ } while (0); /**********************************\ * Account Management Functions * \**********************************/ PAM_EXTERN int pam_sm_acct_mgmt(pam_handle_t *pamh, int flags, int argc, const char **argv) { struct _options opts; int retval; char *user; void *dummy; /* needed to eliminate warning: * dereferencing type-punned pointer will break * strict-aliasing rules */ struct passwd *pw; uid_t uid; int auth = PAM_PERM_DENIED; _parse_args(&opts, argc, argv); if (flags & PAM_SILENT) opts.enable_silence = 1; retval = pam_get_item(pamh, PAM_USER, (const void **) &dummy); user = (char *) dummy; if ((retval != PAM_SUCCESS) || (user == NULL) || (*user == '\0')) { _log_msg(LOG_ERR, "unable to identify user: %s", pam_strerror(pamh, retval)); return(PAM_USER_UNKNOWN); } if (!(pw = getpwnam(user))) { _log_msg(LOG_ERR, "user %s does not exist", user); return(PAM_USER_UNKNOWN); } uid = pw->pw_uid; if (uid == 0) auth = PAM_SUCCESS; else if (_slurm_match_allocation(uid)) auth = PAM_SUCCESS; if ((auth != PAM_SUCCESS) && (!opts.enable_silence)) _send_denial_msg(pamh, &opts, user, uid); /*
* Generate an entry to the system log if access was * denied (!PAM_SUCCESS) or disable_sys_info is not set */ if ((auth != PAM_SUCCESS) || (!opts.disable_sys_info)) { _log_msg(LOG_INFO, "access %s for user %s (uid=%d)", (auth == PAM_SUCCESS) ? "granted" : "denied", user, uid); } return(auth); } /************************\ * Internal Functions * \************************/ /* * Writes message described by the 'format' string to syslog. */ static void _log_msg(int level, const char *format, ...) { va_list args; openlog("pam_slurm", LOG_CONS | LOG_PID, LOG_AUTHPRIV); va_start(args, format); vsyslog(level, format, args); va_end(args); closelog(); return; } /* * Parses module args passed via PAM's config. */ static void _parse_args(struct _options *opts, int argc, const char **argv) { int i; opts->disable_sys_info = 0; opts->enable_debug = 0; opts->enable_silence = 0; opts->msg_prefix = ""; opts->msg_suffix = ""; /* rsh_kludge: * The rsh service under RH71 (rsh-0.17-2.5) truncates the first char * of this msg. The rsh client sends 3 NUL-terminated ASCII strings: * client-user-name, server-user-name, and command string. The server * then validates the user. If the user is valid, it responds with a * 1-byte zero; o/w, it responds with a 1-byte one followed by an ASCII * error message and a newline. RH's server is using the default PAM * conversation function which doesn't prepend the message with a * single-byte error code. As a result, the client receives a string, * interprets the first byte as a non-zero status, and treats the * remaining string as an error message. The rsh_kludge prepends a * newline which will be interpreted by the rsh client as an * error status. * * rlogin_kludge: * The rlogin service under RH71 (rsh-0.17-2.5) does not perform a * carriage-return after the PAM error message is displayed * which results * in the "staircase-effect" of the next message. The rlogin_kludge * appends a carriage-return to prevent this. 
*/ for (i=0; i<argc; i++) { if (!strcmp(argv[i], "debug")) opts->enable_debug = pam_debug = 1; else if (!strcmp(argv[i], "no_sys_info")) opts->disable_sys_info = 1; else if (!strcmp(argv[i], "no_warn")) opts->enable_silence = 1; else if (!strcmp(argv[i], "rsh_kludge")) opts->msg_prefix = "\n"; else if (!strcmp(argv[i], "rlogin_kludge")) opts->msg_suffix = "\r"; else _log_msg(LOG_ERR, "unknown option [%s]", argv[i]); } return; } /* * Return 1 if 'hostname' is a member of 'str', a SLURM-style host list as * returned by SLURM database queries, else 0. The 'str' argument is * truncated to the base prefix as a side-effect. */ static int _hostrange_member(char *hostname, char *str) { hostlist_t hl; int found_host; if (!*hostname || !*str) return 0; if ((hl = slurm_hostlist_create(str)) == NULL) return 0; found_host = slurm_hostlist_find(hl, hostname); slurm_hostlist_destroy(hl); if (found_host == -1) return 0; else return 1; } /* _gethostname_short - equivalent to gethostname, but return only the first * component of the fully qualified name * (e.g. "linux123.foo.bar" becomes "linux123") * * Copied from src/common/read_config.c because it is not exported * through libslurm. * * OUT name */ static int _gethostname_short (char *name, size_t len) { int error_code, name_len; char *dot_ptr, path_name[1024]; error_code = gethostname(path_name, sizeof(path_name)); if (error_code) return error_code; dot_ptr = strchr (path_name, '.'); if (dot_ptr == NULL) dot_ptr = path_name + strlen(path_name); else dot_ptr[0] = '\0'; name_len = (dot_ptr - path_name); if (name_len >= len) return ENAMETOOLONG; strcpy(name, path_name); return 0; } /* * Query the SLURM database to find out if 'uid' has been allocated * this node. If so, return 1 indicating that 'uid' is authorized to * this node else return 0.
*/ static int _slurm_match_allocation(uid_t uid) { int authorized = 0, i; char hostname[MAXHOSTNAMELEN]; char *nodename = NULL; job_info_msg_t * msg; if (_gethostname_short(hostname, sizeof(hostname)) < 0) { _log_msg(LOG_ERR, "gethostname: %m"); return 0; } if (!(nodename = slurm_conf_get_nodename(hostname))) { if (!(nodename = slurm_conf_get_aliased_nodename())) { /* if no match, try localhost (Should only be * valid in a test environment) */ if (!(nodename = slurm_conf_get_nodename("localhost"))) { _log_msg(LOG_ERR, "slurm_conf_get_aliased_nodename: " "no hostname found"); return 0; } } } DBG ("does uid %ld have \"%s\" allocated?", uid, nodename); if (slurm_load_job_user(&msg, uid, SHOW_ALL) < 0) { _log_msg(LOG_ERR, "slurm_load_job_user: %s", slurm_strerror(errno)); return 0; } DBG ("slurm_load_jobs returned %d records", msg->record_count); for (i = 0; i < msg->record_count; i++) { job_info_t *j = &msg->job_array[i]; if (j->job_state == JOB_RUNNING) { DBG ("jobid %ld: nodes=\"%s\"", j->job_id, j->nodes); if (_hostrange_member(nodename, j->nodes) ) { DBG ("user %ld allocated node %s in job %ld", uid, nodename, j->job_id); authorized = 1; break; } } } xfree(nodename); slurm_free_job_info_msg (msg); return authorized; } /* * Sends a message to the application informing the user * that access was denied due to SLURM. */ static void _send_denial_msg(pam_handle_t *pamh, struct _options *opts, const char *user, uid_t uid) { int retval; struct pam_conv *conv; void *dummy; /* needed to eliminate warning: * dereferencing type-punned pointer will * break strict-aliasing rules */ int n; char str[PAM_MAX_MSG_SIZE]; struct pam_message msg[1]; const struct pam_message *pmsg[1]; struct pam_response *prsp; /* Get conversation function to talk with app. 
*/ retval = pam_get_item(pamh, PAM_CONV, (const void **) &dummy); conv = (struct pam_conv *) dummy; if (retval != PAM_SUCCESS) { _log_msg(LOG_ERR, "unable to get pam_conv: %s", pam_strerror(pamh, retval)); return; } /* Construct msg to send to app. */ n = snprintf(str, sizeof(str), "%sAccess denied: user %s (uid=%d) has no active jobs.%s", opts->msg_prefix, user, uid, opts->msg_suffix); if ((n < 0) || (n >= sizeof(str))) _log_msg(LOG_ERR, "exceeded buffer for pam_conv message"); msg[0].msg_style = PAM_ERROR_MSG; msg[0].msg = str; pmsg[0] = &msg[0]; prsp = NULL; /* Send msg to app and free the (meaningless) rsp. */ retval = conv->conv(1, pmsg, &prsp, conv->appdata_ptr); if (retval != PAM_SUCCESS) _log_msg(LOG_ERR, "unable to converse with app: %s", pam_strerror(pamh, retval)); if (prsp != NULL) _pam_drop_reply(prsp, 1); return; } /* * Dynamically open system's libslurm.so with RTLD_GLOBAL flag. * This allows subsequently loaded modules access to libslurm symbols. */ extern void libpam_slurm_init (void) { char libslurmname[64]; if (slurm_h) return; /* First try to use the same libslurm version ("libslurm.so.24.0.0"), * Second try to match the major version number ("libslurm.so.24"), * Otherwise use "libslurm.so" */ if (snprintf(libslurmname, sizeof(libslurmname), "libslurm.so.%d.%d.%d", SLURM_API_CURRENT, SLURM_API_REVISION, SLURM_API_AGE) >= sizeof(libslurmname) ) { _log_msg (LOG_ERR, "Unable to write libslurmname\n"); } else if ((slurm_h = dlopen(libslurmname, RTLD_NOW|RTLD_GLOBAL))) { return; } else { _log_msg (LOG_INFO, "Unable to dlopen %s: %s\n", libslurmname, dlerror ()); } if (snprintf(libslurmname, sizeof(libslurmname), "libslurm.so.%d", SLURM_API_CURRENT) >= sizeof(libslurmname) ) { _log_msg (LOG_ERR, "Unable to write libslurmname\n"); } else if ((slurm_h = dlopen(libslurmname, RTLD_NOW|RTLD_GLOBAL))) { return; } else { _log_msg (LOG_INFO, "Unable to dlopen %s: %s\n", libslurmname, dlerror ()); } if (!(slurm_h = dlopen("libslurm.so", RTLD_NOW|RTLD_GLOBAL))) 
{ _log_msg (LOG_ERR, "Unable to dlopen libslurm.so: %s\n", dlerror ()); } return; } extern void libpam_slurm_fini (void) { if (slurm_h) dlclose (slurm_h); return; } /*************************************\ * Statically Loaded Module Struct * \*************************************/ #ifdef PAM_STATIC struct pam_module _pam_rms_modstruct = { "pam_slurm", NULL, NULL, pam_sm_acct_mgmt, NULL, NULL, NULL, }; #endif /* PAM_STATIC */ slurm-slurm-15-08-7-1/contribs/pam_slurm_adopt/000077500000000000000000000000001265000126300213365ustar00rootroot00000000000000slurm-slurm-15-08-7-1/contribs/pam_slurm_adopt/Makefile.am000066400000000000000000000016321265000126300233740ustar00rootroot00000000000000# # Makefile for pam_slurm_adopt # AUTOMAKE_OPTIONS = foreign AM_CPPFLAGS = -fPIC -I$(top_srcdir) -I$(top_srcdir)/src/common # -DLIBSLURM_SO=\"$(libdir)/libslurm.so\" PLUGIN_FLAGS = -module --export-dynamic -avoid-version pkglibdir = $(PAM_DIR) if HAVE_PAM pam_lib = pam_slurm_adopt.la else pam_lib = endif pkglib_LTLIBRARIES = $(pam_lib) if HAVE_PAM current = $(SLURM_API_CURRENT) age = $(SLURM_API_AGE) rev = $(SLURM_API_REVISION) pam_slurm_adopt_la_SOURCES = pam_slurm_adopt.c helper.c helper.h pam_slurm_adopt_la_LIBADD = $(top_builddir)/src/api/libslurm.la pam_slurm_adopt_la_LDFLAGS = $(SO_LDFLAGS) $(PLUGIN_FLAGS) $(LIB_LDFLAGS) force: $(pam_slurm_adopt_la_LIBADD) : force @cd `dirname $@` && $(MAKE) # Don't specify basename or version.map files in src/api will not be built # @cd `dirname $@` && $(MAKE) `basename $@` else EXTRA_pam_slurm_adopt_la_SOURCES = pam_slurm_adopt.c helper.c endif slurm-slurm-15-08-7-1/contribs/pam_slurm_adopt/Makefile.in000066400000000000000000000650261265000126300234140ustar00rootroot00000000000000# Makefile.in generated by automake 1.14.1 from Makefile.am. # @configure_input@ # Copyright (C) 1994-2013 Free Software Foundation, Inc. 
# This Makefile.in is free software; the Free Software Foundation # gives unlimited permission to copy and/or distribute it, # with or without modifications, as long as this notice is preserved. # This program is distributed in the hope that it will be useful, # but WITHOUT ANY WARRANTY, to the extent permitted by law; without # even the implied warranty of MERCHANTABILITY or FITNESS FOR A # PARTICULAR PURPOSE. @SET_MAKE@ # # Makefile for pam_slurm_adopt # VPATH = @srcdir@ am__is_gnu_make = test -n '$(MAKEFILE_LIST)' && test -n '$(MAKELEVEL)' am__make_running_with_option = \ case $${target_option-} in \ ?) ;; \ *) echo "am__make_running_with_option: internal error: invalid" \ "target option '$${target_option-}' specified" >&2; \ exit 1;; \ esac; \ has_opt=no; \ sane_makeflags=$$MAKEFLAGS; \ if $(am__is_gnu_make); then \ sane_makeflags=$$MFLAGS; \ else \ case $$MAKEFLAGS in \ *\\[\ \ ]*) \ bs=\\; \ sane_makeflags=`printf '%s\n' "$$MAKEFLAGS" \ | sed "s/$$bs$$bs[$$bs $$bs ]*//g"`;; \ esac; \ fi; \ skip_next=no; \ strip_trailopt () \ { \ flg=`printf '%s\n' "$$flg" | sed "s/$$1.*$$//"`; \ }; \ for flg in $$sane_makeflags; do \ test $$skip_next = yes && { skip_next=no; continue; }; \ case $$flg in \ *=*|--*) continue;; \ -*I) strip_trailopt 'I'; skip_next=yes;; \ -*I?*) strip_trailopt 'I';; \ -*O) strip_trailopt 'O'; skip_next=yes;; \ -*O?*) strip_trailopt 'O';; \ -*l) strip_trailopt 'l'; skip_next=yes;; \ -*l?*) strip_trailopt 'l';; \ -[dEDm]) skip_next=yes;; \ -[JT]) skip_next=yes;; \ esac; \ case $$flg in \ *$$target_option*) has_opt=yes; break;; \ esac; \ done; \ test $$has_opt = yes am__make_dryrun = (target_option=n; $(am__make_running_with_option)) am__make_keepgoing = (target_option=k; $(am__make_running_with_option)) pkgdatadir = $(datadir)/@PACKAGE@ pkgincludedir = $(includedir)/@PACKAGE@ pkglibexecdir = $(libexecdir)/@PACKAGE@ am__cd = CDPATH="$${ZSH_VERSION+.}$(PATH_SEPARATOR)" && cd install_sh_DATA = $(install_sh) -c -m 644 install_sh_PROGRAM = 
$(install_sh) -c install_sh_SCRIPT = $(install_sh) -c INSTALL_HEADER = $(INSTALL_DATA) transform = $(program_transform_name) NORMAL_INSTALL = : PRE_INSTALL = : POST_INSTALL = : NORMAL_UNINSTALL = : PRE_UNINSTALL = : POST_UNINSTALL = : build_triplet = @build@ host_triplet = @host@ target_triplet = @target@ subdir = contribs/pam_slurm_adopt DIST_COMMON = $(srcdir)/Makefile.in $(srcdir)/Makefile.am \ $(top_srcdir)/auxdir/depcomp README ACLOCAL_M4 = $(top_srcdir)/aclocal.m4 am__aclocal_m4_deps = $(top_srcdir)/auxdir/ax_lib_hdf5.m4 \ $(top_srcdir)/auxdir/ax_pthread.m4 \ $(top_srcdir)/auxdir/libtool.m4 \ $(top_srcdir)/auxdir/ltoptions.m4 \ $(top_srcdir)/auxdir/ltsugar.m4 \ $(top_srcdir)/auxdir/ltversion.m4 \ $(top_srcdir)/auxdir/lt~obsolete.m4 \ $(top_srcdir)/auxdir/slurm.m4 \ $(top_srcdir)/auxdir/x_ac__system_configuration.m4 \ $(top_srcdir)/auxdir/x_ac_affinity.m4 \ $(top_srcdir)/auxdir/x_ac_aix.m4 \ $(top_srcdir)/auxdir/x_ac_blcr.m4 \ $(top_srcdir)/auxdir/x_ac_bluegene.m4 \ $(top_srcdir)/auxdir/x_ac_cflags.m4 \ $(top_srcdir)/auxdir/x_ac_cray.m4 \ $(top_srcdir)/auxdir/x_ac_curl.m4 \ $(top_srcdir)/auxdir/x_ac_databases.m4 \ $(top_srcdir)/auxdir/x_ac_debug.m4 \ $(top_srcdir)/auxdir/x_ac_dlfcn.m4 \ $(top_srcdir)/auxdir/x_ac_env.m4 \ $(top_srcdir)/auxdir/x_ac_freeipmi.m4 \ $(top_srcdir)/auxdir/x_ac_gpl_licensed.m4 \ $(top_srcdir)/auxdir/x_ac_hwloc.m4 \ $(top_srcdir)/auxdir/x_ac_iso.m4 \ $(top_srcdir)/auxdir/x_ac_json.m4 \ $(top_srcdir)/auxdir/x_ac_lua.m4 \ $(top_srcdir)/auxdir/x_ac_man2html.m4 \ $(top_srcdir)/auxdir/x_ac_munge.m4 \ $(top_srcdir)/auxdir/x_ac_ncurses.m4 \ $(top_srcdir)/auxdir/x_ac_netloc.m4 \ $(top_srcdir)/auxdir/x_ac_nrt.m4 \ $(top_srcdir)/auxdir/x_ac_ofed.m4 \ $(top_srcdir)/auxdir/x_ac_pam.m4 \ $(top_srcdir)/auxdir/x_ac_printf_null.m4 \ $(top_srcdir)/auxdir/x_ac_ptrace.m4 \ $(top_srcdir)/auxdir/x_ac_readline.m4 \ $(top_srcdir)/auxdir/x_ac_rrdtool.m4 \ $(top_srcdir)/auxdir/x_ac_setpgrp.m4 \ $(top_srcdir)/auxdir/x_ac_setproctitle.m4 \ 
$(top_srcdir)/auxdir/x_ac_sgi_job.m4 \ $(top_srcdir)/auxdir/x_ac_slurm_ssl.m4 \ $(top_srcdir)/auxdir/x_ac_sun_const.m4 \ $(top_srcdir)/configure.ac am__configure_deps = $(am__aclocal_m4_deps) $(CONFIGURE_DEPENDENCIES) \ $(ACLOCAL_M4) mkinstalldirs = $(install_sh) -d CONFIG_HEADER = $(top_builddir)/config.h $(top_builddir)/slurm/slurm.h CONFIG_CLEAN_FILES = CONFIG_CLEAN_VPATH_FILES = am__vpath_adj_setup = srcdirstrip=`echo "$(srcdir)" | sed 's|.|.|g'`; am__vpath_adj = case $$p in \ $(srcdir)/*) f=`echo "$$p" | sed "s|^$$srcdirstrip/||"`;; \ *) f=$$p;; \ esac; am__strip_dir = f=`echo $$p | sed -e 's|^.*/||'`; am__install_max = 40 am__nobase_strip_setup = \ srcdirstrip=`echo "$(srcdir)" | sed 's/[].[^$$\\*|]/\\\\&/g'` am__nobase_strip = \ for p in $$list; do echo "$$p"; done | sed -e "s|$$srcdirstrip/||" am__nobase_list = $(am__nobase_strip_setup); \ for p in $$list; do echo "$$p $$p"; done | \ sed "s| $$srcdirstrip/| |;"' / .*\//!s/ .*/ ./; s,\( .*\)/[^/]*$$,\1,' | \ $(AWK) 'BEGIN { files["."] = "" } { files[$$2] = files[$$2] " " $$1; \ if (++n[$$2] == $(am__install_max)) \ { print $$2, files[$$2]; n[$$2] = 0; files[$$2] = "" } } \ END { for (dir in files) print dir, files[dir] }' am__base_list = \ sed '$$!N;$$!N;$$!N;$$!N;$$!N;$$!N;$$!N;s/\n/ /g' | \ sed '$$!N;$$!N;$$!N;$$!N;s/\n/ /g' am__uninstall_files_from_dir = { \ test -z "$$files" \ || { test ! -d "$$dir" && test ! -f "$$dir" && test ! 
-r "$$dir"; } \ || { echo " ( cd '$$dir' && rm -f" $$files ")"; \ $(am__cd) "$$dir" && rm -f $$files; }; \ } am__installdirs = "$(DESTDIR)$(pkglibdir)" LTLIBRARIES = $(pkglib_LTLIBRARIES) @HAVE_PAM_TRUE@pam_slurm_adopt_la_DEPENDENCIES = \ @HAVE_PAM_TRUE@ $(top_builddir)/src/api/libslurm.la am__pam_slurm_adopt_la_SOURCES_DIST = pam_slurm_adopt.c helper.c \ helper.h @HAVE_PAM_TRUE@am_pam_slurm_adopt_la_OBJECTS = pam_slurm_adopt.lo \ @HAVE_PAM_TRUE@ helper.lo am__EXTRA_pam_slurm_adopt_la_SOURCES_DIST = pam_slurm_adopt.c helper.c pam_slurm_adopt_la_OBJECTS = $(am_pam_slurm_adopt_la_OBJECTS) AM_V_lt = $(am__v_lt_@AM_V@) am__v_lt_ = $(am__v_lt_@AM_DEFAULT_V@) am__v_lt_0 = --silent am__v_lt_1 = pam_slurm_adopt_la_LINK = $(LIBTOOL) $(AM_V_lt) --tag=CC \ $(AM_LIBTOOLFLAGS) $(LIBTOOLFLAGS) --mode=link $(CCLD) \ $(AM_CFLAGS) $(CFLAGS) $(pam_slurm_adopt_la_LDFLAGS) \ $(LDFLAGS) -o $@ @HAVE_PAM_TRUE@am_pam_slurm_adopt_la_rpath = -rpath $(pkglibdir) AM_V_P = $(am__v_P_@AM_V@) am__v_P_ = $(am__v_P_@AM_DEFAULT_V@) am__v_P_0 = false am__v_P_1 = : AM_V_GEN = $(am__v_GEN_@AM_V@) am__v_GEN_ = $(am__v_GEN_@AM_DEFAULT_V@) am__v_GEN_0 = @echo " GEN " $@; am__v_GEN_1 = AM_V_at = $(am__v_at_@AM_V@) am__v_at_ = $(am__v_at_@AM_DEFAULT_V@) am__v_at_0 = @ am__v_at_1 = DEFAULT_INCLUDES = -I.@am__isrc@ -I$(top_builddir) -I$(top_builddir)/slurm depcomp = $(SHELL) $(top_srcdir)/auxdir/depcomp am__depfiles_maybe = depfiles am__mv = mv -f COMPILE = $(CC) $(DEFS) $(DEFAULT_INCLUDES) $(INCLUDES) $(AM_CPPFLAGS) \ $(CPPFLAGS) $(AM_CFLAGS) $(CFLAGS) LTCOMPILE = $(LIBTOOL) $(AM_V_lt) --tag=CC $(AM_LIBTOOLFLAGS) \ $(LIBTOOLFLAGS) --mode=compile $(CC) $(DEFS) \ $(DEFAULT_INCLUDES) $(INCLUDES) $(AM_CPPFLAGS) $(CPPFLAGS) \ $(AM_CFLAGS) $(CFLAGS) AM_V_CC = $(am__v_CC_@AM_V@) am__v_CC_ = $(am__v_CC_@AM_DEFAULT_V@) am__v_CC_0 = @echo " CC " $@; am__v_CC_1 = CCLD = $(CC) LINK = $(LIBTOOL) $(AM_V_lt) --tag=CC $(AM_LIBTOOLFLAGS) \ $(LIBTOOLFLAGS) --mode=link $(CCLD) $(AM_CFLAGS) $(CFLAGS) \ $(AM_LDFLAGS) $(LDFLAGS) 
-o $@ AM_V_CCLD = $(am__v_CCLD_@AM_V@) am__v_CCLD_ = $(am__v_CCLD_@AM_DEFAULT_V@) am__v_CCLD_0 = @echo " CCLD " $@; am__v_CCLD_1 = SOURCES = $(pam_slurm_adopt_la_SOURCES) \ $(EXTRA_pam_slurm_adopt_la_SOURCES) DIST_SOURCES = $(am__pam_slurm_adopt_la_SOURCES_DIST) \ $(am__EXTRA_pam_slurm_adopt_la_SOURCES_DIST) am__can_run_installinfo = \ case $$AM_UPDATE_INFO_DIR in \ n|no|NO) false;; \ *) (install-info --version) >/dev/null 2>&1;; \ esac am__tagged_files = $(HEADERS) $(SOURCES) $(TAGS_FILES) $(LISP) # Read a list of newline-separated strings from the standard input, # and print each of them once, without duplicates. Input order is # *not* preserved. am__uniquify_input = $(AWK) '\ BEGIN { nonempty = 0; } \ { items[$$0] = 1; nonempty = 1; } \ END { if (nonempty) { for (i in items) print i; }; } \ ' # Make sure the list of sources is unique. This is necessary because, # e.g., the same source file might be shared among _SOURCES variables # for different programs/libraries. am__define_uniq_tagged_files = \ list='$(am__tagged_files)'; \ unique=`for i in $$list; do \ if test -f "$$i"; then echo $$i; else echo $(srcdir)/$$i; fi; \ done | $(am__uniquify_input)` ETAGS = etags CTAGS = ctags DISTFILES = $(DIST_COMMON) $(DIST_SOURCES) $(TEXINFOS) $(EXTRA_DIST) pkglibdir = $(PAM_DIR) ACLOCAL = @ACLOCAL@ AMTAR = @AMTAR@ AM_DEFAULT_VERBOSITY = @AM_DEFAULT_VERBOSITY@ AR = @AR@ AUTHD_CFLAGS = @AUTHD_CFLAGS@ AUTHD_LIBS = @AUTHD_LIBS@ AUTOCONF = @AUTOCONF@ AUTOHEADER = @AUTOHEADER@ AUTOMAKE = @AUTOMAKE@ AWK = @AWK@ BGL_LOADED = @BGL_LOADED@ BGQ_LOADED = @BGQ_LOADED@ BG_INCLUDES = @BG_INCLUDES@ BG_LDFLAGS = @BG_LDFLAGS@ BG_L_P_LOADED = @BG_L_P_LOADED@ BLCR_CPPFLAGS = @BLCR_CPPFLAGS@ BLCR_HOME = @BLCR_HOME@ BLCR_LDFLAGS = @BLCR_LDFLAGS@ BLCR_LIBS = @BLCR_LIBS@ BLUEGENE_LOADED = @BLUEGENE_LOADED@ CC = @CC@ CCDEPMODE = @CCDEPMODE@ CFLAGS = @CFLAGS@ CHECK_CFLAGS = @CHECK_CFLAGS@ CHECK_LIBS = @CHECK_LIBS@ CMD_LDFLAGS = @CMD_LDFLAGS@ CPP = @CPP@ CPPFLAGS = @CPPFLAGS@ CRAY_JOB_CPPFLAGS = 
@CRAY_JOB_CPPFLAGS@ CRAY_JOB_LDFLAGS = @CRAY_JOB_LDFLAGS@ CRAY_SELECT_CPPFLAGS = @CRAY_SELECT_CPPFLAGS@ CRAY_SELECT_LDFLAGS = @CRAY_SELECT_LDFLAGS@ CRAY_SWITCH_CPPFLAGS = @CRAY_SWITCH_CPPFLAGS@ CRAY_SWITCH_LDFLAGS = @CRAY_SWITCH_LDFLAGS@ CRAY_TASK_CPPFLAGS = @CRAY_TASK_CPPFLAGS@ CRAY_TASK_LDFLAGS = @CRAY_TASK_LDFLAGS@ CXX = @CXX@ CXXCPP = @CXXCPP@ CXXDEPMODE = @CXXDEPMODE@ CXXFLAGS = @CXXFLAGS@ CYGPATH_W = @CYGPATH_W@ DATAWARP_CPPFLAGS = @DATAWARP_CPPFLAGS@ DATAWARP_LDFLAGS = @DATAWARP_LDFLAGS@ DEFS = @DEFS@ DEPDIR = @DEPDIR@ DLLTOOL = @DLLTOOL@ DL_LIBS = @DL_LIBS@ DSYMUTIL = @DSYMUTIL@ DUMPBIN = @DUMPBIN@ ECHO_C = @ECHO_C@ ECHO_N = @ECHO_N@ ECHO_T = @ECHO_T@ EGREP = @EGREP@ EXEEXT = @EXEEXT@ FGREP = @FGREP@ FREEIPMI_CPPFLAGS = @FREEIPMI_CPPFLAGS@ FREEIPMI_LDFLAGS = @FREEIPMI_LDFLAGS@ FREEIPMI_LIBS = @FREEIPMI_LIBS@ GLIB_CFLAGS = @GLIB_CFLAGS@ GLIB_COMPILE_RESOURCES = @GLIB_COMPILE_RESOURCES@ GLIB_GENMARSHAL = @GLIB_GENMARSHAL@ GLIB_LIBS = @GLIB_LIBS@ GLIB_MKENUMS = @GLIB_MKENUMS@ GOBJECT_QUERY = @GOBJECT_QUERY@ GREP = @GREP@ GTK_CFLAGS = @GTK_CFLAGS@ GTK_LIBS = @GTK_LIBS@ H5CC = @H5CC@ H5FC = @H5FC@ HAVEMYSQLCONFIG = @HAVEMYSQLCONFIG@ HAVE_AIX = @HAVE_AIX@ HAVE_MAN2HTML = @HAVE_MAN2HTML@ HAVE_NRT = @HAVE_NRT@ HAVE_OPENSSL = @HAVE_OPENSSL@ HAVE_SOME_CURSES = @HAVE_SOME_CURSES@ HDF5_CC = @HDF5_CC@ HDF5_CFLAGS = @HDF5_CFLAGS@ HDF5_CPPFLAGS = @HDF5_CPPFLAGS@ HDF5_FC = @HDF5_FC@ HDF5_FFLAGS = @HDF5_FFLAGS@ HDF5_FLIBS = @HDF5_FLIBS@ HDF5_LDFLAGS = @HDF5_LDFLAGS@ HDF5_LIBS = @HDF5_LIBS@ HDF5_VERSION = @HDF5_VERSION@ HWLOC_CPPFLAGS = @HWLOC_CPPFLAGS@ HWLOC_LDFLAGS = @HWLOC_LDFLAGS@ HWLOC_LIBS = @HWLOC_LIBS@ INSTALL = @INSTALL@ INSTALL_DATA = @INSTALL_DATA@ INSTALL_PROGRAM = @INSTALL_PROGRAM@ INSTALL_SCRIPT = @INSTALL_SCRIPT@ INSTALL_STRIP_PROGRAM = @INSTALL_STRIP_PROGRAM@ JSON_CPPFLAGS = @JSON_CPPFLAGS@ JSON_LDFLAGS = @JSON_LDFLAGS@ LD = @LD@ LDFLAGS = @LDFLAGS@ LIBCURL = @LIBCURL@ LIBCURL_CPPFLAGS = @LIBCURL_CPPFLAGS@ LIBOBJS = @LIBOBJS@ LIBS = @LIBS@ LIBTOOL = @LIBTOOL@ 
LIB_LDFLAGS = @LIB_LDFLAGS@ LIPO = @LIPO@ LN_S = @LN_S@ LTLIBOBJS = @LTLIBOBJS@ MAINT = @MAINT@ MAKEINFO = @MAKEINFO@ MANIFEST_TOOL = @MANIFEST_TOOL@ MKDIR_P = @MKDIR_P@ MUNGE_CPPFLAGS = @MUNGE_CPPFLAGS@ MUNGE_DIR = @MUNGE_DIR@ MUNGE_LDFLAGS = @MUNGE_LDFLAGS@ MUNGE_LIBS = @MUNGE_LIBS@ MYSQL_CFLAGS = @MYSQL_CFLAGS@ MYSQL_LIBS = @MYSQL_LIBS@ NCURSES = @NCURSES@ NETLOC_CPPFLAGS = @NETLOC_CPPFLAGS@ NETLOC_LDFLAGS = @NETLOC_LDFLAGS@ NETLOC_LIBS = @NETLOC_LIBS@ NM = @NM@ NMEDIT = @NMEDIT@ NRT_CPPFLAGS = @NRT_CPPFLAGS@ NUMA_LIBS = @NUMA_LIBS@ OBJDUMP = @OBJDUMP@ OBJEXT = @OBJEXT@ OFED_CPPFLAGS = @OFED_CPPFLAGS@ OFED_LDFLAGS = @OFED_LDFLAGS@ OFED_LIBS = @OFED_LIBS@ OTOOL = @OTOOL@ OTOOL64 = @OTOOL64@ PACKAGE = @PACKAGE@ PACKAGE_BUGREPORT = @PACKAGE_BUGREPORT@ PACKAGE_NAME = @PACKAGE_NAME@ PACKAGE_STRING = @PACKAGE_STRING@ PACKAGE_TARNAME = @PACKAGE_TARNAME@ PACKAGE_URL = @PACKAGE_URL@ PACKAGE_VERSION = @PACKAGE_VERSION@ PAM_DIR = @PAM_DIR@ PAM_LIBS = @PAM_LIBS@ PATH_SEPARATOR = @PATH_SEPARATOR@ PKG_CONFIG = @PKG_CONFIG@ PKG_CONFIG_LIBDIR = @PKG_CONFIG_LIBDIR@ PKG_CONFIG_PATH = @PKG_CONFIG_PATH@ PROCTRACKDIR = @PROCTRACKDIR@ PROJECT = @PROJECT@ PTHREAD_CC = @PTHREAD_CC@ PTHREAD_CFLAGS = @PTHREAD_CFLAGS@ PTHREAD_LIBS = @PTHREAD_LIBS@ RANLIB = @RANLIB@ READLINE_LIBS = @READLINE_LIBS@ REAL_BGQ_LOADED = @REAL_BGQ_LOADED@ REAL_BG_L_P_LOADED = @REAL_BG_L_P_LOADED@ RELEASE = @RELEASE@ RRDTOOL_CPPFLAGS = @RRDTOOL_CPPFLAGS@ RRDTOOL_LDFLAGS = @RRDTOOL_LDFLAGS@ RRDTOOL_LIBS = @RRDTOOL_LIBS@ RUNJOB_LDFLAGS = @RUNJOB_LDFLAGS@ SED = @SED@ SEMAPHORE_LIBS = @SEMAPHORE_LIBS@ SEMAPHORE_SOURCES = @SEMAPHORE_SOURCES@ SET_MAKE = @SET_MAKE@ SHELL = @SHELL@ SLEEP_CMD = @SLEEP_CMD@ SLURMCTLD_PORT = @SLURMCTLD_PORT@ SLURMCTLD_PORT_COUNT = @SLURMCTLD_PORT_COUNT@ SLURMDBD_PORT = @SLURMDBD_PORT@ SLURMD_PORT = @SLURMD_PORT@ SLURM_API_AGE = @SLURM_API_AGE@ SLURM_API_CURRENT = @SLURM_API_CURRENT@ SLURM_API_MAJOR = @SLURM_API_MAJOR@ SLURM_API_REVISION = @SLURM_API_REVISION@ SLURM_API_VERSION = 
@SLURM_API_VERSION@ SLURM_MAJOR = @SLURM_MAJOR@ SLURM_MICRO = @SLURM_MICRO@ SLURM_MINOR = @SLURM_MINOR@ SLURM_PREFIX = @SLURM_PREFIX@ SLURM_VERSION_NUMBER = @SLURM_VERSION_NUMBER@ SLURM_VERSION_STRING = @SLURM_VERSION_STRING@ SO_LDFLAGS = @SO_LDFLAGS@ SSL_CPPFLAGS = @SSL_CPPFLAGS@ SSL_LDFLAGS = @SSL_LDFLAGS@ SSL_LIBS = @SSL_LIBS@ STRIP = @STRIP@ SUCMD = @SUCMD@ UTIL_LIBS = @UTIL_LIBS@ VERSION = @VERSION@ _libcurl_config = @_libcurl_config@ abs_builddir = @abs_builddir@ abs_srcdir = @abs_srcdir@ abs_top_builddir = @abs_top_builddir@ abs_top_srcdir = @abs_top_srcdir@ ac_ct_AR = @ac_ct_AR@ ac_ct_CC = @ac_ct_CC@ ac_ct_CXX = @ac_ct_CXX@ ac_ct_DUMPBIN = @ac_ct_DUMPBIN@ ac_have_man2html = @ac_have_man2html@ am__include = @am__include@ am__leading_dot = @am__leading_dot@ am__quote = @am__quote@ am__tar = @am__tar@ am__untar = @am__untar@ ax_pthread_config = @ax_pthread_config@ bindir = @bindir@ build = @build@ build_alias = @build_alias@ build_cpu = @build_cpu@ build_os = @build_os@ build_vendor = @build_vendor@ builddir = @builddir@ datadir = @datadir@ datarootdir = @datarootdir@ docdir = @docdir@ dvidir = @dvidir@ exec_prefix = @exec_prefix@ host = @host@ host_alias = @host_alias@ host_cpu = @host_cpu@ host_os = @host_os@ host_vendor = @host_vendor@ htmldir = @htmldir@ includedir = @includedir@ infodir = @infodir@ install_sh = @install_sh@ libdir = @libdir@ libexecdir = @libexecdir@ localedir = @localedir@ localstatedir = @localstatedir@ lua_CFLAGS = @lua_CFLAGS@ lua_LIBS = @lua_LIBS@ mandir = @mandir@ mkdir_p = @mkdir_p@ oldincludedir = @oldincludedir@ pdfdir = @pdfdir@ prefix = @prefix@ program_transform_name = @program_transform_name@ psdir = @psdir@ sbindir = @sbindir@ sharedstatedir = @sharedstatedir@ srcdir = @srcdir@ sysconfdir = @sysconfdir@ target = @target@ target_alias = @target_alias@ target_cpu = @target_cpu@ target_os = @target_os@ target_vendor = @target_vendor@ top_build_prefix = @top_build_prefix@ top_builddir = @top_builddir@ top_srcdir = @top_srcdir@ 
AUTOMAKE_OPTIONS = foreign AM_CPPFLAGS = -fPIC -I$(top_srcdir) -I$(top_srcdir)/src/common # -DLIBSLURM_SO=\"$(libdir)/libslurm.so\" PLUGIN_FLAGS = -module --export-dynamic -avoid-version @HAVE_PAM_FALSE@pam_lib = @HAVE_PAM_TRUE@pam_lib = pam_slurm_adopt.la pkglib_LTLIBRARIES = $(pam_lib) @HAVE_PAM_TRUE@current = $(SLURM_API_CURRENT) @HAVE_PAM_TRUE@age = $(SLURM_API_AGE) @HAVE_PAM_TRUE@rev = $(SLURM_API_REVISION) @HAVE_PAM_TRUE@pam_slurm_adopt_la_SOURCES = pam_slurm_adopt.c helper.c helper.h @HAVE_PAM_TRUE@pam_slurm_adopt_la_LIBADD = $(top_builddir)/src/api/libslurm.la @HAVE_PAM_TRUE@pam_slurm_adopt_la_LDFLAGS = $(SO_LDFLAGS) $(PLUGIN_FLAGS) $(LIB_LDFLAGS) # Don't specify basename or version.map files in src/api will not be built # @cd `dirname $@` && $(MAKE) `basename $@` @HAVE_PAM_FALSE@EXTRA_pam_slurm_adopt_la_SOURCES = pam_slurm_adopt.c helper.c all: all-am .SUFFIXES: .SUFFIXES: .c .lo .o .obj $(srcdir)/Makefile.in: @MAINTAINER_MODE_TRUE@ $(srcdir)/Makefile.am $(am__configure_deps) @for dep in $?; do \ case '$(am__configure_deps)' in \ *$$dep*) \ ( cd $(top_builddir) && $(MAKE) $(AM_MAKEFLAGS) am--refresh ) \ && { if test -f $@; then exit 0; else break; fi; }; \ exit 1;; \ esac; \ done; \ echo ' cd $(top_srcdir) && $(AUTOMAKE) --foreign contribs/pam_slurm_adopt/Makefile'; \ $(am__cd) $(top_srcdir) && \ $(AUTOMAKE) --foreign contribs/pam_slurm_adopt/Makefile .PRECIOUS: Makefile Makefile: $(srcdir)/Makefile.in $(top_builddir)/config.status @case '$?' 
in \ *config.status*) \ cd $(top_builddir) && $(MAKE) $(AM_MAKEFLAGS) am--refresh;; \ *) \ echo ' cd $(top_builddir) && $(SHELL) ./config.status $(subdir)/$@ $(am__depfiles_maybe)'; \ cd $(top_builddir) && $(SHELL) ./config.status $(subdir)/$@ $(am__depfiles_maybe);; \ esac; $(top_builddir)/config.status: $(top_srcdir)/configure $(CONFIG_STATUS_DEPENDENCIES) cd $(top_builddir) && $(MAKE) $(AM_MAKEFLAGS) am--refresh $(top_srcdir)/configure: @MAINTAINER_MODE_TRUE@ $(am__configure_deps) cd $(top_builddir) && $(MAKE) $(AM_MAKEFLAGS) am--refresh $(ACLOCAL_M4): @MAINTAINER_MODE_TRUE@ $(am__aclocal_m4_deps) cd $(top_builddir) && $(MAKE) $(AM_MAKEFLAGS) am--refresh $(am__aclocal_m4_deps): install-pkglibLTLIBRARIES: $(pkglib_LTLIBRARIES) @$(NORMAL_INSTALL) @list='$(pkglib_LTLIBRARIES)'; test -n "$(pkglibdir)" || list=; \ list2=; for p in $$list; do \ if test -f $$p; then \ list2="$$list2 $$p"; \ else :; fi; \ done; \ test -z "$$list2" || { \ echo " $(MKDIR_P) '$(DESTDIR)$(pkglibdir)'"; \ $(MKDIR_P) "$(DESTDIR)$(pkglibdir)" || exit 1; \ echo " $(LIBTOOL) $(AM_LIBTOOLFLAGS) $(LIBTOOLFLAGS) --mode=install $(INSTALL) $(INSTALL_STRIP_FLAG) $$list2 '$(DESTDIR)$(pkglibdir)'"; \ $(LIBTOOL) $(AM_LIBTOOLFLAGS) $(LIBTOOLFLAGS) --mode=install $(INSTALL) $(INSTALL_STRIP_FLAG) $$list2 "$(DESTDIR)$(pkglibdir)"; \ } uninstall-pkglibLTLIBRARIES: @$(NORMAL_UNINSTALL) @list='$(pkglib_LTLIBRARIES)'; test -n "$(pkglibdir)" || list=; \ for p in $$list; do \ $(am__strip_dir) \ echo " $(LIBTOOL) $(AM_LIBTOOLFLAGS) $(LIBTOOLFLAGS) --mode=uninstall rm -f '$(DESTDIR)$(pkglibdir)/$$f'"; \ $(LIBTOOL) $(AM_LIBTOOLFLAGS) $(LIBTOOLFLAGS) --mode=uninstall rm -f "$(DESTDIR)$(pkglibdir)/$$f"; \ done clean-pkglibLTLIBRARIES: -test -z "$(pkglib_LTLIBRARIES)" || rm -f $(pkglib_LTLIBRARIES) @list='$(pkglib_LTLIBRARIES)'; \ locs=`for p in $$list; do echo $$p; done | \ sed 's|^[^/]*$$|.|; s|/[^/]*$$||; s|$$|/so_locations|' | \ sort -u`; \ test -z "$$locs" || { \ echo rm -f $${locs}; \ rm -f $${locs}; \ } 
pam_slurm_adopt.la: $(pam_slurm_adopt_la_OBJECTS) $(pam_slurm_adopt_la_DEPENDENCIES) $(EXTRA_pam_slurm_adopt_la_DEPENDENCIES) $(AM_V_CCLD)$(pam_slurm_adopt_la_LINK) $(am_pam_slurm_adopt_la_rpath) $(pam_slurm_adopt_la_OBJECTS) $(pam_slurm_adopt_la_LIBADD) $(LIBS) mostlyclean-compile: -rm -f *.$(OBJEXT) distclean-compile: -rm -f *.tab.c @AMDEP_TRUE@@am__include@ @am__quote@./$(DEPDIR)/helper.Plo@am__quote@ @AMDEP_TRUE@@am__include@ @am__quote@./$(DEPDIR)/pam_slurm_adopt.Plo@am__quote@ .c.o: @am__fastdepCC_TRUE@ $(AM_V_CC)$(COMPILE) -MT $@ -MD -MP -MF $(DEPDIR)/$*.Tpo -c -o $@ $< @am__fastdepCC_TRUE@ $(AM_V_at)$(am__mv) $(DEPDIR)/$*.Tpo $(DEPDIR)/$*.Po @AMDEP_TRUE@@am__fastdepCC_FALSE@ $(AM_V_CC)source='$<' object='$@' libtool=no @AMDEPBACKSLASH@ @AMDEP_TRUE@@am__fastdepCC_FALSE@ DEPDIR=$(DEPDIR) $(CCDEPMODE) $(depcomp) @AMDEPBACKSLASH@ @am__fastdepCC_FALSE@ $(AM_V_CC@am__nodep@)$(COMPILE) -c -o $@ $< .c.obj: @am__fastdepCC_TRUE@ $(AM_V_CC)$(COMPILE) -MT $@ -MD -MP -MF $(DEPDIR)/$*.Tpo -c -o $@ `$(CYGPATH_W) '$<'` @am__fastdepCC_TRUE@ $(AM_V_at)$(am__mv) $(DEPDIR)/$*.Tpo $(DEPDIR)/$*.Po @AMDEP_TRUE@@am__fastdepCC_FALSE@ $(AM_V_CC)source='$<' object='$@' libtool=no @AMDEPBACKSLASH@ @AMDEP_TRUE@@am__fastdepCC_FALSE@ DEPDIR=$(DEPDIR) $(CCDEPMODE) $(depcomp) @AMDEPBACKSLASH@ @am__fastdepCC_FALSE@ $(AM_V_CC@am__nodep@)$(COMPILE) -c -o $@ `$(CYGPATH_W) '$<'` .c.lo: @am__fastdepCC_TRUE@ $(AM_V_CC)$(LTCOMPILE) -MT $@ -MD -MP -MF $(DEPDIR)/$*.Tpo -c -o $@ $< @am__fastdepCC_TRUE@ $(AM_V_at)$(am__mv) $(DEPDIR)/$*.Tpo $(DEPDIR)/$*.Plo @AMDEP_TRUE@@am__fastdepCC_FALSE@ $(AM_V_CC)source='$<' object='$@' libtool=yes @AMDEPBACKSLASH@ @AMDEP_TRUE@@am__fastdepCC_FALSE@ DEPDIR=$(DEPDIR) $(CCDEPMODE) $(depcomp) @AMDEPBACKSLASH@ @am__fastdepCC_FALSE@ $(AM_V_CC@am__nodep@)$(LTCOMPILE) -c -o $@ $< mostlyclean-libtool: -rm -f *.lo clean-libtool: -rm -rf .libs _libs ID: $(am__tagged_files) $(am__define_uniq_tagged_files); mkid -fID $$unique tags: tags-am TAGS: tags tags-am: 
$(TAGS_DEPENDENCIES) $(am__tagged_files) set x; \ here=`pwd`; \ $(am__define_uniq_tagged_files); \ shift; \ if test -z "$(ETAGS_ARGS)$$*$$unique"; then :; else \ test -n "$$unique" || unique=$$empty_fix; \ if test $$# -gt 0; then \ $(ETAGS) $(ETAGSFLAGS) $(AM_ETAGSFLAGS) $(ETAGS_ARGS) \ "$$@" $$unique; \ else \ $(ETAGS) $(ETAGSFLAGS) $(AM_ETAGSFLAGS) $(ETAGS_ARGS) \ $$unique; \ fi; \ fi ctags: ctags-am CTAGS: ctags ctags-am: $(TAGS_DEPENDENCIES) $(am__tagged_files) $(am__define_uniq_tagged_files); \ test -z "$(CTAGS_ARGS)$$unique" \ || $(CTAGS) $(CTAGSFLAGS) $(AM_CTAGSFLAGS) $(CTAGS_ARGS) \ $$unique GTAGS: here=`$(am__cd) $(top_builddir) && pwd` \ && $(am__cd) $(top_srcdir) \ && gtags -i $(GTAGS_ARGS) "$$here" cscopelist: cscopelist-am cscopelist-am: $(am__tagged_files) list='$(am__tagged_files)'; \ case "$(srcdir)" in \ [\\/]* | ?:[\\/]*) sdir="$(srcdir)" ;; \ *) sdir=$(subdir)/$(srcdir) ;; \ esac; \ for i in $$list; do \ if test -f "$$i"; then \ echo "$(subdir)/$$i"; \ else \ echo "$$sdir/$$i"; \ fi; \ done >> $(top_builddir)/cscope.files distclean-tags: -rm -f TAGS ID GTAGS GRTAGS GSYMS GPATH tags distdir: $(DISTFILES) @srcdirstrip=`echo "$(srcdir)" | sed 's/[].[^$$\\*]/\\\\&/g'`; \ topsrcdirstrip=`echo "$(top_srcdir)" | sed 's/[].[^$$\\*]/\\\\&/g'`; \ list='$(DISTFILES)'; \ dist_files=`for file in $$list; do echo $$file; done | \ sed -e "s|^$$srcdirstrip/||;t" \ -e "s|^$$topsrcdirstrip/|$(top_builddir)/|;t"`; \ case $$dist_files in \ */*) $(MKDIR_P) `echo "$$dist_files" | \ sed '/\//!d;s|^|$(distdir)/|;s,/[^/]*$$,,' | \ sort -u` ;; \ esac; \ for file in $$dist_files; do \ if test -f $$file || test -d $$file; then d=.; else d=$(srcdir); fi; \ if test -d $$d/$$file; then \ dir=`echo "/$$file" | sed -e 's,/[^/]*$$,,'`; \ if test -d "$(distdir)/$$file"; then \ find "$(distdir)/$$file" -type d ! 
-perm -700 -exec chmod u+rwx {} \;; \ fi; \ if test -d $(srcdir)/$$file && test $$d != $(srcdir); then \ cp -fpR $(srcdir)/$$file "$(distdir)$$dir" || exit 1; \ find "$(distdir)/$$file" -type d ! -perm -700 -exec chmod u+rwx {} \;; \ fi; \ cp -fpR $$d/$$file "$(distdir)$$dir" || exit 1; \ else \ test -f "$(distdir)/$$file" \ || cp -p $$d/$$file "$(distdir)/$$file" \ || exit 1; \ fi; \ done check-am: all-am check: check-am all-am: Makefile $(LTLIBRARIES) installdirs: for dir in "$(DESTDIR)$(pkglibdir)"; do \ test -z "$$dir" || $(MKDIR_P) "$$dir"; \ done install: install-am install-exec: install-exec-am install-data: install-data-am uninstall: uninstall-am install-am: all-am @$(MAKE) $(AM_MAKEFLAGS) install-exec-am install-data-am installcheck: installcheck-am install-strip: if test -z '$(STRIP)'; then \ $(MAKE) $(AM_MAKEFLAGS) INSTALL_PROGRAM="$(INSTALL_STRIP_PROGRAM)" \ install_sh_PROGRAM="$(INSTALL_STRIP_PROGRAM)" INSTALL_STRIP_FLAG=-s \ install; \ else \ $(MAKE) $(AM_MAKEFLAGS) INSTALL_PROGRAM="$(INSTALL_STRIP_PROGRAM)" \ install_sh_PROGRAM="$(INSTALL_STRIP_PROGRAM)" INSTALL_STRIP_FLAG=-s \ "INSTALL_PROGRAM_ENV=STRIPPROG='$(STRIP)'" install; \ fi mostlyclean-generic: clean-generic: distclean-generic: -test -z "$(CONFIG_CLEAN_FILES)" || rm -f $(CONFIG_CLEAN_FILES) -test . = "$(srcdir)" || test -z "$(CONFIG_CLEAN_VPATH_FILES)" || rm -f $(CONFIG_CLEAN_VPATH_FILES) maintainer-clean-generic: @echo "This command is intended for maintainers to use" @echo "it deletes files that may require special tools to rebuild." 
clean: clean-am clean-am: clean-generic clean-libtool clean-pkglibLTLIBRARIES \ mostlyclean-am distclean: distclean-am -rm -rf ./$(DEPDIR) -rm -f Makefile distclean-am: clean-am distclean-compile distclean-generic \ distclean-tags dvi: dvi-am dvi-am: html: html-am html-am: info: info-am info-am: install-data-am: install-dvi: install-dvi-am install-dvi-am: install-exec-am: install-pkglibLTLIBRARIES install-html: install-html-am install-html-am: install-info: install-info-am install-info-am: install-man: install-pdf: install-pdf-am install-pdf-am: install-ps: install-ps-am install-ps-am: installcheck-am: maintainer-clean: maintainer-clean-am -rm -rf ./$(DEPDIR) -rm -f Makefile maintainer-clean-am: distclean-am maintainer-clean-generic mostlyclean: mostlyclean-am mostlyclean-am: mostlyclean-compile mostlyclean-generic \ mostlyclean-libtool pdf: pdf-am pdf-am: ps: ps-am ps-am: uninstall-am: uninstall-pkglibLTLIBRARIES .MAKE: install-am install-strip .PHONY: CTAGS GTAGS TAGS all all-am check check-am clean clean-generic \ clean-libtool clean-pkglibLTLIBRARIES cscopelist-am ctags \ ctags-am distclean distclean-compile distclean-generic \ distclean-libtool distclean-tags distdir dvi dvi-am html \ html-am info info-am install install-am install-data \ install-data-am install-dvi install-dvi-am install-exec \ install-exec-am install-html install-html-am install-info \ install-info-am install-man install-pdf install-pdf-am \ install-pkglibLTLIBRARIES install-ps install-ps-am \ install-strip installcheck installcheck-am installdirs \ maintainer-clean maintainer-clean-generic mostlyclean \ mostlyclean-compile mostlyclean-generic mostlyclean-libtool \ pdf pdf-am ps ps-am tags tags-am uninstall uninstall-am \ uninstall-pkglibLTLIBRARIES @HAVE_PAM_TRUE@force: @HAVE_PAM_TRUE@$(pam_slurm_adopt_la_LIBADD) : force @HAVE_PAM_TRUE@ @cd `dirname $@` && $(MAKE) # Tell versions [3.59,3.63) of GNU make to not export all variables. 
# Otherwise a system limit (for SysV at least) may be exceeded. .NOEXPORT: slurm-slurm-15-08-7-1/contribs/pam_slurm_adopt/README000066400000000000000000000155771265000126300222350ustar00rootroot00000000000000NAME pam_slurm_adopt.so SYNOPSIS Adopt incoming connections into jobs AUTHOR Ryan Cox MODULE TYPES PROVIDED account DESCRIPTION This module attempts to determine the job which originated this connection. The module is configurable; these are the default steps: 1) Check the local stepd for a count of jobs owned by the non-root user a) If none, deny (option action_no_jobs) b) If only one, adopt the process into that job c) If multiple, continue 2) Determine src/dst IP/port of socket 3) Issue callerid RPC to slurmd at IP address of source a) If the remote slurmd can identify the source job, adopt into that job b) If not, continue 4) Pick a random local job from the user to adopt into (option action_unknown) Jobs are adopted into a job's allocation step. MODULE OPTIONS This module has the following options (* = default): ignore_root - By default, all root connections are ignored. If the RPC is sent to a node which drops packets to the slurmd port, the RPC will block for some time before failing. This is unlikely to be desirable. Likewise, root may be trying to administer the system and not do work that should be in a job. The job may trigger oom-killer or just exit. If root restarts a service or similar, it will be tracked and killed by Slurm when the job exits. This sounds bad because it is bad. 1* = Let the connection through without adoption 0 = I am crazy. I want random services to die when root jobs exit. I also like it when RPCs block for a while then time out. action_no_jobs - The action to perform if the user has no jobs on the node ignore = Do nothing. Fall through to the next pam module deny* = Deny the connection action_unknown - The action to perform when the user has multiple jobs on the node *and* the RPC does not locate the source job. 
If the RPC mechanism works properly in your environment, this option will
        likely be relevant *only* when connecting from a login node.
    newest* = Pick the newest job on the node. The "newest" job is chosen
        based on the mtime of the job's step_extern cgroup; asking Slurm
        would require an RPC to the controller. The user can ssh in but may
        be adopted into a job that exits earlier than the job they intended
        to check on. The ssh connection will at least be subject to
        appropriate limits and the user can be informed of better ways to
        accomplish their objectives if this becomes a problem.
    allow = Let the connection through without adoption
    deny = Deny the connection

action_adopt_failure - The action to perform if the process is unable to be
        adopted into any job for whatever reason. If the process cannot be
        adopted into the job identified by the callerid RPC, it will fall
        through to the action_unknown code and try to adopt there. A failure
        at that point or if there is only one job will result in this action
        being taken.
    allow* = Let the connection through without adoption
    deny = Deny the connection

action_generic_failure - The action to perform if there are certain
        failures, such as the inability to talk to the local slurmd or a
        kernel that doesn't offer the correct facilities.
    ignore* = Do nothing. Fall through to the next pam module
    allow = Let the connection through without adoption
    deny = Deny the connection

log_level - See SlurmdDebug in slurm.conf(5) for available options.
    The default log_level is info.

SLURM.CONF CONFIGURATION

PrologFlags=contain must be set in slurm.conf. This sets up the "extern"
step into which ssh-launched processes will be adopted.

**** IMPORTANT ****
PrologFlags=contain must be in place *before* using this module. The module
bases its checks on local steps that have already been launched. If the user
has no steps on the node, such as the extern step, the module will assume
that the user has no jobs allocated to the node.
Depending on your configuration of the pam module, you might deny *all* user
ssh attempts.

NOTES

This module and the related RPC currently support Linux systems which have
network connection information available through /proc/net/tcp{,6}. A
process's sockets must exist as symlinks in its /proc/self/fd directory. The
RPC data structure itself is OS-agnostic. If support is desired for a
different OS, relevant code must be added to find one's socket information,
then match that information on the remote end to a particular process which
Slurm is tracking.

IPv6 is supported by the RPC data structure itself and the code which sends
and receives it. Sending the RPC to an IPv6 address is not currently
supported by Slurm. Once support is added, remove the relevant check in
slurm_network_callerid().

For the action_unknown=newest setting to work, the memory cgroup must be in
use so that the code can check mtimes of cgroup directories. If you would
prefer to use a different subsystem, modify the _indeterminate_multiple
function.

FIREWALLS, IP ADDRESSES, ETC.

slurmd should be accessible on any IP address from which a user might launch
ssh. The RPC to determine the source job must be able to reach the slurmd
port on that particular IP address. If there is no slurmd on the source
node, such as on a login node, it is better to have the RPC be rejected
rather than silently dropped. This will allow better responsiveness to the
RPC initiator.

EXAMPLES / SUGGESTED USAGE

Use of this module is recommended on any compute node. Add the following
line to the appropriate file in /etc/pam.d, such as system-auth or sshd:

    account    sufficient    pam_slurm_adopt.so

If you always want to allow access for an administrative group (e.g. wheel),
stack the pam_access module after pam_slurm_adopt. A success with
pam_slurm_adopt is sufficient to allow access but the pam_access module can
allow others, such as staff, access even without jobs.
    account    sufficient    pam_slurm_adopt.so
    account    required      pam_access.so

Then edit the pam_access configuration file (/etc/security/access.conf):

    +:wheel:ALL
    -:ALL:ALL

When access is denied, the user will receive a relevant error message.

pam_systemd.so is known to not play nice with Slurm's usage of cgroups. It
is recommended that you disable it or possibly add pam_slurm_adopt.so after
pam_systemd.so.
slurm-slurm-15-08-7-1/contribs/pam_slurm_adopt/helper.c000066400000000000000000000134161265000126300227660ustar00rootroot00000000000000/*****************************************************************************\
 *  $Id$
 *****************************************************************************
 *
 *  Useful portions extracted from pam_slurm.c by Ryan Cox
 *
 *  Copyright (C) 2002-2007 The Regents of the University of California.
 *  Copyright (C) 2008-2009 Lawrence Livermore National Security.
 *  Produced at Lawrence Livermore National Laboratory (cf, DISCLAIMER).
 *  UCRL-CODE-2002-040.
 *
 *  Written by Chris Dunlap
 *  and Jim Garlick
 *  modified for SLURM by Moe Jette .
 *
 *  This file is part of pam_slurm, a PAM module for restricting access to
 *  the compute nodes within a cluster based on information obtained from
 *  Simple Linux Utility for Resource Management (SLURM). For details, see
 *  .
 *
 *  pam_slurm is free software; you can redistribute it and/or modify it
 *  under the terms of the GNU General Public License as published by the
 *  Free Software Foundation; either version 2 of the License, or (at your
 *  option) any later version.
 *
 *  pam_slurm is distributed in the hope that it will be useful, but WITHOUT
 *  ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or
 *  FITNESS FOR A PARTICULAR PURPOSE. See the GNU General Public License
 *  for more details.
 *
 *  You should have received a copy of the GNU General Public License along
 *  with pam_slurm; if not, write to the Free Software Foundation, Inc.,
 *  51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA.
\*****************************************************************************/

#ifndef PAM_MODULE_NAME
# define PAM_MODULE_NAME "pam_slurm_adopt"
#endif

#if HAVE_CONFIG_H
# include "config.h"
#endif

#include
#include
#include
#include
#include
#include
#include
#include
#include
#include
#include
#include
#include "slurm/slurm.h"
#include "src/common/slurm_xlator.h"

/* Define the externally visible functions in this file. */
#define PAM_SM_ACCOUNT
#include
#include

/* Define the functions to be called before and after load since _init
 * and _fini are obsolete, and their use can lead to unpredictable
 * results.
 */
void __attribute__ ((constructor)) libpam_slurm_init(void);
void __attribute__ ((destructor)) libpam_slurm_fini(void);

/*
 * Handle for libslurm.so
 *
 * We open libslurm.so via dlopen () in order to pass the
 * flag RTLD_GLOBAL so that subsequently loaded modules have
 * access to libslurm symbols. This is pretty much only needed
 * for dynamically loaded modules that would otherwise be
 * linked against libslurm.
 *
 */
static void * slurm_h = NULL;

/* This function is necessary because libpam_slurm_init is called without access
 * to the pam handle. */
static void _log_msg(int level, const char *format, ...)
{
	va_list args;

	openlog(PAM_MODULE_NAME, LOG_CONS | LOG_PID, LOG_AUTHPRIV);
	va_start(args, format);
	vsyslog(level, format, args);
	va_end(args);
	closelog();
	return;
}

/*
 * Sends a message to the application informing the user
 * that access was denied due to SLURM.
 */
extern void send_user_msg(pam_handle_t *pamh, const char *mesg)
{
	int retval;
	struct pam_conv *conv;
	void *dummy;    /* needed to eliminate warning:
			 * dereferencing type-punned pointer will
			 * break strict-aliasing rules */
	char str[PAM_MAX_MSG_SIZE];
	struct pam_message msg[1];
	const struct pam_message *pmsg[1];
	struct pam_response *prsp;

	info("send_user_msg: %s", mesg);

	/* Get conversation function to talk with app.
	 */
	retval = pam_get_item(pamh, PAM_CONV, (const void **) &dummy);
	conv = (struct pam_conv *) dummy;
	if (retval != PAM_SUCCESS) {
		_log_msg(LOG_ERR, "unable to get pam_conv: %s",
			 pam_strerror(pamh, retval));
		return;
	}

	/* Construct msg to send to app. */
	memcpy(str, mesg, sizeof(str));
	msg[0].msg_style = PAM_ERROR_MSG;
	msg[0].msg = str;
	pmsg[0] = &msg[0];
	prsp = NULL;

	/* Send msg to app and free the (meaningless) rsp. */
	retval = conv->conv(1, pmsg, &prsp, conv->appdata_ptr);
	if (retval != PAM_SUCCESS)
		_log_msg(LOG_ERR, "unable to converse with app: %s",
			 pam_strerror(pamh, retval));
	if (prsp != NULL)
		_pam_drop_reply(prsp, 1);

	return;
}

/*
 * Dynamically open system's libslurm.so with RTLD_GLOBAL flag.
 * This allows subsequently loaded modules access to libslurm symbols.
 */
extern void libpam_slurm_init (void)
{
	char libslurmname[64];

	if (slurm_h)
		return;

	/* First try to use the same libslurm version ("libslurm.so.24.0.0"),
	 * Second try to match the major version number ("libslurm.so.24"),
	 * Otherwise use "libslurm.so" */
	if (snprintf(libslurmname, sizeof(libslurmname),
		     "libslurm.so.%d.%d.%d", SLURM_API_CURRENT,
		     SLURM_API_REVISION, SLURM_API_AGE) >=
	    (signed) sizeof(libslurmname) ) {
		_log_msg (LOG_ERR, "Unable to write libslurmname\n");
	} else if ((slurm_h = dlopen(libslurmname, RTLD_NOW|RTLD_GLOBAL))) {
		return;
	} else {
		_log_msg (LOG_INFO, "Unable to dlopen %s: %s\n",
			  libslurmname, dlerror ());
	}

	if (snprintf(libslurmname, sizeof(libslurmname), "libslurm.so.%d",
		     SLURM_API_CURRENT) >= (signed) sizeof(libslurmname) ) {
		_log_msg (LOG_ERR, "Unable to write libslurmname\n");
	} else if ((slurm_h = dlopen(libslurmname, RTLD_NOW|RTLD_GLOBAL))) {
		return;
	} else {
		_log_msg (LOG_INFO, "Unable to dlopen %s: %s\n",
			  libslurmname, dlerror ());
	}

	if (!(slurm_h = dlopen("libslurm.so", RTLD_NOW|RTLD_GLOBAL))) {
		_log_msg (LOG_ERR, "Unable to dlopen libslurm.so: %s\n",
			  dlerror ());
	}

	return;
}

extern void libpam_slurm_fini (void)
{
	if (slurm_h)
		dlclose (slurm_h);
	return;
}
slurm-slurm-15-08-7-1/contribs/pam_slurm_adopt/helper.h000066400000000000000000000004611265000126300227670ustar00rootroot00000000000000/* helper.h * * Some helper functions needed for pam_slurm_adopt.c */ #define PAM_SM_ACCOUNT #include #include extern void send_user_msg(pam_handle_t *pamh, const char *msg); extern void libpam_slurm_init (void); extern void libpam_slurm_fini (void); slurm-slurm-15-08-7-1/contribs/pam_slurm_adopt/pam_slurm_adopt.c000066400000000000000000000540331265000126300246750ustar00rootroot00000000000000/*****************************************************************************\ * pam_slurm_adopt.c - Adopt incoming connections into jobs ***************************************************************************** * Copyright (C) 2015, Brigham Young University * Author: Ryan Cox * * This file is part of SLURM, a resource management program. * For details, see . * Please also read the included file: DISCLAIMER. * * SLURM is free software; you can redistribute it and/or modify it under * the terms of the GNU General Public License as published by the Free * Software Foundation; either version 2 of the License, or (at your option) * any later version. * * In addition, as a special exception, the copyright holders give permission * to link the code of portions of this program with the OpenSSL library under * certain conditions as described in each individual source file, and * distribute linked combinations including the two. You must obey the GNU * General Public License in all respects for all of the code used other than * OpenSSL. If you modify file(s) with this exception, you may extend this * exception to your version of the file(s), but you are not obligated to do * so. If you do not wish to do so, delete this exception statement from your * version. If you delete this exception statement from all source files in * the program, then also delete it here. 
* * SLURM is distributed in the hope that it will be useful, but WITHOUT ANY * WARRANTY; without even the implied warranty of MERCHANTABILITY or FITNESS * FOR A PARTICULAR PURPOSE. See the GNU General Public License for more * details. * * You should have received a copy of the GNU General Public License along * with SLURM; if not, write to the Free Software Foundation, Inc., * 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA. \*****************************************************************************/ #ifndef PAM_MODULE_NAME # define PAM_MODULE_NAME "pam_slurm_adopt" #endif #if HAVE_CONFIG_H # include "config.h" #endif #include #include #define PAM_SM_ACCOUNT #include #include #include #include #include #include #include #include #include #include #include #include #include #include #include "helper.h" #include "slurm/slurm.h" #include "src/common/slurm_xlator.h" #include "src/common/slurm_protocol_api.h" #include "src/common/xcgroup_read_config.c" #include "src/slurmd/common/xcgroup.c" /* This definition would probably be good to centralize somewhere */ #ifndef MAXHOSTNAMELEN #define MAXHOSTNAMELEN 64 #endif typedef enum { CALLERID_ACTION_NEWEST, CALLERID_ACTION_ALLOW, CALLERID_ACTION_IGNORE, CALLERID_ACTION_DENY, } callerid_action_t; /* module options */ static struct { int single_job_skip_rpc; /* Undocumented. If 1 and there is only 1 user * job, adopt it and skip RPC. If 0, *always* * try RPC even in single job situations. * Unlikely to ever be set to 0. 
*/ int ignore_root; callerid_action_t action_no_jobs; callerid_action_t action_unknown; callerid_action_t action_adopt_failure; callerid_action_t action_generic_failure; log_level_t log_level; char *node_name; } opts; static void _init_opts(void) { opts.single_job_skip_rpc = 1; opts.ignore_root = 1; opts.action_no_jobs = CALLERID_ACTION_DENY; opts.action_unknown = CALLERID_ACTION_NEWEST; opts.action_adopt_failure = CALLERID_ACTION_ALLOW; opts.action_generic_failure = CALLERID_ACTION_ALLOW; opts.log_level = LOG_LEVEL_INFO; opts.node_name = NULL; } /* Adopts a process into the given step. Returns SLURM_SUCCESS if * opts.action_adopt_failure == CALLERID_ACTION_ALLOW or if the process was * successfully adopted. */ static int _adopt_process(pid_t pid, step_loc_t *stepd) { int fd; uint16_t protocol_version; int rc; if (!stepd) return -1; debug("_adopt_process: trying to get %u.%u to adopt %d", stepd->jobid, stepd->stepid, pid); fd = stepd_connect(stepd->directory, stepd->nodename, stepd->jobid, stepd->stepid, &protocol_version); if (fd < 0) { /* It's normal for a step to exit */ debug3("unable to connect to step %u.%u on %s: %m", stepd->jobid, stepd->stepid, stepd->nodename); return -1; } rc = stepd_add_extern_pid(fd, stepd->protocol_version, pid); close(fd); if (rc == PAM_SUCCESS) info("Process %d adopted into job %u", pid, stepd->jobid); else info("Process %d adoption FAILED for job %u", pid, stepd->jobid); return rc; } /* Returns negative number on failure. Failures are likely to occur if a step * exits; this is not a problem. 
*/ static uid_t _get_job_uid(step_loc_t *stepd) { uid_t uid = -1; int fd; uint16_t protocol_version; fd = stepd_connect(stepd->directory, stepd->nodename, stepd->jobid, stepd->stepid, &protocol_version); if (fd < 0) { /* It's normal for a step to exit */ debug3("unable to connect to step %u.%u on %s: %m", stepd->jobid, stepd->stepid, stepd->nodename); return -1; } uid = stepd_get_uid(fd, stepd->protocol_version); close(fd); /* The step may have exited. Not a big concern. */ if ((int32_t)uid == -1) debug3("unable to determine uid of step %u.%u on %s", stepd->jobid, stepd->stepid, stepd->nodename); return uid; } /* Return mtime of a cgroup. If we can't read the right cgroup information, * return 0. That results in a (somewhat) random choice of job */ static time_t _cgroup_creation_time(char *uidcg, uint32_t job_id) { char path[PATH_MAX]; struct stat statbuf; if (snprintf(path, PATH_MAX, "%s/job_%u", uidcg, job_id) >= PATH_MAX) { info("snprintf: '%s/job_%u' longer than PATH_MAX of %d", uidcg, job_id, PATH_MAX); return 0; } if (stat(path, &statbuf) != 0) { info("Couldn't stat path '%s'", path); return 0; } return statbuf.st_mtime; } static int _indeterminate_multiple(pam_handle_t *pamh, List steps, uid_t uid, step_loc_t **out_stepd) { ListIterator itr = NULL; int rc = PAM_PERM_DENIED; step_loc_t *stepd = NULL; time_t most_recent = 0, cgroup_time = 0; char uidcg[PATH_MAX]; char *cgroup_suffix = ""; if (opts.action_unknown == CALLERID_ACTION_DENY) { debug("Denying due to action_unknown=deny"); send_user_msg(pamh, "Access denied by " PAM_MODULE_NAME ": unable to determine source job"); return PAM_PERM_DENIED; } if (opts.node_name) cgroup_suffix = xstrdup_printf("_%s", opts.node_name); if (snprintf(uidcg, PATH_MAX, "%s/memory/slurm%s/uid_%u", slurm_cgroup_conf->cgroup_mountpoint, cgroup_suffix, uid) >= PATH_MAX) { info("snprintf: '%s/memory/slurm%s/uid_%u' longer than PATH_MAX of %d", slurm_cgroup_conf->cgroup_mountpoint, cgroup_suffix, uid, PATH_MAX); /* Make the uidcg an 
empty string. This will effectively switch * to a (somewhat) random selection of job rather than picking * the latest, but how did you overflow PATH_MAX chars anyway? */ uidcg[0] = '\0'; } if (opts.node_name) xfree(cgroup_suffix); itr = list_iterator_create(steps); while ((stepd = list_next(itr))) { /* Only use container steps from this user */ if (stepd->stepid == SLURM_EXTERN_CONT && (uid == _get_job_uid(stepd))) { cgroup_time = _cgroup_creation_time( uidcg, stepd->jobid); /* Return the newest job_id, according to cgroup * creation. Hopefully this is a good way to do this */ if (cgroup_time > most_recent) { most_recent = cgroup_time; *out_stepd = stepd; rc = PAM_SUCCESS; } } } /* No jobs from this user exist on this node. This should have been * caught earlier but wasn't for some reason. */ if (rc != PAM_SUCCESS) { if (opts.action_no_jobs == CALLERID_ACTION_DENY) { debug("uid %u owns no jobs => deny", uid); send_user_msg(pamh, "Access denied by " PAM_MODULE_NAME ": you have no active jobs on this node"); rc = PAM_PERM_DENIED; } else { debug("uid %u owns no jobs but action_no_jobs=allow", uid); rc = PAM_SUCCESS; } } list_iterator_destroy(itr); return rc; } /* This is the action of last resort. If action_unknown=allow, allow it through * without adoption. Otherwise, call _indeterminate_multiple to pick a job. If * successful, adopt it into a process and use a return code based on success of * the adoption and the action_adopt_failure setting. */ static int _action_unknown(pam_handle_t *pamh, struct passwd *pwd, List steps) { int rc; step_loc_t *stepd = NULL; if (opts.action_unknown == CALLERID_ACTION_ALLOW) { debug("Allowing due to action_unknown=allow"); return PAM_SUCCESS; } /* Both the single job check and the RPC call have failed to ascertain * the correct job to adopt this into. 
Time for drastic measures */ rc = _indeterminate_multiple(pamh, steps, pwd->pw_uid, &stepd); if (rc == PAM_SUCCESS) { info("action_unknown: Picked job %u", stepd->jobid); if (_adopt_process(getpid(), stepd) == SLURM_SUCCESS) return PAM_SUCCESS; if (opts.action_adopt_failure == CALLERID_ACTION_ALLOW) return PAM_SUCCESS; else return PAM_PERM_DENIED; } else { /* This pam module was worthless, apparently */ debug("_indeterminate_multiple failed to find a job to adopt this into"); return rc; } } /* _user_job_count returns the count of jobs owned by the user AND sets job_id * to the last job from the user that is found */ static int _user_job_count(List steps, uid_t uid, step_loc_t **out_stepd) { ListIterator itr = NULL; int user_job_cnt = 0; step_loc_t *stepd = NULL; *out_stepd = NULL; itr = list_iterator_create(steps); while ((stepd = list_next(itr))) { if ((stepd->stepid == SLURM_EXTERN_CONT) && (uid == _get_job_uid(stepd))) { user_job_cnt++; *out_stepd = stepd; } } list_iterator_destroy(itr); return user_job_cnt; } static int _rpc_network_callerid(struct callerid_conn *conn, char *user_name, uint32_t *job_id) { network_callerid_msg_t req; char ip_src_str[INET6_ADDRSTRLEN]; char node_name[MAXHOSTNAMELEN]; memcpy((void *)&req.ip_src, (void *)&conn->ip_src, 16); memcpy((void *)&req.ip_dst, (void *)&conn->ip_dst, 16); req.port_src = conn->port_src; req.port_dst = conn->port_dst; req.af = conn->af; inet_ntop(req.af, &conn->ip_src, ip_src_str, INET6_ADDRSTRLEN); if (slurm_network_callerid(req, job_id, node_name, MAXHOSTNAMELEN) != SLURM_SUCCESS) { debug("From %s port %d as %s: unable to retrieve callerid data from remote slurmd", ip_src_str, req.port_src, user_name); return SLURM_FAILURE; } else if (*job_id == (uint32_t)NO_VAL) { debug("From %s port %d as %s: job indeterminate", ip_src_str, req.port_src, user_name); return SLURM_FAILURE; } else { info("From %s port %d as %s: member of job %u", ip_src_str, req.port_src, user_name, *job_id); return SLURM_SUCCESS; } } /* Ask 
the slurmd at the source IP address of the network connection if it knows * what job initiated this connection. If it can be determined, the process is * adopted into that job's step_extern. In the event of any failure, it returns * PAM_IGNORE so that it will fall through to the next action */ static int _try_rpc(struct passwd *pwd) { uint32_t job_id; int rc; char ip_src_str[INET6_ADDRSTRLEN]; struct callerid_conn conn; /* Gather network information for RPC call. */ debug("Checking file descriptors for network socket"); /* Check my fds for a network socket */ if (callerid_get_own_netinfo(&conn) != SLURM_SUCCESS) { /* If this failed, the RPC will surely fail. If we continued * we'd have to fill in junk for lots of variables. Fall * through to next action. This is really odd and likely means * that the kernel doesn't provide the necessary mechanisms to * view this process' network info or that sshd did something * different with the arrangement of file descriptors */ error("callerid_get_own_netinfo unable to find network socket"); return PAM_IGNORE; } if (inet_ntop(conn.af, &conn.ip_src, ip_src_str, INET6_ADDRSTRLEN) == NULL) { /* Somehow we successfully grabbed bad data. Fall through to * next action. */ error("inet_ntop failed"); return PAM_IGNORE; } /* Ask the slurmd at the source IP address about this connection */ rc = _rpc_network_callerid(&conn, pwd->pw_name, &job_id); if (rc == SLURM_SUCCESS) { step_loc_t stepd; memset(&stepd, 0, sizeof(step_loc_t)); /* We only need the jobid and stepid filled in here all the rest isn't needed for the adopt. */ stepd.jobid = job_id; stepd.stepid = SLURM_EXTERN_CONT; /* Adopt the process. If the adoption succeeds, return SUCCESS. * If not, maybe the adoption failed because the user hopped * into one node and was adopted into a job there that isn't on * our node here. 
In that case we got a bad jobid so we'll fall * through to the next action */ if (_adopt_process(getpid(), &stepd) == SLURM_SUCCESS) return PAM_SUCCESS; else return PAM_IGNORE; } info("From %s port %d as %s: unable to determine source job", ip_src_str, conn.port_src, pwd->pw_name); return PAM_IGNORE; } /* Use the pam logging function for now since normal logging is not yet * initialized */ log_level_t _parse_log_level(pam_handle_t *pamh, const char *log_level_str) { unsigned int u; char *endptr; u = (unsigned int)strtoul(log_level_str, &endptr, 0); if (endptr && endptr[0]) { /* not an integer */ if (!strcasecmp(log_level_str, "quiet")) u = LOG_LEVEL_QUIET; else if(!strcasecmp(log_level_str, "fatal")) u = LOG_LEVEL_FATAL; else if(!strcasecmp(log_level_str, "error")) u = LOG_LEVEL_ERROR; else if(!strcasecmp(log_level_str, "info")) u = LOG_LEVEL_INFO; else if(!strcasecmp(log_level_str, "verbose")) u = LOG_LEVEL_VERBOSE; else if(!strcasecmp(log_level_str, "debug")) u = LOG_LEVEL_DEBUG; else if(!strcasecmp(log_level_str, "debug2")) u = LOG_LEVEL_DEBUG2; else if(!strcasecmp(log_level_str, "debug3")) u = LOG_LEVEL_DEBUG3; else if(!strcasecmp(log_level_str, "debug4")) u = LOG_LEVEL_DEBUG4; else if(!strcasecmp(log_level_str, "debug5")) u = LOG_LEVEL_DEBUG5; else if(!strcasecmp(log_level_str, "sched")) u = LOG_LEVEL_SCHED; else { pam_syslog(pamh, LOG_ERR, "unrecognized log level %s, setting to max", log_level_str); /* We'll set it to the highest logging * level, just to be sure */ u = (unsigned int)LOG_LEVEL_END - 1; } } else { /* An integer was specified */ if (u >= LOG_LEVEL_END) { pam_syslog(pamh, LOG_ERR, "log level %u too high, lowering to max", u); u = (unsigned int)LOG_LEVEL_END - 1; } } return u; } /* Use the pam logging function for now, so we need pamh */ static void _parse_opts(pam_handle_t *pamh, int argc, const char **argv) { char *v; for (; argc-- > 0; ++argv) { if (!strncasecmp(*argv, "single_job_skip_rpc=0", 21)) opts.single_job_skip_rpc = 0; else if 
	    (!strncasecmp(*argv, "ignore_root=0", 13))
			opts.ignore_root = 0;
		else if (!strncasecmp(*argv, "action_no_jobs=", 15)) {
			v = (char *)(15 + *argv);
			if (!strncasecmp(v, "deny", 4))
				opts.action_no_jobs = CALLERID_ACTION_DENY;
			else if (!strncasecmp(v, "ignore", 6))
				opts.action_no_jobs = CALLERID_ACTION_IGNORE;
			else {
				pam_syslog(pamh, LOG_ERR,
					   "unrecognized action_no_jobs=%s, setting to 'deny'",
					   v);
			}
		} else if (!strncasecmp(*argv, "action_unknown=", 15)) {
			v = (char *)(15 + *argv);
			if (!strncasecmp(v, "allow", 5))
				opts.action_unknown = CALLERID_ACTION_ALLOW;
			else if (!strncasecmp(v, "newest", 6))
				opts.action_unknown = CALLERID_ACTION_NEWEST;
			else if (!strncasecmp(v, "deny", 4))
				opts.action_unknown = CALLERID_ACTION_DENY;
			else {
				pam_syslog(pamh, LOG_ERR,
					   "unrecognized action_unknown=%s, setting to 'newest'",
					   v);
			}
		} else if (!strncasecmp(*argv, "action_generic_failure=", 23)) {
			v = (char *)(23 + *argv);
			if (!strncasecmp(v, "allow", 5))
				opts.action_generic_failure = CALLERID_ACTION_ALLOW;
			else if (!strncasecmp(v, "ignore", 6))
				opts.action_generic_failure = CALLERID_ACTION_IGNORE;
			else if (!strncasecmp(v, "deny", 4))
				opts.action_generic_failure = CALLERID_ACTION_DENY;
			else {
				pam_syslog(pamh, LOG_ERR,
					   "unrecognized action_generic_failure=%s, setting to 'allow'",
					   v);
			}
		} else if (!strncasecmp(*argv, "log_level=", 10)) {
			v = (char *)(10 + *argv);
			opts.log_level = _parse_log_level(pamh, v);
		} else if (!strncasecmp(*argv, "nodename=", 9)) {
			v = (char *)(9 + *argv);
			opts.node_name = xstrdup(v);
		}
	}
}

static void _log_init(log_level_t level)
{
	log_options_t logopts = LOG_OPTS_INITIALIZER;

	logopts.stderr_level  = LOG_LEVEL_FATAL;
	logopts.syslog_level  = level;
	log_init(PAM_MODULE_NAME, logopts, LOG_AUTHPRIV, NULL);
}

static int _load_cgroup_config()
{
	slurm_cgroup_conf = xmalloc(sizeof(slurm_cgroup_conf_t));
	bzero(slurm_cgroup_conf, sizeof(slurm_cgroup_conf_t));
	if (read_slurm_cgroup_conf(slurm_cgroup_conf) != SLURM_SUCCESS) {
		info("read_slurm_cgroup_conf failed");
		return SLURM_FAILURE;
	}
	return SLURM_SUCCESS;
}

/* Parse arguments, etc then get my socket address/port information. Attempt to
 * adopt this process into a job in the following order:
 * 	1) If the user has only one job on the node, pick that one
 * 	2) Send RPC to source IP of socket. If there is a slurmd at the IP
 * 	   address, ask it which job I belong to. On success, pick that one
 *	3) Pick a job semi-randomly (default) or skip the adoption (if
 *	   configured)
 */
PAM_EXTERN int pam_sm_acct_mgmt(pam_handle_t *pamh, int flags
				__attribute__((unused)), int argc, const char **argv)
{
	int retval = PAM_IGNORE, rc, slurmrc, bufsize, user_jobs;
	char *user_name;
	List steps = NULL;
	step_loc_t *stepd = NULL;
	struct passwd pwd, *pwd_result;
	char *buf = NULL;

	_init_opts();
	_parse_opts(pamh, argc, argv);
	_log_init(opts.log_level);

	switch (opts.action_generic_failure) {
	case CALLERID_ACTION_DENY:
		rc = PAM_PERM_DENIED;
		break;
	case CALLERID_ACTION_ALLOW:
		rc = PAM_SUCCESS;
		break;
	case CALLERID_ACTION_IGNORE:
		rc = PAM_IGNORE;
		break;
	/* Newer gcc versions warn if enum cases are missing */
	default:
		error("The code is broken!!!!");
	}

	retval = pam_get_item(pamh, PAM_USER, (void *) &user_name);
	if (user_name == NULL || retval != PAM_SUCCESS) {
		pam_syslog(pamh, LOG_ERR, "No username in PAM_USER? Fail!");
		return PAM_SESSION_ERR;
	}

	/* Check for an unsafe config that might lock out root. This is a very
	 * basic check that shouldn't be 100% relied on */
	if (!opts.ignore_root &&
	    (opts.action_unknown == CALLERID_ACTION_DENY ||
	     opts.action_no_jobs != CALLERID_ACTION_ALLOW ||
	     opts.action_adopt_failure != CALLERID_ACTION_ALLOW ||
	     opts.action_generic_failure != CALLERID_ACTION_ALLOW
		    )) {
		/* Let's get verbose */
		info("===============================");
		info("Danger!!!");
		info("A crazy admin set ignore_root=0 and some unsafe actions");
		info("You might lock out root!");
		info("If this is desirable, modify the source code");
		info("Setting ignore_root=1 and continuing");
		opts.ignore_root = 1;
	}

	/* Ignoring root is probably best but the admin can allow it */
	if (!strcmp(user_name, "root")) {
		if (opts.ignore_root) {
			info("Ignoring root user");
			return PAM_IGNORE;
		} else {
			/* This administrator is crazy */
			info("Danger!!! This is a connection attempt by root and ignore_root=0 is set! Hope for the best!");
		}
	}

	/* Calculate buffer size for getpwnam_r */
	bufsize = sysconf(_SC_GETPW_R_SIZE_MAX);
	if (bufsize == -1)
		bufsize = 16384; /* take a large guess */

	buf = xmalloc(bufsize);
	retval = getpwnam_r(user_name, &pwd, buf, bufsize, &pwd_result);
	if (pwd_result == NULL) {
		if (retval == 0) {
			error("getpwnam_r could not locate %s", user_name);
		} else {
			errno = retval;
			error("getpwnam_r: %m");
		}
		xfree(buf);
		return PAM_SESSION_ERR;
	}

	if (_load_cgroup_config() != SLURM_SUCCESS)
		return rc;

	/* Check if there are any steps on the node from any user. A failure here
	 * likely means failures everywhere so exit on failure or if no local jobs
	 * exist. */
	steps = stepd_available(NULL, opts.node_name);
	if (!steps) {
		error("Error obtaining local step information.");
		goto cleanup;
	}

	/* Check to see if this user has only one job on the node. If so, choose
	 * that job and adopt this process into it (unless configured not to) */
	user_jobs = _user_job_count(steps, pwd.pw_uid, &stepd);
	if (user_jobs == 0) {
		if (opts.action_no_jobs == CALLERID_ACTION_DENY) {
			send_user_msg(pamh,
				      "Access denied by "
				      PAM_MODULE_NAME
				      ": you have no active jobs on this node");
			rc = PAM_PERM_DENIED;
		} else {
			debug("uid %u owns no jobs but action_no_jobs=ignore",
			      pwd.pw_uid);
			rc = PAM_IGNORE;
		}
		goto cleanup;
	} else if (user_jobs == 1) {
		if (opts.single_job_skip_rpc) {
			info("Connection by user %s: user has only one job %u",
			     user_name, stepd->jobid);
			slurmrc = _adopt_process(getpid(), stepd);
			/* If adoption into the only job fails, it is time to
			 * exit. Return code is based on the
			 * action_adopt_failure setting */
			if (slurmrc == SLURM_SUCCESS ||
			    (opts.action_adopt_failure == CALLERID_ACTION_ALLOW))
				rc = PAM_SUCCESS;
			else
				rc = PAM_PERM_DENIED;
			goto cleanup;
		}
	} else {
		debug("uid %u has %d jobs", pwd.pw_uid, user_jobs);
	}

	/* Single job check turned up nothing (or we skipped it). Make RPC call
	 * to slurmd at source IP. If it can tell us the job, the function calls
	 * _adopt_process */
	rc = _try_rpc(&pwd);
	if (rc == PAM_SUCCESS)
		goto cleanup;

	/* The source of the connection either didn't reply or couldn't
	 * determine the job ID at the source. Proceed to action_unknown */
	rc = _action_unknown(pamh, &pwd, steps);

cleanup:
	FREE_NULL_LIST(steps);
	xfree(buf);
	xfree(slurm_cgroup_conf);
	xfree(opts.node_name);
	return rc;
}

#ifdef PAM_STATIC
struct pam_module _pam_slurm_adopt_modstruct = {
	PAM_MODULE_NAME,
	NULL, NULL,
	pam_sm_acct_mgmt,
	NULL, NULL, NULL,
};
#endif
slurm-slurm-15-08-7-1/contribs/perlapi/
slurm-slurm-15-08-7-1/contribs/perlapi/Makefile.am
SUBDIRS = libslurm libslurmdb

EXTRA_DIST = common/msg.h
slurm-slurm-15-08-7-1/contribs/perlapi/Makefile.in
# Makefile.in generated by automake 1.14.1 from Makefile.am.
# @configure_input@

# Copyright (C) 1994-2013 Free Software Foundation, Inc.

# This Makefile.in is free software; the Free Software Foundation
# gives unlimited permission to copy and/or distribute it,
# with or without modifications, as long as this notice is preserved.

# This program is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY, to the extent permitted by law; without
# even the implied warranty of MERCHANTABILITY or FITNESS FOR A
# PARTICULAR PURPOSE.

@SET_MAKE@
VPATH = @srcdir@
am__is_gnu_make = test -n '$(MAKEFILE_LIST)' && test -n '$(MAKELEVEL)'
am__make_running_with_option = \
  case $${target_option-} in \
      ?)
;; \ *) echo "am__make_running_with_option: internal error: invalid" \ "target option '$${target_option-}' specified" >&2; \ exit 1;; \ esac; \ has_opt=no; \ sane_makeflags=$$MAKEFLAGS; \ if $(am__is_gnu_make); then \ sane_makeflags=$$MFLAGS; \ else \ case $$MAKEFLAGS in \ *\\[\ \ ]*) \ bs=\\; \ sane_makeflags=`printf '%s\n' "$$MAKEFLAGS" \ | sed "s/$$bs$$bs[$$bs $$bs ]*//g"`;; \ esac; \ fi; \ skip_next=no; \ strip_trailopt () \ { \ flg=`printf '%s\n' "$$flg" | sed "s/$$1.*$$//"`; \ }; \ for flg in $$sane_makeflags; do \ test $$skip_next = yes && { skip_next=no; continue; }; \ case $$flg in \ *=*|--*) continue;; \ -*I) strip_trailopt 'I'; skip_next=yes;; \ -*I?*) strip_trailopt 'I';; \ -*O) strip_trailopt 'O'; skip_next=yes;; \ -*O?*) strip_trailopt 'O';; \ -*l) strip_trailopt 'l'; skip_next=yes;; \ -*l?*) strip_trailopt 'l';; \ -[dEDm]) skip_next=yes;; \ -[JT]) skip_next=yes;; \ esac; \ case $$flg in \ *$$target_option*) has_opt=yes; break;; \ esac; \ done; \ test $$has_opt = yes am__make_dryrun = (target_option=n; $(am__make_running_with_option)) am__make_keepgoing = (target_option=k; $(am__make_running_with_option)) pkgdatadir = $(datadir)/@PACKAGE@ pkgincludedir = $(includedir)/@PACKAGE@ pkglibdir = $(libdir)/@PACKAGE@ pkglibexecdir = $(libexecdir)/@PACKAGE@ am__cd = CDPATH="$${ZSH_VERSION+.}$(PATH_SEPARATOR)" && cd install_sh_DATA = $(install_sh) -c -m 644 install_sh_PROGRAM = $(install_sh) -c install_sh_SCRIPT = $(install_sh) -c INSTALL_HEADER = $(INSTALL_DATA) transform = $(program_transform_name) NORMAL_INSTALL = : PRE_INSTALL = : POST_INSTALL = : NORMAL_UNINSTALL = : PRE_UNINSTALL = : POST_UNINSTALL = : build_triplet = @build@ host_triplet = @host@ target_triplet = @target@ subdir = contribs/perlapi DIST_COMMON = $(srcdir)/Makefile.in $(srcdir)/Makefile.am ACLOCAL_M4 = $(top_srcdir)/aclocal.m4 am__aclocal_m4_deps = $(top_srcdir)/auxdir/ax_lib_hdf5.m4 \ $(top_srcdir)/auxdir/ax_pthread.m4 \ $(top_srcdir)/auxdir/libtool.m4 \ $(top_srcdir)/auxdir/ltoptions.m4 
\ $(top_srcdir)/auxdir/ltsugar.m4 \ $(top_srcdir)/auxdir/ltversion.m4 \ $(top_srcdir)/auxdir/lt~obsolete.m4 \ $(top_srcdir)/auxdir/slurm.m4 \ $(top_srcdir)/auxdir/x_ac__system_configuration.m4 \ $(top_srcdir)/auxdir/x_ac_affinity.m4 \ $(top_srcdir)/auxdir/x_ac_aix.m4 \ $(top_srcdir)/auxdir/x_ac_blcr.m4 \ $(top_srcdir)/auxdir/x_ac_bluegene.m4 \ $(top_srcdir)/auxdir/x_ac_cflags.m4 \ $(top_srcdir)/auxdir/x_ac_cray.m4 \ $(top_srcdir)/auxdir/x_ac_curl.m4 \ $(top_srcdir)/auxdir/x_ac_databases.m4 \ $(top_srcdir)/auxdir/x_ac_debug.m4 \ $(top_srcdir)/auxdir/x_ac_dlfcn.m4 \ $(top_srcdir)/auxdir/x_ac_env.m4 \ $(top_srcdir)/auxdir/x_ac_freeipmi.m4 \ $(top_srcdir)/auxdir/x_ac_gpl_licensed.m4 \ $(top_srcdir)/auxdir/x_ac_hwloc.m4 \ $(top_srcdir)/auxdir/x_ac_iso.m4 \ $(top_srcdir)/auxdir/x_ac_json.m4 \ $(top_srcdir)/auxdir/x_ac_lua.m4 \ $(top_srcdir)/auxdir/x_ac_man2html.m4 \ $(top_srcdir)/auxdir/x_ac_munge.m4 \ $(top_srcdir)/auxdir/x_ac_ncurses.m4 \ $(top_srcdir)/auxdir/x_ac_netloc.m4 \ $(top_srcdir)/auxdir/x_ac_nrt.m4 \ $(top_srcdir)/auxdir/x_ac_ofed.m4 \ $(top_srcdir)/auxdir/x_ac_pam.m4 \ $(top_srcdir)/auxdir/x_ac_printf_null.m4 \ $(top_srcdir)/auxdir/x_ac_ptrace.m4 \ $(top_srcdir)/auxdir/x_ac_readline.m4 \ $(top_srcdir)/auxdir/x_ac_rrdtool.m4 \ $(top_srcdir)/auxdir/x_ac_setpgrp.m4 \ $(top_srcdir)/auxdir/x_ac_setproctitle.m4 \ $(top_srcdir)/auxdir/x_ac_sgi_job.m4 \ $(top_srcdir)/auxdir/x_ac_slurm_ssl.m4 \ $(top_srcdir)/auxdir/x_ac_sun_const.m4 \ $(top_srcdir)/configure.ac am__configure_deps = $(am__aclocal_m4_deps) $(CONFIGURE_DEPENDENCIES) \ $(ACLOCAL_M4) mkinstalldirs = $(install_sh) -d CONFIG_HEADER = $(top_builddir)/config.h $(top_builddir)/slurm/slurm.h CONFIG_CLEAN_FILES = CONFIG_CLEAN_VPATH_FILES = AM_V_P = $(am__v_P_@AM_V@) am__v_P_ = $(am__v_P_@AM_DEFAULT_V@) am__v_P_0 = false am__v_P_1 = : AM_V_GEN = $(am__v_GEN_@AM_V@) am__v_GEN_ = $(am__v_GEN_@AM_DEFAULT_V@) am__v_GEN_0 = @echo " GEN " $@; am__v_GEN_1 = AM_V_at = $(am__v_at_@AM_V@) am__v_at_ = 
$(am__v_at_@AM_DEFAULT_V@) am__v_at_0 = @ am__v_at_1 = SOURCES = DIST_SOURCES = RECURSIVE_TARGETS = all-recursive check-recursive cscopelist-recursive \ ctags-recursive dvi-recursive html-recursive info-recursive \ install-data-recursive install-dvi-recursive \ install-exec-recursive install-html-recursive \ install-info-recursive install-pdf-recursive \ install-ps-recursive install-recursive installcheck-recursive \ installdirs-recursive pdf-recursive ps-recursive \ tags-recursive uninstall-recursive am__can_run_installinfo = \ case $$AM_UPDATE_INFO_DIR in \ n|no|NO) false;; \ *) (install-info --version) >/dev/null 2>&1;; \ esac RECURSIVE_CLEAN_TARGETS = mostlyclean-recursive clean-recursive \ distclean-recursive maintainer-clean-recursive am__recursive_targets = \ $(RECURSIVE_TARGETS) \ $(RECURSIVE_CLEAN_TARGETS) \ $(am__extra_recursive_targets) AM_RECURSIVE_TARGETS = $(am__recursive_targets:-recursive=) TAGS CTAGS \ distdir am__tagged_files = $(HEADERS) $(SOURCES) $(TAGS_FILES) $(LISP) # Read a list of newline-separated strings from the standard input, # and print each of them once, without duplicates. Input order is # *not* preserved. am__uniquify_input = $(AWK) '\ BEGIN { nonempty = 0; } \ { items[$$0] = 1; nonempty = 1; } \ END { if (nonempty) { for (i in items) print i; }; } \ ' # Make sure the list of sources is unique. This is necessary because, # e.g., the same source file might be shared among _SOURCES variables # for different programs/libraries. 
am__define_uniq_tagged_files = \ list='$(am__tagged_files)'; \ unique=`for i in $$list; do \ if test -f "$$i"; then echo $$i; else echo $(srcdir)/$$i; fi; \ done | $(am__uniquify_input)` ETAGS = etags CTAGS = ctags DIST_SUBDIRS = $(SUBDIRS) DISTFILES = $(DIST_COMMON) $(DIST_SOURCES) $(TEXINFOS) $(EXTRA_DIST) am__relativize = \ dir0=`pwd`; \ sed_first='s,^\([^/]*\)/.*$$,\1,'; \ sed_rest='s,^[^/]*/*,,'; \ sed_last='s,^.*/\([^/]*\)$$,\1,'; \ sed_butlast='s,/*[^/]*$$,,'; \ while test -n "$$dir1"; do \ first=`echo "$$dir1" | sed -e "$$sed_first"`; \ if test "$$first" != "."; then \ if test "$$first" = ".."; then \ dir2=`echo "$$dir0" | sed -e "$$sed_last"`/"$$dir2"; \ dir0=`echo "$$dir0" | sed -e "$$sed_butlast"`; \ else \ first2=`echo "$$dir2" | sed -e "$$sed_first"`; \ if test "$$first2" = "$$first"; then \ dir2=`echo "$$dir2" | sed -e "$$sed_rest"`; \ else \ dir2="../$$dir2"; \ fi; \ dir0="$$dir0"/"$$first"; \ fi; \ fi; \ dir1=`echo "$$dir1" | sed -e "$$sed_rest"`; \ done; \ reldir="$$dir2" ACLOCAL = @ACLOCAL@ AMTAR = @AMTAR@ AM_DEFAULT_VERBOSITY = @AM_DEFAULT_VERBOSITY@ AR = @AR@ AUTHD_CFLAGS = @AUTHD_CFLAGS@ AUTHD_LIBS = @AUTHD_LIBS@ AUTOCONF = @AUTOCONF@ AUTOHEADER = @AUTOHEADER@ AUTOMAKE = @AUTOMAKE@ AWK = @AWK@ BGL_LOADED = @BGL_LOADED@ BGQ_LOADED = @BGQ_LOADED@ BG_INCLUDES = @BG_INCLUDES@ BG_LDFLAGS = @BG_LDFLAGS@ BG_L_P_LOADED = @BG_L_P_LOADED@ BLCR_CPPFLAGS = @BLCR_CPPFLAGS@ BLCR_HOME = @BLCR_HOME@ BLCR_LDFLAGS = @BLCR_LDFLAGS@ BLCR_LIBS = @BLCR_LIBS@ BLUEGENE_LOADED = @BLUEGENE_LOADED@ CC = @CC@ CCDEPMODE = @CCDEPMODE@ CFLAGS = @CFLAGS@ CHECK_CFLAGS = @CHECK_CFLAGS@ CHECK_LIBS = @CHECK_LIBS@ CMD_LDFLAGS = @CMD_LDFLAGS@ CPP = @CPP@ CPPFLAGS = @CPPFLAGS@ CRAY_JOB_CPPFLAGS = @CRAY_JOB_CPPFLAGS@ CRAY_JOB_LDFLAGS = @CRAY_JOB_LDFLAGS@ CRAY_SELECT_CPPFLAGS = @CRAY_SELECT_CPPFLAGS@ CRAY_SELECT_LDFLAGS = @CRAY_SELECT_LDFLAGS@ CRAY_SWITCH_CPPFLAGS = @CRAY_SWITCH_CPPFLAGS@ CRAY_SWITCH_LDFLAGS = @CRAY_SWITCH_LDFLAGS@ CRAY_TASK_CPPFLAGS = @CRAY_TASK_CPPFLAGS@ 
CRAY_TASK_LDFLAGS = @CRAY_TASK_LDFLAGS@ CXX = @CXX@ CXXCPP = @CXXCPP@ CXXDEPMODE = @CXXDEPMODE@ CXXFLAGS = @CXXFLAGS@ CYGPATH_W = @CYGPATH_W@ DATAWARP_CPPFLAGS = @DATAWARP_CPPFLAGS@ DATAWARP_LDFLAGS = @DATAWARP_LDFLAGS@ DEFS = @DEFS@ DEPDIR = @DEPDIR@ DLLTOOL = @DLLTOOL@ DL_LIBS = @DL_LIBS@ DSYMUTIL = @DSYMUTIL@ DUMPBIN = @DUMPBIN@ ECHO_C = @ECHO_C@ ECHO_N = @ECHO_N@ ECHO_T = @ECHO_T@ EGREP = @EGREP@ EXEEXT = @EXEEXT@ FGREP = @FGREP@ FREEIPMI_CPPFLAGS = @FREEIPMI_CPPFLAGS@ FREEIPMI_LDFLAGS = @FREEIPMI_LDFLAGS@ FREEIPMI_LIBS = @FREEIPMI_LIBS@ GLIB_CFLAGS = @GLIB_CFLAGS@ GLIB_COMPILE_RESOURCES = @GLIB_COMPILE_RESOURCES@ GLIB_GENMARSHAL = @GLIB_GENMARSHAL@ GLIB_LIBS = @GLIB_LIBS@ GLIB_MKENUMS = @GLIB_MKENUMS@ GOBJECT_QUERY = @GOBJECT_QUERY@ GREP = @GREP@ GTK_CFLAGS = @GTK_CFLAGS@ GTK_LIBS = @GTK_LIBS@ H5CC = @H5CC@ H5FC = @H5FC@ HAVEMYSQLCONFIG = @HAVEMYSQLCONFIG@ HAVE_AIX = @HAVE_AIX@ HAVE_MAN2HTML = @HAVE_MAN2HTML@ HAVE_NRT = @HAVE_NRT@ HAVE_OPENSSL = @HAVE_OPENSSL@ HAVE_SOME_CURSES = @HAVE_SOME_CURSES@ HDF5_CC = @HDF5_CC@ HDF5_CFLAGS = @HDF5_CFLAGS@ HDF5_CPPFLAGS = @HDF5_CPPFLAGS@ HDF5_FC = @HDF5_FC@ HDF5_FFLAGS = @HDF5_FFLAGS@ HDF5_FLIBS = @HDF5_FLIBS@ HDF5_LDFLAGS = @HDF5_LDFLAGS@ HDF5_LIBS = @HDF5_LIBS@ HDF5_VERSION = @HDF5_VERSION@ HWLOC_CPPFLAGS = @HWLOC_CPPFLAGS@ HWLOC_LDFLAGS = @HWLOC_LDFLAGS@ HWLOC_LIBS = @HWLOC_LIBS@ INSTALL = @INSTALL@ INSTALL_DATA = @INSTALL_DATA@ INSTALL_PROGRAM = @INSTALL_PROGRAM@ INSTALL_SCRIPT = @INSTALL_SCRIPT@ INSTALL_STRIP_PROGRAM = @INSTALL_STRIP_PROGRAM@ JSON_CPPFLAGS = @JSON_CPPFLAGS@ JSON_LDFLAGS = @JSON_LDFLAGS@ LD = @LD@ LDFLAGS = @LDFLAGS@ LIBCURL = @LIBCURL@ LIBCURL_CPPFLAGS = @LIBCURL_CPPFLAGS@ LIBOBJS = @LIBOBJS@ LIBS = @LIBS@ LIBTOOL = @LIBTOOL@ LIB_LDFLAGS = @LIB_LDFLAGS@ LIPO = @LIPO@ LN_S = @LN_S@ LTLIBOBJS = @LTLIBOBJS@ MAINT = @MAINT@ MAKEINFO = @MAKEINFO@ MANIFEST_TOOL = @MANIFEST_TOOL@ MKDIR_P = @MKDIR_P@ MUNGE_CPPFLAGS = @MUNGE_CPPFLAGS@ MUNGE_DIR = @MUNGE_DIR@ MUNGE_LDFLAGS = @MUNGE_LDFLAGS@ MUNGE_LIBS = 
@MUNGE_LIBS@ MYSQL_CFLAGS = @MYSQL_CFLAGS@ MYSQL_LIBS = @MYSQL_LIBS@ NCURSES = @NCURSES@ NETLOC_CPPFLAGS = @NETLOC_CPPFLAGS@ NETLOC_LDFLAGS = @NETLOC_LDFLAGS@ NETLOC_LIBS = @NETLOC_LIBS@ NM = @NM@ NMEDIT = @NMEDIT@ NRT_CPPFLAGS = @NRT_CPPFLAGS@ NUMA_LIBS = @NUMA_LIBS@ OBJDUMP = @OBJDUMP@ OBJEXT = @OBJEXT@ OFED_CPPFLAGS = @OFED_CPPFLAGS@ OFED_LDFLAGS = @OFED_LDFLAGS@ OFED_LIBS = @OFED_LIBS@ OTOOL = @OTOOL@ OTOOL64 = @OTOOL64@ PACKAGE = @PACKAGE@ PACKAGE_BUGREPORT = @PACKAGE_BUGREPORT@ PACKAGE_NAME = @PACKAGE_NAME@ PACKAGE_STRING = @PACKAGE_STRING@ PACKAGE_TARNAME = @PACKAGE_TARNAME@ PACKAGE_URL = @PACKAGE_URL@ PACKAGE_VERSION = @PACKAGE_VERSION@ PAM_DIR = @PAM_DIR@ PAM_LIBS = @PAM_LIBS@ PATH_SEPARATOR = @PATH_SEPARATOR@ PKG_CONFIG = @PKG_CONFIG@ PKG_CONFIG_LIBDIR = @PKG_CONFIG_LIBDIR@ PKG_CONFIG_PATH = @PKG_CONFIG_PATH@ PROCTRACKDIR = @PROCTRACKDIR@ PROJECT = @PROJECT@ PTHREAD_CC = @PTHREAD_CC@ PTHREAD_CFLAGS = @PTHREAD_CFLAGS@ PTHREAD_LIBS = @PTHREAD_LIBS@ RANLIB = @RANLIB@ READLINE_LIBS = @READLINE_LIBS@ REAL_BGQ_LOADED = @REAL_BGQ_LOADED@ REAL_BG_L_P_LOADED = @REAL_BG_L_P_LOADED@ RELEASE = @RELEASE@ RRDTOOL_CPPFLAGS = @RRDTOOL_CPPFLAGS@ RRDTOOL_LDFLAGS = @RRDTOOL_LDFLAGS@ RRDTOOL_LIBS = @RRDTOOL_LIBS@ RUNJOB_LDFLAGS = @RUNJOB_LDFLAGS@ SED = @SED@ SEMAPHORE_LIBS = @SEMAPHORE_LIBS@ SEMAPHORE_SOURCES = @SEMAPHORE_SOURCES@ SET_MAKE = @SET_MAKE@ SHELL = @SHELL@ SLEEP_CMD = @SLEEP_CMD@ SLURMCTLD_PORT = @SLURMCTLD_PORT@ SLURMCTLD_PORT_COUNT = @SLURMCTLD_PORT_COUNT@ SLURMDBD_PORT = @SLURMDBD_PORT@ SLURMD_PORT = @SLURMD_PORT@ SLURM_API_AGE = @SLURM_API_AGE@ SLURM_API_CURRENT = @SLURM_API_CURRENT@ SLURM_API_MAJOR = @SLURM_API_MAJOR@ SLURM_API_REVISION = @SLURM_API_REVISION@ SLURM_API_VERSION = @SLURM_API_VERSION@ SLURM_MAJOR = @SLURM_MAJOR@ SLURM_MICRO = @SLURM_MICRO@ SLURM_MINOR = @SLURM_MINOR@ SLURM_PREFIX = @SLURM_PREFIX@ SLURM_VERSION_NUMBER = @SLURM_VERSION_NUMBER@ SLURM_VERSION_STRING = @SLURM_VERSION_STRING@ SO_LDFLAGS = @SO_LDFLAGS@ SSL_CPPFLAGS = @SSL_CPPFLAGS@ 
SSL_LDFLAGS = @SSL_LDFLAGS@ SSL_LIBS = @SSL_LIBS@ STRIP = @STRIP@ SUCMD = @SUCMD@ UTIL_LIBS = @UTIL_LIBS@ VERSION = @VERSION@ _libcurl_config = @_libcurl_config@ abs_builddir = @abs_builddir@ abs_srcdir = @abs_srcdir@ abs_top_builddir = @abs_top_builddir@ abs_top_srcdir = @abs_top_srcdir@ ac_ct_AR = @ac_ct_AR@ ac_ct_CC = @ac_ct_CC@ ac_ct_CXX = @ac_ct_CXX@ ac_ct_DUMPBIN = @ac_ct_DUMPBIN@ ac_have_man2html = @ac_have_man2html@ am__include = @am__include@ am__leading_dot = @am__leading_dot@ am__quote = @am__quote@ am__tar = @am__tar@ am__untar = @am__untar@ ax_pthread_config = @ax_pthread_config@ bindir = @bindir@ build = @build@ build_alias = @build_alias@ build_cpu = @build_cpu@ build_os = @build_os@ build_vendor = @build_vendor@ builddir = @builddir@ datadir = @datadir@ datarootdir = @datarootdir@ docdir = @docdir@ dvidir = @dvidir@ exec_prefix = @exec_prefix@ host = @host@ host_alias = @host_alias@ host_cpu = @host_cpu@ host_os = @host_os@ host_vendor = @host_vendor@ htmldir = @htmldir@ includedir = @includedir@ infodir = @infodir@ install_sh = @install_sh@ libdir = @libdir@ libexecdir = @libexecdir@ localedir = @localedir@ localstatedir = @localstatedir@ lua_CFLAGS = @lua_CFLAGS@ lua_LIBS = @lua_LIBS@ mandir = @mandir@ mkdir_p = @mkdir_p@ oldincludedir = @oldincludedir@ pdfdir = @pdfdir@ prefix = @prefix@ program_transform_name = @program_transform_name@ psdir = @psdir@ sbindir = @sbindir@ sharedstatedir = @sharedstatedir@ srcdir = @srcdir@ sysconfdir = @sysconfdir@ target = @target@ target_alias = @target_alias@ target_cpu = @target_cpu@ target_os = @target_os@ target_vendor = @target_vendor@ top_build_prefix = @top_build_prefix@ top_builddir = @top_builddir@ top_srcdir = @top_srcdir@ SUBDIRS = libslurm libslurmdb EXTRA_DIST = common/msg.h all: all-recursive .SUFFIXES: $(srcdir)/Makefile.in: @MAINTAINER_MODE_TRUE@ $(srcdir)/Makefile.am $(am__configure_deps) @for dep in $?; do \ case '$(am__configure_deps)' in \ *$$dep*) \ ( cd $(top_builddir) && $(MAKE) 
$(AM_MAKEFLAGS) am--refresh ) \ && { if test -f $@; then exit 0; else break; fi; }; \ exit 1;; \ esac; \ done; \ echo ' cd $(top_srcdir) && $(AUTOMAKE) --gnu contribs/perlapi/Makefile'; \ $(am__cd) $(top_srcdir) && \ $(AUTOMAKE) --gnu contribs/perlapi/Makefile .PRECIOUS: Makefile Makefile: $(srcdir)/Makefile.in $(top_builddir)/config.status @case '$?' in \ *config.status*) \ cd $(top_builddir) && $(MAKE) $(AM_MAKEFLAGS) am--refresh;; \ *) \ echo ' cd $(top_builddir) && $(SHELL) ./config.status $(subdir)/$@ $(am__depfiles_maybe)'; \ cd $(top_builddir) && $(SHELL) ./config.status $(subdir)/$@ $(am__depfiles_maybe);; \ esac; $(top_builddir)/config.status: $(top_srcdir)/configure $(CONFIG_STATUS_DEPENDENCIES) cd $(top_builddir) && $(MAKE) $(AM_MAKEFLAGS) am--refresh $(top_srcdir)/configure: @MAINTAINER_MODE_TRUE@ $(am__configure_deps) cd $(top_builddir) && $(MAKE) $(AM_MAKEFLAGS) am--refresh $(ACLOCAL_M4): @MAINTAINER_MODE_TRUE@ $(am__aclocal_m4_deps) cd $(top_builddir) && $(MAKE) $(AM_MAKEFLAGS) am--refresh $(am__aclocal_m4_deps): mostlyclean-libtool: -rm -f *.lo clean-libtool: -rm -rf .libs _libs # This directory's subdirectories are mostly independent; you can cd # into them and run 'make' without going through this Makefile. # To change the values of 'make' variables: instead of editing Makefiles, # (1) if the variable is set in 'config.status', edit 'config.status' # (which will cause the Makefiles to be regenerated when you run 'make'); # (2) otherwise, pass the desired values on the 'make' command line. 
$(am__recursive_targets): @fail=; \ if $(am__make_keepgoing); then \ failcom='fail=yes'; \ else \ failcom='exit 1'; \ fi; \ dot_seen=no; \ target=`echo $@ | sed s/-recursive//`; \ case "$@" in \ distclean-* | maintainer-clean-*) list='$(DIST_SUBDIRS)' ;; \ *) list='$(SUBDIRS)' ;; \ esac; \ for subdir in $$list; do \ echo "Making $$target in $$subdir"; \ if test "$$subdir" = "."; then \ dot_seen=yes; \ local_target="$$target-am"; \ else \ local_target="$$target"; \ fi; \ ($(am__cd) $$subdir && $(MAKE) $(AM_MAKEFLAGS) $$local_target) \ || eval $$failcom; \ done; \ if test "$$dot_seen" = "no"; then \ $(MAKE) $(AM_MAKEFLAGS) "$$target-am" || exit 1; \ fi; test -z "$$fail" ID: $(am__tagged_files) $(am__define_uniq_tagged_files); mkid -fID $$unique tags: tags-recursive TAGS: tags tags-am: $(TAGS_DEPENDENCIES) $(am__tagged_files) set x; \ here=`pwd`; \ if ($(ETAGS) --etags-include --version) >/dev/null 2>&1; then \ include_option=--etags-include; \ empty_fix=.; \ else \ include_option=--include; \ empty_fix=; \ fi; \ list='$(SUBDIRS)'; for subdir in $$list; do \ if test "$$subdir" = .; then :; else \ test ! 
-f $$subdir/TAGS || \ set "$$@" "$$include_option=$$here/$$subdir/TAGS"; \ fi; \ done; \ $(am__define_uniq_tagged_files); \ shift; \ if test -z "$(ETAGS_ARGS)$$*$$unique"; then :; else \ test -n "$$unique" || unique=$$empty_fix; \ if test $$# -gt 0; then \ $(ETAGS) $(ETAGSFLAGS) $(AM_ETAGSFLAGS) $(ETAGS_ARGS) \ "$$@" $$unique; \ else \ $(ETAGS) $(ETAGSFLAGS) $(AM_ETAGSFLAGS) $(ETAGS_ARGS) \ $$unique; \ fi; \ fi ctags: ctags-recursive CTAGS: ctags ctags-am: $(TAGS_DEPENDENCIES) $(am__tagged_files) $(am__define_uniq_tagged_files); \ test -z "$(CTAGS_ARGS)$$unique" \ || $(CTAGS) $(CTAGSFLAGS) $(AM_CTAGSFLAGS) $(CTAGS_ARGS) \ $$unique GTAGS: here=`$(am__cd) $(top_builddir) && pwd` \ && $(am__cd) $(top_srcdir) \ && gtags -i $(GTAGS_ARGS) "$$here" cscopelist: cscopelist-recursive cscopelist-am: $(am__tagged_files) list='$(am__tagged_files)'; \ case "$(srcdir)" in \ [\\/]* | ?:[\\/]*) sdir="$(srcdir)" ;; \ *) sdir=$(subdir)/$(srcdir) ;; \ esac; \ for i in $$list; do \ if test -f "$$i"; then \ echo "$(subdir)/$$i"; \ else \ echo "$$sdir/$$i"; \ fi; \ done >> $(top_builddir)/cscope.files distclean-tags: -rm -f TAGS ID GTAGS GRTAGS GSYMS GPATH tags distdir: $(DISTFILES) @srcdirstrip=`echo "$(srcdir)" | sed 's/[].[^$$\\*]/\\\\&/g'`; \ topsrcdirstrip=`echo "$(top_srcdir)" | sed 's/[].[^$$\\*]/\\\\&/g'`; \ list='$(DISTFILES)'; \ dist_files=`for file in $$list; do echo $$file; done | \ sed -e "s|^$$srcdirstrip/||;t" \ -e "s|^$$topsrcdirstrip/|$(top_builddir)/|;t"`; \ case $$dist_files in \ */*) $(MKDIR_P) `echo "$$dist_files" | \ sed '/\//!d;s|^|$(distdir)/|;s,/[^/]*$$,,' | \ sort -u` ;; \ esac; \ for file in $$dist_files; do \ if test -f $$file || test -d $$file; then d=.; else d=$(srcdir); fi; \ if test -d $$d/$$file; then \ dir=`echo "/$$file" | sed -e 's,/[^/]*$$,,'`; \ if test -d "$(distdir)/$$file"; then \ find "$(distdir)/$$file" -type d ! 
-perm -700 -exec chmod u+rwx {} \;; \ fi; \ if test -d $(srcdir)/$$file && test $$d != $(srcdir); then \ cp -fpR $(srcdir)/$$file "$(distdir)$$dir" || exit 1; \ find "$(distdir)/$$file" -type d ! -perm -700 -exec chmod u+rwx {} \;; \ fi; \ cp -fpR $$d/$$file "$(distdir)$$dir" || exit 1; \ else \ test -f "$(distdir)/$$file" \ || cp -p $$d/$$file "$(distdir)/$$file" \ || exit 1; \ fi; \ done @list='$(DIST_SUBDIRS)'; for subdir in $$list; do \ if test "$$subdir" = .; then :; else \ $(am__make_dryrun) \ || test -d "$(distdir)/$$subdir" \ || $(MKDIR_P) "$(distdir)/$$subdir" \ || exit 1; \ dir1=$$subdir; dir2="$(distdir)/$$subdir"; \ $(am__relativize); \ new_distdir=$$reldir; \ dir1=$$subdir; dir2="$(top_distdir)"; \ $(am__relativize); \ new_top_distdir=$$reldir; \ echo " (cd $$subdir && $(MAKE) $(AM_MAKEFLAGS) top_distdir="$$new_top_distdir" distdir="$$new_distdir" \\"; \ echo " am__remove_distdir=: am__skip_length_check=: am__skip_mode_fix=: distdir)"; \ ($(am__cd) $$subdir && \ $(MAKE) $(AM_MAKEFLAGS) \ top_distdir="$$new_top_distdir" \ distdir="$$new_distdir" \ am__remove_distdir=: \ am__skip_length_check=: \ am__skip_mode_fix=: \ distdir) \ || exit 1; \ fi; \ done check-am: all-am check: check-recursive all-am: Makefile installdirs: installdirs-recursive installdirs-am: install: install-recursive install-exec: install-exec-recursive install-data: install-data-recursive uninstall: uninstall-recursive install-am: all-am @$(MAKE) $(AM_MAKEFLAGS) install-exec-am install-data-am installcheck: installcheck-recursive install-strip: if test -z '$(STRIP)'; then \ $(MAKE) $(AM_MAKEFLAGS) INSTALL_PROGRAM="$(INSTALL_STRIP_PROGRAM)" \ install_sh_PROGRAM="$(INSTALL_STRIP_PROGRAM)" INSTALL_STRIP_FLAG=-s \ install; \ else \ $(MAKE) $(AM_MAKEFLAGS) INSTALL_PROGRAM="$(INSTALL_STRIP_PROGRAM)" \ install_sh_PROGRAM="$(INSTALL_STRIP_PROGRAM)" INSTALL_STRIP_FLAG=-s \ "INSTALL_PROGRAM_ENV=STRIPPROG='$(STRIP)'" install; \ fi mostlyclean-generic: clean-generic: distclean-generic: -test -z 
"$(CONFIG_CLEAN_FILES)" || rm -f $(CONFIG_CLEAN_FILES) -test . = "$(srcdir)" || test -z "$(CONFIG_CLEAN_VPATH_FILES)" || rm -f $(CONFIG_CLEAN_VPATH_FILES) maintainer-clean-generic: @echo "This command is intended for maintainers to use" @echo "it deletes files that may require special tools to rebuild." clean: clean-recursive clean-am: clean-generic clean-libtool mostlyclean-am distclean: distclean-recursive -rm -f Makefile distclean-am: clean-am distclean-generic distclean-tags dvi: dvi-recursive dvi-am: html: html-recursive html-am: info: info-recursive info-am: install-data-am: install-dvi: install-dvi-recursive install-dvi-am: install-exec-am: install-html: install-html-recursive install-html-am: install-info: install-info-recursive install-info-am: install-man: install-pdf: install-pdf-recursive install-pdf-am: install-ps: install-ps-recursive install-ps-am: installcheck-am: maintainer-clean: maintainer-clean-recursive -rm -f Makefile maintainer-clean-am: distclean-am maintainer-clean-generic mostlyclean: mostlyclean-recursive mostlyclean-am: mostlyclean-generic mostlyclean-libtool pdf: pdf-recursive pdf-am: ps: ps-recursive ps-am: uninstall-am: .MAKE: $(am__recursive_targets) install-am install-strip .PHONY: $(am__recursive_targets) CTAGS GTAGS TAGS all all-am check \ check-am clean clean-generic clean-libtool cscopelist-am ctags \ ctags-am distclean distclean-generic distclean-libtool \ distclean-tags distdir dvi dvi-am html html-am info info-am \ install install-am install-data install-data-am install-dvi \ install-dvi-am install-exec install-exec-am install-html \ install-html-am install-info install-info-am install-man \ install-pdf install-pdf-am install-ps install-ps-am \ install-strip installcheck installcheck-am installdirs \ installdirs-am maintainer-clean maintainer-clean-generic \ mostlyclean mostlyclean-generic mostlyclean-libtool pdf pdf-am \ ps ps-am tags tags-am uninstall uninstall-am # Tell versions [3.59,3.63) of GNU make to not export all 
variables. # Otherwise a system limit (for SysV at least) may be exceeded. .NOEXPORT: slurm-slurm-15-08-7-1/contribs/perlapi/common/000077500000000000000000000000001265000126300210745ustar00rootroot00000000000000slurm-slurm-15-08-7-1/contribs/perlapi/common/msg.h000066400000000000000000000176431265000126300220460ustar00rootroot00000000000000/* * msg.h - prototypes of msg-hv converting functions */ #ifndef _MSG_H #define _MSG_H #include #include #include #include #include typedef char* charp; /* * store an uint16_t into AV */ inline static int av_store_uint16_t(AV* av, int index, uint16_t val) { SV* sv = NULL; /* Perl has a hard time figuring out the an unsigned int is equal to INFINITE or NO_VAL since they are treated as signed ints so we will handle this here. */ if(val == (uint16_t)INFINITE) sv = newSViv(INFINITE); else if(val == (uint16_t)NO_VAL) sv = newSViv(NO_VAL); else sv = newSViv(val); if (av_store(av, (I32)index, sv) == NULL) { SvREFCNT_dec(sv); return -1; } return 0; } /* * store an uint32_t into AV */ inline static int av_store_uint32_t(AV* av, int index, uint32_t val) { SV* sv = NULL; /* Perl has a hard time figuring out the an unsigned int is equal to INFINITE or NO_VAL since they are treated as signed ints so we will handle this here. 
*/ if(val == (uint32_t)INFINITE) sv = newSViv(INFINITE); else if(val == (uint32_t)NO_VAL) sv = newSViv(NO_VAL); else sv = newSViv(val); if (av_store(av, (I32)index, sv) == NULL) { SvREFCNT_dec(sv); return -1; } return 0; } /* * store an int into AV */ inline static int av_store_int(AV* av, int index, int val) { SV* sv = newSViv(val); if (av_store(av, (I32)index, sv) == NULL) { SvREFCNT_dec(sv); return -1; } return 0; } /* * store an uint32_t into AV */ inline static int av_store_int32_t(AV* av, int index, int32_t val) { return av_store_int(av, index, val); } /* * store a string into HV */ inline static int hv_store_charp(HV* hv, const char *key, charp val) { SV* sv = NULL; if (val) { sv = newSVpv(val, 0); if (!key || hv_store(hv, key, (I32)strlen(key), sv, 0) == NULL) { SvREFCNT_dec(sv); return -1; } } return 0; } /* * store an unsigned 64b int into HV */ inline static int hv_store_uint64_t(HV* hv, const char *key, uint64_t val) { SV* sv = NULL; /* Perl has a hard time figuring out the an unsigned int is equal to INFINITE or NO_VAL since they are treated as signed ints so we will handle this here. */ if(val == (uint64_t)INFINITE) sv = newSViv(INFINITE); else if(val == (uint64_t)NO_VAL) sv = newSViv(NO_VAL); else sv = newSVuv(val); if (!key || hv_store(hv, key, (I32)strlen(key), sv, 0) == NULL) { SvREFCNT_dec(sv); return -1; } return 0; } /* * store an unsigned 32b int into HV */ inline static int hv_store_uint32_t(HV* hv, const char *key, uint32_t val) { SV* sv = NULL; /* Perl has a hard time figuring out the an unsigned int is equal to INFINITE or NO_VAL since they are treated as signed ints so we will handle this here. 
*/ if(val == (uint32_t)INFINITE) sv = newSViv(INFINITE); else if(val == (uint32_t)NO_VAL) sv = newSViv(NO_VAL); else sv = newSVuv(val); if (!key || hv_store(hv, key, (I32)strlen(key), sv, 0) == NULL) { SvREFCNT_dec(sv); return -1; } return 0; } /* * store an unsigned 16b int into HV */ inline static int hv_store_uint16_t(HV* hv, const char *key, uint16_t val) { SV* sv = NULL; /* Perl has a hard time figuring out the an unsigned int is equal to INFINITE or NO_VAL since they are treated as signed ints so we will handle this here. */ if(val == (uint16_t)INFINITE) sv = newSViv(INFINITE); else if(val == (uint16_t)NO_VAL) sv = newSViv(NO_VAL); else sv = newSVuv(val); if (!key || hv_store(hv, key, (I32)strlen(key), sv, 0) == NULL) { SvREFCNT_dec(sv); return -1; } return 0; } /* * store an unsigned 8b int into HV */ inline static int hv_store_uint8_t(HV* hv, const char *key, uint8_t val) { SV* sv = NULL; /* Perl has a hard time figuring out the an unsigned int is equal to INFINITE or NO_VAL since they are treated as signed ints so we will handle this here. 
*/ if(val == (uint8_t)INFINITE) sv = newSViv(INFINITE); else if(val == (uint8_t)NO_VAL) sv = newSViv(NO_VAL); else sv = newSVuv(val); if (!key || hv_store(hv, key, (I32)strlen(key), sv, 0) == NULL) { SvREFCNT_dec(sv); return -1; } return 0; } /* * store a uid_t into HV */ inline static int hv_store_uid_t(HV* hv, const char *key, uid_t val) { SV* sv = newSVuv(val); if (!key || hv_store(hv, key, (I32)strlen(key), sv, 0) == NULL) { SvREFCNT_dec(sv); return -1; } return 0; } /* * store a signed int into HV */ inline static int hv_store_int(HV* hv, const char *key, int val) { SV* sv = newSViv(val); if (!key || hv_store(hv, key, (I32)strlen(key), sv, 0) == NULL) { SvREFCNT_dec(sv); return -1; } return 0; } /* * store a double */ inline static int hv_store_double(HV* hv, const char *key, double val) { SV* sv = newSVnv(val); if (!key || hv_store(hv, key, (I32)strlen(key), sv, 0) == NULL) { SvREFCNT_dec(sv); return -1; } return 0; } /* * store a signed 32b int into HV */ inline static int hv_store_int32_t(HV* hv, const char *key, int32_t val) { return hv_store_int(hv, key, val); } /* * store a bool into HV */ inline static int hv_store_bool(HV* hv, const char *key, bool val) { if (!key || hv_store(hv, key, (I32)strlen(key), (val ? &PL_sv_yes : &PL_sv_no), 0) == NULL) { return -1; } return 0; } /* * store a time_t into HV */ inline static int hv_store_time_t(HV* hv, const char *key, time_t val) { SV* sv = newSVuv(val); if (!key || hv_store(hv, key, (I32)strlen(key), sv, 0) == NULL) { SvREFCNT_dec(sv); return -1; } return 0; } /* * store a SV into HV */ inline static int hv_store_sv(HV* hv, const char *key, SV* sv) { if (!key || hv_store(hv, key, (I32)strlen(key), sv, 0) == NULL) { return -1; } return 0; } /* * store a PTR into HV * set classname to Nullch to avoid blessing the created SV. 
/*
 * store a PTR into HV
 * set classname to Nullch to avoid blessing the created SV.
 */
inline static int hv_store_ptr(HV* hv, const char *key, void* ptr,
			       const char *classname)
{
	SV* sv = NULL;

	/*
	 * if ptr == NULL and we call sv_setref_pv() and store the sv in hv,
	 * sv_isobject() will fail later when FETCH_PTR_FIELD.
	 */
	if (ptr) {
		sv = newSV(0);
		sv_setref_pv(sv, classname, ptr);
		if (!key || hv_store(hv, key, (I32)strlen(key), sv, 0) == NULL) {
			SvREFCNT_dec(sv);
			return -1;
		}
	}
	return 0;
}

#define SV2int(sv)      SvIV(sv)
#define SV2int32_t(sv)  SvIV(sv)
#define SV2uint64_t(sv) SvUV(sv)
#define SV2uint32_t(sv) SvUV(sv)
#define SV2uint16_t(sv) SvUV(sv)
#define SV2uint8_t(sv)  SvUV(sv)
#define SV2time_t(sv)   SvUV(sv)
#define SV2charp(sv)    SvPV_nolen(sv)
#define SV2bool(sv)     SvTRUE(sv)

#if 0
/* Error on some 32-bit systems */
#define SV2ptr(sv)      SvIV(SvRV(sv))
#else
static inline void * SV2ptr(SV *sv)
{
	void * ptr = (void *) ((intptr_t) SvIV(SvRV(sv)));
	return ptr;
}
#endif

#define FETCH_FIELD(hv, ptr, field, type, required) \
	do { \
		SV** svp; \
		if ( (svp = hv_fetch (hv, #field, strlen(#field), FALSE)) ) { \
			ptr->field = (type) (SV2##type (*svp)); \
		} else if (required) { \
			Perl_warn (aTHX_ "Required field \"" #field "\" missing in HV"); \
			return -1; \
		} \
	} while (0)

#define FETCH_PTR_FIELD(hv, ptr, field, classname, required) \
	do { \
		SV** svp; \
		if ( (svp = hv_fetch (hv, #field, strlen(#field), FALSE)) ) { \
			if (! ( sv_isobject(*svp) && \
				SvTYPE(SvRV(*svp)) == SVt_PVMG && \
				sv_derived_from(*svp, classname)) ) { \
				Perl_croak(aTHX_ "field %s is not an object of %s", \
					   #field, classname); \
			} \
			ptr->field = (typeof(ptr->field)) (SV2ptr (*svp)); \
		} else if (required) { \
			Perl_warn (aTHX_ "Required field \"" #field "\" missing in HV"); \
			return -1; \
		} \
	} while (0)

#define STORE_FIELD(hv, ptr, field, type) \
	do { \
		if (hv_store_##type(hv, #field, ptr->field)) { \
			Perl_warn (aTHX_ "Failed to store field \"" #field "\""); \
			return -1; \
		} \
	} while (0)

#define STORE_PTR_FIELD(hv, ptr, field, classname) \
	do { \
		if (hv_store_ptr(hv, #field, ptr->field, classname)) { \
			Perl_warn (aTHX_ "Failed to store field \"" #field "\""); \
			return -1; \
		} \
	} while (0)

#endif /* _MSG_H */

slurm-slurm-15-08-7-1/contribs/perlapi/libslurm/Makefile.am

AUTOMAKE_OPTIONS = foreign

# copied from pidgin
#

perl_dir = perl
perlpath = /usr/bin/perl

perl_sources = \
	$(perl_dir)/Makefile.PL.in \
	$(perl_dir)/ppport.h \
	$(perl_dir)/Slurm.xs \
	$(perl_dir)/lib/Slurm.pm \
	$(perl_dir)/lib/Slurm/Bitstr.pm \
	$(perl_dir)/lib/Slurm/Constant.pm \
	$(perl_dir)/lib/Slurm/Hostlist.pm \
	$(perl_dir)/lib/Slurm/Stepctx.pm \
	$(perl_dir)/typemap \
	$(perl_dir)/classmap \
	$(perl_dir)/bitstr.h \
	$(perl_dir)/slurm-perl.h \
	$(perl_dir)/alloc.c \
	$(perl_dir)/block.c \
	$(perl_dir)/conf.c \
	$(perl_dir)/job.c \
	$(perl_dir)/node.c \
	$(perl_dir)/partition.c \
	$(perl_dir)/reservation.c \
	$(perl_dir)/step.c \
	$(perl_dir)/step_ctx.c \
	$(perl_dir)/topo.c \
	$(perl_dir)/trigger.c

test_sources = \
	$(perl_dir)/t/00-use.t \
	$(perl_dir)/t/01-error.t \
	$(perl_dir)/t/02-string.t \
	$(perl_dir)/t/03-block.t \
	$(perl_dir)/t/04-alloc.c \
	$(perl_dir)/t/05-signal.t \
	$(perl_dir)/t/06-complete.t \
	$(perl_dir)/t/07-spawn.t \
	$(perl_dir)/t/08-conf.t \
	$(perl_dir)/t/09-resource.t \
	$(perl_dir)/t/10-job.t \
	$(perl_dir)/t/11-step.t \
	$(perl_dir)/t/12-node.t \
	$(perl_dir)/t/13-topo.t \
	$(perl_dir)/t/14-select.t \
	$(perl_dir)/t/15-partition.t \
	$(perl_dir)/t/16-reservation.t \
	$(perl_dir)/t/17-ping.t \
	$(perl_dir)/t/18-suspend.t \
	$(perl_dir)/t/19-checkpoint.t \
	$(perl_dir)/t/20-trigger.t \
	$(perl_dir)/t/21-hostlist.t \
	$(perl_dir)/t/22-list.t \
	$(perl_dir)/t/23-bitstr.t

EXTRA_DIST = $(perl_sources) $(test_sources)

$(perl_dir)/Makefile: $(perl_dir)/Makefile.PL
	@if test "x${top_srcdir}" != "x${top_builddir}"; then \
		for f in ${perl_sources}; do \
			$(mkdir_p) `dirname $$f`; \
			${LN_S} -f ${abs_srcdir}/$$f $$f; \
		done; \
		for f in ${test_sources}; do \
			$(mkdir_p) `dirname $$f`; \
			${LN_S} -f ${abs_srcdir}/$$f $$f; \
		done; \
	fi
	@cd $(perl_dir) && $(perlpath) Makefile.PL $(PERL_MM_PARAMS) prefix=${prefix} INSTALL_BASE= PERL_MM_OPT=

#
# Note on linking logic below
#
# Install at PREFIX and ignore both INSTALL_BASE and PERL_MM_OPT.  Having
# more than one installation location specification results in a build error.
# AIX needs to use LD to link.  It can not use gcc.
# Suse Linux compiles with gcc, but picks some other compiler to use for linking.
# Since some CFLAGS may be incompatible with this other compiler, the build
# may fail, as seen on BlueGene platforms.
# Other Linux implementations seem to work fine with the LD specified as below.
#
all-local: $(perl_dir)/Makefile #libslurm
if HAVE_AIX
	@cd $(perl_dir) && \
	if [ ! -f Makefile ]; then \
		$(perlpath) Makefile.PL $(PERL_MM_PARAMS) prefix=${prefix} INSTALL_BASE= PERL_MM_OPT= ; \
	fi && \
	($(MAKE) CC="$(CC)" CCFLAGS="$(PERL_CFLAGS) -g -static $(CFLAGS) $(CPPFLAGS)" $(PERL_EXTRA_OPTS) || \
	 $(MAKE) CC="$(CC)" CCFLAGS="$(PERL_CFLAGS) -g -static $(CFLAGS) $(CPPFLAGS)" $(PERL_EXTRA_OPTS)) && \
	cd ..;
else
	@cd $(perl_dir) && \
	if [ ! -f Makefile ]; then \
		$(perlpath) Makefile.PL $(PERL_MM_PARAMS) prefix=${prefix} INSTALL_BASE= PERL_MM_OPT= ; \
	fi && \
	($(MAKE) CC="$(CC)" LD="$(CC) $(CFLAGS) $(LDFLAGS)" CCFLAGS="$(PERL_CFLAGS) -g -static $(CFLAGS) $(CPPFLAGS)" $(PERL_EXTRA_OPTS) || \
	 $(MAKE) CC="$(CC)" LD="$(CC) $(CFLAGS) $(LDFLAGS)" CCFLAGS="$(PERL_CFLAGS) -g -static $(CFLAGS) $(CPPFLAGS)" $(PERL_EXTRA_OPTS)) && \
	cd ..;
endif

install-exec-local:
	@cd $(perl_dir) && \
	$(MAKE) DESTDIR=$(DESTDIR) install && \
	cd ..;

# Evil Hack (TM)
# ... which doesn't work with DESTDIR installs.  FIXME?
uninstall-local:
	@cd $(perl_dir) && \
	`$(MAKE) uninstall | grep unlink | sed -e 's#/usr#${prefix}#' -e 's#unlink#rm -f#'` && \
	cd ..;

clean-generic:
	@cd $(perl_dir); \
	$(MAKE) clean; \
	if test "x${top_srcdir}" != "x${top_builddir}"; then \
		rm -fr lib t *c *h *xs typemap classmap; \
	fi; \
	cd ..;
	@if test "x${top_srcdir}" != "x${top_builddir}"; then \
		for f in ${perl_sources}; do \
			$(mkdir_p) `dirname $$f`; \
			${LN_S} -f ${abs_srcdir}/$$f $$f; \
		done; \
		for f in ${test_sources}; do \
			$(mkdir_p) `dirname $$f`; \
			${LN_S} -f ${abs_srcdir}/$$f $$f; \
		done; \
	fi

distclean-generic:
	@cd $(perl_dir); \
	$(MAKE) realclean; \
	rm -f Makefile.PL; \
	rm -f Makefile.old; \
	rm -f Makefile; \
	cd ..;
	@rm -f Makefile
	@if test "x${top_srcdir}" != "x${top_builddir}"; then \
		for f in ${perl_sources}; do \
			$(mkdir_p) `dirname $$f`; \
			${LN_S} -f ${abs_srcdir}/$$f $$f; \
		done; \
		for f in ${test_sources}; do \
			$(mkdir_p) `dirname $$f`; \
			${LN_S} -f ${abs_srcdir}/$$f $$f; \
		done; \
	fi

AM_CPPFLAGS = \
	-DVERSION=\"$(VERSION)\" \
	-I$(top_srcdir) \
	-I$(top_builddir) \
	$(DEBUG_CFLAGS) \
	$(PERL_CFLAGS)

slurm-slurm-15-08-7-1/contribs/perlapi/libslurm/Makefile.in

# Makefile.in generated by automake 1.14.1 from Makefile.am.
# @configure_input@

# Copyright (C) 1994-2013 Free Software Foundation, Inc.
# This Makefile.in is free software; the Free Software Foundation # gives unlimited permission to copy and/or distribute it, # with or without modifications, as long as this notice is preserved. # This program is distributed in the hope that it will be useful, # but WITHOUT ANY WARRANTY, to the extent permitted by law; without # even the implied warranty of MERCHANTABILITY or FITNESS FOR A # PARTICULAR PURPOSE. @SET_MAKE@ VPATH = @srcdir@ am__is_gnu_make = test -n '$(MAKEFILE_LIST)' && test -n '$(MAKELEVEL)' am__make_running_with_option = \ case $${target_option-} in \ ?) ;; \ *) echo "am__make_running_with_option: internal error: invalid" \ "target option '$${target_option-}' specified" >&2; \ exit 1;; \ esac; \ has_opt=no; \ sane_makeflags=$$MAKEFLAGS; \ if $(am__is_gnu_make); then \ sane_makeflags=$$MFLAGS; \ else \ case $$MAKEFLAGS in \ *\\[\ \ ]*) \ bs=\\; \ sane_makeflags=`printf '%s\n' "$$MAKEFLAGS" \ | sed "s/$$bs$$bs[$$bs $$bs ]*//g"`;; \ esac; \ fi; \ skip_next=no; \ strip_trailopt () \ { \ flg=`printf '%s\n' "$$flg" | sed "s/$$1.*$$//"`; \ }; \ for flg in $$sane_makeflags; do \ test $$skip_next = yes && { skip_next=no; continue; }; \ case $$flg in \ *=*|--*) continue;; \ -*I) strip_trailopt 'I'; skip_next=yes;; \ -*I?*) strip_trailopt 'I';; \ -*O) strip_trailopt 'O'; skip_next=yes;; \ -*O?*) strip_trailopt 'O';; \ -*l) strip_trailopt 'l'; skip_next=yes;; \ -*l?*) strip_trailopt 'l';; \ -[dEDm]) skip_next=yes;; \ -[JT]) skip_next=yes;; \ esac; \ case $$flg in \ *$$target_option*) has_opt=yes; break;; \ esac; \ done; \ test $$has_opt = yes am__make_dryrun = (target_option=n; $(am__make_running_with_option)) am__make_keepgoing = (target_option=k; $(am__make_running_with_option)) pkgdatadir = $(datadir)/@PACKAGE@ pkgincludedir = $(includedir)/@PACKAGE@ pkglibdir = $(libdir)/@PACKAGE@ pkglibexecdir = $(libexecdir)/@PACKAGE@ am__cd = CDPATH="$${ZSH_VERSION+.}$(PATH_SEPARATOR)" && cd install_sh_DATA = $(install_sh) -c -m 644 install_sh_PROGRAM = $(install_sh) 
-c install_sh_SCRIPT = $(install_sh) -c INSTALL_HEADER = $(INSTALL_DATA) transform = $(program_transform_name) NORMAL_INSTALL = : PRE_INSTALL = : POST_INSTALL = : NORMAL_UNINSTALL = : PRE_UNINSTALL = : POST_UNINSTALL = : build_triplet = @build@ host_triplet = @host@ target_triplet = @target@ subdir = contribs/perlapi/libslurm DIST_COMMON = $(srcdir)/Makefile.in $(srcdir)/Makefile.am ACLOCAL_M4 = $(top_srcdir)/aclocal.m4 am__aclocal_m4_deps = $(top_srcdir)/auxdir/ax_lib_hdf5.m4 \ $(top_srcdir)/auxdir/ax_pthread.m4 \ $(top_srcdir)/auxdir/libtool.m4 \ $(top_srcdir)/auxdir/ltoptions.m4 \ $(top_srcdir)/auxdir/ltsugar.m4 \ $(top_srcdir)/auxdir/ltversion.m4 \ $(top_srcdir)/auxdir/lt~obsolete.m4 \ $(top_srcdir)/auxdir/slurm.m4 \ $(top_srcdir)/auxdir/x_ac__system_configuration.m4 \ $(top_srcdir)/auxdir/x_ac_affinity.m4 \ $(top_srcdir)/auxdir/x_ac_aix.m4 \ $(top_srcdir)/auxdir/x_ac_blcr.m4 \ $(top_srcdir)/auxdir/x_ac_bluegene.m4 \ $(top_srcdir)/auxdir/x_ac_cflags.m4 \ $(top_srcdir)/auxdir/x_ac_cray.m4 \ $(top_srcdir)/auxdir/x_ac_curl.m4 \ $(top_srcdir)/auxdir/x_ac_databases.m4 \ $(top_srcdir)/auxdir/x_ac_debug.m4 \ $(top_srcdir)/auxdir/x_ac_dlfcn.m4 \ $(top_srcdir)/auxdir/x_ac_env.m4 \ $(top_srcdir)/auxdir/x_ac_freeipmi.m4 \ $(top_srcdir)/auxdir/x_ac_gpl_licensed.m4 \ $(top_srcdir)/auxdir/x_ac_hwloc.m4 \ $(top_srcdir)/auxdir/x_ac_iso.m4 \ $(top_srcdir)/auxdir/x_ac_json.m4 \ $(top_srcdir)/auxdir/x_ac_lua.m4 \ $(top_srcdir)/auxdir/x_ac_man2html.m4 \ $(top_srcdir)/auxdir/x_ac_munge.m4 \ $(top_srcdir)/auxdir/x_ac_ncurses.m4 \ $(top_srcdir)/auxdir/x_ac_netloc.m4 \ $(top_srcdir)/auxdir/x_ac_nrt.m4 \ $(top_srcdir)/auxdir/x_ac_ofed.m4 \ $(top_srcdir)/auxdir/x_ac_pam.m4 \ $(top_srcdir)/auxdir/x_ac_printf_null.m4 \ $(top_srcdir)/auxdir/x_ac_ptrace.m4 \ $(top_srcdir)/auxdir/x_ac_readline.m4 \ $(top_srcdir)/auxdir/x_ac_rrdtool.m4 \ $(top_srcdir)/auxdir/x_ac_setpgrp.m4 \ $(top_srcdir)/auxdir/x_ac_setproctitle.m4 \ $(top_srcdir)/auxdir/x_ac_sgi_job.m4 \ 
$(top_srcdir)/auxdir/x_ac_slurm_ssl.m4 \ $(top_srcdir)/auxdir/x_ac_sun_const.m4 \ $(top_srcdir)/configure.ac am__configure_deps = $(am__aclocal_m4_deps) $(CONFIGURE_DEPENDENCIES) \ $(ACLOCAL_M4) mkinstalldirs = $(install_sh) -d CONFIG_HEADER = $(top_builddir)/config.h $(top_builddir)/slurm/slurm.h CONFIG_CLEAN_FILES = CONFIG_CLEAN_VPATH_FILES = AM_V_P = $(am__v_P_@AM_V@) am__v_P_ = $(am__v_P_@AM_DEFAULT_V@) am__v_P_0 = false am__v_P_1 = : AM_V_GEN = $(am__v_GEN_@AM_V@) am__v_GEN_ = $(am__v_GEN_@AM_DEFAULT_V@) am__v_GEN_0 = @echo " GEN " $@; am__v_GEN_1 = AM_V_at = $(am__v_at_@AM_V@) am__v_at_ = $(am__v_at_@AM_DEFAULT_V@) am__v_at_0 = @ am__v_at_1 = SOURCES = DIST_SOURCES = am__can_run_installinfo = \ case $$AM_UPDATE_INFO_DIR in \ n|no|NO) false;; \ *) (install-info --version) >/dev/null 2>&1;; \ esac am__tagged_files = $(HEADERS) $(SOURCES) $(TAGS_FILES) $(LISP) DISTFILES = $(DIST_COMMON) $(DIST_SOURCES) $(TEXINFOS) $(EXTRA_DIST) ACLOCAL = @ACLOCAL@ AMTAR = @AMTAR@ AM_DEFAULT_VERBOSITY = @AM_DEFAULT_VERBOSITY@ AR = @AR@ AUTHD_CFLAGS = @AUTHD_CFLAGS@ AUTHD_LIBS = @AUTHD_LIBS@ AUTOCONF = @AUTOCONF@ AUTOHEADER = @AUTOHEADER@ AUTOMAKE = @AUTOMAKE@ AWK = @AWK@ BGL_LOADED = @BGL_LOADED@ BGQ_LOADED = @BGQ_LOADED@ BG_INCLUDES = @BG_INCLUDES@ BG_LDFLAGS = @BG_LDFLAGS@ BG_L_P_LOADED = @BG_L_P_LOADED@ BLCR_CPPFLAGS = @BLCR_CPPFLAGS@ BLCR_HOME = @BLCR_HOME@ BLCR_LDFLAGS = @BLCR_LDFLAGS@ BLCR_LIBS = @BLCR_LIBS@ BLUEGENE_LOADED = @BLUEGENE_LOADED@ CC = @CC@ CCDEPMODE = @CCDEPMODE@ CFLAGS = @CFLAGS@ CHECK_CFLAGS = @CHECK_CFLAGS@ CHECK_LIBS = @CHECK_LIBS@ CMD_LDFLAGS = @CMD_LDFLAGS@ CPP = @CPP@ CPPFLAGS = @CPPFLAGS@ CRAY_JOB_CPPFLAGS = @CRAY_JOB_CPPFLAGS@ CRAY_JOB_LDFLAGS = @CRAY_JOB_LDFLAGS@ CRAY_SELECT_CPPFLAGS = @CRAY_SELECT_CPPFLAGS@ CRAY_SELECT_LDFLAGS = @CRAY_SELECT_LDFLAGS@ CRAY_SWITCH_CPPFLAGS = @CRAY_SWITCH_CPPFLAGS@ CRAY_SWITCH_LDFLAGS = @CRAY_SWITCH_LDFLAGS@ CRAY_TASK_CPPFLAGS = @CRAY_TASK_CPPFLAGS@ CRAY_TASK_LDFLAGS = @CRAY_TASK_LDFLAGS@ CXX = @CXX@ CXXCPP = @CXXCPP@ 
CXXDEPMODE = @CXXDEPMODE@ CXXFLAGS = @CXXFLAGS@ CYGPATH_W = @CYGPATH_W@ DATAWARP_CPPFLAGS = @DATAWARP_CPPFLAGS@ DATAWARP_LDFLAGS = @DATAWARP_LDFLAGS@ DEFS = @DEFS@ DEPDIR = @DEPDIR@ DLLTOOL = @DLLTOOL@ DL_LIBS = @DL_LIBS@ DSYMUTIL = @DSYMUTIL@ DUMPBIN = @DUMPBIN@ ECHO_C = @ECHO_C@ ECHO_N = @ECHO_N@ ECHO_T = @ECHO_T@ EGREP = @EGREP@ EXEEXT = @EXEEXT@ FGREP = @FGREP@ FREEIPMI_CPPFLAGS = @FREEIPMI_CPPFLAGS@ FREEIPMI_LDFLAGS = @FREEIPMI_LDFLAGS@ FREEIPMI_LIBS = @FREEIPMI_LIBS@ GLIB_CFLAGS = @GLIB_CFLAGS@ GLIB_COMPILE_RESOURCES = @GLIB_COMPILE_RESOURCES@ GLIB_GENMARSHAL = @GLIB_GENMARSHAL@ GLIB_LIBS = @GLIB_LIBS@ GLIB_MKENUMS = @GLIB_MKENUMS@ GOBJECT_QUERY = @GOBJECT_QUERY@ GREP = @GREP@ GTK_CFLAGS = @GTK_CFLAGS@ GTK_LIBS = @GTK_LIBS@ H5CC = @H5CC@ H5FC = @H5FC@ HAVEMYSQLCONFIG = @HAVEMYSQLCONFIG@ HAVE_AIX = @HAVE_AIX@ HAVE_MAN2HTML = @HAVE_MAN2HTML@ HAVE_NRT = @HAVE_NRT@ HAVE_OPENSSL = @HAVE_OPENSSL@ HAVE_SOME_CURSES = @HAVE_SOME_CURSES@ HDF5_CC = @HDF5_CC@ HDF5_CFLAGS = @HDF5_CFLAGS@ HDF5_CPPFLAGS = @HDF5_CPPFLAGS@ HDF5_FC = @HDF5_FC@ HDF5_FFLAGS = @HDF5_FFLAGS@ HDF5_FLIBS = @HDF5_FLIBS@ HDF5_LDFLAGS = @HDF5_LDFLAGS@ HDF5_LIBS = @HDF5_LIBS@ HDF5_VERSION = @HDF5_VERSION@ HWLOC_CPPFLAGS = @HWLOC_CPPFLAGS@ HWLOC_LDFLAGS = @HWLOC_LDFLAGS@ HWLOC_LIBS = @HWLOC_LIBS@ INSTALL = @INSTALL@ INSTALL_DATA = @INSTALL_DATA@ INSTALL_PROGRAM = @INSTALL_PROGRAM@ INSTALL_SCRIPT = @INSTALL_SCRIPT@ INSTALL_STRIP_PROGRAM = @INSTALL_STRIP_PROGRAM@ JSON_CPPFLAGS = @JSON_CPPFLAGS@ JSON_LDFLAGS = @JSON_LDFLAGS@ LD = @LD@ LDFLAGS = @LDFLAGS@ LIBCURL = @LIBCURL@ LIBCURL_CPPFLAGS = @LIBCURL_CPPFLAGS@ LIBOBJS = @LIBOBJS@ LIBS = @LIBS@ LIBTOOL = @LIBTOOL@ LIB_LDFLAGS = @LIB_LDFLAGS@ LIPO = @LIPO@ LN_S = @LN_S@ LTLIBOBJS = @LTLIBOBJS@ MAINT = @MAINT@ MAKEINFO = @MAKEINFO@ MANIFEST_TOOL = @MANIFEST_TOOL@ MKDIR_P = @MKDIR_P@ MUNGE_CPPFLAGS = @MUNGE_CPPFLAGS@ MUNGE_DIR = @MUNGE_DIR@ MUNGE_LDFLAGS = @MUNGE_LDFLAGS@ MUNGE_LIBS = @MUNGE_LIBS@ MYSQL_CFLAGS = @MYSQL_CFLAGS@ MYSQL_LIBS = @MYSQL_LIBS@ NCURSES 
= @NCURSES@ NETLOC_CPPFLAGS = @NETLOC_CPPFLAGS@ NETLOC_LDFLAGS = @NETLOC_LDFLAGS@ NETLOC_LIBS = @NETLOC_LIBS@ NM = @NM@ NMEDIT = @NMEDIT@ NRT_CPPFLAGS = @NRT_CPPFLAGS@ NUMA_LIBS = @NUMA_LIBS@ OBJDUMP = @OBJDUMP@ OBJEXT = @OBJEXT@ OFED_CPPFLAGS = @OFED_CPPFLAGS@ OFED_LDFLAGS = @OFED_LDFLAGS@ OFED_LIBS = @OFED_LIBS@ OTOOL = @OTOOL@ OTOOL64 = @OTOOL64@ PACKAGE = @PACKAGE@ PACKAGE_BUGREPORT = @PACKAGE_BUGREPORT@ PACKAGE_NAME = @PACKAGE_NAME@ PACKAGE_STRING = @PACKAGE_STRING@ PACKAGE_TARNAME = @PACKAGE_TARNAME@ PACKAGE_URL = @PACKAGE_URL@ PACKAGE_VERSION = @PACKAGE_VERSION@ PAM_DIR = @PAM_DIR@ PAM_LIBS = @PAM_LIBS@ PATH_SEPARATOR = @PATH_SEPARATOR@ PKG_CONFIG = @PKG_CONFIG@ PKG_CONFIG_LIBDIR = @PKG_CONFIG_LIBDIR@ PKG_CONFIG_PATH = @PKG_CONFIG_PATH@ PROCTRACKDIR = @PROCTRACKDIR@ PROJECT = @PROJECT@ PTHREAD_CC = @PTHREAD_CC@ PTHREAD_CFLAGS = @PTHREAD_CFLAGS@ PTHREAD_LIBS = @PTHREAD_LIBS@ RANLIB = @RANLIB@ READLINE_LIBS = @READLINE_LIBS@ REAL_BGQ_LOADED = @REAL_BGQ_LOADED@ REAL_BG_L_P_LOADED = @REAL_BG_L_P_LOADED@ RELEASE = @RELEASE@ RRDTOOL_CPPFLAGS = @RRDTOOL_CPPFLAGS@ RRDTOOL_LDFLAGS = @RRDTOOL_LDFLAGS@ RRDTOOL_LIBS = @RRDTOOL_LIBS@ RUNJOB_LDFLAGS = @RUNJOB_LDFLAGS@ SED = @SED@ SEMAPHORE_LIBS = @SEMAPHORE_LIBS@ SEMAPHORE_SOURCES = @SEMAPHORE_SOURCES@ SET_MAKE = @SET_MAKE@ SHELL = @SHELL@ SLEEP_CMD = @SLEEP_CMD@ SLURMCTLD_PORT = @SLURMCTLD_PORT@ SLURMCTLD_PORT_COUNT = @SLURMCTLD_PORT_COUNT@ SLURMDBD_PORT = @SLURMDBD_PORT@ SLURMD_PORT = @SLURMD_PORT@ SLURM_API_AGE = @SLURM_API_AGE@ SLURM_API_CURRENT = @SLURM_API_CURRENT@ SLURM_API_MAJOR = @SLURM_API_MAJOR@ SLURM_API_REVISION = @SLURM_API_REVISION@ SLURM_API_VERSION = @SLURM_API_VERSION@ SLURM_MAJOR = @SLURM_MAJOR@ SLURM_MICRO = @SLURM_MICRO@ SLURM_MINOR = @SLURM_MINOR@ SLURM_PREFIX = @SLURM_PREFIX@ SLURM_VERSION_NUMBER = @SLURM_VERSION_NUMBER@ SLURM_VERSION_STRING = @SLURM_VERSION_STRING@ SO_LDFLAGS = @SO_LDFLAGS@ SSL_CPPFLAGS = @SSL_CPPFLAGS@ SSL_LDFLAGS = @SSL_LDFLAGS@ SSL_LIBS = @SSL_LIBS@ STRIP = @STRIP@ SUCMD = 
@SUCMD@ UTIL_LIBS = @UTIL_LIBS@ VERSION = @VERSION@ _libcurl_config = @_libcurl_config@ abs_builddir = @abs_builddir@ abs_srcdir = @abs_srcdir@ abs_top_builddir = @abs_top_builddir@ abs_top_srcdir = @abs_top_srcdir@ ac_ct_AR = @ac_ct_AR@ ac_ct_CC = @ac_ct_CC@ ac_ct_CXX = @ac_ct_CXX@ ac_ct_DUMPBIN = @ac_ct_DUMPBIN@ ac_have_man2html = @ac_have_man2html@ am__include = @am__include@ am__leading_dot = @am__leading_dot@ am__quote = @am__quote@ am__tar = @am__tar@ am__untar = @am__untar@ ax_pthread_config = @ax_pthread_config@ bindir = @bindir@ build = @build@ build_alias = @build_alias@ build_cpu = @build_cpu@ build_os = @build_os@ build_vendor = @build_vendor@ builddir = @builddir@ datadir = @datadir@ datarootdir = @datarootdir@ docdir = @docdir@ dvidir = @dvidir@ exec_prefix = @exec_prefix@ host = @host@ host_alias = @host_alias@ host_cpu = @host_cpu@ host_os = @host_os@ host_vendor = @host_vendor@ htmldir = @htmldir@ includedir = @includedir@ infodir = @infodir@ install_sh = @install_sh@ libdir = @libdir@ libexecdir = @libexecdir@ localedir = @localedir@ localstatedir = @localstatedir@ lua_CFLAGS = @lua_CFLAGS@ lua_LIBS = @lua_LIBS@ mandir = @mandir@ mkdir_p = @mkdir_p@ oldincludedir = @oldincludedir@ pdfdir = @pdfdir@ prefix = @prefix@ program_transform_name = @program_transform_name@ psdir = @psdir@ sbindir = @sbindir@ sharedstatedir = @sharedstatedir@ srcdir = @srcdir@ sysconfdir = @sysconfdir@ target = @target@ target_alias = @target_alias@ target_cpu = @target_cpu@ target_os = @target_os@ target_vendor = @target_vendor@ top_build_prefix = @top_build_prefix@ top_builddir = @top_builddir@ top_srcdir = @top_srcdir@ AUTOMAKE_OPTIONS = foreign # copied from pidgin # perl_dir = perl perlpath = /usr/bin/perl perl_sources = \ $(perl_dir)/Makefile.PL.in \ $(perl_dir)/ppport.h \ $(perl_dir)/Slurm.xs \ $(perl_dir)/lib/Slurm.pm \ $(perl_dir)/lib/Slurm/Bitstr.pm \ $(perl_dir)/lib/Slurm/Constant.pm \ $(perl_dir)/lib/Slurm/Hostlist.pm \ $(perl_dir)/lib/Slurm/Stepctx.pm \ 
$(perl_dir)/typemap \ $(perl_dir)/classmap \ $(perl_dir)/bitstr.h \ $(perl_dir)/slurm-perl.h \ $(perl_dir)/alloc.c \ $(perl_dir)/block.c \ $(perl_dir)/conf.c \ $(perl_dir)/job.c \ $(perl_dir)/node.c \ $(perl_dir)/partition.c \ $(perl_dir)/reservation.c \ $(perl_dir)/step.c \ $(perl_dir)/step_ctx.c \ $(perl_dir)/topo.c \ $(perl_dir)/trigger.c test_sources = \ $(perl_dir)/t/00-use.t \ $(perl_dir)/t/01-error.t \ $(perl_dir)/t/02-string.t \ $(perl_dir)/t/03-block.t \ $(perl_dir)/t/04-alloc.c \ $(perl_dir)/t/05-signal.t \ $(perl_dir)/t/06-complete.t \ $(perl_dir)/t/07-spawn.t \ $(perl_dir)/t/08-conf.t \ $(perl_dir)/t/09-resource.t \ $(perl_dir)/t/10-job.t \ $(perl_dir)/t/11-step.t \ $(perl_dir)/t/12-node.t \ $(perl_dir)/t/13-topo.t \ $(perl_dir)/t/14-select.t \ $(perl_dir)/t/15-partition.t \ $(perl_dir)/t/16-reservation.t \ $(perl_dir)/t/17-ping.t \ $(perl_dir)/t/18-suspend.t \ $(perl_dir)/t/19-checkpoint.t \ $(perl_dir)/t/20-trigger.t \ $(perl_dir)/t/21-hostlist.t \ $(perl_dir)/t/22-list.t \ $(perl_dir)/t/23-bitstr.t EXTRA_DIST = $(perl_sources) $(test_sources) AM_CPPFLAGS = \ -DVERSION=\"$(VERSION)\" \ -I$(top_srcdir) \ -I$(top_builddir) \ $(DEBUG_CFLAGS) \ $(PERL_CFLAGS) all: all-am .SUFFIXES: $(srcdir)/Makefile.in: @MAINTAINER_MODE_TRUE@ $(srcdir)/Makefile.am $(am__configure_deps) @for dep in $?; do \ case '$(am__configure_deps)' in \ *$$dep*) \ ( cd $(top_builddir) && $(MAKE) $(AM_MAKEFLAGS) am--refresh ) \ && { if test -f $@; then exit 0; else break; fi; }; \ exit 1;; \ esac; \ done; \ echo ' cd $(top_srcdir) && $(AUTOMAKE) --foreign contribs/perlapi/libslurm/Makefile'; \ $(am__cd) $(top_srcdir) && \ $(AUTOMAKE) --foreign contribs/perlapi/libslurm/Makefile .PRECIOUS: Makefile Makefile: $(srcdir)/Makefile.in $(top_builddir)/config.status @case '$?' 
in \ *config.status*) \ cd $(top_builddir) && $(MAKE) $(AM_MAKEFLAGS) am--refresh;; \ *) \ echo ' cd $(top_builddir) && $(SHELL) ./config.status $(subdir)/$@ $(am__depfiles_maybe)'; \ cd $(top_builddir) && $(SHELL) ./config.status $(subdir)/$@ $(am__depfiles_maybe);; \ esac; $(top_builddir)/config.status: $(top_srcdir)/configure $(CONFIG_STATUS_DEPENDENCIES) cd $(top_builddir) && $(MAKE) $(AM_MAKEFLAGS) am--refresh $(top_srcdir)/configure: @MAINTAINER_MODE_TRUE@ $(am__configure_deps) cd $(top_builddir) && $(MAKE) $(AM_MAKEFLAGS) am--refresh $(ACLOCAL_M4): @MAINTAINER_MODE_TRUE@ $(am__aclocal_m4_deps) cd $(top_builddir) && $(MAKE) $(AM_MAKEFLAGS) am--refresh $(am__aclocal_m4_deps): mostlyclean-libtool: -rm -f *.lo clean-libtool: -rm -rf .libs _libs tags TAGS: ctags CTAGS: cscope cscopelist: distdir: $(DISTFILES) @srcdirstrip=`echo "$(srcdir)" | sed 's/[].[^$$\\*]/\\\\&/g'`; \ topsrcdirstrip=`echo "$(top_srcdir)" | sed 's/[].[^$$\\*]/\\\\&/g'`; \ list='$(DISTFILES)'; \ dist_files=`for file in $$list; do echo $$file; done | \ sed -e "s|^$$srcdirstrip/||;t" \ -e "s|^$$topsrcdirstrip/|$(top_builddir)/|;t"`; \ case $$dist_files in \ */*) $(MKDIR_P) `echo "$$dist_files" | \ sed '/\//!d;s|^|$(distdir)/|;s,/[^/]*$$,,' | \ sort -u` ;; \ esac; \ for file in $$dist_files; do \ if test -f $$file || test -d $$file; then d=.; else d=$(srcdir); fi; \ if test -d $$d/$$file; then \ dir=`echo "/$$file" | sed -e 's,/[^/]*$$,,'`; \ if test -d "$(distdir)/$$file"; then \ find "$(distdir)/$$file" -type d ! -perm -700 -exec chmod u+rwx {} \;; \ fi; \ if test -d $(srcdir)/$$file && test $$d != $(srcdir); then \ cp -fpR $(srcdir)/$$file "$(distdir)$$dir" || exit 1; \ find "$(distdir)/$$file" -type d ! 
-perm -700 -exec chmod u+rwx {} \;; \ fi; \ cp -fpR $$d/$$file "$(distdir)$$dir" || exit 1; \ else \ test -f "$(distdir)/$$file" \ || cp -p $$d/$$file "$(distdir)/$$file" \ || exit 1; \ fi; \ done check-am: all-am check: check-am all-am: Makefile all-local installdirs: install: install-am install-exec: install-exec-am install-data: install-data-am uninstall: uninstall-am install-am: all-am @$(MAKE) $(AM_MAKEFLAGS) install-exec-am install-data-am installcheck: installcheck-am install-strip: if test -z '$(STRIP)'; then \ $(MAKE) $(AM_MAKEFLAGS) INSTALL_PROGRAM="$(INSTALL_STRIP_PROGRAM)" \ install_sh_PROGRAM="$(INSTALL_STRIP_PROGRAM)" INSTALL_STRIP_FLAG=-s \ install; \ else \ $(MAKE) $(AM_MAKEFLAGS) INSTALL_PROGRAM="$(INSTALL_STRIP_PROGRAM)" \ install_sh_PROGRAM="$(INSTALL_STRIP_PROGRAM)" INSTALL_STRIP_FLAG=-s \ "INSTALL_PROGRAM_ENV=STRIPPROG='$(STRIP)'" install; \ fi mostlyclean-generic: maintainer-clean-generic: @echo "This command is intended for maintainers to use" @echo "it deletes files that may require special tools to rebuild." 
clean: clean-am clean-am: clean-generic clean-libtool mostlyclean-am distclean: distclean-am -rm -f Makefile distclean-am: clean-am distclean-generic dvi: dvi-am dvi-am: html: html-am html-am: info: info-am info-am: install-data-am: install-dvi: install-dvi-am install-dvi-am: install-exec-am: install-exec-local install-html: install-html-am install-html-am: install-info: install-info-am install-info-am: install-man: install-pdf: install-pdf-am install-pdf-am: install-ps: install-ps-am install-ps-am: installcheck-am: maintainer-clean: maintainer-clean-am -rm -f Makefile maintainer-clean-am: distclean-am maintainer-clean-generic mostlyclean: mostlyclean-am mostlyclean-am: mostlyclean-generic mostlyclean-libtool pdf: pdf-am pdf-am: ps: ps-am ps-am: uninstall-am: uninstall-local .MAKE: install-am install-strip .PHONY: all all-am all-local check check-am clean clean-generic \ clean-libtool cscopelist-am ctags-am distclean \ distclean-generic distclean-libtool distdir dvi dvi-am html \ html-am info info-am install install-am install-data \ install-data-am install-dvi install-dvi-am install-exec \ install-exec-am install-exec-local install-html \ install-html-am install-info install-info-am install-man \ install-pdf install-pdf-am install-ps install-ps-am \ install-strip installcheck installcheck-am installdirs \ maintainer-clean maintainer-clean-generic mostlyclean \ mostlyclean-generic mostlyclean-libtool pdf pdf-am ps ps-am \ tags-am uninstall uninstall-am uninstall-local $(perl_dir)/Makefile: $(perl_dir)/Makefile.PL @if test "x${top_srcdir}" != "x${top_builddir}"; then \ for f in ${perl_sources}; do \ $(mkdir_p) `dirname $$f`; \ ${LN_S} -f ${abs_srcdir}/$$f $$f; \ done; \ for f in ${test_sources}; do \ $(mkdir_p) `dirname $$f`; \ ${LN_S} -f ${abs_srcdir}/$$f $$f; \ done; \ fi @cd $(perl_dir) && $(perlpath) Makefile.PL $(PERL_MM_PARAMS) prefix=${prefix} INSTALL_BASE= PERL_MM_OPT= # # Note on linking logic below # # Install at PREFIX and ignore both INSTALL_BASE and 
PERL_MM_OPT. Having both # more than one installation location specification results in a build error. # AIX needs to use LD to link. It can not use gcc. # Suse Linux compiles with gcc, but picks some other compiler to use for linking. # Since some CFLAGS may be incompatible with this other compiler, the build # may fail, as seen on BlueGene platforms. # Other Linux implementations sems to work fine with the LD specified as below # all-local: $(perl_dir)/Makefile #libslurm @HAVE_AIX_TRUE@ @cd $(perl_dir) && \ @HAVE_AIX_TRUE@ if [ ! -f Makefile ]; then \ @HAVE_AIX_TRUE@ $(perlpath) Makefile.PL $(PERL_MM_PARAMS) prefix=${prefix} INSTALL_BASE= PERL_MM_OPT= ; \ @HAVE_AIX_TRUE@ fi && \ @HAVE_AIX_TRUE@ ($(MAKE) CC="$(CC)" CCFLAGS="$(PERL_CFLAGS) -g -static $(CFLAGS) $(CPPFLAGS)" $(PERL_EXTRA_OPTS) || \ @HAVE_AIX_TRUE@ $(MAKE) CC="$(CC)" CCFLAGS="$(PERL_CFLAGS) -g -static $(CFLAGS) $(CPPFLAGS)" $(PERL_EXTRA_OPTS)) && \ @HAVE_AIX_TRUE@ cd ..; @HAVE_AIX_FALSE@ @cd $(perl_dir) && \ @HAVE_AIX_FALSE@ if [ ! -f Makefile ]; then \ @HAVE_AIX_FALSE@ $(perlpath) Makefile.PL $(PERL_MM_PARAMS) prefix=${prefix} INSTALL_BASE= PERL_MM_OPT= ; \ @HAVE_AIX_FALSE@ fi && \ @HAVE_AIX_FALSE@ ($(MAKE) CC="$(CC)" LD="$(CC) $(CFLAGS) $(LDFLAGS)" CCFLAGS="$(PERL_CFLAGS) -g -static $(CFLAGS) $(CPPFLAGS)" $(PERL_EXTRA_OPTS) || \ @HAVE_AIX_FALSE@ $(MAKE) CC="$(CC)" LD="$(CC) $(CFLAGS) $(LDFLAGS)" CCFLAGS="$(PERL_CFLAGS) -g -static $(CFLAGS) $(CPPFLAGS)" $(PERL_EXTRA_OPTS)) && \ @HAVE_AIX_FALSE@ cd ..; install-exec-local: @cd $(perl_dir) && \ $(MAKE) DESTDIR=$(DESTDIR) install && \ cd ..; # Evil Hack (TM) # ... which doesn't work with DESTDIR installs. FIXME? 
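The uninstall-local "Evil Hack" in this Makefile rewrites the `unlink` lines that ExtUtils::MakeMaker's `make uninstall` prints into `rm -f` commands under the configured prefix. A rough sketch of the same grep/sed pipeline on a fabricated sample line (the Slurm.pm path and the `/opt/slurm` prefix value are made up for illustration):

```shell
# One fake line of `make uninstall` output from ExtUtils::MakeMaker,
# pushed through the same rewrite uninstall-local uses, with ${prefix}
# standing in as /opt/slurm.
echo 'unlink /usr/lib/perl5/site_perl/Slurm.pm' \
  | grep unlink \
  | sed -e 's#/usr#/opt/slurm#' -e 's#unlink#rm -f#'
# -> rm -f /opt/slurm/lib/perl5/site_perl/Slurm.pm
```

The recipe then evaluates the rewritten lines with backticks, which is why the comment above flags it as a hack: any path not rooted at /usr, and any DESTDIR-staged install, escapes the substitution.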
uninstall-local: @cd $(perl_dir) && \ `$(MAKE) uninstall | grep unlink | sed -e 's#/usr#${prefix}#' -e 's#unlink#rm -f#'` && \ cd ..; clean-generic: @cd $(perl_dir); \ $(MAKE) clean; \ if test "x${top_srcdir}" != "x${top_builddir}"; then \ rm -fr lib t *c *h *xs typemap classmap; \ fi; \ cd ..; @if test "x${top_srcdir}" != "x${top_builddir}"; then \ for f in ${perl_sources}; do \ $(mkdir_p) `dirname $$f`; \ ${LN_S} -f ${abs_srcdir}/$$f $$f; \ done; \ for f in ${test_sources}; do \ $(mkdir_p) `dirname $$f`; \ ${LN_S} -f ${abs_srcdir}/$$f $$f; \ done; \ fi distclean-generic: @cd $(perl_dir); \ $(MAKE) realclean; \ rm -f Makefile.PL; \ rm -f Makefile.old; \ rm -f Makefile; \ cd ..; @rm -f Makefile @if test "x${top_srcdir}" != "x${top_builddir}"; then \ for f in ${perl_sources}; do \ $(mkdir_p) `dirname $$f`; \ ${LN_S} -f ${abs_srcdir}/$$f $$f; \ done; \ for f in ${test_sources}; do \ $(mkdir_p) `dirname $$f`; \ ${LN_S} -f ${abs_srcdir}/$$f $$f; \ done; \ fi # Tell versions [3.59,3.63) of GNU make to not export all variables. # Otherwise a system limit (for SysV at least) may be exceeded. .NOEXPORT: slurm-slurm-15-08-7-1/contribs/perlapi/libslurm/perl/000077500000000000000000000000001265000126300223775ustar00rootroot00000000000000slurm-slurm-15-08-7-1/contribs/perlapi/libslurm/perl/Makefile.PL.in000066400000000000000000000202401265000126300247540ustar00rootroot00000000000000use 5.008; use ExtUtils::MakeMaker; if (!(-e "@prefix@/lib/libslurm.so") && !(-e "@top_builddir@/src/api/.libs/libslurm.so")) { die("I can't seem to find the library files I need in your SLURM installation. 
Please check that your SLURM installation has at least one of the following link(s):
	@top_builddir@/src/api/.libs/libslurm.so
	@prefix@/lib/libslurm.so\n");
}

# Most all the extra code is to deal with MakeMaker < 6.11 not working
# correctly to build rpms
my(
	$mm_version,
	$mm_knows_destdir,
	$mm_has_destdir,
	$mm_has_good_destdir,
	$mm_needs_destdir,
);

# Gather some information about what EU::MM offers and/or needs

# Store the version for later use
$mm_version = $ExtUtils::MakeMaker::VERSION;

# MakeMaker prior to 6.11 doesn't support DESTDIR, which is needed for
# packaging with builddir!=destdir.  See bug 2388.
$mm_knows_destdir = $ExtUtils::MakeMaker::Recognized_Att_Keys{DESTDIR};
$mm_has_good_destdir = $mm_version >= 6.11;

# Add DESTDIR hack only if it's requested (and necessary)
$mm_needs_destdir = !$mm_has_good_destdir;
$mm_has_destdir = $mm_knows_destdir || $mm_needs_destdir;
$ExtUtils::MakeMaker::Recognized_Att_Keys{"DESTDIR"} = 1 if $mm_needs_destdir;

if ($mm_needs_destdir) {
	my $error = <<DESTDIR_HACK;
    ... to get an up-to-date version.
    This should only be necessary if you are creating binary packages.
    ***********************************************************************
DESTDIR_HACK
	$error =~ s/^ {4}//gm;
	warn $error;
} elsif (!$mm_has_good_destdir) {
	my $error = <<DESTDIR_BUG;
    ... to get an up-to-date version.
    This should only be necessary if you are creating binary packages.
    ***********************************************************************
DESTDIR_BUG
	$error =~ s/^ {4}//gm;
	warn $error;
}

# AIX has problems with not always having the correct
# flags so we have to add some :)
my $os = lc(`uname`);
my $other_ld_flags = "-Wl,-rpath,@top_builddir@/src/api/.libs -Wl,-rpath,@prefix@/lib";
$other_ld_flags = " -brtl -G -bnoentry -bgcbypass:1000 -bexpfull"
	if $os =~ "aix";

WriteMakefile(
	NAME          => 'Slurm',
	VERSION_FROM  => 'lib/Slurm.pm', # finds $VERSION
	PREREQ_PM     => {}, # e.g., Module::Name => 1.1
	($] >= 5.005 ?
	## Add these new keywords supported since 5.005
	(ABSTRACT_FROM => 'lib/Slurm.pm', # retrieve abstract from module
	 AUTHOR        => 'Hongjia Cao ') : ()),
	LIBS    => ["-L@top_builddir@/src/api/.libs -L@prefix@/lib -lslurm"], # e.g., '-lm'
	DEFINE  => '', # e.g., '-DHAVE_SOMETHING'
	INC     => "-I. -I@top_srcdir@ -I@top_srcdir@/contribs/perlapi/common -I@top_builddir@",
	# Un-comment this if you add C files to link with later:
	OBJECT  => '$(O_FILES)', # link all the C files too
	CCFLAGS => '-g',
	dynamic_lib => {'OTHERLDFLAGS' => $other_ld_flags},
);

# Override the install routine to add our additional install dirs and
# hack DESTDIR support into old EU::MMs.
sub MY::install {
	package MY;
	my $self = shift;
	my @code = split(/\n/, $self->SUPER::install(@_));

	init_MY_globals($self);

	foreach (@code) {
		# Write the correct path to perllocal.pod
		next if /installed into/;
		# Replace all other $(INSTALL*) vars (except $(INSTALLDIRS) of course)
		# with their $(DESTINSTALL*) counterparts
		s/\Q$(\E(INSTALL(?!DIRS)${MACRO_RE})\Q)\E/\$(DEST$1)/g;
	}

	clean_MY_globals($self);
	return join("\n", @code);
}

# Now override the constants routine to add our own macros.
sub MY::constants {
	package MY;
	my $self = shift;
	my @code = split(/\n/, $self->SUPER::constants(@_));

	init_MY_globals($self);

	foreach my $line (@code) {
		# Skip comments
		next if $line =~ /^\s*\#/;
		# Skip everything which isn't a var assignment.
		next unless line_has_macro_def($line);
		# Store the assignment string if necessary.
		set_EQ_from_line($line);

		# Add some "dummy" (PERL|SITE|VENDOR)PREFIX macros for later use
		# (only if necessary for old EU::MMs of course)
		if (line_has_macro_def($line, 'PREFIX')) {
			foreach my $r (@REPOSITORIES) {
				my $rprefix = "${r}PREFIX";
				if (!defined(get_macro($rprefix))) {
					set_macro($rprefix, macro_ref('PREFIX'));
					$line .= "\n" . macro_def($rprefix);
				}
			}
		}

		# fix problem with /usr(/local) being used as a prefix
		# instead of the real thing.
        if ($line =~ 'INSTALL') {
            $line =~ s/= \/usr\/local/= \$(PREFIX)/;
            $line =~ s/= \/usr/= \$(PREFIX)/;
        }

        # Add DESTDIR support if necessary
        if (line_has_macro_def($line, 'INSTALLDIRS')) {
            if (!get_macro('DESTDIR')) {
                $line .= "\n" . macro_def('DESTDIR');
            }
        } elsif (line_has_macro_def($line, qr/INSTALL${MACRO_RE}/)) {
            my $macro = get_macro_name_from_line($line);
            if (!get_macro('DEST' . $macro,
                           macro_ref('DESTDIR') . macro_ref($macro))) {
                $line .= "\n" . macro_def('DEST' . $macro,
                                          macro_ref('DESTDIR') . macro_ref($macro));
            }
        }
    }

    push(@code, qq{});

    clean_MY_globals($self);
    return join("\n", @code);
}

package MY;

use vars qw(
    @REPOSITORIES
    $MY_GLOBALS_ARE_SANE
    $MACRO_RE
    $EQ_RE
    $EQ
    $SELF
);

sub line_has_macro_def {
    my($line, $name) = (@_, undef);
    $name = $MACRO_RE unless defined $name;
    return $line =~ /^($name)${EQ_RE}/;
}

sub macro_def {
    my($name, $val) = (@_, undef);
    my $error_message = "Problems building; please report this error.";
    die $error_message unless defined $name;
    die $error_message unless defined $EQ;
    $val = $SELF->{$name} unless defined $val;
    return $name . $EQ . $val;
}

sub set_EQ_from_line {
    my($line) = (@_);
    return if defined($EQ);
    $line =~ /\S(${EQ_RE})/;
    $EQ = $1;
}

# Reads the name of the macro defined on the given line.
#
# The first parameter must be the line to be examined. If the line doesn't
# contain a macro definition, weird things may happen. So check with
# line_has_macro_def() before!
sub get_macro_name_from_line {
    my($line) = (@_);
    $line =~ /^(${MACRO_RE})${EQ_RE}/;
    return $1;
}

sub macro_ref {
    my($name) = (@_);
    return sprintf('$(%s)', $name);
}

# Reads the value of the given macro from the current instance of EU::MM.
#
# The first parameter must be the name of a macro.
sub get_macro {
    my($name) = (@_);
    return $SELF->{$name};
}

# Sets the value of the macro with the given name to the given value in the
# current instance of EU::MM. Just sets, doesn't write to the Makefile!
#
# The first parameter must be the macro's name, the second the value.
sub set_macro {
    my($name, $val) = (@_);
    $SELF->{$name} = $val;
}

# For some reason initializing the vars on the global scope doesn't work;
# guess it's some weird Perl behaviour in combination with bless().
sub init_MY_globals {
    my $self = shift;

    # Keep a reference to ourselves so we don't have to feed it to the helper
    # scripts.
    $SELF = $self;

    return if $MY_GLOBALS_ARE_SANE;
    $MY_GLOBALS_ARE_SANE = 1;

    @REPOSITORIES = qw(
        PERL
        SITE
        VENDOR
    );

    # Macro names follow this RE -- at least strictly enough for our purposes.
    $MACRO_RE = qr/[A-Z0-9_]+/;
    # Normally macros are assigned via FOO = bar. But the part with the equal
    # sign might differ from platform to platform. So we use this RE:
    $EQ_RE = qr/\s*:?=\s*/;
    # To assign our own macros we'll follow the first assignment string we find;
    # normally " = ".
    $EQ = undef;
}

# Unset $SELF to avoid any leaking memory.
sub clean_MY_globals {
    my $self = shift;
    $SELF = undef;
}
slurm-slurm-15-08-7-1/contribs/perlapi/libslurm/perl/Slurm.xs000066400000000000000000002201701265000126300240570ustar00rootroot00000000000000
#include "EXTERN.h"
#include "perl.h"
#include "XSUB.h"
#define NEED_newRV_noinc_GLOBAL
#include "ppport.h"

#include
#include
#include
#include
#include "slurm-perl.h"
#include "bitstr.h"

extern void slurm_conf_reinit(char *pathname);

/* Custom typemap that frees memory after copying to perl stack. */
typedef char char_xfree;
typedef char char_free;

struct slurm {
};
typedef struct slurm * slurm_t;

/*
 * default slurm object, for backward compatibility with "Slurm->method()".
*/ static struct slurm default_slurm_object; static slurm_t new_slurm(void) { return xmalloc(sizeof(struct slurm)); } static void free_slurm(slurm_t self) { xfree(self); } /********************************************************************/ MODULE = Slurm PACKAGE = Slurm PREFIX=slurm_ PROTOTYPES: ENABLE ###################################################################### # CONSTRUCTOR/DESTRUCTOR FUNCTIONS ###################################################################### # # $slurm = Slurm::new($conf_file); # slurm_t slurm_new(char *conf_file=NULL) CODE: if(conf_file) { slurm_conf_reinit(conf_file); } RETVAL = new_slurm(); if (RETVAL == NULL) { XSRETURN_UNDEF; } OUTPUT: RETVAL void slurm_DESTROY(slurm_t self) CODE: if (self != &default_slurm_object) { free_slurm(self); } ###################################################################### # ERROR INFORMATION FUNCTIONS ###################################################################### int slurm_get_errno(slurm_t self) C_ARGS: INIT: if (self); /* this is needed to avoid a warning about unused variables. But if we take slurm_t self out of the mix Slurm-> doesn't work, only Slurm:: */ char * slurm_strerror(slurm_t self, int errnum=0) CODE: if (self); /* this is needed to avoid a warning about unused variables. But if we take slurm_t self out of the mix Slurm-> doesn't work, only Slurm:: */ if (errnum == 0) errnum = slurm_get_errno(); RETVAL = slurm_strerror(errnum); OUTPUT: RETVAL ###################################################################### # ENTITY STATE/REASON/FLAG/TYPE STRING FUNCTIONS ###################################################################### # # These functions are made object method instead of class method. char * slurm_preempt_mode_string(slurm_t self, uint16_t preempt_mode); CODE: if (self); /* this is needed to avoid a warning about unused variables. 
But if we take slurm_t self out of the mix Slurm-> doesn't work, only Slurm:: */ RETVAL = slurm_preempt_mode_string(preempt_mode); OUTPUT: RETVAL uint16_t slurm_preempt_mode_num(slurm_t self, char *preempt_mode) C_ARGS: preempt_mode INIT: if (self); /* this is needed to avoid a warning about unused variables. But if we take slurm_t self out of the mix Slurm-> doesn't work, only Slurm:: */ char * slurm_job_reason_string(slurm_t self, uint32_t inx) CODE: if (self); /* this is needed to avoid a warning about unused variables. But if we take slurm_t self out of the mix Slurm-> doesn't work, only Slurm:: */ RETVAL = slurm_job_reason_string(inx); OUTPUT: RETVAL char * slurm_job_state_string(slurm_t self, uint32_t inx) CODE: if (self); /* this is needed to avoid a warning about unused variables. But if we take slurm_t self out of the mix Slurm-> doesn't work, only Slurm:: */ RETVAL = slurm_job_state_string(inx); OUTPUT: RETVAL char * slurm_job_state_string_compact(slurm_t self, uint32_t inx) CODE: if (self); /* this is needed to avoid a warning about unused variables. But if we take slurm_t self out of the mix Slurm-> doesn't work, only Slurm:: */ RETVAL = slurm_job_state_string_compact(inx); OUTPUT: RETVAL int slurm_job_state_num(slurm_t self, char *state_name) C_ARGS: state_name INIT: if (self); /* this is needed to avoid a warning about unused variables. But if we take slurm_t self out of the mix Slurm-> doesn't work, only Slurm:: */ char_xfree * slurm_reservation_flags_string(slurm_t self, uint16_t flags) CODE: if (self); /* this is needed to avoid a warning about unused variables. But if we take slurm_t self out of the mix Slurm-> doesn't work, only Slurm:: */ RETVAL = slurm_reservation_flags_string(flags); OUTPUT: RETVAL char * slurm_node_state_string(slurm_t self, uint32_t inx) CODE: if (self); /* this is needed to avoid a warning about unused variables. 
But if we take slurm_t self out of the mix Slurm-> doesn't work, only Slurm:: */ RETVAL = slurm_node_state_string(inx); OUTPUT: RETVAL char * slurm_node_state_string_compact(slurm_t self, uint32_t inx) CODE: if (self); /* this is needed to avoid a warning about unused variables. But if we take slurm_t self out of the mix Slurm-> doesn't work, only Slurm:: */ RETVAL = slurm_node_state_string_compact(inx); OUTPUT: RETVAL char * slurm_private_data_string(slurm_t self, uint16_t private_data) PREINIT: char tmp_str[128]; CODE: if (self); /* this is needed to avoid a warning about unused variables. But if we take slurm_t self out of the mix Slurm-> doesn't work, only Slurm:: */ slurm_private_data_string(private_data, tmp_str, sizeof(tmp_str)); RETVAL = tmp_str; OUTPUT: RETVAL char * slurm_accounting_enforce_string(slurm_t self, uint16_t enforce) PREINIT: char tmp_str[128]; CODE: if (self); /* this is needed to avoid a warning about unused variables. But if we take slurm_t self out of the mix Slurm-> doesn't work, only Slurm:: */ slurm_accounting_enforce_string(enforce, tmp_str, sizeof(tmp_str)); RETVAL = tmp_str; OUTPUT: RETVAL char * slurm_conn_type_string(slurm_t self, uint16_t conn_type) CODE: if (self); /* this is needed to avoid a warning about unused variables. But if we take slurm_t self out of the mix Slurm-> doesn't work, only Slurm:: */ RETVAL = slurm_conn_type_string((enum connection_type)conn_type); OUTPUT: RETVAL char * slurm_node_use_string(slurm_t self, uint16_t node_use) CODE: if (self); /* this is needed to avoid a warning about unused variables. But if we take slurm_t self out of the mix Slurm-> doesn't work, only Slurm:: */ RETVAL = slurm_node_use_string((enum node_use_type)node_use); OUTPUT: RETVAL char * slurm_bg_block_state_string(slurm_t self, uint16_t state) CODE: if (self); /* this is needed to avoid a warning about unused variables. 
But if we take slurm_t self out of the mix Slurm-> doesn't work, only Slurm:: */ RETVAL = slurm_bg_block_state_string(state); OUTPUT: RETVAL ###################################################################### # BLUEGENE BLOCK INFO FUNCTIONS ###################################################################### void slurm_print_block_info_msg(slurm_t self, FILE *out, HV *block_info_msg, int one_liner=0) PREINIT: block_info_msg_t bi_msg; INIT: if (out == NULL) { Perl_croak (aTHX_ "Invalid output stream specified: FILE not found"); } if(hv_to_block_info_msg(block_info_msg, &bi_msg) < 0) { XSRETURN_UNDEF; } if (self); /* this is needed to avoid a warning about unused variables. But if we take slurm_t self out of the mix Slurm-> doesn't work, only Slurm:: */ C_ARGS: out, &bi_msg, one_liner CLEANUP: xfree(bi_msg.block_array); void slurm_print_block_info(slurm_t self, FILE *out, HV *block_info, int one_liner=0) PREINIT: block_info_t bi; INIT: if (out == NULL) { Perl_croak (aTHX_ "Invalid output stream specified: FILE not found"); } if(hv_to_block_info(block_info, &bi) < 0) { XSRETURN_UNDEF; } if (self); /* this is needed to avoid a warning about unused variables. But if we take slurm_t self out of the mix Slurm-> doesn't work, only Slurm:: */ C_ARGS: out, &bi, one_liner char_xfree * slurm_sprint_block_info(slurm_t self, HV *block_info, int one_liner=0) PREINIT: block_info_t bi; CODE: if (self); /* this is needed to avoid a warning about unused variables. But if we take slurm_t self out of the mix Slurm-> doesn't work, only Slurm:: */ if(hv_to_block_info(block_info, &bi) < 0) { XSRETURN_UNDEF; } RETVAL = slurm_sprint_block_info(&bi, one_liner); OUTPUT: RETVAL HV * slurm_load_block_info(slurm_t self, time_t update_time=0, uint16_t show_flags=0) PREINIT: block_info_msg_t *bi_msg = NULL; int rc; CODE: if (self); /* this is needed to avoid a warning about unused variables. 
But if we take slurm_t self out of the mix Slurm-> doesn't work, only Slurm:: */ rc = slurm_load_block_info(update_time, &bi_msg, show_flags); if(rc == SLURM_SUCCESS) { RETVAL = newHV(); sv_2mortal((SV*)RETVAL); rc = block_info_msg_to_hv(bi_msg, RETVAL); if (rc < 0) { XSRETURN_UNDEF; } slurm_free_block_info_msg(bi_msg); } else { XSRETURN_UNDEF; } OUTPUT: RETVAL int slurm_update_block(slurm_t self, HV *update_req) PREINIT: update_block_msg_t block_msg; INIT: if (self); /* this is needed to avoid a warning about unused variables. But if we take slurm_t self out of the mix Slurm-> doesn't work, only Slurm:: */ if(hv_to_update_block_msg(update_req, &block_msg) < 0) { XSRETURN_UNDEF; } C_ARGS: &block_msg ###################################################################### # RESOURCE ALLOCATION FUNCTIONS ###################################################################### # # $resp = $slurm->allocate_resources($desc); HV * slurm_allocate_resources(slurm_t self, HV *job_desc) PREINIT: job_desc_msg_t jd_msg; resource_allocation_response_msg_t* resp_msg = NULL; int rc; CODE: if (self); /* this is needed to avoid a warning about unused variables. 
But if we take slurm_t self out of the mix Slurm-> doesn't work, only Slurm:: */ if (hv_to_job_desc_msg(job_desc, &jd_msg) < 0) { XSRETURN_UNDEF; } rc = slurm_allocate_resources(&jd_msg, &resp_msg); free_job_desc_msg_memory(&jd_msg); if (resp_msg == NULL) { XSRETURN_UNDEF; } if(rc != SLURM_SUCCESS) { slurm_free_resource_allocation_response_msg(resp_msg); XSRETURN_UNDEF; } RETVAL = newHV(); sv_2mortal((SV*)RETVAL); rc = resource_allocation_response_msg_to_hv(resp_msg, RETVAL); slurm_free_resource_allocation_response_msg(resp_msg); if (rc < 0) { XSRETURN_UNDEF; } OUTPUT: RETVAL HV * slurm_allocate_resources_blocking(slurm_t self, HV *user_req, time_t timeout=0, SV *pending_callback=NULL) PREINIT: job_desc_msg_t jd_msg; resource_allocation_response_msg_t *resp_msg = NULL; CODE: if (self); /* this is needed to avoid a warning about unused variables. But if we take slurm_t self out of the mix Slurm-> doesn't work, only Slurm:: */ if (hv_to_job_desc_msg(user_req, &jd_msg) < 0) { XSRETURN_UNDEF; } set_sarb_cb(pending_callback); resp_msg = slurm_allocate_resources_blocking(&jd_msg, timeout, pending_callback == NULL ? NULL : sarb_cb); free_job_desc_msg_memory(&jd_msg); if (resp_msg != NULL) { RETVAL = newHV(); sv_2mortal((SV*)RETVAL); resource_allocation_response_msg_to_hv(resp_msg, RETVAL); slurm_free_resource_allocation_response_msg(resp_msg); } else { XSRETURN_UNDEF; } OUTPUT: RETVAL HV * slurm_allocation_lookup(slurm_t self, uint32_t job_id) PREINIT: job_alloc_info_response_msg_t *resp_msg = NULL; int rc; CODE: if (self); /* this is needed to avoid a warning about unused variables. 
But if we take slurm_t self out of the mix Slurm-> doesn't work, only Slurm:: */ rc = slurm_allocation_lookup(job_id, &resp_msg); if(rc != SLURM_SUCCESS) { slurm_free_job_alloc_info_response_msg(resp_msg); XSRETURN_UNDEF; } RETVAL = newHV(); sv_2mortal((SV*)RETVAL); rc = job_alloc_info_response_msg_to_hv(resp_msg, RETVAL); slurm_free_job_alloc_info_response_msg(resp_msg); if (rc < 0) { XSRETURN_UNDEF; } OUTPUT: RETVAL HV * slurm_allocation_lookup_lite(slurm_t self, uint32_t job_id) PREINIT: resource_allocation_response_msg_t *resp_msg = NULL; int rc; CODE: if (self); /* this is needed to avoid a warning about unused variables. But if we take slurm_t self out of the mix Slurm-> doesn't work, only Slurm:: */ rc = slurm_allocation_lookup_lite(job_id, &resp_msg); if(rc != SLURM_SUCCESS) { slurm_free_resource_allocation_response_msg(resp_msg); XSRETURN_UNDEF; } RETVAL = newHV(); sv_2mortal((SV*)RETVAL); rc = resource_allocation_response_msg_to_hv(resp_msg, RETVAL); slurm_free_resource_allocation_response_msg(resp_msg); if (rc < 0) { XSRETURN_UNDEF; } OUTPUT: RETVAL char_free * slurm_read_hostfile(slurm_t self, char *filename, int n) CODE: if (self); /* this is needed to avoid a warning about unused variables. But if we take slurm_t self out of the mix Slurm-> doesn't work, only Slurm:: */ RETVAL = slurm_read_hostfile(filename, n); if(RETVAL == NULL) { XSRETURN_UNDEF; } OUTPUT: RETVAL allocation_msg_thread_t * slurm_allocation_msg_thr_create(slurm_t self, OUT uint16_t port, HV *callbacks) INIT: if (self); /* this is needed to avoid a warning about unused variables. But if we take slurm_t self out of the mix Slurm-> doesn't work, only Slurm:: */ set_sacb(callbacks); C_ARGS: &port, &sacb void slurm_allocation_msg_thr_destroy(slurm_t self, allocation_msg_thread_t * msg_thr) INIT: if (self); /* this is needed to avoid a warning about unused variables. 
But if we take slurm_t self out of the mix Slurm-> doesn't work, only Slurm:: */ C_ARGS: msg_thr HV * slurm_submit_batch_job(slurm_t self, HV *job_desc) PREINIT: job_desc_msg_t jd_msg; submit_response_msg_t *resp_msg = NULL; int rc; CODE: if (self); /* this is needed to avoid a warning about unused variables. But if we take slurm_t self out of the mix Slurm-> doesn't work, only Slurm:: */ if(hv_to_job_desc_msg(job_desc, &jd_msg) < 0) { XSRETURN_UNDEF; } rc = slurm_submit_batch_job(&jd_msg, &resp_msg); free_job_desc_msg_memory(&jd_msg); if(rc != SLURM_SUCCESS) { slurm_free_submit_response_response_msg(resp_msg); XSRETURN_UNDEF; } RETVAL = newHV(); sv_2mortal((SV*)RETVAL); rc = submit_response_msg_to_hv(resp_msg, RETVAL); slurm_free_submit_response_response_msg(resp_msg); if (rc < 0) { XSRETURN_UNDEF; } OUTPUT: RETVAL int slurm_job_will_run(slurm_t self, HV *job_desc) PREINIT: job_desc_msg_t jd_msg; CODE: if (self); /* this is needed to avoid a warning about unused variables. But if we take slurm_t self out of the mix Slurm-> doesn't work, only Slurm:: */ if (hv_to_job_desc_msg(job_desc, &jd_msg) < 0) { XSRETURN_UNDEF; } RETVAL = slurm_job_will_run(&jd_msg); free_job_desc_msg_memory(&jd_msg); OUTPUT: RETVAL HV * slurm_sbcast_lookup(slurm_t self, uint32_t job_id, uint32_t step_id) PREINIT: job_sbcast_cred_msg_t *info; int rc; CODE: if (self); /* this is needed to avoid a warning about unused variables. 
But if we take slurm_t self out of the mix Slurm-> doesn't work, only Slurm:: */ rc = slurm_sbcast_lookup(job_id, step_id, &info); if (rc == SLURM_SUCCESS) { RETVAL = newHV(); sv_2mortal((SV*)RETVAL); rc = job_sbcast_cred_msg_to_hv(info, RETVAL); slurm_free_sbcast_cred_msg(info); if (rc < 0) { XSRETURN_UNDEF; } } else { XSRETURN_UNDEF; } OUTPUT: RETVAL ###################################################################### # JOB/STEP SIGNALING FUNCTIONS ###################################################################### int slurm_kill_job(slurm_t self, uint32_t job_id, uint16_t signal, uint16_t batch_flag=0) INIT: if (self); /* this is needed to avoid a warning about unused variables. But if we take slurm_t self out of the mix Slurm-> doesn't work, only Slurm:: */ C_ARGS: job_id, signal, batch_flag int slurm_kill_job_step(slurm_t self, uint32_t job_id, uint32_t step_id, uint16_t signal) INIT: if (self); /* this is needed to avoid a warning about unused variables. But if we take slurm_t self out of the mix Slurm-> doesn't work, only Slurm:: */ C_ARGS: job_id, step_id, signal int slurm_signal_job(slurm_t self, uint32_t job_id, uint16_t signal) INIT: if (self); /* this is needed to avoid a warning about unused variables. But if we take slurm_t self out of the mix Slurm-> doesn't work, only Slurm:: */ C_ARGS: job_id, signal int slurm_signal_job_step(slurm_t self, uint32_t job_id, uint32_t step_id, uint16_t signal) INIT: if (self); /* this is needed to avoid a warning about unused variables. But if we take slurm_t self out of the mix Slurm-> doesn't work, only Slurm:: */ C_ARGS: job_id, step_id, signal ###################################################################### # JOB/STEP COMPLETION FUNCTIONS ###################################################################### int slurm_complete_job(slurm_t self, uint32_t job_id, uint32_t job_rc=0) INIT: if (self); /* this is needed to avoid a warning about unused variables. 
But if we take slurm_t self out of the mix Slurm-> doesn't work, only Slurm:: */ C_ARGS: job_id, job_rc int slurm_terminate_job_step(slurm_t self, uint32_t job_id, uint32_t step_id) INIT: if (self); /* this is needed to avoid a warning about unused variables. But if we take slurm_t self out of the mix Slurm-> doesn't work, only Slurm:: */ C_ARGS: job_id, step_id ###################################################################### # SLURM TASK SPAWNING FUNCTIONS ###################################################################### MODULE=Slurm PACKAGE=Slurm PREFIX=slurm_ # $ctx = $slurm->step_ctx_create($params); slurm_step_ctx_t * slurm_step_ctx_create(slurm_t self, HV *step_params) PREINIT: slurm_step_ctx_params_t sp; CODE: if (self); /* this is needed to avoid a warning about unused variables. But if we take slurm_t self out of the mix Slurm-> doesn't work, only Slurm:: */ if (hv_to_slurm_step_ctx_params(step_params, &sp) < 0) { XSRETURN_UNDEF; } RETVAL = slurm_step_ctx_create(&sp); if (RETVAL == NULL) { XSRETURN_UNDEF; } OUTPUT: RETVAL slurm_step_ctx_t * slurm_step_ctx_create_no_alloc(slurm_t self, HV *step_params, uint32_t step_id) PREINIT: slurm_step_ctx_params_t sp; CODE: if (self); /* this is needed to avoid a warning about unused variables. But if we take slurm_t self out of the mix Slurm-> doesn't work, only Slurm:: */ if (hv_to_slurm_step_ctx_params(step_params, &sp) < 0) { XSRETURN_UNDEF; } RETVAL = slurm_step_ctx_create_no_alloc(&sp, step_id); if (RETVAL == NULL) { XSRETURN_UNDEF; } OUTPUT: RETVAL ###################################################################### MODULE=Slurm PACKAGE=Slurm::Stepctx PREFIX=slurm_step_ctx_ int slurm_step_ctx_get(slurm_step_ctx_t *ctx, int ctx_key, INOUT ...) 
PREINIT: uint32_t tmp_32, *tmp_32_ptr; uint16_t tmp_16, *tmp_16_ptr; #if 0 /* TODO: job_step_create_response_msg_t not exported in slurm.h */ job_step_create_response_msg_t *resp_msg; #endif slurm_cred_t *cred; switch_jobinfo_t *switch_info; char *tmp_str; int i, tmp_int, *tmp_int_ptr; CODE: switch(ctx_key) { case SLURM_STEP_CTX_JOBID: /* uint32_t* */ case SLURM_STEP_CTX_STEPID: /* uint32_t* */ case SLURM_STEP_CTX_NUM_HOSTS: /* uint32_t* */ if (items != 3) { Perl_warn( aTHX_ "error number of parameters"); errno = EINVAL; RETVAL = SLURM_ERROR; break; } RETVAL = slurm_step_ctx_get(ctx, ctx_key, &tmp_32); if (RETVAL == SLURM_SUCCESS) { sv_setuv(ST(2), (UV)tmp_32); } break; case SLURM_STEP_CTX_TASKS: /* uint16_t** */ if (items != 3) { Perl_warn( aTHX_ "error number of parameters"); errno = EINVAL; RETVAL = SLURM_ERROR; break; } RETVAL = slurm_step_ctx_get(ctx, SLURM_STEP_CTX_NUM_HOSTS, &tmp_32); if (RETVAL != SLURM_SUCCESS) break; RETVAL = slurm_step_ctx_get(ctx, ctx_key, &tmp_16_ptr); if (RETVAL == SLURM_SUCCESS) { AV* av = newAV(); for(i = 0; i < tmp_32; i ++) { av_store_uint16_t(av, i, tmp_16_ptr[i]); } sv_setsv(ST(2), newRV_noinc((SV*)av)); } break; case SLURM_STEP_CTX_TID: /* uint32_t, uint32_t** */ if (items != 4) { Perl_warn( aTHX_ "error number of parameters"); errno = EINVAL; RETVAL = SLURM_ERROR; break; } tmp_32 = (uint32_t)SvUV(ST(2)); RETVAL = slurm_step_ctx_get(ctx, SLURM_STEP_CTX_TASKS, &tmp_16_ptr); if (RETVAL != SLURM_SUCCESS) break; tmp_16 = tmp_16_ptr[tmp_32]; RETVAL = slurm_step_ctx_get(ctx, ctx_key, tmp_32, &tmp_32_ptr); if (RETVAL == SLURM_SUCCESS) { AV* av = newAV(); for(i = 0; i < tmp_16; i ++) { av_store_uint32_t(av, i, tmp_32_ptr[i]); } sv_setsv(ST(3), newRV_noinc((SV*)av)); } break; #if 0 case SLURM_STEP_CTX_RESP: /* job_step_create_response_msg_t** */ if (items != 3) { Perl_warn( aTHX_ "error number of parameters"); errno = EINVAL; RETVAL = SLURM_ERROR; break; } RETVAL = slurm_step_ctx_get(ctx, ctx_key, &resp_msg); if (RETVAL == 
	    SLURM_SUCCESS) {
			HV *hv = newHV();
			if (job_step_create_response_msg_to_hv(resp_msg, hv) < 0) {
				SvREFCNT_dec((SV*)hv);
				RETVAL = SLURM_ERROR;
				break;
			}
			sv_setsv(ST(2), newRV_noinc((SV*)hv));
		}
		break;
#endif
	case SLURM_STEP_CTX_CRED: /* slurm_cred_t** */
		if (items != 3) {
			Perl_warn( aTHX_ "error number of parameters");
			errno = EINVAL;
			RETVAL = SLURM_ERROR;
			break;
		}
		RETVAL = slurm_step_ctx_get(ctx, ctx_key, &cred);
		if (RETVAL == SLURM_SUCCESS && cred) {
			sv_setref_pv(ST(2), "Slurm::slurm_cred_t", (void*)cred);
		} else if (RETVAL == SLURM_SUCCESS) { /* the returned cred is NULL */
			sv_setsv(ST(2), &PL_sv_undef);
		}
		break;
	case SLURM_STEP_CTX_SWITCH_JOB: /* switch_jobinfo_t** */
		if (items != 3) {
			Perl_warn( aTHX_ "error number of parameters");
			errno = EINVAL;
			RETVAL = SLURM_ERROR;
			break;
		}
		RETVAL = slurm_step_ctx_get(ctx, ctx_key, &switch_info);
		if (RETVAL == SLURM_SUCCESS && switch_info) {
			sv_setref_pv(ST(2), "Slurm::switch_jobinfo_t", (void*)switch_info);
		} else if (RETVAL == SLURM_SUCCESS) { /* the returned switch_info is NULL */
			sv_setsv(ST(2), &PL_sv_undef);
		}
		break;
	case SLURM_STEP_CTX_HOST: /* uint32_t, char** */
		if (items != 4) {
			Perl_warn( aTHX_ "error number of parameters");
			errno = EINVAL;
			RETVAL = SLURM_ERROR;
			break;
		}
		tmp_32 = (uint32_t)SvUV(ST(2));
		RETVAL = slurm_step_ctx_get(ctx, ctx_key, tmp_32, &tmp_str);
		if (RETVAL == SLURM_SUCCESS) {
			sv_setpv(ST(3), tmp_str);
		}
		break;
	case SLURM_STEP_CTX_USER_MANAGED_SOCKETS: /* int*, int** */
		if (items != 4) {
			Perl_warn( aTHX_ "error number of parameters");
			errno = EINVAL;
			RETVAL = SLURM_ERROR;
			break;
		}
		RETVAL = slurm_step_ctx_get(ctx, ctx_key, &tmp_int, &tmp_int_ptr);
		if (RETVAL == SLURM_SUCCESS) {
			AV *av = newAV();
			for (i = 0; i < tmp_int; i ++) {
				av_store_int(av, i, tmp_int_ptr[i]);
			}
			sv_setiv(ST(2), tmp_int);
			sv_setsv(ST(3), newRV_noinc((SV*)av));
		} else { /* returned val: 0, NULL */
			sv_setiv(ST(2), tmp_int);
			sv_setsv(ST(3), &PL_sv_undef);
		}
		break;
	default:
		RETVAL = slurm_step_ctx_get(ctx, ctx_key);
	}
    OUTPUT:
	RETVAL

#
TODO: data_type not exported in slurm.h #int #slurm_job_info_ctx_get(switch_jobinfo_t *jobinfo, int data_type, void *data) void slurm_step_ctx_DESTROY(slurm_step_ctx_t *ctx) CODE: slurm_step_ctx_destroy(ctx); int slurm_step_ctx_daemon_per_node_hack(slurm_step_ctx_t *ctx, char *node_list, uint32_t node_cnt, void *curr_task_num) PREINIT: uint32_t *tmp32; CODE: tmp32 = (uint32_t *)curr_task_num; RETVAL = slurm_step_ctx_daemon_per_node_hack(ctx, node_list, node_cnt, tmp32); OUTPUT: RETVAL ##################################################################### MODULE=Slurm PACKAGE=Slurm::Stepctx PREFIX=slurm_step_ int slurm_step_launch(slurm_step_ctx_t *ctx, HV *params, HV *callbacks=NULL) PREINIT: slurm_step_launch_params_t lp; slurm_step_launch_callbacks_t *cb = NULL; CODE: if (hv_to_slurm_step_launch_params(params, &lp) < 0) { Perl_warn( aTHX_ "failed to convert slurm_step_launch_params_t"); RETVAL = SLURM_ERROR; } else { if (callbacks) { set_slcb(callbacks); cb = &slcb; } RETVAL = slurm_step_launch(ctx, &lp, cb); free_slurm_step_launch_params_memory(&lp); } OUTPUT: RETVAL int slurm_step_launch_wait_start(slurm_step_ctx_t *ctx) void slurm_step_launch_wait_finish(slurm_step_ctx_t *ctx) void slurm_step_launch_abort(slurm_step_ctx_t *ctx) void slurm_step_launch_fwd_signal(slurm_step_ctx_t *ctx, uint16_t signo) # TODO: this function is not implemented in libslurm #void #slurm_step_launch_fwd_wake(slurm_step_ctx_t *ctx) ###################################################################### # SLURM CONTROL CONFIGURATION READ/PRINT/UPDATE FUNCTIONS ###################################################################### MODULE = Slurm PACKAGE = Slurm PREFIX=slurm_ # # ($major, $minor, $micro) = $slurm->api_version(); # void slurm_api_version(slurm_t self, OUTLIST int major, OUTLIST int minor, OUTLIST int micro) PREINIT: long version; CODE: if (self); /* this is needed to avoid a warning about unused variables. 
But if we take slurm_t self out of the mix Slurm-> doesn't work, only Slurm:: */ version = slurm_api_version(); major = SLURM_VERSION_MAJOR(version); minor = SLURM_VERSION_MINOR(version); micro = SLURM_VERSION_MICRO(version); HV * slurm_load_ctl_conf(slurm_t self, time_t update_time=0) PREINIT: slurm_ctl_conf_t *ctl_conf; int rc; CODE: if (self); /* this is needed to avoid a warning about unused variables. But if we take slurm_t self out of the mix Slurm-> doesn't work, only Slurm:: */ rc = slurm_load_ctl_conf(update_time, &ctl_conf); if(rc == SLURM_SUCCESS) { RETVAL = newHV(); sv_2mortal((SV*)RETVAL); rc = slurm_ctl_conf_to_hv(ctl_conf, RETVAL); slurm_free_ctl_conf(ctl_conf); if (rc < 0) { XSRETURN_UNDEF; } } else { XSRETURN_UNDEF; } OUTPUT: RETVAL void slurm_print_ctl_conf(slurm_t self, FILE *out, HV *conf) PREINIT: slurm_ctl_conf_t cc; INIT: if (out == NULL) { Perl_croak (aTHX_ "Invalid output stream specified: FILE not found"); } if (hv_to_slurm_ctl_conf(conf, &cc) < 0) { XSRETURN_UNDEF; } if (self); /* this is needed to avoid a warning about unused variables. But if we take slurm_t self out of the mix Slurm-> doesn't work, only Slurm:: */ C_ARGS: out, &cc # # $key_pairs = $slurm->ctl_conf_2_key_pairs($conf); # XXX: config_key_pair_t not exported # List slurm_ctl_conf_2_key_pairs(slurm_t self, HV *conf) PREINIT: slurm_ctl_conf_t cc; CODE: if (self); /* this is needed to avoid a warning about unused variables. But if we take slurm_t self out of the mix Slurm-> doesn't work, only Slurm:: */ if (hv_to_slurm_ctl_conf(conf, &cc) < 0) { XSRETURN_UNDEF; } RETVAL = (List)slurm_ctl_conf_2_key_pairs(&cc); if(RETVAL == NULL) { XSRETURN_UNDEF; } OUTPUT: RETVAL # # $status = $slurm->load_slurmd_status(); # HV * slurm_load_slurmd_status(slurm_t self) PREINIT: slurmd_status_t *status; int rc; CODE: if (self); /* this is needed to avoid a warning about unused variables. 
But if we take slurm_t self out of the mix Slurm-> doesn't work, only Slurm:: */ rc = slurm_load_slurmd_status(&status); if (rc == SLURM_SUCCESS) { RETVAL = newHV(); sv_2mortal((SV*)RETVAL); rc = slurmd_status_to_hv(status, RETVAL); slurm_free_slurmd_status(status); if (rc < 0) { XSRETURN_UNDEF; } } else { XSRETURN_UNDEF; } OUTPUT: RETVAL void slurm_print_slurmd_status(slurm_t self, FILE *out, HV *slurmd_status) PREINIT: slurmd_status_t st; INIT: if (out == NULL) { Perl_croak (aTHX_ "Invalid output stream specified: FILE not found"); } if (hv_to_slurmd_status(slurmd_status, &st) < 0) { XSRETURN_UNDEF; } if (self); /* this is needed to avoid a warning about unused variables. But if we take slurm_t self out of the mix Slurm-> doesn't work, only Slurm:: */ C_ARGS: out, &st void slurm_print_key_pairs(slurm_t self, FILE *out, List key_pairs, char *title) INIT: if (out == NULL) { Perl_croak (aTHX_ "Invalid output stream specified: FILE not found"); } if (self); /* this is needed to avoid a warning about unused variables. But if we take slurm_t self out of the mix Slurm-> doesn't work, only Slurm:: */ C_ARGS: out, key_pairs, title int slurm_update_step(slurm_t self, HV *step_msg) PREINIT: step_update_request_msg_t su_msg; CODE: if (self); /* this is needed to avoid a warning about unused variables. But if we take slurm_t self out of the mix Slurm-> doesn't work, only Slurm:: */ if (hv_to_step_update_request_msg(step_msg, &su_msg) < 0) { RETVAL = SLURM_ERROR; } else { RETVAL = slurm_update_step(&su_msg); } OUTPUT: RETVAL ###################################################################### # SLURM JOB RESOURCES READ/PRINT FUNCTIONS ###################################################################### int slurm_job_cpus_allocated_on_node_id(slurm_t self, SV *job_res, int node_id) CODE: if (self); /* this is needed to avoid a warning about unused variables. 
But if we take slurm_t self out of the mix Slurm-> doesn't work, only Slurm:: */ if(job_res) { RETVAL = slurm_job_cpus_allocated_on_node_id( (job_resources_t *)SV2ptr(job_res), node_id); } else { RETVAL = 0; } OUTPUT: RETVAL int slurm_job_cpus_allocated_on_node(slurm_t self, SV *job_res, char *node_name) CODE: if (self); /* this is needed to avoid a warning about unused variables. But if we take slurm_t self out of the mix Slurm-> doesn't work, only Slurm:: */ if(job_res) { RETVAL = slurm_job_cpus_allocated_on_node( (job_resources_t *)SV2ptr(job_res), node_name); } else { RETVAL = 0; } OUTPUT: RETVAL ###################################################################### # SLURM JOB CONFIGURATION READ/PRINT/UPDATE FUNCTIONS ###################################################################### MODULE = Slurm PACKAGE = Slurm::job_info_msg_t PREFIX=job_info_msg_t_ void job_info_msg_t_DESTROY(job_info_msg_t *ji_msg) CODE: slurm_free_job_info_msg(ji_msg); ###################################################################### MODULE = Slurm PACKAGE = Slurm PREFIX=slurm_ # $time = $slurm->get_end_time($job_id); time_t slurm_get_end_time(slurm_t self, uint32_t job_id) PREINIT: time_t tmp_time; int rc; CODE: if (self); /* this is needed to avoid a warning about unused variables. But if we take slurm_t self out of the mix Slurm-> doesn't work, only Slurm:: */ rc = slurm_get_end_time(job_id, &tmp_time); if (rc == SLURM_SUCCESS) { RETVAL = tmp_time; } else { XSRETURN_UNDEF; } OUTPUT: RETVAL long slurm_get_rem_time(slurm_t self, uint32_t job_id) INIT: if (self); /* this is needed to avoid a warning about unused variables. But if we take slurm_t self out of the mix Slurm-> doesn't work, only Slurm:: */ C_ARGS: job_id int slurm_job_node_ready(slurm_t self, uint32_t job_id) INIT: if (self); /* this is needed to avoid a warning about unused variables. 
But if we take slurm_t self out of the mix Slurm-> doesn't work, only Slurm:: */ C_ARGS: job_id # # $resp = $slurm->load_job($job_id, $show_flags); # HV * slurm_load_job(slurm_t self, uint32_t job_id, uint16_t show_flags=0) PREINIT: job_info_msg_t *ji_msg; int rc; CODE: if (self); /* this is needed to avoid a warning about unused variables. But if we take slurm_t self out of the mix Slurm-> doesn't work, only Slurm:: */ rc = slurm_load_job(&ji_msg, job_id, show_flags); if (rc == SLURM_SUCCESS) { RETVAL = newHV(); sv_2mortal((SV*)RETVAL); rc = job_info_msg_to_hv(ji_msg, RETVAL); /* cannot free ji_msg because RETVAL holds data in it */ if (rc >= 0) { hv_store_ptr(RETVAL, "job_info_msg", ji_msg, "Slurm::job_info_msg_t"); } if (rc < 0) { XSRETURN_UNDEF; } } else { XSRETURN_UNDEF; } OUTPUT: RETVAL # # $resp = $slurm->load_jobs($update_time, $show_flags); # HV * slurm_load_jobs(slurm_t self, time_t update_time=0, uint16_t show_flags=0) PREINIT: job_info_msg_t *ji_msg; int rc; CODE: if (self); /* this is needed to avoid a warning about unused variables. But if we take slurm_t self out of the mix Slurm-> doesn't work, only Slurm:: */ rc = slurm_load_jobs(update_time, &ji_msg, show_flags); if (rc == SLURM_SUCCESS) { RETVAL = newHV(); sv_2mortal((SV*)RETVAL); rc = job_info_msg_to_hv(ji_msg, RETVAL); /* cannot free ji_msg because RETVAL holds data in it */ if (rc >= 0) { hv_store_ptr(RETVAL, "job_info_msg", ji_msg, "Slurm::job_info_msg_t"); } if (rc < 0) { XSRETURN_UNDEF; } } else { XSRETURN_UNDEF; } OUTPUT: RETVAL int slurm_notify_job(slurm_t self, uint32_t job_id, char *message) INIT: if (self); /* this is needed to avoid a warning about unused variables. But if we take slurm_t self out of the mix Slurm-> doesn't work, only Slurm:: */ C_ARGS: job_id, message # # $job_id = $slurm->pid2jobid($job_pid); # uint32_t slurm_pid2jobid(slurm_t self, pid_t job_pid) PREINIT: uint32_t tmp_pid; int rc; CODE: if (self); /* this is needed to avoid a warning about unused variables. 
But if we take slurm_t self out of the mix Slurm-> doesn't work, only Slurm:: */ rc = slurm_pid2jobid(job_pid, &tmp_pid); if (rc == SLURM_SUCCESS) { RETVAL = tmp_pid; } else { XSRETURN_UNDEF; } OUTPUT: RETVAL void slurm_print_job_info(slurm_t self, FILE* out, HV *job_info, int one_liner=0) PREINIT: job_info_t ji; INIT: if (self); /* this is needed to avoid a warning about unused variables. But if we take slurm_t self out of the mix Slurm-> doesn't work, only Slurm:: */ if (out == NULL) { Perl_croak (aTHX_ "Invalid output stream specified: FILE not found"); } if (hv_to_job_info(job_info, &ji) < 0) { XSRETURN_UNDEF; } C_ARGS: out, &ji, one_liner CLEANUP: xfree(ji.exc_node_inx); xfree(ji.node_inx); xfree(ji.req_node_inx); void slurm_print_job_info_msg(slurm_t self, FILE *out, HV *job_info_msg, int one_liner=0) PREINIT: job_info_msg_t ji_msg; INIT: if (self); /* this is needed to avoid a warning about unused variables. But if we take slurm_t self out of the mix Slurm-> doesn't work, only Slurm:: */ if (out == NULL) { Perl_croak (aTHX_ "Invalid output stream specified: FILE not found"); } if (hv_to_job_info_msg(job_info_msg, &ji_msg) < 0) { XSRETURN_UNDEF; } C_ARGS: out, &ji_msg, one_liner CLEANUP: xfree(ji_msg.job_array); char_xfree * slurm_sprint_job_info(slurm_t self, HV *job_info, int one_liner=0) PREINIT: job_info_t ji; char *tmp_str = NULL; CODE: if (self); /* this is needed to avoid a warning about unused variables. But if we take slurm_t self out of the mix Slurm-> doesn't work, only Slurm:: */ if(hv_to_job_info(job_info, &ji) < 0) { XSRETURN_UNDEF; } tmp_str = slurm_sprint_job_info(&ji, one_liner); xfree(ji.exc_node_inx); xfree(ji.node_inx); xfree(ji.req_node_inx); RETVAL = tmp_str; OUTPUT: RETVAL int slurm_update_job(slurm_t self, HV *job_info) PREINIT: job_desc_msg_t update_msg; INIT: if (self); /* this is needed to avoid a warning about unused variables. 
But if we take slurm_t self out of the mix Slurm-> doesn't work, only Slurm:: */ if(hv_to_job_desc_msg(job_info, &update_msg) < 0) { XSRETURN_UNDEF; } C_ARGS: &update_msg CLEANUP: free_job_desc_msg_memory(&update_msg); ###################################################################### # SLURM JOB STEP CONFIGURATION READ/PRINT/UPDATE FUNCTIONS ###################################################################### HV * slurm_get_job_steps(slurm_t self, time_t update_time=0, uint32_t job_id=NO_VAL, uint32_t step_id=NO_VAL, uint16_t show_flags=0) PREINIT: int rc; job_step_info_response_msg_t *resp_msg; CODE: if (self); /* this is needed to avoid a warning about unused variables. But if we take slurm_t self out of the mix Slurm-> doesn't work, only Slurm:: */ rc = slurm_get_job_steps(update_time, job_id, step_id, &resp_msg, show_flags); if(rc == SLURM_SUCCESS) { RETVAL = newHV(); sv_2mortal((SV*)RETVAL); rc = job_step_info_response_msg_to_hv(resp_msg, RETVAL); slurm_free_job_step_info_response_msg(resp_msg); if (rc < 0) { XSRETURN_UNDEF; } } else { XSRETURN_UNDEF; } OUTPUT: RETVAL void slurm_print_job_step_info_msg(slurm_t self, FILE *out, HV *step_info_msg, int one_liner=0) PREINIT: job_step_info_response_msg_t si_msg; INIT: if (self); /* this is needed to avoid a warning about unused variables. But if we take slurm_t self out of the mix Slurm-> doesn't work, only Slurm:: */ if (out == NULL) { Perl_croak (aTHX_ "Invalid output stream specified: FILE not found"); } if(hv_to_job_step_info_response_msg(step_info_msg, &si_msg) < 0) { XSRETURN_UNDEF; } C_ARGS: out, &si_msg, one_liner CLEANUP: xfree(si_msg.job_steps); void slurm_print_job_step_info(slurm_t self, FILE *out, HV *step_info, int one_liner=0) PREINIT: job_step_info_t si; INIT: if (self); /* this is needed to avoid a warning about unused variables. 
But if we take slurm_t self out of the mix Slurm-> doesn't work, only Slurm:: */ if (out == NULL) { Perl_croak (aTHX_ "Invalid output stream specified: FILE not found"); } if(hv_to_job_step_info(step_info, &si) < 0) { XSRETURN_UNDEF; } C_ARGS: out, &si, one_liner CLEANUP: xfree(si.node_inx); char_xfree * slurm_sprint_job_step_info(slurm_t self, HV *step_info, int one_liner=0) PREINIT: job_step_info_t si; CODE: if (self); /* this is needed to avoid a warning about unused variables. But if we take slurm_t self out of the mix Slurm-> doesn't work, only Slurm:: */ if(hv_to_job_step_info(step_info, &si) < 0) { XSRETURN_UNDEF; } RETVAL = slurm_sprint_job_step_info(&si, one_liner); xfree(si.node_inx); OUTPUT: RETVAL HV * slurm_job_step_layout_get(slurm_t self, uint32_t job_id, uint32_t step_id) PREINIT: int rc; slurm_step_layout_t *layout; CODE: if (self); /* this is needed to avoid a warning about unused variables. But if we take slurm_t self out of the mix Slurm-> doesn't work, only Slurm:: */ layout = slurm_job_step_layout_get(job_id, step_id); if(layout == NULL) { XSRETURN_UNDEF; } else { RETVAL = newHV(); sv_2mortal((SV*)RETVAL); rc = slurm_step_layout_to_hv(layout, RETVAL); slurm_job_step_layout_free(layout); if (rc < 0) { XSRETURN_UNDEF; } } OUTPUT: RETVAL HV * slurm_job_step_stat(slurm_t self, uint32_t job_id, uint32_t step_id, char *nodelist=NULL) PREINIT: int rc; job_step_stat_response_msg_t *resp_msg; CODE: if (self); /* this is needed to avoid a warning about unused variables. 
But if we take slurm_t self out of the mix Slurm-> doesn't work, only Slurm:: */ rc = slurm_job_step_stat(job_id, step_id, nodelist, &resp_msg); if (rc == SLURM_SUCCESS) { RETVAL = newHV(); sv_2mortal((SV*)RETVAL); rc = job_step_stat_response_msg_to_hv(resp_msg, RETVAL); slurm_job_step_stat_response_msg_free(resp_msg); if (rc < 0) { XSRETURN_UNDEF; } } else { errno = rc; XSRETURN_UNDEF; } OUTPUT: RETVAL HV * slurm_job_step_get_pids(slurm_t self, uint32_t job_id, uint32_t step_id, char *nodelist=NULL) PREINIT: int rc; job_step_pids_response_msg_t *resp_msg; CODE: if (self); /* this is needed to avoid a warning about unused variables. But if we take slurm_t self out of the mix Slurm-> doesn't work, only Slurm:: */ rc = slurm_job_step_get_pids(job_id, step_id, nodelist, &resp_msg); if (rc == SLURM_SUCCESS) { RETVAL = newHV(); sv_2mortal((SV*)RETVAL); rc = job_step_pids_response_msg_to_hv(resp_msg, RETVAL); slurm_job_step_pids_response_msg_free(resp_msg); if (rc < 0) { XSRETURN_UNDEF; } } else { errno = rc; XSRETURN_UNDEF; } OUTPUT: RETVAL ###################################################################### # SLURM NODE CONFIGURATION READ/PRINT/UPDATE FUNCTIONS ###################################################################### MODULE = Slurm PACKAGE = Slurm::node_info_msg_t PREFIX=node_info_msg_t_ void node_info_msg_t_DESTROY(node_info_msg_t *ni_msg) CODE: slurm_free_node_info_msg(ni_msg); ###################################################################### MODULE = Slurm PACKAGE = Slurm PREFIX=slurm_ HV * slurm_load_node(slurm_t self, time_t update_time=0, uint16_t show_flags=0) PREINIT: node_info_msg_t *ni_msg = NULL; int rc; CODE: if (self); /* this is needed to avoid a warning about unused variables. 
But if we take slurm_t self out of the mix Slurm-> doesn't work, only Slurm:: */ rc = slurm_load_node(update_time, &ni_msg, show_flags | SHOW_MIXED); if (rc == SLURM_SUCCESS) { RETVAL = newHV(); sv_2mortal((SV*)RETVAL); /* RETVAL holds ni_msg->select_nodeinfo, so delay free-ing the msg */ rc = node_info_msg_to_hv(ni_msg, RETVAL); if (rc >= 0) { rc = hv_store_ptr(RETVAL, "node_info_msg", ni_msg, "Slurm::node_info_msg_t"); } if (rc < 0) { XSRETURN_UNDEF; } } else { XSRETURN_UNDEF; } OUTPUT: RETVAL void slurm_print_node_info_msg(slurm_t self, FILE *out, HV *node_info_msg, int one_liner=0) PREINIT: node_info_msg_t ni_msg; INIT: if (self); /* this is needed to avoid a warning about unused variables. But if we take slurm_t self out of the mix Slurm-> doesn't work, only Slurm:: */ if (out == NULL) { Perl_croak (aTHX_ "Invalid output stream specified: FILE not found"); } if(hv_to_node_info_msg(node_info_msg, &ni_msg) < 0) { XSRETURN_UNDEF; } C_ARGS: out, &ni_msg, one_liner CLEANUP: xfree(ni_msg.node_array); void slurm_print_node_table(slurm_t self, FILE *out, HV *node_info, int node_scaling=1, int one_liner=0) PREINIT: node_info_t ni; INIT: if (self); /* this is needed to avoid a warning about unused variables. But if we take slurm_t self out of the mix Slurm-> doesn't work, only Slurm:: */ if (out == NULL) { Perl_croak (aTHX_ "Invalid output stream specified: FILE not found"); } if(hv_to_node_info(node_info, &ni) < 0) { XSRETURN_UNDEF; } C_ARGS: out, &ni, node_scaling, one_liner char_xfree * slurm_sprint_node_table(slurm_t self, HV *node_info, int node_scaling=1, int one_liner=0) PREINIT: node_info_t ni; CODE: if (self); /* this is needed to avoid a warning about unused variables. 
But if we take slurm_t self out of the mix Slurm-> doesn't work, only Slurm:: */ if(hv_to_node_info(node_info, &ni) < 0) { XSRETURN_UNDEF; } RETVAL = slurm_sprint_node_table(&ni, node_scaling, one_liner); OUTPUT: RETVAL int slurm_update_node(slurm_t self, HV *update_req) PREINIT: update_node_msg_t node_msg; INIT: if (self); /* this is needed to avoid a warning about unused variables. But if we take slurm_t self out of the mix Slurm-> doesn't work, only Slurm:: */ if(hv_to_update_node_msg(update_req, &node_msg) < 0) { XSRETURN_UNDEF; } C_ARGS: &node_msg ###################################################################### # SLURM SWITCH TOPOLOGY CONFIGURATION READ/PRINT FUNCTIONS ###################################################################### HV * slurm_load_topo(slurm_t self) PREINIT: topo_info_response_msg_t *topo_info_msg = NULL; int rc; CODE: if (self); /* this is needed to avoid a warning about unused variables. But if we take slurm_t self out of the mix Slurm-> doesn't work, only Slurm:: */ rc = slurm_load_topo( &topo_info_msg); if(rc == SLURM_SUCCESS) { RETVAL = newHV(); sv_2mortal((SV*)RETVAL); rc = topo_info_response_msg_to_hv(topo_info_msg, RETVAL); slurm_free_topo_info_msg(topo_info_msg); if (rc < 0) { XSRETURN_UNDEF; } } else { XSRETURN_UNDEF; } OUTPUT: RETVAL void slurm_print_topo_info_msg(slurm_t self, FILE *out, HV *topo_info_msg, int one_liner=0) PREINIT: topo_info_response_msg_t ti_msg; INIT: if (self); /* this is needed to avoid a warning about unused variables. 
But if we take slurm_t self out of the mix Slurm-> doesn't work, only Slurm:: */ if (out == NULL) { Perl_croak (aTHX_ "Invalid output stream specified: FILE not found"); } if(hv_to_topo_info_response_msg(topo_info_msg, &ti_msg) < 0) { XSRETURN_UNDEF; } C_ARGS: out, &ti_msg, one_liner CLEANUP: xfree(ti_msg.topo_array); void slurm_print_topo_record(slurm_t self, FILE *out, HV *topo_info, int one_liner=0) PREINIT: topo_info_t ti; INIT: if (self); /* this is needed to avoid a warning about unused variables. But if we take slurm_t self out of the mix Slurm-> doesn't work, only Slurm:: */ if (out == NULL) { Perl_croak (aTHX_ "Invalid output stream specified: FILE not found"); } if(hv_to_topo_info(topo_info, &ti) < 0) { XSRETURN_UNDEF; } C_ARGS: out, &ti, one_liner ###################################################################### # SLURM SELECT READ/PRINT/UPDATE FUNCTIONS ###################################################################### int slurm_get_select_jobinfo(slurm_t self, dynamic_plugin_data_t *jobinfo, uint32_t data_type, SV *data) PREINIT: uint16_t tmp_16, tmp_array[SYSTEM_DIMENSIONS]; uint32_t tmp_32; char *tmp_str; select_jobinfo_t *tmp_ptr; int i; CODE: if (self); /* this is needed to avoid a warning about unused variables. 
But if we take slurm_t self out of the mix Slurm->
doesn't work, only Slurm:: */
	switch (data_type) {
	case SELECT_JOBDATA_GEOMETRY: /* data-> uint16_t geometry[SYSTEM_DIMENSIONS] */
		RETVAL = slurm_get_select_jobinfo(jobinfo, data_type, tmp_array);
		if (RETVAL == 0) {
			AV *avp = newAV();
			for (i = 0; i < SYSTEM_DIMENSIONS; i++) {
				av_store_uint16_t(avp, i, tmp_array[i]);
			}
			sv_setsv(data, (SV*)newRV_noinc((SV*)avp));
		}
		break;
	case SELECT_JOBDATA_ROTATE:    /* data-> uint16_t rotate */
	case SELECT_JOBDATA_CONN_TYPE: /* data-> uint16_t connection_type */
	case SELECT_JOBDATA_ALTERED:   /* data-> uint16_t altered */
	case SELECT_JOBDATA_REBOOT:    /* data-> uint16_t reboot */
		RETVAL = slurm_get_select_jobinfo(jobinfo, data_type, &tmp_16);
		if (RETVAL == 0) {
			sv_setuv(data, (UV)tmp_16);
		}
		break;
	case SELECT_JOBDATA_NODE_CNT: /* data-> uint32_t node_cnt */
	case SELECT_JOBDATA_RESV_ID:  /* data-> uint32_t reservation_id */
		RETVAL = slurm_get_select_jobinfo(jobinfo, data_type, &tmp_32);
		if (RETVAL == 0) {
			sv_setuv(data, (UV)tmp_32);
		}
		break;
	case SELECT_JOBDATA_BLOCK_ID:      /* data-> char *bg_block_id */
	case SELECT_JOBDATA_NODES:         /* data-> char *nodes */
	case SELECT_JOBDATA_IONODES:       /* data-> char *ionodes */
	case SELECT_JOBDATA_BLRTS_IMAGE:   /* data-> char *blrtsimage */
	case SELECT_JOBDATA_LINUX_IMAGE:   /* data-> char *linuximage */
	case SELECT_JOBDATA_MLOADER_IMAGE: /* data-> char *mloaderimage */
	case SELECT_JOBDATA_RAMDISK_IMAGE: /* data-> char *ramdiskimage */
		RETVAL = slurm_get_select_jobinfo(jobinfo, data_type, &tmp_str);
		if (RETVAL == 0) {
			/* allocate one extra byte for the terminating NUL and
			 * exclude it from the SV length; free the scratch copy
			 * once sv_setpvn() has duplicated it into the SV */
			char *str;
			int len = strlen(tmp_str);
			New(0, str, len + 1, char);
			Copy(tmp_str, str, len + 1, char);
			xfree(tmp_str);
			sv_setpvn(data, str, len);
			Safefree(str);
		}
		break;
	case SELECT_JOBDATA_PTR: /* data-> select_jobinfo_t *jobinfo */
		RETVAL = slurm_get_select_jobinfo(jobinfo, data_type, &tmp_ptr);
		if (RETVAL == 0) {
			sv_setref_pv(data, "Slurm::select_jobinfo_t", (void*)tmp_ptr);
		}
		break;
	default:
		RETVAL = slurm_get_select_jobinfo(jobinfo, data_type, NULL);
	}
    OUTPUT:
	RETVAL

#
# $rc = $slurm->get_select_nodeinfo($nodeinfo, $data_type, $state, $data);
#
int
slurm_get_select_nodeinfo(slurm_t self, dynamic_plugin_data_t *nodeinfo, uint32_t data_type, uint32_t state, SV *data)
    PREINIT:
	uint16_t tmp_16;
	char *tmp_str;
	bitstr_t *tmp_bitmap;
	select_nodeinfo_t *tmp_ptr;
    CODE:
	if (self); /* this is needed to avoid a warning about unused variables.
But if we take slurm_t self out of the mix Slurm->
doesn't work, only Slurm:: */
	switch (data_type) {
	case SELECT_NODEDATA_BITMAP_SIZE: /* data-> uint16_t */
	case SELECT_NODEDATA_SUBGRP_SIZE: /* data-> uint16_t */
	case SELECT_NODEDATA_SUBCNT:      /* data-> uint16_t */
		RETVAL = slurm_get_select_nodeinfo(nodeinfo, data_type, state, &tmp_16);
		if (RETVAL == 0) {
			sv_setuv(data, (UV)tmp_16);
		}
		break;
	case SELECT_NODEDATA_BITMAP: /* data-> bitstr_t *, needs to be
				      * freed with FREE_NULL_BITMAP */
		RETVAL = slurm_get_select_nodeinfo(nodeinfo, data_type, state, &tmp_bitmap);
		if (RETVAL == 0) {
			sv_setref_pv(data, "Slurm::Bitstr", tmp_bitmap);
		}
		break;
	case SELECT_NODEDATA_STR: /* data-> char *, needs to be freed with xfree */
		RETVAL = slurm_get_select_nodeinfo(nodeinfo, data_type, state, &tmp_str);
		if (RETVAL == 0) {
			/* allocate one extra byte for the terminating NUL and
			 * exclude it from the SV length; free the scratch copy
			 * once sv_setpvn() has duplicated it into the SV */
			char *str;
			int len = strlen(tmp_str);
			New(0, str, len + 1, char);
			Copy(tmp_str, str, len + 1, char);
			xfree(tmp_str);
			sv_setpvn(data, str, len);
			Safefree(str);
		}
		break;
	case SELECT_NODEDATA_PTR: /* data-> select_nodeinfo_t *nodeinfo */
		RETVAL = slurm_get_select_nodeinfo(nodeinfo, data_type, state, &tmp_ptr);
		if (RETVAL == 0) {
			sv_setref_pv(data, "Slurm::select_nodeinfo_t", (void*)tmp_ptr);
		}
		break;
	default:
		RETVAL = slurm_get_select_nodeinfo(nodeinfo, data_type, state, NULL);
	}
    OUTPUT:
	RETVAL

######################################################################
#	SLURM PARTITION CONFIGURATION READ/PRINT/UPDATE FUNCTIONS
######################################################################
HV *
slurm_load_partitions(slurm_t self, time_t update_time=0, uint16_t show_flags=0)
    PREINIT:
	partition_info_msg_t *part_info_msg;
int rc; CODE: if (self); /* this is needed to avoid a warning about unused variables. But if we take slurm_t self out of the mix Slurm-> doesn't work, only Slurm:: */ rc = slurm_load_partitions(update_time, &part_info_msg, show_flags); if(rc == SLURM_SUCCESS) { RETVAL = newHV(); sv_2mortal((SV*)RETVAL); rc = partition_info_msg_to_hv(part_info_msg, RETVAL); slurm_free_partition_info_msg(part_info_msg); if (rc < 0) { XSRETURN_UNDEF; } } else { XSRETURN_UNDEF; } OUTPUT: RETVAL void slurm_print_partition_info_msg(slurm_t self, FILE *out, HV *part_info_msg, int one_liner=0) PREINIT: partition_info_msg_t pi_msg; INIT: if (self); /* this is needed to avoid a warning about unused variables. But if we take slurm_t self out of the mix Slurm-> doesn't work, only Slurm:: */ if (out == NULL) { Perl_croak (aTHX_ "Invalid output stream specified: FILE not found"); } if(hv_to_partition_info_msg(part_info_msg, &pi_msg) < 0) { XSRETURN_UNDEF; } C_ARGS: out, &pi_msg, one_liner CLEANUP: xfree(pi_msg.partition_array); void slurm_print_partition_info(slurm_t self, FILE *out, HV *part_info, int one_liner=0) PREINIT: partition_info_t pi; INIT: if (self); /* this is needed to avoid a warning about unused variables. But if we take slurm_t self out of the mix Slurm-> doesn't work, only Slurm:: */ if (out == NULL) { Perl_croak (aTHX_ "Invalid output stream specified: FILE not found"); } if(hv_to_partition_info(part_info, &pi) < 0) { XSRETURN_UNDEF; } C_ARGS: out, &pi, one_liner CLEANUP: xfree(pi.node_inx); char_xfree * slurm_sprint_partition_info(slurm_t self, HV *part_info, int one_liner=0) PREINIT: partition_info_t pi; CODE: if (self); /* this is needed to avoid a warning about unused variables. 
But if we take slurm_t self out of the mix Slurm-> doesn't work, only Slurm:: */ if(hv_to_partition_info(part_info, &pi) < 0) { XSRETURN_UNDEF; } RETVAL = slurm_sprint_partition_info(&pi, one_liner); xfree(pi.node_inx); OUTPUT: RETVAL int slurm_create_partition(slurm_t self, HV *part_info) PREINIT: update_part_msg_t update_msg; INIT: if (self); /* this is needed to avoid a warning about unused variables. But if we take slurm_t self out of the mix Slurm-> doesn't work, only Slurm:: */ if(hv_to_update_part_msg(part_info, &update_msg) < 0) { XSRETURN_UNDEF; } C_ARGS: &update_msg int slurm_update_partition(slurm_t self, HV *part_info) PREINIT: update_part_msg_t update_msg; INIT: if (self); /* this is needed to avoid a warning about unused variables. But if we take slurm_t self out of the mix Slurm-> doesn't work, only Slurm:: */ if(hv_to_update_part_msg(part_info, &update_msg) < 0) { XSRETURN_UNDEF; } C_ARGS: &update_msg int slurm_delete_partition(slurm_t self, HV *delete_part_msg) PREINIT: delete_part_msg_t dp_msg; INIT: if (self); /* this is needed to avoid a warning about unused variables. But if we take slurm_t self out of the mix Slurm-> doesn't work, only Slurm:: */ if(hv_to_delete_part_msg(delete_part_msg, &dp_msg) < 0) { XSRETURN_UNDEF; } C_ARGS: &dp_msg ###################################################################### # SLURM RESERVATION CONFIGURATION READ/PRINT/UPDATE FUNCTIONS ###################################################################### HV * slurm_load_reservations(slurm_t self, time_t update_time=0) PREINIT: reserve_info_msg_t *resv_info_msg = NULL; int rc; CODE: if (self); /* this is needed to avoid a warning about unused variables. 
But if we take slurm_t self out of the mix Slurm-> doesn't work, only Slurm:: */ rc = slurm_load_reservations(update_time, &resv_info_msg); if(rc == SLURM_SUCCESS) { RETVAL = newHV(); sv_2mortal((SV*)RETVAL); rc = reserve_info_msg_to_hv(resv_info_msg, RETVAL); slurm_free_reservation_info_msg(resv_info_msg); if (rc < 0) { XSRETURN_UNDEF; } } else { XSRETURN_UNDEF; } OUTPUT: RETVAL char_free * slurm_create_reservation(slurm_t self, HV *res_info) PREINIT: resv_desc_msg_t resv_msg; CODE: if (self); /* this is needed to avoid a warning about unused variables. But if we take slurm_t self out of the mix Slurm-> doesn't work, only Slurm:: */ if(hv_to_update_reservation_msg(res_info, &resv_msg) < 0) { XSRETURN_UNDEF; } RETVAL = slurm_create_reservation(&resv_msg); if (RETVAL == NULL) { XSRETURN_UNDEF; } OUTPUT: RETVAL int slurm_update_reservation(slurm_t self, HV *res_info) PREINIT: resv_desc_msg_t resv_msg; INIT: if (self); /* this is needed to avoid a warning about unused variables. But if we take slurm_t self out of the mix Slurm-> doesn't work, only Slurm:: */ if(hv_to_update_reservation_msg(res_info, &resv_msg) < 0) { XSRETURN_UNDEF; } C_ARGS: &resv_msg int slurm_delete_reservation(slurm_t self, HV *res_info) PREINIT: reservation_name_msg_t resv_name; INIT: if (self); /* this is needed to avoid a warning about unused variables. But if we take slurm_t self out of the mix Slurm-> doesn't work, only Slurm:: */ if(hv_to_delete_reservation_msg(res_info, &resv_name) < 0) { XSRETURN_UNDEF; } C_ARGS: &resv_name void slurm_print_reservation_info_msg(slurm_t self, FILE *out, HV *resv_info_msg, int one_liner=0) PREINIT: reserve_info_msg_t ri_msg; int i; INIT: if (self); /* this is needed to avoid a warning about unused variables. 
But if we take slurm_t self out of the mix Slurm-> doesn't work, only Slurm:: */ if (out == NULL) { Perl_croak (aTHX_ "Invalid output stream specified: FILE not found"); } if(hv_to_reserve_info_msg(resv_info_msg, &ri_msg) < 0) { XSRETURN_UNDEF; } C_ARGS: out, &ri_msg, one_liner CLEANUP: for (i = 0; i < ri_msg.record_count; i ++) xfree(ri_msg.reservation_array[i]); xfree(ri_msg.reservation_array); void slurm_print_reservation_info(slurm_t self, FILE *out, HV *resv_info, int one_liner=0) PREINIT: reserve_info_t ri; INIT: if (self); /* this is needed to avoid a warning about unused variables. But if we take slurm_t self out of the mix Slurm-> doesn't work, only Slurm:: */ if (out == NULL) { Perl_croak (aTHX_ "Invalid output stream specified: FILE not found"); } if(hv_to_reserve_info(resv_info, &ri) < 0) { XSRETURN_UNDEF; } C_ARGS: out, &ri, one_liner CLEANUP: xfree(ri.node_inx); char_xfree * slurm_sprint_reservation_info(slurm_t self, HV *resv_info, int one_liner=0) PREINIT: reserve_info_t ri; CODE: if (self); /* this is needed to avoid a warning about unused variables. But if we take slurm_t self out of the mix Slurm-> doesn't work, only Slurm:: */ if(hv_to_reserve_info(resv_info, &ri) < 0) { XSRETURN_UNDEF; } RETVAL = slurm_sprint_reservation_info(&ri, one_liner); xfree(ri.node_inx); OUTPUT: RETVAL ###################################################################### # SLURM PING/RECONFIGURE/SHUTDOWN FUNCTIONS ###################################################################### int slurm_ping(slurm_t self, uint16_t primary=1) INIT: if (self); /* this is needed to avoid a warning about unused variables. But if we take slurm_t self out of the mix Slurm-> doesn't work, only Slurm:: */ C_ARGS: primary int slurm_reconfigure(slurm_t self) INIT: if (self); /* this is needed to avoid a warning about unused variables. 
But if we take slurm_t self out of the mix Slurm-> doesn't work, only Slurm:: */ C_ARGS: int slurm_shutdown(slurm_t self, uint16_t options=0) INIT: if (self); /* this is needed to avoid a warning about unused variables. But if we take slurm_t self out of the mix Slurm-> doesn't work, only Slurm:: */ C_ARGS: options int slurm_takeover(slurm_t self) INIT: if (self); /* this is needed to avoid a warning about unused variables. But if we take slurm_t self out of the mix Slurm-> doesn't work, only Slurm:: */ C_ARGS: int slurm_set_debug_level(slurm_t self, uint32_t debug_level) INIT: if (self); /* this is needed to avoid a warning about unused variables. But if we take slurm_t self out of the mix Slurm-> doesn't work, only Slurm:: */ C_ARGS: debug_level int slurm_set_schedlog_level(slurm_t self, uint32_t schedlog_level) INIT: if (self); /* this is needed to avoid a warning about unused variables. But if we take slurm_t self out of the mix Slurm-> doesn't work, only Slurm:: */ C_ARGS: schedlog_level ###################################################################### # SLURM JOB SUSPEND FUNCTIONS ###################################################################### int slurm_suspend(slurm_t self, uint32_t job_id) INIT: if (self); /* this is needed to avoid a warning about unused variables. But if we take slurm_t self out of the mix Slurm-> doesn't work, only Slurm:: */ C_ARGS: job_id int slurm_resume(slurm_t self, uint32_t job_id) INIT: if (self); /* this is needed to avoid a warning about unused variables. But if we take slurm_t self out of the mix Slurm-> doesn't work, only Slurm:: */ C_ARGS: job_id int slurm_requeue(slurm_t self, uint32_t job_id, uint32_t state) INIT: if (self); /* this is needed to avoid a warning about unused variables. 
But if we take slurm_t self out of the mix Slurm-> doesn't work, only Slurm:: */ C_ARGS: job_id, state ###################################################################### # SLURM JOB CHECKPOINT FUNCTIONS ###################################################################### int slurm_checkpoint_able(slurm_t self, uint32_t job_id, uint32_t step_id, OUT time_t start_time) INIT: if (self); /* this is needed to avoid a warning about unused variables. But if we take slurm_t self out of the mix Slurm-> doesn't work, only Slurm:: */ C_ARGS: job_id, step_id, &start_time int slurm_checkpoint_disable(slurm_t self, uint32_t job_id, uint32_t step_id) INIT: if (self); /* this is needed to avoid a warning about unused variables. But if we take slurm_t self out of the mix Slurm-> doesn't work, only Slurm:: */ C_ARGS: job_id, step_id int slurm_checkpoint_enable(slurm_t self, uint32_t job_id, uint32_t step_id) INIT: if (self); /* this is needed to avoid a warning about unused variables. But if we take slurm_t self out of the mix Slurm-> doesn't work, only Slurm:: */ C_ARGS: job_id, step_id int slurm_checkpoint_create(slurm_t self, uint32_t job_id, uint32_t step_id, uint16_t max_wait, char *image_dir) INIT: if (self); /* this is needed to avoid a warning about unused variables. But if we take slurm_t self out of the mix Slurm-> doesn't work, only Slurm:: */ C_ARGS: job_id, step_id, max_wait, image_dir int slurm_checkpoint_requeue(slurm_t self, uint32_t job_id, uint16_t max_wait, char *image_dir) INIT: if (self); /* this is needed to avoid a warning about unused variables. But if we take slurm_t self out of the mix Slurm-> doesn't work, only Slurm:: */ C_ARGS: job_id, max_wait, image_dir int slurm_checkpoint_vacate(slurm_t self, uint32_t job_id, uint32_t step_id, uint16_t max_wait, char *image_dir) INIT: if (self); /* this is needed to avoid a warning about unused variables. 
But if we take slurm_t self out of the mix Slurm->
doesn't work, only Slurm:: */
    C_ARGS:
	job_id, step_id, max_wait, image_dir

int
slurm_checkpoint_restart(slurm_t self, uint32_t job_id, uint32_t step_id, uint16_t stick, char *image_dir)
    INIT:
	if (self); /* this is needed to avoid a warning about unused variables.
But if we take slurm_t self out of the mix Slurm->
doesn't work, only Slurm:: */
    C_ARGS:
	job_id, step_id, stick, image_dir

int
slurm_checkpoint_complete(slurm_t self, uint32_t job_id, uint32_t step_id, time_t begin_time, uint32_t error_code, char *error_msg)
    INIT:
	if (self); /* this is needed to avoid a warning about unused variables.
But if we take slurm_t self out of the mix Slurm->
doesn't work, only Slurm:: */
    C_ARGS:
	job_id, step_id, begin_time, error_code, error_msg

int
slurm_checkpoint_error(slurm_t self, uint32_t job_id, uint32_t step_id, OUT uint32_t error_code, OUT char *error_msg)
    PREINIT:
	char* err_msg = NULL;
    CODE:
	if (self); /* this is needed to avoid a warning about unused variables.
But if we take slurm_t self out of the mix Slurm->
doesn't work, only Slurm:: */
	error_code = SLURM_SUCCESS;
	RETVAL = slurm_checkpoint_error(job_id, step_id,
			(uint32_t *)&error_code, &err_msg);
	if (err_msg) {
		/* allocate one extra (zeroed) byte so the copied string is
		 * NUL-terminated; the previous allocation was one byte short */
		Newz(0, error_msg, strlen(err_msg) + 1, char);
		Copy(err_msg, error_msg, strlen(err_msg), char);
		xfree(err_msg);
	} else {
		error_msg = NULL;
	}
    OUTPUT:
	RETVAL

int
slurm_checkpoint_tasks(slurm_t self, uint32_t job_id, uint16_t step_id, time_t begin_time, char *image_dir, uint16_t max_wait, char *nodelist)
    INIT:
	if (self); /* this is needed to avoid a warning about unused variables.
But if we take slurm_t self out of the mix Slurm-> doesn't work, only Slurm:: */ C_ARGS: job_id, step_id, begin_time, image_dir, max_wait, nodelist ###################################################################### # SLURM TRIGGER FUNCTIONS ###################################################################### int slurm_set_trigger(slurm_t self, HV *trigger_info) PREINIT: trigger_info_t ti; INIT: if (self); /* this is needed to avoid a warning about unused variables. But if we take slurm_t self out of the mix Slurm-> doesn't work, only Slurm:: */ if(hv_to_trigger_info(trigger_info, &ti) < 0) { XSRETURN_UNDEF; } C_ARGS: &ti int slurm_clear_trigger(slurm_t self, HV *trigger_info) PREINIT: trigger_info_t ti; INIT: if (self); /* this is needed to avoid a warning about unused variables. But if we take slurm_t self out of the mix Slurm-> doesn't work, only Slurm:: */ if(hv_to_trigger_info(trigger_info, &ti) < 0) { XSRETURN_UNDEF; } C_ARGS: &ti HV * slurm_get_triggers(slurm_t self) PREINIT: trigger_info_msg_t *ti_msg; int rc; CODE: if (self); /* this is needed to avoid a warning about unused variables. But if we take slurm_t self out of the mix Slurm-> doesn't work, only Slurm:: */ rc = slurm_get_triggers(&ti_msg); if(rc == SLURM_SUCCESS) { RETVAL = newHV(); sv_2mortal((SV*)RETVAL); rc = trigger_info_msg_to_hv(ti_msg, RETVAL); slurm_free_trigger_msg(ti_msg); if (rc < 0) { XSRETURN_UNDEF; } } else { XSRETURN_UNDEF; } OUTPUT: RETVAL int slurm_pull_trigger(slurm_t self, HV *trigger_info) PREINIT: trigger_info_t ti; INIT: if (self); /* this is needed to avoid a warning about unused variables. 
But if we take slurm_t self out of the mix Slurm->
doesn't work, only Slurm:: */
	if (hv_to_trigger_info(trigger_info, &ti) < 0) {
		XSRETURN_UNDEF;
	}
    C_ARGS:
	&ti


######################################################################
#	SLURM HOSTLIST FUNCTIONS
######################################################################
MODULE=Slurm PACKAGE=Slurm::Hostlist PREFIX=slurm_hostlist_

hostlist_t
slurm_hostlist_create(char* hostlist)

int
slurm_hostlist_count(hostlist_t hl)

int
slurm_hostlist_find(hostlist_t hl, char* hostname)

int
slurm_hostlist_push(hostlist_t hl, char* hosts)

int
slurm_hostlist_push_host(hostlist_t hl, char* host)

char_xfree *
slurm_hostlist_ranged_string(hostlist_t hl)
    CODE:
	RETVAL = slurm_hostlist_ranged_string_xmalloc(hl);
	if (RETVAL == NULL) {
		XSRETURN_UNDEF;
	}
    OUTPUT:
	RETVAL

char_free *
slurm_hostlist_shift(hostlist_t hl = NULL)
    CODE:
	RETVAL = slurm_hostlist_shift(hl);
	if (RETVAL == NULL) {
		XSRETURN_UNDEF;
	}
    OUTPUT:
	RETVAL

void
slurm_hostlist_uniq(hostlist_t hl)

void
slurm_hostlist_DESTROY(hostlist_t hl)
    CODE:
	slurm_hostlist_destroy(hl);

# TODO: add some non-exported functions

######################################################################
#	LIST FUNCTIONS
######################################################################
MODULE = Slurm PACKAGE = Slurm::List PREFIX=slurm_list_

#void
#slurm_list_append(List l, void *x)

int
slurm_list_count(List l)

int
slurm_list_is_empty(List l)

#List
#slurm_list_create(ListDelF f)

#void
#slurm_list_sort(List l, ListCmpF f)

void
slurm_list_DESTROY(List l)
    CODE:
	slurm_list_destroy(l);

##################################################################################
MODULE = Slurm PACKAGE = Slurm::ListIterator PREFIX=slurm_list_iterator_

#void *
#slurm_list_iterator_find(ListIterator i, ListFindF f, void *key)
#	CODE:
#		RETVAL = slurm_list_find(i, f, key)
#	OUTPUT:
#		RETVAL

ListIterator
slurm_list_iterator_create(List l)

void
slurm_list_iterator_reset(ListIterator i)

#void *
#slurm_list_iterator_next(ListIterator i) # CODE: # RETVAL = slurm_list_next(i) # OUTPUT: # RETVAL void slurm_list_iterator_DESTROY(ListIterator i) CODE: slurm_list_iterator_destroy(i); ###################################################################### # BITSTRING FUNCTIONS ###################################################################### MODULE = Slurm PACKAGE = Slurm::Bitstr PREFIX=slurm_bit_ # # $bitmap = Slurm::Bitstr::alloc($nbits); bitstr_t * slurm_bit_alloc(bitoff_t nbits) POSTCALL: if(RETVAL == NULL) { XSRETURN_UNDEF; } bitstr_t * slurm_bit_copy(bitstr_t *b) POSTCALL: if(RETVAL == NULL) { XSRETURN_UNDEF; } int slurm_bit_test(bitstr_t *b, bitoff_t bit) void slurm_bit_set(bitstr_t *b, bitoff_t bit) void slurm_bit_clear(bitstr_t *b, bitoff_t bit) void slurm_bit_nset(bitstr_t *b, bitoff_t start, bitoff_t stop) void slurm_bit_nclear(bitstr_t *b, bitoff_t start, bitoff_t stop) bitoff_t slurm_bit_ffc(bitstr_t *b) bitoff_t slurm_bit_ffs(bitstr_t *b) bitoff_t slurm_bit_fls(bitstr_t *b) bitoff_t slurm_bit_nffc(bitstr_t *b, int n) bitoff_t slurm_bit_nffs(bitstr_t *b, int n) bitoff_t slurm_bit_noc(bitstr_t *b, int n, int seed) bitoff_t slurm_bit_size(bitstr_t *b) void slurm_bit_and(bitstr_t *b1, bitstr_t *b2) void slurm_bit_not(bitstr_t *b) void slurm_bit_or(bitstr_t *b1, bitstr_t *b2) void slurm_bit_copybits(bitstr_t *b1, bitstr_t *b2) int slurm_bit_set_count(bitstr_t *b) int slurm_bit_set_count_range(bitstr_t *b, int start, int end) int slurm_bit_clear_count(bitstr_t *b) int slurm_bit_nset_max_count(bitstr_t *b) bitstr_t * slurm_bit_rotate_copy(bitstr_t *b, int n, bitoff_t nbits) POSTCALL: if(RETVAL == NULL) { XSRETURN_UNDEF; } void slurm_bit_rotate(bitstr_t *b, int n) # $str = $bitmap->fmt(); char * slurm_bit_fmt(bitstr_t *b) PREINIT: int len = 1, bits; char *tmp_str; CODE: bits = slurm_bit_size(b); while(bits > 0) { bits /= 10; len ++; } bits = slurm_bit_size(b); len *= bits; New(0, tmp_str, len, char); slurm_bit_fmt(tmp_str, len, b); len = strlen(tmp_str) 
	+ 1;
	New(0, RETVAL, len, char);
	Copy(tmp_str, RETVAL, len, char);
	Safefree(tmp_str);
    OUTPUT:
	RETVAL

int
slurm_bit_unfmt(bitstr_t *b, char *str)

# $array = Slurm::Bitstr::fmt2int($str);
AV *
slurm_bit_fmt2int(char *str)
    PREINIT:
	int i = 0, *array;
    CODE:
	array = slurm_bitfmt2int(str);
	RETVAL = newAV();
	while (array[i] != -1) {
		av_store_int(RETVAL, i, array[i]);
		i ++;
	}
	xfree(array);
    OUTPUT:
	RETVAL

char *
slurm_bit_fmt_hexmask(bitstr_t *b)
    PREINIT:
	char *tmp_str;
	int len;
    CODE:
	tmp_str = slurm_bit_fmt_hexmask(b);
	len = strlen(tmp_str) + 1;
	New(0, RETVAL, len, char);
	Copy(tmp_str, RETVAL, len, char);
	xfree(tmp_str);
    OUTPUT:
	RETVAL

# XXX: only bits set in "str" are copied to "b".
# bits set originally in "b" stay set after unfmt.
# maybe this is a bug
int
slurm_bit_unfmt_hexmask(bitstr_t *b, char *str)

char *
slurm_bit_fmt_binmask(bitstr_t *b)
    PREINIT:
	char *tmp_str;
	int len;
    CODE:
	tmp_str = slurm_bit_fmt_binmask(b);
	len = strlen(tmp_str) + 1;
	New(0, RETVAL, len, char);
	Copy(tmp_str, RETVAL, len, char);
	xfree(tmp_str);
    OUTPUT:
	RETVAL

# ditto
int
slurm_bit_unfmt_binmask(bitstr_t *b, char *str)

void
slurm_bit_fill_gaps(bitstr_t *b)

int
slurm_bit_super_set(bitstr_t *b1, bitstr_t *b2)

int
slurm_bit_overlap(bitstr_t *b1, bitstr_t *b2)

int
slurm_bit_equal(bitstr_t *b1, bitstr_t *b2)

bitstr_t *
slurm_bit_pick_cnt(bitstr_t *b, bitoff_t nbits)
    POSTCALL:
	if(RETVAL == NULL) {
		XSRETURN_UNDEF;
	}

bitoff_t
slurm_bit_get_bit_num(bitstr_t *b, int pos)

int
slurm_bit_get_pos_num(bitstr_t *b, bitoff_t pos)

void
slurm_bit_DESTROY(bitstr_t *b)
    CODE:
	FREE_NULL_BITMAP(b);

slurm-slurm-15-08-7-1/contribs/perlapi/libslurm/perl/alloc.c

/*
 * alloc.c - convert data between resource allocation related messages and perl HVs
 */

#include <EXTERN.h>
#include <perl.h>
#include <XSUB.h>
#include <slurm/slurm.h>

#define NEED_sv_2pv_flags_GLOBAL
#include "ppport.h"

#include "slurm-perl.h"

static void _free_environment(char** environ);

/*
 * convert perl HV to job_desc_msg_t
 *
return 0 on success, -1 on failure */ int hv_to_job_desc_msg(HV *hv, job_desc_msg_t *job_desc) { SV **svp; HV *environ_hv; AV *argv_av; SV *val; char *env_key, *env_val; I32 klen; STRLEN vlen; int num_keys, i; slurm_init_job_desc_msg(job_desc); FETCH_FIELD(hv, job_desc, account, charp, FALSE); FETCH_FIELD(hv, job_desc, acctg_freq, charp, FALSE); FETCH_FIELD(hv, job_desc, alloc_node, charp, FALSE); FETCH_FIELD(hv, job_desc, alloc_resp_port, uint16_t, FALSE); FETCH_FIELD(hv, job_desc, alloc_sid, uint32_t, FALSE); /* argv, argc */ if((svp = hv_fetch(hv, "argv", 4, FALSE))) { if(SvROK(*svp) && SvTYPE(SvRV(*svp)) == SVt_PVAV) { argv_av = (AV*)SvRV(*svp); job_desc->argc = av_len(argv_av) + 1; if (job_desc->argc > 0) { Newz(0, job_desc->argv, (int32_t)(job_desc->argc + 1), char*); for(i = 0; i < job_desc->argc; i ++) { if((svp = av_fetch(argv_av, i, FALSE))) *(job_desc->argv + i) = (char*) SvPV_nolen(*svp); else { Perl_warn(aTHX_ "error fetching `argv' of job descriptor"); free_job_desc_msg_memory(job_desc); return -1; } } } } else { Perl_warn(aTHX_ "`argv' of job descriptor is not an array reference, ignored"); } } FETCH_FIELD(hv, job_desc, array_inx, charp, FALSE); FETCH_FIELD(hv, job_desc, begin_time, time_t, FALSE); FETCH_FIELD(hv, job_desc, ckpt_interval, uint16_t, FALSE); FETCH_FIELD(hv, job_desc, ckpt_dir, charp, FALSE); FETCH_FIELD(hv, job_desc, comment, charp, FALSE); FETCH_FIELD(hv, job_desc, contiguous, uint16_t, FALSE); FETCH_FIELD(hv, job_desc, cpu_bind, charp, FALSE); FETCH_FIELD(hv, job_desc, cpu_bind_type, uint16_t, FALSE); FETCH_FIELD(hv, job_desc, dependency, charp, FALSE); FETCH_FIELD(hv, job_desc, end_time, time_t, FALSE); /* environment, env_size */ if((svp = hv_fetch(hv, "environment", 11, FALSE))) { if(SvROK(*svp) && SvTYPE(SvRV(*svp)) == SVt_PVHV) { environ_hv = (HV*)SvRV(*svp); num_keys = HvKEYS(environ_hv); job_desc->env_size = num_keys; Newz(0, job_desc->environment, num_keys + 1, char*); hv_iterinit(environ_hv); i = 0; while((val = 
hv_iternextsv(environ_hv, &env_key, &klen))) { env_val = SvPV(val, vlen); Newz(0, (*(job_desc->environment + i)), klen + vlen + 2, char); sprintf(*(job_desc->environment + i), "%s=%s", env_key, env_val); i ++; } } else { Perl_warn(aTHX_ "`environment' of job descriptor is not a hash reference, ignored"); } } FETCH_FIELD(hv, job_desc, exc_nodes, charp, FALSE); FETCH_FIELD(hv, job_desc, features, charp, FALSE); FETCH_FIELD(hv, job_desc, gres, charp, FALSE); FETCH_FIELD(hv, job_desc, group_id, uint32_t, FALSE); FETCH_FIELD(hv, job_desc, immediate, uint16_t, FALSE); FETCH_FIELD(hv, job_desc, job_id, uint32_t, FALSE); FETCH_FIELD(hv, job_desc, kill_on_node_fail, uint16_t, FALSE); FETCH_FIELD(hv, job_desc, licenses, charp, FALSE); FETCH_FIELD(hv, job_desc, mail_type, uint16_t, FALSE); FETCH_FIELD(hv, job_desc, mail_user, charp, FALSE); FETCH_FIELD(hv, job_desc, mem_bind, charp, FALSE); FETCH_FIELD(hv, job_desc, mem_bind_type, uint16_t, FALSE); FETCH_FIELD(hv, job_desc, name, charp, FALSE); FETCH_FIELD(hv, job_desc, network, charp, FALSE); FETCH_FIELD(hv, job_desc, nice, uint16_t, FALSE); FETCH_FIELD(hv, job_desc, num_tasks, uint32_t, FALSE); FETCH_FIELD(hv, job_desc, open_mode, uint8_t, FALSE); FETCH_FIELD(hv, job_desc, other_port, uint16_t, FALSE); FETCH_FIELD(hv, job_desc, overcommit, uint16_t, FALSE); FETCH_FIELD(hv, job_desc, partition, charp, FALSE); FETCH_FIELD(hv, job_desc, plane_size, uint16_t, FALSE); FETCH_FIELD(hv, job_desc, priority, uint32_t, FALSE); FETCH_FIELD(hv, job_desc, profile, uint32_t, FALSE); FETCH_FIELD(hv, job_desc, qos, charp, FALSE); FETCH_FIELD(hv, job_desc, resp_host, charp, FALSE); FETCH_FIELD(hv, job_desc, req_nodes, charp, FALSE); FETCH_FIELD(hv, job_desc, requeue, uint16_t, FALSE); FETCH_FIELD(hv, job_desc, reservation, charp, FALSE); FETCH_FIELD(hv, job_desc, script, charp, FALSE); FETCH_FIELD(hv, job_desc, shared, uint16_t, FALSE); /* spank_job_env, spank_job_env_size */ if((svp = hv_fetch(hv, "spank_job_env", 13, FALSE))) { 
if(SvROK(*svp) && SvTYPE(SvRV(*svp)) == SVt_PVHV) { environ_hv = (HV*)SvRV(*svp); num_keys = HvKEYS(environ_hv); job_desc->spank_job_env_size = num_keys; Newz(0, job_desc->spank_job_env, num_keys + 1, char*); hv_iterinit(environ_hv); i = 0; while((val = hv_iternextsv(environ_hv, &env_key, &klen))) { env_val = SvPV(val, vlen); Newz(0, (*(job_desc->spank_job_env + i)), klen + vlen + 2, char); sprintf(*(job_desc->spank_job_env + i), "%s=%s", env_key, env_val); i ++; } } else { Perl_warn(aTHX_ "`spank_job_env' of job descriptor is not a hash reference, ignored"); } } FETCH_FIELD(hv, job_desc, task_dist, uint16_t, FALSE); FETCH_FIELD(hv, job_desc, time_limit, uint32_t, FALSE); FETCH_FIELD(hv, job_desc, time_min, uint32_t, FALSE); FETCH_FIELD(hv, job_desc, user_id, uint32_t, FALSE); FETCH_FIELD(hv, job_desc, wait_all_nodes, uint16_t, FALSE); FETCH_FIELD(hv, job_desc, warn_signal, uint16_t, FALSE); FETCH_FIELD(hv, job_desc, warn_time, uint16_t, FALSE); FETCH_FIELD(hv, job_desc, work_dir, charp, FALSE); /* job constraints: */ FETCH_FIELD(hv, job_desc, cpu_freq_min, uint32_t, FALSE); FETCH_FIELD(hv, job_desc, cpu_freq_max, uint32_t, FALSE); FETCH_FIELD(hv, job_desc, cpu_freq_gov, uint32_t, FALSE); FETCH_FIELD(hv, job_desc, cpus_per_task, uint16_t, FALSE); FETCH_FIELD(hv, job_desc, min_cpus, uint32_t, FALSE); FETCH_FIELD(hv, job_desc, max_cpus, uint32_t, FALSE); FETCH_FIELD(hv, job_desc, min_nodes, uint32_t, FALSE); FETCH_FIELD(hv, job_desc, max_nodes, uint32_t, FALSE); FETCH_FIELD(hv, job_desc, sockets_per_node, uint16_t, FALSE); FETCH_FIELD(hv, job_desc, cores_per_socket, uint16_t, FALSE); FETCH_FIELD(hv, job_desc, threads_per_core, uint16_t, FALSE); FETCH_FIELD(hv, job_desc, ntasks_per_node, uint16_t, FALSE); FETCH_FIELD(hv, job_desc, ntasks_per_socket, uint16_t, FALSE); FETCH_FIELD(hv, job_desc, ntasks_per_core, uint16_t, FALSE); FETCH_FIELD(hv, job_desc, pn_min_cpus, uint16_t, FALSE); FETCH_FIELD(hv, job_desc, pn_min_memory, uint32_t, FALSE); FETCH_FIELD(hv, job_desc, 
pn_min_tmp_disk, uint32_t, FALSE); /* geometry */ if((svp = hv_fetch(hv, "geometry", 8, FALSE))) { AV *av; if (!SvROK(*svp) || SvTYPE(SvRV(*svp)) != SVt_PVAV) { Perl_warn(aTHX_ "`geometry' is not an array reference in job descriptor"); free_job_desc_msg_memory(job_desc); return -1; } av = (AV*)SvRV(*svp); for(i = 0; i < HIGHEST_DIMENSIONS; i ++) { if(! (svp = av_fetch(av, i, FALSE))) { Perl_warn(aTHX_ "geometry of dimension %d missing in job descriptor", i); free_job_desc_msg_memory(job_desc); return -1; } job_desc->geometry[i] = SvUV(*svp); } } if((svp = hv_fetch(hv, "conn_type", 9, FALSE))) { AV *av; if (!SvROK(*svp) || SvTYPE(SvRV(*svp)) != SVt_PVAV) { Perl_warn(aTHX_ "`conn_type' is not an array reference in job descriptor"); free_job_desc_msg_memory(job_desc); return -1; } av = (AV*)SvRV(*svp); for(i = 0; i < HIGHEST_DIMENSIONS; i ++) { if(! (svp = av_fetch(av, i, FALSE))) { Perl_warn(aTHX_ "conn_type of dimension %d missing in job descriptor", i); free_job_desc_msg_memory(job_desc); return -1; } job_desc->conn_type[i] = SvUV(*svp); } } FETCH_FIELD(hv, job_desc, reboot, uint16_t, FALSE); FETCH_FIELD(hv, job_desc, rotate, uint16_t, FALSE); FETCH_FIELD(hv, job_desc, blrtsimage, charp, FALSE); FETCH_FIELD(hv, job_desc, linuximage, charp, FALSE); FETCH_FIELD(hv, job_desc, mloaderimage, charp, FALSE); FETCH_FIELD(hv, job_desc, ramdiskimage, charp, FALSE); FETCH_PTR_FIELD(hv, job_desc, select_jobinfo, "Slurm::dynamic_plugin_data_t", FALSE); FETCH_FIELD(hv, job_desc, std_err, charp, FALSE); FETCH_FIELD(hv, job_desc, std_in, charp, FALSE); FETCH_FIELD(hv, job_desc, std_out, charp, FALSE); FETCH_FIELD(hv, job_desc, wckey, charp, FALSE); return 0; } /* * free allocated environment variable memory for job_desc_msg_t */ static void _free_environment(char** environ) { int i; if(! 
environ) return; for(i = 0; *(environ + i) ; i ++) Safefree(*(environ + i)); Safefree(environ); } /* * free allocate memory for job_desc_msg_t */ void free_job_desc_msg_memory(job_desc_msg_t *msg) { if (msg->argv) Safefree (msg->argv); _free_environment(msg->environment); _free_environment(msg->spank_job_env); } /* * convert resource_allocation_resource_msg_t to perl HV */ int resource_allocation_response_msg_to_hv(resource_allocation_response_msg_t *resp_msg, HV *hv) { AV *av; int i; STORE_FIELD(hv, resp_msg, job_id, uint32_t); if(resp_msg->node_list) STORE_FIELD(hv, resp_msg, node_list, charp); STORE_FIELD(hv, resp_msg, num_cpu_groups, uint16_t); if(resp_msg->num_cpu_groups) { av = newAV(); for(i = 0; i < resp_msg->num_cpu_groups; i ++) { av_store_uint16_t(av, i, resp_msg->cpus_per_node[i]); } hv_store_sv(hv, "cpus_per_node", newRV_noinc((SV*)av)); av = newAV(); for(i = 0; i < resp_msg->num_cpu_groups; i ++) { av_store_uint32_t(av, i, resp_msg->cpu_count_reps[i]); } hv_store_sv(hv, "cpu_count_reps", newRV_noinc((SV*)av)); } STORE_FIELD(hv, resp_msg, node_cnt, uint32_t); STORE_FIELD(hv, resp_msg, error_code, uint32_t); STORE_PTR_FIELD(hv, resp_msg, select_jobinfo, "Slurm::dynamic_plugin_data_t"); return 0; } /* * convert job_alloc_info_response_msg_t to perl HV */ int job_alloc_info_response_msg_to_hv(job_alloc_info_response_msg_t *resp_msg, HV* hv) { AV* av; int i; STORE_FIELD(hv, resp_msg, job_id, uint32_t); if(resp_msg->node_list) STORE_FIELD(hv, resp_msg, node_list, charp); STORE_FIELD(hv, resp_msg, num_cpu_groups, uint16_t); if(resp_msg->num_cpu_groups) { av = newAV(); for(i = 0; i < resp_msg->num_cpu_groups; i ++) { av_store_uint16_t(av, i, resp_msg->cpus_per_node[i]); } hv_store_sv(hv, "cpus_per_node", newRV_noinc((SV*)av)); av = newAV(); for(i = 0; i < resp_msg->num_cpu_groups; i ++) { av_store_uint32_t(av, i, resp_msg->cpu_count_reps[i]); } hv_store_sv(hv, "cpu_count_reps", newRV_noinc((SV*)av)); } STORE_FIELD(hv, resp_msg, node_cnt, uint32_t); 
if(resp_msg->node_cnt) { av = newAV(); for(i = 0; i < resp_msg->node_cnt; i ++) { /* XXX: This is a packed inet address */ av_store(av, i, newSVpvn((char*)(resp_msg->node_addr + i), sizeof(slurm_addr_t))); } hv_store_sv(hv, "node_addr", newRV_noinc((SV*)av)); } STORE_FIELD(hv, resp_msg, error_code, uint32_t); STORE_PTR_FIELD(hv, resp_msg, select_jobinfo, "Slurm::dynamic_plugin_data_t"); return 0; } /* * convert submit_response_msg_t to perl HV */ int submit_response_msg_to_hv(submit_response_msg_t *resp_msg, HV* hv) { STORE_FIELD(hv, resp_msg, job_id, uint32_t); STORE_FIELD(hv, resp_msg, step_id, uint32_t); STORE_FIELD(hv, resp_msg, error_code, uint32_t); return 0; } /* * convert job_sbcast_cred_msg_t to perl HV */ int job_sbcast_cred_msg_to_hv(job_sbcast_cred_msg_t *msg, HV *hv) { AV *av; int i; STORE_FIELD(hv, msg, job_id, uint32_t); STORE_FIELD(hv, msg, node_cnt, uint32_t); if(msg->node_cnt) { av = newAV(); for(i = 0; i < msg->node_cnt; i ++) { /* XXX: This is a packed inet address */ av_store(av, i, newSVpvn((char*)(msg->node_addr + i), sizeof(slurm_addr_t))); } hv_store_sv(hv, "node_addr", newRV_noinc((SV*)av)); } if (msg->node_list) STORE_FIELD(hv, msg, node_list, charp); STORE_PTR_FIELD(hv, msg, sbcast_cred, "Slurm::sbcast_cred_t"); return 0; } int srun_job_complete_msg_to_hv(srun_job_complete_msg_t *msg, HV *hv) { STORE_FIELD(hv, msg, job_id, uint32_t); STORE_FIELD(hv, msg, step_id, uint32_t); return 0; } int srun_timeout_msg_to_hv(srun_timeout_msg_t *msg, HV *hv) { STORE_FIELD(hv, msg, job_id, uint32_t); STORE_FIELD(hv, msg, step_id, uint32_t); STORE_FIELD(hv, msg, timeout, time_t); return 0; } /********** pending_callback for slurm_allocate_resources_blocking() **********/ static SV* sarb_cb_sv = NULL; void set_sarb_cb(SV *callback) { if (callback == NULL) { if (sarb_cb_sv != NULL) sv_setsv(sarb_cb_sv, &PL_sv_undef); } else { if (sarb_cb_sv == NULL) sarb_cb_sv = newSVsv(callback); else sv_setsv(sarb_cb_sv, callback); } } void sarb_cb(uint32_t job_id) { 
	dSP;

	if (sarb_cb_sv == NULL || sarb_cb_sv == &PL_sv_undef)
		return;

	ENTER;
	SAVETMPS;
	PUSHMARK(SP);
	XPUSHs(sv_2mortal(newSVuv(job_id)));
	PUTBACK;

	call_sv(sarb_cb_sv, G_VOID | G_DISCARD);

	FREETMPS;
	LEAVE;
}

/********** convert functions for callbacks **********/

static int
srun_ping_msg_to_hv(srun_ping_msg_t *msg, HV *hv)
{
	STORE_FIELD(hv, msg, job_id, uint32_t);
	STORE_FIELD(hv, msg, step_id, uint32_t);
	return 0;
}

static int
srun_user_msg_to_hv(srun_user_msg_t *msg, HV *hv)
{
	STORE_FIELD(hv, msg, job_id, uint32_t);
	STORE_FIELD(hv, msg, msg, charp);
	return 0;
}

static int
srun_node_fail_msg_to_hv(srun_node_fail_msg_t *msg, HV *hv)
{
	STORE_FIELD(hv, msg, job_id, uint32_t);
	STORE_FIELD(hv, msg, nodelist, charp);
	STORE_FIELD(hv, msg, step_id, uint32_t);
	return 0;
}

/*********** callbacks for slurm_allocation_msg_thr_create() **********/

static SV *ping_cb_sv = NULL;
static SV *job_complete_cb_sv = NULL;
static SV *timeout_cb_sv = NULL;
static SV *user_msg_cb_sv = NULL;
static SV *node_fail_cb_sv = NULL;

void
set_sacb(HV *callbacks)
{
	SV **svp, *cb;

	if (callbacks == NULL) {
		if (ping_cb_sv != NULL)
			sv_setsv(ping_cb_sv, &PL_sv_undef);
		if (job_complete_cb_sv != NULL)
			sv_setsv(job_complete_cb_sv, &PL_sv_undef);
		if (timeout_cb_sv != NULL)
			sv_setsv(timeout_cb_sv, &PL_sv_undef);
		if (user_msg_cb_sv != NULL)
			sv_setsv(user_msg_cb_sv, &PL_sv_undef);
		if (node_fail_cb_sv != NULL)
			sv_setsv(node_fail_cb_sv, &PL_sv_undef);
		return;
	}

	svp = hv_fetch(callbacks, "ping", 4, FALSE);
	cb = svp ? *svp : &PL_sv_undef;
	if (ping_cb_sv == NULL) {
		ping_cb_sv = newSVsv(cb);
	} else {
		sv_setsv(ping_cb_sv, cb);
	}

	svp = hv_fetch(callbacks, "job_complete", 12, FALSE);
	cb = svp ? *svp : &PL_sv_undef;
	if (job_complete_cb_sv == NULL) {
		job_complete_cb_sv = newSVsv(cb);
	} else {
		sv_setsv(job_complete_cb_sv, cb);
	}

	svp = hv_fetch(callbacks, "timeout", 7, FALSE);
	cb = svp ? *svp : &PL_sv_undef;
	if (timeout_cb_sv == NULL) {
		timeout_cb_sv = newSVsv(cb);
	} else {
		sv_setsv(timeout_cb_sv, cb);
	}

	svp = hv_fetch(callbacks, "user_msg", 8, FALSE);
	cb = svp ? *svp : &PL_sv_undef;
	if (user_msg_cb_sv == NULL) {
		user_msg_cb_sv = newSVsv(cb);
	} else {
		sv_setsv(user_msg_cb_sv, cb);
	}

	svp = hv_fetch(callbacks, "node_fail", 9, FALSE);
	cb = svp ? *svp : &PL_sv_undef;
	if (node_fail_cb_sv == NULL) {
		node_fail_cb_sv = newSVsv(cb);
	} else {
		sv_setsv(node_fail_cb_sv, cb);
	}
}

static void
ping_cb(srun_ping_msg_t *msg)
{
	HV *hv;
	dSP;

	if (ping_cb_sv == NULL || ping_cb_sv == &PL_sv_undef) {
		return;
	}

	hv = newHV();
	if (srun_ping_msg_to_hv(msg, hv) < 0) {
		Perl_warn(aTHX_ "failed to convert srun_ping_msg_t to perl HV");
		SvREFCNT_dec(hv);
		return;
	}

	ENTER;
	SAVETMPS;
	PUSHMARK(SP);
	XPUSHs(sv_2mortal(newRV_noinc((SV*)hv)));
	PUTBACK;

	call_sv(ping_cb_sv, G_VOID);

	FREETMPS;
	LEAVE;
}

static void
job_complete_cb(srun_job_complete_msg_t *msg)
{
	HV *hv;
	dSP;

	if (job_complete_cb_sv == NULL || job_complete_cb_sv == &PL_sv_undef) {
		return;
	}

	hv = newHV();
	if (srun_job_complete_msg_to_hv(msg, hv) < 0) {
		Perl_warn(aTHX_ "failed to convert srun_job_complete_msg_t to perl HV");
		SvREFCNT_dec(hv);
		return;
	}

	ENTER;
	SAVETMPS;
	PUSHMARK(SP);
	XPUSHs(sv_2mortal(newRV_noinc((SV*)hv)));
	PUTBACK;

	call_sv(job_complete_cb_sv, G_VOID);

	FREETMPS;
	LEAVE;
}

static void
timeout_cb(srun_timeout_msg_t *msg)
{
	HV *hv;
	dSP;

	if (timeout_cb_sv == NULL || timeout_cb_sv == &PL_sv_undef) {
		return;
	}

	hv = newHV();
	if (srun_timeout_msg_to_hv(msg, hv) < 0) {
		Perl_warn(aTHX_ "failed to convert srun_timeout_msg_t to perl HV");
		SvREFCNT_dec(hv);
		return;
	}

	ENTER;
	SAVETMPS;
	PUSHMARK(SP);
	XPUSHs(sv_2mortal(newRV_noinc((SV*)hv)));
	PUTBACK;

	call_sv(timeout_cb_sv, G_VOID);

	FREETMPS;
	LEAVE;
}

static void
user_msg_cb(srun_user_msg_t *msg)
{
	HV *hv;
	dSP;

	if (user_msg_cb_sv == NULL || user_msg_cb_sv == &PL_sv_undef) {
		return;
	}

	hv = newHV();
	if (srun_user_msg_to_hv(msg, hv) < 0) {
		Perl_warn(aTHX_ "failed to convert srun_user_msg_t to perl HV");
		SvREFCNT_dec(hv);
		return;
	}

	ENTER;
	SAVETMPS;
	PUSHMARK(SP);
	XPUSHs(sv_2mortal(newRV_noinc((SV*)hv)));
	PUTBACK;

	call_sv(user_msg_cb_sv, G_VOID);

	FREETMPS;
	LEAVE;
}

static void
node_fail_cb(srun_node_fail_msg_t *msg)
{
	HV *hv;
	dSP;

	if (node_fail_cb_sv == NULL || node_fail_cb_sv == &PL_sv_undef) {
		return;
	}

	hv = newHV();
	if (srun_node_fail_msg_to_hv(msg, hv) < 0) {
		Perl_warn(aTHX_ "failed to convert srun_node_fail_msg_t to perl HV");
		SvREFCNT_dec(hv);
		return;
	}

	ENTER;
	SAVETMPS;
	PUSHMARK(SP);
	XPUSHs(sv_2mortal(newRV_noinc((SV*)hv)));
	PUTBACK;

	call_sv(node_fail_cb_sv, G_VOID);

	FREETMPS;
	LEAVE;
}

slurm_allocation_callbacks_t sacb = {
	ping_cb,
	job_complete_cb,
	timeout_cb,
	user_msg_cb,
	node_fail_cb
};

slurm-slurm-15-08-7-1/contribs/perlapi/libslurm/perl/bitstr.h

/*
 * slurm_bit_*() functions are exported in libslurm.
 * But the prototypes are not listed in slurm.h
 */
/* copied and modified from src/common/bitstring.h */

/* compat with Vixie macros */
bitstr_t *slurm_bit_alloc(bitoff_t nbits);
int slurm_bit_test(bitstr_t *b, bitoff_t bit);
void slurm_bit_set(bitstr_t *b, bitoff_t bit);
void slurm_bit_clear(bitstr_t *b, bitoff_t bit);
void slurm_bit_nclear(bitstr_t *b, bitoff_t start, bitoff_t stop);
void slurm_bit_nset(bitstr_t *b, bitoff_t start, bitoff_t stop);

/* changed interface from Vixie macros */
bitoff_t slurm_bit_ffc(bitstr_t *b);
bitoff_t slurm_bit_ffs(bitstr_t *b);

/* new */
bitoff_t slurm_bit_nffs(bitstr_t *b, int n);
bitoff_t slurm_bit_nffc(bitstr_t *b, int n);
bitoff_t slurm_bit_noc(bitstr_t *b, int n, int seed);
void slurm_bit_free(bitstr_t *b);
bitstr_t *slurm_bit_realloc(bitstr_t *b, bitoff_t nbits);
bitoff_t slurm_bit_size(bitstr_t *b);
void slurm_bit_and(bitstr_t *b1, bitstr_t *b2);
void slurm_bit_not(bitstr_t *b);
void slurm_bit_or(bitstr_t *b1, bitstr_t *b2);
int slurm_bit_set_count(bitstr_t *b);
int slurm_bit_set_count_range(bitstr_t *b, int
			      start, int end);
int slurm_bit_clear_count(bitstr_t *b);
int slurm_bit_nset_max_count(bitstr_t *b);
bitstr_t *slurm_bit_rotate_copy(bitstr_t *b1, int n, bitoff_t nbits);
void slurm_bit_rotate(bitstr_t *b1, int n);
char *slurm_bit_fmt(char *str, int len, bitstr_t *b);
int slurm_bit_unfmt(bitstr_t *b, char *str);
int *slurm_bitfmt2int(char *bit_str_ptr);
char *slurm_bit_fmt_hexmask(bitstr_t *b);
int slurm_bit_unfmt_hexmask(bitstr_t *b, const char *str);
char *slurm_bit_fmt_binmask(bitstr_t *b);
int slurm_bit_unfmt_binmask(bitstr_t *b, const char *str);
bitoff_t slurm_bit_fls(bitstr_t *b);
void slurm_bit_fill_gaps(bitstr_t *b);
int slurm_bit_super_set(bitstr_t *b1, bitstr_t *b2);
int slurm_bit_overlap(bitstr_t *b1, bitstr_t *b2);
int slurm_bit_equal(bitstr_t *b1, bitstr_t *b2);
void slurm_bit_copybits(bitstr_t *dest, bitstr_t *src);
bitstr_t *slurm_bit_copy(bitstr_t *b);
bitstr_t *slurm_bit_pick_cnt(bitstr_t *b, bitoff_t nbits);
bitoff_t slurm_bit_get_bit_num(bitstr_t *b, int pos);
int slurm_bit_get_pos_num(bitstr_t *b, bitoff_t pos);

#ifdef FREE_NULL_BITMAP
#undef FREE_NULL_BITMAP
#endif
#define FREE_NULL_BITMAP(_X)			\
	do {					\
		if (_X) slurm_bit_free (_X);	\
		_X = NULL;			\
	} while (0)

slurm-slurm-15-08-7-1/contribs/perlapi/libslurm/perl/block.c

/*
 * block.c - convert data between block related messages and perl HVs
 */

#include <EXTERN.h>
#include <perl.h>
#include <XSUB.h>
#include "ppport.h"

#include <slurm/slurm.h>
#include "slurm-perl.h"

/*
 * convert block_info_t to perl HV
 */
int
block_info_to_hv(block_info_t *block_info, HV *hv)
{
	int dim;
	AV* av = NULL;

	if (block_info->bg_block_id)
		STORE_FIELD(hv, block_info, bg_block_id, charp);
	if (block_info->blrtsimage)
		STORE_FIELD(hv, block_info, blrtsimage, charp);
	if (block_info->mp_inx) {
		int j;
		av = newAV();
		for (j = 0; ; j += 2) {
			if (block_info->mp_inx[j] == -1)
				break;
			av_store(av, j, newSVuv(block_info->mp_inx[j]));
			av_store(av, j+1, newSVuv(block_info->mp_inx[j+1]));
		}
		hv_store_sv(hv,
			    "mp_inx", newRV_noinc((SV*)av));
	}
	av = newAV();
	for (dim = 0; dim < HIGHEST_DIMENSIONS; dim++)
		av_store(av, dim, newSVuv(block_info->conn_type[dim]));
	hv_store_sv(hv, "conn_type", newRV_noinc((SV*)av));
	if (block_info->ionode_str)
		STORE_FIELD(hv, block_info, ionode_str, charp);
	if (block_info->ionode_inx) {
		int j;
		av = newAV();
		for (j = 0; ; j += 2) {
			if (block_info->ionode_inx[j] == -1)
				break;
			av_store(av, j, newSVuv(block_info->ionode_inx[j]));
			av_store(av, j+1, newSVuv(block_info->ionode_inx[j+1]));
		}
		hv_store_sv(hv, "ionode_inx", newRV_noinc((SV*)av));
	}
	if (block_info->linuximage)
		STORE_FIELD(hv, block_info, linuximage, charp);
	if (block_info->mloaderimage)
		STORE_FIELD(hv, block_info, mloaderimage, charp);
	if (block_info->mp_str)
		STORE_FIELD(hv, block_info, mp_str, charp);
	STORE_FIELD(hv, block_info, cnode_cnt, uint32_t);
	STORE_FIELD(hv, block_info, cnode_err_cnt, uint32_t);
	STORE_FIELD(hv, block_info, node_use, uint16_t);
	if (block_info->ramdiskimage)
		STORE_FIELD(hv, block_info, ramdiskimage, charp);
	if (block_info->reason)
		STORE_FIELD(hv, block_info, reason, charp);
	STORE_FIELD(hv, block_info, state, uint16_t);
	return 0;
}

/*
 * convert perl HV to block_info_t
 */
int
hv_to_block_info(HV *hv, block_info_t *block_info)
{
	SV **svp;
	AV *av;
	int i, n;

	memset(block_info, 0, sizeof(block_info_t));

	FETCH_FIELD(hv, block_info, bg_block_id, charp, FALSE);
	FETCH_FIELD(hv, block_info, blrtsimage, charp, FALSE);
	svp = hv_fetch(hv, "mp_inx", 6, FALSE);
	if (svp && SvROK(*svp) && SvTYPE(SvRV(*svp)) == SVt_PVAV) {
		av = (AV*)SvRV(*svp);
		n = av_len(av) + 2; /* for trailing -1 */
		block_info->mp_inx = xmalloc(n * sizeof(int));
		for (i = 0 ; i < n-1; i += 2) {
			block_info->mp_inx[i] = (int)SvIV(*(av_fetch(av, i, FALSE)));
			block_info->mp_inx[i+1] = (int)SvIV(*(av_fetch(av, i+1 ,FALSE)));
		}
		block_info->mp_inx[n-1] = -1;
	} else {
		/* nothing to do */
	}
	svp = hv_fetch(hv, "conn_type", 9, FALSE);
	if (svp && SvROK(*svp) && SvTYPE(SvRV(*svp)) == SVt_PVAV) {
		av = (AV*)SvRV(*svp);
		n = av_len(av); /* for trailing -1 */
		for (i = 0 ; i < HIGHEST_DIMENSIONS; i++)
block_info->conn_type[i] = SvUV(*(av_fetch(av, i, FALSE))); } else { /* nothing to do */ } FETCH_FIELD(hv, block_info, ionode_str, charp, FALSE); svp = hv_fetch(hv, "ionode_inx", 10, FALSE); if (svp && SvROK(*svp) && SvTYPE(SvRV(*svp)) == SVt_PVAV) { av = (AV*)SvRV(*svp); n = av_len(av) + 2; /* for trailing -1 */ block_info->ionode_inx = xmalloc(n * sizeof(int)); for (i = 0 ; i < n-1; i += 2) { block_info->ionode_inx[i] = (int)SvIV(*(av_fetch(av, i, FALSE))); block_info->ionode_inx[i+1] = (int)SvIV(*(av_fetch(av, i+1 ,FALSE))); } block_info->ionode_inx[n-1] = -1; } else { /* nothing to do */ } FETCH_FIELD(hv, block_info, linuximage, charp, FALSE); FETCH_FIELD(hv, block_info, mloaderimage, charp, FALSE); FETCH_FIELD(hv, block_info, mp_str, charp, FALSE); FETCH_FIELD(hv, block_info, cnode_cnt, uint32_t, TRUE); FETCH_FIELD(hv, block_info, node_use, uint16_t, TRUE); FETCH_FIELD(hv, block_info, ramdiskimage, charp, FALSE); FETCH_FIELD(hv, block_info, reason, charp, FALSE); FETCH_FIELD(hv, block_info, state, uint16_t, TRUE); return 0; } /* * convert block_info_msg_t to perl HV */ int block_info_msg_to_hv(block_info_msg_t *block_info_msg, HV *hv) { int i; HV *hv_info; AV *av; STORE_FIELD(hv, block_info_msg, last_update, time_t); /* record_count implied in node_array */ av = newAV(); for(i = 0; i < block_info_msg->record_count; i ++) { hv_info =newHV(); if (block_info_to_hv(block_info_msg->block_array + i, hv_info) < 0) { SvREFCNT_dec((SV*)hv_info); SvREFCNT_dec((SV*)av); return -1; } av_store(av, i, newRV_noinc((SV*)hv_info)); } hv_store_sv(hv, "block_array", newRV_noinc((SV*)av)); return 0; } /* * convert perl HV to block_info_msg_t */ int hv_to_block_info_msg(HV *hv, block_info_msg_t *block_info_msg) { SV **svp; AV *av; int i, n; memset(block_info_msg, 0, sizeof(block_info_msg_t)); FETCH_FIELD(hv, block_info_msg, last_update, time_t, TRUE); svp = hv_fetch(hv, "block_array", 11, FALSE); if (! 
(svp && SvROK(*svp) && SvTYPE(SvRV(*svp)) == SVt_PVAV)) { Perl_warn (aTHX_ "block_array is not an array reference in HV for block_info_msg_t"); return -1; } av = (AV*)SvRV(*svp); n = av_len(av) + 1; block_info_msg->record_count = n; block_info_msg->block_array = xmalloc(n * sizeof(block_info_t)); for (i = 0; i < n; i ++) { svp = av_fetch(av, i, FALSE); if (! (svp && SvROK(*svp) && SvTYPE(SvRV(*svp)) == SVt_PVHV)) { Perl_warn (aTHX_ "element %d in node_array is not valid", i); return -1; } if (hv_to_block_info((HV*)SvRV(*svp), &block_info_msg->block_array[i]) < 0) { Perl_warn (aTHX_ "failed to convert element %d in block_array", i); return -1; } } return 0; } /* * convert perl HV to update_block_msg_t */ int hv_to_update_block_msg(HV *hv, update_block_msg_t *update_msg) { SV **svp; AV *av; int i, n; slurm_init_update_block_msg(update_msg); FETCH_FIELD(hv, update_msg, bg_block_id, charp, FALSE); FETCH_FIELD(hv, update_msg, blrtsimage, charp, FALSE); svp = hv_fetch(hv, "mp_inx", 6, FALSE); if (svp && SvROK(*svp) && SvTYPE(SvRV(*svp)) == SVt_PVAV) { av = (AV*)SvRV(*svp); n = av_len(av) + 2; /* for trailing -1 */ update_msg->mp_inx = xmalloc(n * sizeof(int)); for (i = 0 ; i < n-1; i += 2) { update_msg->mp_inx[i] = (int)SvIV(*(av_fetch(av, i, FALSE))); update_msg->mp_inx[i+1] = (int)SvIV(*(av_fetch(av, i+1 ,FALSE))); } update_msg->mp_inx[n-1] = -1; } else { /* nothing to do */ } svp = hv_fetch(hv, "conn_type", 9, FALSE); if (svp && SvROK(*svp) && SvTYPE(SvRV(*svp)) == SVt_PVAV) { av = (AV*)SvRV(*svp); for (i = 0 ; i < HIGHEST_DIMENSIONS; i++) update_msg->conn_type[i] = SvUV(*(av_fetch(av, i, FALSE))); } else { /* nothing to do */ } FETCH_FIELD(hv, update_msg, ionode_str, charp, FALSE); svp = hv_fetch(hv, "ionode_inx", 10, FALSE); if (svp && SvROK(*svp) && SvTYPE(SvRV(*svp)) == SVt_PVAV) { av = (AV*)SvRV(*svp); n = av_len(av) + 2; /* for trailing -1 */ update_msg->ionode_inx = xmalloc(n * sizeof(int)); for (i = 0 ; i < n-1; i += 2) { update_msg->ionode_inx[i] = 
(int)SvIV(*(av_fetch(av, i, FALSE))); update_msg->ionode_inx[i+1] = (int)SvIV(*(av_fetch(av, i+1 ,FALSE))); } update_msg->ionode_inx[n-1] = -1; } else { /* nothing to do */ } FETCH_FIELD(hv, update_msg, linuximage, charp, FALSE); FETCH_FIELD(hv, update_msg, mloaderimage, charp, FALSE); FETCH_FIELD(hv, update_msg, mp_str, charp, FALSE); FETCH_FIELD(hv, update_msg, cnode_cnt, uint32_t, FALSE); FETCH_FIELD(hv, update_msg, node_use, uint16_t, FALSE); FETCH_FIELD(hv, update_msg, ramdiskimage, charp, FALSE); FETCH_FIELD(hv, update_msg, reason, charp, FALSE); FETCH_FIELD(hv, update_msg, state, uint16_t, FALSE); return 0; } slurm-slurm-15-08-7-1/contribs/perlapi/libslurm/perl/classmap000066400000000000000000000021471265000126300241310ustar00rootroot00000000000000# this file contains a hash reference $class_map, which maps $ntype of C type # to class name of Perl objects. This will be eval()-ed by xsubpp. # # XXX: DO NOT use $class or other variables used in xsubpp, or there will be # trouble with xsubpp v1.9508 as in RHEL5.3 $slurm_perl_api::class_map = { "slurm_t" => "Slurm", "bitstr_tPtr" => "Slurm::Bitstr", "hostlist_t" => "Slurm::Hostlist", "slurm_step_ctx_tPtr" => "Slurm::Stepctx", "List" => "Slurm::List", "ListIterator" => "Slurm::ListIterator", "dynamic_plugin_data_tPtr" => "Slurm::dynamic_plugin_data_t", "job_resources_tPtr" => "Slurm::job_resources_t", "slurm_cred_tPtr" => "Slurm::slurm_cred_t", "switch_jobinfo_tPtr" => "Slurm::switch_jobinfo_t", "select_jobinfo_tPtr" => "Slurm::select_jobinfo_t", "select_nodeinfo_tPtr" => "Slurm::select_nodeinfo_t", "jobacctinfo_tPtr" => "Slurm::jobacctinfo_t", "allocation_msg_thread_tPtr" => "Slurm::allocation_msg_thread_t", "node_info_msg_tPtr" => "Slurm::node_info_msg_t", "job_info_msg_tPtr" => "Slurm::job_info_msg_t", }; slurm-slurm-15-08-7-1/contribs/perlapi/libslurm/perl/conf.c000066400000000000000000000625561265000126300235060ustar00rootroot00000000000000/* * conf.c - convert data between slurm config and perl HVs */ 
#include <EXTERN.h>
#include <perl.h>
#include <XSUB.h>

#include <slurm/slurm.h>
#include "slurm-perl.h"

/*
 * convert slurm_ctl_conf_t into perl HV
 */
int
slurm_ctl_conf_to_hv(slurm_ctl_conf_t *conf, HV *hv)
{
	STORE_FIELD(hv, conf, last_update, time_t);
	if (conf->acct_gather_conf)
		STORE_FIELD(hv, conf, acct_gather_conf, charp);
	if (conf->acct_gather_energy_type)
		STORE_FIELD(hv, conf, acct_gather_energy_type, charp);
	if (conf->acct_gather_filesystem_type)
		STORE_FIELD(hv, conf, acct_gather_filesystem_type, charp);
	if (conf->acct_gather_infiniband_type)
		STORE_FIELD(hv, conf, acct_gather_infiniband_type, charp);
	STORE_FIELD(hv, conf, acct_gather_node_freq, uint16_t);
	if (conf->acct_gather_profile_type)
		STORE_FIELD(hv, conf, acct_gather_profile_type, charp);
	STORE_FIELD(hv, conf, acctng_store_job_comment, uint16_t);
	if (conf->accounting_storage_backup_host)
		STORE_FIELD(hv, conf, accounting_storage_backup_host, charp);
	STORE_FIELD(hv, conf, accounting_storage_enforce, uint16_t);
	if (conf->accounting_storage_host)
		STORE_FIELD(hv, conf, accounting_storage_host, charp);
	if (conf->accounting_storage_loc)
		STORE_FIELD(hv, conf, accounting_storage_loc, charp);
	if (conf->accounting_storage_pass)
		STORE_FIELD(hv, conf, accounting_storage_pass, charp);
	STORE_FIELD(hv, conf, accounting_storage_port, uint32_t);
	if (conf->accounting_storage_type)
		STORE_FIELD(hv, conf, accounting_storage_type, charp);
	if (conf->accounting_storage_user)
		STORE_FIELD(hv, conf, accounting_storage_user, charp);
	if (conf->authinfo)
		STORE_FIELD(hv, conf, authinfo, charp);
	if (conf->authtype)
		STORE_FIELD(hv, conf, authtype, charp);
	if (conf->backup_addr)
		STORE_FIELD(hv, conf, backup_addr, charp);
	if (conf->backup_controller)
		STORE_FIELD(hv, conf, backup_controller, charp);
	STORE_FIELD(hv, conf, batch_start_timeout, uint16_t);
	if (conf->bb_type)
		STORE_FIELD(hv, conf, bb_type, charp);
	STORE_FIELD(hv, conf, boot_time, time_t);
	if (conf->checkpoint_type)
		STORE_FIELD(hv, conf, checkpoint_type, charp);
	if (conf->chos_loc)
		STORE_FIELD(hv, conf, chos_loc,
charp); if (conf->core_spec_plugin) STORE_FIELD(hv, conf, core_spec_plugin, charp); if (conf->cluster_name) STORE_FIELD(hv, conf, cluster_name, charp); STORE_FIELD(hv, conf, complete_wait, uint16_t); if (conf->control_addr) STORE_FIELD(hv, conf, control_addr, charp); if (conf->control_machine) STORE_FIELD(hv, conf, control_machine, charp); STORE_FIELD(hv, conf, cpu_freq_def, uint32_t); if (conf->crypto_type) STORE_FIELD(hv, conf, crypto_type, charp); STORE_FIELD(hv, conf, debug_flags, uint64_t); STORE_FIELD(hv, conf, def_mem_per_cpu, uint32_t); STORE_FIELD(hv, conf, disable_root_jobs, uint16_t); STORE_FIELD(hv, conf, eio_timeout, uint16_t); STORE_FIELD(hv, conf, enforce_part_limits, uint16_t); if (conf->epilog) STORE_FIELD(hv, conf, epilog, charp); STORE_FIELD(hv, conf, epilog_msg_time, uint32_t); if (conf->epilog_slurmctld) STORE_FIELD(hv, conf, epilog_slurmctld, charp); if (conf->ext_sensors_conf) STORE_FIELD(hv, conf, ext_sensors_conf, charp); STORE_FIELD(hv, conf, ext_sensors_freq, uint16_t); if (conf->ext_sensors_type) STORE_FIELD(hv, conf, ext_sensors_type, charp); STORE_FIELD(hv, conf, fast_schedule, uint16_t); STORE_FIELD(hv, conf, first_job_id, uint32_t); STORE_FIELD(hv, conf, fs_dampening_factor, uint16_t); STORE_FIELD(hv, conf, get_env_timeout, uint16_t); if (conf->gres_plugins) STORE_FIELD(hv, conf, gres_plugins, charp); STORE_FIELD(hv, conf, group_info, uint16_t); STORE_FIELD(hv, conf, hash_val, uint32_t); STORE_FIELD(hv, conf, health_check_interval, uint16_t); STORE_FIELD(hv, conf, health_check_node_state, uint32_t); if (conf->health_check_program) STORE_FIELD(hv, conf, health_check_program, charp); STORE_FIELD(hv, conf, inactive_limit, uint16_t); if (conf->job_acct_gather_freq) STORE_FIELD(hv, conf, job_acct_gather_freq, charp); if (conf->job_acct_gather_params) STORE_FIELD(hv, conf, job_acct_gather_params, charp); if (conf->job_acct_gather_type) STORE_FIELD(hv, conf, job_acct_gather_type, charp); if (conf->job_ckpt_dir) STORE_FIELD(hv, conf, 
job_ckpt_dir, charp); if (conf->job_comp_host) STORE_FIELD(hv, conf, job_comp_host, charp); if (conf->job_comp_loc) STORE_FIELD(hv, conf, job_comp_loc, charp); if (conf->job_comp_pass) STORE_FIELD(hv, conf, job_comp_pass, charp); STORE_FIELD(hv, conf, job_comp_port, uint32_t); if (conf->job_comp_type) STORE_FIELD(hv, conf, job_comp_type, charp); if (conf->job_comp_user) STORE_FIELD(hv, conf, job_comp_user, charp); if (conf->job_container_plugin) STORE_FIELD(hv, conf, job_container_plugin, charp); if (conf->job_credential_private_key) STORE_FIELD(hv, conf, job_credential_private_key, charp); if (conf->job_credential_public_certificate) STORE_FIELD(hv, conf, job_credential_public_certificate, charp); STORE_FIELD(hv, conf, job_file_append, uint16_t); STORE_FIELD(hv, conf, job_requeue, uint16_t); if (conf->job_submit_plugins) STORE_FIELD(hv, conf, job_submit_plugins, charp); STORE_FIELD(hv, conf, keep_alive_time, uint16_t); STORE_FIELD(hv, conf, kill_on_bad_exit, uint16_t); STORE_FIELD(hv, conf, kill_wait, uint16_t); if (conf->launch_type) STORE_FIELD(hv, conf, launch_type, charp); if (conf->layouts) STORE_FIELD(hv, conf, layouts, charp); if (conf->licenses) STORE_FIELD(hv, conf, licenses, charp); if (conf->licenses_used) STORE_FIELD(hv, conf, licenses_used, charp); STORE_FIELD(hv, conf, log_fmt, uint16_t); if (conf->mail_prog) STORE_FIELD(hv, conf, mail_prog, charp); STORE_FIELD(hv, conf, max_array_sz, uint16_t); STORE_FIELD(hv, conf, max_job_cnt, uint16_t); STORE_FIELD(hv, conf, max_job_id, uint32_t); STORE_FIELD(hv, conf, max_mem_per_cpu, uint32_t); if (conf->max_step_cnt) STORE_FIELD(hv, conf, max_step_cnt, uint32_t); STORE_FIELD(hv, conf, max_tasks_per_node, uint16_t); if (conf->mem_limit_enforce) STORE_FIELD(hv, conf, mem_limit_enforce, uint16_t); STORE_FIELD(hv, conf, min_job_age, uint16_t); if (conf->mpi_default) STORE_FIELD(hv, conf, mpi_default, charp); if (conf->mpi_params) STORE_FIELD(hv, conf, mpi_params, charp); STORE_FIELD(hv, conf, msg_timeout, 
uint16_t); STORE_FIELD(hv, conf, next_job_id, uint32_t); if (conf->node_prefix) STORE_FIELD(hv, conf, node_prefix, charp); STORE_FIELD(hv, conf, over_time_limit, uint16_t); if (conf->plugindir) STORE_FIELD(hv, conf, plugindir, charp); if (conf->plugstack) STORE_FIELD(hv, conf, plugstack, charp); if (conf->power_parameters) STORE_FIELD(hv, conf, power_parameters, charp); STORE_FIELD(hv, conf, preempt_mode, uint16_t); if (conf->preempt_type) STORE_FIELD(hv, conf, preempt_type, charp); STORE_FIELD(hv, conf, priority_calc_period, uint32_t); STORE_FIELD(hv, conf, priority_decay_hl, uint32_t); STORE_FIELD(hv, conf, priority_favor_small, uint16_t); STORE_FIELD(hv, conf, priority_flags, uint16_t); STORE_FIELD(hv, conf, priority_max_age, uint32_t); if (conf->priority_params) STORE_FIELD(hv, conf, priority_params, charp); STORE_FIELD(hv, conf, priority_reset_period, uint16_t); if (conf->priority_type) STORE_FIELD(hv, conf, priority_type, charp); STORE_FIELD(hv, conf, priority_weight_age, uint32_t); STORE_FIELD(hv, conf, priority_weight_fs, uint32_t); STORE_FIELD(hv, conf, priority_weight_js, uint32_t); STORE_FIELD(hv, conf, priority_weight_part, uint32_t); STORE_FIELD(hv, conf, priority_weight_qos, uint32_t); STORE_FIELD(hv, conf, priority_weight_tres, charp); STORE_FIELD(hv, conf, private_data, uint16_t); if (conf->proctrack_type) STORE_FIELD(hv, conf, proctrack_type, charp); if (conf->prolog) STORE_FIELD(hv, conf, prolog, charp); STORE_FIELD(hv, conf, prolog_flags, uint16_t); if (conf->prolog_slurmctld) STORE_FIELD(hv, conf, prolog_slurmctld, charp); STORE_FIELD(hv, conf, propagate_prio_process, uint16_t); if (conf->propagate_rlimits) STORE_FIELD(hv, conf, propagate_rlimits, charp); if (conf->propagate_rlimits_except) STORE_FIELD(hv, conf, propagate_rlimits_except, charp); if (conf->reboot_program) STORE_FIELD(hv, conf, reboot_program, charp); STORE_FIELD(hv, conf, reconfig_flags, uint16_t); if (conf->requeue_exit) STORE_FIELD(hv, conf, requeue_exit, charp); if 
(conf->requeue_exit_hold) STORE_FIELD(hv, conf, requeue_exit_hold, charp); if (conf->resume_program) STORE_FIELD(hv, conf, resume_program, charp); STORE_FIELD(hv, conf, resume_rate, uint16_t); STORE_FIELD(hv, conf, resume_timeout, uint16_t); if (conf->resv_epilog) STORE_FIELD(hv, conf, resv_epilog, charp); STORE_FIELD(hv, conf, resv_over_run, uint16_t); if (conf->resv_prolog) STORE_FIELD(hv, conf, resv_prolog, charp); STORE_FIELD(hv, conf, ret2service, uint16_t); if (conf->route_plugin) STORE_FIELD(hv, conf, route_plugin, charp); if (conf->salloc_default_command) STORE_FIELD(hv, conf, salloc_default_command, charp); if (conf->sched_logfile) STORE_FIELD(hv, conf, sched_logfile, charp); STORE_FIELD(hv, conf, sched_log_level, uint16_t); if (conf->sched_params) STORE_FIELD(hv, conf, sched_params, charp); STORE_FIELD(hv, conf, sched_time_slice, uint16_t); if (conf->schedtype) STORE_FIELD(hv, conf, schedtype, charp); STORE_FIELD(hv, conf, schedport, uint16_t); STORE_FIELD(hv, conf, schedrootfltr, uint16_t); STORE_PTR_FIELD(hv, conf, select_conf_key_pairs, "Slurm::List"); /* TODO: Think about memory management */ if (conf->select_type) STORE_FIELD(hv, conf, select_type, charp); STORE_FIELD(hv, conf, select_type_param, uint16_t); if (conf->slurm_conf) STORE_FIELD(hv, conf, slurm_conf, charp); STORE_FIELD(hv, conf, slurm_user_id, uint32_t); if (conf->slurm_user_name) STORE_FIELD(hv, conf, slurm_user_name, charp); STORE_FIELD(hv, conf, slurmctld_debug, uint16_t); if (conf->slurmctld_logfile) STORE_FIELD(hv, conf, slurmctld_logfile, charp); if (conf->slurmctld_pidfile) STORE_FIELD(hv, conf, slurmctld_pidfile, charp); if (conf->slurmctld_plugstack) STORE_FIELD(hv, conf, slurmctld_plugstack, charp); STORE_FIELD(hv, conf, slurmctld_port, uint32_t); STORE_FIELD(hv, conf, slurmctld_port_count, uint16_t); STORE_FIELD(hv, conf, slurmctld_timeout, uint16_t); STORE_FIELD(hv, conf, slurmd_debug, uint16_t); if (conf->slurmd_logfile) STORE_FIELD(hv, conf, slurmd_logfile, charp); if 
(conf->slurmd_pidfile) STORE_FIELD(hv, conf, slurmd_pidfile, charp); if (conf->slurmd_plugstack) STORE_FIELD(hv, conf, slurmd_plugstack, charp); STORE_FIELD(hv, conf, slurmd_port, uint32_t); if (conf->slurmd_spooldir) STORE_FIELD(hv, conf, slurmd_spooldir, charp); STORE_FIELD(hv, conf, slurmd_timeout, uint16_t); STORE_FIELD(hv, conf, slurmd_user_id, uint32_t); if (conf->slurmd_user_name) STORE_FIELD(hv, conf, slurmd_user_name, charp); if (conf->srun_epilog) STORE_FIELD(hv, conf, srun_epilog, charp); if (conf->srun_port_range) STORE_PTR_FIELD(hv, conf, srun_port_range, "SLURM::port_range"); if (conf->srun_prolog) STORE_FIELD(hv, conf, srun_prolog, charp); if (conf->state_save_location) STORE_FIELD(hv, conf, state_save_location, charp); if (conf->suspend_exc_nodes) STORE_FIELD(hv, conf, suspend_exc_nodes, charp); if (conf->suspend_exc_parts) STORE_FIELD(hv, conf, suspend_exc_parts, charp); if (conf->suspend_program) STORE_FIELD(hv, conf, suspend_program, charp); STORE_FIELD(hv, conf, suspend_rate, uint16_t); STORE_FIELD(hv, conf, suspend_time, uint32_t); STORE_FIELD(hv, conf, suspend_timeout, uint16_t); if (conf->switch_type) STORE_FIELD(hv, conf, switch_type, charp); if (conf->task_epilog) STORE_FIELD(hv, conf, task_epilog, charp); if (conf->task_plugin) STORE_FIELD(hv, conf, task_plugin, charp); STORE_FIELD(hv, conf, task_plugin_param, uint16_t); if (conf->task_prolog) STORE_FIELD(hv, conf, task_prolog, charp); if (conf->tmp_fs) STORE_FIELD(hv, conf, tmp_fs, charp); if (conf->topology_plugin) STORE_FIELD(hv, conf, topology_plugin, charp); STORE_FIELD(hv, conf, track_wckey, uint16_t); STORE_FIELD(hv, conf, tree_width, uint16_t); if (conf->unkillable_program) STORE_FIELD(hv, conf, unkillable_program, charp); STORE_FIELD(hv, conf, unkillable_timeout, uint16_t); STORE_FIELD(hv, conf, use_pam, uint16_t); STORE_FIELD(hv, conf, use_spec_resources, uint16_t); if (conf->version) STORE_FIELD(hv, conf, version, charp); STORE_FIELD(hv, conf, vsize_factor, uint16_t); 
STORE_FIELD(hv, conf, wait_time, uint16_t); STORE_FIELD(hv, conf, z_16, uint16_t); STORE_FIELD(hv, conf, z_32, uint32_t); if (conf->z_char) STORE_FIELD(hv, conf, z_char, charp); return 0; } /* * convert perl HV to slurm_ctl_conf_t */ int hv_to_slurm_ctl_conf(HV *hv, slurm_ctl_conf_t *conf) { memset(conf, 0, sizeof(slurm_ctl_conf_t)); FETCH_FIELD(hv, conf, last_update, time_t, FALSE); FETCH_FIELD(hv, conf, acct_gather_conf, charp, FALSE); FETCH_FIELD(hv, conf, acct_gather_energy_type, charp, FALSE); FETCH_FIELD(hv, conf, acct_gather_filesystem_type, charp, FALSE); FETCH_FIELD(hv, conf, acct_gather_infiniband_type, charp, FALSE); FETCH_FIELD(hv, conf, acct_gather_node_freq, uint16_t, FALSE); FETCH_FIELD(hv, conf, acct_gather_profile_type, charp, FALSE); FETCH_FIELD(hv, conf, acctng_store_job_comment, uint16_t, FALSE); FETCH_FIELD(hv, conf, accounting_storage_enforce, uint16_t, TRUE); FETCH_FIELD(hv, conf, accounting_storage_backup_host, charp, FALSE); FETCH_FIELD(hv, conf, accounting_storage_host, charp, FALSE); FETCH_FIELD(hv, conf, accounting_storage_loc, charp, FALSE); FETCH_FIELD(hv, conf, accounting_storage_pass, charp, FALSE); FETCH_FIELD(hv, conf, accounting_storage_port, uint32_t, TRUE); FETCH_FIELD(hv, conf, accounting_storage_type, charp, FALSE); FETCH_FIELD(hv, conf, accounting_storage_user, charp, FALSE); FETCH_FIELD(hv, conf, authinfo, charp, FALSE); FETCH_FIELD(hv, conf, authtype, charp, FALSE); FETCH_FIELD(hv, conf, backup_addr, charp, FALSE); FETCH_FIELD(hv, conf, backup_controller, charp, FALSE); FETCH_FIELD(hv, conf, batch_start_timeout, uint16_t, TRUE); FETCH_FIELD(hv, conf, bb_type, charp, FALSE); FETCH_FIELD(hv, conf, boot_time, time_t, TRUE); FETCH_FIELD(hv, conf, checkpoint_type, charp, FALSE); FETCH_FIELD(hv, conf, chos_loc, charp, FALSE); FETCH_FIELD(hv, conf, core_spec_plugin, charp, FALSE); FETCH_FIELD(hv, conf, cluster_name, charp, FALSE); FETCH_FIELD(hv, conf, complete_wait, uint16_t, TRUE); FETCH_FIELD(hv, conf, control_addr, charp, 
FALSE); FETCH_FIELD(hv, conf, control_machine, charp, FALSE); FETCH_FIELD(hv, conf, cpu_freq_def, uint32_t, FALSE); FETCH_FIELD(hv, conf, crypto_type, charp, FALSE); FETCH_FIELD(hv, conf, debug_flags, uint64_t, TRUE); FETCH_FIELD(hv, conf, def_mem_per_cpu, uint32_t, TRUE); FETCH_FIELD(hv, conf, disable_root_jobs, uint16_t, TRUE); FETCH_FIELD(hv, conf, eio_timeout, uint16_t, FALSE); FETCH_FIELD(hv, conf, enforce_part_limits, uint16_t, TRUE); FETCH_FIELD(hv, conf, epilog, charp, FALSE); FETCH_FIELD(hv, conf, epilog_msg_time, uint32_t, TRUE); FETCH_FIELD(hv, conf, epilog_slurmctld, charp, FALSE); FETCH_FIELD(hv, conf, ext_sensors_conf, charp, FALSE); FETCH_FIELD(hv, conf, ext_sensors_freq, uint16_t, TRUE); FETCH_FIELD(hv, conf, ext_sensors_type, charp, FALSE); FETCH_FIELD(hv, conf, fast_schedule, uint16_t, TRUE); FETCH_FIELD(hv, conf, first_job_id, uint32_t, TRUE); FETCH_FIELD(hv, conf, fs_dampening_factor, uint16_t, FALSE); FETCH_FIELD(hv, conf, get_env_timeout, uint16_t, TRUE); FETCH_FIELD(hv, conf, gres_plugins, charp, FALSE); FETCH_FIELD(hv, conf, group_info, uint16_t, TRUE); FETCH_FIELD(hv, conf, hash_val, uint32_t, TRUE); FETCH_FIELD(hv, conf, health_check_interval, uint16_t, TRUE); FETCH_FIELD(hv, conf, health_check_node_state, uint32_t, TRUE); FETCH_FIELD(hv, conf, health_check_program, charp, FALSE); FETCH_FIELD(hv, conf, inactive_limit, uint16_t, TRUE); FETCH_FIELD(hv, conf, job_acct_gather_freq, charp, TRUE); FETCH_FIELD(hv, conf, job_acct_gather_params, charp, FALSE); FETCH_FIELD(hv, conf, job_acct_gather_type, charp, FALSE); FETCH_FIELD(hv, conf, job_ckpt_dir, charp, FALSE); FETCH_FIELD(hv, conf, job_comp_host, charp, FALSE); FETCH_FIELD(hv, conf, job_comp_loc, charp, FALSE); FETCH_FIELD(hv, conf, job_comp_pass, charp, FALSE); FETCH_FIELD(hv, conf, job_comp_port, uint32_t, TRUE); FETCH_FIELD(hv, conf, job_comp_type, charp, FALSE); FETCH_FIELD(hv, conf, job_comp_user, charp, FALSE); FETCH_FIELD(hv, conf, job_container_plugin, charp, FALSE); FETCH_FIELD(hv, 
conf, job_credential_private_key, charp, FALSE); FETCH_FIELD(hv, conf, job_credential_public_certificate, charp, FALSE); FETCH_FIELD(hv, conf, job_file_append, uint16_t, TRUE); FETCH_FIELD(hv, conf, job_requeue, uint16_t, TRUE); FETCH_FIELD(hv, conf, job_submit_plugins, charp, FALSE); FETCH_FIELD(hv, conf, keep_alive_time, uint16_t, TRUE); FETCH_FIELD(hv, conf, kill_on_bad_exit, uint16_t, TRUE); FETCH_FIELD(hv, conf, kill_wait, uint16_t, TRUE); FETCH_FIELD(hv, conf, launch_type, charp, FALSE); FETCH_FIELD(hv, conf, layouts, charp, FALSE); FETCH_FIELD(hv, conf, licenses, charp, FALSE); FETCH_FIELD(hv, conf, licenses_used, charp, FALSE); FETCH_FIELD(hv, conf, log_fmt, uint16_t, FALSE); FETCH_FIELD(hv, conf, mail_prog, charp, FALSE); FETCH_FIELD(hv, conf, max_array_sz, uint16_t, TRUE); FETCH_FIELD(hv, conf, max_job_cnt, uint16_t, TRUE); FETCH_FIELD(hv, conf, max_job_id, uint32_t, FALSE); FETCH_FIELD(hv, conf, max_mem_per_cpu, uint32_t, TRUE); FETCH_FIELD(hv, conf, max_step_cnt, uint32_t, FALSE); FETCH_FIELD(hv, conf, max_tasks_per_node, uint16_t, TRUE); FETCH_FIELD(hv, conf, mem_limit_enforce, uint16_t, FALSE); FETCH_FIELD(hv, conf, min_job_age, uint16_t, TRUE); FETCH_FIELD(hv, conf, mpi_default, charp, FALSE); FETCH_FIELD(hv, conf, mpi_params, charp, FALSE); FETCH_FIELD(hv, conf, msg_timeout, uint16_t, TRUE); FETCH_FIELD(hv, conf, next_job_id, uint32_t, TRUE); FETCH_FIELD(hv, conf, node_prefix, charp, FALSE); FETCH_FIELD(hv, conf, over_time_limit, uint16_t, TRUE); FETCH_FIELD(hv, conf, plugindir, charp, FALSE); FETCH_FIELD(hv, conf, plugstack, charp, FALSE); FETCH_FIELD(hv, conf, power_parameters, charp, FALSE); FETCH_FIELD(hv, conf, preempt_mode, uint16_t, TRUE); FETCH_FIELD(hv, conf, preempt_type, charp, FALSE); FETCH_FIELD(hv, conf, priority_calc_period, uint32_t, TRUE); FETCH_FIELD(hv, conf, priority_decay_hl, uint32_t, TRUE); FETCH_FIELD(hv, conf, priority_favor_small, uint16_t, TRUE); FETCH_FIELD(hv, conf, priority_flags, uint16_t, FALSE); FETCH_FIELD(hv, conf, 
priority_max_age, uint32_t, TRUE); FETCH_FIELD(hv, conf, priority_params, charp, FALSE); FETCH_FIELD(hv, conf, priority_reset_period, uint16_t, TRUE); FETCH_FIELD(hv, conf, priority_type, charp, FALSE); FETCH_FIELD(hv, conf, priority_weight_age, uint32_t, TRUE); FETCH_FIELD(hv, conf, priority_weight_fs, uint32_t, TRUE); FETCH_FIELD(hv, conf, priority_weight_js, uint32_t, TRUE); FETCH_FIELD(hv, conf, priority_weight_part, uint32_t, TRUE); FETCH_FIELD(hv, conf, priority_weight_qos, uint32_t, TRUE); FETCH_FIELD(hv, conf, priority_weight_tres, charp, TRUE); FETCH_FIELD(hv, conf, private_data, uint16_t, TRUE); FETCH_FIELD(hv, conf, proctrack_type, charp, FALSE); FETCH_FIELD(hv, conf, prolog, charp, FALSE); FETCH_FIELD(hv, conf, prolog_flags, uint16_t, TRUE); FETCH_FIELD(hv, conf, prolog_slurmctld, charp, FALSE); FETCH_FIELD(hv, conf, propagate_prio_process, uint16_t, TRUE); FETCH_FIELD(hv, conf, propagate_rlimits, charp, FALSE); FETCH_FIELD(hv, conf, propagate_rlimits_except, charp, FALSE); FETCH_FIELD(hv, conf, reboot_program, charp, FALSE); FETCH_FIELD(hv, conf, reconfig_flags, uint16_t, TRUE); FETCH_FIELD(hv, conf, requeue_exit, charp, FALSE); FETCH_FIELD(hv, conf, requeue_exit_hold, charp, FALSE); FETCH_FIELD(hv, conf, resume_program, charp, FALSE); FETCH_FIELD(hv, conf, resume_rate, uint16_t, TRUE); FETCH_FIELD(hv, conf, resume_timeout, uint16_t, TRUE); FETCH_FIELD(hv, conf, resv_epilog, charp, FALSE); FETCH_FIELD(hv, conf, resv_over_run, uint16_t, TRUE); FETCH_FIELD(hv, conf, resv_prolog, charp, FALSE); FETCH_FIELD(hv, conf, ret2service, uint16_t, TRUE); FETCH_FIELD(hv, conf, route_plugin, charp, FALSE); FETCH_FIELD(hv, conf, salloc_default_command, charp, FALSE); FETCH_FIELD(hv, conf, sched_logfile, charp, FALSE); FETCH_FIELD(hv, conf, sched_log_level, uint16_t, TRUE); FETCH_FIELD(hv, conf, sched_params, charp, FALSE); FETCH_FIELD(hv, conf, sched_time_slice, uint16_t, TRUE); FETCH_FIELD(hv, conf, schedtype, charp, FALSE); FETCH_FIELD(hv, conf, schedport, 
uint16_t, TRUE); FETCH_FIELD(hv, conf, schedrootfltr, uint16_t, TRUE); FETCH_FIELD(hv, conf, select_conf_key_pairs, charp, FALSE); FETCH_FIELD(hv, conf, select_type, charp, FALSE); FETCH_FIELD(hv, conf, select_type_param, uint16_t, TRUE); FETCH_FIELD(hv, conf, slurm_conf, charp, FALSE); FETCH_FIELD(hv, conf, slurm_user_id, uint32_t, TRUE); FETCH_FIELD(hv, conf, slurm_user_name, charp, FALSE); FETCH_FIELD(hv, conf, slurmd_user_id, uint32_t, TRUE); FETCH_FIELD(hv, conf, slurmd_user_name, charp, FALSE); FETCH_FIELD(hv, conf, slurmctld_debug, uint16_t, TRUE); FETCH_FIELD(hv, conf, slurmctld_logfile, charp, FALSE); FETCH_FIELD(hv, conf, slurmctld_pidfile, charp, FALSE); FETCH_FIELD(hv, conf, slurmctld_plugstack, charp, FALSE); FETCH_FIELD(hv, conf, slurmctld_port, uint32_t, TRUE); FETCH_FIELD(hv, conf, slurmctld_port_count, uint16_t, TRUE); FETCH_FIELD(hv, conf, slurmctld_timeout, uint16_t, TRUE); FETCH_FIELD(hv, conf, slurmd_debug, uint16_t, TRUE); FETCH_FIELD(hv, conf, slurmd_logfile, charp, FALSE); FETCH_FIELD(hv, conf, slurmd_pidfile, charp, FALSE); FETCH_FIELD(hv, conf, slurmd_plugstack, charp, FALSE); FETCH_FIELD(hv, conf, slurmd_port, uint32_t, TRUE); FETCH_FIELD(hv, conf, slurmd_spooldir, charp, FALSE); FETCH_FIELD(hv, conf, slurmd_timeout, uint16_t, TRUE); FETCH_FIELD(hv, conf, srun_epilog, charp, FALSE); FETCH_PTR_FIELD(hv, conf, srun_port_range, "SLURM::port_range", FALSE); FETCH_FIELD(hv, conf, srun_prolog, charp, FALSE); FETCH_FIELD(hv, conf, state_save_location, charp, FALSE); FETCH_FIELD(hv, conf, suspend_exc_nodes, charp, FALSE); FETCH_FIELD(hv, conf, suspend_exc_parts, charp, FALSE); FETCH_FIELD(hv, conf, suspend_program, charp, FALSE); FETCH_FIELD(hv, conf, suspend_rate, uint16_t, TRUE); FETCH_FIELD(hv, conf, suspend_time, uint32_t, TRUE); FETCH_FIELD(hv, conf, suspend_timeout, uint16_t, TRUE); FETCH_FIELD(hv, conf, switch_type, charp, FALSE); FETCH_FIELD(hv, conf, task_epilog, charp, FALSE); FETCH_FIELD(hv, conf, task_plugin, charp, FALSE); 
FETCH_FIELD(hv, conf, task_plugin_param, uint16_t, TRUE); FETCH_FIELD(hv, conf, task_prolog, charp, FALSE); FETCH_FIELD(hv, conf, tmp_fs, charp, FALSE); FETCH_FIELD(hv, conf, topology_plugin, charp, FALSE); FETCH_FIELD(hv, conf, track_wckey, uint16_t, TRUE); FETCH_FIELD(hv, conf, tree_width, uint16_t, TRUE); FETCH_FIELD(hv, conf, unkillable_program, charp, FALSE); FETCH_FIELD(hv, conf, unkillable_timeout, uint16_t, TRUE); FETCH_FIELD(hv, conf, use_pam, uint16_t, TRUE); FETCH_FIELD(hv, conf, use_spec_resources, uint16_t, TRUE); FETCH_FIELD(hv, conf, version, charp, FALSE); FETCH_FIELD(hv, conf, vsize_factor, uint16_t, TRUE); FETCH_FIELD(hv, conf, wait_time, uint16_t, TRUE); FETCH_FIELD(hv, conf, z_16, uint16_t, FALSE); FETCH_FIELD(hv, conf, z_32, uint32_t, FALSE); FETCH_FIELD(hv, conf, z_char, charp, FALSE); return 0; } /* * convert slurmd_status_t to perl HV */ int slurmd_status_to_hv(slurmd_status_t *status, HV *hv) { STORE_FIELD(hv, status, booted, time_t); STORE_FIELD(hv, status, last_slurmctld_msg, time_t); STORE_FIELD(hv, status, slurmd_debug, uint16_t); STORE_FIELD(hv, status, actual_cpus, uint16_t); STORE_FIELD(hv, status, actual_sockets, uint16_t); STORE_FIELD(hv, status, actual_cores, uint16_t); STORE_FIELD(hv, status, actual_threads, uint16_t); STORE_FIELD(hv, status, actual_real_mem, uint32_t); STORE_FIELD(hv, status, actual_tmp_disk, uint32_t); STORE_FIELD(hv, status, pid, uint32_t); if (status->hostname) STORE_FIELD(hv, status, hostname, charp); if (status->slurmd_logfile) STORE_FIELD(hv, status, slurmd_logfile, charp); if (status->step_list) STORE_FIELD(hv, status, step_list, charp); if (status->version) STORE_FIELD(hv, status, version, charp); return 0; } /* * convert perl HV to slurmd_status_t */ int hv_to_slurmd_status(HV *hv, slurmd_status_t *status) { memset(status, 0, sizeof(slurmd_status_t)); FETCH_FIELD(hv, status, booted, time_t, TRUE); FETCH_FIELD(hv, status, last_slurmctld_msg, time_t, TRUE); FETCH_FIELD(hv, status, slurmd_debug, uint16_t, 
	TRUE);
	FETCH_FIELD(hv, status, actual_cpus, uint16_t, TRUE);
	FETCH_FIELD(hv, status, actual_sockets, uint16_t, TRUE);
	FETCH_FIELD(hv, status, actual_cores, uint16_t, TRUE);
	FETCH_FIELD(hv, status, actual_threads, uint16_t, TRUE);
	FETCH_FIELD(hv, status, actual_real_mem, uint32_t, TRUE);
	FETCH_FIELD(hv, status, actual_tmp_disk, uint32_t, TRUE);
	FETCH_FIELD(hv, status, pid, uint32_t, TRUE);
	FETCH_FIELD(hv, status, hostname, charp, FALSE);
	FETCH_FIELD(hv, status, slurmd_logfile, charp, FALSE);
	FETCH_FIELD(hv, status, step_list, charp, FALSE);
	FETCH_FIELD(hv, status, version, charp, FALSE);
	return 0;
}

/*
 * convert perl HV to step_update_request_msg_t
 */
int hv_to_step_update_request_msg(HV *hv, step_update_request_msg_t *update_msg)
{
	slurm_init_update_step_msg(update_msg);
	FETCH_FIELD(hv, update_msg, end_time, time_t, TRUE);
	FETCH_FIELD(hv, update_msg, exit_code, uint32_t, TRUE);
	FETCH_FIELD(hv, update_msg, job_id, uint32_t, TRUE);
	FETCH_FIELD(hv, update_msg, name, charp, FALSE);
	FETCH_FIELD(hv, update_msg, start_time, time_t, TRUE);
	FETCH_FIELD(hv, update_msg, step_id, uint32_t, TRUE);
	FETCH_FIELD(hv, update_msg, time_limit, uint32_t, TRUE);
	return 0;
}
slurm-slurm-15-08-7-1/contribs/perlapi/libslurm/perl/job.c
/*
 * job.c - convert data between job (step) related messages and perl HVs
 */
#include
#include
#include
#include
#include "ppport.h"

#include "src/common/job_resources.h"
#include "bitstr.h"
#include "slurm-perl.h"

static node_info_msg_t *job_node_ptr = NULL;

/* This set of functions loads/frees node information so that we can map a
 * job's core bitmap to its CPU IDs based upon the thread count on each node.
*/ static void _load_node_info(void) { if (!job_node_ptr) (void) slurm_load_node((time_t) NULL, &job_node_ptr, 0); } static void _free_node_info(void) { if (job_node_ptr) { slurm_free_node_info_msg(job_node_ptr); job_node_ptr = NULL; } } static uint32_t _threads_per_core(char *host) { uint32_t i, threads = 1; if (!job_node_ptr || !host) return threads; slurm_mutex_lock(&job_node_info_lock); for (i = 0; i < job_node_ptr->record_count; i++) { if (job_node_ptr->node_array[i].name && !strcmp(host, job_node_ptr->node_array[i].name)) { threads = job_node_ptr->node_array[i].threads; break; } } slurm_mutex_unlock(&job_node_info_lock); return threads; } static int _job_resrcs_to_hv(job_info_t *job_info, HV *hv) { AV *av; HV *nr_hv; bitstr_t *cpu_bitmap; int sock_inx, sock_reps, last, cnt = 0, i, j, k; char tmp1[128], tmp2[128]; char *host; job_resources_t *job_resrcs = job_info->job_resrcs; int bit_inx, bit_reps; int abs_node_inx, rel_node_inx; uint32_t *last_mem_alloc_ptr = NULL; uint32_t last_mem_alloc = NO_VAL; char *last_hosts; hostlist_t hl, hl_last; uint32_t threads; if (!job_resrcs || !job_resrcs->core_bitmap || ((last = slurm_bit_fls(job_resrcs->core_bitmap)) == -1)) return 0; if (!(hl = slurm_hostlist_create(job_resrcs->nodes))) return 1; if (!(hl_last = slurm_hostlist_create(NULL))) return 1; av = newAV(); bit_inx = 0; i = sock_inx = sock_reps = 0; abs_node_inx = job_info->node_inx[i]; /* tmp1[] stores the current cpu(s) allocated */ tmp2[0] = '\0'; /* stores last cpu(s) allocated */ for (rel_node_inx=0; rel_node_inx < job_resrcs->nhosts; rel_node_inx++) { if (sock_reps >= job_resrcs->sock_core_rep_count[sock_inx]) { sock_inx++; sock_reps = 0; } sock_reps++; bit_reps = job_resrcs->sockets_per_node[sock_inx] * job_resrcs->cores_per_socket[sock_inx]; host = slurm_hostlist_shift(hl); threads = _threads_per_core(host); cpu_bitmap = slurm_bit_alloc(bit_reps * threads); for (j = 0; j < bit_reps; j++) { if (slurm_bit_test(job_resrcs->core_bitmap, bit_inx)){ for (k = 0; k 
< threads; k++) slurm_bit_set(cpu_bitmap, (j * threads) + k); } bit_inx++; } slurm_bit_fmt(tmp1, sizeof(tmp1), cpu_bitmap); FREE_NULL_BITMAP(cpu_bitmap); /* * If the allocation values for this host are not the same as the * last host, print the report of the last group of hosts that had * identical allocation values. */ if (strcmp(tmp1, tmp2) || (last_mem_alloc_ptr != job_resrcs->memory_allocated) || (job_resrcs->memory_allocated && (last_mem_alloc != job_resrcs->memory_allocated[rel_node_inx]))) { if (slurm_hostlist_count(hl_last)) { last_hosts = slurm_hostlist_ranged_string_xmalloc( hl_last); nr_hv = newHV(); hv_store_charp(nr_hv, "nodes", last_hosts); hv_store_charp(nr_hv, "cpu_ids", tmp2); hv_store_uint32_t(nr_hv, "mem", last_mem_alloc_ptr ? last_mem_alloc : 0); av_store(av, cnt++, newRV_noinc((SV*)nr_hv)); xfree(last_hosts); slurm_hostlist_destroy(hl_last); hl_last = slurm_hostlist_create(NULL); } strcpy(tmp2, tmp1); last_mem_alloc_ptr = job_resrcs->memory_allocated; if (last_mem_alloc_ptr) last_mem_alloc = job_resrcs-> memory_allocated[rel_node_inx]; else last_mem_alloc = NO_VAL; } slurm_hostlist_push_host(hl_last, host); free(host); if (bit_inx > last) break; if (abs_node_inx > job_info->node_inx[i+1]) { i += 2; abs_node_inx = job_info->node_inx[i]; } else { abs_node_inx++; } } if (slurm_hostlist_count(hl_last)) { last_hosts = slurm_hostlist_ranged_string_xmalloc(hl_last); nr_hv = newHV(); hv_store_charp(nr_hv, "nodes", last_hosts); hv_store_charp(nr_hv, "cpu_ids", tmp2); hv_store_uint32_t(nr_hv, "mem", last_mem_alloc_ptr ? 
last_mem_alloc : 0); av_store(av, cnt++, newRV_noinc((SV*)nr_hv)); xfree(last_hosts); } slurm_hostlist_destroy(hl); slurm_hostlist_destroy(hl_last); hv_store_sv(hv, "node_resrcs", newRV_noinc((SV*)av)); return 0; } /* * convert job_info_t to perl HV */ int job_info_to_hv(job_info_t *job_info, HV *hv) { int j; AV *av; if(job_info->account) STORE_FIELD(hv, job_info, account, charp); if(job_info->alloc_node) STORE_FIELD(hv, job_info, alloc_node, charp); STORE_FIELD(hv, job_info, alloc_sid, uint32_t); STORE_FIELD(hv, job_info, array_job_id, uint32_t); STORE_FIELD(hv, job_info, array_task_id, uint32_t); if(job_info->array_task_str) STORE_FIELD(hv, job_info, array_task_str, charp); STORE_FIELD(hv, job_info, assoc_id, uint32_t); STORE_FIELD(hv, job_info, batch_flag, uint16_t); if(job_info->command) STORE_FIELD(hv, job_info, command, charp); if(job_info->comment) STORE_FIELD(hv, job_info, comment, charp); STORE_FIELD(hv, job_info, contiguous, uint16_t); STORE_FIELD(hv, job_info, cpus_per_task, uint16_t); if(job_info->dependency) STORE_FIELD(hv, job_info, dependency, charp); STORE_FIELD(hv, job_info, derived_ec, uint32_t); STORE_FIELD(hv, job_info, eligible_time, time_t); STORE_FIELD(hv, job_info, end_time, time_t); if(job_info->exc_nodes) STORE_FIELD(hv, job_info, exc_nodes, charp); av = newAV(); for(j = 0; ; j += 2) { if(job_info->exc_node_inx[j] == -1) break; av_store(av, j, newSVuv(job_info->exc_node_inx[j])); av_store(av, j+1, newSVuv(job_info->exc_node_inx[j+1])); } hv_store_sv(hv, "exc_node_inx", newRV_noinc((SV*)av)); STORE_FIELD(hv, job_info, exit_code, uint32_t); if(job_info->features) STORE_FIELD(hv, job_info, features, charp); if(job_info->gres) STORE_FIELD(hv, job_info, gres, charp); STORE_FIELD(hv, job_info, group_id, uint32_t); STORE_FIELD(hv, job_info, job_id, uint32_t); STORE_FIELD(hv, job_info, job_state, uint32_t); if(job_info->licenses) STORE_FIELD(hv, job_info, licenses, charp); STORE_FIELD(hv, job_info, max_cpus, uint32_t); STORE_FIELD(hv, job_info, 
max_nodes, uint32_t); STORE_FIELD(hv, job_info, profile, uint32_t); STORE_FIELD(hv, job_info, sockets_per_node, uint16_t); STORE_FIELD(hv, job_info, cores_per_socket, uint16_t); STORE_FIELD(hv, job_info, threads_per_core, uint16_t); if(job_info->name) STORE_FIELD(hv, job_info, name, charp); if(job_info->network) STORE_FIELD(hv, job_info, network, charp); STORE_FIELD(hv, job_info, nice, uint16_t); if(job_info->nodes) STORE_FIELD(hv, job_info, nodes, charp); av = newAV(); for(j = 0; ; j += 2) { if(job_info->node_inx[j] == -1) break; av_store(av, j, newSVuv(job_info->node_inx[j])); av_store(av, j+1, newSVuv(job_info->node_inx[j+1])); } hv_store_sv(hv, "node_inx", newRV_noinc((SV*)av)); STORE_FIELD(hv, job_info, ntasks_per_core, uint16_t); STORE_FIELD(hv, job_info, ntasks_per_node, uint16_t); STORE_FIELD(hv, job_info, ntasks_per_socket, uint16_t); #ifdef HAVE_BG slurm_get_select_jobinfo(job_info->select_jobinfo, SELECT_JOBDATA_NODE_CNT, &job_info->num_nodes); #endif STORE_FIELD(hv, job_info, num_nodes, uint32_t); STORE_FIELD(hv, job_info, num_cpus, uint32_t); STORE_FIELD(hv, job_info, pn_min_memory, uint32_t); STORE_FIELD(hv, job_info, pn_min_cpus, uint16_t); STORE_FIELD(hv, job_info, pn_min_tmp_disk, uint32_t); if(job_info->partition) STORE_FIELD(hv, job_info, partition, charp); STORE_FIELD(hv, job_info, pre_sus_time, time_t); STORE_FIELD(hv, job_info, priority, uint32_t); if(job_info->qos) STORE_FIELD(hv, job_info, qos, charp); if(job_info->req_nodes) STORE_FIELD(hv, job_info, req_nodes, charp); av = newAV(); for(j = 0; ; j += 2) { if(job_info->req_node_inx[j] == -1) break; av_store(av, j, newSVuv(job_info->req_node_inx[j])); av_store(av, j+1, newSVuv(job_info->req_node_inx[j+1])); } hv_store_sv(hv, "req_node_inx", newRV_noinc((SV*)av)); STORE_FIELD(hv, job_info, req_switch, uint32_t); STORE_FIELD(hv, job_info, requeue, uint16_t); STORE_FIELD(hv, job_info, resize_time, time_t); STORE_FIELD(hv, job_info, restart_cnt, uint16_t); if(job_info->resv_name) STORE_FIELD(hv, 
job_info, resv_name, charp); STORE_PTR_FIELD(hv, job_info, select_jobinfo, "Slurm::dynamic_plugin_data_t"); STORE_PTR_FIELD(hv, job_info, job_resrcs, "Slurm::job_resources_t"); STORE_FIELD(hv, job_info, shared, uint16_t); STORE_FIELD(hv, job_info, show_flags, uint16_t); STORE_FIELD(hv, job_info, start_time, time_t); if(job_info->state_desc) STORE_FIELD(hv, job_info, state_desc, charp); STORE_FIELD(hv, job_info, state_reason, uint16_t); if(job_info->std_in) STORE_FIELD(hv, job_info, std_in, charp); if(job_info->std_out) STORE_FIELD(hv, job_info, std_out, charp); if(job_info->std_err) STORE_FIELD(hv, job_info, std_err, charp); STORE_FIELD(hv, job_info, submit_time, time_t); STORE_FIELD(hv, job_info, suspend_time, time_t); STORE_FIELD(hv, job_info, time_limit, uint32_t); STORE_FIELD(hv, job_info, time_min, uint32_t); STORE_FIELD(hv, job_info, user_id, uint32_t); STORE_FIELD(hv, job_info, wait4switch, uint32_t); if(job_info->wckey) STORE_FIELD(hv, job_info, wckey, charp); if(job_info->work_dir) STORE_FIELD(hv, job_info, work_dir, charp); _job_resrcs_to_hv(job_info, hv); return 0; } /* * convert perl HV to job_info_t */ int hv_to_job_info(HV *hv, job_info_t *job_info) { SV **svp; AV *av; int i, n; memset(job_info, 0, sizeof(job_info_t)); FETCH_FIELD(hv, job_info, account, charp, FALSE); FETCH_FIELD(hv, job_info, alloc_node, charp, FALSE); FETCH_FIELD(hv, job_info, alloc_sid, uint32_t, TRUE); FETCH_FIELD(hv, job_info, array_job_id, uint32_t, TRUE); FETCH_FIELD(hv, job_info, array_task_id, uint32_t, TRUE); FETCH_FIELD(hv, job_info, array_task_str, charp, FALSE); FETCH_FIELD(hv, job_info, batch_flag, uint16_t, TRUE); FETCH_FIELD(hv, job_info, command, charp, FALSE); FETCH_FIELD(hv, job_info, comment, charp, FALSE); FETCH_FIELD(hv, job_info, contiguous, uint16_t, TRUE); FETCH_FIELD(hv, job_info, cpus_per_task, uint16_t, TRUE); FETCH_FIELD(hv, job_info, dependency, charp, FALSE); FETCH_FIELD(hv, job_info, derived_ec, uint32_t, TRUE); FETCH_FIELD(hv, job_info, eligible_time, 
time_t, TRUE); FETCH_FIELD(hv, job_info, end_time, time_t, TRUE); FETCH_FIELD(hv, job_info, exc_nodes, charp, FALSE); svp = hv_fetch(hv, "exc_node_inx", 12, FALSE); if (svp && SvROK(*svp) && SvTYPE(SvRV(*svp)) == SVt_PVAV) { av = (AV*)SvRV(*svp); n = av_len(av) + 2; /* for trailing -1 */ job_info->exc_node_inx = xmalloc(n * sizeof(int)); for (i = 0; i < n-1; i += 2) { job_info->exc_node_inx[i] = (int)SvIV(*(av_fetch(av, i, FALSE))); job_info->exc_node_inx[i+1] = (int)SvIV(*(av_fetch(av, i+1, FALSE))); } job_info->exc_node_inx[n-1] = -1; } else { /* nothing to do */ } FETCH_FIELD(hv, job_info, exit_code, uint32_t, TRUE); FETCH_FIELD(hv, job_info, features, charp, FALSE); FETCH_FIELD(hv, job_info, gres, charp, FALSE); FETCH_FIELD(hv, job_info, group_id, uint32_t, TRUE); FETCH_FIELD(hv, job_info, job_id, uint32_t, TRUE); FETCH_FIELD(hv, job_info, job_state, uint32_t, TRUE); FETCH_FIELD(hv, job_info, licenses, charp, FALSE); FETCH_FIELD(hv, job_info, max_cpus, uint32_t, TRUE); FETCH_FIELD(hv, job_info, max_nodes, uint32_t, TRUE); FETCH_FIELD(hv, job_info, profile, uint32_t, TRUE); FETCH_FIELD(hv, job_info, sockets_per_node, uint16_t, TRUE); FETCH_FIELD(hv, job_info, cores_per_socket, uint16_t, TRUE); FETCH_FIELD(hv, job_info, threads_per_core, uint16_t, TRUE); FETCH_FIELD(hv, job_info, name, charp, FALSE); FETCH_FIELD(hv, job_info, network, charp, FALSE); FETCH_FIELD(hv, job_info, nice, uint16_t, TRUE); FETCH_FIELD(hv, job_info, nodes, charp, FALSE); svp = hv_fetch(hv, "node_inx", 8, FALSE); if (svp && SvROK(*svp) && SvTYPE(SvRV(*svp)) == SVt_PVAV) { av = (AV*)SvRV(*svp); n = av_len(av) + 2; /* for trailing -1 */ job_info->node_inx = xmalloc(n * sizeof(int)); for (i = 0; i < n-1; i += 2) { job_info->node_inx[i] = (int)SvIV(*(av_fetch(av, i, FALSE))); job_info->node_inx[i+1] = (int)SvIV(*(av_fetch(av, i+1, FALSE))); } job_info->node_inx[n-1] = -1; } else { /* nothing to do */ } FETCH_FIELD(hv, job_info, ntasks_per_core, uint16_t, TRUE); FETCH_FIELD(hv, job_info, 
ntasks_per_node, uint16_t, TRUE); FETCH_FIELD(hv, job_info, ntasks_per_socket, uint16_t, TRUE); FETCH_FIELD(hv, job_info, num_nodes, uint32_t, TRUE); FETCH_FIELD(hv, job_info, num_cpus, uint32_t, TRUE); FETCH_FIELD(hv, job_info, pn_min_memory, uint32_t, TRUE); FETCH_FIELD(hv, job_info, pn_min_cpus, uint16_t, TRUE); FETCH_FIELD(hv, job_info, pn_min_tmp_disk, uint32_t, TRUE); FETCH_FIELD(hv, job_info, partition, charp, FALSE); FETCH_FIELD(hv, job_info, pre_sus_time, time_t, TRUE); FETCH_FIELD(hv, job_info, priority, uint32_t, TRUE); FETCH_FIELD(hv, job_info, qos, charp, FALSE); FETCH_FIELD(hv, job_info, req_nodes, charp, FALSE); svp = hv_fetch(hv, "req_node_inx", 12, FALSE); if (svp && SvROK(*svp) && SvTYPE(SvRV(*svp)) == SVt_PVAV) { av = (AV*)SvRV(*svp); n = av_len(av) + 2; /* for trailing -1 */ job_info->req_node_inx = xmalloc(n * sizeof(int)); for (i = 0; i < n-1; i += 2) { job_info->req_node_inx[i] = (int)SvIV(*(av_fetch(av, i, FALSE))); job_info->req_node_inx[i+1] = (int)SvIV(*(av_fetch(av, i+1, FALSE))); } job_info->req_node_inx[n-1] = -1; } else { /* nothing to do */ } FETCH_FIELD(hv, job_info, req_switch, uint32_t, FALSE); FETCH_FIELD(hv, job_info, requeue, uint16_t, TRUE); FETCH_FIELD(hv, job_info, resize_time, time_t, TRUE); FETCH_FIELD(hv, job_info, restart_cnt, uint16_t, TRUE); FETCH_FIELD(hv, job_info, resv_name, charp, FALSE); FETCH_PTR_FIELD(hv, job_info, select_jobinfo, "Slurm::dynamic_plugin_data_t", FALSE); FETCH_PTR_FIELD(hv, job_info, job_resrcs, "Slurm::job_resources_t", FALSE); FETCH_FIELD(hv, job_info, shared, uint16_t, TRUE); FETCH_FIELD(hv, job_info, show_flags, uint16_t, TRUE); FETCH_FIELD(hv, job_info, start_time, time_t, TRUE); FETCH_FIELD(hv, job_info, state_desc, charp, FALSE); FETCH_FIELD(hv, job_info, state_reason, uint16_t, TRUE); FETCH_FIELD(hv, job_info, std_in, charp, FALSE); FETCH_FIELD(hv, job_info, std_out, charp, FALSE); FETCH_FIELD(hv, job_info, std_err, charp, FALSE); FETCH_FIELD(hv, job_info, submit_time, time_t, TRUE); 
FETCH_FIELD(hv, job_info, suspend_time, time_t, TRUE); FETCH_FIELD(hv, job_info, time_limit, uint32_t, TRUE); FETCH_FIELD(hv, job_info, time_min, uint32_t, TRUE); FETCH_FIELD(hv, job_info, user_id, uint32_t, TRUE); FETCH_FIELD(hv, job_info, wait4switch, uint32_t, FALSE); FETCH_FIELD(hv, job_info, wckey, charp, FALSE); FETCH_FIELD(hv, job_info, work_dir, charp, FALSE); return 0; } /* * convert job_info_msg_t to perl HV */ int job_info_msg_to_hv(job_info_msg_t *job_info_msg, HV *hv) { int i; HV *hv_info; AV *av; _load_node_info(); STORE_FIELD(hv, job_info_msg, last_update, time_t); /* record_count implied in job_array */ av = newAV(); for(i = 0; i < job_info_msg->record_count; i ++) { hv_info = newHV(); if (job_info_to_hv(job_info_msg->job_array + i, hv_info) < 0) { SvREFCNT_dec(hv_info); SvREFCNT_dec(av); return -1; } av_store(av, i, newRV_noinc((SV*)hv_info)); } hv_store_sv(hv, "job_array", newRV_noinc((SV*)av)); _free_node_info(); return 0; } /* * convert perl HV to job_info_msg_t */ int hv_to_job_info_msg(HV *hv, job_info_msg_t *job_info_msg) { SV **svp; AV *av; int i, n; memset(job_info_msg, 0, sizeof(job_info_msg_t)); FETCH_FIELD(hv, job_info_msg, last_update, time_t, TRUE); svp = hv_fetch(hv, "job_array", 9, FALSE); if (! (svp && SvROK(*svp) && SvTYPE(SvRV(*svp)) == SVt_PVAV)) { Perl_warn (aTHX_ "job_array is not an array reference in HV for job_info_msg_t"); return -1; } av = (AV*)SvRV(*svp); n = av_len(av) + 1; job_info_msg->record_count = n; job_info_msg->job_array = xmalloc(n * sizeof(job_info_t)); for(i = 0; i < n; i ++) { svp = av_fetch(av, i, FALSE); if (! 
(svp && SvROK(*svp) && SvTYPE(SvRV(*svp)) == SVt_PVHV)) { Perl_warn (aTHX_ "element %d in job_array is not valid", i); return -1; } if (hv_to_job_info((HV*)SvRV(*svp), &job_info_msg->job_array[i]) < 0) { Perl_warn(aTHX_ "failed to convert element %d in job_array", i); return -1; } } return 0; }
slurm-slurm-15-08-7-1/contribs/perlapi/libslurm/perl/lib/Slurm.pm
package Slurm;

use 5.008;
use strict;
use warnings;
use Carp;

use Slurm::Hostlist;
use Slurm::Bitstr;
use Slurm::Stepctx;
use Slurm::Constant;

sub import {
    # export constants
    Slurm::Constant->import2() if grep(/^:constant$/, @_) || grep(/^:all$/, @_);

    # export job/node state testing macros
    my $callpkg = caller(0);
    {
        no strict "refs";
        my ($macro, $sub);
        while( ($macro, $sub) = each(%{Slurm::}) ) {
            next unless $macro =~ /^IS_JOB_/ or $macro =~ /^IS_NODE_/;
            *{$callpkg . "::$macro"} = $sub;
        }
    }
}

our $VERSION = '0.02';

# XSLoader will not work for SLURM because it does not honour dl_load_flags.
require DynaLoader;
our @ISA;
push @ISA, 'DynaLoader';
bootstrap Slurm $VERSION;

sub dl_load_flags { if($^O eq 'aix') { 0x00 } else { 0x01 }}

############################################################
# handy macros defined in slurm_protocol_defs.h
############################################################
# /* Defined job states */
sub IS_JOB_PENDING { (($_[0]->{job_state} & JOB_STATE_BASE) == JOB_PENDING) }
sub IS_JOB_RUNNING { (($_[0]->{job_state} & JOB_STATE_BASE) == JOB_RUNNING) }
sub IS_JOB_SUSPENDED { (($_[0]->{job_state} & JOB_STATE_BASE) == JOB_SUSPENDED) }
sub IS_JOB_COMPLETE { (($_[0]->{job_state} & JOB_STATE_BASE) == JOB_COMPLETE) }
sub IS_JOB_CANCELLED { (($_[0]->{job_state} & JOB_STATE_BASE) == JOB_CANCELLED) }
sub IS_JOB_FAILED { (($_[0]->{job_state} & JOB_STATE_BASE) == JOB_FAILED) }
sub IS_JOB_TIMEOUT { (($_[0]->{job_state} & JOB_STATE_BASE) == JOB_TIMEOUT) }
sub IS_JOB_NODE_FAILED { (($_[0]->{job_state} & JOB_STATE_BASE) == JOB_NODE_FAIL) }
# /* Derived job states */
sub IS_JOB_COMPLETING { ($_[0]->{job_state} & JOB_COMPLETING) }
sub IS_JOB_CONFIGURING { ($_[0]->{job_state} & JOB_CONFIGURING) }
sub IS_JOB_STARTED { (($_[0]->{job_state} & JOB_STATE_BASE) > JOB_PENDING) }
sub IS_JOB_FINISHED { (($_[0]->{job_state} & JOB_STATE_BASE) > JOB_SUSPENDED) }
sub IS_JOB_COMPLETED { (IS_JOB_FINISHED($_[0]) && (($_[0]->{job_state} & JOB_COMPLETING) == 0)) }
sub IS_JOB_RESIZING { ($_[0]->{job_state} & JOB_RESIZING) }
# /* Defined node states */
sub IS_NODE_UNKNOWN { (($_[0]->{node_state} & NODE_STATE_BASE) == NODE_STATE_UNKNOWN) }
sub IS_NODE_DOWN { (($_[0]->{node_state} & NODE_STATE_BASE) == NODE_STATE_DOWN) }
sub IS_NODE_IDLE { (($_[0]->{node_state} & NODE_STATE_BASE) == NODE_STATE_IDLE) }
sub IS_NODE_ALLOCATED { (($_[0]->{node_state} & NODE_STATE_BASE) == NODE_STATE_ALLOCATED) }
sub IS_NODE_ERROR { (($_[0]->{node_state} & NODE_STATE_BASE) == NODE_STATE_ERROR) }
sub IS_NODE_MIXED { (($_[0]->{node_state} & NODE_STATE_BASE) == NODE_STATE_MIXED) }
sub IS_NODE_FUTURE { (($_[0]->{node_state} & NODE_STATE_BASE) == NODE_STATE_FUTURE) }
# /* Derived node states */
sub IS_NODE_DRAIN { ($_[0]->{node_state} & NODE_STATE_DRAIN) }
sub IS_NODE_DRAINING { (($_[0]->{node_state} & NODE_STATE_DRAIN) && (IS_NODE_ALLOCATED($_[0]) || IS_NODE_ERROR($_[0]) || IS_NODE_MIXED($_[0]))) }
sub IS_NODE_DRAINED { (IS_NODE_DRAIN($_[0]) && !IS_NODE_DRAINING($_[0])) }
sub IS_NODE_COMPLETING { ($_[0]->{node_state} & NODE_STATE_COMPLETING) }
sub IS_NODE_NO_RESPOND { ($_[0]->{node_state} & NODE_STATE_NO_RESPOND) }
sub IS_NODE_POWER_SAVE { ($_[0]->{node_state} & NODE_STATE_POWER_SAVE) }
sub IS_NODE_POWER_UP { ($_[0]->{node_state} & NODE_STATE_POWER_UP) }
sub IS_NODE_FAIL { ($_[0]->{node_state} & NODE_STATE_FAIL) }
sub IS_NODE_MAINT { ($_[0]->{node_state} & NODE_STATE_MAINT) }

1;

__END__

=head1 NAME

Slurm - Perl API for libslurm

=head1 SYNOPSIS

    use Slurm;

    my $slurm = Slurm::new();
    $nodes = $slurm->load_node();
    unless($nodes) {
        die "failed to load node info: " . $slurm->strerror();
    }

=head1 DESCRIPTION

The Slurm class provides a Perl interface to the SLURM API functions in C<E<lt>slurm/slurm.hE<gt>>, with some extra frequently used functions exported by libslurm.

=head2 METHODS

To use the API, first create a Slurm object:

    $slurm = Slurm::new($conf);

Then call the desired functions:

    $resp = $slurm->load_jobs();

In the following L</METHODS> section, if a parameter is omitted, it will be listed as "param=val", where "val" is the default value of the parameter.

=head2 DATA STRUCTURES

Typically, C structures are converted to (maybe blessed) Perl hash references, with field names as hash keys. Arrays in C are converted to arrays in Perl.
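
The conversion convention can be illustrated with plain Perl. The hash below is hand-built sample data mimicking a converted job info message (field values are invented for illustration; no running cluster is required). It shows deriving the record count from the array length, and expanding a C<node_inx> field, which holds (start, end) index-range pairs into the node table (the trailing -1 terminator used on the C side is dropped in the Perl arrays):

```perl
use strict;
use warnings;

# Hand-built sample shaped like a converted job info message hash
# (values are made up, not output of a real API call).
my $resp = {
    last_update => 1285847672,
    job_array   => [
        { job_id => 100, nodes => 'node[0-3]', node_inx => [0, 3] },
        { job_id => 101, nodes => 'node[5-6]', node_inx => [5, 6] },
    ],
};

# record_count is implied by the length of job_array
my $record_count = scalar @{ $resp->{job_array} };

# node_inx holds [start, end] index pairs; expand into node indices
my @node_ids;
for my $job (@{ $resp->{job_array} }) {
    my @inx = @{ $job->{node_inx} };
    while (@inx >= 2) {
        my ($start, $end) = splice(@inx, 0, 2);
        push @node_ids, $start .. $end;
    }
}
print "$record_count jobs on nodes @node_ids\n";
```

The same pair-expansion applies to the C<exc_node_inx> and C<req_node_inx> fields handled by the conversion code above.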

For example, there is a structure C<job_info_msg_t>:

    typedef struct job_info_msg {
        time_t last_update;     /* time of latest info */
        uint32_t record_count;  /* number of records */
        job_info_t *job_array;  /* the job records */
    } job_info_msg_t;

This will be converted to a hash reference with the following structure:

    {
        last_update => 1285847672,
        job_array => [
            {account => 'test', alloc_node => 'ln0', alloc_sid => 1234, ...},
            {account => 'debug', alloc_node => 'ln2', alloc_sid => 5678, ...},
            ...
        ]
    }

Note that the C<record_count> field is missing from the hash. It can be derived from the number of elements in the C<job_array> array.

To pass parameters to the API functions, use the corresponding hash references, for example:

    $rc = $slurm->update_node({node_names => 'node[0-7]', node_state => NODE_STATE_DRAIN});

Please see C<E<lt>slurm/slurm.hE<gt>> for the definition of the structures.

=head2 CONSTANTS

The enumerations and macro definitions are available in the Slurm package. If ':constant' is given when using the Slurm package, the constants will be exported to the calling package. Please see L<Slurm::Constant> for the available constants.

=head1 METHODS

=head2 CONSTRUCTOR/DESTRUCTOR

=head3 $slurm = Slurm::new($conf_file=undef);

Create a Slurm object. For now the object is just a hash reference with no members.

=over 2

=item * IN $conf_file: the SLURM configuration file. If omitted, the default SLURM configuration file will be used (the file specified by environment variable SLURM_CONF, or the file slurm.conf under the directory specified at compile time).

=item * RET: blessed opaque Slurm object. On error C<undef> is returned.

=back

=head2 ERROR INFORMATION FUNCTIONS

=head3 $errno = $slurm->get_errno();

Get the error number associated with the last operation.

=over 2

=item * RET: error number associated with the last operation.

=back

=head3 $str = $slurm->strerror($errno=0)

Get the string describing the specified error number.

=over 2

=item * IN $errno: error number. If omitted or 0, the error number returned by C<< $slurm->get_errno() >> will be used.

=item * RET: error string.

=back

=head2 ENTITY STATE/REASON/FLAG/TYPE STRING FUNCTIONS

=head3 $str = $slurm->preempt_mode_string($mode_num);

Get the string describing the specified preempt mode number.

=over 2

=item * IN $mode_num: preempt mode number.

=item * RET: preempt mode string.

=back

=head3 $num = $slurm->preempt_mode_num($mode_str);

Get the preempt mode number of the specified preempt mode string.

=over 2

=item * IN $mode_str: preempt mode string.

=item * RET: preempt mode number.

=back

=head3 $str = $slurm->job_reason_string($num);

Get the string representation of the specified job state reason number.

=over 2

=item * IN $num: job reason number.

=item * RET: job reason string.

=back

=head3 $str = $slurm->job_state_string($num);

Get the string representation of the specified job state number.

=over 2

=item * IN $num: job state number.

=item * RET: job state string.

=back

=head3 $str = $slurm->job_state_string_compact($num);

Get the compact string representation of the specified job state number.

=over 2

=item * IN $num: job state number.

=item * RET: compact job state string.

=back

=head3 $num = $slurm->job_state_num($str);

Get the job state number of the specified (compact) job state string.

=over 2

=item * IN $str: job state string.

=item * RET: job state number.

=back

=head3 $str = $slurm->reservation_flags_string($flags);

Get the string representation of the specified reservation flags.

=over 2

=item * IN $flags: reservation flags number.

=item * RET: reservation flags string.

=back

=head3 $str = $slurm->node_state_string($num);

Get the string representation of the specified node state number.

=over 2

=item * IN $num: node state number.

=item * RET: node state string.

=back

=head3 $str = $slurm->node_state_string_compact($num);

Get the compact string representation of the specified node state number.

=over 2

=item * IN $num: node state number.

=item * RET: compact node state string.

=back

=head3 $str = $slurm->private_data_string($num);

Get the string representation of the specified private data type.

=over 2

=item * IN $num: private data type number.

=item * RET: private data type string.

=back

=head3 $str = $slurm->accounting_enforce_string($num);

Get the string representation of the specified accounting enforce type.

=over 2

=item * IN $num: accounting enforce type number.

=item * RET: accounting enforce type string.

=back

=head3 $str = $slurm->conn_type_string($num);

Get the string representation of the specified connection type.

=over 2

=item * IN $num: connection type number.

=item * RET: connection type string.

=back

=head3 $str = $slurm->node_use_string($num);

Get the string representation of the specified node usage type.

=over 2

=item * IN $num: node usage type number.

=item * RET: node usage type string.

=back

=head3 $str = $slurm->bg_block_state_string($num);

Get the string representation of the specified BlueGene block state.

=over 2

=item * IN $num: BG block state number.

=item * RET: BG block state string.

=back

=head2 RESOURCE ALLOCATION FUNCTIONS

=head3 $resp = $slurm->allocate_resources($job_desc);

Allocate resources for a job request. If the requested resources are not immediately available, the slurmctld will send the job_alloc_resp_msg to the specified node and port.

=over 2

=item * IN $job_desc: description of resource allocation request, with structure of C.

=item * RET: response to request, with structure of C. This only represents a job allocation if resources are immediately available. Otherwise it just contains the job id of the enqueued job request. On failure C<undef> is returned.

=back

=head3 $resp = $slurm->allocate_resources_blocking($job_desc, $timeout=0, $pending_callbacks=undef);

Allocate resources for a job request. This call will block until the allocation is granted, or the specified timeout limit is reached.

=over 2

=item * IN $job_desc: description of resource allocation request, with structure of C.

=item * IN $timeout: amount of time, in seconds, to wait for a response before giving up. A timeout of zero will wait indefinitely.

=item * IN $pending_callbacks: If the allocation cannot be granted immediately, the controller will put the job in the PENDING state. If a pending callback is given, it will be called with the job id of the pending job as the sole parameter.

=item * RET: allocation response, with structure of C. On failure C<undef> is returned, with errno set.

=back

=head3 $resp = $slurm->allocation_lookup($job_id);

Retrieve info for an existing resource allocation.

=over 2

=item * IN $job_id: job allocation identifier.

=item * RET: job allocation info, with structure of C. On failure C<undef> is returned with errno set.

=back

=head3 $resp = $slurm->allocation_lookup_lite($job_id);

Retrieve minor info for an existing resource allocation.

=over 2

=item * IN $job_id: job allocation identifier.

=item * RET: job allocation info, with structure of C. On failure C<undef> is returned with errno set.

=back

=head3 $str = $slurm->read_hostfile($filename, $n);

Read a specified SLURM hostfile. The file must contain a list of SLURM NodeNames, one per line.

=over 2

=item * IN $filename: name of SLURM hostlist file to be read.

=item * IN $n: number of NodeNames required.

=item * RET: a string representing the hostlist. Returns NULL if there are fewer than $n hostnames in the file, or if an error occurs.

=back

=head3 $msg_thr = $slurm->allocation_msg_thr_create($port, $callbacks);

Start up a message handler talking with the controller dealing with messages from the controller during an allocation.

=over 2

=item * OUT $port: port we are listening for messages on from the controller.

=item * IN $callbacks: callbacks for different types of messages, with structure of C.

=item * RET: opaque object of C, or NULL on failure.

=back

=head3 $slurm->allocation_msg_thr_destroy($msg_thr);

Shut down the message handler talking with the controller dealing with messages from the controller during an allocation.

=over 2

=item * IN $msg_thr: opaque object of C pointer.

=back

=head3 $resp = $slurm->submit_batch_job($job_desc_msg);

Issue RPC to submit a job for later execution.

=over 2

=item * IN $job_desc_msg: description of batch job request, with structure of C.

=item * RET: 0 on success, otherwise return -1 and set errno to indicate the error.

=back

=head3 $rc = $slurm->job_will_run($job_desc_msg);

Determine if a job would execute immediately if submitted now.

=over 2

=item * IN $job_desc_msg: description of resource allocation request, with structure of C.

=item * RET: 0 on success, otherwise return -1 and set errno to indicate the error.

=back

=head3 $resp = $slurm->sbcast_lookup($job_id);

Retrieve info for an existing resource allocation including a credential needed for sbcast.

=over 2

=item * IN $job_id: job allocation identifier.

=item * RET: job allocation information including a credential for sbcast, with structure of C. On failure C<undef> is returned with errno set.

=back

=head2 JOB/STEP SIGNALING FUNCTIONS

=head3 $rc = $slurm->kill_job($job_id, $signal, $batch_flag=0);

Send the specified signal to all steps of an existing job.

=over 2

=item * IN $job_id: the job's id.

=item * IN $signal: signal number.

=item * IN $batch_flag: 1 to signal batch shell only, otherwise 0.

=item * RET: 0 on success, otherwise return -1 and set errno to indicate the error.

=back

=head3 $rc = $slurm->kill_job_step($job_id, $step_id, $signal);

Send the specified signal to an existing job step.

=over 2

=item * IN $job_id: the job's id.

=item * IN $step_id: the job step's id.

=item * IN $signal: signal number.

=item * RET: 0 on success, otherwise return -1 and set errno to indicate the error.

=back

=head3 $rc = $slurm->signal_job($job_id, $signal);

Send the specified signal to all steps of an existing job.
=over 2 =item * IN $job_id: the job's id. =item * IN $signal: signal number. =item * RET: 0 on success, otherwise return -1 and set errno to indicate the error. =back =head3 $rc = $slurm->signal_job_step($job_id, $step_id, $signal); Send the specified signal to an existing job step. =over 2 =item * IN $job_id: the job's id. =item * IN $step_id: the job step's id. =item * IN $signal: signal number. =item * RET: 0 on success, otherwise return -1 and set errno to indicate the error. =back =head2 JOB/STEP COMPLETION FUNCTIONS =head3 $rc = $slurm->complete_job($job_id, $job_rc=0); Note the completion of a job and all of its steps. =over 2 =item * IN $job_id: the job's id. =item * IN $job_rc: the highest exit code of any task of the job. =item * RET: 0 on success, otherwise return -1 and set errno to indicate the error. =back =head3 $rc = $slurm->terminate_job_step($job_id, $step_id); Terminates a job step by sending a REQUEST_TERMINATE_TASKS rpc to all slurmd of a job step, and then calls slurm_complete_job_step() after verifying that all nodes in the job step no longer have running tasks from the job step. (May take over 35 seconds to return.) =over 2 =item * IN $job_id: the job's id. =item * IN $step_id: the job step's id - use SLURM_BATCH_SCRIPT as the step_id to terminate a job's batch script. =item * RET: 0 on success, otherwise return -1 and set errno to indicate the error. =back =head2 SLURM TASK SPAWNING FUNCTIONS =head3 $ctx = $slurm->step_ctx_create($params); Create a job step and its context. =over 2 =item * IN $params: job step parameters, with structure of C. =item * RET: the step context. On failure C is returned with errno set. =back =head3 $ctx = $slurm->step_ctx_create_no_alloc($params); Create a job step and its context without getting an allocation. =over 2 =item * IN $params: job step parameters, with structure of C.. =item * IN $step_id: fake job step id. =item * RET: the step context. On failure C is returned with errno set. 

=back

=head2 SLURM CONTROL CONFIGURATION READ/PRINT/UPDATE FUNCTIONS

=head3 ($major, $minor, $micro) = $slurm->api_version();

Get the SLURM API's version number.

=over 2

=item * RET: a three-element list of the major, minor, and micro version number.

=back

=head3 $resp = $slurm->load_ctl_conf($update_time=0);

Issue RPC to get SLURM control configuration information if changed.

=over 2

=item * IN $update_time: time of current configuration data.

=item * RET: SLURM configuration data, with structure of C. On failure C<undef> is returned with errno set.

=back

=head3 $slurm->print_ctl_conf($out, $conf);

Output the contents of SLURM control configuration message as loaded using C.

=over 2

=item * IN $out: file to write to.

=item * IN $conf: SLURM control configuration, with structure of C.

=back

=head3 $list = $slurm->ctl_conf_2_key_pairs($conf);

Put the SLURM configuration data into a List of opaque data type C.

=over 2

=item * IN $conf: SLURM control configuration, with structure of C.

=item * RET: List of opaque data type C.

=back

=head3 $resp = $slurm->load_slurmd_status();

Issue RPC to get the status of the slurmd daemon on this machine.

=over 2

=item * RET: slurmd status info, with structure of C. On failure C<undef> is returned with errno set.

=back

=head3 $slurm->print_slurmd_status($out, $slurmd_status);

Output the contents of slurmd status message as loaded using C.

=over 2

=item * IN $out: file to write to.

=item * IN $slurmd_status: slurmd status info, with structure of C.

=back

=head3 $slurm->print_key_pairs($out, $key_pairs, $title);

Output the contents of key_pairs which is a list of opaque data type C.

=over 2

=item * IN $out: file to write to.

=item * IN $key_pairs: List containing key pairs to be printed.

=item * IN $title: title of key pair list.

=back

=head3 $rc = $slurm->update_step($step_msg);

Update the time limit of a job step.

=over 2

=item * IN $step_msg: step update message descriptor, with structure of C.

=item * RET: 0 or -1 on error.
=back =head2 SLURM JOB RESOURCES READ/PRINT FUNCTIONS =head3 $num = $slurm->job_cpus_allocated_on_node_id($job_res, $node_id); Get the number of cpus allocated to a job on a node by node id. =over 2 =item * IN $job_res: job resources data, with structure of C. =item * IN $node_id: zero-origin node id in allocation. =item * RET: number of CPUs allocated to job on this node or -1 on error. =back =head3 $num = $slurm->job_cpus_allocated_on_node($job_res, $node_name); Get the number of cpus allocated to a job on a node by node name. =over 2 =item * IN $job_res: job resources data, with structure of C. =item * IN $node_name: name of node. =item * RET: number of CPUs allocated to job on this node or -1 on error. =back =head2 SLURM JOB CONFIGURATION READ/PRINT/UPDATE FUNCTIONS =head3 $time = $slurm->get_end_time($job_id); Get the expected end time for a given slurm job. =over 2 =item * IN $jobid: SLURM job id. =item * RET: scheduled end time for the job. On failure C is returned with errno set. =back =head3 $secs = $slurm->get_rem_time($job_id); Get the expected time remaining for a given job. =over 2 =item * IN $jobid: SLURM job id. =item * RET: remaining time in seconds or -1 on error. =back =head3 $rc = $slurm->job_node_ready($job_id); Report if nodes are ready for job to execute now. =over 2 =item * IN $job_id: SLURM job id. =item * RET: =over 2 =item * READY_JOB_FATAL: fatal error =item * READY_JOB_ERROR: ordinary error =item * READY_NODE_STATE: node is ready =item * READY_JOB_STATE: job is ready to execute =back =back =head3 $resp = $slurm->load_job($job_id, $show_flags=0); Issue RPC to get job information for one job ID. =over 2 =item * IN $job_id: ID of job we want information about. =item * IN $show_flags: job filtering options. =item * RET: job information, with structure of C. On failure C is returned with errno set. =back =head3 $resp = $slurm->load_jobs($update_time=0, $show_flags=0); Issue RPC to get all SLURM job information if changed. 

=over 2

=item * IN $update_time: time of current job information data.

=item * IN $show_flags: job filtering options.

=item * RET: job information, with structure of C. On failure C<undef> is returned with errno set.

=back

=head3 $rc = $slurm->notify_job($job_id, $message);

Send a message to the job's stdout, usable only by user root.

=over 2

=item * IN $job_id: SLURM job id or 0 for all jobs.

=item * IN $message: arbitrary message.

=item * RET: 0 or -1 on error.

=back

=head3 $job_id = $slurm->pid2jobid($job_pid);

Issue RPC to get the SLURM job ID of a given process ID on this machine.

=over 2

=item * IN $job_pid: process ID of interest on this machine.

=item * RET: corresponding job ID. On failure C<undef> is returned.

=back

=head3 $slurm->print_job_info($out, $job_info, $one_liner=0);

Output information about a specific SLURM job based upon message as loaded using C.

=over 2

=item * IN $out: file to write to.

=item * IN $job_info: an individual job information record, with structure of C.

=item * IN $one_liner: print as a single line if true.

=back

=head3 $slurm->print_job_info_msg($out, $job_info_msg, $one_liner=0);

Output information about all SLURM jobs based upon message as loaded using C.

=over 2

=item * IN $out: file to write to.

=item * IN $job_info_msg: job information message, with structure of C.

=item * IN $one_liner: print as a single line if true.

=back

=head3 $str = $slurm->sprint_job_info($job_info, $one_liner=0);

Output information about a specific SLURM job based upon message as loaded using C.

=over 2

=item * IN $job_info: an individual job information record, with structure of C.

=item * IN $one_liner: print as a single line if true.

=item * RET: string containing formatted output.

=back

=head3 $rc = $slurm->update_job($job_info);

Issue RPC to update a job's configuration per request; only usable by user root or (for some parameters) the job's owner.

=over 2

=item * IN $job_info: description of job updates, with structure of C.
=item * RET: 0 on success, otherwise return -1 and set errno to indicate the error. =back =head2 SLURM JOB STEP CONFIGURATION READ/PRINT/UPDATE FUNCTIONS =head3 $resp = $slurm->get_job_steps($update_time=0, $job_id=NO_VAL, $step_id=NO_VAL, $show_flags=0); Issue RPC to get specific slurm job step configuration information if changed since update_time. =over 2 =item * IN $update_time: time of current configuration data. =item * IN $job_id: get information for specific job id, NO_VAL for all jobs. =item * IN $step_id: get information for specific job step id, NO_VAL for all job steps. =item * IN $show_flags: job step filtering options. =item * RET: job step information, with structure of C. On failure C is returned with errno set. =back =head3 $slurm->print_job_step_info_msg($out, $step_info_msg, $one_liner); Output information about all SLURM job steps based upon message as loaded using C. =over 2 =item * IN $out: file to write to. =item * IN $step_info_msg: job step information message, with structure of C. =item * IN $one_liner: print as a single line if true. =back =head3 $slurm->print_job_step_info($out, $step_info, $one_liner); Output information about a specific SLURM job step based upon message as loaded using C. =over 2 =item * IN $out: file to write to. =item * IN $step_info: job step information, with structure of C. =item * IN $one_liner: print as a single line if true. =back =head3 $str = $slurm->sprint_job_step_info($step_info, $one_liner); Output information about a specific SLURM job step based upon message as loaded using C. =over 2 =item * IN $step_info: job step information, with structure of C. =item * IN $one_liner: print as a single line if true. =item * RET: string containing formatted output. =back =head3 $layout = $slurm->job_step_layout_get($job_id, $step_id); Get the layout structure for a particular job step. =over 2 =item * IN $job_id: SLURM job ID. =item * IN $step_id: SLURM step ID. 
=item * RET: layout of the job step, with structure of C. On failure C is returned with errno set. =back =head3 $resp = $slurm->job_step_stat($job_id, $step_id, $nodelist=undef); Get status of a current step. =over 2 =item * IN $job_id : SLURM job ID. =item * IN $step_id: SLURM step ID. =item * IN $nodelist: nodes to check status of step. If omitted, all nodes in step are used. =item * RET: response of step status, with structure of C. On failure C is returned. =back =head3 $resp = $slurm->job_step_get_pids($job_id, $step_id, $nodelist); Get the complete list of pids for a given job step. =over 2 =item * IN $job_id: SLURM job ID. =item * IN $step_id: SLURM step ID. =item * IN $nodelist: nodes to check pids of step. If omitted, all nodes in step are used. =item * RET: response of pids information, with structure of C. On failure C is returned. =back =head2 SLURM NODE CONFIGURATION READ/PRINT/UPDATE FUNCTIONS =head3 $resp = $slurm->load_node($update_time=0, $show_flags=0); Issue RPC to get all node configuration information if changed. =over 2 =item * IN $update_time: time of current configuration data. =item * IN $show_flags: node filtering options. =item * RET: response hash reference with structure of C. On failure C is returned with errno set. =back =head3 $slurm->print_node_info_msg($out, $node_info_msg, $one_liner=0); Output information about all SLURM nodes based upon message as loaded using C. =over 2 =item * IN $out: FILE handle to write to. =item * IN $node_info_msg: node information message to print, with structure of C. =item * IN $one_liner: if true, each node info will be printed as a single line. =back =head3 $slurm->print_node_table($out, $node_info, $node_scaling=1, $one_liner=0); Output information about a specific SLURM node based upon message as loaded using C. =over 2 =item * IN $out: FILE handle to write to. =item * IN $node_info: an individual node information record with structure of C. 

=item * IN $node_scaling: the number of nodes each node information record represents.

=item * IN $one_liner: whether to print as a single line.

=back

=head3 $str = $slurm->sprint_node_table($node_info, $node_scaling=1, $one_liner=0);

Output information about a specific SLURM node based upon message as loaded using C.

=over 2

=item * IN $node_info: an individual node information record with structure of C.

=item * IN $node_scaling: number of nodes each node information record represents.

=item * IN $one_liner: whether to print as a single line.

=item * RET: string containing formatted output on success, C<undef> on failure.

=back

=head3 $rc = $slurm->update_node($node_info);

Issue RPC to modify a node's configuration per request, only usable by user root.

=over 2

=item * IN $node_info: description of node updates, with structure of C.

=item * RET: 0 on success, -1 on failure with errno set.

=back

=head2 SLURM SWITCH TOPOLOGY CONFIGURATION READ/PRINT FUNCTIONS

=head3 $resp = $slurm->load_topo();

Issue RPC to get all switch topology configuration information.

=over 2

=item * RET: response hash reference with structure of C. On failure C<undef> is returned with errno set.

=back

=head3 $slurm->print_topo_info_msg($out, $topo_info_msg, $one_liner=0);

Output information about all switch topology configuration information based upon message as loaded using C.

=over 2

=item * IN $out: FILE handle to write to.

=item * IN $topo_info_msg: switch topology information message, with structure of C.

=item * IN $one_liner: print as a single line if not zero.

=back

=head3 $slurm->print_topo_record($out, $topo_info, $one_liner);

Output information about a specific SLURM topology record based upon message as loaded using C.

=over 2

=item * IN $out: FILE handle to write to.

=item * IN $topo_info: an individual switch information record, with structure of C.

=item * IN $one_liner: print as a single line if not zero.
=back

=head2 SLURM SELECT READ/PRINT/UPDATE FUNCTIONS

=head3 $rc = $slurm->get_select_jobinfo($jobinfo, $data_type, $data);

Get data from a select job credential.

=over 2

=item * IN $jobinfo: select job credential to get data from. Opaque object.

=item * IN $data_type: type of data to get.

=over 2

=item * TODO: enumerate data type and returned value.

=back

=item * OUT $data: the data retrieved.

=item * RET: error code.

=back

=head3 $rc = $slurm->get_select_nodeinfo($nodeinfo, $data_type, $state, $data);

Get data from a select node credential.

=over 2

=item * IN $nodeinfo: select node credential to get data from.

=item * IN $data_type: type of data to get.

=over 2

=item * TODO: enumerate data type and returned value.

=back

=item * IN $state: state of node query.

=item * OUT $data: the data retrieved.

=back

=head2 SLURM PARTITION CONFIGURATION READ/PRINT/UPDATE FUNCTIONS

=head3 $resp = $slurm->load_partitions($update_time=0, $show_flags=0);

Issue RPC to get all SLURM partition configuration information if changed.

=over 2

=item * IN $update_time: time of current configuration data.

=item * IN $show_flags: partitions filtering options.

=item * RET: response hash reference with structure of C.

=back

=head3 $slurm->print_partition_info_msg($out, $part_info_msg, $one_liner=0);

Output information about all SLURM partitions based upon message as loaded using C.

=over 2

=item * IN $out: FILE handle to write to.

=item * IN $part_info_msg: partitions information message, with structure of C.

=item * IN $one_liner: print as a single line if true.

=back

=head3 $slurm->print_partition_info($out, $part_info, $one_liner=0);

Output information about a specific SLURM partition based upon message as loaded using C.

=over 2

=item * IN $out: FILE handle to write to.

=item * IN $part_info: an individual partition information record, with structure of C.

=item * IN $one_liner: print as a single line if true.
=back =head3 $str = $slurm->sprint_partition_info($part_info, $one_liner=0); Output information about a specific SLURM partition based upon message as loaded using C. =over 2 =item * IN $part_info: an individual partition information record, with structure of C. =item * IN $one_liner: print as a single line if true. =item * RET: string containing formatted output. On failure C is returned. =back =head3 $rc = $slurm->create_partition($part_info); Create a new partition, only usable by user root. =over 2 =item * IN $part_info: description of partition configuration with structure of C. =item * RET: 0 on success, -1 on failure with errno set. =back =head3 $rc = $slurm->update_partition($part_info); Issue RPC to update a partition's configuration per request, only usable by user root. =over 2 =item * IN $part_info: description of partition updates with structure of C. =item * RET: 0 on success, -1 on failure with errno set. =back =head3 $rc = $slurm->delete_partition($part_info) Issue RPC to delete a partition, only usable by user root. =over 2 =item * IN $part_info: description of partition to delete, with structure of C. =item * RET: 0 on success, -1 on failure with errno set. =back =head2 SLURM RESERVATION CONFIGURATION READ/PRINT/UPDATE FUNCTIONS =head3 $name = $slurm->create_reservation($resv_info); Create a new reservation, only usable by user root. =over 2 =item * IN $resv_info: description of reservation, with structure of C. =item * RET: name of reservation created. On failure C is returned with errno set. =back =head3 $rc = $slurm->update_reservation($resv_info); Modify an existing reservation, only usable by user root. =over 2 =item * IN $resv_info: description of reservation, with structure of C. =item * RET: error code. =back =head3 $rc = $slurm->delete_reservation($resv_info); Issue RPC to delete a reservation, only usable by user root. =over 2 =item * IN $resv_info: description of reservation to delete, with structure of C. 
=item * RET: error code =back =head3 $resp = $slurm->load_reservations($update_time=0); Issue RPC to get all SLURM reservation configuration information if changed. =over 2 =item * IN $update_time: time of current configuration data. =item * RET: response of reservation information, with structure of C. On failure C is returned with errno set. =back =head3 $slurm->print_reservation_info_msg($out, $resv_info_msg, $one_liner=0); Output information about all SLURM reservations based upon message as loaded using C. =over 2 =item * IN $out: FILE handle to write to. =item * IN $resv_info_msg: reservation information message, with structure of C. =item * IN $one_liner: print as a single line if true. =back =head3 $slurm->print_reservation_info($out, $resv_info, $one_liner=0); Output information about a specific SLURM reservation based upon message as loaded using C. =over 2 =item * IN $out: FILE handle to write to. =item * IN $resv_info: an individual reservation information record, with structure of C. =item * IN $one_liner: print as a single line if true. =back =head3 $str = $slurm->sprint_reservation_info($resv_info, $one_liner=0); Output information about a specific SLURM reservation based upon message as loaded using C. =over 2 =item * IN $resv_info: an individual reservation information record, with structure of C. =item * IN $one_liner: print as a single line if true. =item * RET: string containing formatted output. On failure C is returned. =back =head2 SLURM PING/RECONFIGURE/SHUTDOWN FUNCTIONS =head3 $rc = $slurm->ping($primary); Issue RPC to ping Slurm controller (slurmctld). =over 2 =item * IN primary: 1 for primary controller, 2 for secondary controller. =item * RET: error code. =back =head3 $rc = $slurm->reconfigure() Issue RPC to have Slurm controller (slurmctld) reload its configuration file. =over 2 =item * RET: error code. 
=back

=head3 $rc = $slurm->shutdown($options);

Issue RPC to have Slurm controller (slurmctld) cease operations; both the primary and backup controllers are shut down.

=over 2

=item * IN $options:

=over 4

=item * 0: all slurm daemons are shut down.

=item * 1: slurmctld generates a core file.

=item * 2: only the slurmctld is shut down (no core file).

=back

=item * RET: error code.

=back

=head3 $rc = $slurm->takeover();

Issue RPC to have the Slurm backup controller take over the primary controller. REQUEST_CONTROL is sent by the backup to the primary controller to take control.

=over 2

=item * RET: error code.

=back

=head3 $rc = $slurm->set_debug_level($debug_level);

Issue RPC to set slurm controller debug level.

=over 2

=item * IN $debug_level: requested debug level.

=item * RET: 0 on success, -1 on error with errno set.

=back

=head3 $rc = $slurm->set_schedlog_level($schedlog_level);

Issue RPC to set slurm scheduler log level.

=over 2

=item * IN $schedlog_level: requested scheduler log level.

=item * RET: 0 on success, -1 on error with errno set.

=back

=head2 SLURM JOB SUSPEND FUNCTIONS

=head3 $rc = $slurm->suspend($job_id);

Suspend execution of a job.

=over 2

=item * IN $job_id: job on which to perform operation.

=item * RET: error code.

=back

=head3 $rc = $slurm->resume($job_id);

Resume execution of a previously suspended job.

=over 2

=item * IN $job_id: job on which to perform operation.

=item * RET: error code.

=back

=head3 $rc = $slurm->requeue($job_id);

Re-queue a batch job; if it is already running, terminate it first.

=over 2

=item * IN $job_id: job on which to perform operation.

=item * RET: error code.

=back

=head2 SLURM JOB CHECKPOINT FUNCTIONS

=head3 $rc = $slurm->checkpoint_able($job_id, $step_id, $start_time);

Determine if the specified job step can presently be checkpointed.

=over 2

=item * IN $job_id: job on which to perform operation.

=item * IN $step_id: job step on which to perform operation.

=item * OUT $start_time: time at which checkpoint request was issued.
=item * RET: 0 (can be checkpointed) or a slurm error code.

=back

=head3 $rc = $slurm->checkpoint_disable($job_id, $step_id);

Disable checkpoint requests for some job step.

=over 2

=item * IN $job_id: job on which to perform operation.

=item * IN $step_id: job step on which to perform operation.

=item * RET: error code.

=back

=head3 $rc = $slurm->checkpoint_enable($job_id, $step_id);

Enable checkpoint requests for some job step.

=over 2

=item * IN $job_id: job on which to perform operation.

=item * IN $step_id: job step on which to perform operation.

=item * RET: error code.

=back

=head3 $rc = $slurm->checkpoint_create($job_id, $step_id, $max_wait, $image_dir);

Initiate a checkpoint request for some job step. The job will continue execution after the checkpoint operation completes.

=over 2

=item * IN $job_id: job on which to perform operation.

=item * IN $step_id: job step on which to perform operation.

=item * IN $max_wait: maximum wait for operation to complete, in seconds.

=item * IN $image_dir: directory to store image files.

=item * RET: error code.

=back

=head3 $rc = $slurm->checkpoint_vacate($job_id, $step_id, $max_wait, $image_dir);

Initiate a checkpoint request for some job step. The job will terminate after the checkpoint operation completes.

=over 2

=item * IN $job_id: job on which to perform operation.

=item * IN $step_id: job step on which to perform operation.

=item * IN $max_wait: maximum wait for operation to complete, in seconds.

=item * IN $image_dir: directory to store image files.

=item * RET: error code.

=back

=head3 $rc = $slurm->checkpoint_restart($job_id, $step_id, $stick, $image_dir);

Restart execution of a checkpointed job step.

=over 2

=item * IN $job_id: job on which to perform operation.

=item * IN $step_id: job step on which to perform operation.

=item * IN $stick: if true, stick to nodes previously running on.

=item * IN $image_dir: directory to find checkpoint image files.

=item * RET: error code.
=back

=head3 $rc = $slurm->checkpoint_complete($job_id, $step_id, $begin_time, $error_code, $error_msg);

Note the completion of a job step's checkpoint operation.

=over 2

=item * IN $job_id: job on which to perform operation.

=item * IN $step_id: job step on which to perform operation.

=item * IN $begin_time: time at which checkpoint began.

=item * IN $error_code: error code, highest value for all complete calls is preserved.

=item * IN $error_msg: error message, preserved for highest error_code.

=item * RET: error code.

=back

=head3 $rc = $slurm->checkpoint_task_complete($job_id, $step_id, $task_id, $begin_time, $error_code, $error_msg);

Note the completion of a task's checkpoint operation.

=over 2

=item * IN $job_id: job on which to perform operation.

=item * IN $step_id: job step on which to perform operation.

=item * IN $task_id: task which completed the operation.

=item * IN $begin_time: time at which checkpoint began.

=item * IN $error_code: error code, highest value for all complete calls is preserved.

=item * IN $error_msg: error message, preserved for highest error_code.

=item * RET: error code.

=back

=head3 $rc = $slurm->checkpoint_error($job_id, $step_id, $error_code, $error_msg);

Gather error information for the last checkpoint operation for some job step.

=over 2

=item * IN $job_id: job on which to perform operation.

=item * IN $step_id: job step on which to perform operation.

=item * OUT $error_code: error number associated with the last checkpoint operation.

=item * OUT $error_msg: error message associated with the last checkpoint operation.

=item * RET: error code.

=back

=head3 $rc = $slurm->checkpoint_tasks($job_id, $step_id, $image_dir, $max_wait, $nodelist);

Send checkpoint request to tasks of the specified job step.

=over 2

=item * IN $job_id: job on which to perform operation.

=item * IN $step_id: job step on which to perform operation.

=item * IN $image_dir: location to store checkpoint image files.
=item * IN $max_wait: seconds to wait for the operation to complete. =item * IN $nodelist: nodes to send the request. =item * RET: 0 on success, non-zero on failure with errno set. =back =head2 SLURM TRIGGER FUNCTIONS =head3 $rc = $slurm->set_trigger($trigger_info); Set an event trigger. =over 2 =item * IN $trigger_info: hash reference of specification of trigger to create, with structure of C. =item * RET: error code. =back =head3 $rc = $slurm->clear_trigger($trigger_info); Clear an existing event trigger. =over 2 =item * IN $trigger_info: hash reference of specification of trigger to remove, with structure of C. =item * RET: error code. =back =head3 $resp = $slurm->get_triggers(); Get all event trigger information. =over 2 =item * RET: hash reference with structure of C. On failure C is returned with errno set. =back =head2 JOB/NODE STATE TESTING FUNCTIONS The following are functions to test job/node state, based on the macros defined in F. The functions take a parameter of a hash reference of a job/node, and return a boolean value. For job, $job->{job_state} is tested. For node, $node->{node_state} is tested. 
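For example, the following sketch (illustrative only; it assumes a reachable slurmctld, the C<load_jobs> method documented earlier in this page, and that the returned hash follows the C<job_info_msg_t> structure with a C<job_array> field) classifies jobs by state:

    use Slurm qw(:constant);

    my $slurm = Slurm::new();
    my $resp  = $slurm->load_jobs();    # all jobs known to the controller
    foreach my $job (@{$resp->{job_array}}) {
        if (IS_JOB_PENDING($job)) {
            print "job $job->{job_id} is pending\n";
        } elsif (IS_JOB_RUNNING($job)) {
            print "job $job->{job_id} is running\n";
        }
    }

Each predicate inspects only the state field of the supplied hash reference, so any job or node record loaded through this API can be passed to it directly.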
=head3 $cond = IS_JOB_PENDING($job);

=head3 $cond = IS_JOB_RUNNING($job);

=head3 $cond = IS_JOB_SUSPENDED($job);

=head3 $cond = IS_JOB_COMPLETE($job);

=head3 $cond = IS_JOB_CANCELLED($job);

=head3 $cond = IS_JOB_FAILED($job);

=head3 $cond = IS_JOB_TIMEOUT($job);

=head3 $cond = IS_JOB_NODE_FAILED($job);

=head3 $cond = IS_JOB_COMPLETING($job);

=head3 $cond = IS_JOB_CONFIGURING($job);

=head3 $cond = IS_JOB_STARTED($job);

=head3 $cond = IS_JOB_FINISHED($job);

=head3 $cond = IS_JOB_COMPLETED($job);

=head3 $cond = IS_JOB_RESIZING($job);

=head3 $cond = IS_NODE_UNKNOWN($node);

=head3 $cond = IS_NODE_DOWN($node);

=head3 $cond = IS_NODE_IDLE($node);

=head3 $cond = IS_NODE_ALLOCATED($node);

=head3 $cond = IS_NODE_ERROR($node);

=head3 $cond = IS_NODE_MIXED($node);

=head3 $cond = IS_NODE_FUTURE($node);

=head3 $cond = IS_NODE_DRAIN($node);

=head3 $cond = IS_NODE_DRAINING($node);

=head3 $cond = IS_NODE_DRAINED($node);

=head3 $cond = IS_NODE_COMPLETING($node);

=head3 $cond = IS_NODE_NO_RESPOND($node);

=head3 $cond = IS_NODE_POWER_SAVE($node);

=head3 $cond = IS_NODE_POWER_UP($node);

=head3 $cond = IS_NODE_FAIL($node);

=head3 $cond = IS_NODE_MAINT($node);

=head1 EXPORT

The job/node state testing functions are exported by default. If ':constant' is specified, all constants are exported.

=head1 SEE ALSO

L, L, L, L for various hash reference structures.

Home page of SLURM: L.

=head1 AUTHOR

This library was created by Hongjia Cao, E<lt>hjcao(AT)nudt.edu.cnE<gt> and Danny Auble, E<lt>da(AT)llnl.govE<gt>. It is distributed with SLURM.

=head1 COPYRIGHT AND LICENSE

This library is free software; you can redistribute it and/or modify it under the same terms as Perl itself, either Perl version 5.8.4 or, at your option, any later version of Perl 5 you may have available.
=cut

package Slurm::Bitstr;

1;

__END__

=head1 NAME

Slurm::Bitstr - Bitstring functions in libslurm

=head1 SYNOPSIS

    use Slurm;

    $bitmap = Slurm::Bitstr::alloc(32);
    if ($bitmap->test(10)) {
        print "bit 10 is set\n";
    }

=head1 DESCRIPTION

The Slurm::Bitstr class is a wrapper of the bit string functions in libslurm. This package is loaded and bootstrapped with package Slurm.

=head1 METHODS

=head3 $bitmap = Slurm::Bitstr::alloc($nbits);

Allocate a bitstring object with $nbits bits. An opaque bitstr object is returned. This is a B.

=head3 $bitmap->realloc($nbits);

Reallocate a bitstring (expand or contract size). $nbits is the number of bits in the new bitstring.

=head3 $len = $bitmap->size();

Return the number of possible bits in a bitstring.

=head3 $cond = $bitmap->test($n);

Check if bit $n of $bitmap is set.

=head3 $bitmap->set($n);

Set bit $n of $bitmap.

=head3 $bitmap->clear($n);

Clear bit $n of $bitmap.

=head3 $bitmap->nset($start, $stop);

Set bits $start .. $stop in $bitmap.

=head3 $bitmap->nclear($start, $stop);

Clear bits $start .. $stop in $bitmap.

=head3 $pos = $bitmap->ffc();

Find first bit clear in $bitmap.

=head3 $pos = $bitmap->nffc($n);

Find the first $n contiguous bits clear in $bitmap.

=head3 $pos = $bitmap->noc($n, $seed);

Find $n contiguous bits clear in $bitmap starting at offset $seed.

=head3 $pos = $bitmap->nffs($n);

Find the first $n contiguous bits set in $bitmap.

=head3 $pos = $bitmap->ffs();

Find first bit set in $bitmap.

=head3 $pos = $bitmap->fls();

Find last bit set in $bitmap.

=head3 $bitmap->fill_gaps();

Set all bits of $bitmap between the first and last bits set (i.e. fill in the gaps to make set bits contiguous).
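The basic methods above can be combined as in this short sketch (illustrative only; it assumes the Slurm Perl bindings are installed and uses only methods documented in this page):

    use Slurm;

    my $bitmap = Slurm::Bitstr::alloc(16);  # 16-bit string, all bits clear
    $bitmap->nset(2, 5);                    # set bits 2 .. 5
    print "bit 3 is set\n" if $bitmap->test(3);
    my $first_clear = $bitmap->ffc();       # bit 0 is still clear
    my $last_set    = $bitmap->fls();       # last set bit is 5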
=head3 $cond = $bitmap1->super_set($bitmap2);

Return 1 if all bits set in $bitmap1 are also set in $bitmap2, 0 otherwise.

=head3 $cond = $bitmap1->equal($bitmap2);

Return 1 if $bitmap1 and $bitmap2 are identical, 0 otherwise.

=head3 $bitmap1->and($bitmap2);

$bitmap1 &= $bitmap2.

=head3 $bitmap->not();

$bitmap = ~$bitmap.

=head3 $bitmap1->or($bitmap2);

$bitmap1 |= $bitmap2.

=head3 $new = $bitmap->copy();

Return a copy of the supplied bitmap.

=head3 $dest_bitmap->copybits($src_bitmap);

Copy all bits of $src_bitmap to $dest_bitmap.

=head3 $n = $bitmap->set_count();

Count the number of bits set in bitstring.

=head3 $n = $bitmap1->overlap($bitmap2);

Return number of bits set in $bitmap1 that are also set in $bitmap2, 0 if no overlap.

=head3 $n = $bitmap->clear_count();

Count the number of bits clear in bitstring.

=head3 $n = $bitmap->nset_max_count();

Return the count of the largest number of contiguous bits set in $bitmap.

=head3 $sum = $bitmap->inst_and_set_count($int_array);

And $int_array and $bitmap and sum the elements corresponding to set entries in $bitmap.

=head3 $new = $bitmap->rotate_copy($n, $nbits);

Return a copy of $bitmap rotated by $n bits. Number of bits in the new bitmap is $nbits.

=head3 $bitmap->rotate($n);

Rotate $bitmap by $n bits.

=head3 $new = $bitmap->pick_cnt($nbits);

Build a bitmap containing the first $nbits of $bitmap which are set.

=head3 $str = $bitmap->fmt();

Convert $bitmap to range string format, e.g. 0-5,42.

=head3 $rc = $bitmap->unfmt($str);

Convert range string format to bitmap.

=head3 $array = Slurm::Bitstr::bitfmt2int($str);

Convert $str describing bitmap (output from fmt(), e.g. "0-30,45,50-60") into an array of integer (start/end) pairs terminated by -1 (e.g. "0, 30, 45, 45, 50, 60, -1").

=head3 $str = $bitmap->fmt_hexmask();

Given a bit string, allocate and return a string in the form of:

    "0x0123ABC\0"
      ^      ^
      |      |
     MSB    LSB

=head3 $rc = $bitmap->unfmt_hexmask($str);

Given a hex mask string "0x0123ABC\0", convert to a bit string.

      ^      ^
      |      |
     MSB    LSB

=head3 $str = $bitmap->fmt_binmask();

Given a bit string, allocate and return a binary string in the form of:

    "0001010\0"
     ^     ^
     |     |
    MSB   LSB

=head3 $rc = $bitmap->unfmt_binmask($str);

Given a bin mask string "0001010\0", convert to a bit string.

     ^     ^
     |     |
    MSB   LSB

=head3 $pos = $bitmap->get_bit_num($n);

Find position of the $n-th set bit (0 based, i.e., the first set bit is the 0-th) in $bitmap. Returns -1 if there are fewer than $n bits set.

=head3 $n = $bitmap->get_pos_num($pos);

Find the number of bits set minus one in $bitmap between bit positions [0 .. $pos]. Returns -1 if no bits are set between [0 .. $pos].

=head1 SEE ALSO

L

=head1 AUTHOR

This library was created by Hongjia Cao, E<lt>hjcao(AT)nudt.edu.cnE<gt> and Danny Auble, E<lt>da(AT)llnl.govE<gt>. It is distributed with SLURM.

=head1 COPYRIGHT AND LICENSE

This library is free software; you can redistribute it and/or modify it under the same terms as Perl itself, either Perl version 5.8.4 or, at your option, any later version of Perl 5 you may have available.

=cut

package Slurm::Constant;

use strict;
use warnings;
use Carp;

my %const;
my $got = 0;

no warnings 'portable';

sub _get_constants {
    seek(DATA, 0, 0);
    local $/ = '';	# paragraph mode
    local $_;
    while (<DATA>) {
        next unless /^=item\s+\*\s+(\S+)\s+(\S+)\s*$/;
        my ($name, $val) = ($1, $2);
        if ($val =~ /^0x/) {
            $val = hex($val);
        } else {
            $val = int($val);
        }
        $const{$name} = sub () { $val };
    }
    $got = 1;
}

sub import {
    my $pkg = shift;
    my $callpkg = caller(0);
    croak "Please use `use Slurm qw(:constant)' instead of `use Slurm::Constant'."
        unless $callpkg eq "Slurm";
    _get_constants() unless $got;
    {
        no strict "refs";
        my ($sym, $sub);
        while (($sym, $sub) = each(%const)) {
            *{$callpkg . "::$sym"} = $sub;
        }
    }
}

sub import2 {
    my $pkg = shift;
    my $callpkg = caller(0);
    croak "Please use `use Slurm qw(:constant)' instead of `use Slurm::Constant'."
        unless $callpkg eq "Slurm";
    my $main = caller(1);
    _get_constants() unless $got;
    {
        no strict "refs";
        my ($sym, $sub);
        while (($sym, $sub) = each(%const)) {
            *{$main . "::$sym"} = $sub;
        }
    }
}

1;

__DATA__

=head1 NAME

Slurm::Constant - Constants for use with Slurm

=head1 SYNOPSIS

    use Slurm qw(:constant);

    if ($rc != SLURM_SUCCESS) {
        print STDERR "action failed!\n";
    }

=head1 DESCRIPTION

This package exports constants for use with Slurm, including enumerations and defined macros. The constants are exported to package Slurm and to any package which does "use Slurm qw(:constant);".

=head1 EXPORTED CONSTANTS

=head2 DEFINED MACROS

=head3 Misc values

=over 2

=item * TRUE 1

=item * FALSE 0

=item * INFINITE 0xffffffff

=item * INFINITE64 0xffffffffffffffff

=item * NO_VAL 0xfffffffe

=item * NO_VAL64 0xfffffffffffffffe

=item * MAX_TASKS_PER_NODE 128

=item * SLURM_BATCH_SCRIPT 0xfffffffe

=back

=head3 Job state flags

=over 2

=item * JOB_STATE_BASE 0x00ff

=item * JOB_STATE_FLAGS 0xff00

=item * JOB_COMPLETING 0x8000

=item * JOB_CONFIGURING 0x4000

=item * JOB_RESIZING 0x2000

=item * READY_JOB_FATAL -2

=item * READY_JOB_ERROR -1

=item * READY_NODE_STATE 0x01

=item * READY_JOB_STATE 0x02

=back

=head3 Job mail notification

=over 2

=item * MAIL_JOB_BEGIN 0x0001

=item * MAIL_JOB_END 0x0002

=item * MAIL_JOB_FAIL 0x0004

=item * MAIL_JOB_REQUEUE 0x0008

=back

=head3 Offset for job's nice value

=over 2

=item * NICE_OFFSET 10000

=back

=head3 Partition state flags

=over 2

=item * PARTITION_SUBMIT 0x01

=item * PARTITION_SCHED 0x02

=item * PARTITION_DOWN 0x01

=item * PARTITION_UP 0x03

=item * PARTITION_DRAIN 0x02

=item * PARTITION_INACTIVE 0x00

=back

=head3 Open stdout/stderr mode

=over 2

=item * OPEN_MODE_APPEND 1

=item * OPEN_MODE_TRUNCATE 2

=back

=head3 Node state flags

=over 2

=item * NODE_STATE_BASE 0x000f

=item * NODE_STATE_FLAGS 0xfff0
=item * NODE_STATE_NET 0x0010 =item * NODE_STATE_RES 0x0020 =item * NODE_STATE_UNDRAIN 0x0040 =item * NODE_STATE_CLOUD 0x0080 =item * NODE_RESUME 0x0100 =item * NODE_STATE_DRAIN 0x0200 =item * NODE_STATE_COMPLETING 0x0400 =item * NODE_STATE_NO_RESPOND 0x0800 =item * NODE_STATE_POWER_SAVE 0x1000 =item * NODE_STATE_FAIL 0x2000 =item * NODE_STATE_POWER_UP 0x4000 =item * NODE_STATE_MAINT 0x8000 =back =head3 Size of the credential signature =over 2 =item * SLURM_SSL_SIGNATURE_LENGTH 128 =back =head3 show_flags of slurm_get_/slurm_load_ function calls =over 2 =item * SHOW_ALL 0x0001 =item * SHOW_DETAIL 0x0002 =back =head3 Consumerable resources parameters =over 2 =item * CR_CPU 0x0001 =item * CR_SOCKET 0x0002 =item * CR_CORE 0x0004 =item * CR_MEMORY 0x0010 =item * CR_ONE_TASK_PER_CORE 0x0100 =item * CR_CORE_DEFAULT_DIST_BLOCK 0x1000 =item * MEM_PER_CPU 0x80000000 =item * SHARED_FORCE 0x8000 =back =head3 Private data values =over 2 =item * PRIVATE_DATA_JOBS 0x0001 =item * PRIVATE_DATA_NODES 0x0002 =item * PRIVATE_DATA_PARTITIONS 0x0004 =item * PRIVATE_DATA_USAGE 0x0008 =item * PRIVATE_DATA_USERS 0x0010 =item * PRIVATE_DATA_ACCOUNTS 0x0020 =item * PRIVATE_DATA_RESERVATIONS 0x0040 =back =head3 Priority reset period =over 2 =item * PRIORITY_RESET_NONE 0x0000 =item * PRIORITY_RESET_NOW 0x0001 =item * PRIORITY_RESET_DAILY 0x0002 =item * PRIORITY_RESET_WEEKLY 0x0003 =item * PRIORITY_RESET_MONTHLY 0x0004 =item * PRIORITY_RESET_QUARTERLY 0x0005 =item * PRIORITY_RESET_YEARLY 0x0006 =back =head3 Process priority propagation =over 2 =item * PROP_PRIO_OFF 0x0000 =item * PROP_PRIO_ON 0x0001 =item * PROP_PRIO_NICER 0x0002 =back =head3 Partition state information =over 2 =item * PART_FLAG_DEFAULT 0x0001 =item * PART_FLAG_HIDDEN 0x0002 =item * PART_FLAG_NO_ROOT 0x0004 =item * PART_FLAG_ROOT_ONLY 0x0008 =item * PART_FLAG_DEFAULT_CLR 0x0100 =item * PART_FLAG_HIDDEN_CLR 0x0200 =item * PART_FLAG_NO_ROOT_CLR 0x0400 =item * PART_FLAG_ROOT_ONLY_CLR 0x0800 =back =head3 Reservation flags =over 2 
=item * RESERVE_FLAG_MAINT 0x0001 =item * RESERVE_FLAG_NO_MAINT 0x0002 =item * RESERVE_FLAG_DAILY 0x0004 =item * RESERVE_FLAG_NO_DAILY 0x0008 =item * RESERVE_FLAG_WEEKLY 0x0010 =item * RESERVE_FLAG_NO_WEEKLY 0x0020 =item * RESERVE_FLAG_IGN_JOBS 0x0040 =item * RESERVE_FLAG_NO_IGN_JOB 0x0080 =item * RESERVE_FLAG_OVERLAP 0x4000 =item * RESERVE_FLAG_SPEC_NODES 0x8000 =back =head3 Log debug flags =over 2 =item * DEBUG_FLAG_SELECT_TYPE 0x00000001 =item * DEBUG_FLAG_STEPS 0x00000002 =item * DEBUG_FLAG_TRIGGERS 0x00000004 =item * DEBUG_FLAG_CPU_BIND 0x00000008 =item * DEBUG_FLAG_WIKI 0x00000010 =item * DEBUG_FLAG_NO_CONF_HASH 0x00000020 =item * DEBUG_FLAG_GRES 0x00000040 =item * DEBUG_FLAG_BG_PICK 0x00000080 =item * DEBUG_FLAG_BG_WIRES 0x00000100 =item * DEBUG_FLAG_BG_ALGO 0x00000200 =item * DEBUG_FLAG_BG_ALGO_DEEP 0x00000400 =item * DEBUG_FLAG_PRIO 0x00000800 =item * DEBUG_FLAG_BACKFILL 0x00001000 =item * DEBUG_FLAG_GANG 0x00002000 =item * DEBUG_FLAG_RESERVATION 0x00004000 =back =head3 Group cache =over 2 =item * GROUP_FORCE 0x8000 =item * GROUP_CACHE 0x4000 =item * GROUP_TIME_MASK 0x0fff =back =head3 Preempt mode =over 2 =item * PREEMPT_MODE_OFF 0x0000 =item * PREEMPT_MODE_SUSPEND 0x0001 =item * PREEMPT_MODE_REQUEUE 0x0002 =item * PREEMPT_MODE_CHECKPOINT 0x0004 =item * PREEMPT_MODE_CANCEL 0x0008 =item * PREEMPT_MODE_GANG 0x8000 =back =head3 Trigger type =over 2 =item * TRIGGER_RES_TYPE_JOB 0x0001 =item * TRIGGER_RES_TYPE_NODE 0x0002 =item * TRIGGER_RES_TYPE_SLURMCTLD 0x0003 =item * TRIGGER_RES_TYPE_SLURMDBD 0x0004 =item * TRIGGER_RES_TYPE_DATABASE 0x0005 =item * TRIGGER_TYPE_UP 0x00000001 =item * TRIGGER_TYPE_DOWN 0x00000002 =item * TRIGGER_TYPE_FAIL 0x00000004 =item * TRIGGER_TYPE_TIME 0x00000008 =item * TRIGGER_TYPE_FINI 0x00000010 =item * TRIGGER_TYPE_RECONFIG 0x00000020 =item * TRIGGER_TYPE_BLOCK_ERR 0x00000040 =item * TRIGGER_TYPE_IDLE 0x00000080 =item * TRIGGER_TYPE_DRAINED 0x00000100 =item * TRIGGER_TYPE_PRI_CTLD_FAIL 0x00000200 =item * 
TRIGGER_TYPE_PRI_CTLD_RES_OP 0x00000400 =item * TRIGGER_TYPE_PRI_CTLD_RES_CTRL 0x00000800 =item * TRIGGER_TYPE_PRI_CTLD_ACCT_FULL 0x00001000 =item * TRIGGER_TYPE_BU_CTLD_FAIL 0x00002000 =item * TRIGGER_TYPE_BU_CTLD_RES_OP 0x00004000 =item * TRIGGER_TYPE_BU_CTLD_AS_CTRL 0x00008000 =item * TRIGGER_TYPE_PRI_DBD_FAIL 0x00010000 =item * TRIGGER_TYPE_PRI_DBD_RES_OP 0x00020000 =item * TRIGGER_TYPE_PRI_DB_FAIL 0x00040000 =item * TRIGGER_TYPE_PRI_DB_RES_OP 0x00080000 =back =head2 Enumerations =head3 Job states =over 2 =item * JOB_PENDING 0 =item * JOB_RUNNING 1 =item * JOB_SUSPENDED 2 =item * JOB_COMPLETE 3 =item * JOB_CANCELLED 4 =item * JOB_FAILED 5 =item * JOB_TIMEOUT 6 =item * JOB_NODE_FAIL 7 =item * JOB_PREEMPTED 8 =item * JOB_BOOT_FAIL 9 =item * JOB_END 10 =back =head3 Job state reason =over 2 =item * WAIT_NO_REASON 0 =item * WAIT_PRIORITY 1 =item * WAIT_DEPENDENCY 2 =item * WAIT_RESOURCES 3 =item * WAIT_PART_NODE_LIMIT 4 =item * WAIT_PART_TIME_LIMIT 5 =item * WAIT_PART_DOWN 6 =item * WAIT_PART_INACTIVE 7 =item * WAIT_HELD 8 =item * WAIT_TIME 9 =item * WAIT_LICENSES 10 =item * WAIT_ASSOC_JOB_LIMIT 11 =item * WAIT_ASSOC_RESOURCE_LIMIT 12 =item * WAIT_ASSOC_TIME_LIMIT 13 =item * WAIT_RESERVATION 14 =item * WAIT_NODE_NOT_AVAIL 15 =item * WAIT_HELD_USER 16 =item * WAIT_TBD2 17 =item * FAIL_DOWN_PARTITION 18 =item * FAIL_DOWN_NODE 19 =item * FAIL_BAD_CONSTRAINTS 20 =item * FAIL_SYSTEM 21 =item * FAIL_LAUNCH 22 =item * FAIL_EXIT_CODE 23 =item * FAIL_TIMEOUT 24 =item * FAIL_INACTIVE_LIMIT 25 =item * FAIL_ACCOUNT 26 =item * FAIL_QOS 27 =item * WAIT_QOS_THRES 28 =back =head3 Job account types =over 2 =item * JOB_START 0 =item * JOB_STEP 1 =item * JOB_SUSPEND 2 =item * JOB_TERMINATED 3 =back =head3 Connection type =over 2 =item * SELECT_MESH 0 =item * SELECT_TORUS 1 =item * SELECT_NAV 2 =item * SELECT_SMALL 3 =item * SELECT_HTC_S 4 =item * SELECT_HTC_D 5 =item * SELECT_HTC_V 6 =item * SELECT_HTC_L 7 =back =head3 Node use type =over 2 =item * SELECT_COPROCESSOR_MODE 0 =item * 
SELECT_VIRTUAL_NODE_MODE 1 =item * SELECT_NAV_MODE 2 =back =head3 Select jobdata type =over 2 =item * SELECT_JOBDATA_GEOMETRY 0 =item * SELECT_JOBDATA_ROTATE 1 =item * SELECT_JOBDATA_CONN_TYPE 2 =item * SELECT_JOBDATA_BLOCK_ID 3 =item * SELECT_JOBDATA_NODES 4 =item * SELECT_JOBDATA_IONODES 5 =item * SELECT_JOBDATA_NODE_CNT 6 =item * SELECT_JOBDATA_ALTERED 7 =item * SELECT_JOBDATA_BLRTS_IMAGE 8 =item * SELECT_JOBDATA_LINUX_IMAGE 9 =item * SELECT_JOBDATA_MLOADER_IMAGE 10 =item * SELECT_JOBDATA_RAMDISK_IMAGE 11 =item * SELECT_JOBDATA_REBOOT 12 =item * SELECT_JOBDATA_RESV_ID 13 =item * SELECT_JOBDATA_PTR 14 =back =head3 Select nodedata type =over 2 =item * SELECT_NODEDATA_BITMAP_SIZE 0 =item * SELECT_NODEDATA_SUBGRP_SIZE 1 =item * SELECT_NODEDATA_SUBCNT 2 =item * SELECT_NODEDATA_BITMAP 3 =item * SELECT_NODEDATA_STR 4 =item * SELECT_NODEDATA_PTR 5 =back =head3 Select print mode =over 2 =item * SELECT_PRINT_HEAD 0 =item * SELECT_PRINT_DATA 1 =item * SELECT_PRINT_MIXED 2 =item * SELECT_PRINT_MIXED_SHORT 3 =item * SELECT_PRINT_BG_ID 4 =item * SELECT_PRINT_NODES 5 =item * SELECT_PRINT_CONNECTION 6 =item * SELECT_PRINT_ROTATE 7 =item * SELECT_PRINT_GEOMETRY 8 =item * SELECT_PRINT_START 9 =item * SELECT_PRINT_BLRTS_IMAGE 10 =item * SELECT_PRINT_LINUX_IMAGE 11 =item * SELECT_PRINT_MLOADER_IMAGE 12 =item * SELECT_PRINT_RAMDISK_IMAGE 13 =item * SELECT_PRINT_REBOOT 14 =item * SELECT_PRINT_RESV_ID 15 =back =head3 Select node cnt =over 2 =item * SELECT_GET_NODE_SCALING 0 =item * SELECT_GET_NODE_CPU_CNT 1 =item * SELECT_GET_BP_CPU_CNT 2 =item * SELECT_APPLY_NODE_MIN_OFFSET 3 =item * SELECT_APPLY_NODE_MAX_OFFSET 4 =item * SELECT_SET_NODE_CNT 5 =item * SELECT_SET_BP_CNT 6 =back =head3 Jobacct data type =over 2 =item * JOBACCT_DATA_TOTAL 0 =item * JOBACCT_DATA_PIPE 1 =item * JOBACCT_DATA_RUSAGE 2 =item * JOBACCT_DATA_MAX_VSIZE 3 =item * JOBACCT_DATA_MAX_VSIZE_ID 4 =item * JOBACCT_DATA_TOT_VSIZE 5 =item * JOBACCT_DATA_MAX_RSS 6 =item * JOBACCT_DATA_MAX_RSS_ID 7 =item * 
JOBACCT_DATA_TOT_RSS 8 =item * JOBACCT_DATA_MAX_PAGES 9 =item * JOBACCT_DATA_MAX_PAGES_ID 10 =item * JOBACCT_DATA_TOT_PAGES 11 =item * JOBACCT_DATA_MIN_CPU 12 =item * JOBACCT_DATA_MIN_CPU_ID 13 =item * JOBACCT_DATA_TOT_CPU 14 =back =head3 TRES Records =over 2 =item * TRES_CPU 1 =item * TRES_MEM 2 =item * TRES_ENERGY 3 =item * TRES_NODE 4 =back =head3 Task distribution =over 2 =item * SLURM_DIST_CYCLIC 1 =item * SLURM_DIST_BLOCK 2 =item * SLURM_DIST_ARBITRARY 3 =item * SLURM_DIST_PLANE 4 =item * SLURM_DIST_CYCLIC_CYCLIC 5 =item * SLURM_DIST_CYCLIC_BLOCK 6 =item * SLURM_DIST_BLOCK_CYCLIC 7 =item * SLURM_DIST_BLOCK_BLOCK 8 =item * SLURM_NO_LLLP_DIST 9 =item * SLURM_DIST_UNKNOWN 10 =back =head3 CPU bind type =over 2 =item * CPU_BIND_VERBOSE 0x01 =item * CPU_BIND_TO_THREADS 0x02 =item * CPU_BIND_TO_CORES 0x04 =item * CPU_BIND_TO_SOCKETS 0x08 =item * CPU_BIND_TO_LDOMS 0x10 =item * CPU_BIND_NONE 0x20 =item * CPU_BIND_RANK 0x40 =item * CPU_BIND_MAP 0x80 =item * CPU_BIND_MASK 0x100 =item * CPU_BIND_LDRANK 0x200 =item * CPU_BIND_LDMAP 0x400 =item * CPU_BIND_LDMASK 0x800 =item * CPU_BIND_CPUSETS 0x8000 =back =head3 Memory bind type =over 2 =item * MEM_BIND_VERBOSE 0x01 =item * MEM_BIND_NONE 0x02 =item * MEM_BIND_RANK 0x04 =item * MEM_BIND_MAP 0x08 =item * MEM_BIND_MASK 0x10 =item * MEM_BIND_LOCAL 0x20 =back =head3 Node state =over 2 =item * NODE_STATE_UNKNOWN 0 =item * NODE_STATE_DOWN 1 =item * NODE_STATE_IDLE 2 =item * NODE_STATE_ALLOCATED 3 =item * NODE_STATE_ERROR 4 =item * NODE_STATE_MIXED 5 =item * NODE_STATE_FUTURE 6 =item * NODE_STATE_END 7 =back =head3 Ctx keys =over 2 =item * SLURM_STEP_CTX_STEPID 0 =item * SLURM_STEP_CTX_TASKS 1 =item * SLURM_STEP_CTX_TID 2 =item * SLURM_STEP_CTX_RESP 3 =item * SLURM_STEP_CTX_CRED 4 =item * SLURM_STEP_CTX_SWITCH_JOB 5 =item * SLURM_STEP_CTX_NUM_HOSTS 6 =item * SLURM_STEP_CTX_HOST 7 =item * SLURM_STEP_CTX_JOBID 8 =item * SLURM_STEP_CTX_USER_MANAGED_SOCKETS 9 =back =head2 SLURM ERRNO =head3 Defined macro error values =over 2 =item *
SLURM_SUCCESS 0 =item * SLURM_ERROR -1 =item * SLURM_FAILURE -1 =item * SLURM_SOCKET_ERROR -1 =item * SLURM_PROTOCOL_SUCCESS 0 =item * SLURM_PROTOCOL_ERROR -1 =back =head3 General Message error codes =over 2 =item * SLURM_UNEXPECTED_MSG_ERROR 1000 =item * SLURM_COMMUNICATIONS_CONNECTION_ERROR 1001 =item * SLURM_COMMUNICATIONS_SEND_ERROR 1002 =item * SLURM_COMMUNICATIONS_RECEIVE_ERROR 1003 =item * SLURM_COMMUNICATIONS_SHUTDOWN_ERROR 1004 =item * SLURM_PROTOCOL_VERSION_ERROR 1005 =item * SLURM_PROTOCOL_IO_STREAM_VERSION_ERROR 1006 =item * SLURM_PROTOCOL_AUTHENTICATION_ERROR 1007 =item * SLURM_PROTOCOL_INSANE_MSG_LENGTH 1008 =item * SLURM_MPI_PLUGIN_NAME_INVALID 1009 =item * SLURM_MPI_PLUGIN_PRELAUNCH_SETUP_FAILED 1010 =item * SLURM_PLUGIN_NAME_INVALID 1011 =item * SLURM_UNKNOWN_FORWARD_ADDR 1012 =back =head3 communication failures to/from slurmctld =over 2 =item * SLURMCTLD_COMMUNICATIONS_CONNECTION_ERROR 1800 =item * SLURMCTLD_COMMUNICATIONS_SEND_ERROR 1801 =item * SLURMCTLD_COMMUNICATIONS_RECEIVE_ERROR 1802 =item * SLURMCTLD_COMMUNICATIONS_SHUTDOWN_ERROR 1803 =back =head3 _info.c/communication layer RESPONSE_SLURM_RC message codes =over 2 =item * SLURM_NO_CHANGE_IN_DATA 1900 =back =head3 slurmctld error codes =over 2 =item * ESLURM_INVALID_PARTITION_NAME 2000 =item * ESLURM_DEFAULT_PARTITION_NOT_SET 2001 =item * ESLURM_ACCESS_DENIED 2002 =item * ESLURM_JOB_MISSING_REQUIRED_PARTITION_GROUP 2003 =item * ESLURM_REQUESTED_NODES_NOT_IN_PARTITION 2004 =item * ESLURM_TOO_MANY_REQUESTED_CPUS 2005 =item * ESLURM_INVALID_NODE_COUNT 2006 =item * ESLURM_ERROR_ON_DESC_TO_RECORD_COPY 2007 =item * ESLURM_JOB_MISSING_SIZE_SPECIFICATION 2008 =item * ESLURM_JOB_SCRIPT_MISSING 2009 =item * ESLURM_USER_ID_MISSING 2010 =item * ESLURM_DUPLICATE_JOB_ID 2011 =item * ESLURM_PATHNAME_TOO_LONG 2012 =item * ESLURM_NOT_TOP_PRIORITY 2013 =item * ESLURM_REQUESTED_NODE_CONFIG_UNAVAILABLE 2014 =item * ESLURM_REQUESTED_PART_CONFIG_UNAVAILABLE 2015 =item * ESLURM_NODES_BUSY 2016 =item * 
ESLURM_INVALID_JOB_ID 2017 =item * ESLURM_INVALID_NODE_NAME 2018 =item * ESLURM_WRITING_TO_FILE 2019 =item * ESLURM_TRANSITION_STATE_NO_UPDATE 2020 =item * ESLURM_ALREADY_DONE 2021 =item * ESLURM_INTERCONNECT_FAILURE 2022 =item * ESLURM_BAD_DIST 2023 =item * ESLURM_JOB_PENDING 2024 =item * ESLURM_BAD_TASK_COUNT 2025 =item * ESLURM_INVALID_JOB_CREDENTIAL 2026 =item * ESLURM_IN_STANDBY_MODE 2027 =item * ESLURM_INVALID_NODE_STATE 2028 =item * ESLURM_INVALID_FEATURE 2029 =item * ESLURM_INVALID_AUTHTYPE_CHANGE 2030 =item * ESLURM_INVALID_CHECKPOINT_TYPE_CHANGE 2031 =item * ESLURM_INVALID_SCHEDTYPE_CHANGE 2032 =item * ESLURM_INVALID_SELECTTYPE_CHANGE 2033 =item * ESLURM_INVALID_SWITCHTYPE_CHANGE 2034 =item * ESLURM_FRAGMENTATION 2035 =item * ESLURM_NOT_SUPPORTED 2036 =item * ESLURM_DISABLED 2037 =item * ESLURM_DEPENDENCY 2038 =item * ESLURM_BATCH_ONLY 2039 =item * ESLURM_TASKDIST_ARBITRARY_UNSUPPORTED 2040 =item * ESLURM_TASKDIST_REQUIRES_OVERCOMMIT 2041 =item * ESLURM_JOB_HELD 2042 =item * ESLURM_INVALID_CRYPTO_TYPE_CHANGE 2043 =item * ESLURM_INVALID_TASK_MEMORY 2044 =item * ESLURM_INVALID_ACCOUNT 2045 =item * ESLURM_INVALID_PARENT_ACCOUNT 2046 =item * ESLURM_SAME_PARENT_ACCOUNT 2047 =item * ESLURM_INVALID_LICENSES 2048 =item * ESLURM_NEED_RESTART 2049 =item * ESLURM_ACCOUNTING_POLICY 2050 =item * ESLURM_INVALID_TIME_LIMIT 2051 =item * ESLURM_RESERVATION_ACCESS 2052 =item * ESLURM_RESERVATION_INVALID 2053 =item * ESLURM_INVALID_TIME_VALUE 2054 =item * ESLURM_RESERVATION_BUSY 2055 =item * ESLURM_RESERVATION_NOT_USABLE 2056 =item * ESLURM_INVALID_WCKEY 2057 =item * ESLURM_RESERVATION_OVERLAP 2058 =item * ESLURM_PORTS_BUSY 2059 =item * ESLURM_PORTS_INVALID 2060 =item * ESLURM_PROLOG_RUNNING 2061 =item * ESLURM_NO_STEPS 2062 =item * ESLURM_INVALID_BLOCK_STATE 2063 =item * ESLURM_INVALID_BLOCK_LAYOUT 2064 =item * ESLURM_INVALID_BLOCK_NAME 2065 =item * ESLURM_INVALID_QOS 2066 =item * ESLURM_QOS_PREEMPTION_LOOP 2067 =item * ESLURM_NODE_NOT_AVAIL 2068 =item * 
ESLURM_INVALID_CPU_COUNT 2069 =item * ESLURM_PARTITION_NOT_AVAIL 2070 =item * ESLURM_CIRCULAR_DEPENDENCY 2071 =item * ESLURM_INVALID_GRES 2072 =item * ESLURM_JOB_NOT_PENDING 2073 =back =head3 switch specific error codes specific values defined in plugin module =over 2 =item * ESLURM_SWITCH_MIN 3000 =item * ESLURM_SWITCH_MAX 3099 =item * ESLURM_JOBCOMP_MIN 3100 =item * ESLURM_JOBCOMP_MAX 3199 =item * ESLURM_SCHED_MIN 3200 =item * ESLURM_SCHED_MAX 3299 =back =head3 slurmd error codes =over 2 =item * ESLRUMD_PIPE_ERROR_ON_TASK_SPAWN 4000 =item * ESLURMD_KILL_TASK_FAILED 4001 =item * ESLURMD_KILL_JOB_ALREADY_COMPLETE 4002 =item * ESLURMD_INVALID_ACCT_FREQ 4003 =item * ESLURMD_INVALID_JOB_CREDENTIAL 4004 =item * ESLURMD_UID_NOT_FOUND 4005 =item * ESLURMD_GID_NOT_FOUND 4006 =item * ESLURMD_CREDENTIAL_EXPIRED 4007 =item * ESLURMD_CREDENTIAL_REVOKED 4008 =item * ESLURMD_CREDENTIAL_REPLAYED 4009 =item * ESLURMD_CREATE_BATCH_DIR_ERROR 4010 =item * ESLURMD_MODIFY_BATCH_DIR_ERROR 4011 =item * ESLURMD_CREATE_BATCH_SCRIPT_ERROR 4012 =item * ESLURMD_MODIFY_BATCH_SCRIPT_ERROR 4013 =item * ESLURMD_SETUP_ENVIRONMENT_ERROR 4014 =item * ESLURMD_SHARED_MEMORY_ERROR 4015 =item * ESLURMD_SET_UID_OR_GID_ERROR 4016 =item * ESLURMD_SET_SID_ERROR 4017 =item * ESLURMD_CANNOT_SPAWN_IO_THREAD 4018 =item * ESLURMD_FORK_FAILED 4019 =item * ESLURMD_EXECVE_FAILED 4020 =item * ESLURMD_IO_ERROR 4021 =item * ESLURMD_PROLOG_FAILED 4022 =item * ESLURMD_EPILOG_FAILED 4023 =item * ESLURMD_SESSION_KILLED 4024 =item * ESLURMD_TOOMANYSTEPS 4025 =item * ESLURMD_STEP_EXISTS 4026 =item * ESLURMD_JOB_NOTRUNNING 4027 =item * ESLURMD_STEP_SUSPENDED 4028 =item * ESLURMD_STEP_NOTSUSPENDED 4029 =back =head3 slurmd errors in user batch job =over 2 =item * ESCRIPT_CHDIR_FAILED 4100 =item * ESCRIPT_OPEN_OUTPUT_FAILED 4101 =item * ESCRIPT_NON_ZERO_RETURN 4102 =back =head3 socket specific SLURM communications error =over 2 =item * SLURM_PROTOCOL_SOCKET_IMPL_ZERO_RECV_LENGTH 5000 =item * 
SLURM_PROTOCOL_SOCKET_IMPL_NEGATIVE_RECV_LENGTH 5001 =item * SLURM_PROTOCOL_SOCKET_IMPL_NOT_ALL_DATA_SENT 5002 =item * ESLURM_PROTOCOL_INCOMPLETE_PACKET 5003 =item * SLURM_PROTOCOL_SOCKET_IMPL_TIMEOUT 5004 =item * SLURM_PROTOCOL_SOCKET_ZERO_BYTES_SENT 5005 =back =head3 slurm_auth errors =over 2 =item * ESLURM_AUTH_CRED_INVALID 6000 =item * ESLURM_AUTH_FOPEN_ERROR 6001 =item * ESLURM_AUTH_NET_ERROR 6002 =item * ESLURM_AUTH_UNABLE_TO_SIGN 6003 =back =head3 accounting errors =over 2 =item * ESLURM_DB_CONNECTION 7000 =item * ESLURM_JOBS_RUNNING_ON_ASSOC 7001 =item * ESLURM_CLUSTER_DELETED 7002 =item * ESLURM_ONE_CHANGE 7003 =back =head1 SEE ALSO Slurm =head1 AUTHOR This library is created by Hongjia Cao, E<lt>hjcao(AT)nudt.edu.cnE<gt> and Danny Auble, E<lt>da(AT)llnl.govE<gt>. It is distributed with SLURM. =head1 COPYRIGHT AND LICENSE This library is free software; you can redistribute it and/or modify it under the same terms as Perl itself, either Perl version 5.8.4 or, at your option, any later version of Perl 5 you may have available. =cut slurm-slurm-15-08-7-1/contribs/perlapi/libslurm/perl/lib/Slurm/Hostlist.pm000066400000000000000000000056331265000126300264250ustar00rootroot00000000000000package Slurm::Hostlist; 1; __END__ =head1 NAME Slurm::Hostlist - Hostlist functions in libslurm =head1 SYNOPSIS use Slurm; $hostnames = "node1,node[2-5,12]"; $hl = Slurm::Hostlist::create($hostnames); $cnt = $hl->count; $hl->push("node21,node[27-34]"); while($host = $hl->shift()) { print $host, "\n"; } print $hl->ranged_string(), "\n"; =head1 DESCRIPTION The Slurm::Hostlist class is a wrapper of the hostlist functions in libslurm. This package is loaded and bootstrapped with package Slurm. =head1 METHODS =head2 $hl = Slurm::Hostlist::new($str); Create a new hostlist from a string representation. Returns an opaque hostlist object. This is a B. The string representation ($str) may contain one or more hostnames or bracketed hostlists separated by either `,' or whitespace.
A bracketed hostlist is denoted by a common prefix followed by a list of numeric ranges contained within brackets: e.g. "tux[0-5,12,20-25]". To support systems with 3-D topography, a rectangular prism may be described using two three digit numbers separated by "x": e.g. "bgl[123x456]". This selects all nodes between 1 and 4 inclusive in the first dimension, between 2 and 5 in the second, and between 3 and 6 in the third dimension for a total of 4*4*4=64 nodes. If $str is omitted, an empty hostlist is created and returned. =head2 $cnt = $hl->count(); Return the number of hosts in the hostlist. =head2 $pos = $hl->find($hostname); Searches hostlist $hl for the first host matching $hostname and returns position in list if found. Returns -1 if host is not found. =head2 $cnt = $hl->push($hosts); Push a string representation of hostnames onto a hostlist. The $hosts argument may take the same form as in create(). Returns the number of hostnames inserted into the list. =head2 $cnt = $hl->push_host($hostname); Push a single host onto the hostlist $hl. This function is more efficient than slurm_hostlist_push() for a single hostname, since the argument does not need to be checked for ranges. Return value is 1 for success, 0 for failure. =head2 $str = $hl->ranged_string(); Return the string representation of the hostlist $hl. ranged_string() will write a bracketed hostlist representation where possible. =head2 $host = $hl->shift(); Returns the string representation of the first host in the hostlist or `undef' if the hostlist is empty or there was an error allocating memory. The host is removed from the hostlist. =head2 $hl->uniq(); Sort the hostlist $hl and remove duplicate entries. =head1 SEE ALSO Slurm =head1 AUTHOR This library is created by Hongjia Cao, E<lt>hjcao(AT)nudt.edu.cnE<gt> and Danny Auble, E<lt>da(AT)llnl.govE<gt>. It is distributed with SLURM.
=head1 COPYRIGHT AND LICENSE This library is free software; you can redistribute it and/or modify it under the same terms as Perl itself, either Perl version 5.8.4 or, at your option, any later version of Perl 5 you may have available. =cut slurm-slurm-15-08-7-1/contribs/perlapi/libslurm/perl/lib/Slurm/Stepctx.pm000066400000000000000000000100141265000126300262330ustar00rootroot00000000000000package Slurm::Stepctx; 1; __END__ =head1 NAME Slurm::Stepctx - Step launching functions in libslurm =head1 SYNOPSIS use Slurm; $slurm = Slurm::new(); $params = {job_id => 1234, ...}; $ctx = $slurm->step_ctx_create($params); $rc = $ctx->launch({...}, {task_start => sub {...}, task_finish => sub {...} }); =head1 DESCRIPTION The Slurm::Stepctx class is a wrapper of the job step context and step launching functions in libslurm. This package is loaded and bootstrapped with package Slurm. =head1 METHODS =head2 STEP CONTEXT CREATION FUNCTIONS Please see L for step context creation functions. =head2 STEP CONTEXT MANIPULATION FUNCTIONS =head3 $rc = $ctx->get($ctx_key, ...); Get parameters from a job step context. =over 2 =item * INPUT $ctx_key: type of the parameter to get. Supported keys and the corresponding result data are: =over 2 =item * $rc = $ctx->get(SLURM_STEP_CTX_STEPID, $stepid); Get the created job step id. $stepid will be set to the step id number. =item * $rc = $ctx->get(SLURM_STEP_CTX_TASKS, $tasks); Get array of task count on each node. $tasks will be set to an array reference. =item * $rc = $ctx->get(SLURM_STEP_CTX_TID, $nodeid, $tids); Get array of task IDs for specified node. $nodeid specifies index of the node. $tids will be set to an array reference. =item * $rc = $ctx->get(SLURM_STEP_CTX_RESP, $resp); TODO: this is not exported. Get job step create response message. =item * $rc = $ctx->get(SLURM_STEP_CTX_CRED, $cred); Get credential of the created job step. $cred will be an opaque object blessed to "Slurm::slurm_cred_t".
=item * $rc = $ctx->get(SLURM_STEP_CTX_SWITCH_JOB, $switch_info); Get switch plugin specific info of the step. $switch_info will be an opaque object blessed to "Slurm::switch_jobinfo_t". =item * $rc = $ctx->get(SLURM_STEP_CTX_NUM_HOSTS, $num); Get number of nodes allocated to the job step. =item * $rc = $ctx->get(SLURM_STEP_CTX_HOST, $nodeid, $nodename); Get node name allocated to the job step. $nodeid specifies index of the node. =item * $rc = $ctx->get(SLURM_STEP_CTX_JOBID, $jobid); Get job ID of the job step. =item * $rc = $ctx->get(SLURM_STEP_CTX_USER_MANAGED_SOCKETS, $numtasks, $sockets); Get user managed I/O sockets. TODO: describe the parameters. =back =item * RET: error code. =back =head3 $rc = $ctx->daemon_per_node_hack($node_list, $node_cnt, $curr_task_num); Hack the step context to run a single process per node, regardless of the settings selected at Slurm::Stepctx::create() time. =over 2 =item * RET: error code. =back =head2 STEP TASK LAUNCHING FUNCTIONS =head3 $rc = $ctx->launch($params, $callbacks); Launch a parallel job step. =over 2 =item * IN $params: parameters of task launching, with structure of C. =item * IN $callbacks: callback functions, with structure of C. NOTE: the callback functions will be called in a thread different from the thread calling the C function. =item * RET: error code. =back =head3 $rc = $ctx->launch_wait_start(); Block until all tasks have started. =over 2 =item * RET: error code. =back =head3 $ctx->launch_wait_finish(); Block until all tasks have finished (or failed to start altogether). =head3 $ctx->launch_abort(); Abort an in-progress launch, or terminate the fully launched job step. Can be called from a signal handler. =head3 $ctx->launch_fwd_signal($signo); Forward a signal to all those nodes with running tasks. =over 2 =item * IN $signo: signal number. =back =head1 SEE ALSO Slurm =head1 AUTHOR This library is created by Hongjia Cao, E<lt>hjcao(AT)nudt.edu.cnE<gt> and Danny Auble, E<lt>da(AT)llnl.govE<gt>.
It is distributed with SLURM. =head1 COPYRIGHT AND LICENSE This library is free software; you can redistribute it and/or modify it under the same terms as Perl itself, either Perl version 5.8.4 or, at your option, any later version of Perl 5 you may have available. =cut slurm-slurm-15-08-7-1/contribs/perlapi/libslurm/perl/node.c000066400000000000000000000142721265000126300234760ustar00rootroot00000000000000/* * node.c - convert data between node related messages and perl HVs */ #include #include #include #include "ppport.h" #include #include "slurm-perl.h" #ifdef HAVE_BG /* These are just helper functions from slurm proper that don't get * exported regularly. Copied from src/common/slurm_protocol_defs.h. */ #define IS_NODE_ALLOCATED(_X) \ ((_X->node_state & NODE_STATE_BASE) == NODE_STATE_ALLOCATED) #define IS_NODE_COMPLETING(_X) \ (_X->node_state & NODE_STATE_COMPLETING) #endif /* * convert node_info_t to perl HV */ int node_info_to_hv(node_info_t *node_info, uint16_t node_scaling, HV *hv) { uint16_t err_cpus = 0, alloc_cpus = 0; #ifdef HAVE_BG int cpus_per_node = 1; if(node_scaling) cpus_per_node = node_info->cpus / node_scaling; #endif if(node_info->arch) STORE_FIELD(hv, node_info, arch, charp); STORE_FIELD(hv, node_info, boot_time, time_t); STORE_FIELD(hv, node_info, cores, uint16_t); STORE_FIELD(hv, node_info, cpu_load, uint32_t); STORE_FIELD(hv, node_info, cpus, uint16_t); if(node_info->features) STORE_FIELD(hv, node_info, features, charp); if(node_info->gres) STORE_FIELD(hv, node_info, gres, charp); if (node_info->name) STORE_FIELD(hv, node_info, name, charp); else { Perl_warn (aTHX_ "node name missing in node_info_t"); return -1; } STORE_FIELD(hv, node_info, node_state, uint32_t); if(node_info->os) STORE_FIELD(hv, node_info, os, charp); STORE_FIELD(hv, node_info, real_memory, uint32_t); if(node_info->reason) STORE_FIELD(hv, node_info, reason, charp); STORE_FIELD(hv, node_info, reason_time, time_t); STORE_FIELD(hv, node_info, reason_uid, uint32_t); 
STORE_FIELD(hv, node_info, slurmd_start_time, time_t); STORE_FIELD(hv, node_info, boards, uint16_t); STORE_FIELD(hv, node_info, sockets, uint16_t); STORE_FIELD(hv, node_info, threads, uint16_t); STORE_FIELD(hv, node_info, tmp_disk, uint32_t); slurm_get_select_nodeinfo(node_info->select_nodeinfo, SELECT_NODEDATA_SUBCNT, NODE_STATE_ALLOCATED, &alloc_cpus); #ifdef HAVE_BG if(!alloc_cpus && (IS_NODE_ALLOCATED(node_info) || IS_NODE_COMPLETING(node_info))) alloc_cpus = node_info->cpus; else alloc_cpus *= cpus_per_node; #endif slurm_get_select_nodeinfo(node_info->select_nodeinfo, SELECT_NODEDATA_SUBCNT, NODE_STATE_ERROR, &err_cpus); #ifdef HAVE_BG err_cpus *= cpus_per_node; #endif hv_store_uint16_t(hv, "alloc_cpus", alloc_cpus); hv_store_uint16_t(hv, "err_cpus", err_cpus); STORE_PTR_FIELD(hv, node_info, select_nodeinfo, "Slurm::dynamic_plugin_data_t"); STORE_FIELD(hv, node_info, weight, uint32_t); return 0; } /* * convert perl HV to node_info_t */ int hv_to_node_info(HV *hv, node_info_t *node_info) { memset(node_info, 0, sizeof(node_info_t)); FETCH_FIELD(hv, node_info, arch, charp, FALSE); FETCH_FIELD(hv, node_info, boot_time, time_t, TRUE); FETCH_FIELD(hv, node_info, cores, uint16_t, TRUE); FETCH_FIELD(hv, node_info, cpu_load, uint32_t, TRUE); FETCH_FIELD(hv, node_info, cpus, uint16_t, TRUE); FETCH_FIELD(hv, node_info, features, charp, FALSE); FETCH_FIELD(hv, node_info, gres, charp, FALSE); FETCH_FIELD(hv, node_info, name, charp, TRUE); FETCH_FIELD(hv, node_info, node_state, uint32_t, TRUE); FETCH_FIELD(hv, node_info, os, charp, FALSE); FETCH_FIELD(hv, node_info, real_memory, uint32_t, TRUE); FETCH_FIELD(hv, node_info, reason, charp, FALSE); FETCH_FIELD(hv, node_info, reason_time, time_t, TRUE); FETCH_FIELD(hv, node_info, reason_uid, uint32_t, TRUE); FETCH_FIELD(hv, node_info, slurmd_start_time, time_t, TRUE); FETCH_FIELD(hv, node_info, boards, uint16_t, TRUE); FETCH_FIELD(hv, node_info, sockets, uint16_t, TRUE); FETCH_FIELD(hv, node_info, threads, uint16_t, TRUE); 
FETCH_FIELD(hv, node_info, tmp_disk, uint32_t, TRUE); FETCH_FIELD(hv, node_info, weight, uint32_t, TRUE); FETCH_PTR_FIELD(hv, node_info, select_nodeinfo, "Slurm::dynamic_plugin_data_t", TRUE); return 0; } /* * convert node_info_msg_t to perl HV */ int node_info_msg_to_hv(node_info_msg_t *node_info_msg, HV *hv) { int i; HV *hv_info; AV *av; STORE_FIELD(hv, node_info_msg, last_update, time_t); STORE_FIELD(hv, node_info_msg, node_scaling, uint16_t); /* record_count implied in node_array */ av = newAV(); for(i = 0; i < node_info_msg->record_count; i ++) { if (!node_info_msg->node_array[i].name) continue; hv_info =newHV(); if (node_info_to_hv(node_info_msg->node_array + i, node_info_msg->node_scaling, hv_info) < 0) { SvREFCNT_dec((SV*)hv_info); SvREFCNT_dec((SV*)av); return -1; } av_store(av, i, newRV_noinc((SV*)hv_info)); } hv_store_sv(hv, "node_array", newRV_noinc((SV*)av)); return 0; } /* * convert perl HV to node_info_msg_t */ int hv_to_node_info_msg(HV *hv, node_info_msg_t *node_info_msg) { SV **svp; AV *av; int i, n; memset(node_info_msg, 0, sizeof(node_info_msg_t)); FETCH_FIELD(hv, node_info_msg, last_update, time_t, TRUE); FETCH_FIELD(hv, node_info_msg, node_scaling, uint16_t, TRUE); svp = hv_fetch(hv, "node_array", 10, FALSE); if (! (svp && SvROK(*svp) && SvTYPE(SvRV(*svp)) == SVt_PVAV)) { Perl_warn (aTHX_ "node_array is not an array reference in HV for node_info_msg_t"); return -1; } av = (AV*)SvRV(*svp); n = av_len(av) + 1; node_info_msg->record_count = n; node_info_msg->node_array = xmalloc(n * sizeof(node_info_t)); for (i = 0; i < n; i ++) { svp = av_fetch(av, i, FALSE); if (! 
(svp && SvROK(*svp) && SvTYPE(SvRV(*svp)) == SVt_PVHV)) { Perl_warn (aTHX_ "element %d in node_array is not valid", i); return -1; } if (hv_to_node_info((HV*)SvRV(*svp), &node_info_msg->node_array[i]) < 0) { Perl_warn (aTHX_ "failed to convert element %d in node_array", i); return -1; } } return 0; } /* * convert perl HV to update_node_msg_t */ int hv_to_update_node_msg(HV *hv, update_node_msg_t *update_msg) { slurm_init_update_node_msg(update_msg); FETCH_FIELD(hv, update_msg, node_addr, charp, FALSE); FETCH_FIELD(hv, update_msg, node_hostname, charp, FALSE); FETCH_FIELD(hv, update_msg, node_names, charp, TRUE); FETCH_FIELD(hv, update_msg, node_state, uint32_t, FALSE); FETCH_FIELD(hv, update_msg, reason, charp, FALSE); FETCH_FIELD(hv, update_msg, features, charp, FALSE); FETCH_FIELD(hv, update_msg, weight, uint32_t, FALSE); return 0; } slurm-slurm-15-08-7-1/contribs/perlapi/libslurm/perl/partition.c000066400000000000000000000162241265000126300245610ustar00rootroot00000000000000/* * partition.c - convert data between partition related messages and perl HVs */ #include #include #include #include "ppport.h" #include #include "slurm-perl.h" /* * convert partition_info_t to perl HV */ int partition_info_to_hv(partition_info_t *part_info, HV *hv) { if (part_info->allow_alloc_nodes) STORE_FIELD(hv, part_info, allow_alloc_nodes, charp); if (part_info->allow_groups) STORE_FIELD(hv, part_info, allow_groups, charp); if (part_info->alternate) STORE_FIELD(hv, part_info, alternate, charp); if (part_info->cr_type) STORE_FIELD(hv, part_info, cr_type, uint16_t); if (part_info->def_mem_per_cpu) STORE_FIELD(hv, part_info, def_mem_per_cpu, uint32_t); STORE_FIELD(hv, part_info, default_time, uint32_t); if (part_info->deny_accounts) STORE_FIELD(hv, part_info, deny_accounts, charp); if (part_info->deny_qos) STORE_FIELD(hv, part_info, deny_qos, charp); STORE_FIELD(hv, part_info, flags, uint16_t); if (part_info->grace_time) STORE_FIELD(hv, part_info, grace_time, uint32_t); if 
(part_info->max_cpus_per_node) STORE_FIELD(hv, part_info, max_cpus_per_node, uint32_t); if (part_info->max_mem_per_cpu) STORE_FIELD(hv, part_info, max_mem_per_cpu, uint32_t); STORE_FIELD(hv, part_info, max_nodes, uint32_t); STORE_FIELD(hv, part_info, max_share, uint16_t); STORE_FIELD(hv, part_info, max_time, uint32_t); STORE_FIELD(hv, part_info, min_nodes, uint32_t); if (part_info->name) STORE_FIELD(hv, part_info, name, charp); else { Perl_warn(aTHX_ "partition name missing in partition_info_t"); return -1; } /* no store for int pointers yet */ if (part_info->node_inx) { int j; AV* av = newAV(); for(j = 0; ; j += 2) { if(part_info->node_inx[j] == -1) break; av_store(av, j, newSVuv(part_info->node_inx[j])); av_store(av, j+1, newSVuv(part_info->node_inx[j+1])); } hv_store_sv(hv, "node_inx", newRV_noinc((SV*)av)); } if (part_info->nodes) STORE_FIELD(hv, part_info, nodes, charp); STORE_FIELD(hv, part_info, preempt_mode, uint16_t); STORE_FIELD(hv, part_info, priority, uint16_t); if (part_info->qos_char) STORE_FIELD(hv, part_info, qos_char, charp); STORE_FIELD(hv, part_info, state_up, uint16_t); STORE_FIELD(hv, part_info, total_cpus, uint32_t); STORE_FIELD(hv, part_info, total_nodes, uint32_t); return 0; } /* * convert perl HV to partition_info_t */ int hv_to_partition_info(HV *hv, partition_info_t *part_info) { SV **svp; AV *av; int i, n; memset(part_info, 0, sizeof(partition_info_t)); FETCH_FIELD(hv, part_info, allow_alloc_nodes, charp, FALSE); FETCH_FIELD(hv, part_info, allow_accounts, charp, FALSE); FETCH_FIELD(hv, part_info, allow_groups, charp, FALSE); FETCH_FIELD(hv, part_info, allow_qos, charp, FALSE); FETCH_FIELD(hv, part_info, alternate, charp, FALSE); FETCH_FIELD(hv, part_info, cr_type, uint16_t, FALSE); FETCH_FIELD(hv, part_info, def_mem_per_cpu, uint32_t, FALSE); FETCH_FIELD(hv, part_info, default_time, uint32_t, TRUE); FETCH_FIELD(hv, part_info, deny_accounts, charp, FALSE); FETCH_FIELD(hv, part_info, deny_qos, charp, FALSE); FETCH_FIELD(hv, part_info, 
flags, uint16_t, TRUE); FETCH_FIELD(hv, part_info, grace_time, uint32_t, FALSE); FETCH_FIELD(hv, part_info, max_cpus_per_node, uint32_t, FALSE); FETCH_FIELD(hv, part_info, max_mem_per_cpu, uint32_t, FALSE); FETCH_FIELD(hv, part_info, max_nodes, uint32_t, TRUE); FETCH_FIELD(hv, part_info, max_share, uint16_t, TRUE); FETCH_FIELD(hv, part_info, max_time, uint32_t, TRUE); FETCH_FIELD(hv, part_info, min_nodes, uint32_t, TRUE); FETCH_FIELD(hv, part_info, name, charp, TRUE); svp = hv_fetch(hv, "node_inx", 8, FALSE); if (svp && SvROK(*svp) && SvTYPE(SvRV(*svp)) == SVt_PVAV) { av = (AV*)SvRV(*svp); n = av_len(av) + 2; /* for trailing -1 */ part_info->node_inx = xmalloc(n * sizeof(int)); for (i = 0 ; i < n-1; i += 2) { part_info->node_inx[i] = (int)SvIV(*(av_fetch(av, i, FALSE))); part_info->node_inx[i+1] = (int)SvIV(*(av_fetch(av, i+1 ,FALSE))); } part_info->node_inx[n-1] = -1; } else { /* nothing to do */ } FETCH_FIELD(hv, part_info, nodes, charp, FALSE); FETCH_FIELD(hv, part_info, preempt_mode, uint16_t, TRUE); FETCH_FIELD(hv, part_info, priority, uint16_t, TRUE); FETCH_FIELD(hv, part_info, qos_char, charp, TRUE); FETCH_FIELD(hv, part_info, state_up, uint16_t, TRUE); FETCH_FIELD(hv, part_info, total_cpus, uint32_t, TRUE); FETCH_FIELD(hv, part_info, total_nodes, uint32_t, TRUE); return 0; } /* * convert partition_info_msg_t to perl HV */ int partition_info_msg_to_hv(partition_info_msg_t *part_info_msg, HV *hv) { int i; HV *hv_info; AV *av; STORE_FIELD(hv, part_info_msg, last_update, time_t); /* record_count implied in partition_array */ av = newAV(); for(i = 0; i < part_info_msg->record_count; i ++) { hv_info = newHV(); if (partition_info_to_hv(part_info_msg->partition_array + i, hv_info) < 0) { SvREFCNT_dec(hv_info); SvREFCNT_dec(av); return -1; } av_store(av, i, newRV_noinc((SV*)hv_info)); } hv_store_sv(hv, "partition_array", newRV_noinc((SV*)av)); return 0; } /* * convert perl HV to partition_info_msg_t */ int hv_to_partition_info_msg(HV *hv, partition_info_msg_t 
*part_info_msg) { SV **svp; AV *av; int i, n; memset(part_info_msg, 0, sizeof(partition_info_msg_t)); FETCH_FIELD(hv, part_info_msg, last_update, time_t, TRUE); svp = hv_fetch(hv, "partition_array", 15, TRUE); if (! (svp && SvROK(*svp) && SvTYPE(SvRV(*svp)) == SVt_PVAV)) { Perl_warn (aTHX_ "partition_array is not an array reference in HV for partition_info_msg_t"); return -1; } av = (AV*)SvRV(*svp); n = av_len(av) + 1; part_info_msg->record_count = n; part_info_msg->partition_array = xmalloc(n * sizeof(partition_info_t)); for (i = 0; i < n; i ++) { svp = av_fetch(av, i, FALSE); if (! (svp && SvROK(*svp) && SvTYPE(SvRV(*svp)) == SVt_PVHV)) { Perl_warn (aTHX_ "element %d in partition_array is not valid", i); return -1; } if (hv_to_partition_info((HV*)SvRV(*svp), &part_info_msg->partition_array[i]) < 0) { Perl_warn (aTHX_ "failed to convert element %d in partition_array", i); return -1; } } return 0; } /* * convert perl HV to update_part_msg_t */ int hv_to_update_part_msg(HV *hv, update_part_msg_t *part_msg) { slurm_init_part_desc_msg(part_msg); FETCH_FIELD(hv, part_msg, allow_alloc_nodes, charp, FALSE); FETCH_FIELD(hv, part_msg, allow_groups, charp, FALSE); FETCH_FIELD(hv, part_msg, default_time, uint32_t, FALSE); FETCH_FIELD(hv, part_msg, flags, uint16_t, FALSE); FETCH_FIELD(hv, part_msg, max_nodes, uint32_t, FALSE); FETCH_FIELD(hv, part_msg, max_share, uint16_t, FALSE); FETCH_FIELD(hv, part_msg, max_time, uint32_t, FALSE); FETCH_FIELD(hv, part_msg, min_nodes, uint32_t, FALSE); FETCH_FIELD(hv, part_msg, name, charp, TRUE); /*not used node_inx */ FETCH_FIELD(hv, part_msg, nodes, charp, FALSE); FETCH_FIELD(hv, part_msg, priority, uint16_t, FALSE); FETCH_FIELD(hv, part_msg, state_up, uint16_t, FALSE); FETCH_FIELD(hv, part_msg, total_cpus, uint32_t, FALSE); FETCH_FIELD(hv, part_msg, total_nodes, uint32_t, FALSE); return 0; } /* * convert perl HV to delete_part_msg_t */ int hv_to_delete_part_msg(HV *hv, delete_part_msg_t *delete_msg) { FETCH_FIELD(hv, delete_msg, name, 
charp, TRUE); return 0; } slurm-slurm-15-08-7-1/contribs/perlapi/libslurm/perl/ppport.h000066400000000000000000005254061265000126300241100ustar00rootroot00000000000000#if 0 <<'SKIP'; #endif /* ---------------------------------------------------------------------- ppport.h -- Perl/Pollution/Portability Version 3.19 Automatically created by Devel::PPPort running under perl 5.010000. Do NOT edit this file directly! -- Edit PPPort_pm.PL and the includes in parts/inc/ instead. Use 'perldoc ppport.h' to view the documentation below. ---------------------------------------------------------------------- SKIP =pod =head1 NAME ppport.h - Perl/Pollution/Portability version 3.19 =head1 SYNOPSIS perl ppport.h [options] [source files] Searches current directory for files if no [source files] are given --help show short help --version show version --patch=file write one patch file with changes --copy=suffix write changed copies with suffix --diff=program use diff program and options --compat-version=version provide compatibility with Perl version --cplusplus accept C++ comments --quiet don't output anything except fatal errors --nodiag don't show diagnostics --nohints don't show hints --nochanges don't suggest changes --nofilter don't filter input files --strip strip all script and doc functionality from ppport.h --list-provided list provided API --list-unsupported list unsupported API --api-info=name show Perl API portability information =head1 COMPATIBILITY This version of F is designed to support operation with Perl installations back to 5.003, and has been tested up to 5.10.0. =head1 OPTIONS =head2 --help Display a brief usage summary. =head2 --version Display the version of F. =head2 --patch=I If this option is given, a single patch file will be created if any changes are suggested. This requires a working diff program to be installed on your system. 
=head2 --copy=I If this option is given, a copy of each file will be saved with the given suffix that contains the suggested changes. This does not require any external programs. Note that this does not automagically add a dot between the original filename and the suffix. If you want the dot, you have to include it in the option argument. If neither C<--patch> or C<--copy> are given, the default is to simply print the diffs for each file. This requires either C or a C program to be installed. =head2 --diff=I Manually set the diff program and options to use. The default is to use C, when installed, and output unified context diffs. =head2 --compat-version=I Tell F to check for compatibility with the given Perl version. The default is to check for compatibility with Perl version 5.003. You can use this option to reduce the output of F if you intend to be backward compatible only down to a certain Perl version. =head2 --cplusplus Usually, F will detect C++ style comments and replace them with C style comments for portability reasons. Using this option instructs F to leave C++ comments untouched. =head2 --quiet Be quiet. Don't print anything except fatal errors. =head2 --nodiag Don't output any diagnostic messages. Only portability alerts will be printed. =head2 --nohints Don't output any hints. Hints often contain useful portability notes. Warnings will still be displayed. =head2 --nochanges Don't suggest any changes. Only give diagnostic output and hints unless these are also deactivated. =head2 --nofilter Don't filter the list of input files. By default, files not looking like source code (i.e. not *.xs, *.c, *.cc, *.cpp or *.h) are skipped. =head2 --strip Strip all script and documentation functionality from F. This reduces the size of F dramatically and may be useful if you want to include F in smaller modules without increasing their distribution size too much.
The stripped F will have a C<--unstrip> option that allows you to undo the stripping, but only if an appropriate C module is installed. =head2 --list-provided Lists the API elements for which compatibility is provided by F. Also lists if it must be explicitly requested, if it has dependencies, and if there are hints or warnings for it. =head2 --list-unsupported Lists the API elements that are known not to be supported by F and below which version of Perl they probably won't be available or work. =head2 --api-info=I Show portability information for API elements matching I. If I is surrounded by slashes, it is interpreted as a regular expression. =head1 DESCRIPTION In order for a Perl extension (XS) module to be as portable as possible across differing versions of Perl itself, certain steps need to be taken. =over 4 =item * Including this header is the first major one. This alone will give you access to a large part of the Perl API that hasn't been available in earlier Perl releases. Use perl ppport.h --list-provided to see which API elements are provided by ppport.h. =item * You should avoid using deprecated parts of the API. For example, using global Perl variables without the C prefix is deprecated. Also, some API functions used to have a C prefix. Using this form is also deprecated. You can safely use the supported API, as F will provide wrappers for older Perl versions. =item * If you use one of a few functions or variables that were not present in earlier versions of Perl, and that can't be provided using a macro, you have to explicitly request support for these functions by adding one or more C<#define>s in your source code before the inclusion of F. These functions or variables will be marked C in the list shown by C<--list-provided>. Depending on whether your module has a single or multiple files that use such functions or variables, you want either C or global variants.
For a C<static> function or variable (used only in a single source
file), use:

    #define NEED_function
    #define NEED_variable

For a global function or variable (used in multiple source files),
use:

    #define NEED_function_GLOBAL
    #define NEED_variable_GLOBAL

Note that you mustn't have more than one global request for the
same function or variable in your project.

  Function / Variable       Static Request               Global Request
  -----------------------------------------------------------------------------------------
  PL_parser                 NEED_PL_parser               NEED_PL_parser_GLOBAL
  PL_signals                NEED_PL_signals              NEED_PL_signals_GLOBAL
  eval_pv()                 NEED_eval_pv                 NEED_eval_pv_GLOBAL
  grok_bin()                NEED_grok_bin                NEED_grok_bin_GLOBAL
  grok_hex()                NEED_grok_hex                NEED_grok_hex_GLOBAL
  grok_number()             NEED_grok_number             NEED_grok_number_GLOBAL
  grok_numeric_radix()      NEED_grok_numeric_radix      NEED_grok_numeric_radix_GLOBAL
  grok_oct()                NEED_grok_oct                NEED_grok_oct_GLOBAL
  load_module()             NEED_load_module             NEED_load_module_GLOBAL
  my_snprintf()             NEED_my_snprintf             NEED_my_snprintf_GLOBAL
  my_sprintf()              NEED_my_sprintf              NEED_my_sprintf_GLOBAL
  my_strlcat()              NEED_my_strlcat              NEED_my_strlcat_GLOBAL
  my_strlcpy()              NEED_my_strlcpy              NEED_my_strlcpy_GLOBAL
  newCONSTSUB()             NEED_newCONSTSUB             NEED_newCONSTSUB_GLOBAL
  newRV_noinc()             NEED_newRV_noinc             NEED_newRV_noinc_GLOBAL
  newSV_type()              NEED_newSV_type              NEED_newSV_type_GLOBAL
  newSVpvn_flags()          NEED_newSVpvn_flags          NEED_newSVpvn_flags_GLOBAL
  newSVpvn_share()          NEED_newSVpvn_share          NEED_newSVpvn_share_GLOBAL
  pv_display()              NEED_pv_display              NEED_pv_display_GLOBAL
  pv_escape()               NEED_pv_escape               NEED_pv_escape_GLOBAL
  pv_pretty()               NEED_pv_pretty               NEED_pv_pretty_GLOBAL
  sv_2pv_flags()            NEED_sv_2pv_flags            NEED_sv_2pv_flags_GLOBAL
  sv_2pvbyte()              NEED_sv_2pvbyte              NEED_sv_2pvbyte_GLOBAL
  sv_catpvf_mg()            NEED_sv_catpvf_mg            NEED_sv_catpvf_mg_GLOBAL
  sv_catpvf_mg_nocontext()  NEED_sv_catpvf_mg_nocontext  NEED_sv_catpvf_mg_nocontext_GLOBAL
  sv_pvn_force_flags()      NEED_sv_pvn_force_flags      NEED_sv_pvn_force_flags_GLOBAL
  sv_setpvf_mg()            NEED_sv_setpvf_mg            NEED_sv_setpvf_mg_GLOBAL
  sv_setpvf_mg_nocontext()  NEED_sv_setpvf_mg_nocontext  NEED_sv_setpvf_mg_nocontext_GLOBAL
  vload_module()            NEED_vload_module            NEED_vload_module_GLOBAL
  vnewSVpvf()               NEED_vnewSVpvf               NEED_vnewSVpvf_GLOBAL
  warner()                  NEED_warner                  NEED_warner_GLOBAL

To avoid namespace conflicts, you can change the namespace of the
explicitly exported functions / variables using the
C<DPPP_NAMESPACE> macro. Just C<#define> the macro before including
C<ppport.h>:

    #define DPPP_NAMESPACE MyOwnNamespace_
    #include "ppport.h"

The default namespace is C<DPPP_>.

=back

The good thing is that most of the above can be checked by running
F<ppport.h> on your source code. See the next section for
details.

=head1 EXAMPLES

To verify whether F<ppport.h> is needed for your module, whether you
should make any changes to your code, and whether any special defines
should be used, F<ppport.h> can be run as a Perl script to check your
source code. Simply say:

    perl ppport.h

The result will usually be a list of patches suggesting changes
that should at least be acceptable, if not necessarily the most
efficient solution, or a fix for all possible problems.

If you know that your XS module uses features only available in
newer Perl releases, if you're aware that it uses C++ comments,
and if you want all suggestions as a single patch file, you could
use something like this:

    perl ppport.h --compat-version=5.6.0 --cplusplus --patch=test.diff

If you only want your code to be scanned without any suggestions
for changes, use:

    perl ppport.h --nochanges

You can specify a different C<diff> program or options, using
the C<--diff> option:

    perl ppport.h --diff='diff -C 10'

This would output context diffs with 10 lines of context.
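Putting the pieces from the DESCRIPTION section together, a typical XS
source file arranges the C<NEED_> requests, the standard Perl headers,
and the F<ppport.h> inclusion as in the following sketch. This is only
an illustration: C<MyModule> is a hypothetical module name, and the
C<my_snprintf> request stands in for whichever C<explicit> elements
your module actually uses.

    /* Sketch of an XS source file using ppport.h.
     * "MyModule" is a hypothetical name; NEED_my_snprintf is one
     * example of explicitly requesting an "explicit" API element. */
    #define NEED_my_snprintf        /* must come before ppport.h */

    #include "EXTERN.h"
    #include "perl.h"
    #include "XSUB.h"
    #include "ppport.h"             /* after the standard Perl headers */

    MODULE = MyModule    PACKAGE = MyModule

    void
    greet()
        CODE:
            {
                char buf[32];
                /* my_snprintf() is now usable even on old Perls */
                my_snprintf(buf, sizeof(buf), "hello from %s", "XS");
                PerlIO_printf(PerlIO_stdout(), "%s\n", buf);
            }

If the same element were used from several source files, the request
would instead be C<NEED_my_snprintf_GLOBAL> in exactly one of them and
plain inclusion of F<ppport.h> in the others.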
If you want to create patched copies of your files instead, use:

    perl ppport.h --copy=.new

To display portability information for the C<newSVpvn> function,
use:

    perl ppport.h --api-info=newSVpvn

Since the argument to C<--api-info> can be a regular expression,
you can use

    perl ppport.h --api-info=/_nomg$/

to display portability information for all C<_nomg> functions or

    perl ppport.h --api-info=/./

to display information for all known API elements.

=head1 BUGS

If this version of F<ppport.h> is causing failure during
the compilation of this module, please check if newer versions
of either this module or C<Devel::PPPort> are available on CPAN
before sending a bug report.

If F<ppport.h> was generated using the latest version of
C<Devel::PPPort> and is causing failure of this module, please
file a bug report using the CPAN Request Tracker at
L<http://rt.cpan.org/>.

Please include the following information:

=over 4

=item 1.

The complete output from running "perl -V"

=item 2.

This file.

=item 3.

The name and version of the module you were trying to build.

=item 4.

A full log of the build that failed.

=item 5.

Any other information that you think could be relevant.

=back

For the latest version of this code, please get the
C<Devel::PPPort> module from CPAN.

=head1 COPYRIGHT

Version 3.x, Copyright (c) 2004-2009, Marcus Holland-Moritz.

Version 2.x, Copyright (C) 2001, Paul Marquess.

Version 1.x, Copyright (C) 1999, Kenneth Albanowski.

This program is free software; you can redistribute it and/or
modify it under the same terms as Perl itself.

=head1 SEE ALSO

See L<Devel::PPPort>.

=cut

use strict;

# Disable broken TRIE-optimization
BEGIN { eval '${^RE_TRIE_MAXBUF} = -1' if $] >= 5.009004 && $] <= 5.009005 }

my $VERSION = 3.19;

my %opt = (
  quiet     => 0,
  diag      => 1,
  hints     => 1,
  changes   => 1,
  cplusplus => 0,
  filter    => 1,
  strip     => 0,
  version   => 0,
);

my($ppport) = $0 =~ /([\w.]+)$/;
my $LF = '(?:\r\n|[\r\n])';  # line feed
my $HS = "[ \t]";            # horizontal whitespace

# Never use C comments in this file!
my $ccs  = '/'.'*';
my $cce  = '*'.'/';
my $rccs = quotemeta $ccs;
my $rcce = quotemeta $cce;

eval {
  require Getopt::Long;
  Getopt::Long::GetOptions(\%opt, qw(
    help quiet diag! filter! hints! changes! cplusplus strip version
    patch=s copy=s diff=s compat-version=s
    list-provided list-unsupported api-info=s
  )) or usage();
};

if ($@ and grep /^-/, @ARGV) {
  usage() if "@ARGV" =~ /^--?h(?:elp)?$/;
  die "Getopt::Long not found. Please don't use any options.\n";
}

if ($opt{version}) {
  print "This is $0 $VERSION.\n";
  exit 0;
}

usage() if $opt{help};
strip() if $opt{strip};

if (exists $opt{'compat-version'}) {
  my($r,$v,$s) = eval { parse_version($opt{'compat-version'}) };
  if ($@) {
    die "Invalid version number format: '$opt{'compat-version'}'\n";
  }
  die "Only Perl 5 is supported\n" if $r != 5;
  die "Invalid version number: $opt{'compat-version'}\n" if $v >= 1000 || $s >= 1000;
  $opt{'compat-version'} = sprintf "%d.%03d%03d", $r, $v, $s;
}
else {
  $opt{'compat-version'} = 5;
}

my %API = map { /^(\w+)\|([^|]*)\|([^|]*)\|(\w*)$/
                ? ( $1 => {
                      ($2                  ? ( base     => $2 ) : ()),
                      ($3                  ? ( todo     => $3 ) : ()),
                      (index($4, 'v') >= 0 ? ( varargs  => 1  ) : ()),
                      (index($4, 'p') >= 0 ? ( provided => 1  ) : ()),
                      (index($4, 'n') >= 0 ?
( nothxarg => 1 ) : ()), } ) : die "invalid spec: $_" } qw( AvFILLp|5.004050||p AvFILL||| CLASS|||n CPERLscope|5.005000||p CX_CURPAD_SAVE||| CX_CURPAD_SV||| CopFILEAV|5.006000||p CopFILEGV_set|5.006000||p CopFILEGV|5.006000||p CopFILESV|5.006000||p CopFILE_set|5.006000||p CopFILE|5.006000||p CopSTASHPV_set|5.006000||p CopSTASHPV|5.006000||p CopSTASH_eq|5.006000||p CopSTASH_set|5.006000||p CopSTASH|5.006000||p CopyD|5.009002||p Copy||| CvPADLIST||| CvSTASH||| CvWEAKOUTSIDE||| DEFSV_set|5.011000||p DEFSV|5.004050||p END_EXTERN_C|5.005000||p ENTER||| ERRSV|5.004050||p EXTEND||| EXTERN_C|5.005000||p F0convert|||n FREETMPS||| GIMME_V||5.004000|n GIMME|||n GROK_NUMERIC_RADIX|5.007002||p G_ARRAY||| G_DISCARD||| G_EVAL||| G_METHOD|5.006001||p G_NOARGS||| G_SCALAR||| G_VOID||5.004000| GetVars||| GvSVn|5.009003||p GvSV||| Gv_AMupdate||| HEf_SVKEY||5.004000| HeHASH||5.004000| HeKEY||5.004000| HeKLEN||5.004000| HePV||5.004000| HeSVKEY_force||5.004000| HeSVKEY_set||5.004000| HeSVKEY||5.004000| HeUTF8||5.011000| HeVAL||5.004000| HvNAMELEN_get|5.009003||p HvNAME_get|5.009003||p HvNAME||| INT2PTR|5.006000||p IN_LOCALE_COMPILETIME|5.007002||p IN_LOCALE_RUNTIME|5.007002||p IN_LOCALE|5.007002||p IN_PERL_COMPILETIME|5.008001||p IS_NUMBER_GREATER_THAN_UV_MAX|5.007002||p IS_NUMBER_INFINITY|5.007002||p IS_NUMBER_IN_UV|5.007002||p IS_NUMBER_NAN|5.007003||p IS_NUMBER_NEG|5.007002||p IS_NUMBER_NOT_INT|5.007002||p IVSIZE|5.006000||p IVTYPE|5.006000||p IVdf|5.006000||p LEAVE||| LVRET||| MARK||| MULTICALL||5.011000| MY_CXT_CLONE|5.009002||p MY_CXT_INIT|5.007003||p MY_CXT|5.007003||p MoveD|5.009002||p Move||| NOOP|5.005000||p NUM2PTR|5.006000||p NVTYPE|5.006000||p NVef|5.006001||p NVff|5.006001||p NVgf|5.006001||p Newxc|5.009003||p Newxz|5.009003||p Newx|5.009003||p Nullav||| Nullch||| Nullcv||| Nullhv||| Nullsv||| ORIGMARK||| PAD_BASE_SV||| PAD_CLONE_VARS||| PAD_COMPNAME_FLAGS||| PAD_COMPNAME_GEN_set||| PAD_COMPNAME_GEN||| PAD_COMPNAME_OURSTASH||| PAD_COMPNAME_PV||| PAD_COMPNAME_TYPE||| 
PAD_DUP||| PAD_RESTORE_LOCAL||| PAD_SAVE_LOCAL||| PAD_SAVE_SETNULLPAD||| PAD_SETSV||| PAD_SET_CUR_NOSAVE||| PAD_SET_CUR||| PAD_SVl||| PAD_SV||| PERLIO_FUNCS_CAST|5.009003||p PERLIO_FUNCS_DECL|5.009003||p PERL_ABS|5.008001||p PERL_BCDVERSION|5.011000||p PERL_GCC_BRACE_GROUPS_FORBIDDEN|5.008001||p PERL_HASH|5.004000||p PERL_INT_MAX|5.004000||p PERL_INT_MIN|5.004000||p PERL_LONG_MAX|5.004000||p PERL_LONG_MIN|5.004000||p PERL_MAGIC_arylen|5.007002||p PERL_MAGIC_backref|5.007002||p PERL_MAGIC_bm|5.007002||p PERL_MAGIC_collxfrm|5.007002||p PERL_MAGIC_dbfile|5.007002||p PERL_MAGIC_dbline|5.007002||p PERL_MAGIC_defelem|5.007002||p PERL_MAGIC_envelem|5.007002||p PERL_MAGIC_env|5.007002||p PERL_MAGIC_ext|5.007002||p PERL_MAGIC_fm|5.007002||p PERL_MAGIC_glob|5.011000||p PERL_MAGIC_isaelem|5.007002||p PERL_MAGIC_isa|5.007002||p PERL_MAGIC_mutex|5.011000||p PERL_MAGIC_nkeys|5.007002||p PERL_MAGIC_overload_elem|5.007002||p PERL_MAGIC_overload_table|5.007002||p PERL_MAGIC_overload|5.007002||p PERL_MAGIC_pos|5.007002||p PERL_MAGIC_qr|5.007002||p PERL_MAGIC_regdata|5.007002||p PERL_MAGIC_regdatum|5.007002||p PERL_MAGIC_regex_global|5.007002||p PERL_MAGIC_shared_scalar|5.007003||p PERL_MAGIC_shared|5.007003||p PERL_MAGIC_sigelem|5.007002||p PERL_MAGIC_sig|5.007002||p PERL_MAGIC_substr|5.007002||p PERL_MAGIC_sv|5.007002||p PERL_MAGIC_taint|5.007002||p PERL_MAGIC_tiedelem|5.007002||p PERL_MAGIC_tiedscalar|5.007002||p PERL_MAGIC_tied|5.007002||p PERL_MAGIC_utf8|5.008001||p PERL_MAGIC_uvar_elem|5.007003||p PERL_MAGIC_uvar|5.007002||p PERL_MAGIC_vec|5.007002||p PERL_MAGIC_vstring|5.008001||p PERL_PV_ESCAPE_ALL|5.009004||p PERL_PV_ESCAPE_FIRSTCHAR|5.009004||p PERL_PV_ESCAPE_NOBACKSLASH|5.009004||p PERL_PV_ESCAPE_NOCLEAR|5.009004||p PERL_PV_ESCAPE_QUOTE|5.009004||p PERL_PV_ESCAPE_RE|5.009005||p PERL_PV_ESCAPE_UNI_DETECT|5.009004||p PERL_PV_ESCAPE_UNI|5.009004||p PERL_PV_PRETTY_DUMP|5.009004||p PERL_PV_PRETTY_ELLIPSES|5.010000||p PERL_PV_PRETTY_LTGT|5.009004||p 
PERL_PV_PRETTY_NOCLEAR|5.010000||p PERL_PV_PRETTY_QUOTE|5.009004||p PERL_PV_PRETTY_REGPROP|5.009004||p PERL_QUAD_MAX|5.004000||p PERL_QUAD_MIN|5.004000||p PERL_REVISION|5.006000||p PERL_SCAN_ALLOW_UNDERSCORES|5.007003||p PERL_SCAN_DISALLOW_PREFIX|5.007003||p PERL_SCAN_GREATER_THAN_UV_MAX|5.007003||p PERL_SCAN_SILENT_ILLDIGIT|5.008001||p PERL_SHORT_MAX|5.004000||p PERL_SHORT_MIN|5.004000||p PERL_SIGNALS_UNSAFE_FLAG|5.008001||p PERL_SUBVERSION|5.006000||p PERL_SYS_INIT3||5.006000| PERL_SYS_INIT||| PERL_SYS_TERM||5.011000| PERL_UCHAR_MAX|5.004000||p PERL_UCHAR_MIN|5.004000||p PERL_UINT_MAX|5.004000||p PERL_UINT_MIN|5.004000||p PERL_ULONG_MAX|5.004000||p PERL_ULONG_MIN|5.004000||p PERL_UNUSED_ARG|5.009003||p PERL_UNUSED_CONTEXT|5.009004||p PERL_UNUSED_DECL|5.007002||p PERL_UNUSED_VAR|5.007002||p PERL_UQUAD_MAX|5.004000||p PERL_UQUAD_MIN|5.004000||p PERL_USE_GCC_BRACE_GROUPS|5.009004||p PERL_USHORT_MAX|5.004000||p PERL_USHORT_MIN|5.004000||p PERL_VERSION|5.006000||p PL_DBsignal|5.005000||p PL_DBsingle|||pn PL_DBsub|||pn PL_DBtrace|||pn PL_Sv|5.005000||p PL_bufend|5.011000||p PL_bufptr|5.011000||p PL_compiling|5.004050||p PL_copline|5.011000||p PL_curcop|5.004050||p PL_curstash|5.004050||p PL_debstash|5.004050||p PL_defgv|5.004050||p PL_diehook|5.004050||p PL_dirty|5.004050||p PL_dowarn|||pn PL_errgv|5.004050||p PL_error_count|5.011000||p PL_expect|5.011000||p PL_hexdigit|5.005000||p PL_hints|5.005000||p PL_in_my_stash|5.011000||p PL_in_my|5.011000||p PL_last_in_gv|||n PL_laststatval|5.005000||p PL_lex_state|5.011000||p PL_lex_stuff|5.011000||p PL_linestr|5.011000||p PL_modglobal||5.005000|n PL_na|5.004050||pn PL_no_modify|5.006000||p PL_ofsgv|||n PL_parser|5.009005||p PL_perl_destruct_level|5.004050||p PL_perldb|5.004050||p PL_ppaddr|5.006000||p PL_rsfp_filters|5.004050||p PL_rsfp|5.004050||p PL_rs|||n PL_signals|5.008001||p PL_stack_base|5.004050||p PL_stack_sp|5.004050||p PL_statcache|5.005000||p PL_stdingv|5.004050||p PL_sv_arenaroot|5.004050||p PL_sv_no|5.004050||pn 
PL_sv_undef|5.004050||pn PL_sv_yes|5.004050||pn PL_tainted|5.004050||p PL_tainting|5.004050||p PL_tokenbuf|5.011000||p POP_MULTICALL||5.011000| POPi|||n POPl|||n POPn|||n POPpbytex||5.007001|n POPpx||5.005030|n POPp|||n POPs|||n PTR2IV|5.006000||p PTR2NV|5.006000||p PTR2UV|5.006000||p PTR2nat|5.009003||p PTR2ul|5.007001||p PTRV|5.006000||p PUSHMARK||| PUSH_MULTICALL||5.011000| PUSHi||| PUSHmortal|5.009002||p PUSHn||| PUSHp||| PUSHs||| PUSHu|5.004000||p PUTBACK||| PerlIO_clearerr||5.007003| PerlIO_close||5.007003| PerlIO_context_layers||5.009004| PerlIO_eof||5.007003| PerlIO_error||5.007003| PerlIO_fileno||5.007003| PerlIO_fill||5.007003| PerlIO_flush||5.007003| PerlIO_get_base||5.007003| PerlIO_get_bufsiz||5.007003| PerlIO_get_cnt||5.007003| PerlIO_get_ptr||5.007003| PerlIO_read||5.007003| PerlIO_seek||5.007003| PerlIO_set_cnt||5.007003| PerlIO_set_ptrcnt||5.007003| PerlIO_setlinebuf||5.007003| PerlIO_stderr||5.007003| PerlIO_stdin||5.007003| PerlIO_stdout||5.007003| PerlIO_tell||5.007003| PerlIO_unread||5.007003| PerlIO_write||5.007003| Perl_signbit||5.009005|n PoisonFree|5.009004||p PoisonNew|5.009004||p PoisonWith|5.009004||p Poison|5.008000||p RETVAL|||n Renewc||| Renew||| SAVECLEARSV||| SAVECOMPPAD||| SAVEPADSV||| SAVETMPS||| SAVE_DEFSV|5.004050||p SPAGAIN||| SP||| START_EXTERN_C|5.005000||p START_MY_CXT|5.007003||p STMT_END|||p STMT_START|||p STR_WITH_LEN|5.009003||p ST||| SV_CONST_RETURN|5.009003||p SV_COW_DROP_PV|5.008001||p SV_COW_SHARED_HASH_KEYS|5.009005||p SV_GMAGIC|5.007002||p SV_HAS_TRAILING_NUL|5.009004||p SV_IMMEDIATE_UNREF|5.007001||p SV_MUTABLE_RETURN|5.009003||p SV_NOSTEAL|5.009002||p SV_SMAGIC|5.009003||p SV_UTF8_NO_ENCODING|5.008001||p SVfARG|5.009005||p SVf_UTF8|5.006000||p SVf|5.006000||p SVt_IV||| SVt_NV||| SVt_PVAV||| SVt_PVCV||| SVt_PVHV||| SVt_PVMG||| SVt_PV||| Safefree||| Slab_Alloc||| Slab_Free||| Slab_to_rw||| StructCopy||| SvCUR_set||| SvCUR||| SvEND||| SvGAMAGIC||5.006001| SvGETMAGIC|5.004050||p SvGROW||| SvIOK_UV||5.006000| 
SvIOK_notUV||5.006000| SvIOK_off||| SvIOK_only_UV||5.006000| SvIOK_only||| SvIOK_on||| SvIOKp||| SvIOK||| SvIVX||| SvIV_nomg|5.009001||p SvIV_set||| SvIVx||| SvIV||| SvIsCOW_shared_hash||5.008003| SvIsCOW||5.008003| SvLEN_set||| SvLEN||| SvLOCK||5.007003| SvMAGIC_set|5.009003||p SvNIOK_off||| SvNIOKp||| SvNIOK||| SvNOK_off||| SvNOK_only||| SvNOK_on||| SvNOKp||| SvNOK||| SvNVX||| SvNV_set||| SvNVx||| SvNV||| SvOK||| SvOOK_offset||5.011000| SvOOK||| SvPOK_off||| SvPOK_only_UTF8||5.006000| SvPOK_only||| SvPOK_on||| SvPOKp||| SvPOK||| SvPVX_const|5.009003||p SvPVX_mutable|5.009003||p SvPVX||| SvPV_const|5.009003||p SvPV_flags_const_nolen|5.009003||p SvPV_flags_const|5.009003||p SvPV_flags_mutable|5.009003||p SvPV_flags|5.007002||p SvPV_force_flags_mutable|5.009003||p SvPV_force_flags_nolen|5.009003||p SvPV_force_flags|5.007002||p SvPV_force_mutable|5.009003||p SvPV_force_nolen|5.009003||p SvPV_force_nomg_nolen|5.009003||p SvPV_force_nomg|5.007002||p SvPV_force|||p SvPV_mutable|5.009003||p SvPV_nolen_const|5.009003||p SvPV_nolen|5.006000||p SvPV_nomg_const_nolen|5.009003||p SvPV_nomg_const|5.009003||p SvPV_nomg|5.007002||p SvPV_renew|5.009003||p SvPV_set||| SvPVbyte_force||5.009002| SvPVbyte_nolen||5.006000| SvPVbytex_force||5.006000| SvPVbytex||5.006000| SvPVbyte|5.006000||p SvPVutf8_force||5.006000| SvPVutf8_nolen||5.006000| SvPVutf8x_force||5.006000| SvPVutf8x||5.006000| SvPVutf8||5.006000| SvPVx||| SvPV||| SvREFCNT_dec||| SvREFCNT_inc_NN|5.009004||p SvREFCNT_inc_simple_NN|5.009004||p SvREFCNT_inc_simple_void_NN|5.009004||p SvREFCNT_inc_simple_void|5.009004||p SvREFCNT_inc_simple|5.009004||p SvREFCNT_inc_void_NN|5.009004||p SvREFCNT_inc_void|5.009004||p SvREFCNT_inc|||p SvREFCNT||| SvROK_off||| SvROK_on||| SvROK||| SvRV_set|5.009003||p SvRV||| SvRXOK||5.009005| SvRX||5.009005| SvSETMAGIC||| SvSHARED_HASH|5.009003||p SvSHARE||5.007003| SvSTASH_set|5.009003||p SvSTASH||| SvSetMagicSV_nosteal||5.004000| SvSetMagicSV||5.004000| SvSetSV_nosteal||5.004000| SvSetSV||| 
SvTAINTED_off||5.004000| SvTAINTED_on||5.004000| SvTAINTED||5.004000| SvTAINT||| SvTRUE||| SvTYPE||| SvUNLOCK||5.007003| SvUOK|5.007001|5.006000|p SvUPGRADE||| SvUTF8_off||5.006000| SvUTF8_on||5.006000| SvUTF8||5.006000| SvUVXx|5.004000||p SvUVX|5.004000||p SvUV_nomg|5.009001||p SvUV_set|5.009003||p SvUVx|5.004000||p SvUV|5.004000||p SvVOK||5.008001| SvVSTRING_mg|5.009004||p THIS|||n UNDERBAR|5.009002||p UTF8_MAXBYTES|5.009002||p UVSIZE|5.006000||p UVTYPE|5.006000||p UVXf|5.007001||p UVof|5.006000||p UVuf|5.006000||p UVxf|5.006000||p WARN_ALL|5.006000||p WARN_AMBIGUOUS|5.006000||p WARN_ASSERTIONS|5.011000||p WARN_BAREWORD|5.006000||p WARN_CLOSED|5.006000||p WARN_CLOSURE|5.006000||p WARN_DEBUGGING|5.006000||p WARN_DEPRECATED|5.006000||p WARN_DIGIT|5.006000||p WARN_EXEC|5.006000||p WARN_EXITING|5.006000||p WARN_GLOB|5.006000||p WARN_INPLACE|5.006000||p WARN_INTERNAL|5.006000||p WARN_IO|5.006000||p WARN_LAYER|5.008000||p WARN_MALLOC|5.006000||p WARN_MISC|5.006000||p WARN_NEWLINE|5.006000||p WARN_NUMERIC|5.006000||p WARN_ONCE|5.006000||p WARN_OVERFLOW|5.006000||p WARN_PACK|5.006000||p WARN_PARENTHESIS|5.006000||p WARN_PIPE|5.006000||p WARN_PORTABLE|5.006000||p WARN_PRECEDENCE|5.006000||p WARN_PRINTF|5.006000||p WARN_PROTOTYPE|5.006000||p WARN_QW|5.006000||p WARN_RECURSION|5.006000||p WARN_REDEFINE|5.006000||p WARN_REGEXP|5.006000||p WARN_RESERVED|5.006000||p WARN_SEMICOLON|5.006000||p WARN_SEVERE|5.006000||p WARN_SIGNAL|5.006000||p WARN_SUBSTR|5.006000||p WARN_SYNTAX|5.006000||p WARN_TAINT|5.006000||p WARN_THREADS|5.008000||p WARN_UNINITIALIZED|5.006000||p WARN_UNOPENED|5.006000||p WARN_UNPACK|5.006000||p WARN_UNTIE|5.006000||p WARN_UTF8|5.006000||p WARN_VOID|5.006000||p XCPT_CATCH|5.009002||p XCPT_RETHROW|5.009002||p XCPT_TRY_END|5.009002||p XCPT_TRY_START|5.009002||p XPUSHi||| XPUSHmortal|5.009002||p XPUSHn||| XPUSHp||| XPUSHs||| XPUSHu|5.004000||p XSPROTO|5.010000||p XSRETURN_EMPTY||| XSRETURN_IV||| XSRETURN_NO||| XSRETURN_NV||| XSRETURN_PV||| XSRETURN_UNDEF||| 
XSRETURN_UV|5.008001||p XSRETURN_YES||| XSRETURN|||p XST_mIV||| XST_mNO||| XST_mNV||| XST_mPV||| XST_mUNDEF||| XST_mUV|5.008001||p XST_mYES||| XS_VERSION_BOOTCHECK||| XS_VERSION||| XSprePUSH|5.006000||p XS||| ZeroD|5.009002||p Zero||| _aMY_CXT|5.007003||p _pMY_CXT|5.007003||p aMY_CXT_|5.007003||p aMY_CXT|5.007003||p aTHXR_|5.011000||p aTHXR|5.011000||p aTHX_|5.006000||p aTHX|5.006000||p add_data|||n addmad||| allocmy||| amagic_call||| amagic_cmp_locale||| amagic_cmp||| amagic_i_ncmp||| amagic_ncmp||| any_dup||| ao||| append_elem||| append_list||| append_madprops||| apply_attrs_my||| apply_attrs_string||5.006001| apply_attrs||| apply||| atfork_lock||5.007003|n atfork_unlock||5.007003|n av_arylen_p||5.009003| av_clear||| av_create_and_push||5.009005| av_create_and_unshift_one||5.009005| av_delete||5.006000| av_exists||5.006000| av_extend||| av_fetch||| av_fill||| av_iter_p||5.011000| av_len||| av_make||| av_pop||| av_push||| av_reify||| av_shift||| av_store||| av_undef||| av_unshift||| ax|||n bad_type||| bind_match||| block_end||| block_gimme||5.004000| block_start||| boolSV|5.004000||p boot_core_PerlIO||| boot_core_UNIVERSAL||| boot_core_mro||| bytes_from_utf8||5.007001| bytes_to_uni|||n bytes_to_utf8||5.006001| call_argv|5.006000||p call_atexit||5.006000| call_list||5.004000| call_method|5.006000||p call_pv|5.006000||p call_sv|5.006000||p calloc||5.007002|n cando||| cast_i32||5.006000| cast_iv||5.006000| cast_ulong||5.006000| cast_uv||5.006000| check_type_and_open||| check_uni||| checkcomma||| checkposixcc||| ckWARN|5.006000||p ck_anoncode||| ck_bitop||| ck_concat||| ck_defined||| ck_delete||| ck_die||| ck_each||| ck_eof||| ck_eval||| ck_exec||| ck_exists||| ck_exit||| ck_ftst||| ck_fun||| ck_glob||| ck_grep||| ck_index||| ck_join||| ck_lfun||| ck_listiob||| ck_match||| ck_method||| ck_null||| ck_open||| ck_readline||| ck_repeat||| ck_require||| ck_return||| ck_rfun||| ck_rvconst||| ck_sassign||| ck_select||| ck_shift||| ck_sort||| ck_spair||| ck_split||| 
ck_subr||| ck_substr||| ck_svconst||| ck_trunc||| ck_unpack||| ckwarn_d||5.009003| ckwarn||5.009003| cl_and|||n cl_anything|||n cl_init_zero|||n cl_init|||n cl_is_anything|||n cl_or|||n clear_placeholders||| closest_cop||| convert||| cop_free||| cr_textfilter||| create_eval_scope||| croak_nocontext|||vn croak_xs_usage||5.011000| croak|||v csighandler||5.009003|n curmad||| custom_op_desc||5.007003| custom_op_name||5.007003| cv_ckproto_len||| cv_clone||| cv_const_sv||5.004000| cv_dump||| cv_undef||| cx_dump||5.005000| cx_dup||| cxinc||| dAXMARK|5.009003||p dAX|5.007002||p dITEMS|5.007002||p dMARK||| dMULTICALL||5.009003| dMY_CXT_SV|5.007003||p dMY_CXT|5.007003||p dNOOP|5.006000||p dORIGMARK||| dSP||| dTHR|5.004050||p dTHXR|5.011000||p dTHXa|5.006000||p dTHXoa|5.006000||p dTHX|5.006000||p dUNDERBAR|5.009002||p dVAR|5.009003||p dXCPT|5.009002||p dXSARGS||| dXSI32||| dXSTARG|5.006000||p deb_curcv||| deb_nocontext|||vn deb_stack_all||| deb_stack_n||| debop||5.005000| debprofdump||5.005000| debprof||| debstackptrs||5.007003| debstack||5.007003| debug_start_match||| deb||5.007003|v del_sv||| delete_eval_scope||| delimcpy||5.004000| deprecate_old||| deprecate||| despatch_signals||5.007001| destroy_matcher||| die_nocontext|||vn die_where||| die|||v dirp_dup||| div128||| djSP||| do_aexec5||| do_aexec||| do_aspawn||| do_binmode||5.004050| do_chomp||| do_chop||| do_close||| do_dump_pad||| do_eof||| do_exec3||| do_execfree||| do_exec||| do_gv_dump||5.006000| do_gvgv_dump||5.006000| do_hv_dump||5.006000| do_ipcctl||| do_ipcget||| do_join||| do_kv||| do_magic_dump||5.006000| do_msgrcv||| do_msgsnd||| do_oddball||| do_op_dump||5.006000| do_op_xmldump||| do_open9||5.006000| do_openn||5.007001| do_open||5.004000| do_pmop_dump||5.006000| do_pmop_xmldump||| do_print||| do_readline||| do_seek||| do_semop||| do_shmio||| do_smartmatch||| do_spawn_nowait||| do_spawn||| do_sprintf||| do_sv_dump||5.006000| do_sysseek||| do_tell||| do_trans_complex_utf8||| do_trans_complex||| 
do_trans_count_utf8||| do_trans_count||| do_trans_simple_utf8||| do_trans_simple||| do_trans||| do_vecget||| do_vecset||| do_vop||| docatch||| doeval||| dofile||| dofindlabel||| doform||| doing_taint||5.008001|n dooneliner||| doopen_pm||| doparseform||| dopoptoeval||| dopoptogiven||| dopoptolabel||| dopoptoloop||| dopoptosub_at||| dopoptowhen||| doref||5.009003| dounwind||| dowantarray||| dump_all||5.006000| dump_eval||5.006000| dump_exec_pos||| dump_fds||| dump_form||5.006000| dump_indent||5.006000|v dump_mstats||| dump_packsubs||5.006000| dump_sub||5.006000| dump_sv_child||| dump_trie_interim_list||| dump_trie_interim_table||| dump_trie||| dump_vindent||5.006000| dumpuntil||| dup_attrlist||| emulate_cop_io||| eval_pv|5.006000||p eval_sv|5.006000||p exec_failed||| expect_number||| fbm_compile||5.005000| fbm_instr||5.005000| feature_is_enabled||| fetch_cop_label||5.011000| filter_add||| filter_del||| filter_gets||| filter_read||| find_and_forget_pmops||| find_array_subscript||| find_beginning||| find_byclass||| find_hash_subscript||| find_in_my_stash||| find_runcv||5.008001| find_rundefsvoffset||5.009002| find_script||| find_uninit_var||| first_symbol|||n fold_constants||| forbid_setid||| force_ident||| force_list||| force_next||| force_version||| force_word||| forget_pmop||| form_nocontext|||vn form||5.004000|v fp_dup||| fprintf_nocontext|||vn free_global_struct||| free_tied_hv_pool||| free_tmps||| gen_constant_list||| get_arena||| get_aux_mg||| get_av|5.006000||p get_context||5.006000|n get_cvn_flags||5.009005| get_cv|5.006000||p get_db_sub||| get_debug_opts||| get_hash_seed||| get_hv|5.006000||p get_isa_hash||| get_mstats||| get_no_modify||| get_num||| get_op_descs||5.005000| get_op_names||5.005000| get_opargs||| get_ppaddr||5.006000| get_re_arg||| get_sv|5.006000||p get_vtbl||5.005030| getcwd_sv||5.007002| getenv_len||| glob_2number||| glob_assign_glob||| glob_assign_ref||| gp_dup||| gp_free||| gp_ref||| grok_bin|5.007003||p grok_hex|5.007003||p 
grok_number|5.007002||p grok_numeric_radix|5.007002||p grok_oct|5.007003||p group_end||| gv_AVadd||| gv_HVadd||| gv_IOadd||| gv_SVadd||| gv_autoload4||5.004000| gv_check||| gv_const_sv||5.009003| gv_dump||5.006000| gv_efullname3||5.004000| gv_efullname4||5.006001| gv_efullname||| gv_ename||| gv_fetchfile_flags||5.009005| gv_fetchfile||| gv_fetchmeth_autoload||5.007003| gv_fetchmethod_autoload||5.004000| gv_fetchmethod_flags||5.011000| gv_fetchmethod||| gv_fetchmeth||| gv_fetchpvn_flags|5.009002||p gv_fetchpvs|5.009004||p gv_fetchpv||| gv_fetchsv||5.009002| gv_fullname3||5.004000| gv_fullname4||5.006001| gv_fullname||| gv_get_super_pkg||| gv_handler||5.007001| gv_init_sv||| gv_init||| gv_name_set||5.009004| gv_stashpvn|5.004000||p gv_stashpvs|5.009003||p gv_stashpv||| gv_stashsv||| he_dup||| hek_dup||| hfreeentries||| hsplit||| hv_assert||5.011000| hv_auxinit|||n hv_backreferences_p||| hv_clear_placeholders||5.009001| hv_clear||| hv_common_key_len||5.010000| hv_common||5.010000| hv_copy_hints_hv||| hv_delayfree_ent||5.004000| hv_delete_common||| hv_delete_ent||5.004000| hv_delete||| hv_eiter_p||5.009003| hv_eiter_set||5.009003| hv_exists_ent||5.004000| hv_exists||| hv_fetch_ent||5.004000| hv_fetchs|5.009003||p hv_fetch||| hv_free_ent||5.004000| hv_iterinit||| hv_iterkeysv||5.004000| hv_iterkey||| hv_iternext_flags||5.008000| hv_iternextsv||| hv_iternext||| hv_iterval||| hv_kill_backrefs||| hv_ksplit||5.004000| hv_magic_check|||n hv_magic||| hv_name_set||5.009003| hv_notallowed||| hv_placeholders_get||5.009003| hv_placeholders_p||5.009003| hv_placeholders_set||5.009003| hv_riter_p||5.009003| hv_riter_set||5.009003| hv_scalar||5.009001| hv_store_ent||5.004000| hv_store_flags||5.008000| hv_stores|5.009004||p hv_store||| hv_undef||| ibcmp_locale||5.004000| ibcmp_utf8||5.007003| ibcmp||| incline||| incpush_if_exists||| incpush_use_sep||| incpush||| ingroup||| init_argv_symbols||| init_debugger||| init_global_struct||| init_i18nl10n||5.006000| init_i18nl14n||5.006000| 
init_ids||| init_interp||| init_main_stash||| init_perllib||| init_postdump_symbols||| init_predump_symbols||| init_stacks||5.005000| init_tm||5.007002| instr||| intro_my||| intuit_method||| intuit_more||| invert||| io_close||| isALNUMC|5.006000||p isALNUM||| isALPHA||| isASCII|5.006000||p isBLANK|5.006001||p isCNTRL|5.006000||p isDIGIT||| isGRAPH|5.006000||p isGV_with_GP|5.009004||p isLOWER||| isPRINT|5.004000||p isPSXSPC|5.006001||p isPUNCT|5.006000||p isSPACE||| isUPPER||| isXDIGIT|5.006000||p is_an_int||| is_gv_magical_sv||| is_handle_constructor|||n is_list_assignment||| is_lvalue_sub||5.007001| is_uni_alnum_lc||5.006000| is_uni_alnumc_lc||5.006000| is_uni_alnumc||5.006000| is_uni_alnum||5.006000| is_uni_alpha_lc||5.006000| is_uni_alpha||5.006000| is_uni_ascii_lc||5.006000| is_uni_ascii||5.006000| is_uni_cntrl_lc||5.006000| is_uni_cntrl||5.006000| is_uni_digit_lc||5.006000| is_uni_digit||5.006000| is_uni_graph_lc||5.006000| is_uni_graph||5.006000| is_uni_idfirst_lc||5.006000| is_uni_idfirst||5.006000| is_uni_lower_lc||5.006000| is_uni_lower||5.006000| is_uni_print_lc||5.006000| is_uni_print||5.006000| is_uni_punct_lc||5.006000| is_uni_punct||5.006000| is_uni_space_lc||5.006000| is_uni_space||5.006000| is_uni_upper_lc||5.006000| is_uni_upper||5.006000| is_uni_xdigit_lc||5.006000| is_uni_xdigit||5.006000| is_utf8_alnumc||5.006000| is_utf8_alnum||5.006000| is_utf8_alpha||5.006000| is_utf8_ascii||5.006000| is_utf8_char_slow|||n is_utf8_char||5.006000| is_utf8_cntrl||5.006000| is_utf8_common||| is_utf8_digit||5.006000| is_utf8_graph||5.006000| is_utf8_idcont||5.008000| is_utf8_idfirst||5.006000| is_utf8_lower||5.006000| is_utf8_mark||5.006000| is_utf8_print||5.006000| is_utf8_punct||5.006000| is_utf8_space||5.006000| is_utf8_string_loclen||5.009003| is_utf8_string_loc||5.008001| is_utf8_string||5.006001| is_utf8_upper||5.006000| is_utf8_xdigit||5.006000| isa_lookup||| items|||n ix|||n jmaybe||| join_exact||| keyword||| leave_scope||| lex_end||| lex_start||| 
linklist||| listkids||| list||| load_module_nocontext|||vn load_module|5.006000||pv localize||| looks_like_bool||| looks_like_number||| lop||| mPUSHi|5.009002||p mPUSHn|5.009002||p mPUSHp|5.009002||p mPUSHs|5.011000||p mPUSHu|5.009002||p mXPUSHi|5.009002||p mXPUSHn|5.009002||p mXPUSHp|5.009002||p mXPUSHs|5.011000||p mXPUSHu|5.009002||p mad_free||| madlex||| madparse||| magic_clear_all_env||| magic_clearenv||| magic_clearhint||| magic_clearisa||| magic_clearpack||| magic_clearsig||| magic_dump||5.006000| magic_existspack||| magic_freearylen_p||| magic_freeovrld||| magic_getarylen||| magic_getdefelem||| magic_getnkeys||| magic_getpack||| magic_getpos||| magic_getsig||| magic_getsubstr||| magic_gettaint||| magic_getuvar||| magic_getvec||| magic_get||| magic_killbackrefs||| magic_len||| magic_methcall||| magic_methpack||| magic_nextpack||| magic_regdata_cnt||| magic_regdatum_get||| magic_regdatum_set||| magic_scalarpack||| magic_set_all_env||| magic_setamagic||| magic_setarylen||| magic_setcollxfrm||| magic_setdbline||| magic_setdefelem||| magic_setenv||| magic_sethint||| magic_setisa||| magic_setmglob||| magic_setnkeys||| magic_setpack||| magic_setpos||| magic_setregexp||| magic_setsig||| magic_setsubstr||| magic_settaint||| magic_setutf8||| magic_setuvar||| magic_setvec||| magic_set||| magic_sizepack||| magic_wipepack||| make_matcher||| make_trie_failtable||| make_trie||| malloc_good_size|||n malloced_size|||n malloc||5.007002|n markstack_grow||| matcher_matches_sv||| measure_struct||| memEQ|5.004000||p memNE|5.004000||p mem_collxfrm||| mem_log_common|||n mess_alloc||| mess_nocontext|||vn mess||5.006000|v method_common||| mfree||5.007002|n mg_clear||| mg_copy||| mg_dup||| mg_find||| mg_free||| mg_get||| mg_length||5.005000| mg_localize||| mg_magical||| mg_set||| mg_size||5.005000| mini_mktime||5.007002| missingterm||| mode_from_discipline||| modkids||| mod||| more_bodies||| more_sv||| moreswitches||| mro_get_from_name||5.011000| mro_get_linear_isa_dfs||| 
mro_get_linear_isa||5.009005| mro_get_private_data||5.011000| mro_isa_changed_in||| mro_meta_dup||| mro_meta_init||| mro_method_changed_in||5.009005| mro_register||5.011000| mro_set_mro||5.011000| mro_set_private_data||5.011000| mul128||| mulexp10|||n my_atof2||5.007002| my_atof||5.006000| my_attrs||| my_bcopy|||n my_betoh16|||n my_betoh32|||n my_betoh64|||n my_betohi|||n my_betohl|||n my_betohs|||n my_bzero|||n my_chsize||| my_clearenv||| my_cxt_index||| my_cxt_init||| my_dirfd||5.009005| my_exit_jump||| my_exit||| my_failure_exit||5.004000| my_fflush_all||5.006000| my_fork||5.007003|n my_htobe16|||n my_htobe32|||n my_htobe64|||n my_htobei|||n my_htobel|||n my_htobes|||n my_htole16|||n my_htole32|||n my_htole64|||n my_htolei|||n my_htolel|||n my_htoles|||n my_htonl||| my_kid||| my_letoh16|||n my_letoh32|||n my_letoh64|||n my_letohi|||n my_letohl|||n my_letohs|||n my_lstat||| my_memcmp||5.004000|n my_memset|||n my_ntohl||| my_pclose||5.004000| my_popen_list||5.007001| my_popen||5.004000| my_setenv||| my_snprintf|5.009004||pvn my_socketpair||5.007003|n my_sprintf|5.009003||pvn my_stat||| my_strftime||5.007002| my_strlcat|5.009004||pn my_strlcpy|5.009004||pn my_swabn|||n my_swap||| my_unexec||| my_vsnprintf||5.009004|n need_utf8|||n newANONATTRSUB||5.006000| newANONHASH||| newANONLIST||| newANONSUB||| newASSIGNOP||| newATTRSUB||5.006000| newAVREF||| newAV||| newBINOP||| newCONDOP||| newCONSTSUB|5.004050||p newCVREF||| newDEFSVOP||| newFORM||| newFOROP||| newGIVENOP||5.009003| newGIVWHENOP||| newGP||| newGVOP||| newGVREF||| newGVgen||| newHVREF||| newHVhv||5.005000| newHV||| newIO||| newLISTOP||| newLOGOP||| newLOOPEX||| newLOOPOP||| newMADPROP||| newMADsv||| newMYSUB||| newNULLLIST||| newOP||| newPADOP||| newPMOP||| newPROG||| newPVOP||| newRANGE||| newRV_inc|5.004000||p newRV_noinc|5.004000||p newRV||| newSLICEOP||| newSTATEOP||| newSUB||| newSVOP||| newSVREF||| newSV_type|5.009005||p newSVhek||5.009003| newSViv||| newSVnv||| newSVpvf_nocontext|||vn 
newSVpvf||5.004000|v newSVpvn_flags|5.011000||p newSVpvn_share|5.007001||p newSVpvn_utf8|5.011000||p newSVpvn|5.004050||p newSVpvs_flags|5.011000||p newSVpvs_share||5.009003| newSVpvs|5.009003||p newSVpv||| newSVrv||| newSVsv||| newSVuv|5.006000||p newSV||| newTOKEN||| newUNOP||| newWHENOP||5.009003| newWHILEOP||5.009003| newXS_flags||5.009004| newXSproto||5.006000| newXS||5.006000| new_collate||5.006000| new_constant||| new_ctype||5.006000| new_he||| new_logop||| new_numeric||5.006000| new_stackinfo||5.005000| new_version||5.009000| new_warnings_bitfield||| next_symbol||| nextargv||| nextchar||| ninstr||| no_bareword_allowed||| no_fh_allowed||| no_op||| not_a_number||| nothreadhook||5.008000| nuke_stacks||| num_overflow|||n offer_nice_chunk||| oopsAV||| oopsHV||| op_clear||| op_const_sv||| op_dump||5.006000| op_free||| op_getmad_weak||| op_getmad||| op_null||5.007002| op_refcnt_dec||| op_refcnt_inc||| op_refcnt_lock||5.009002| op_refcnt_unlock||5.009002| op_xmldump||| open_script||| pMY_CXT_|5.007003||p pMY_CXT|5.007003||p pTHX_|5.006000||p pTHX|5.006000||p packWARN|5.007003||p pack_cat||5.007003| pack_rec||| package||| packlist||5.008001| pad_add_anon||| pad_add_name||| pad_alloc||| pad_block_start||| pad_check_dup||| pad_compname_type||| pad_findlex||| pad_findmy||| pad_fixup_inner_anons||| pad_free||| pad_leavemy||| pad_new||| pad_peg|||n pad_push||| pad_reset||| pad_setsv||| pad_sv||5.011000| pad_swipe||| pad_tidy||| pad_undef||| parse_body||| parse_unicode_opts||| parser_dup||| parser_free||| path_is_absolute|||n peep||| pending_Slabs_to_ro||| perl_alloc_using|||n perl_alloc|||n perl_clone_using|||n perl_clone|||n perl_construct|||n perl_destruct||5.007003|n perl_free|||n perl_parse||5.006000|n perl_run|||n pidgone||| pm_description||| pmflag||| pmop_dump||5.006000| pmop_xmldump||| pmruntime||| pmtrans||| pop_scope||| pregcomp||5.009005| pregexec||| pregfree2||5.011000| pregfree||| prepend_elem||| prepend_madprops||| printbuf||| printf_nocontext|||vn 
process_special_blocks||| ptr_table_clear||5.009005| ptr_table_fetch||5.009005| ptr_table_find|||n ptr_table_free||5.009005| ptr_table_new||5.009005| ptr_table_split||5.009005| ptr_table_store||5.009005| push_scope||| put_byte||| pv_display|5.006000||p pv_escape|5.009004||p pv_pretty|5.009004||p pv_uni_display||5.007003| qerror||| qsortsvu||| re_compile||5.009005| re_croak2||| re_dup_guts||| re_intuit_start||5.009005| re_intuit_string||5.006000| readpipe_override||| realloc||5.007002|n reentrant_free||| reentrant_init||| reentrant_retry|||vn reentrant_size||| ref_array_or_hash||| refcounted_he_chain_2hv||| refcounted_he_fetch||| refcounted_he_free||| refcounted_he_new_common||| refcounted_he_new||| refcounted_he_value||| refkids||| refto||| ref||5.011000| reg_check_named_buff_matched||| reg_named_buff_all||5.009005| reg_named_buff_exists||5.009005| reg_named_buff_fetch||5.009005| reg_named_buff_firstkey||5.009005| reg_named_buff_iter||| reg_named_buff_nextkey||5.009005| reg_named_buff_scalar||5.009005| reg_named_buff||| reg_namedseq||| reg_node||| reg_numbered_buff_fetch||| reg_numbered_buff_length||| reg_numbered_buff_store||| reg_qr_package||| reg_recode||| reg_scan_name||| reg_skipcomment||| reg_temp_copy||| reganode||| regatom||| regbranch||| regclass_swash||5.009004| regclass||| regcppop||| regcppush||| regcurly|||n regdump_extflags||| regdump||5.005000| regdupe_internal||| regexec_flags||5.005000| regfree_internal||5.009005| reghop3|||n reghop4|||n reghopmaybe3|||n reginclass||| reginitcolors||5.006000| reginsert||| regmatch||| regnext||5.005000| regpiece||| regpposixcc||| regprop||| regrepeat||| regtail_study||| regtail||| regtry||| reguni||| regwhite|||n reg||| repeatcpy||| report_evil_fh||| report_uninit||| require_pv||5.006000| require_tie_mod||| restore_magic||| rninstr||| rsignal_restore||| rsignal_save||| rsignal_state||5.004000| rsignal||5.004000| run_body||| run_user_filter||| runops_debug||5.005000| runops_standard||5.005000| rvpv_dup||| 
rxres_free||| rxres_restore||| rxres_save||| safesyscalloc||5.006000|n safesysfree||5.006000|n safesysmalloc||5.006000|n safesysrealloc||5.006000|n same_dirent||| save_I16||5.004000| save_I32||| save_I8||5.006000| save_adelete||5.011000| save_aelem||5.004050| save_alloc||5.006000| save_aptr||| save_ary||| save_bool||5.008001| save_clearsv||| save_delete||| save_destructor_x||5.006000| save_destructor||5.006000| save_freeop||| save_freepv||| save_freesv||| save_generic_pvref||5.006001| save_generic_svref||5.005030| save_gp||5.004000| save_hash||| save_hek_flags|||n save_helem_flags||5.011000| save_helem||5.004050| save_hints||| save_hptr||| save_int||| save_item||| save_iv||5.005000| save_lines||| save_list||| save_long||| save_magic||| save_mortalizesv||5.007001| save_nogv||| save_op||| save_padsv_and_mortalize||5.011000| save_pptr||| save_pushi32ptr||| save_pushptri32ptr||| save_pushptrptr||| save_pushptr||5.011000| save_re_context||5.006000| save_scalar_at||| save_scalar||| save_set_svflags||5.009000| save_shared_pvref||5.007003| save_sptr||| save_svref||| save_vptr||5.006000| savepvn||| savepvs||5.009003| savepv||| savesharedpvn||5.009005| savesharedpv||5.007003| savestack_grow_cnt||5.008001| savestack_grow||| savesvpv||5.009002| sawparens||| scalar_mod_type|||n scalarboolean||| scalarkids||| scalarseq||| scalarvoid||| scalar||| scan_bin||5.006000| scan_commit||| scan_const||| scan_formline||| scan_heredoc||| scan_hex||| scan_ident||| scan_inputsymbol||| scan_num||5.007001| scan_oct||| scan_pat||| scan_str||| scan_subst||| scan_trans||| scan_version||5.009001| scan_vstring||5.009005| scan_word||| scope||| screaminstr||5.005000| search_const||| seed||5.008001| sequence_num||| sequence_tail||| sequence||| set_context||5.006000|n set_numeric_local||5.006000| set_numeric_radix||5.006000| set_numeric_standard||5.006000| setdefout||| share_hek_flags||| share_hek||5.004000| si_dup||| sighandler|||n simplify_sort||| skipspace0||| skipspace1||| skipspace2||| skipspace||| 
softref2xv||| sortcv_stacked||| sortcv_xsub||| sortcv||| sortsv_flags||5.009003| sortsv||5.007003| space_join_names_mortal||| ss_dup||| stack_grow||| start_force||| start_glob||| start_subparse||5.004000| stashpv_hvname_match||5.011000| stdize_locale||| store_cop_label||| strEQ||| strGE||| strGT||| strLE||| strLT||| strNE||| str_to_version||5.006000| strip_return||| strnEQ||| strnNE||| study_chunk||| sub_crush_depth||| sublex_done||| sublex_push||| sublex_start||| sv_2bool||| sv_2cv||| sv_2io||| sv_2iuv_common||| sv_2iuv_non_preserve||| sv_2iv_flags||5.009001| sv_2iv||| sv_2mortal||| sv_2num||| sv_2nv||| sv_2pv_flags|5.007002||p sv_2pv_nolen|5.006000||p sv_2pvbyte_nolen|5.006000||p sv_2pvbyte|5.006000||p sv_2pvutf8_nolen||5.006000| sv_2pvutf8||5.006000| sv_2pv||| sv_2uv_flags||5.009001| sv_2uv|5.004000||p sv_add_arena||| sv_add_backref||| sv_backoff||| sv_bless||| sv_cat_decode||5.008001| sv_catpv_mg|5.004050||p sv_catpvf_mg_nocontext|||pvn sv_catpvf_mg|5.006000|5.004000|pv sv_catpvf_nocontext|||vn sv_catpvf||5.004000|v sv_catpvn_flags||5.007002| sv_catpvn_mg|5.004050||p sv_catpvn_nomg|5.007002||p sv_catpvn||| sv_catpvs|5.009003||p sv_catpv||| sv_catsv_flags||5.007002| sv_catsv_mg|5.004050||p sv_catsv_nomg|5.007002||p sv_catsv||| sv_catxmlpvn||| sv_catxmlsv||| sv_chop||| sv_clean_all||| sv_clean_objs||| sv_clear||| sv_cmp_locale||5.004000| sv_cmp||| sv_collxfrm||| sv_compile_2op||5.008001| sv_copypv||5.007003| sv_dec||| sv_del_backref||| sv_derived_from||5.004000| sv_destroyable||5.010000| sv_does||5.009004| sv_dump||| sv_dup_inc_multiple||| sv_dup||| sv_eq||| sv_exp_grow||| sv_force_normal_flags||5.007001| sv_force_normal||5.006000| sv_free2||| sv_free_arenas||| sv_free||| sv_gets||5.004000| sv_grow||| sv_i_ncmp||| sv_inc||| sv_insert_flags||5.011000| sv_insert||| sv_isa||| sv_isobject||| sv_iv||5.005000| sv_kill_backrefs||| sv_len_utf8||5.006000| sv_len||| sv_magic_portable|5.011000|5.004000|p sv_magicext||5.007003| sv_magic||| sv_mortalcopy||| sv_ncmp||| 
sv_newmortal||| sv_newref||| sv_nolocking||5.007003| sv_nosharing||5.007003| sv_nounlocking||| sv_nv||5.005000| sv_peek||5.005000| sv_pos_b2u_midway||| sv_pos_b2u||5.006000| sv_pos_u2b_cached||| sv_pos_u2b_forwards|||n sv_pos_u2b_midway|||n sv_pos_u2b||5.006000| sv_pvbyten_force||5.006000| sv_pvbyten||5.006000| sv_pvbyte||5.006000| sv_pvn_force_flags|5.007002||p sv_pvn_force||| sv_pvn_nomg|5.007003|5.005000|p sv_pvn||5.005000| sv_pvutf8n_force||5.006000| sv_pvutf8n||5.006000| sv_pvutf8||5.006000| sv_pv||5.006000| sv_recode_to_utf8||5.007003| sv_reftype||| sv_release_COW||| sv_replace||| sv_report_used||| sv_reset||| sv_rvweaken||5.006000| sv_setiv_mg|5.004050||p sv_setiv||| sv_setnv_mg|5.006000||p sv_setnv||| sv_setpv_mg|5.004050||p sv_setpvf_mg_nocontext|||pvn sv_setpvf_mg|5.006000|5.004000|pv sv_setpvf_nocontext|||vn sv_setpvf||5.004000|v sv_setpviv_mg||5.008001| sv_setpviv||5.008001| sv_setpvn_mg|5.004050||p sv_setpvn||| sv_setpvs|5.009004||p sv_setpv||| sv_setref_iv||| sv_setref_nv||| sv_setref_pvn||| sv_setref_pv||| sv_setref_uv||5.007001| sv_setsv_cow||| sv_setsv_flags||5.007002| sv_setsv_mg|5.004050||p sv_setsv_nomg|5.007002||p sv_setsv||| sv_setuv_mg|5.004050||p sv_setuv|5.004000||p sv_tainted||5.004000| sv_taint||5.004000| sv_true||5.005000| sv_unglob||| sv_uni_display||5.007003| sv_unmagic||| sv_unref_flags||5.007001| sv_unref||| sv_untaint||5.004000| sv_upgrade||| sv_usepvn_flags||5.009004| sv_usepvn_mg|5.004050||p sv_usepvn||| sv_utf8_decode||5.006000| sv_utf8_downgrade||5.006000| sv_utf8_encode||5.006000| sv_utf8_upgrade_flags_grow||5.011000| sv_utf8_upgrade_flags||5.007002| sv_utf8_upgrade_nomg||5.007002| sv_utf8_upgrade||5.007001| sv_uv|5.005000||p sv_vcatpvf_mg|5.006000|5.004000|p sv_vcatpvfn||5.004000| sv_vcatpvf|5.006000|5.004000|p sv_vsetpvf_mg|5.006000|5.004000|p sv_vsetpvfn||5.004000| sv_vsetpvf|5.006000|5.004000|p sv_xmlpeek||| svtype||| swallow_bom||| swap_match_buff||| swash_fetch||5.007002| swash_get||| swash_init||5.006000| 
sys_init3||5.010000|n sys_init||5.010000|n sys_intern_clear||| sys_intern_dup||| sys_intern_init||| sys_term||5.010000|n taint_env||| taint_proper||| tmps_grow||5.006000| toLOWER||| toUPPER||| to_byte_substr||| to_uni_fold||5.007003| to_uni_lower_lc||5.006000| to_uni_lower||5.007003| to_uni_title_lc||5.006000| to_uni_title||5.007003| to_uni_upper_lc||5.006000| to_uni_upper||5.007003| to_utf8_case||5.007003| to_utf8_fold||5.007003| to_utf8_lower||5.007003| to_utf8_substr||| to_utf8_title||5.007003| to_utf8_upper||5.007003| token_free||| token_getmad||| tokenize_use||| tokeq||| tokereport||| too_few_arguments||| too_many_arguments||| uiv_2buf|||n unlnk||| unpack_rec||| unpack_str||5.007003| unpackstring||5.008001| unshare_hek_or_pvn||| unshare_hek||| unsharepvn||5.004000| unwind_handler_stack||| update_debugger_info||| upg_version||5.009005| usage||| utf16_to_utf8_reversed||5.006001| utf16_to_utf8||5.006001| utf8_distance||5.006000| utf8_hop||5.006000| utf8_length||5.007001| utf8_mg_pos_cache_update||| utf8_to_bytes||5.006001| utf8_to_uvchr||5.007001| utf8_to_uvuni||5.007001| utf8n_to_uvchr||| utf8n_to_uvuni||5.007001| utilize||| uvchr_to_utf8_flags||5.007003| uvchr_to_utf8||| uvuni_to_utf8_flags||5.007003| uvuni_to_utf8||5.007001| validate_suid||| varname||| vcmp||5.009000| vcroak||5.006000| vdeb||5.007003| vdie_common||| vdie_croak_common||| vdie||| vform||5.006000| visit||| vivify_defelem||| vivify_ref||| vload_module|5.006000||p vmess||5.006000| vnewSVpvf|5.006000|5.004000|p vnormal||5.009002| vnumify||5.009000| vstringify||5.009000| vverify||5.009003| vwarner||5.006000| vwarn||5.006000| wait4pid||| warn_nocontext|||vn warner_nocontext|||vn warner|5.006000|5.004000|pv warn|||v watch||| whichsig||| write_no_mem||| write_to_stderr||| xmldump_all||| xmldump_attr||| xmldump_eval||| xmldump_form||| xmldump_indent|||v xmldump_packsubs||| xmldump_sub||| xmldump_vindent||| yyerror||| yylex||| yyparse||| yywarn||| ); if (exists $opt{'list-unsupported'}) { my $f; for $f 
(sort { lc $a cmp lc $b } keys %API) { next unless $API{$f}{todo}; print "$f ", '.'x(40-length($f)), " ", format_version($API{$f}{todo}), "\n"; } exit 0; } # Scan for possible replacement candidates my(%replace, %need, %hints, %warnings, %depends); my $replace = 0; my($hint, $define, $function); sub find_api { my $code = shift; $code =~ s{ / (?: \*[^*]*\*+(?:[^$ccs][^*]*\*+)* / | /[^\r\n]*) | "[^"\\]*(?:\\.[^"\\]*)*" | '[^'\\]*(?:\\.[^'\\]*)*' }{}egsx; grep { exists $API{$_} } $code =~ /(\w+)/mg; } while (<DATA>) { if ($hint) { my $h = $hint->[0] eq 'Hint' ? \%hints : \%warnings; if (m{^\s*\*\s(.*?)\s*$}) { for (@{$hint->[1]}) { $h->{$_} ||= ''; # suppress warning with older perls $h->{$_} .= "$1\n"; } } else { undef $hint } } $hint = [$1, [split /,?\s+/, $2]] if m{^\s*$rccs\s+(Hint|Warning):\s+(\w+(?:,?\s+\w+)*)\s*$}; if ($define) { if ($define->[1] =~ /\\$/) { $define->[1] .= $_; } else { if (exists $API{$define->[0]} && $define->[1] !~ /^DPPP_\(/) { my @n = find_api($define->[1]); push @{$depends{$define->[0]}}, @n if @n } undef $define; } } $define = [$1, $2] if m{^\s*#\s*define\s+(\w+)(?:\([^)]*\))?\s+(.*)}; if ($function) { if (/^}/) { if (exists $API{$function->[0]}) { my @n = find_api($function->[1]); push @{$depends{$function->[0]}}, @n if @n } undef $function; } else { $function->[1] .= $_; } } $function = [$1, ''] if m{^DPPP_\(my_(\w+)\)}; $replace = $1 if m{^\s*$rccs\s+Replace:\s+(\d+)\s+$rcce\s*$}; $replace{$2} = $1 if $replace and m{^\s*#\s*define\s+(\w+)(?:\([^)]*\))?\s+(\w+)}; $replace{$2} = $1 if m{^\s*#\s*define\s+(\w+)(?:\([^)]*\))?\s+(\w+).*$rccs\s+Replace\s+$rcce}; $replace{$1} = $2 if m{^\s*$rccs\s+Replace (\w+) with (\w+)\s+$rcce\s*$}; if (m{^\s*$rccs\s+(\w+(\s*,\s*\w+)*)\s+depends\s+on\s+(\w+(\s*,\s*\w+)*)\s+$rcce\s*$}) { my @deps = map { s/\s+//g; $_ } split /,/, $3; my $d; for $d (map { s/\s+//g; $_ } split /,/, $1) { push @{$depends{$d}}, @deps; } } $need{$1} = 1 if m{^#if\s+defined\(NEED_(\w+)(?:_GLOBAL)?\)}; } for (values %depends) { my %s; 
$_ = [sort grep !$s{$_}++, @$_]; } if (exists $opt{'api-info'}) { my $f; my $count = 0; my $match = $opt{'api-info'} =~ m!^/(.*)/$! ? $1 : "^\Q$opt{'api-info'}\E\$"; for $f (sort { lc $a cmp lc $b } keys %API) { next unless $f =~ /$match/; print "\n=== $f ===\n\n"; my $info = 0; if ($API{$f}{base} || $API{$f}{todo}) { my $base = format_version($API{$f}{base} || $API{$f}{todo}); print "Supported at least starting from perl-$base.\n"; $info++; } if ($API{$f}{provided}) { my $todo = $API{$f}{todo} ? format_version($API{$f}{todo}) : "5.003"; print "Support by $ppport provided back to perl-$todo.\n"; print "Support needs to be explicitly requested by NEED_$f.\n" if exists $need{$f}; print "Depends on: ", join(', ', @{$depends{$f}}), ".\n" if exists $depends{$f}; print "\n$hints{$f}" if exists $hints{$f}; print "\nWARNING:\n$warnings{$f}" if exists $warnings{$f}; $info++; } print "No portability information available.\n" unless $info; $count++; } $count or print "Found no API matching '$opt{'api-info'}'."; print "\n"; exit 0; } if (exists $opt{'list-provided'}) { my $f; for $f (sort { lc $a cmp lc $b } keys %API) { next unless $API{$f}{provided}; my @flags; push @flags, 'explicit' if exists $need{$f}; push @flags, 'depend' if exists $depends{$f}; push @flags, 'hint' if exists $hints{$f}; push @flags, 'warning' if exists $warnings{$f}; my $flags = @flags ? 
' ['.join(', ', @flags).']' : ''; print "$f$flags\n"; } exit 0; } my @files; my @srcext = qw( .xs .c .h .cc .cpp -c.inc -xs.inc ); my $srcext = join '|', map { quotemeta $_ } @srcext; if (@ARGV) { my %seen; for (@ARGV) { if (-e) { if (-f) { push @files, $_ unless $seen{$_}++; } else { warn "'$_' is not a file.\n" } } else { my @new = grep { -f } glob $_ or warn "'$_' does not exist.\n"; push @files, grep { !$seen{$_}++ } @new; } } } else { eval { require File::Find; File::Find::find(sub { $File::Find::name =~ /($srcext)$/i and push @files, $File::Find::name; }, '.'); }; if ($@) { @files = map { glob "*$_" } @srcext; } } if (!@ARGV || $opt{filter}) { my(@in, @out); my %xsc = map { /(.*)\.xs$/ ? ("$1.c" => 1, "$1.cc" => 1) : () } @files; for (@files) { my $out = exists $xsc{$_} || /\b\Q$ppport\E$/i || !/($srcext)$/i; push @{ $out ? \@out : \@in }, $_; } if (@ARGV && @out) { warning("Skipping the following files (use --nofilter to avoid this):\n| ", join "\n| ", @out); } @files = @in; } die "No input files given!\n" unless @files; my(%files, %global, %revreplace); %revreplace = reverse %replace; my $filename; my $patch_opened = 0; for $filename (@files) { unless (open IN, "<$filename") { warn "Unable to read from $filename: $!\n"; next; } info("Scanning $filename ..."); my $c = do { local $/; <IN> }; close IN; my %file = (orig => $c, changes => 0); # Temporarily remove C/XS comments and strings from the code my @ccom; $c =~ s{ ( ^$HS*\#$HS*include\b[^\r\n]+\b(?:\Q$ppport\E|XSUB\.h)\b[^\r\n]* | ^$HS*\#$HS*(?:define|elif|if(?:def)?)\b[^\r\n]* ) | ( ^$HS*\#[^\r\n]* | "[^"\\]*(?:\\.[^"\\]*)*" | '[^'\\]*(?:\\.[^'\\]*)*' | / (?: \*[^*]*\*+(?:[^$ccs][^*]*\*+)* / | /[^\r\n]* ) ) }{ defined $2 and push @ccom, $2; defined $1 ? 
$1 : "$ccs$#ccom$cce" }mgsex; $file{ccom} = \@ccom; $file{code} = $c; $file{has_inc_ppport} = $c =~ /^$HS*#$HS*include[^\r\n]+\b\Q$ppport\E\b/m; my $func; for $func (keys %API) { my $match = $func; $match .= "|$revreplace{$func}" if exists $revreplace{$func}; if ($c =~ /\b(?:Perl_)?($match)\b/) { $file{uses_replace}{$1}++ if exists $revreplace{$func} && $1 eq $revreplace{$func}; $file{uses_Perl}{$func}++ if $c =~ /\bPerl_$func\b/; if (exists $API{$func}{provided}) { $file{uses_provided}{$func}++; if (!exists $API{$func}{base} || $API{$func}{base} > $opt{'compat-version'}) { $file{uses}{$func}++; my @deps = rec_depend($func); if (@deps) { $file{uses_deps}{$func} = \@deps; for (@deps) { $file{uses}{$_} = 0 unless exists $file{uses}{$_}; } } for ($func, @deps) { $file{needs}{$_} = 'static' if exists $need{$_}; } } } if (exists $API{$func}{todo} && $API{$func}{todo} > $opt{'compat-version'}) { if ($c =~ /\b$func\b/) { $file{uses_todo}{$func}++; } } } } while ($c =~ /^$HS*#$HS*define$HS+(NEED_(\w+?)(_GLOBAL)?)\b/mg) { if (exists $need{$2}) { $file{defined $3 ? 
'needed_global' : 'needed_static'}{$2}++; } else { warning("Possibly wrong #define $1 in $filename") } } for (qw(uses needs uses_todo needed_global needed_static)) { for $func (keys %{$file{$_}}) { push @{$global{$_}{$func}}, $filename; } } $files{$filename} = \%file; } # Globally resolve NEED_'s my $need; for $need (keys %{$global{needs}}) { if (@{$global{needs}{$need}} > 1) { my @targets = @{$global{needs}{$need}}; my @t = grep $files{$_}{needed_global}{$need}, @targets; @targets = @t if @t; @t = grep /\.xs$/i, @targets; @targets = @t if @t; my $target = shift @targets; $files{$target}{needs}{$need} = 'global'; for (@{$global{needs}{$need}}) { $files{$_}{needs}{$need} = 'extern' if $_ ne $target; } } } for $filename (@files) { exists $files{$filename} or next; info("=== Analyzing $filename ==="); my %file = %{$files{$filename}}; my $func; my $c = $file{code}; my $warnings = 0; for $func (sort keys %{$file{uses_Perl}}) { if ($API{$func}{varargs}) { unless ($API{$func}{nothxarg}) { my $changes = ($c =~ s{\b(Perl_$func\s*\(\s*)(?!aTHX_?)(\)|[^\s)]*\))} { $1 . ($2 eq ')' ? 'aTHX' : 'aTHX_ ') . 
$2 }ge); if ($changes) { warning("Doesn't pass interpreter argument aTHX to Perl_$func"); $file{changes} += $changes; } } } else { warning("Uses Perl_$func instead of $func"); $file{changes} += ($c =~ s{\bPerl_$func(\s*)\((\s*aTHX_?)?\s*} {$func$1(}g); } } for $func (sort keys %{$file{uses_replace}}) { warning("Uses $func instead of $replace{$func}"); $file{changes} += ($c =~ s/\b$func\b/$replace{$func}/g); } for $func (sort keys %{$file{uses_provided}}) { if ($file{uses}{$func}) { if (exists $file{uses_deps}{$func}) { diag("Uses $func, which depends on ", join(', ', @{$file{uses_deps}{$func}})); } else { diag("Uses $func"); } } $warnings += hint($func); } unless ($opt{quiet}) { for $func (sort keys %{$file{uses_todo}}) { print "*** WARNING: Uses $func, which may not be portable below perl ", format_version($API{$func}{todo}), ", even with '$ppport'\n"; $warnings++; } } for $func (sort keys %{$file{needed_static}}) { my $message = ''; if (not exists $file{uses}{$func}) { $message = "No need to define NEED_$func if $func is never used"; } elsif (exists $file{needs}{$func} && $file{needs}{$func} ne 'static') { $message = "No need to define NEED_$func when already needed globally"; } if ($message) { diag($message); $file{changes} += ($c =~ s/^$HS*#$HS*define$HS+NEED_$func\b.*$LF//mg); } } for $func (sort keys %{$file{needed_global}}) { my $message = ''; if (not exists $global{uses}{$func}) { $message = "No need to define NEED_${func}_GLOBAL if $func is never used"; } elsif (exists $file{needs}{$func}) { if ($file{needs}{$func} eq 'extern') { $message = "No need to define NEED_${func}_GLOBAL when already needed globally"; } elsif ($file{needs}{$func} eq 'static') { $message = "No need to define NEED_${func}_GLOBAL when only used in this file"; } } if ($message) { diag($message); $file{changes} += ($c =~ s/^$HS*#$HS*define$HS+NEED_${func}_GLOBAL\b.*$LF//mg); } } $file{needs_inc_ppport} = keys %{$file{uses}}; if ($file{needs_inc_ppport}) { my $pp = ''; for $func (sort 
keys %{$file{needs}}) { my $type = $file{needs}{$func}; next if $type eq 'extern'; my $suffix = $type eq 'global' ? '_GLOBAL' : ''; unless (exists $file{"needed_$type"}{$func}) { if ($type eq 'global') { diag("Files [@{$global{needs}{$func}}] need $func, adding global request"); } else { diag("File needs $func, adding static request"); } $pp .= "#define NEED_$func$suffix\n"; } } if ($pp && ($c =~ s/^(?=$HS*#$HS*define$HS+NEED_\w+)/$pp/m)) { $pp = ''; $file{changes}++; } unless ($file{has_inc_ppport}) { diag("Needs to include '$ppport'"); $pp .= qq(#include "$ppport"\n) } if ($pp) { $file{changes} += ($c =~ s/^($HS*#$HS*define$HS+NEED_\w+.*?)^/$1$pp/ms) || ($c =~ s/^(?=$HS*#$HS*include.*\Q$ppport\E)/$pp/m) || ($c =~ s/^($HS*#$HS*include.*XSUB.*\s*?)^/$1$pp/m) || ($c =~ s/^/$pp/); } } else { if ($file{has_inc_ppport}) { diag("No need to include '$ppport'"); $file{changes} += ($c =~ s/^$HS*?#$HS*include.*\Q$ppport\E.*?$LF//m); } } # put back in our C comments my $ix; my $cppc = 0; my @ccom = @{$file{ccom}}; for $ix (0 .. $#ccom) { if (!$opt{cplusplus} && $ccom[$ix] =~ s!^//!!) { $cppc++; $file{changes} += $c =~ s/$rccs$ix$rcce/$ccs$ccom[$ix] $cce/; } else { $c =~ s/$rccs$ix$rcce/$ccom[$ix]/; } } if ($cppc) { my $s = $cppc != 1 ? 's' : ''; warning("Uses $cppc C++ style comment$s, which is not portable"); } my $s = $warnings != 1 ? 's' : ''; my $warn = $warnings ? 
" ($warnings warning$s)" : ''; info("Analysis completed$warn"); if ($file{changes}) { if (exists $opt{copy}) { my $newfile = "$filename$opt{copy}"; if (-e $newfile) { error("'$newfile' already exists, refusing to write copy of '$filename'"); } else { local *F; if (open F, ">$newfile") { info("Writing copy of '$filename' with changes to '$newfile'"); print F $c; close F; } else { error("Cannot open '$newfile' for writing: $!"); } } } elsif (exists $opt{patch} || $opt{changes}) { if (exists $opt{patch}) { unless ($patch_opened) { if (open PATCH, ">$opt{patch}") { $patch_opened = 1; } else { error("Cannot open '$opt{patch}' for writing: $!"); delete $opt{patch}; $opt{changes} = 1; goto fallback; } } mydiff(\*PATCH, $filename, $c); } else { fallback: info("Suggested changes:"); mydiff(\*STDOUT, $filename, $c); } } else { my $s = $file{changes} == 1 ? '' : 's'; info("$file{changes} potentially required change$s detected"); } } else { info("Looks good"); } } close PATCH if $patch_opened; exit 0; sub try_use { eval "use @_;"; return $@ eq '' } sub mydiff { local *F = shift; my($file, $str) = @_; my $diff; if (exists $opt{diff}) { $diff = run_diff($opt{diff}, $file, $str); } if (!defined $diff and try_use('Text::Diff')) { $diff = Text::Diff::diff($file, \$str, { STYLE => 'Unified' }); $diff = <<HEADER . $diff;
--- $file
+++ $file.patched
HEADER
} if (!defined $diff) { error("Cannot generate a diff. Please install Text::Diff or use --copy."); return; } print F $diff; } sub run_diff { my($prog, $file, $str) = @_; my $tmp = 'dppptemp'; my $suf = 'aaa'; my $diff = ''; local *F; while (-e "$tmp.$suf") { $suf++ } $tmp = "$tmp.$suf"; if (open F, ">$tmp") { print F $str; close F; if (open F, "$prog $file $tmp |") { while (<F>) { s/\Q$tmp\E/$file.patched/; $diff .= $_; } close F; unlink $tmp; return $diff; } unlink $tmp; } else { error("Cannot open '$tmp' for writing: $!"); } return undef; } sub rec_depend { my($func, $seen) = @_; return () unless exists $depends{$func}; $seen = {%{$seen||{}}}; return () if $seen->{$func}++; my %s; grep !$s{$_}++, map { ($_, rec_depend($_, $seen)) } @{$depends{$func}}; } sub parse_version { my $ver = shift; if ($ver =~ /^(\d+)\.(\d+)\.(\d+)$/) { return ($1, $2, $3); } elsif ($ver !~ /^\d+\.[\d_]+$/) { die "cannot parse version '$ver'\n"; } $ver =~ s/_//g; $ver =~ s/$/000000/; my($r,$v,$s) = $ver =~ /(\d+)\.(\d{3})(\d{3})/; $v = int $v; $s = int $s; if ($r < 5 || ($r == 5 && $v < 6)) { if ($s % 10) { die "cannot parse version '$ver'\n"; } } return ($r, $v, $s); } sub format_version { my $ver = shift; $ver =~ s/$/000000/; my($r,$v,$s) = $ver =~ /(\d+)\.(\d{3})(\d{3})/; $v = int $v; $s = int $s; if ($r < 5 || ($r == 5 && $v < 6)) { if ($s % 10) { die "invalid version '$ver'\n"; } $s /= 10; $ver = sprintf "%d.%03d", $r, $v; $s > 0 and $ver .= sprintf "_%02d", $s; return $ver; } return sprintf "%d.%d.%d", $r, $v, $s; } sub info { $opt{quiet} and return; print @_, "\n"; } sub diag { $opt{quiet} and return; $opt{diag} and print @_, "\n"; } sub warning { $opt{quiet} and return; print "*** ", @_, "\n"; } sub error { print "*** ERROR: ", @_, "\n"; } my %given_hints; my %given_warnings; sub hint { $opt{quiet} and return; my $func = shift; my $rv = 0; if (exists $warnings{$func} && !$given_warnings{$func}++) { my $warn = $warnings{$func}; $warn =~ s!^!*** !mg; print "*** WARNING: $func\n", $warn; $rv++; } if ($opt{hints} && exists $hints{$func} && !$given_hints{$func}++) { my $hint = $hints{$func}; $hint =~ s/^/ /mg; print " --- hint for $func ---\n", $hint; } $rv; } sub usage { my($usage) = do { local(@ARGV,$/)=($0); <> } =~ /^=head\d$HS+SYNOPSIS\s*^(.*?)\s*^=/ms; my %M = ( 'I' => '*' ); 
$usage =~ s/^\s*perl\s+\S+/$^X $0/; $usage =~ s/([A-Z])<([^>]+)>/$M{$1}$2$M{$1}/g; print <<ENDUSAGE;

Usage: $usage

See perldoc $0 for details.

ENDUSAGE
exit 2; } sub strip { my $self = do { local(@ARGV, $/) = ($0); <> }; my($copy) = $self =~ /^=head\d\s+COPYRIGHT\s*^(.*?)^=\w+/ms; $copy =~ s/^(?=\S+)/ /gms; $self =~ s/^$HS+Do NOT edit.*?(?=^-)/$copy/ms; $self =~ s/^SKIP.*(?=^__DATA__)/SKIP if (\@ARGV && \$ARGV[0] eq '--unstrip') { eval { require Devel::PPPort }; \$@ and die "Cannot require Devel::PPPort, please install.\\n"; if (eval \$Devel::PPPort::VERSION < $VERSION) { die "$0 was originally generated with Devel::PPPort $VERSION.\\n" . "Your Devel::PPPort is only version \$Devel::PPPort::VERSION.\\n" . "Please install a newer version, or --unstrip will not work.\\n"; } Devel::PPPort::WriteFile(\$0); exit 0; } print <$0" or die "cannot strip $0: $!\n"; print OUT "$pl$c\n"; exit 0; } __DATA__ */ #ifndef _P_P_PORTABILITY_H_ #define _P_P_PORTABILITY_H_ #ifndef DPPP_NAMESPACE # define DPPP_NAMESPACE DPPP_ #endif #define DPPP_CAT2(x,y) CAT2(x,y) #define DPPP_(name) DPPP_CAT2(DPPP_NAMESPACE, name) #ifndef PERL_REVISION # if !defined(__PATCHLEVEL_H_INCLUDED__) && !(defined(PATCHLEVEL) && defined(SUBVERSION)) # define PERL_PATCHLEVEL_H_IMPLICIT # include <patchlevel.h> # endif # if !(defined(PERL_VERSION) || (defined(SUBVERSION) && defined(PATCHLEVEL))) # include <could_not_find_Perl_patchlevel.h> # endif # ifndef PERL_REVISION # define PERL_REVISION (5) /* Replace: 1 */ # define PERL_VERSION PATCHLEVEL # define PERL_SUBVERSION SUBVERSION /* Replace PERL_PATCHLEVEL with PERL_VERSION */ /* Replace: 0 */ # endif #endif #define _dpppDEC2BCD(dec) ((((dec)/100)<<8)|((((dec)%100)/10)<<4)|((dec)%10)) #define PERL_BCDVERSION ((_dpppDEC2BCD(PERL_REVISION)<<24)|(_dpppDEC2BCD(PERL_VERSION)<<12)|_dpppDEC2BCD(PERL_SUBVERSION)) /* It is very unlikely that anyone will try to use this with Perl 6 (or greater), but who knows. 
*/ #if PERL_REVISION != 5 # error ppport.h only works with Perl version 5 #endif /* PERL_REVISION != 5 */ #ifndef dTHR # define dTHR dNOOP #endif #ifndef dTHX # define dTHX dNOOP #endif #ifndef dTHXa # define dTHXa(x) dNOOP #endif #ifndef pTHX # define pTHX void #endif #ifndef pTHX_ # define pTHX_ #endif #ifndef aTHX # define aTHX #endif #ifndef aTHX_ # define aTHX_ #endif #if (PERL_BCDVERSION < 0x5006000) # ifdef USE_THREADS # define aTHXR thr # define aTHXR_ thr, # else # define aTHXR # define aTHXR_ # endif # define dTHXR dTHR #else # define aTHXR aTHX # define aTHXR_ aTHX_ # define dTHXR dTHX #endif #ifndef dTHXoa # define dTHXoa(x) dTHXa(x) #endif #ifdef I_LIMITS # include <limits.h> #endif #ifndef PERL_UCHAR_MIN # define PERL_UCHAR_MIN ((unsigned char)0) #endif #ifndef PERL_UCHAR_MAX # ifdef UCHAR_MAX # define PERL_UCHAR_MAX ((unsigned char)UCHAR_MAX) # else # ifdef MAXUCHAR # define PERL_UCHAR_MAX ((unsigned char)MAXUCHAR) # else # define PERL_UCHAR_MAX ((unsigned char)~(unsigned)0) # endif # endif #endif #ifndef PERL_USHORT_MIN # define PERL_USHORT_MIN ((unsigned short)0) #endif #ifndef PERL_USHORT_MAX # ifdef USHORT_MAX # define PERL_USHORT_MAX ((unsigned short)USHORT_MAX) # else # ifdef MAXUSHORT # define PERL_USHORT_MAX ((unsigned short)MAXUSHORT) # else # ifdef USHRT_MAX # define PERL_USHORT_MAX ((unsigned short)USHRT_MAX) # else # define PERL_USHORT_MAX ((unsigned short)~(unsigned)0) # endif # endif # endif #endif #ifndef PERL_SHORT_MAX # ifdef SHORT_MAX # define PERL_SHORT_MAX ((short)SHORT_MAX) # else # ifdef MAXSHORT /* Often used in <values.h> */ # define PERL_SHORT_MAX ((short)MAXSHORT) # else # ifdef SHRT_MAX # define PERL_SHORT_MAX ((short)SHRT_MAX) # else # define PERL_SHORT_MAX ((short) (PERL_USHORT_MAX >> 1)) # endif # endif # endif #endif #ifndef PERL_SHORT_MIN # ifdef SHORT_MIN # define PERL_SHORT_MIN ((short)SHORT_MIN) # else # ifdef MINSHORT # define PERL_SHORT_MIN ((short)MINSHORT) # else # ifdef SHRT_MIN # define PERL_SHORT_MIN ((short)SHRT_MIN) # else # 
define PERL_SHORT_MIN (-PERL_SHORT_MAX - ((3 & -1) == 3)) # endif # endif # endif #endif #ifndef PERL_UINT_MAX # ifdef UINT_MAX # define PERL_UINT_MAX ((unsigned int)UINT_MAX) # else # ifdef MAXUINT # define PERL_UINT_MAX ((unsigned int)MAXUINT) # else # define PERL_UINT_MAX (~(unsigned int)0) # endif # endif #endif #ifndef PERL_UINT_MIN # define PERL_UINT_MIN ((unsigned int)0) #endif #ifndef PERL_INT_MAX # ifdef INT_MAX # define PERL_INT_MAX ((int)INT_MAX) # else # ifdef MAXINT /* Often used in <values.h> */ # define PERL_INT_MAX ((int)MAXINT) # else # define PERL_INT_MAX ((int)(PERL_UINT_MAX >> 1)) # endif # endif #endif #ifndef PERL_INT_MIN # ifdef INT_MIN # define PERL_INT_MIN ((int)INT_MIN) # else # ifdef MININT # define PERL_INT_MIN ((int)MININT) # else # define PERL_INT_MIN (-PERL_INT_MAX - ((3 & -1) == 3)) # endif # endif #endif #ifndef PERL_ULONG_MAX # ifdef ULONG_MAX # define PERL_ULONG_MAX ((unsigned long)ULONG_MAX) # else # ifdef MAXULONG # define PERL_ULONG_MAX ((unsigned long)MAXULONG) # else # define PERL_ULONG_MAX (~(unsigned long)0) # endif # endif #endif #ifndef PERL_ULONG_MIN # define PERL_ULONG_MIN ((unsigned long)0L) #endif #ifndef PERL_LONG_MAX # ifdef LONG_MAX # define PERL_LONG_MAX ((long)LONG_MAX) # else # ifdef MAXLONG # define PERL_LONG_MAX ((long)MAXLONG) # else # define PERL_LONG_MAX ((long) (PERL_ULONG_MAX >> 1)) # endif # endif #endif #ifndef PERL_LONG_MIN # ifdef LONG_MIN # define PERL_LONG_MIN ((long)LONG_MIN) # else # ifdef MINLONG # define PERL_LONG_MIN ((long)MINLONG) # else # define PERL_LONG_MIN (-PERL_LONG_MAX - ((3 & -1) == 3)) # endif # endif #endif #if defined(HAS_QUAD) && (defined(convex) || defined(uts)) # ifndef PERL_UQUAD_MAX # ifdef ULONGLONG_MAX # define PERL_UQUAD_MAX ((unsigned long long)ULONGLONG_MAX) # else # ifdef MAXULONGLONG # define PERL_UQUAD_MAX ((unsigned long long)MAXULONGLONG) # else # define PERL_UQUAD_MAX (~(unsigned long long)0) # endif # endif # endif # ifndef PERL_UQUAD_MIN # define PERL_UQUAD_MIN ((unsigned 
long long)0L) # endif # ifndef PERL_QUAD_MAX # ifdef LONGLONG_MAX # define PERL_QUAD_MAX ((long long)LONGLONG_MAX) # else # ifdef MAXLONGLONG # define PERL_QUAD_MAX ((long long)MAXLONGLONG) # else # define PERL_QUAD_MAX ((long long) (PERL_UQUAD_MAX >> 1)) # endif # endif # endif # ifndef PERL_QUAD_MIN # ifdef LONGLONG_MIN # define PERL_QUAD_MIN ((long long)LONGLONG_MIN) # else # ifdef MINLONGLONG # define PERL_QUAD_MIN ((long long)MINLONGLONG) # else # define PERL_QUAD_MIN (-PERL_QUAD_MAX - ((3 & -1) == 3)) # endif # endif # endif #endif /* This is based on code from 5.003 perl.h */ #ifdef HAS_QUAD # ifdef cray #ifndef IVTYPE # define IVTYPE int #endif #ifndef IV_MIN # define IV_MIN PERL_INT_MIN #endif #ifndef IV_MAX # define IV_MAX PERL_INT_MAX #endif #ifndef UV_MIN # define UV_MIN PERL_UINT_MIN #endif #ifndef UV_MAX # define UV_MAX PERL_UINT_MAX #endif # ifdef INTSIZE #ifndef IVSIZE # define IVSIZE INTSIZE #endif # endif # else # if defined(convex) || defined(uts) #ifndef IVTYPE # define IVTYPE long long #endif #ifndef IV_MIN # define IV_MIN PERL_QUAD_MIN #endif #ifndef IV_MAX # define IV_MAX PERL_QUAD_MAX #endif #ifndef UV_MIN # define UV_MIN PERL_UQUAD_MIN #endif #ifndef UV_MAX # define UV_MAX PERL_UQUAD_MAX #endif # ifdef LONGLONGSIZE #ifndef IVSIZE # define IVSIZE LONGLONGSIZE #endif # endif # else #ifndef IVTYPE # define IVTYPE long #endif #ifndef IV_MIN # define IV_MIN PERL_LONG_MIN #endif #ifndef IV_MAX # define IV_MAX PERL_LONG_MAX #endif #ifndef UV_MIN # define UV_MIN PERL_ULONG_MIN #endif #ifndef UV_MAX # define UV_MAX PERL_ULONG_MAX #endif # ifdef LONGSIZE #ifndef IVSIZE # define IVSIZE LONGSIZE #endif # endif # endif # endif #ifndef IVSIZE # define IVSIZE 8 #endif #ifndef PERL_QUAD_MIN # define PERL_QUAD_MIN IV_MIN #endif #ifndef PERL_QUAD_MAX # define PERL_QUAD_MAX IV_MAX #endif #ifndef PERL_UQUAD_MIN # define PERL_UQUAD_MIN UV_MIN #endif #ifndef PERL_UQUAD_MAX # define PERL_UQUAD_MAX UV_MAX #endif #else #ifndef IVTYPE # define IVTYPE long #endif 
#ifndef IV_MIN # define IV_MIN PERL_LONG_MIN #endif #ifndef IV_MAX # define IV_MAX PERL_LONG_MAX #endif #ifndef UV_MIN # define UV_MIN PERL_ULONG_MIN #endif #ifndef UV_MAX # define UV_MAX PERL_ULONG_MAX #endif #endif #ifndef IVSIZE # ifdef LONGSIZE # define IVSIZE LONGSIZE # else # define IVSIZE 4 /* A bold guess, but the best we can make. */ # endif #endif #ifndef UVTYPE # define UVTYPE unsigned IVTYPE #endif #ifndef UVSIZE # define UVSIZE IVSIZE #endif #ifndef sv_setuv # define sv_setuv(sv, uv) \ STMT_START { \ UV TeMpUv = uv; \ if (TeMpUv <= IV_MAX) \ sv_setiv(sv, TeMpUv); \ else \ sv_setnv(sv, (double)TeMpUv); \ } STMT_END #endif #ifndef newSVuv # define newSVuv(uv) ((uv) <= IV_MAX ? newSViv((IV)uv) : newSVnv((NV)uv)) #endif #ifndef sv_2uv # define sv_2uv(sv) ((PL_Sv = (sv)), (UV) (SvNOK(PL_Sv) ? SvNV(PL_Sv) : sv_2nv(PL_Sv))) #endif #ifndef SvUVX # define SvUVX(sv) ((UV)SvIVX(sv)) #endif #ifndef SvUVXx # define SvUVXx(sv) SvUVX(sv) #endif #ifndef SvUV # define SvUV(sv) (SvIOK(sv) ? SvUVX(sv) : sv_2uv(sv)) #endif #ifndef SvUVx # define SvUVx(sv) ((PL_Sv = (sv)), SvUV(PL_Sv)) #endif /* Hint: sv_uv * Always use the SvUVx() macro instead of sv_uv(). 
*/ #ifndef sv_uv # define sv_uv(sv) SvUVx(sv) #endif #if !defined(SvUOK) && defined(SvIOK_UV) # define SvUOK(sv) SvIOK_UV(sv) #endif #ifndef XST_mUV # define XST_mUV(i,v) (ST(i) = sv_2mortal(newSVuv(v)) ) #endif #ifndef XSRETURN_UV # define XSRETURN_UV(v) STMT_START { XST_mUV(0,v); XSRETURN(1); } STMT_END #endif #ifndef PUSHu # define PUSHu(u) STMT_START { sv_setuv(TARG, (UV)(u)); PUSHTARG; } STMT_END #endif #ifndef XPUSHu # define XPUSHu(u) STMT_START { sv_setuv(TARG, (UV)(u)); XPUSHTARG; } STMT_END #endif #ifdef HAS_MEMCMP #ifndef memNE # define memNE(s1,s2,l) (memcmp(s1,s2,l)) #endif #ifndef memEQ # define memEQ(s1,s2,l) (!memcmp(s1,s2,l)) #endif #else #ifndef memNE # define memNE(s1,s2,l) (bcmp(s1,s2,l)) #endif #ifndef memEQ # define memEQ(s1,s2,l) (!bcmp(s1,s2,l)) #endif #endif #ifndef MoveD # define MoveD(s,d,n,t) memmove((char*)(d),(char*)(s), (n) * sizeof(t)) #endif #ifndef CopyD # define CopyD(s,d,n,t) memcpy((char*)(d),(char*)(s), (n) * sizeof(t)) #endif #ifdef HAS_MEMSET #ifndef ZeroD # define ZeroD(d,n,t) memzero((char*)(d), (n) * sizeof(t)) #endif #else #ifndef ZeroD # define ZeroD(d,n,t) ((void)memzero((char*)(d), (n) * sizeof(t)), d) #endif #endif #ifndef PoisonWith # define PoisonWith(d,n,t,b) (void)memset((char*)(d), (U8)(b), (n) * sizeof(t)) #endif #ifndef PoisonNew # define PoisonNew(d,n,t) PoisonWith(d,n,t,0xAB) #endif #ifndef PoisonFree # define PoisonFree(d,n,t) PoisonWith(d,n,t,0xEF) #endif #ifndef Poison # define Poison(d,n,t) PoisonFree(d,n,t) #endif #ifndef Newx # define Newx(v,n,t) New(0,v,n,t) #endif #ifndef Newxc # define Newxc(v,n,t,c) Newc(0,v,n,t,c) #endif #ifndef Newxz # define Newxz(v,n,t) Newz(0,v,n,t) #endif #ifndef PERL_UNUSED_DECL # ifdef HASATTRIBUTE # if (defined(__GNUC__) && defined(__cplusplus)) || defined(__INTEL_COMPILER) # define PERL_UNUSED_DECL # else # define PERL_UNUSED_DECL __attribute__((unused)) # endif # else # define PERL_UNUSED_DECL # endif #endif #ifndef PERL_UNUSED_ARG # if defined(lint) && 
defined(S_SPLINT_S) /* www.splint.org */ # include # define PERL_UNUSED_ARG(x) NOTE(ARGUNUSED(x)) # else # define PERL_UNUSED_ARG(x) ((void)x) # endif #endif #ifndef PERL_UNUSED_VAR # define PERL_UNUSED_VAR(x) ((void)x) #endif #ifndef PERL_UNUSED_CONTEXT # ifdef USE_ITHREADS # define PERL_UNUSED_CONTEXT PERL_UNUSED_ARG(my_perl) # else # define PERL_UNUSED_CONTEXT # endif #endif #ifndef NOOP # define NOOP /*EMPTY*/(void)0 #endif #ifndef dNOOP # define dNOOP extern int /*@unused@*/ Perl___notused PERL_UNUSED_DECL #endif #ifndef NVTYPE # if defined(USE_LONG_DOUBLE) && defined(HAS_LONG_DOUBLE) # define NVTYPE long double # else # define NVTYPE double # endif typedef NVTYPE NV; #endif #ifndef INT2PTR # if (IVSIZE == PTRSIZE) && (UVSIZE == PTRSIZE) # define PTRV UV # define INT2PTR(any,d) (any)(d) # else # if PTRSIZE == LONGSIZE # define PTRV unsigned long # else # define PTRV unsigned # endif # define INT2PTR(any,d) (any)(PTRV)(d) # endif #endif #ifndef PTR2ul # if PTRSIZE == LONGSIZE # define PTR2ul(p) (unsigned long)(p) # else # define PTR2ul(p) INT2PTR(unsigned long,p) # endif #endif #ifndef PTR2nat # define PTR2nat(p) (PTRV)(p) #endif #ifndef NUM2PTR # define NUM2PTR(any,d) (any)PTR2nat(d) #endif #ifndef PTR2IV # define PTR2IV(p) INT2PTR(IV,p) #endif #ifndef PTR2UV # define PTR2UV(p) INT2PTR(UV,p) #endif #ifndef PTR2NV # define PTR2NV(p) NUM2PTR(NV,p) #endif #undef START_EXTERN_C #undef END_EXTERN_C #undef EXTERN_C #ifdef __cplusplus # define START_EXTERN_C extern "C" { # define END_EXTERN_C } # define EXTERN_C extern "C" #else # define START_EXTERN_C # define END_EXTERN_C # define EXTERN_C extern #endif #if defined(PERL_GCC_PEDANTIC) # ifndef PERL_GCC_BRACE_GROUPS_FORBIDDEN # define PERL_GCC_BRACE_GROUPS_FORBIDDEN # endif #endif #if defined(__GNUC__) && !defined(PERL_GCC_BRACE_GROUPS_FORBIDDEN) && !defined(__cplusplus) # ifndef PERL_USE_GCC_BRACE_GROUPS # define PERL_USE_GCC_BRACE_GROUPS # endif #endif #undef STMT_START #undef STMT_END #ifdef 
PERL_USE_GCC_BRACE_GROUPS # define STMT_START (void)( /* gcc supports ``({ STATEMENTS; })'' */ # define STMT_END ) #else # if defined(VOIDFLAGS) && (VOIDFLAGS) && (defined(sun) || defined(__sun__)) && !defined(__GNUC__) # define STMT_START if (1) # define STMT_END else (void)0 # else # define STMT_START do # define STMT_END while (0) # endif #endif #ifndef boolSV # define boolSV(b) ((b) ? &PL_sv_yes : &PL_sv_no) #endif /* DEFSV appears first in 5.004_56 */ #ifndef DEFSV # define DEFSV GvSV(PL_defgv) #endif #ifndef SAVE_DEFSV # define SAVE_DEFSV SAVESPTR(GvSV(PL_defgv)) #endif #ifndef DEFSV_set # define DEFSV_set(sv) (DEFSV = (sv)) #endif /* Older perls (<=5.003) lack AvFILLp */ #ifndef AvFILLp # define AvFILLp AvFILL #endif #ifndef ERRSV # define ERRSV get_sv("@",FALSE) #endif /* Hint: gv_stashpvn * This function's backport doesn't support the length parameter, but * rather ignores it. Portability can only be ensured if the length * parameter is used for speed reasons, but the length can always be * correctly computed from the string argument. 
*/ #ifndef gv_stashpvn # define gv_stashpvn(str,len,create) gv_stashpv(str,create) #endif /* Replace: 1 */ #ifndef get_cv # define get_cv perl_get_cv #endif #ifndef get_sv # define get_sv perl_get_sv #endif #ifndef get_av # define get_av perl_get_av #endif #ifndef get_hv # define get_hv perl_get_hv #endif /* Replace: 0 */ #ifndef dUNDERBAR # define dUNDERBAR dNOOP #endif #ifndef UNDERBAR # define UNDERBAR DEFSV #endif #ifndef dAX # define dAX I32 ax = MARK - PL_stack_base + 1 #endif #ifndef dITEMS # define dITEMS I32 items = SP - MARK #endif #ifndef dXSTARG # define dXSTARG SV * targ = sv_newmortal() #endif #ifndef dAXMARK # define dAXMARK I32 ax = POPMARK; \ register SV ** const mark = PL_stack_base + ax++ #endif #ifndef XSprePUSH # define XSprePUSH (sp = PL_stack_base + ax - 1) #endif #if (PERL_BCDVERSION < 0x5005000) # undef XSRETURN # define XSRETURN(off) \ STMT_START { \ PL_stack_sp = PL_stack_base + ax + ((off) - 1); \ return; \ } STMT_END #endif #ifndef XSPROTO # define XSPROTO(name) void name(pTHX_ CV* cv) #endif #ifndef SVfARG # define SVfARG(p) ((void*)(p)) #endif #ifndef PERL_ABS # define PERL_ABS(x) ((x) < 0 ? 
-(x) : (x)) #endif #ifndef dVAR # define dVAR dNOOP #endif #ifndef SVf # define SVf "_" #endif #ifndef UTF8_MAXBYTES # define UTF8_MAXBYTES UTF8_MAXLEN #endif #ifndef CPERLscope # define CPERLscope(x) x #endif #ifndef PERL_HASH # define PERL_HASH(hash,str,len) \ STMT_START { \ const char *s_PeRlHaSh = str; \ I32 i_PeRlHaSh = len; \ U32 hash_PeRlHaSh = 0; \ while (i_PeRlHaSh--) \ hash_PeRlHaSh = hash_PeRlHaSh * 33 + *s_PeRlHaSh++; \ (hash) = hash_PeRlHaSh; \ } STMT_END #endif #ifndef PERLIO_FUNCS_DECL # ifdef PERLIO_FUNCS_CONST # define PERLIO_FUNCS_DECL(funcs) const PerlIO_funcs funcs # define PERLIO_FUNCS_CAST(funcs) (PerlIO_funcs*)(funcs) # else # define PERLIO_FUNCS_DECL(funcs) PerlIO_funcs funcs # define PERLIO_FUNCS_CAST(funcs) (funcs) # endif #endif /* provide these typedefs for older perls */ #if (PERL_BCDVERSION < 0x5009003) # ifdef ARGSproto typedef OP* (CPERLscope(*Perl_ppaddr_t))(ARGSproto); # else typedef OP* (CPERLscope(*Perl_ppaddr_t))(pTHX); # endif typedef OP* (CPERLscope(*Perl_check_t)) (pTHX_ OP*); #endif #ifndef isPSXSPC # define isPSXSPC(c) (isSPACE(c) || (c) == '\v') #endif #ifndef isBLANK # define isBLANK(c) ((c) == ' ' || (c) == '\t') #endif #ifdef EBCDIC #ifndef isALNUMC # define isALNUMC(c) isalnum(c) #endif #ifndef isASCII # define isASCII(c) isascii(c) #endif #ifndef isCNTRL # define isCNTRL(c) iscntrl(c) #endif #ifndef isGRAPH # define isGRAPH(c) isgraph(c) #endif #ifndef isPRINT # define isPRINT(c) isprint(c) #endif #ifndef isPUNCT # define isPUNCT(c) ispunct(c) #endif #ifndef isXDIGIT # define isXDIGIT(c) isxdigit(c) #endif #else # if (PERL_BCDVERSION < 0x5010000) /* Hint: isPRINT * The implementation in older perl versions includes all of the * isSPACE() characters, which is wrong. The version provided by * Devel::PPPort always overrides a present buggy version. 
*/ # undef isPRINT # endif #ifndef isALNUMC # define isALNUMC(c) (isALPHA(c) || isDIGIT(c)) #endif #ifndef isASCII # define isASCII(c) ((c) <= 127) #endif #ifndef isCNTRL # define isCNTRL(c) ((c) < ' ' || (c) == 127) #endif #ifndef isGRAPH # define isGRAPH(c) (isALNUM(c) || isPUNCT(c)) #endif #ifndef isPRINT # define isPRINT(c) (((c) >= 32 && (c) < 127)) #endif #ifndef isPUNCT # define isPUNCT(c) (((c) >= 33 && (c) <= 47) || ((c) >= 58 && (c) <= 64) || ((c) >= 91 && (c) <= 96) || ((c) >= 123 && (c) <= 126)) #endif #ifndef isXDIGIT # define isXDIGIT(c) (isDIGIT(c) || ((c) >= 'a' && (c) <= 'f') || ((c) >= 'A' && (c) <= 'F')) #endif #endif #ifndef PERL_SIGNALS_UNSAFE_FLAG #define PERL_SIGNALS_UNSAFE_FLAG 0x0001 #if (PERL_BCDVERSION < 0x5008000) # define D_PPP_PERL_SIGNALS_INIT PERL_SIGNALS_UNSAFE_FLAG #else # define D_PPP_PERL_SIGNALS_INIT 0 #endif #if defined(NEED_PL_signals) static U32 DPPP_(my_PL_signals) = D_PPP_PERL_SIGNALS_INIT; #elif defined(NEED_PL_signals_GLOBAL) U32 DPPP_(my_PL_signals) = D_PPP_PERL_SIGNALS_INIT; #else extern U32 DPPP_(my_PL_signals); #endif #define PL_signals DPPP_(my_PL_signals) #endif /* Hint: PL_ppaddr * Calling an op via PL_ppaddr requires passing a context argument * for threaded builds. Since the context argument is different for * 5.005 perls, you can use aTHXR (supplied by ppport.h), which will * automatically be defined as the correct argument. 
#if (PERL_BCDVERSION <= 0x5005005)
/* Replace: 1 */
# define PL_ppaddr ppaddr
# define PL_no_modify no_modify
/* Replace: 0 */
#endif

#if (PERL_BCDVERSION <= 0x5004005)
/* Replace: 1 */
# define PL_DBsignal DBsignal
# define PL_DBsingle DBsingle
# define PL_DBsub DBsub
# define PL_DBtrace DBtrace
# define PL_Sv Sv
# define PL_bufend bufend
# define PL_bufptr bufptr
# define PL_compiling compiling
# define PL_copline copline
# define PL_curcop curcop
# define PL_curstash curstash
# define PL_debstash debstash
# define PL_defgv defgv
# define PL_diehook diehook
# define PL_dirty dirty
# define PL_dowarn dowarn
# define PL_errgv errgv
# define PL_error_count error_count
# define PL_expect expect
# define PL_hexdigit hexdigit
# define PL_hints hints
# define PL_in_my in_my
# define PL_laststatval laststatval
# define PL_lex_state lex_state
# define PL_lex_stuff lex_stuff
# define PL_linestr linestr
# define PL_na na
# define PL_perl_destruct_level perl_destruct_level
# define PL_perldb perldb
# define PL_rsfp_filters rsfp_filters
# define PL_rsfp rsfp
# define PL_stack_base stack_base
# define PL_stack_sp stack_sp
# define PL_statcache statcache
# define PL_stdingv stdingv
# define PL_sv_arenaroot sv_arenaroot
# define PL_sv_no sv_no
# define PL_sv_undef sv_undef
# define PL_sv_yes sv_yes
# define PL_tainted tainted
# define PL_tainting tainting
# define PL_tokenbuf tokenbuf
/* Replace: 0 */
#endif

/* Warning: PL_parser
 * For perl versions earlier than 5.9.5, this is an always
 * non-NULL dummy. Also, it cannot be dereferenced. Don't
 * use it if you can avoid it, and unless you absolutely know
 * what you're doing.
 * If you always check that PL_parser is non-NULL, you can
 * define DPPP_PL_parser_NO_DUMMY to avoid the creation of
 * a dummy parser structure.
 */

#if (PERL_BCDVERSION >= 0x5009005)
# ifdef DPPP_PL_parser_NO_DUMMY
#  define D_PPP_my_PL_parser_var(var) ((PL_parser ? PL_parser : \
     (croak("panic: PL_parser == NULL in %s:%d", \
            __FILE__, __LINE__), (yy_parser *) NULL))->var)
# else
#  ifdef DPPP_PL_parser_NO_DUMMY_WARNING
#   define D_PPP_parser_dummy_warning(var)
#  else
#   define D_PPP_parser_dummy_warning(var) \
      warn("warning: dummy PL_" #var " used in %s:%d", __FILE__, __LINE__),
#  endif
#  define D_PPP_my_PL_parser_var(var) ((PL_parser ? PL_parser : \
     (D_PPP_parser_dummy_warning(var) &DPPP_(dummy_PL_parser)))->var)

#if defined(NEED_PL_parser)
static yy_parser DPPP_(dummy_PL_parser);
#elif defined(NEED_PL_parser_GLOBAL)
yy_parser DPPP_(dummy_PL_parser);
#else
extern yy_parser DPPP_(dummy_PL_parser);
#endif

# endif

/* PL_expect, PL_copline, PL_rsfp, PL_rsfp_filters, PL_linestr, PL_bufptr,
   PL_bufend, PL_lex_state, PL_lex_stuff, PL_tokenbuf depends on PL_parser */
/* Warning: PL_expect, PL_copline, PL_rsfp, PL_rsfp_filters, PL_linestr,
 *          PL_bufptr, PL_bufend, PL_lex_state, PL_lex_stuff, PL_tokenbuf
 * Do not use this variable unless you know exactly what you're
 * doing. It is internal to the perl parser and may change or even
 * be removed in the future. As of perl 5.9.5, you have to check
 * for (PL_parser != NULL) for this variable to have any effect.
 * An always non-NULL PL_parser dummy is provided for earlier
 * perl versions.
 * If PL_parser is NULL when you try to access this variable, a
 * dummy is being accessed instead and a warning is issued unless
 * you define DPPP_PL_parser_NO_DUMMY_WARNING.
 * If DPPP_PL_parser_NO_DUMMY is defined, the code trying to access
 * this variable will croak with a panic message.
*/ # define PL_expect D_PPP_my_PL_parser_var(expect) # define PL_copline D_PPP_my_PL_parser_var(copline) # define PL_rsfp D_PPP_my_PL_parser_var(rsfp) # define PL_rsfp_filters D_PPP_my_PL_parser_var(rsfp_filters) # define PL_linestr D_PPP_my_PL_parser_var(linestr) # define PL_bufptr D_PPP_my_PL_parser_var(bufptr) # define PL_bufend D_PPP_my_PL_parser_var(bufend) # define PL_lex_state D_PPP_my_PL_parser_var(lex_state) # define PL_lex_stuff D_PPP_my_PL_parser_var(lex_stuff) # define PL_tokenbuf D_PPP_my_PL_parser_var(tokenbuf) # define PL_in_my D_PPP_my_PL_parser_var(in_my) # define PL_in_my_stash D_PPP_my_PL_parser_var(in_my_stash) # define PL_error_count D_PPP_my_PL_parser_var(error_count) #else /* ensure that PL_parser != NULL and cannot be dereferenced */ # define PL_parser ((void *) 1) #endif #ifndef mPUSHs # define mPUSHs(s) PUSHs(sv_2mortal(s)) #endif #ifndef PUSHmortal # define PUSHmortal PUSHs(sv_newmortal()) #endif #ifndef mPUSHp # define mPUSHp(p,l) sv_setpvn(PUSHmortal, (p), (l)) #endif #ifndef mPUSHn # define mPUSHn(n) sv_setnv(PUSHmortal, (NV)(n)) #endif #ifndef mPUSHi # define mPUSHi(i) sv_setiv(PUSHmortal, (IV)(i)) #endif #ifndef mPUSHu # define mPUSHu(u) sv_setuv(PUSHmortal, (UV)(u)) #endif #ifndef mXPUSHs # define mXPUSHs(s) XPUSHs(sv_2mortal(s)) #endif #ifndef XPUSHmortal # define XPUSHmortal XPUSHs(sv_newmortal()) #endif #ifndef mXPUSHp # define mXPUSHp(p,l) STMT_START { EXTEND(sp,1); sv_setpvn(PUSHmortal, (p), (l)); } STMT_END #endif #ifndef mXPUSHn # define mXPUSHn(n) STMT_START { EXTEND(sp,1); sv_setnv(PUSHmortal, (NV)(n)); } STMT_END #endif #ifndef mXPUSHi # define mXPUSHi(i) STMT_START { EXTEND(sp,1); sv_setiv(PUSHmortal, (IV)(i)); } STMT_END #endif #ifndef mXPUSHu # define mXPUSHu(u) STMT_START { EXTEND(sp,1); sv_setuv(PUSHmortal, (UV)(u)); } STMT_END #endif /* Replace: 1 */ #ifndef call_sv # define call_sv perl_call_sv #endif #ifndef call_pv # define call_pv perl_call_pv #endif #ifndef call_argv # define call_argv perl_call_argv #endif 
#ifndef call_method # define call_method perl_call_method #endif #ifndef eval_sv # define eval_sv perl_eval_sv #endif /* Replace: 0 */ #ifndef PERL_LOADMOD_DENY # define PERL_LOADMOD_DENY 0x1 #endif #ifndef PERL_LOADMOD_NOIMPORT # define PERL_LOADMOD_NOIMPORT 0x2 #endif #ifndef PERL_LOADMOD_IMPORT_OPS # define PERL_LOADMOD_IMPORT_OPS 0x4 #endif #ifndef G_METHOD # define G_METHOD 64 # ifdef call_sv # undef call_sv # endif # if (PERL_BCDVERSION < 0x5006000) # define call_sv(sv, flags) ((flags) & G_METHOD ? perl_call_method((char *) SvPV_nolen_const(sv), \ (flags) & ~G_METHOD) : perl_call_sv(sv, flags)) # else # define call_sv(sv, flags) ((flags) & G_METHOD ? Perl_call_method(aTHX_ (char *) SvPV_nolen_const(sv), \ (flags) & ~G_METHOD) : Perl_call_sv(aTHX_ sv, flags)) # endif #endif /* Replace perl_eval_pv with eval_pv */ #ifndef eval_pv #if defined(NEED_eval_pv) static SV* DPPP_(my_eval_pv)(char *p, I32 croak_on_error); static #else extern SV* DPPP_(my_eval_pv)(char *p, I32 croak_on_error); #endif #ifdef eval_pv # undef eval_pv #endif #define eval_pv(a,b) DPPP_(my_eval_pv)(aTHX_ a,b) #define Perl_eval_pv DPPP_(my_eval_pv) #if defined(NEED_eval_pv) || defined(NEED_eval_pv_GLOBAL) SV* DPPP_(my_eval_pv)(char *p, I32 croak_on_error) { dSP; SV* sv = newSVpv(p, 0); PUSHMARK(sp); eval_sv(sv, G_SCALAR); SvREFCNT_dec(sv); SPAGAIN; sv = POPs; PUTBACK; if (croak_on_error && SvTRUE(GvSV(errgv))) croak(SvPVx(GvSV(errgv), na)); return sv; } #endif #endif #ifndef vload_module #if defined(NEED_vload_module) static void DPPP_(my_vload_module)(U32 flags, SV *name, SV *ver, va_list *args); static #else extern void DPPP_(my_vload_module)(U32 flags, SV *name, SV *ver, va_list *args); #endif #ifdef vload_module # undef vload_module #endif #define vload_module(a,b,c,d) DPPP_(my_vload_module)(aTHX_ a,b,c,d) #define Perl_vload_module DPPP_(my_vload_module) #if defined(NEED_vload_module) || defined(NEED_vload_module_GLOBAL) void DPPP_(my_vload_module)(U32 flags, SV *name, SV *ver, va_list 
*args)
{
    dTHR;
    dVAR;
    OP *veop, *imop;
    OP * const modname = newSVOP(OP_CONST, 0, name);
    /* 5.005 has a somewhat hacky force_normal that
       doesn't croak on SvREADONLY() if PL_compiling is true.
       Current perls take care in ck_require() to correctly
       turn off SvREADONLY before calling force_normal_flags().
       This seems a better fix than fudging PL_compiling */
    SvREADONLY_off(((SVOP*)modname)->op_sv);
    modname->op_private |= OPpCONST_BARE;
    if (ver) {
        veop = newSVOP(OP_CONST, 0, ver);
    }
    else
        veop = NULL;
    if (flags & PERL_LOADMOD_NOIMPORT) {
        imop = sawparens(newNULLLIST());
    }
    else if (flags & PERL_LOADMOD_IMPORT_OPS) {
        imop = va_arg(*args, OP*);
    }
    else {
        SV *sv;
        imop = NULL;
        sv = va_arg(*args, SV*);
        while (sv) {
            imop = append_elem(OP_LIST, imop, newSVOP(OP_CONST, 0, sv));
            sv = va_arg(*args, SV*);
        }
    }
    {
        const line_t ocopline = PL_copline;
        COP * const ocurcop = PL_curcop;
        const int oexpect = PL_expect;
#if (PERL_BCDVERSION >= 0x5004000)
        utilize(!(flags & PERL_LOADMOD_DENY), start_subparse(FALSE, 0),
                veop, modname, imop);
#else
        utilize(!(flags & PERL_LOADMOD_DENY), start_subparse(),
                modname, imop);
#endif
        PL_expect = oexpect;
        PL_copline = ocopline;
        PL_curcop = ocurcop;
    }
}
#endif
#endif

#ifndef load_module
#if defined(NEED_load_module)
static void DPPP_(my_load_module)(U32 flags, SV *name, SV *ver, ...);
static
#else
extern void DPPP_(my_load_module)(U32 flags, SV *name, SV *ver, ...);
#endif

#ifdef load_module
# undef load_module
#endif
#define load_module DPPP_(my_load_module)
#define Perl_load_module DPPP_(my_load_module)

#if defined(NEED_load_module) || defined(NEED_load_module_GLOBAL)

void
DPPP_(my_load_module)(U32 flags, SV *name, SV *ver, ...)
{ va_list args; va_start(args, ver); vload_module(flags, name, ver, &args); va_end(args); } #endif #endif #ifndef newRV_inc # define newRV_inc(sv) newRV(sv) /* Replace */ #endif #ifndef newRV_noinc #if defined(NEED_newRV_noinc) static SV * DPPP_(my_newRV_noinc)(SV *sv); static #else extern SV * DPPP_(my_newRV_noinc)(SV *sv); #endif #ifdef newRV_noinc # undef newRV_noinc #endif #define newRV_noinc(a) DPPP_(my_newRV_noinc)(aTHX_ a) #define Perl_newRV_noinc DPPP_(my_newRV_noinc) #if defined(NEED_newRV_noinc) || defined(NEED_newRV_noinc_GLOBAL) SV * DPPP_(my_newRV_noinc)(SV *sv) { SV *rv = (SV *)newRV(sv); SvREFCNT_dec(sv); return rv; } #endif #endif /* Hint: newCONSTSUB * Returns a CV* as of perl-5.7.1. This return value is not supported * by Devel::PPPort. */ /* newCONSTSUB from IO.xs is in the core starting with 5.004_63 */ #if (PERL_BCDVERSION < 0x5004063) && (PERL_BCDVERSION != 0x5004005) #if defined(NEED_newCONSTSUB) static void DPPP_(my_newCONSTSUB)(HV *stash, const char *name, SV *sv); static #else extern void DPPP_(my_newCONSTSUB)(HV *stash, const char *name, SV *sv); #endif #ifdef newCONSTSUB # undef newCONSTSUB #endif #define newCONSTSUB(a,b,c) DPPP_(my_newCONSTSUB)(aTHX_ a,b,c) #define Perl_newCONSTSUB DPPP_(my_newCONSTSUB) #if defined(NEED_newCONSTSUB) || defined(NEED_newCONSTSUB_GLOBAL) /* This is just a trick to avoid a dependency of newCONSTSUB on PL_parser */ /* (There's no PL_parser in perl < 5.005, so this is completely safe) */ #define D_PPP_PL_copline PL_copline void DPPP_(my_newCONSTSUB)(HV *stash, const char *name, SV *sv) { U32 oldhints = PL_hints; HV *old_cop_stash = PL_curcop->cop_stash; HV *old_curstash = PL_curstash; line_t oldline = PL_curcop->cop_line; PL_curcop->cop_line = D_PPP_PL_copline; PL_hints &= ~HINT_BLOCK_SCOPE; if (stash) PL_curstash = PL_curcop->cop_stash = stash; newSUB( #if (PERL_BCDVERSION < 0x5003022) start_subparse(), #elif (PERL_BCDVERSION == 0x5003022) start_subparse(0), #else /* 5.003_23 onwards */ start_subparse(FALSE, 
0), #endif newSVOP(OP_CONST, 0, newSVpv((char *) name, 0)), newSVOP(OP_CONST, 0, &PL_sv_no), /* SvPV(&PL_sv_no) == "" -- GMB */ newSTATEOP(0, Nullch, newSVOP(OP_CONST, 0, sv)) ); PL_hints = oldhints; PL_curcop->cop_stash = old_cop_stash; PL_curstash = old_curstash; PL_curcop->cop_line = oldline; } #endif #endif /* * Boilerplate macros for initializing and accessing interpreter-local * data from C. All statics in extensions should be reworked to use * this, if you want to make the extension thread-safe. See ext/re/re.xs * for an example of the use of these macros. * * Code that uses these macros is responsible for the following: * 1. #define MY_CXT_KEY to a unique string, e.g. "DynaLoader_guts" * 2. Declare a typedef named my_cxt_t that is a structure that contains * all the data that needs to be interpreter-local. * 3. Use the START_MY_CXT macro after the declaration of my_cxt_t. * 4. Use the MY_CXT_INIT macro such that it is called exactly once * (typically put in the BOOT: section). * 5. Use the members of the my_cxt_t structure everywhere as * MY_CXT.member. * 6. Use the dMY_CXT macro (a declaration) in all the functions that * access MY_CXT. */ #if defined(MULTIPLICITY) || defined(PERL_OBJECT) || \ defined(PERL_CAPI) || defined(PERL_IMPLICIT_CONTEXT) #ifndef START_MY_CXT /* This must appear in all extensions that define a my_cxt_t structure, * right after the definition (i.e. at file scope). The non-threads * case below uses it to declare the data as static. */ #define START_MY_CXT #if (PERL_BCDVERSION < 0x5004068) /* Fetches the SV that keeps the per-interpreter data. */ #define dMY_CXT_SV \ SV *my_cxt_sv = get_sv(MY_CXT_KEY, FALSE) #else /* >= perl5.004_68 */ #define dMY_CXT_SV \ SV *my_cxt_sv = *hv_fetch(PL_modglobal, MY_CXT_KEY, \ sizeof(MY_CXT_KEY)-1, TRUE) #endif /* < perl5.004_68 */ /* This declaration should be used within all functions that use the * interpreter-local data. 
*/
#define dMY_CXT \
    dMY_CXT_SV; \
    my_cxt_t *my_cxtp = INT2PTR(my_cxt_t*,SvUV(my_cxt_sv))

/* Creates and zeroes the per-interpreter data.
 * (We allocate my_cxtp in a Perl SV so that it will be released when
 * the interpreter goes away.) */
#define MY_CXT_INIT \
    dMY_CXT_SV; \
    /* newSV() allocates one more than needed */ \
    my_cxt_t *my_cxtp = (my_cxt_t*)SvPVX(newSV(sizeof(my_cxt_t)-1));\
    Zero(my_cxtp, 1, my_cxt_t); \
    sv_setuv(my_cxt_sv, PTR2UV(my_cxtp))

/* This macro must be used to access members of the my_cxt_t structure.
 * e.g. MY_CXT.some_data */
#define MY_CXT (*my_cxtp)

/* Judicious use of these macros can reduce the number of times dMY_CXT
 * is used. Use is similar to pTHX, aTHX etc. */
#define pMY_CXT my_cxt_t *my_cxtp
#define pMY_CXT_ pMY_CXT,
#define _pMY_CXT ,pMY_CXT
#define aMY_CXT my_cxtp
#define aMY_CXT_ aMY_CXT,
#define _aMY_CXT ,aMY_CXT

#endif /* START_MY_CXT */

#ifndef MY_CXT_CLONE
/* Clones the per-interpreter data. */
#define MY_CXT_CLONE \
    dMY_CXT_SV; \
    my_cxt_t *my_cxtp = (my_cxt_t*)SvPVX(newSV(sizeof(my_cxt_t)-1));\
    Copy(INT2PTR(my_cxt_t*, SvUV(my_cxt_sv)), my_cxtp, 1, my_cxt_t);\
    sv_setuv(my_cxt_sv, PTR2UV(my_cxtp))
#endif

#else /* single interpreter */

#ifndef START_MY_CXT
#define START_MY_CXT static my_cxt_t my_cxt;
#define dMY_CXT_SV dNOOP
#define dMY_CXT dNOOP
#define MY_CXT_INIT NOOP
#define MY_CXT my_cxt
#define pMY_CXT void
#define pMY_CXT_
#define _pMY_CXT
#define aMY_CXT
#define aMY_CXT_
#define _aMY_CXT
#endif /* START_MY_CXT */

#ifndef MY_CXT_CLONE
#define MY_CXT_CLONE NOOP
#endif

#endif

#ifndef IVdf
# if IVSIZE == LONGSIZE
#  define IVdf "ld"
#  define UVuf "lu"
#  define UVof "lo"
#  define UVxf "lx"
#  define UVXf "lX"
# else
#  if IVSIZE == INTSIZE
#   define IVdf "d"
#   define UVuf "u"
#   define UVof "o"
#   define UVxf "x"
#   define UVXf "X"
#  endif
# endif
#endif

#ifndef NVef
# if defined(USE_LONG_DOUBLE) && defined(HAS_LONG_DOUBLE) && \
     defined(PERL_PRIfldbl) && (PERL_BCDVERSION != 0x5006000)
    /* Not very likely, but let's try anyway.
*/ # define NVef PERL_PRIeldbl # define NVff PERL_PRIfldbl # define NVgf PERL_PRIgldbl # else # define NVef "e" # define NVff "f" # define NVgf "g" # endif #endif #ifndef SvREFCNT_inc # ifdef PERL_USE_GCC_BRACE_GROUPS # define SvREFCNT_inc(sv) \ ({ \ SV * const _sv = (SV*)(sv); \ if (_sv) \ (SvREFCNT(_sv))++; \ _sv; \ }) # else # define SvREFCNT_inc(sv) \ ((PL_Sv=(SV*)(sv)) ? (++(SvREFCNT(PL_Sv)),PL_Sv) : NULL) # endif #endif #ifndef SvREFCNT_inc_simple # ifdef PERL_USE_GCC_BRACE_GROUPS # define SvREFCNT_inc_simple(sv) \ ({ \ if (sv) \ (SvREFCNT(sv))++; \ (SV *)(sv); \ }) # else # define SvREFCNT_inc_simple(sv) \ ((sv) ? (SvREFCNT(sv)++,(SV*)(sv)) : NULL) # endif #endif #ifndef SvREFCNT_inc_NN # ifdef PERL_USE_GCC_BRACE_GROUPS # define SvREFCNT_inc_NN(sv) \ ({ \ SV * const _sv = (SV*)(sv); \ SvREFCNT(_sv)++; \ _sv; \ }) # else # define SvREFCNT_inc_NN(sv) \ (PL_Sv=(SV*)(sv),++(SvREFCNT(PL_Sv)),PL_Sv) # endif #endif #ifndef SvREFCNT_inc_void # ifdef PERL_USE_GCC_BRACE_GROUPS # define SvREFCNT_inc_void(sv) \ ({ \ SV * const _sv = (SV*)(sv); \ if (_sv) \ (void)(SvREFCNT(_sv)++); \ }) # else # define SvREFCNT_inc_void(sv) \ (void)((PL_Sv=(SV*)(sv)) ? 
++(SvREFCNT(PL_Sv)) : 0) # endif #endif #ifndef SvREFCNT_inc_simple_void # define SvREFCNT_inc_simple_void(sv) STMT_START { if (sv) SvREFCNT(sv)++; } STMT_END #endif #ifndef SvREFCNT_inc_simple_NN # define SvREFCNT_inc_simple_NN(sv) (++SvREFCNT(sv), (SV*)(sv)) #endif #ifndef SvREFCNT_inc_void_NN # define SvREFCNT_inc_void_NN(sv) (void)(++SvREFCNT((SV*)(sv))) #endif #ifndef SvREFCNT_inc_simple_void_NN # define SvREFCNT_inc_simple_void_NN(sv) (void)(++SvREFCNT((SV*)(sv))) #endif #ifndef newSV_type #if defined(NEED_newSV_type) static SV* DPPP_(my_newSV_type)(pTHX_ svtype const t); static #else extern SV* DPPP_(my_newSV_type)(pTHX_ svtype const t); #endif #ifdef newSV_type # undef newSV_type #endif #define newSV_type(a) DPPP_(my_newSV_type)(aTHX_ a) #define Perl_newSV_type DPPP_(my_newSV_type) #if defined(NEED_newSV_type) || defined(NEED_newSV_type_GLOBAL) SV* DPPP_(my_newSV_type)(pTHX_ svtype const t) { SV* const sv = newSV(0); sv_upgrade(sv, t); return sv; } #endif #endif #if (PERL_BCDVERSION < 0x5006000) # define D_PPP_CONSTPV_ARG(x) ((char *) (x)) #else # define D_PPP_CONSTPV_ARG(x) (x) #endif #ifndef newSVpvn # define newSVpvn(data,len) ((data) \ ? ((len) ? newSVpv((data), (len)) : newSVpv("", 0)) \ : newSV(0)) #endif #ifndef newSVpvn_utf8 # define newSVpvn_utf8(s, len, u) newSVpvn_flags((s), (len), (u) ? 
SVf_UTF8 : 0) #endif #ifndef SVf_UTF8 # define SVf_UTF8 0 #endif #ifndef newSVpvn_flags #if defined(NEED_newSVpvn_flags) static SV * DPPP_(my_newSVpvn_flags)(pTHX_ const char *s, STRLEN len, U32 flags); static #else extern SV * DPPP_(my_newSVpvn_flags)(pTHX_ const char *s, STRLEN len, U32 flags); #endif #ifdef newSVpvn_flags # undef newSVpvn_flags #endif #define newSVpvn_flags(a,b,c) DPPP_(my_newSVpvn_flags)(aTHX_ a,b,c) #define Perl_newSVpvn_flags DPPP_(my_newSVpvn_flags) #if defined(NEED_newSVpvn_flags) || defined(NEED_newSVpvn_flags_GLOBAL) SV * DPPP_(my_newSVpvn_flags)(pTHX_ const char *s, STRLEN len, U32 flags) { SV *sv = newSVpvn(D_PPP_CONSTPV_ARG(s), len); SvFLAGS(sv) |= (flags & SVf_UTF8); return (flags & SVs_TEMP) ? sv_2mortal(sv) : sv; } #endif #endif /* Backwards compatibility stuff... :-( */ #if !defined(NEED_sv_2pv_flags) && defined(NEED_sv_2pv_nolen) # define NEED_sv_2pv_flags #endif #if !defined(NEED_sv_2pv_flags_GLOBAL) && defined(NEED_sv_2pv_nolen_GLOBAL) # define NEED_sv_2pv_flags_GLOBAL #endif /* Hint: sv_2pv_nolen * Use the SvPV_nolen() or SvPV_nolen_const() macros instead of sv_2pv_nolen(). */ #ifndef sv_2pv_nolen # define sv_2pv_nolen(sv) SvPV_nolen(sv) #endif #ifdef SvPVbyte /* Hint: SvPVbyte * Does not work in perl-5.6.1, ppport.h implements a version * borrowed from perl-5.7.3. */ #if (PERL_BCDVERSION < 0x5007000) #if defined(NEED_sv_2pvbyte) static char * DPPP_(my_sv_2pvbyte)(pTHX_ SV *sv, STRLEN *lp); static #else extern char * DPPP_(my_sv_2pvbyte)(pTHX_ SV *sv, STRLEN *lp); #endif #ifdef sv_2pvbyte # undef sv_2pvbyte #endif #define sv_2pvbyte(a,b) DPPP_(my_sv_2pvbyte)(aTHX_ a,b) #define Perl_sv_2pvbyte DPPP_(my_sv_2pvbyte) #if defined(NEED_sv_2pvbyte) || defined(NEED_sv_2pvbyte_GLOBAL) char * DPPP_(my_sv_2pvbyte)(pTHX_ SV *sv, STRLEN *lp) { sv_utf8_downgrade(sv,0); return SvPV(sv,*lp); } #endif /* Hint: sv_2pvbyte * Use the SvPVbyte() macro instead of sv_2pvbyte(). 
*/ #undef SvPVbyte #define SvPVbyte(sv, lp) \ ((SvFLAGS(sv) & (SVf_POK|SVf_UTF8)) == (SVf_POK) \ ? ((lp = SvCUR(sv)), SvPVX(sv)) : sv_2pvbyte(sv, &lp)) #endif #else # define SvPVbyte SvPV # define sv_2pvbyte sv_2pv #endif #ifndef sv_2pvbyte_nolen # define sv_2pvbyte_nolen(sv) sv_2pv_nolen(sv) #endif /* Hint: sv_pvn * Always use the SvPV() macro instead of sv_pvn(). */ /* Hint: sv_pvn_force * Always use the SvPV_force() macro instead of sv_pvn_force(). */ /* If these are undefined, they're not handled by the core anyway */ #ifndef SV_IMMEDIATE_UNREF # define SV_IMMEDIATE_UNREF 0 #endif #ifndef SV_GMAGIC # define SV_GMAGIC 0 #endif #ifndef SV_COW_DROP_PV # define SV_COW_DROP_PV 0 #endif #ifndef SV_UTF8_NO_ENCODING # define SV_UTF8_NO_ENCODING 0 #endif #ifndef SV_NOSTEAL # define SV_NOSTEAL 0 #endif #ifndef SV_CONST_RETURN # define SV_CONST_RETURN 0 #endif #ifndef SV_MUTABLE_RETURN # define SV_MUTABLE_RETURN 0 #endif #ifndef SV_SMAGIC # define SV_SMAGIC 0 #endif #ifndef SV_HAS_TRAILING_NUL # define SV_HAS_TRAILING_NUL 0 #endif #ifndef SV_COW_SHARED_HASH_KEYS # define SV_COW_SHARED_HASH_KEYS 0 #endif #if (PERL_BCDVERSION < 0x5007002) #if defined(NEED_sv_2pv_flags) static char * DPPP_(my_sv_2pv_flags)(pTHX_ SV *sv, STRLEN *lp, I32 flags); static #else extern char * DPPP_(my_sv_2pv_flags)(pTHX_ SV *sv, STRLEN *lp, I32 flags); #endif #ifdef sv_2pv_flags # undef sv_2pv_flags #endif #define sv_2pv_flags(a,b,c) DPPP_(my_sv_2pv_flags)(aTHX_ a,b,c) #define Perl_sv_2pv_flags DPPP_(my_sv_2pv_flags) #if defined(NEED_sv_2pv_flags) || defined(NEED_sv_2pv_flags_GLOBAL) char * DPPP_(my_sv_2pv_flags)(pTHX_ SV *sv, STRLEN *lp, I32 flags) { STRLEN n_a = (STRLEN) flags; return sv_2pv(sv, lp ? 
lp : &n_a); } #endif #if defined(NEED_sv_pvn_force_flags) static char * DPPP_(my_sv_pvn_force_flags)(pTHX_ SV *sv, STRLEN *lp, I32 flags); static #else extern char * DPPP_(my_sv_pvn_force_flags)(pTHX_ SV *sv, STRLEN *lp, I32 flags); #endif #ifdef sv_pvn_force_flags # undef sv_pvn_force_flags #endif #define sv_pvn_force_flags(a,b,c) DPPP_(my_sv_pvn_force_flags)(aTHX_ a,b,c) #define Perl_sv_pvn_force_flags DPPP_(my_sv_pvn_force_flags) #if defined(NEED_sv_pvn_force_flags) || defined(NEED_sv_pvn_force_flags_GLOBAL) char * DPPP_(my_sv_pvn_force_flags)(pTHX_ SV *sv, STRLEN *lp, I32 flags) { STRLEN n_a = (STRLEN) flags; return sv_pvn_force(sv, lp ? lp : &n_a); } #endif #endif #if (PERL_BCDVERSION < 0x5008008) || ( (PERL_BCDVERSION >= 0x5009000) && (PERL_BCDVERSION < 0x5009003) ) # define DPPP_SVPV_NOLEN_LP_ARG &PL_na #else # define DPPP_SVPV_NOLEN_LP_ARG 0 #endif #ifndef SvPV_const # define SvPV_const(sv, lp) SvPV_flags_const(sv, lp, SV_GMAGIC) #endif #ifndef SvPV_mutable # define SvPV_mutable(sv, lp) SvPV_flags_mutable(sv, lp, SV_GMAGIC) #endif #ifndef SvPV_flags # define SvPV_flags(sv, lp, flags) \ ((SvFLAGS(sv) & (SVf_POK)) == SVf_POK \ ? ((lp = SvCUR(sv)), SvPVX(sv)) : sv_2pv_flags(sv, &lp, flags)) #endif #ifndef SvPV_flags_const # define SvPV_flags_const(sv, lp, flags) \ ((SvFLAGS(sv) & (SVf_POK)) == SVf_POK \ ? ((lp = SvCUR(sv)), SvPVX_const(sv)) : \ (const char*) sv_2pv_flags(sv, &lp, flags|SV_CONST_RETURN)) #endif #ifndef SvPV_flags_const_nolen # define SvPV_flags_const_nolen(sv, flags) \ ((SvFLAGS(sv) & (SVf_POK)) == SVf_POK \ ? SvPVX_const(sv) : \ (const char*) sv_2pv_flags(sv, DPPP_SVPV_NOLEN_LP_ARG, flags|SV_CONST_RETURN)) #endif #ifndef SvPV_flags_mutable # define SvPV_flags_mutable(sv, lp, flags) \ ((SvFLAGS(sv) & (SVf_POK)) == SVf_POK \ ? 
((lp = SvCUR(sv)), SvPVX_mutable(sv)) : \
         sv_2pv_flags(sv, &lp, flags|SV_MUTABLE_RETURN))
#endif

#ifndef SvPV_force
# define SvPV_force(sv, lp) SvPV_force_flags(sv, lp, SV_GMAGIC)
#endif

#ifndef SvPV_force_nolen
# define SvPV_force_nolen(sv) SvPV_force_flags_nolen(sv, SV_GMAGIC)
#endif

#ifndef SvPV_force_mutable
# define SvPV_force_mutable(sv, lp) SvPV_force_flags_mutable(sv, lp, SV_GMAGIC)
#endif

#ifndef SvPV_force_nomg
# define SvPV_force_nomg(sv, lp) SvPV_force_flags(sv, lp, 0)
#endif

#ifndef SvPV_force_nomg_nolen
# define SvPV_force_nomg_nolen(sv) SvPV_force_flags_nolen(sv, 0)
#endif

#ifndef SvPV_force_flags
# define SvPV_force_flags(sv, lp, flags) \
        ((SvFLAGS(sv) & (SVf_POK|SVf_THINKFIRST)) == SVf_POK \
         ? ((lp = SvCUR(sv)), SvPVX(sv)) : sv_pvn_force_flags(sv, &lp, flags))
#endif

#ifndef SvPV_force_flags_nolen
# define SvPV_force_flags_nolen(sv, flags) \
        ((SvFLAGS(sv) & (SVf_POK|SVf_THINKFIRST)) == SVf_POK \
         ? SvPVX(sv) : sv_pvn_force_flags(sv, DPPP_SVPV_NOLEN_LP_ARG, flags))
#endif

#ifndef SvPV_force_flags_mutable
# define SvPV_force_flags_mutable(sv, lp, flags) \
        ((SvFLAGS(sv) & (SVf_POK|SVf_THINKFIRST)) == SVf_POK \
         ? ((lp = SvCUR(sv)), SvPVX_mutable(sv)) \
         : sv_pvn_force_flags(sv, &lp, flags|SV_MUTABLE_RETURN))
#endif

#ifndef SvPV_nolen
# define SvPV_nolen(sv) \
        ((SvFLAGS(sv) & (SVf_POK)) == SVf_POK \
         ? SvPVX(sv) : sv_2pv_flags(sv, DPPP_SVPV_NOLEN_LP_ARG, SV_GMAGIC))
#endif

#ifndef SvPV_nolen_const
# define SvPV_nolen_const(sv) \
        ((SvFLAGS(sv) & (SVf_POK)) == SVf_POK \
         ? \
SvPVX_const(sv) : sv_2pv_flags(sv, DPPP_SVPV_NOLEN_LP_ARG, SV_GMAGIC|SV_CONST_RETURN))
#endif

#ifndef SvPV_nomg
# define SvPV_nomg(sv, lp) SvPV_flags(sv, lp, 0)
#endif

#ifndef SvPV_nomg_const
# define SvPV_nomg_const(sv, lp) SvPV_flags_const(sv, lp, 0)
#endif

#ifndef SvPV_nomg_const_nolen
# define SvPV_nomg_const_nolen(sv) SvPV_flags_const_nolen(sv, 0)
#endif

#ifndef SvPV_renew
# define SvPV_renew(sv,n) STMT_START { SvLEN_set(sv, n); \
        SvPV_set((sv), (char *) saferealloc( \
                 (Malloc_t)SvPVX(sv), (MEM_SIZE)((n)))); \
        } STMT_END
#endif

#ifndef SvMAGIC_set
# define SvMAGIC_set(sv, val) \
        STMT_START { assert(SvTYPE(sv) >= SVt_PVMG); \
        (((XPVMG*) SvANY(sv))->xmg_magic = (val)); } STMT_END
#endif

#if (PERL_BCDVERSION < 0x5009003)

#ifndef SvPVX_const
# define SvPVX_const(sv) ((const char*) (0 + SvPVX(sv)))
#endif

#ifndef SvPVX_mutable
# define SvPVX_mutable(sv) (0 + SvPVX(sv))
#endif

#ifndef SvRV_set
# define SvRV_set(sv, val) \
        STMT_START { assert(SvTYPE(sv) >= SVt_RV); \
        (((XRV*) SvANY(sv))->xrv_rv = (val)); } STMT_END
#endif

#else

#ifndef SvPVX_const
# define SvPVX_const(sv) ((const char*)((sv)->sv_u.svu_pv))
#endif

#ifndef SvPVX_mutable
# define SvPVX_mutable(sv) ((sv)->sv_u.svu_pv)
#endif

#ifndef SvRV_set
# define SvRV_set(sv, val) \
        STMT_START { assert(SvTYPE(sv) >= SVt_RV); \
        ((sv)->sv_u.svu_rv = (val)); } STMT_END
#endif

#endif

#ifndef SvSTASH_set
# define SvSTASH_set(sv, val) \
        STMT_START { assert(SvTYPE(sv) >= SVt_PVMG); \
        (((XPVMG*) SvANY(sv))->xmg_stash = (val)); } STMT_END
#endif

#if (PERL_BCDVERSION < 0x5004000)

#ifndef SvUV_set
# define SvUV_set(sv, val) \
        STMT_START { assert(SvTYPE(sv) == SVt_IV || SvTYPE(sv) >= SVt_PVIV); \
        (((XPVIV*) SvANY(sv))->xiv_iv = (IV) (val)); } STMT_END
#endif

#else

#ifndef SvUV_set
# define SvUV_set(sv, val) \
        STMT_START { assert(SvTYPE(sv) == SVt_IV || SvTYPE(sv) >= SVt_PVIV); \
        (((XPVUV*) SvANY(sv))->xuv_uv = (val)); } STMT_END
#endif

#endif

#if (PERL_BCDVERSION >= 0x5004000) && !defined(vnewSVpvf)
#if defined(NEED_vnewSVpvf)
static SV * DPPP_(my_vnewSVpvf)(pTHX_ const char *pat, va_list *args);
static
#else
extern SV * DPPP_(my_vnewSVpvf)(pTHX_ const char *pat, va_list *args);
#endif

#ifdef vnewSVpvf
# undef vnewSVpvf
#endif
#define vnewSVpvf(a,b) DPPP_(my_vnewSVpvf)(aTHX_ a,b)
#define Perl_vnewSVpvf DPPP_(my_vnewSVpvf)

#if defined(NEED_vnewSVpvf) || defined(NEED_vnewSVpvf_GLOBAL)

SV *
DPPP_(my_vnewSVpvf)(pTHX_ const char *pat, va_list *args)
{
  register SV *sv = newSV(0);
  sv_vsetpvfn(sv, pat, strlen(pat), args, Null(SV**), 0, Null(bool*));
  return sv;
}

#endif
#endif

#if (PERL_BCDVERSION >= 0x5004000) && !defined(sv_vcatpvf)
# define sv_vcatpvf(sv, pat, args) sv_vcatpvfn(sv, pat, strlen(pat), args, Null(SV**), 0, Null(bool*))
#endif

#if (PERL_BCDVERSION >= 0x5004000) && !defined(sv_vsetpvf)
# define sv_vsetpvf(sv, pat, args) sv_vsetpvfn(sv, pat, strlen(pat), args, Null(SV**), 0, Null(bool*))
#endif

#if (PERL_BCDVERSION >= 0x5004000) && !defined(sv_catpvf_mg)
#if defined(NEED_sv_catpvf_mg)
static void DPPP_(my_sv_catpvf_mg)(pTHX_ SV *sv, const char *pat, ...);
static
#else
extern void DPPP_(my_sv_catpvf_mg)(pTHX_ SV *sv, const char *pat, ...);
#endif

#define Perl_sv_catpvf_mg DPPP_(my_sv_catpvf_mg)

#if defined(NEED_sv_catpvf_mg) || defined(NEED_sv_catpvf_mg_GLOBAL)

void
DPPP_(my_sv_catpvf_mg)(pTHX_ SV *sv, const char *pat, ...)
{
  va_list args;
  va_start(args, pat);
  sv_vcatpvfn(sv, pat, strlen(pat), &args, Null(SV**), 0, Null(bool*));
  SvSETMAGIC(sv);
  va_end(args);
}

#endif
#endif

#ifdef PERL_IMPLICIT_CONTEXT
#if (PERL_BCDVERSION >= 0x5004000) && !defined(sv_catpvf_mg_nocontext)
#if defined(NEED_sv_catpvf_mg_nocontext)
static void DPPP_(my_sv_catpvf_mg_nocontext)(SV *sv, const char *pat, ...);
static
#else
extern void DPPP_(my_sv_catpvf_mg_nocontext)(SV *sv, const char *pat, ...);
#endif

#define sv_catpvf_mg_nocontext DPPP_(my_sv_catpvf_mg_nocontext)
#define Perl_sv_catpvf_mg_nocontext DPPP_(my_sv_catpvf_mg_nocontext)

#if defined(NEED_sv_catpvf_mg_nocontext) || defined(NEED_sv_catpvf_mg_nocontext_GLOBAL)

void
DPPP_(my_sv_catpvf_mg_nocontext)(SV *sv, const char *pat, ...)
{
  dTHX;
  va_list args;
  va_start(args, pat);
  sv_vcatpvfn(sv, pat, strlen(pat), &args, Null(SV**), 0, Null(bool*));
  SvSETMAGIC(sv);
  va_end(args);
}

#endif
#endif
#endif

/* sv_catpvf_mg depends on sv_catpvf_mg_nocontext */
#ifndef sv_catpvf_mg
# ifdef PERL_IMPLICIT_CONTEXT
#  define sv_catpvf_mg Perl_sv_catpvf_mg_nocontext
# else
#  define sv_catpvf_mg Perl_sv_catpvf_mg
# endif
#endif

#if (PERL_BCDVERSION >= 0x5004000) && !defined(sv_vcatpvf_mg)
# define sv_vcatpvf_mg(sv, pat, args) \
   STMT_START { \
     sv_vcatpvfn(sv, pat, strlen(pat), args, Null(SV**), 0, Null(bool*)); \
     SvSETMAGIC(sv); \
   } STMT_END
#endif

#if (PERL_BCDVERSION >= 0x5004000) && !defined(sv_setpvf_mg)
#if defined(NEED_sv_setpvf_mg)
static void DPPP_(my_sv_setpvf_mg)(pTHX_ SV *sv, const char *pat, ...);
static
#else
extern void DPPP_(my_sv_setpvf_mg)(pTHX_ SV *sv, const char *pat, ...);
#endif

#define Perl_sv_setpvf_mg DPPP_(my_sv_setpvf_mg)

#if defined(NEED_sv_setpvf_mg) || defined(NEED_sv_setpvf_mg_GLOBAL)

void
DPPP_(my_sv_setpvf_mg)(pTHX_ SV *sv, const char *pat, ...)
{
  va_list args;
  va_start(args, pat);
  sv_vsetpvfn(sv, pat, strlen(pat), &args, Null(SV**), 0, Null(bool*));
  SvSETMAGIC(sv);
  va_end(args);
}

#endif
#endif

#ifdef PERL_IMPLICIT_CONTEXT
#if (PERL_BCDVERSION >= 0x5004000) && !defined(sv_setpvf_mg_nocontext)
#if defined(NEED_sv_setpvf_mg_nocontext)
static void DPPP_(my_sv_setpvf_mg_nocontext)(SV *sv, const char *pat, ...);
static
#else
extern void DPPP_(my_sv_setpvf_mg_nocontext)(SV *sv, const char *pat, ...);
#endif

#define sv_setpvf_mg_nocontext DPPP_(my_sv_setpvf_mg_nocontext)
#define Perl_sv_setpvf_mg_nocontext DPPP_(my_sv_setpvf_mg_nocontext)

#if defined(NEED_sv_setpvf_mg_nocontext) || defined(NEED_sv_setpvf_mg_nocontext_GLOBAL)

void
DPPP_(my_sv_setpvf_mg_nocontext)(SV *sv, const char *pat, ...)
{
  dTHX;
  va_list args;
  va_start(args, pat);
  sv_vsetpvfn(sv, pat, strlen(pat), &args, Null(SV**), 0, Null(bool*));
  SvSETMAGIC(sv);
  va_end(args);
}

#endif
#endif
#endif

/* sv_setpvf_mg depends on sv_setpvf_mg_nocontext */
#ifndef sv_setpvf_mg
# ifdef PERL_IMPLICIT_CONTEXT
#  define sv_setpvf_mg Perl_sv_setpvf_mg_nocontext
# else
#  define sv_setpvf_mg Perl_sv_setpvf_mg
# endif
#endif

#if (PERL_BCDVERSION >= 0x5004000) && !defined(sv_vsetpvf_mg)
# define sv_vsetpvf_mg(sv, pat, args) \
   STMT_START { \
     sv_vsetpvfn(sv, pat, strlen(pat), args, Null(SV**), 0, Null(bool*)); \
     SvSETMAGIC(sv); \
   } STMT_END
#endif

#ifndef newSVpvn_share

#if defined(NEED_newSVpvn_share)
static SV * DPPP_(my_newSVpvn_share)(pTHX_ const char *src, I32 len, U32 hash);
static
#else
extern SV * DPPP_(my_newSVpvn_share)(pTHX_ const char *src, I32 len, U32 hash);
#endif

#ifdef newSVpvn_share
# undef newSVpvn_share
#endif
#define newSVpvn_share(a,b,c) DPPP_(my_newSVpvn_share)(aTHX_ a,b,c)
#define Perl_newSVpvn_share DPPP_(my_newSVpvn_share)

#if defined(NEED_newSVpvn_share) || defined(NEED_newSVpvn_share_GLOBAL)

SV *
DPPP_(my_newSVpvn_share)(pTHX_ const char *src, I32 len, U32 hash)
{
  SV *sv;
  if (len < 0)
    len = -len;
  if (!hash)
    PERL_HASH(hash, (char*) src, len);
  sv =
newSVpvn((char *) src, len);
  sv_upgrade(sv, SVt_PVIV);
  SvIVX(sv) = hash;
  SvREADONLY_on(sv);
  SvPOK_on(sv);
  return sv;
}

#endif

#endif

#ifndef SvSHARED_HASH
# define SvSHARED_HASH(sv) (0 + SvUVX(sv))
#endif

#ifndef HvNAME_get
# define HvNAME_get(hv) HvNAME(hv)
#endif

#ifndef HvNAMELEN_get
# define HvNAMELEN_get(hv) (HvNAME_get(hv) ? (I32)strlen(HvNAME_get(hv)) : 0)
#endif

#ifndef GvSVn
# define GvSVn(gv) GvSV(gv)
#endif

#ifndef isGV_with_GP
# define isGV_with_GP(gv) isGV(gv)
#endif

#ifndef WARN_ALL
# define WARN_ALL 0
#endif
#ifndef WARN_CLOSURE
# define WARN_CLOSURE 1
#endif
#ifndef WARN_DEPRECATED
# define WARN_DEPRECATED 2
#endif
#ifndef WARN_EXITING
# define WARN_EXITING 3
#endif
#ifndef WARN_GLOB
# define WARN_GLOB 4
#endif
#ifndef WARN_IO
# define WARN_IO 5
#endif
#ifndef WARN_CLOSED
# define WARN_CLOSED 6
#endif
#ifndef WARN_EXEC
# define WARN_EXEC 7
#endif
#ifndef WARN_LAYER
# define WARN_LAYER 8
#endif
#ifndef WARN_NEWLINE
# define WARN_NEWLINE 9
#endif
#ifndef WARN_PIPE
# define WARN_PIPE 10
#endif
#ifndef WARN_UNOPENED
# define WARN_UNOPENED 11
#endif
#ifndef WARN_MISC
# define WARN_MISC 12
#endif
#ifndef WARN_NUMERIC
# define WARN_NUMERIC 13
#endif
#ifndef WARN_ONCE
# define WARN_ONCE 14
#endif
#ifndef WARN_OVERFLOW
# define WARN_OVERFLOW 15
#endif
#ifndef WARN_PACK
# define WARN_PACK 16
#endif
#ifndef WARN_PORTABLE
# define WARN_PORTABLE 17
#endif
#ifndef WARN_RECURSION
# define WARN_RECURSION 18
#endif
#ifndef WARN_REDEFINE
# define WARN_REDEFINE 19
#endif
#ifndef WARN_REGEXP
# define WARN_REGEXP 20
#endif
#ifndef WARN_SEVERE
# define WARN_SEVERE 21
#endif
#ifndef WARN_DEBUGGING
# define WARN_DEBUGGING 22
#endif
#ifndef WARN_INPLACE
# define WARN_INPLACE 23
#endif
#ifndef WARN_INTERNAL
# define WARN_INTERNAL 24
#endif
#ifndef WARN_MALLOC
# define WARN_MALLOC 25
#endif
#ifndef WARN_SIGNAL
# define WARN_SIGNAL 26
#endif
#ifndef WARN_SUBSTR
# define WARN_SUBSTR 27
#endif
#ifndef WARN_SYNTAX
# define WARN_SYNTAX 28
#endif
#ifndef WARN_AMBIGUOUS
# define \
WARN_AMBIGUOUS 29
#endif
#ifndef WARN_BAREWORD
# define WARN_BAREWORD 30
#endif
#ifndef WARN_DIGIT
# define WARN_DIGIT 31
#endif
#ifndef WARN_PARENTHESIS
# define WARN_PARENTHESIS 32
#endif
#ifndef WARN_PRECEDENCE
# define WARN_PRECEDENCE 33
#endif
#ifndef WARN_PRINTF
# define WARN_PRINTF 34
#endif
#ifndef WARN_PROTOTYPE
# define WARN_PROTOTYPE 35
#endif
#ifndef WARN_QW
# define WARN_QW 36
#endif
#ifndef WARN_RESERVED
# define WARN_RESERVED 37
#endif
#ifndef WARN_SEMICOLON
# define WARN_SEMICOLON 38
#endif
#ifndef WARN_TAINT
# define WARN_TAINT 39
#endif
#ifndef WARN_THREADS
# define WARN_THREADS 40
#endif
#ifndef WARN_UNINITIALIZED
# define WARN_UNINITIALIZED 41
#endif
#ifndef WARN_UNPACK
# define WARN_UNPACK 42
#endif
#ifndef WARN_UNTIE
# define WARN_UNTIE 43
#endif
#ifndef WARN_UTF8
# define WARN_UTF8 44
#endif
#ifndef WARN_VOID
# define WARN_VOID 45
#endif
#ifndef WARN_ASSERTIONS
# define WARN_ASSERTIONS 46
#endif

#ifndef packWARN
# define packWARN(a) (a)
#endif

#ifndef ckWARN
# ifdef G_WARN_ON
#  define ckWARN(a) (PL_dowarn & G_WARN_ON)
# else
#  define ckWARN(a) PL_dowarn
# endif
#endif

#if (PERL_BCDVERSION >= 0x5004000) && !defined(warner)
#if defined(NEED_warner)
static void DPPP_(my_warner)(U32 err, const char *pat, ...);
static
#else
extern void DPPP_(my_warner)(U32 err, const char *pat, ...);
#endif

#define Perl_warner DPPP_(my_warner)

#if defined(NEED_warner) || defined(NEED_warner_GLOBAL)

void
DPPP_(my_warner)(U32 err, const char *pat, ...)
{
  SV *sv;
  va_list args;
  PERL_UNUSED_ARG(err);
  va_start(args, pat);
  sv = vnewSVpvf(pat, &args);
  va_end(args);
  sv_2mortal(sv);
  warn("%s", SvPV_nolen(sv));
}

#define warner Perl_warner
#define Perl_warner_nocontext Perl_warner

#endif
#endif

/* concatenating with "" ensures that only literal strings are accepted as argument
 * note that STR_WITH_LEN() can't be used as argument to macros or functions that
 * under some configurations might be macros
 */
#ifndef STR_WITH_LEN
# define STR_WITH_LEN(s) (s ""), (sizeof(s)-1)
#endif
#ifndef newSVpvs
# define newSVpvs(str) newSVpvn(str "", sizeof(str) - 1)
#endif
#ifndef newSVpvs_flags
# define newSVpvs_flags(str, flags) newSVpvn_flags(str "", sizeof(str) - 1, flags)
#endif
#ifndef sv_catpvs
# define sv_catpvs(sv, str) sv_catpvn(sv, str "", sizeof(str) - 1)
#endif
#ifndef sv_setpvs
# define sv_setpvs(sv, str) sv_setpvn(sv, str "", sizeof(str) - 1)
#endif
#ifndef hv_fetchs
# define hv_fetchs(hv, key, lval) hv_fetch(hv, key "", sizeof(key) - 1, lval)
#endif
#ifndef hv_stores
# define hv_stores(hv, key, val) hv_store(hv, key "", sizeof(key) - 1, val, 0)
#endif

#ifndef gv_fetchpvn_flags
# define gv_fetchpvn_flags(name, len, flags, svt) gv_fetchpv(name, flags, svt)
#endif
#ifndef gv_fetchpvs
# define gv_fetchpvs(name, flags, svt) gv_fetchpvn_flags(name "", sizeof(name) - 1, flags, svt)
#endif
#ifndef gv_stashpvs
# define gv_stashpvs(name, flags) gv_stashpvn(name "", sizeof(name) - 1, flags)
#endif

#ifndef SvGETMAGIC
# define SvGETMAGIC(x) STMT_START { if (SvGMAGICAL(x)) mg_get(x); } STMT_END
#endif

#ifndef PERL_MAGIC_sv
# define PERL_MAGIC_sv '\0'
#endif
#ifndef PERL_MAGIC_overload
# define PERL_MAGIC_overload 'A'
#endif
#ifndef PERL_MAGIC_overload_elem
# define PERL_MAGIC_overload_elem 'a'
#endif
#ifndef PERL_MAGIC_overload_table
# define PERL_MAGIC_overload_table 'c'
#endif
#ifndef PERL_MAGIC_bm
# define PERL_MAGIC_bm 'B'
#endif
#ifndef PERL_MAGIC_regdata
# define PERL_MAGIC_regdata 'D'
#endif
#ifndef PERL_MAGIC_regdatum
# define \
PERL_MAGIC_regdatum 'd'
#endif
#ifndef PERL_MAGIC_env
# define PERL_MAGIC_env 'E'
#endif
#ifndef PERL_MAGIC_envelem
# define PERL_MAGIC_envelem 'e'
#endif
#ifndef PERL_MAGIC_fm
# define PERL_MAGIC_fm 'f'
#endif
#ifndef PERL_MAGIC_regex_global
# define PERL_MAGIC_regex_global 'g'
#endif
#ifndef PERL_MAGIC_isa
# define PERL_MAGIC_isa 'I'
#endif
#ifndef PERL_MAGIC_isaelem
# define PERL_MAGIC_isaelem 'i'
#endif
#ifndef PERL_MAGIC_nkeys
# define PERL_MAGIC_nkeys 'k'
#endif
#ifndef PERL_MAGIC_dbfile
# define PERL_MAGIC_dbfile 'L'
#endif
#ifndef PERL_MAGIC_dbline
# define PERL_MAGIC_dbline 'l'
#endif
#ifndef PERL_MAGIC_mutex
# define PERL_MAGIC_mutex 'm'
#endif
#ifndef PERL_MAGIC_shared
# define PERL_MAGIC_shared 'N'
#endif
#ifndef PERL_MAGIC_shared_scalar
# define PERL_MAGIC_shared_scalar 'n'
#endif
#ifndef PERL_MAGIC_collxfrm
# define PERL_MAGIC_collxfrm 'o'
#endif
#ifndef PERL_MAGIC_tied
# define PERL_MAGIC_tied 'P'
#endif
#ifndef PERL_MAGIC_tiedelem
# define PERL_MAGIC_tiedelem 'p'
#endif
#ifndef PERL_MAGIC_tiedscalar
# define PERL_MAGIC_tiedscalar 'q'
#endif
#ifndef PERL_MAGIC_qr
# define PERL_MAGIC_qr 'r'
#endif
#ifndef PERL_MAGIC_sig
# define PERL_MAGIC_sig 'S'
#endif
#ifndef PERL_MAGIC_sigelem
# define PERL_MAGIC_sigelem 's'
#endif
#ifndef PERL_MAGIC_taint
# define PERL_MAGIC_taint 't'
#endif
#ifndef PERL_MAGIC_uvar
# define PERL_MAGIC_uvar 'U'
#endif
#ifndef PERL_MAGIC_uvar_elem
# define PERL_MAGIC_uvar_elem 'u'
#endif
#ifndef PERL_MAGIC_vstring
# define PERL_MAGIC_vstring 'V'
#endif
#ifndef PERL_MAGIC_vec
# define PERL_MAGIC_vec 'v'
#endif
#ifndef PERL_MAGIC_utf8
# define PERL_MAGIC_utf8 'w'
#endif
#ifndef PERL_MAGIC_substr
# define PERL_MAGIC_substr 'x'
#endif
#ifndef PERL_MAGIC_defelem
# define PERL_MAGIC_defelem 'y'
#endif
#ifndef PERL_MAGIC_glob
# define PERL_MAGIC_glob '*'
#endif
#ifndef PERL_MAGIC_arylen
# define PERL_MAGIC_arylen '#'
#endif
#ifndef PERL_MAGIC_pos
# define PERL_MAGIC_pos '.'
#endif
#ifndef PERL_MAGIC_backref
# define PERL_MAGIC_backref '<'
#endif
#ifndef PERL_MAGIC_ext
# define PERL_MAGIC_ext '~'
#endif

/* That's the best we can do... */
#ifndef sv_catpvn_nomg
# define sv_catpvn_nomg sv_catpvn
#endif
#ifndef sv_catsv_nomg
# define sv_catsv_nomg sv_catsv
#endif
#ifndef sv_setsv_nomg
# define sv_setsv_nomg sv_setsv
#endif
#ifndef sv_pvn_nomg
# define sv_pvn_nomg sv_pvn
#endif
#ifndef SvIV_nomg
# define SvIV_nomg SvIV
#endif
#ifndef SvUV_nomg
# define SvUV_nomg SvUV
#endif

#ifndef sv_catpv_mg
# define sv_catpv_mg(sv, ptr) \
   STMT_START { \
     SV *TeMpSv = sv; \
     sv_catpv(TeMpSv,ptr); \
     SvSETMAGIC(TeMpSv); \
   } STMT_END
#endif

#ifndef sv_catpvn_mg
# define sv_catpvn_mg(sv, ptr, len) \
   STMT_START { \
     SV *TeMpSv = sv; \
     sv_catpvn(TeMpSv,ptr,len); \
     SvSETMAGIC(TeMpSv); \
   } STMT_END
#endif

#ifndef sv_catsv_mg
# define sv_catsv_mg(dsv, ssv) \
   STMT_START { \
     SV *TeMpSv = dsv; \
     sv_catsv(TeMpSv,ssv); \
     SvSETMAGIC(TeMpSv); \
   } STMT_END
#endif

#ifndef sv_setiv_mg
# define sv_setiv_mg(sv, i) \
   STMT_START { \
     SV *TeMpSv = sv; \
     sv_setiv(TeMpSv,i); \
     SvSETMAGIC(TeMpSv); \
   } STMT_END
#endif

#ifndef sv_setnv_mg
# define sv_setnv_mg(sv, num) \
   STMT_START { \
     SV *TeMpSv = sv; \
     sv_setnv(TeMpSv,num); \
     SvSETMAGIC(TeMpSv); \
   } STMT_END
#endif

#ifndef sv_setpv_mg
# define sv_setpv_mg(sv, ptr) \
   STMT_START { \
     SV *TeMpSv = sv; \
     sv_setpv(TeMpSv,ptr); \
     SvSETMAGIC(TeMpSv); \
   } STMT_END
#endif

#ifndef sv_setpvn_mg
# define sv_setpvn_mg(sv, ptr, len) \
   STMT_START { \
     SV *TeMpSv = sv; \
     sv_setpvn(TeMpSv,ptr,len); \
     SvSETMAGIC(TeMpSv); \
   } STMT_END
#endif

#ifndef sv_setsv_mg
# define sv_setsv_mg(dsv, ssv) \
   STMT_START { \
     SV *TeMpSv = dsv; \
     sv_setsv(TeMpSv,ssv); \
     SvSETMAGIC(TeMpSv); \
   } STMT_END
#endif

#ifndef sv_setuv_mg
# define sv_setuv_mg(sv, i) \
   STMT_START { \
     SV *TeMpSv = sv; \
     sv_setuv(TeMpSv,i); \
     SvSETMAGIC(TeMpSv); \
   } STMT_END
#endif

#ifndef sv_usepvn_mg
# define sv_usepvn_mg(sv, ptr, len) \
   STMT_START { \
     SV *TeMpSv = sv; \
     sv_usepvn(TeMpSv,ptr,len); \
     SvSETMAGIC(TeMpSv); \
   } STMT_END
#endif

#ifndef SvVSTRING_mg
# define SvVSTRING_mg(sv) (SvMAGICAL(sv) ? mg_find(sv, PERL_MAGIC_vstring) : NULL)
#endif

/* Hint: sv_magic_portable
 * This is a compatibility function that is only available with
 * Devel::PPPort. It is NOT in the perl core.
 * Its purpose is to mimic the 5.8.0 behaviour of sv_magic() when
 * it is being passed a name pointer with namlen == 0. In that
 * case, perl 5.8.0 and later store the pointer, not a copy of it.
 * The compatibility can be provided back to perl 5.004. With
 * earlier versions, the code will not compile.
 */

#if (PERL_BCDVERSION < 0x5004000)

/* code that uses sv_magic_portable will not compile */

#elif (PERL_BCDVERSION < 0x5008000)

# define sv_magic_portable(sv, obj, how, name, namlen) \
  STMT_START { \
    SV *SvMp_sv = (sv); \
    char *SvMp_name = (char *) (name); \
    I32 SvMp_namlen = (namlen); \
    if (SvMp_name && SvMp_namlen == 0) \
    { \
      MAGIC *mg; \
      sv_magic(SvMp_sv, obj, how, 0, 0); \
      mg = SvMAGIC(SvMp_sv); \
      mg->mg_len = -42; /* XXX: this is the tricky part */ \
      mg->mg_ptr = SvMp_name; \
    } \
    else \
    { \
      sv_magic(SvMp_sv, obj, how, SvMp_name, SvMp_namlen); \
    } \
  } STMT_END

#else

# define sv_magic_portable(a, b, c, d, e) sv_magic(a, b, c, d, e)

#endif

#ifdef USE_ITHREADS
#ifndef CopFILE
# define CopFILE(c) ((c)->cop_file)
#endif
#ifndef CopFILEGV
# define CopFILEGV(c) (CopFILE(c) ? gv_fetchfile(CopFILE(c)) : Nullgv)
#endif
#ifndef CopFILE_set
# define CopFILE_set(c,pv) ((c)->cop_file = savepv(pv))
#endif
#ifndef CopFILESV
# define CopFILESV(c) (CopFILE(c) ? GvSV(gv_fetchfile(CopFILE(c))) : Nullsv)
#endif
#ifndef CopFILEAV
# define CopFILEAV(c) (CopFILE(c) ? GvAV(gv_fetchfile(CopFILE(c))) : Nullav)
#endif
#ifndef CopSTASHPV
# define CopSTASHPV(c) ((c)->cop_stashpv)
#endif
#ifndef CopSTASHPV_set
# define CopSTASHPV_set(c,pv) ((c)->cop_stashpv = ((pv) ? savepv(pv) : Nullch))
#endif
#ifndef CopSTASH
# define CopSTASH(c) (CopSTASHPV(c) ? \
gv_stashpv(CopSTASHPV(c),GV_ADD) : Nullhv)
#endif
#ifndef CopSTASH_set
# define CopSTASH_set(c,hv) CopSTASHPV_set(c, (hv) ? HvNAME(hv) : Nullch)
#endif
#ifndef CopSTASH_eq
# define CopSTASH_eq(c,hv) ((hv) && (CopSTASHPV(c) == HvNAME(hv) \
                                     || (CopSTASHPV(c) && HvNAME(hv) \
                                         && strEQ(CopSTASHPV(c), HvNAME(hv)))))
#endif
#else
#ifndef CopFILEGV
# define CopFILEGV(c) ((c)->cop_filegv)
#endif
#ifndef CopFILEGV_set
# define CopFILEGV_set(c,gv) ((c)->cop_filegv = (GV*)SvREFCNT_inc(gv))
#endif
#ifndef CopFILE_set
# define CopFILE_set(c,pv) CopFILEGV_set((c), gv_fetchfile(pv))
#endif
#ifndef CopFILESV
# define CopFILESV(c) (CopFILEGV(c) ? GvSV(CopFILEGV(c)) : Nullsv)
#endif
#ifndef CopFILEAV
# define CopFILEAV(c) (CopFILEGV(c) ? GvAV(CopFILEGV(c)) : Nullav)
#endif
#ifndef CopFILE
# define CopFILE(c) (CopFILESV(c) ? SvPVX(CopFILESV(c)) : Nullch)
#endif
#ifndef CopSTASH
# define CopSTASH(c) ((c)->cop_stash)
#endif
#ifndef CopSTASH_set
# define CopSTASH_set(c,hv) ((c)->cop_stash = (hv))
#endif
#ifndef CopSTASHPV
# define CopSTASHPV(c) (CopSTASH(c) ? HvNAME(CopSTASH(c)) : Nullch)
#endif
#ifndef CopSTASHPV_set
# define CopSTASHPV_set(c,pv) CopSTASH_set((c), gv_stashpv(pv,GV_ADD))
#endif
#ifndef CopSTASH_eq
# define CopSTASH_eq(c,hv) (CopSTASH(c) == (hv))
#endif
#endif /* USE_ITHREADS */

#ifndef IN_PERL_COMPILETIME
# define IN_PERL_COMPILETIME (PL_curcop == &PL_compiling)
#endif

#ifndef IN_LOCALE_RUNTIME
# define IN_LOCALE_RUNTIME (PL_curcop->op_private & HINT_LOCALE)
#endif

#ifndef IN_LOCALE_COMPILETIME
# define IN_LOCALE_COMPILETIME (PL_hints & HINT_LOCALE)
#endif

#ifndef IN_LOCALE
# define IN_LOCALE (IN_PERL_COMPILETIME ? \
IN_LOCALE_COMPILETIME : IN_LOCALE_RUNTIME)
#endif

#ifndef IS_NUMBER_IN_UV
# define IS_NUMBER_IN_UV 0x01
#endif
#ifndef IS_NUMBER_GREATER_THAN_UV_MAX
# define IS_NUMBER_GREATER_THAN_UV_MAX 0x02
#endif
#ifndef IS_NUMBER_NOT_INT
# define IS_NUMBER_NOT_INT 0x04
#endif
#ifndef IS_NUMBER_NEG
# define IS_NUMBER_NEG 0x08
#endif
#ifndef IS_NUMBER_INFINITY
# define IS_NUMBER_INFINITY 0x10
#endif
#ifndef IS_NUMBER_NAN
# define IS_NUMBER_NAN 0x20
#endif

#ifndef GROK_NUMERIC_RADIX
# define GROK_NUMERIC_RADIX(sp, send) grok_numeric_radix(sp, send)
#endif

#ifndef PERL_SCAN_GREATER_THAN_UV_MAX
# define PERL_SCAN_GREATER_THAN_UV_MAX 0x02
#endif
#ifndef PERL_SCAN_SILENT_ILLDIGIT
# define PERL_SCAN_SILENT_ILLDIGIT 0x04
#endif
#ifndef PERL_SCAN_ALLOW_UNDERSCORES
# define PERL_SCAN_ALLOW_UNDERSCORES 0x01
#endif
#ifndef PERL_SCAN_DISALLOW_PREFIX
# define PERL_SCAN_DISALLOW_PREFIX 0x02
#endif

#ifndef grok_numeric_radix
#if defined(NEED_grok_numeric_radix)
static bool DPPP_(my_grok_numeric_radix)(pTHX_ const char ** sp, const char * send);
static
#else
extern bool DPPP_(my_grok_numeric_radix)(pTHX_ const char ** sp, const char * send);
#endif

#ifdef grok_numeric_radix
# undef grok_numeric_radix
#endif
#define grok_numeric_radix(a,b) DPPP_(my_grok_numeric_radix)(aTHX_ a,b)
#define Perl_grok_numeric_radix DPPP_(my_grok_numeric_radix)

#if defined(NEED_grok_numeric_radix) || defined(NEED_grok_numeric_radix_GLOBAL)
bool
DPPP_(my_grok_numeric_radix)(pTHX_ const char **sp, const char *send)
{
#ifdef USE_LOCALE_NUMERIC
#ifdef PL_numeric_radix_sv
    if (PL_numeric_radix_sv && IN_LOCALE) {
        STRLEN len;
        char* radix = SvPV(PL_numeric_radix_sv, len);
        if (*sp + len <= send && memEQ(*sp, radix, len)) {
            *sp += len;
            return TRUE;
        }
    }
#else
    /* older perls don't have PL_numeric_radix_sv so the radix
     * must manually be requested from locale.h
     */
#include <locale.h>
    dTHR;  /* needed for older threaded perls */
    struct lconv *lc = localeconv();
    char *radix = lc->decimal_point;
    if (radix && IN_LOCALE) {
        STRLEN len =
strlen(radix);
        if (*sp + len <= send && memEQ(*sp, radix, len)) {
            *sp += len;
            return TRUE;
        }
    }
#endif
#endif /* USE_LOCALE_NUMERIC */
    /* always try "." if numeric radix didn't match because
     * we may have data from different locales mixed */
    if (*sp < send && **sp == '.') {
        ++*sp;
        return TRUE;
    }
    return FALSE;
}
#endif
#endif

#ifndef grok_number
#if defined(NEED_grok_number)
static int DPPP_(my_grok_number)(pTHX_ const char * pv, STRLEN len, UV * valuep);
static
#else
extern int DPPP_(my_grok_number)(pTHX_ const char * pv, STRLEN len, UV * valuep);
#endif

#ifdef grok_number
# undef grok_number
#endif
#define grok_number(a,b,c) DPPP_(my_grok_number)(aTHX_ a,b,c)
#define Perl_grok_number DPPP_(my_grok_number)

#if defined(NEED_grok_number) || defined(NEED_grok_number_GLOBAL)
int
DPPP_(my_grok_number)(pTHX_ const char *pv, STRLEN len, UV *valuep)
{
  const char *s = pv;
  const char *send = pv + len;
  const UV max_div_10 = UV_MAX / 10;
  const char max_mod_10 = UV_MAX % 10;
  int numtype = 0;
  int sawinf = 0;
  int sawnan = 0;

  while (s < send && isSPACE(*s))
    s++;
  if (s == send) {
    return 0;
  } else if (*s == '-') {
    s++;
    numtype = IS_NUMBER_NEG;
  }
  else if (*s == '+')
    s++;

  if (s == send)
    return 0;

  /* next must be digit or the radix separator or beginning of infinity */
  if (isDIGIT(*s)) {
    /* UVs are at least 32 bits, so the first 9 decimal digits cannot
       overflow.  */
    UV value = *s - '0';
    /* This construction seems to be more optimiser friendly.
       (without it gcc does the isDIGIT test and the *s - '0' separately)
       With it gcc on arm is managing 6 instructions (6 cycles) per digit.
       In theory the optimiser could deduce how far to unroll the loop
       before checking for overflow.
    */
    if (++s < send) {
      int digit = *s - '0';
      if (digit >= 0 && digit <= 9) {
        value = value * 10 + digit;
        if (++s < send) {
          digit = *s - '0';
          if (digit >= 0 && digit <= 9) {
            value = value * 10 + digit;
            if (++s < send) {
              digit = *s - '0';
              if (digit >= 0 && digit <= 9) {
                value = value * 10 + digit;
                if (++s < send) {
                  digit = *s - '0';
                  if (digit >= 0 && digit <= 9) {
                    value = value * 10 + digit;
                    if (++s < send) {
                      digit = *s - '0';
                      if (digit >= 0 && digit <= 9) {
                        value = value * 10 + digit;
                        if (++s < send) {
                          digit = *s - '0';
                          if (digit >= 0 && digit <= 9) {
                            value = value * 10 + digit;
                            if (++s < send) {
                              digit = *s - '0';
                              if (digit >= 0 && digit <= 9) {
                                value = value * 10 + digit;
                                if (++s < send) {
                                  digit = *s - '0';
                                  if (digit >= 0 && digit <= 9) {
                                    value = value * 10 + digit;
                                    if (++s < send) {
                                      /* Now got 9 digits, so need to check
                                         each time for overflow.  */
                                      digit = *s - '0';
                                      while (digit >= 0 && digit <= 9
                                             && (value < max_div_10
                                                 || (value == max_div_10
                                                     && digit <= max_mod_10))) {
                                        value = value * 10 + digit;
                                        if (++s < send)
                                          digit = *s - '0';
                                        else
                                          break;
                                      }
                                      if (digit >= 0 && digit <= 9
                                          && (s < send)) {
                                        /* value overflowed.
                                           skip the remaining digits, don't
                                           worry about setting *valuep.  */
                                        do {
                                          s++;
                                        } while (s < send && isDIGIT(*s));
                                        numtype |=
                                          IS_NUMBER_GREATER_THAN_UV_MAX;
                                        goto skip_value;
                                      }
                                    }
                                  }
                                }
                              }
                            }
                          }
                        }
                      }
                    }
                  }
                }
              }
            }
          }
        }
      }
    }
    numtype |= IS_NUMBER_IN_UV;
    if (valuep)
      *valuep = value;

  skip_value:
    if (GROK_NUMERIC_RADIX(&s, send)) {
      numtype |= IS_NUMBER_NOT_INT;
      while (s < send && isDIGIT(*s))  /* optional digits after the radix */
        s++;
    }
  }
  else if (GROK_NUMERIC_RADIX(&s, send)) {
    numtype |= IS_NUMBER_NOT_INT | IS_NUMBER_IN_UV; /* valuep assigned below */
    /* no digits before the radix means we need digits after it */
    if (s < send && isDIGIT(*s)) {
      do {
        s++;
      } while (s < send && isDIGIT(*s));
      if (valuep) {
        /* integer approximation is valid - it's 0.
        */
        *valuep = 0;
      }
    }
    else
      return 0;
  }
  else if (*s == 'I' || *s == 'i') {
    s++; if (s == send || (*s != 'N' && *s != 'n')) return 0;
    s++; if (s == send || (*s != 'F' && *s != 'f')) return 0;
    s++;
    if (s < send && (*s == 'I' || *s == 'i')) {
      s++; if (s == send || (*s != 'N' && *s != 'n')) return 0;
      s++; if (s == send || (*s != 'I' && *s != 'i')) return 0;
      s++; if (s == send || (*s != 'T' && *s != 't')) return 0;
      s++; if (s == send || (*s != 'Y' && *s != 'y')) return 0;
      s++;
    }
    sawinf = 1;
  }
  else if (*s == 'N' || *s == 'n') {
    /* XXX TODO: There are signaling NaNs and quiet NaNs. */
    s++; if (s == send || (*s != 'A' && *s != 'a')) return 0;
    s++; if (s == send || (*s != 'N' && *s != 'n')) return 0;
    s++;
    sawnan = 1;
  }
  else
    return 0;

  if (sawinf) {
    numtype &= IS_NUMBER_NEG; /* Keep track of sign */
    numtype |= IS_NUMBER_INFINITY | IS_NUMBER_NOT_INT;
  } else if (sawnan) {
    numtype &= IS_NUMBER_NEG; /* Keep track of sign */
    numtype |= IS_NUMBER_NAN | IS_NUMBER_NOT_INT;
  }
  else if (s < send) {
    /* we can have an optional exponent part */
    if (*s == 'e' || *s == 'E') {
      /* The only flag we keep is sign.  Blow away any "it's UV" */
      numtype &= IS_NUMBER_NEG;
      numtype |= IS_NUMBER_NOT_INT;
      s++;
      if (s < send && (*s == '-' || *s == '+'))
        s++;
      if (s < send && isDIGIT(*s)) {
        do {
          s++;
        } while (s < send && isDIGIT(*s));
      }
      else
        return 0;
    }
  }
  while (s < send && isSPACE(*s))
    s++;
  if (s >= send)
    return numtype;
  if (len == 10 && memEQ(pv, "0 but true", 10)) {
    if (valuep)
      *valuep = 0;
    return IS_NUMBER_IN_UV;
  }
  return 0;
}
#endif
#endif

/*
 * The grok_* routines have been modified to use warn() instead of
 * Perl_warner(). Also, 'hexdigit' was the former name of PL_hexdigit,
 * which is why the stack variable has been renamed to 'xdigit'.
 */

#ifndef grok_bin
#if defined(NEED_grok_bin)
static UV DPPP_(my_grok_bin)(pTHX_ const char * start, STRLEN * len_p, I32 * flags, NV * result);
static
#else
extern UV DPPP_(my_grok_bin)(pTHX_ const char * start, STRLEN * len_p, I32 * flags, NV * result);
#endif

#ifdef grok_bin
# undef grok_bin
#endif
#define grok_bin(a,b,c,d) DPPP_(my_grok_bin)(aTHX_ a,b,c,d)
#define Perl_grok_bin DPPP_(my_grok_bin)

#if defined(NEED_grok_bin) || defined(NEED_grok_bin_GLOBAL)
UV
DPPP_(my_grok_bin)(pTHX_ const char *start, STRLEN *len_p, I32 *flags, NV *result)
{
    const char *s = start;
    STRLEN len = *len_p;
    UV value = 0;
    NV value_nv = 0;
    const UV max_div_2 = UV_MAX / 2;
    bool allow_underscores = *flags & PERL_SCAN_ALLOW_UNDERSCORES;
    bool overflowed = FALSE;

    if (!(*flags & PERL_SCAN_DISALLOW_PREFIX)) {
        /* strip off leading b or 0b.
           for compatibility silently suffer "b" and "0b" as valid binary
           numbers. */
        if (len >= 1) {
            if (s[0] == 'b') {
                s++;
                len--;
            }
            else if (len >= 2 && s[0] == '0' && s[1] == 'b') {
                s += 2;
                len -= 2;
            }
        }
    }

    for (; len-- && *s; s++) {
        char bit = *s;
        if (bit == '0' || bit == '1') {
            /* Write it in this wonky order with a goto to attempt to get the
               compiler to make the common case integer-only loop pretty
               tight. With gcc seems to be much straighter code than old
               scan_bin. */
          redo:
            if (!overflowed) {
                if (value <= max_div_2) {
                    value = (value << 1) | (bit - '0');
                    continue;
                }
                /* Bah. We're just overflowed. */
                warn("Integer overflow in binary number");
                overflowed = TRUE;
                value_nv = (NV) value;
            }
            value_nv *= 2.0;
            /* If an NV has not enough bits in its mantissa to
             * represent a UV this summing of small low-order numbers
             * is a waste of time (because the NV cannot preserve
             * the low-order bits anyway): we could just remember when
             * did we overflow and in the end just multiply value_nv by the
             * right amount.
             */
            value_nv += (NV)(bit - '0');
            continue;
        }
        if (bit == '_' && len && allow_underscores && (bit = s[1])
            && (bit == '0' || bit == '1')) {
            --len;
            ++s;
            goto redo;
        }
        if (!(*flags & PERL_SCAN_SILENT_ILLDIGIT))
            warn("Illegal binary digit '%c' ignored", *s);
        break;
    }

    if (   ( overflowed && value_nv > 4294967295.0)
#if UVSIZE > 4
        || (!overflowed && value > 0xffffffff)
#endif
        ) {
        warn("Binary number > 0b11111111111111111111111111111111 non-portable");
    }
    *len_p = s - start;
    if (!overflowed) {
        *flags = 0;
        return value;
    }
    *flags = PERL_SCAN_GREATER_THAN_UV_MAX;
    if (result)
        *result = value_nv;
    return UV_MAX;
}
#endif
#endif

#ifndef grok_hex
#if defined(NEED_grok_hex)
static UV DPPP_(my_grok_hex)(pTHX_ const char * start, STRLEN * len_p, I32 * flags, NV * result);
static
#else
extern UV DPPP_(my_grok_hex)(pTHX_ const char * start, STRLEN * len_p, I32 * flags, NV * result);
#endif

#ifdef grok_hex
# undef grok_hex
#endif
#define grok_hex(a,b,c,d) DPPP_(my_grok_hex)(aTHX_ a,b,c,d)
#define Perl_grok_hex DPPP_(my_grok_hex)

#if defined(NEED_grok_hex) || defined(NEED_grok_hex_GLOBAL)
UV
DPPP_(my_grok_hex)(pTHX_ const char *start, STRLEN *len_p, I32 *flags, NV *result)
{
    const char *s = start;
    STRLEN len = *len_p;
    UV value = 0;
    NV value_nv = 0;
    const UV max_div_16 = UV_MAX / 16;
    bool allow_underscores = *flags & PERL_SCAN_ALLOW_UNDERSCORES;
    bool overflowed = FALSE;
    const char *xdigit;

    if (!(*flags & PERL_SCAN_DISALLOW_PREFIX)) {
        /* strip off leading x or 0x.
           for compatibility silently suffer "x" and "0x" as valid hex
           numbers. */
        if (len >= 1) {
            if (s[0] == 'x') {
                s++;
                len--;
            }
            else if (len >= 2 && s[0] == '0' && s[1] == 'x') {
                s += 2;
                len -= 2;
            }
        }
    }

    for (; len-- && *s; s++) {
        xdigit = strchr((char *) PL_hexdigit, *s);
        if (xdigit) {
            /* Write it in this wonky order with a goto to attempt to get the
               compiler to make the common case integer-only loop pretty
               tight. With gcc seems to be much straighter code than old
               scan_hex.
*/ redo: if (!overflowed) { if (value <= max_div_16) { value = (value << 4) | ((xdigit - PL_hexdigit) & 15); continue; } warn("Integer overflow in hexadecimal number"); overflowed = TRUE; value_nv = (NV) value; } value_nv *= 16.0; /* If an NV has not enough bits in its mantissa to * represent a UV this summing of small low-order numbers * is a waste of time (because the NV cannot preserve * the low-order bits anyway): we could just remember when * did we overflow and in the end just multiply value_nv by the * right amount of 16-tuples. */ value_nv += (NV)((xdigit - PL_hexdigit) & 15); continue; } if (*s == '_' && len && allow_underscores && s[1] && (xdigit = strchr((char *) PL_hexdigit, s[1]))) { --len; ++s; goto redo; } if (!(*flags & PERL_SCAN_SILENT_ILLDIGIT)) warn("Illegal hexadecimal digit '%c' ignored", *s); break; } if ( ( overflowed && value_nv > 4294967295.0) #if UVSIZE > 4 || (!overflowed && value > 0xffffffff ) #endif ) { warn("Hexadecimal number > 0xffffffff non-portable"); } *len_p = s - start; if (!overflowed) { *flags = 0; return value; } *flags = PERL_SCAN_GREATER_THAN_UV_MAX; if (result) *result = value_nv; return UV_MAX; } #endif #endif #ifndef grok_oct #if defined(NEED_grok_oct) static UV DPPP_(my_grok_oct)(pTHX_ const char * start, STRLEN * len_p, I32 * flags, NV * result); static #else extern UV DPPP_(my_grok_oct)(pTHX_ const char * start, STRLEN * len_p, I32 * flags, NV * result); #endif #ifdef grok_oct # undef grok_oct #endif #define grok_oct(a,b,c,d) DPPP_(my_grok_oct)(aTHX_ a,b,c,d) #define Perl_grok_oct DPPP_(my_grok_oct) #if defined(NEED_grok_oct) || defined(NEED_grok_oct_GLOBAL) UV DPPP_(my_grok_oct)(pTHX_ const char *start, STRLEN *len_p, I32 *flags, NV *result) { const char *s = start; STRLEN len = *len_p; UV value = 0; NV value_nv = 0; const UV max_div_8 = UV_MAX / 8; bool allow_underscores = *flags & PERL_SCAN_ALLOW_UNDERSCORES; bool overflowed = FALSE; for (; len-- && *s; s++) { /* gcc 2.95 optimiser not smart enough to figure that 
this subtraction out front allows slicker code. */ int digit = *s - '0'; if (digit >= 0 && digit <= 7) { /* Write it in this wonky order with a goto to attempt to get the compiler to make the common case integer-only loop pretty tight. */ redo: if (!overflowed) { if (value <= max_div_8) { value = (value << 3) | digit; continue; } /* Bah. We're just overflowed. */ warn("Integer overflow in octal number"); overflowed = TRUE; value_nv = (NV) value; } value_nv *= 8.0; /* If an NV has not enough bits in its mantissa to * represent a UV this summing of small low-order numbers * is a waste of time (because the NV cannot preserve * the low-order bits anyway): we could just remember when * did we overflow and in the end just multiply value_nv by the * right amount of 8-tuples. */ value_nv += (NV)digit; continue; } if (digit == ('_' - '0') && len && allow_underscores && (digit = s[1] - '0') && (digit >= 0 && digit <= 7)) { --len; ++s; goto redo; } /* Allow \octal to work the DWIM way (that is, stop scanning * as soon as non-octal characters are seen, complain only iff * someone seems to want to use the digits eight and nine). 
*/ if (digit == 8 || digit == 9) { if (!(*flags & PERL_SCAN_SILENT_ILLDIGIT)) warn("Illegal octal digit '%c' ignored", *s); } break; } if ( ( overflowed && value_nv > 4294967295.0) #if UVSIZE > 4 || (!overflowed && value > 0xffffffff ) #endif ) { warn("Octal number > 037777777777 non-portable"); } *len_p = s - start; if (!overflowed) { *flags = 0; return value; } *flags = PERL_SCAN_GREATER_THAN_UV_MAX; if (result) *result = value_nv; return UV_MAX; } #endif #endif #if !defined(my_snprintf) #if defined(NEED_my_snprintf) static int DPPP_(my_my_snprintf)(char * buffer, const Size_t len, const char * format, ...); static #else extern int DPPP_(my_my_snprintf)(char * buffer, const Size_t len, const char * format, ...); #endif #define my_snprintf DPPP_(my_my_snprintf) #define Perl_my_snprintf DPPP_(my_my_snprintf) #if defined(NEED_my_snprintf) || defined(NEED_my_snprintf_GLOBAL) int DPPP_(my_my_snprintf)(char *buffer, const Size_t len, const char *format, ...) { dTHX; int retval; va_list ap; va_start(ap, format); #ifdef HAS_VSNPRINTF retval = vsnprintf(buffer, len, format, ap); #else retval = vsprintf(buffer, format, ap); #endif va_end(ap); if (retval < 0 || (len > 0 && (Size_t)retval >= len)) Perl_croak(aTHX_ "panic: my_snprintf buffer overflow"); return retval; } #endif #endif #if !defined(my_sprintf) #if defined(NEED_my_sprintf) static int DPPP_(my_my_sprintf)(char * buffer, const char * pat, ...); static #else extern int DPPP_(my_my_sprintf)(char * buffer, const char * pat, ...); #endif #define my_sprintf DPPP_(my_my_sprintf) #define Perl_my_sprintf DPPP_(my_my_sprintf) #if defined(NEED_my_sprintf) || defined(NEED_my_sprintf_GLOBAL) int DPPP_(my_my_sprintf)(char *buffer, const char* pat, ...) 
{ va_list args; va_start(args, pat); vsprintf(buffer, pat, args); va_end(args); return strlen(buffer); } #endif #endif #ifdef NO_XSLOCKS # ifdef dJMPENV # define dXCPT dJMPENV; int rEtV = 0 # define XCPT_TRY_START JMPENV_PUSH(rEtV); if (rEtV == 0) # define XCPT_TRY_END JMPENV_POP; # define XCPT_CATCH if (rEtV != 0) # define XCPT_RETHROW JMPENV_JUMP(rEtV) # else # define dXCPT Sigjmp_buf oldTOP; int rEtV = 0 # define XCPT_TRY_START Copy(top_env, oldTOP, 1, Sigjmp_buf); rEtV = Sigsetjmp(top_env, 1); if (rEtV == 0) # define XCPT_TRY_END Copy(oldTOP, top_env, 1, Sigjmp_buf); # define XCPT_CATCH if (rEtV != 0) # define XCPT_RETHROW Siglongjmp(top_env, rEtV) # endif #endif #if !defined(my_strlcat) #if defined(NEED_my_strlcat) static Size_t DPPP_(my_my_strlcat)(char * dst, const char * src, Size_t size); static #else extern Size_t DPPP_(my_my_strlcat)(char * dst, const char * src, Size_t size); #endif #define my_strlcat DPPP_(my_my_strlcat) #define Perl_my_strlcat DPPP_(my_my_strlcat) #if defined(NEED_my_strlcat) || defined(NEED_my_strlcat_GLOBAL) Size_t DPPP_(my_my_strlcat)(char *dst, const char *src, Size_t size) { Size_t used, length, copy; used = strlen(dst); length = strlen(src); if (size > 0 && used < size - 1) { copy = (length >= size - used) ? size - used - 1 : length; memcpy(dst + used, src, copy); dst[used + copy] = '\0'; } return used + length; } #endif #endif #if !defined(my_strlcpy) #if defined(NEED_my_strlcpy) static Size_t DPPP_(my_my_strlcpy)(char * dst, const char * src, Size_t size); static #else extern Size_t DPPP_(my_my_strlcpy)(char * dst, const char * src, Size_t size); #endif #define my_strlcpy DPPP_(my_my_strlcpy) #define Perl_my_strlcpy DPPP_(my_my_strlcpy) #if defined(NEED_my_strlcpy) || defined(NEED_my_strlcpy_GLOBAL) Size_t DPPP_(my_my_strlcpy)(char *dst, const char *src, Size_t size) { Size_t length, copy; length = strlen(src); if (size > 0) { copy = (length >= size) ? 
size - 1 : length; memcpy(dst, src, copy); dst[copy] = '\0'; } return length; } #endif #endif #ifndef PERL_PV_ESCAPE_QUOTE # define PERL_PV_ESCAPE_QUOTE 0x0001 #endif #ifndef PERL_PV_PRETTY_QUOTE # define PERL_PV_PRETTY_QUOTE PERL_PV_ESCAPE_QUOTE #endif #ifndef PERL_PV_PRETTY_ELLIPSES # define PERL_PV_PRETTY_ELLIPSES 0x0002 #endif #ifndef PERL_PV_PRETTY_LTGT # define PERL_PV_PRETTY_LTGT 0x0004 #endif #ifndef PERL_PV_ESCAPE_FIRSTCHAR # define PERL_PV_ESCAPE_FIRSTCHAR 0x0008 #endif #ifndef PERL_PV_ESCAPE_UNI # define PERL_PV_ESCAPE_UNI 0x0100 #endif #ifndef PERL_PV_ESCAPE_UNI_DETECT # define PERL_PV_ESCAPE_UNI_DETECT 0x0200 #endif #ifndef PERL_PV_ESCAPE_ALL # define PERL_PV_ESCAPE_ALL 0x1000 #endif #ifndef PERL_PV_ESCAPE_NOBACKSLASH # define PERL_PV_ESCAPE_NOBACKSLASH 0x2000 #endif #ifndef PERL_PV_ESCAPE_NOCLEAR # define PERL_PV_ESCAPE_NOCLEAR 0x4000 #endif #ifndef PERL_PV_ESCAPE_RE # define PERL_PV_ESCAPE_RE 0x8000 #endif #ifndef PERL_PV_PRETTY_NOCLEAR # define PERL_PV_PRETTY_NOCLEAR PERL_PV_ESCAPE_NOCLEAR #endif #ifndef PERL_PV_PRETTY_DUMP # define PERL_PV_PRETTY_DUMP PERL_PV_PRETTY_ELLIPSES|PERL_PV_PRETTY_QUOTE #endif #ifndef PERL_PV_PRETTY_REGPROP # define PERL_PV_PRETTY_REGPROP PERL_PV_PRETTY_ELLIPSES|PERL_PV_PRETTY_LTGT|PERL_PV_ESCAPE_RE #endif /* Hint: pv_escape * Note that unicode functionality is only backported to * those perl versions that support it. For older perl * versions, the implementation will fall back to bytes. 
*/ #ifndef pv_escape #if defined(NEED_pv_escape) static char * DPPP_(my_pv_escape)(pTHX_ SV * dsv, char const * const str, const STRLEN count, const STRLEN max, STRLEN * const escaped, const U32 flags); static #else extern char * DPPP_(my_pv_escape)(pTHX_ SV * dsv, char const * const str, const STRLEN count, const STRLEN max, STRLEN * const escaped, const U32 flags); #endif #ifdef pv_escape # undef pv_escape #endif #define pv_escape(a,b,c,d,e,f) DPPP_(my_pv_escape)(aTHX_ a,b,c,d,e,f) #define Perl_pv_escape DPPP_(my_pv_escape) #if defined(NEED_pv_escape) || defined(NEED_pv_escape_GLOBAL) char * DPPP_(my_pv_escape)(pTHX_ SV *dsv, char const * const str, const STRLEN count, const STRLEN max, STRLEN * const escaped, const U32 flags) { const char esc = flags & PERL_PV_ESCAPE_RE ? '%' : '\\'; const char dq = flags & PERL_PV_ESCAPE_QUOTE ? '"' : esc; char octbuf[32] = "%123456789ABCDF"; STRLEN wrote = 0; STRLEN chsize = 0; STRLEN readsize = 1; #if defined(is_utf8_string) && defined(utf8_to_uvchr) bool isuni = flags & PERL_PV_ESCAPE_UNI ? 1 : 0; #endif const char *pv = str; const char * const end = pv + count; octbuf[0] = esc; if (!(flags & PERL_PV_ESCAPE_NOCLEAR)) sv_setpvs(dsv, ""); #if defined(is_utf8_string) && defined(utf8_to_uvchr) if ((flags & PERL_PV_ESCAPE_UNI_DETECT) && is_utf8_string((U8*)pv, count)) isuni = 1; #endif for (; pv < end && (!max || wrote < max) ; pv += readsize) { const UV u = #if defined(is_utf8_string) && defined(utf8_to_uvchr) isuni ? 
utf8_to_uvchr((U8*)pv, &readsize) : #endif (U8)*pv; const U8 c = (U8)u & 0xFF; if (u > 255 || (flags & PERL_PV_ESCAPE_ALL)) { if (flags & PERL_PV_ESCAPE_FIRSTCHAR) chsize = my_snprintf(octbuf, sizeof octbuf, "%"UVxf, u); else chsize = my_snprintf(octbuf, sizeof octbuf, "%cx{%"UVxf"}", esc, u); } else if (flags & PERL_PV_ESCAPE_NOBACKSLASH) { chsize = 1; } else { if (c == dq || c == esc || !isPRINT(c)) { chsize = 2; switch (c) { case '\\' : /* fallthrough */ case '%' : if (c == esc) octbuf[1] = esc; else chsize = 1; break; case '\v' : octbuf[1] = 'v'; break; case '\t' : octbuf[1] = 't'; break; case '\r' : octbuf[1] = 'r'; break; case '\n' : octbuf[1] = 'n'; break; case '\f' : octbuf[1] = 'f'; break; case '"' : if (dq == '"') octbuf[1] = '"'; else chsize = 1; break; default: chsize = my_snprintf(octbuf, sizeof octbuf, pv < end && isDIGIT((U8)*(pv+readsize)) ? "%c%03o" : "%c%o", esc, c); } } else { chsize = 1; } } if (max && wrote + chsize > max) { break; } else if (chsize > 1) { sv_catpvn(dsv, octbuf, chsize); wrote += chsize; } else { char tmp[2]; my_snprintf(tmp, sizeof tmp, "%c", c); sv_catpvn(dsv, tmp, 1); wrote++; } if (flags & PERL_PV_ESCAPE_FIRSTCHAR) break; } if (escaped != NULL) *escaped= pv - str; return SvPVX(dsv); } #endif #endif #ifndef pv_pretty #if defined(NEED_pv_pretty) static char * DPPP_(my_pv_pretty)(pTHX_ SV * dsv, char const * const str, const STRLEN count, const STRLEN max, char const * const start_color, char const * const end_color, const U32 flags); static #else extern char * DPPP_(my_pv_pretty)(pTHX_ SV * dsv, char const * const str, const STRLEN count, const STRLEN max, char const * const start_color, char const * const end_color, const U32 flags); #endif #ifdef pv_pretty # undef pv_pretty #endif #define pv_pretty(a,b,c,d,e,f,g) DPPP_(my_pv_pretty)(aTHX_ a,b,c,d,e,f,g) #define Perl_pv_pretty DPPP_(my_pv_pretty) #if defined(NEED_pv_pretty) || defined(NEED_pv_pretty_GLOBAL) char * DPPP_(my_pv_pretty)(pTHX_ SV *dsv, char const * const str, 
const STRLEN count, const STRLEN max, char const * const start_color, char const * const end_color, const U32 flags) { const U8 dq = (flags & PERL_PV_PRETTY_QUOTE) ? '"' : '%'; STRLEN escaped; if (!(flags & PERL_PV_PRETTY_NOCLEAR)) sv_setpvs(dsv, ""); if (dq == '"') sv_catpvs(dsv, "\""); else if (flags & PERL_PV_PRETTY_LTGT) sv_catpvs(dsv, "<"); if (start_color != NULL) sv_catpv(dsv, D_PPP_CONSTPV_ARG(start_color)); pv_escape(dsv, str, count, max, &escaped, flags | PERL_PV_ESCAPE_NOCLEAR); if (end_color != NULL) sv_catpv(dsv, D_PPP_CONSTPV_ARG(end_color)); if (dq == '"') sv_catpvs(dsv, "\""); else if (flags & PERL_PV_PRETTY_LTGT) sv_catpvs(dsv, ">"); if ((flags & PERL_PV_PRETTY_ELLIPSES) && escaped < count) sv_catpvs(dsv, "..."); return SvPVX(dsv); } #endif #endif #ifndef pv_display #if defined(NEED_pv_display) static char * DPPP_(my_pv_display)(pTHX_ SV * dsv, const char * pv, STRLEN cur, STRLEN len, STRLEN pvlim); static #else extern char * DPPP_(my_pv_display)(pTHX_ SV * dsv, const char * pv, STRLEN cur, STRLEN len, STRLEN pvlim); #endif #ifdef pv_display # undef pv_display #endif #define pv_display(a,b,c,d,e) DPPP_(my_pv_display)(aTHX_ a,b,c,d,e) #define Perl_pv_display DPPP_(my_pv_display) #if defined(NEED_pv_display) || defined(NEED_pv_display_GLOBAL) char * DPPP_(my_pv_display)(pTHX_ SV *dsv, const char *pv, STRLEN cur, STRLEN len, STRLEN pvlim) { pv_pretty(dsv, pv, cur, pvlim, NULL, NULL, PERL_PV_PRETTY_DUMP); if (len > cur && pv[cur] == '\0') sv_catpvs(dsv, "\\0"); return SvPVX(dsv); } #endif #endif #endif /* _P_P_PORTABILITY_H_ */ /* End of File ppport.h */ slurm-slurm-15-08-7-1/contribs/perlapi/libslurm/perl/reservation.c000066400000000000000000000125041265000126300251060ustar00rootroot00000000000000/* * reservation.c - convert data between reservation related messages * and perl HVs */ #include #include #include #include "ppport.h" #include #include "slurm-perl.h" /* * convert reserve_info_t to perl HV */ int reserve_info_to_hv(reserve_info_t 
*reserve_info, HV *hv) { if (reserve_info->accounts) STORE_FIELD(hv, reserve_info, accounts, charp); STORE_FIELD(hv, reserve_info, end_time, time_t); if (reserve_info->features) STORE_FIELD(hv, reserve_info, features, charp); STORE_FIELD(hv, reserve_info, flags, uint16_t); if (reserve_info->licenses) STORE_FIELD(hv, reserve_info, licenses, charp); if (reserve_info->name) STORE_FIELD(hv, reserve_info, name, charp); STORE_FIELD(hv, reserve_info, node_cnt, uint32_t); if (reserve_info->node_list) STORE_FIELD(hv, reserve_info, node_list, charp); /* no store for int pointers yet */ if (reserve_info->node_inx) { int j; AV *av = newAV(); for(j = 0; ; j += 2) { if(reserve_info->node_inx[j] == -1) break; av_store(av, j, newSVuv(reserve_info->node_inx[j])); av_store(av, j+1, newSVuv(reserve_info->node_inx[j+1])); } hv_store_sv(hv, "node_inx", newRV_noinc((SV*)av)); } if (reserve_info->partition) STORE_FIELD(hv, reserve_info, partition, charp); STORE_FIELD(hv, reserve_info, start_time, time_t); if (reserve_info->users) STORE_FIELD(hv, reserve_info, users, charp); return 0; } /* * convert perl HV to reserve_info_t */ int hv_to_reserve_info(HV *hv, reserve_info_t *resv_info) { SV **svp; AV *av; int i, n; memset(resv_info, 0, sizeof(reserve_info_t)); FETCH_FIELD(hv, resv_info, accounts, charp, FALSE); FETCH_FIELD(hv, resv_info, end_time, time_t, TRUE); FETCH_FIELD(hv, resv_info, features, charp, FALSE); FETCH_FIELD(hv, resv_info, flags, uint16_t, TRUE); FETCH_FIELD(hv, resv_info, licenses, charp, FALSE); FETCH_FIELD(hv, resv_info, name, charp, TRUE); FETCH_FIELD(hv, resv_info, node_cnt, uint32_t, TRUE); svp = hv_fetch(hv, "node_inx", 8, FALSE); if (svp && SvROK(*svp) && SvTYPE(SvRV(*svp)) == SVt_PVAV) { av = (AV*)SvRV(*svp); n = av_len(av) + 2; /* for trailing -1 */ resv_info->node_inx = xmalloc(n * sizeof(int)); for (i = 0 ; i < n-1; i += 2) { resv_info->node_inx[i] = (int)SvIV(*(av_fetch(av, i ,FALSE))); resv_info->node_inx[i+1] = (int)SvIV(*(av_fetch(av, i+1 ,FALSE))); } 
		resv_info->node_inx[n-1] = -1;
	} else {
		/* nothing to do */
	}
	FETCH_FIELD(hv, resv_info, node_list, charp, FALSE);
	FETCH_FIELD(hv, resv_info, partition, charp, FALSE);
	FETCH_FIELD(hv, resv_info, start_time, time_t, TRUE);
	FETCH_FIELD(hv, resv_info, users, charp, FALSE);
	return 0;
}

/*
 * convert reserve_info_msg_t to perl HV
 */
int reserve_info_msg_to_hv(reserve_info_msg_t *reserve_info_msg, HV *hv)
{
	int i;
	HV *hv_info;
	AV *av;

	STORE_FIELD(hv, reserve_info_msg, last_update, time_t);
	/* record_count implied in reservation_array */
	av = newAV();
	for (i = 0; i < reserve_info_msg->record_count; i++) {
		hv_info = newHV();
		if (reserve_info_to_hv(reserve_info_msg->reservation_array + i,
				       hv_info) < 0) {
			SvREFCNT_dec(hv_info);
			SvREFCNT_dec(av);
			return -1;
		}
		av_store(av, i, newRV_noinc((SV*)hv_info));
	}
	hv_store_sv(hv, "reservation_array", newRV_noinc((SV*)av));
	return 0;
}

/*
 * convert perl HV to reserve_info_msg_t
 */
int hv_to_reserve_info_msg(HV *hv, reserve_info_msg_t *resv_info_msg)
{
	SV **svp;
	AV *av;
	int i, n;

	memset(resv_info_msg, 0, sizeof(reserve_info_msg_t));
	FETCH_FIELD(hv, resv_info_msg, last_update, time_t, TRUE);
	svp = hv_fetch(hv, "reservation_array", 17, FALSE);
	if (! (svp && SvROK(*svp) && SvTYPE(SvRV(*svp)) == SVt_PVAV)) {
		Perl_warn (aTHX_ "reservation_array is not an array reference in HV for reserve_info_msg_t");
		return -1;
	}
	av = (AV*)SvRV(*svp);
	n = av_len(av) + 1;
	resv_info_msg->record_count = n;
	resv_info_msg->reservation_array = xmalloc(n * sizeof(reserve_info_t));
	for (i = 0; i < n; i++) {
		svp = av_fetch(av, i, FALSE);
		if (! (svp && SvROK(*svp) && SvTYPE(SvRV(*svp)) == SVt_PVHV)) {
			Perl_warn (aTHX_ "element %d in reservation_array is not valid", i);
			return -1;
		}
		if (hv_to_reserve_info((HV*)SvRV(*svp),
				       &resv_info_msg->reservation_array[i]) < 0) {
			Perl_warn (aTHX_ "failed to convert element %d in reservation_array", i);
			return -1;
		}
	}
	return 0;
}

/*
 * convert perl HV to resv_desc_msg_t.
*/ int hv_to_update_reservation_msg(HV *hv, resv_desc_msg_t *resv_msg) { slurm_init_resv_desc_msg(resv_msg); FETCH_FIELD(hv, resv_msg, accounts, charp, FALSE); FETCH_FIELD(hv, resv_msg, duration, uint32_t, FALSE); FETCH_FIELD(hv, resv_msg, end_time, time_t, FALSE); FETCH_FIELD(hv, resv_msg, features, charp, FALSE); FETCH_FIELD(hv, resv_msg, flags, uint16_t, FALSE); FETCH_FIELD(hv, resv_msg, licenses, charp, FALSE); FETCH_FIELD(hv, resv_msg, name, charp, FALSE); FETCH_PTR_FIELD(hv, resv_msg, node_cnt, "SLURM::uint32_t", FALSE); FETCH_FIELD(hv, resv_msg, node_list, charp, FALSE); FETCH_FIELD(hv, resv_msg, partition, charp, FALSE); FETCH_FIELD(hv, resv_msg, start_time, time_t, FALSE); FETCH_FIELD(hv, resv_msg, users, charp, FALSE); return 0; } /* * convert perl HV to reservation_name_msg_t. */ int hv_to_delete_reservation_msg(HV *hv, reservation_name_msg_t *resv_name) { resv_name->name = NULL; FETCH_FIELD(hv, resv_name, name, charp, FALSE); return 0; } slurm-slurm-15-08-7-1/contribs/perlapi/libslurm/perl/slurm-perl.h000066400000000000000000000155121265000126300246560ustar00rootroot00000000000000/* * slurm-perl.h - prototypes of msg-hv converting functions */ #ifndef _SLURM_PERL_H #define _SLURM_PERL_H #include /* these declaration are not in slurm.h */ #ifndef xfree #define xfree(__p) \ slurm_xfree((void **)&(__p), __FILE__, __LINE__, __FUNCTION__) #define xmalloc(__sz) \ slurm_xmalloc (__sz, true, __FILE__, __LINE__, __FUNCTION__) #endif extern void slurm_xfree(void **, const char *, int, const char *); extern void *slurm_xmalloc(size_t, bool, const char *, int, const char *); extern void slurm_api_clear_config(void); extern void slurm_list_iterator_destroy(ListIterator itr); /*********** entity reason/state/flags string functions **********/ extern char *slurm_preempt_mode_string(uint16_t preempt_mode); extern uint16_t slurm_preempt_mode_num(const char *preempt_mode); extern char *slurm_job_reason_string(enum job_state_reason inx); extern char 
*slurm_job_state_string(uint32_t inx); extern char *slurm_job_state_string_compact(uint32_t inx); extern int slurm_job_state_num(const char *state_name); extern char *slurm_node_state_string(uint32_t inx); extern char *slurm_node_state_string_compact(uint32_t inx); extern char *slurm_reservation_flags_string(uint16_t inx); extern void slurm_private_data_string(uint16_t private_data, char *str, int str_len); extern void slurm_accounting_enforce_string(uint16_t enforce, char *str, int str_len); extern char *slurm_conn_type_string(enum connection_type conn_type); extern char *slurm_node_use_string(enum node_use_type node_use); extern char *slurm_bg_block_state_string(uint16_t state); /********** resource allocation related conversion functions **********/ extern int hv_to_job_desc_msg(HV *hv, job_desc_msg_t *job_desc); extern void free_job_desc_msg_memory(job_desc_msg_t *msg); extern int resource_allocation_response_msg_to_hv( resource_allocation_response_msg_t *resp_msg, HV *hv); extern int job_alloc_info_response_msg_to_hv(job_alloc_info_response_msg_t *resp_msg, HV *hv); extern int submit_response_msg_to_hv(submit_response_msg_t *resp_msg, HV *hv); extern int job_sbcast_cred_msg_to_hv(job_sbcast_cred_msg_t *msg, HV *hv); extern int srun_job_complete_msg_to_hv(srun_job_complete_msg_t *msg, HV *hv); extern int srun_timeout_msg_to_hv(srun_timeout_msg_t *msg, HV *hv); /********** resource allocation callback functions **********/ extern void set_sarb_cb(SV *callback); extern void sarb_cb(uint32_t job_id); extern void set_sacb(HV *callbacks); extern slurm_allocation_callbacks_t sacb; /********** job info conversion functions **********/ extern int job_info_to_hv(job_info_t *job_info, HV *hv); extern int job_info_msg_to_hv(job_info_msg_t *job_info_msg, HV *hv); extern int hv_to_job_info(HV *hv, job_info_t *job_info); extern int hv_to_job_info_msg(HV *hv, job_info_msg_t *job_info_msg); /********** step info conversion functions **********/ extern int 
job_step_info_to_hv(job_step_info_t *step_info, HV *hv); extern int hv_to_job_step_info(HV *hv, job_step_info_t *step_info); extern int job_step_info_response_msg_to_hv(job_step_info_response_msg_t *job_step_info_msg, HV *hv); extern int hv_to_job_step_info_response_msg(HV *hv, job_step_info_response_msg_t *job_step_info_msg); extern int slurm_step_layout_to_hv(slurm_step_layout_t *step_layout, HV *hv); extern int job_step_pids_to_hv(job_step_pids_t *pids, HV *hv); extern int job_step_pids_response_msg_to_hv(job_step_pids_response_msg_t *pids_msg, HV *hv); extern int job_step_stat_to_hv(job_step_stat_t *stat, HV *hv); extern int job_step_stat_response_msg_to_hv(job_step_stat_response_msg_t *stat_msg, HV *hv); /********** node info conversion functions **********/ extern int node_info_to_hv(node_info_t *node_info, uint16_t node_scaling, HV *hv); extern int hv_to_node_info(HV *hv, node_info_t *node_info); extern int node_info_msg_to_hv(node_info_msg_t *node_info_msg, HV *hv); extern int hv_to_node_info_msg(HV *hv, node_info_msg_t *node_info_msg); extern int hv_to_update_node_msg(HV *hv, update_node_msg_t *update_msg); /********** block info conversion functions **********/ extern int block_info_to_hv(block_info_t *block_info, HV *hv); extern int hv_to_block_info(HV *hv, block_info_t *block_info); extern int block_info_msg_to_hv(block_info_msg_t *block_info_msg, HV *hv); extern int hv_to_block_info_msg(HV *hv, block_info_msg_t *block_info_msg); extern int hv_to_update_block_msg(HV *hv, update_block_msg_t *update_msg); /********** partition info conversion functions **********/ extern int partition_info_to_hv(partition_info_t *part_info, HV *hv); extern int hv_to_partition_info(HV *hv, partition_info_t *part_info); extern int partition_info_msg_to_hv(partition_info_msg_t *part_info_msg, HV *hv); extern int hv_to_partition_info_msg(HV *hv, partition_info_msg_t *part_info_msg); extern int hv_to_update_part_msg(HV *hv, update_part_msg_t *part_msg); extern int 
hv_to_delete_part_msg(HV *hv, delete_part_msg_t *delete_msg); /********** ctl config conversion functions **********/ extern int slurm_ctl_conf_to_hv(slurm_ctl_conf_t *conf, HV *hv); extern int hv_to_slurm_ctl_conf(HV *hv, slurm_ctl_conf_t *conf); extern int slurmd_status_to_hv(slurmd_status_t *status, HV *hv); extern int hv_to_slurmd_status(HV *hv, slurmd_status_t *status); extern int hv_to_step_update_request_msg(HV *hv, step_update_request_msg_t *update_msg); /********** reservation info conversion functions **********/ extern int reserve_info_to_hv(reserve_info_t *reserve_info, HV *hv); extern int hv_to_reserve_info(HV *hv, reserve_info_t *resv_info); extern int reserve_info_msg_to_hv(reserve_info_msg_t *resv_info_msg, HV *hv); extern int hv_to_reserve_info_msg(HV *hv, reserve_info_msg_t *resv_info_msg); extern int hv_to_update_reservation_msg(HV *hv, resv_desc_msg_t *resv_msg); extern int hv_to_delete_reservation_msg(HV *hv, reservation_name_msg_t *resv_name); /********* trigger info conversion functions **********/ extern int trigger_info_to_hv(trigger_info_t *info, HV *hv); extern int hv_to_trigger_info(HV *hv, trigger_info_t *info); extern int trigger_info_msg_to_hv(trigger_info_msg_t *msg, HV *hv); /********** topo info conversion functions **********/ extern int topo_info_to_hv(topo_info_t *topo_info, HV *hv); extern int hv_to_topo_info(HV *hv, topo_info_t *topo_info); extern int topo_info_response_msg_to_hv(topo_info_response_msg_t *topo_info_msg, HV *hv); extern int hv_to_topo_info_response_msg(HV *hv, topo_info_response_msg_t *topo_info_msg); /********** step launching functions **********/ extern int hv_to_slurm_step_ctx_params(HV *hv, slurm_step_ctx_params_t *params); extern int hv_to_slurm_step_launch_params(HV *hv, slurm_step_launch_params_t *params); extern void free_slurm_step_launch_params_memory(slurm_step_launch_params_t *params); /********** step launching callback functions **********/ extern void set_slcb(HV *callbacks); extern 
slurm_step_launch_callbacks_t slcb; #endif /* _SLURM_PERL_H */ slurm-slurm-15-08-7-1/contribs/perlapi/libslurm/perl/step.c000066400000000000000000000211031265000126300235130ustar00rootroot00000000000000/* * step.c - convert data between step related messages and perl HVs */ #include #include #include #include #include "ppport.h" #include "slurm-perl.h" /* * convert job_step_info_t to perl HV */ int job_step_info_to_hv(job_step_info_t *step_info, HV *hv) { int j; AV *av; STORE_FIELD(hv, step_info, array_job_id, uint32_t); STORE_FIELD(hv, step_info, array_task_id, uint32_t); if(step_info->ckpt_dir) STORE_FIELD(hv, step_info, ckpt_dir, charp); STORE_FIELD(hv, step_info, ckpt_interval, uint16_t); if(step_info->gres) STORE_FIELD(hv, step_info, gres, charp); STORE_FIELD(hv, step_info, job_id, uint32_t); if(step_info->name) STORE_FIELD(hv, step_info, name, charp); if(step_info->network) STORE_FIELD(hv, step_info, network, charp); if(step_info->nodes) STORE_FIELD(hv, step_info, nodes, charp); av = newAV(); for(j = 0; ; j += 2) { if(step_info->node_inx[j] == -1) break; av_store_int(av, j, step_info->node_inx[j]); av_store_int(av, j+1, step_info->node_inx[j+1]); } hv_store_sv(hv, "node_inx", newRV_noinc((SV*)av)); STORE_FIELD(hv, step_info, num_cpus, uint32_t); STORE_FIELD(hv, step_info, num_tasks, uint32_t); if(step_info->partition) STORE_FIELD(hv, step_info, partition, charp); if(step_info->resv_ports) STORE_FIELD(hv, step_info, resv_ports, charp); STORE_FIELD(hv, step_info, run_time, time_t); STORE_FIELD(hv, step_info, start_time, time_t); STORE_FIELD(hv, step_info, step_id, uint32_t); STORE_FIELD(hv, step_info, time_limit, uint32_t); STORE_FIELD(hv, step_info, user_id, uint32_t); STORE_FIELD(hv, step_info, state, uint32_t); return 0; } /* * convert perl HV to job_step_info_t */ int hv_to_job_step_info(HV *hv, job_step_info_t *step_info) { SV **svp; AV *av; int i, n; memset(step_info, 0, sizeof(job_step_info_t)); FETCH_FIELD(hv, step_info, array_job_id, uint32_t, TRUE); 
	FETCH_FIELD(hv, step_info, array_task_id, uint32_t, TRUE);
	FETCH_FIELD(hv, step_info, ckpt_dir, charp, FALSE);
	FETCH_FIELD(hv, step_info, ckpt_interval, uint16_t, TRUE);
	FETCH_FIELD(hv, step_info, gres, charp, FALSE);
	FETCH_FIELD(hv, step_info, job_id, uint32_t, TRUE);
	FETCH_FIELD(hv, step_info, name, charp, FALSE);
	FETCH_FIELD(hv, step_info, network, charp, FALSE);
	FETCH_FIELD(hv, step_info, nodes, charp, FALSE);
	svp = hv_fetch(hv, "node_inx", 8, FALSE);
	if (svp && SvROK(*svp) && SvTYPE(SvRV(*svp)) == SVt_PVAV) {
		av = (AV*)SvRV(*svp);
		n = av_len(av) + 2; /* for trailing -1 */
		step_info->node_inx = xmalloc(n * sizeof(int));
		for (i = 0 ; i < n-1; i += 2) {
			step_info->node_inx[i] = (int)SvIV(*(av_fetch(av, i ,FALSE)));
			step_info->node_inx[i+1] = (int)SvIV(*(av_fetch(av, i+1 ,FALSE)));
		}
		step_info->node_inx[n-1] = -1;
	} else {
		/* nothing to do */
	}
	FETCH_FIELD(hv, step_info, num_cpus, uint32_t, TRUE);
	FETCH_FIELD(hv, step_info, num_tasks, uint32_t, TRUE);
	FETCH_FIELD(hv, step_info, partition, charp, FALSE);
	FETCH_FIELD(hv, step_info, resv_ports, charp, FALSE);
	FETCH_FIELD(hv, step_info, run_time, time_t, TRUE);
	FETCH_FIELD(hv, step_info, start_time, time_t, TRUE);
	FETCH_FIELD(hv, step_info, step_id, uint32_t, TRUE);
	FETCH_FIELD(hv, step_info, time_limit, uint32_t, TRUE);
	FETCH_FIELD(hv, step_info, user_id, uint32_t, TRUE);
	FETCH_FIELD(hv, step_info, state, uint32_t, TRUE);
	return 0;
}

/*
 * convert job_step_info_response_msg_t to perl HV
 */
int job_step_info_response_msg_to_hv(
	job_step_info_response_msg_t *job_step_info_msg, HV *hv)
{
	int i;
	AV* av;
	HV* hv_info;

	STORE_FIELD(hv, job_step_info_msg, last_update, time_t);
	/* job_step_count implied in job_steps */
	av = newAV();
	for (i = 0; i < job_step_info_msg->job_step_count; i++) {
		hv_info = newHV();
		if (job_step_info_to_hv(
			    job_step_info_msg->job_steps + i, hv_info) < 0) {
			SvREFCNT_dec(hv_info);
			SvREFCNT_dec(av);
			return -1;
		}
		av_store(av, i, newRV_noinc((SV*)hv_info));
	}
	hv_store_sv(hv, "job_steps",
		    newRV_noinc((SV*)av));
	return 0;
}

/*
 * convert perl HV to job_step_info_response_msg_t
 */
int hv_to_job_step_info_response_msg(HV *hv,
		job_step_info_response_msg_t *step_info_msg)
{
	int i, n;
	SV **svp;
	AV *av;

	memset(step_info_msg, 0, sizeof(job_step_info_response_msg_t));
	FETCH_FIELD(hv, step_info_msg, last_update, time_t, TRUE);
	svp = hv_fetch(hv, "job_steps", 9, FALSE);
	if (! (svp && SvROK(*svp) && SvTYPE(SvRV(*svp)) == SVt_PVAV)) {
		Perl_warn (aTHX_ "job_steps is not an array reference in HV for job_step_info_response_msg_t");
		return -1;
	}
	av = (AV*)SvRV(*svp);
	n = av_len(av) + 1;
	step_info_msg->job_step_count = n;
	step_info_msg->job_steps = xmalloc(n * sizeof(job_step_info_t));
	for (i = 0; i < n; i++) {
		svp = av_fetch(av, i, FALSE);
		if (! (svp && SvROK(*svp) && SvTYPE(SvRV(*svp)) == SVt_PVHV)) {
			Perl_warn (aTHX_ "element %d in job_steps is not valid", i);
			return -1;
		}
		if (hv_to_job_step_info((HV*)SvRV(*svp),
					&step_info_msg->job_steps[i]) < 0) {
			Perl_warn (aTHX_ "failed to convert element %d in job_steps", i);
			return -1;
		}
	}
	return 0;
}

/*
 * convert slurm_step_layout_t to perl HV
 */
int slurm_step_layout_to_hv(slurm_step_layout_t *step_layout, HV *hv)
{
	AV* av, *av2;
	int i, j;

	if (step_layout->front_end)
		STORE_FIELD(hv, step_layout, front_end, charp);
	STORE_FIELD(hv, step_layout, node_cnt, uint16_t);
	if (step_layout->node_list)
		STORE_FIELD(hv, step_layout, node_list, charp);
	else {
		Perl_warn(aTHX_ "node_list missing in slurm_step_layout_t");
		return -1;
	}
	STORE_FIELD(hv, step_layout, plane_size, uint16_t);
	av = newAV();
	for (i = 0; i < step_layout->node_cnt; i++)
		av_store_uint16_t(av, i, step_layout->tasks[i]);
	hv_store_sv(hv, "tasks", newRV_noinc((SV*)av));
	STORE_FIELD(hv, step_layout, task_cnt, uint32_t);
	STORE_FIELD(hv, step_layout, task_dist, uint16_t);
	av = newAV();
	for (i = 0; i < step_layout->node_cnt; i++) {
		av2 = newAV();
		for (j = 0; j < step_layout->tasks[i]; j++)
			av_store_uint32_t(av2, j, step_layout->tids[i][j]);
		av_store(av, i,
newRV_noinc((SV*)av2)); } hv_store_sv(hv, "tids", newRV_noinc((SV*)av)); return 0; } /* convert job_step_pids_t to perl HV */ int job_step_pids_to_hv(job_step_pids_t *pids, HV *hv) { int i; AV *av; STORE_FIELD(hv, pids, node_name, charp); /* pid_cnt implied in pid array */ av = newAV(); for (i = 0; i < pids->pid_cnt; i ++) { av_store_uint32_t(av, i, pids->pid[i]); } hv_store_sv(hv, "pid", newRV_noinc((SV*)av)); return 0; } /* convert job_step_pids_response_msg_t to HV */ int job_step_pids_response_msg_to_hv(job_step_pids_response_msg_t *pids_msg, HV *hv) { int i = 0; ListIterator itr; AV *av; HV *hv_pids; job_step_pids_t *pids; STORE_FIELD(hv, pids_msg, job_id, uint32_t); STORE_FIELD(hv, pids_msg, step_id, uint32_t); av = newAV(); itr = slurm_list_iterator_create(pids_msg->pid_list); while ((pids = (job_step_pids_t *)slurm_list_next(itr))) { hv_pids = newHV(); if (job_step_pids_to_hv(pids, hv_pids) < 0) { Perl_warn(aTHX_ "failed to convert job_step_pids_t to hv for job_step_pids_response_msg_t"); SvREFCNT_dec(hv_pids); SvREFCNT_dec(av); slurm_list_iterator_destroy(itr); return -1; } av_store(av, i++, newRV_noinc((SV*)hv_pids)); } slurm_list_iterator_destroy(itr); hv_store_sv(hv, "pid_list", newRV_noinc((SV*)av)); return 0; } /* * convert job_step_stat_t to perl HV */ int job_step_stat_to_hv(job_step_stat_t *stat, HV *hv) { HV *hv_pids; STORE_PTR_FIELD(hv, stat, jobacct, "Slurm::jobacctinfo_t"); STORE_FIELD(hv, stat, num_tasks, uint32_t); STORE_FIELD(hv, stat, return_code, uint32_t); hv_pids = newHV(); if (job_step_pids_to_hv(stat->step_pids, hv_pids) < 0) { Perl_warn(aTHX_ "failed to convert job_step_pids_t to hv for job_step_stat_t"); SvREFCNT_dec(hv_pids); return -1; } hv_store_sv(hv, "step_pids", newRV_noinc((SV*)hv_pids)); return 0; } /* * convert job_step_stat_response_msg_t to perl HV */ int job_step_stat_response_msg_to_hv(job_step_stat_response_msg_t *stat_msg, HV *hv) { int i = 0; ListIterator itr; job_step_stat_t *stat; AV *av; HV *hv_stat; 
STORE_FIELD(hv, stat_msg, job_id, uint32_t); STORE_FIELD(hv, stat_msg, step_id, uint32_t); av = newAV(); itr = slurm_list_iterator_create(stat_msg->stats_list); while ((stat = (job_step_stat_t *)slurm_list_next(itr))) { hv_stat = newHV(); if(job_step_stat_to_hv(stat, hv_stat) < 0) { Perl_warn(aTHX_ "failed to convert job_step_stat_t to hv for job_step_stat_response_msg_t"); SvREFCNT_dec(hv_stat); SvREFCNT_dec(av); slurm_list_iterator_destroy(itr); return -1; } av_store(av, i++, newRV_noinc((SV*)hv_stat)); } slurm_list_iterator_destroy(itr); hv_store_sv(hv, "stats_list", newRV_noinc((SV*)av)); return 0; } slurm-slurm-15-08-7-1/contribs/perlapi/libslurm/perl/step_ctx.c000066400000000000000000000373021265000126300244010ustar00rootroot00000000000000/* * step_ctx.c - convert data between step context related messages and perl HVs */ #include #include #include #include "ppport.h" #include #include "slurm-perl.h" /* * convert perl HV to slurm_step_ctx_params_t */ int hv_to_slurm_step_ctx_params(HV *hv, slurm_step_ctx_params_t *params) { slurm_step_ctx_params_t_init(params); FETCH_FIELD(hv, params, ckpt_interval, uint16_t, FALSE); FETCH_FIELD(hv, params, cpu_count, uint32_t, FALSE); FETCH_FIELD(hv, params, cpu_freq_min, uint32_t, FALSE); FETCH_FIELD(hv, params, cpu_freq_max, uint32_t, FALSE); FETCH_FIELD(hv, params, cpu_freq_gov, uint32_t, FALSE); FETCH_FIELD(hv, params, exclusive, uint16_t, FALSE); FETCH_FIELD(hv, params, features, charp, FALSE); FETCH_FIELD(hv, params, immediate, uint16_t, FALSE); FETCH_FIELD(hv, params, job_id, uint32_t, FALSE); /* for slurm_step_ctx_create_no_alloc */ FETCH_FIELD(hv, params, pn_min_memory, uint32_t, FALSE); FETCH_FIELD(hv, params, ckpt_dir, charp, FALSE); FETCH_FIELD(hv, params, gres, charp, FALSE); FETCH_FIELD(hv, params, name, charp, FALSE); FETCH_FIELD(hv, params, network, charp, FALSE); FETCH_FIELD(hv, params, profile, uint32_t, FALSE); FETCH_FIELD(hv, params, no_kill, uint8_t, FALSE); FETCH_FIELD(hv, params, min_nodes, uint32_t, 
FALSE); FETCH_FIELD(hv, params, max_nodes, uint32_t, FALSE); FETCH_FIELD(hv, params, node_list, charp, FALSE); FETCH_FIELD(hv, params, overcommit, bool, FALSE); FETCH_FIELD(hv, params, plane_size, uint16_t, FALSE); FETCH_FIELD(hv, params, relative, uint16_t, FALSE); FETCH_FIELD(hv, params, resv_port_cnt, uint16_t, FALSE); FETCH_FIELD(hv, params, task_count, uint32_t, FALSE); FETCH_FIELD(hv, params, task_dist, uint16_t, FALSE); FETCH_FIELD(hv, params, time_limit, uint32_t, FALSE); FETCH_FIELD(hv, params, uid, uint32_t, FALSE); FETCH_FIELD(hv, params, verbose_level, uint16_t, FALSE); return 0; } #if 0 /* * convert job_step_create_response_msg_t to perl HV */ int job_step_create_response_msg_to_hv(job_step_create_response_msg_t *resp_msg, HV *hv) { HV *hv; STORE_FIELD(hv, resp_msg, job_step_id, uint32_t); if (resp_msg->resv_ports) STORE_FIELD(hv, resp_msg, resv_ports, charp); hv = newHV(); if (slurm_step_layout_to_hv(resp->step_layout, hv) < 0) { Perl_warn(aTHX_ "Failed to convert slurm_step_layout_t to hv for job_step_create_response_msg_t"); SvREFCNT_dec(hv); return -1; } hv_store(hv, "step_layout", 11, newRV_noinc((SV*)hv)); STORE_PTR_FIELD(hv, resp_msg, cred, "TODO"); STORE_PTR_FIELD(hv, resp_msg, switch_job, "TODO"); return 0; } #endif /* * convert perl HV to slurm_step_launch_params_t */ int hv_to_slurm_step_launch_params(HV *hv, slurm_step_launch_params_t *params) { int i, num_keys; STRLEN vlen; I32 klen; SV **svp; HV *environ_hv, *local_fds_hv, *fd_hv; AV *argv_av; SV *val; char *env_key, *env_val; slurm_step_launch_params_t_init(params); if((svp = hv_fetch(hv, "argv", 4, FALSE))) { if(SvROK(*svp) && SvTYPE(SvRV(*svp)) == SVt_PVAV) { argv_av = (AV*)SvRV(*svp); params->argc = av_len(argv_av) + 1; if (params->argc > 0) { /* memory of params MUST be free-ed by libslurm-perl */ Newz(0, params->argv, (int32_t)(params->argc + 1), char*); for(i = 0; i < params->argc; i ++) { if((svp = av_fetch(argv_av, i, FALSE))) *(params->argv + i) = (char*) SvPV_nolen(*svp); else 
{
					Perl_warn(aTHX_ "error fetching `argv' of job descriptor");
					free_slurm_step_launch_params_memory(params);
					return -1;
				}
			}
		}
	} else {
		Perl_warn(aTHX_ "`argv' of step launch params is not an array reference");
		return -1;
	}
	} else {
		Perl_warn(aTHX_ "`argv' missing in step launching params");
		return -1;
	}

	if((svp = hv_fetch(hv, "env", 3, FALSE))) {
		if(SvROK(*svp) && SvTYPE(SvRV(*svp)) == SVt_PVHV) {
			environ_hv = (HV*)SvRV(*svp);
			num_keys = HvKEYS(environ_hv);
			params->envc = num_keys;
			Newz(0, params->env, num_keys + 1, char*);

			hv_iterinit(environ_hv);
			i = 0;
			while((val = hv_iternextsv(environ_hv, &env_key, &klen))) {
				env_val = SvPV(val, vlen);
				Newz(0, (*(params->env + i)), klen + vlen + 2, char);
				/* write into env[i] to match the Newz above,
				 * not into env[0] + i */
				sprintf(*(params->env + i), "%s=%s", env_key, env_val);
				i ++;
			}
		} else {
			Perl_warn(aTHX_ "`env' of step launch params is not a hash reference, ignored");
		}
	}
	FETCH_FIELD(hv, params, cwd, charp, FALSE);
	FETCH_FIELD(hv, params, user_managed_io, bool, FALSE);
	FETCH_FIELD(hv, params, msg_timeout, uint32_t, FALSE);
	FETCH_FIELD(hv, params, buffered_stdio, bool, FALSE);
	FETCH_FIELD(hv, params, labelio, bool, FALSE);
	FETCH_FIELD(hv, params, profile, uint32_t, FALSE);
	FETCH_FIELD(hv, params, remote_output_filename, charp, FALSE);
	FETCH_FIELD(hv, params, remote_error_filename, charp, FALSE);
	FETCH_FIELD(hv, params, remote_input_filename, charp, FALSE);
	if ((svp = hv_fetch(hv, "local_fds", 9, FALSE))) {
		if (SvROK(*svp) && SvTYPE(SvRV(*svp)) == SVt_PVHV) {
			local_fds_hv = (HV*)SvRV(*svp);
			if ((svp = hv_fetch(local_fds_hv, "in", 2, FALSE))) {
				if (SvROK(*svp) && SvTYPE(SvRV(*svp)) == SVt_PVHV) {
					fd_hv = (HV*)SvRV(*svp);
					FETCH_FIELD(fd_hv, (&params->local_fds.in), fd, int, TRUE);
					FETCH_FIELD(fd_hv, (&params->local_fds.in), taskid, uint32_t, TRUE);
					FETCH_FIELD(fd_hv, (&params->local_fds.in), nodeid, uint32_t, TRUE);
				} else {
					Perl_warn(aTHX_ "`in' of local_fds is not a hash reference, ignored");
				}
			}
			if ((svp = hv_fetch(local_fds_hv, "out", 3, FALSE))) {
				if (SvROK(*svp) && SvTYPE(SvRV(*svp)) == SVt_PVHV) {
					fd_hv = (HV*)SvRV(*svp);
					FETCH_FIELD(fd_hv, (&params->local_fds.out), fd, int, TRUE);
					FETCH_FIELD(fd_hv, (&params->local_fds.out), taskid, uint32_t, TRUE);
					FETCH_FIELD(fd_hv, (&params->local_fds.out), nodeid, uint32_t, TRUE);
				} else {
					Perl_warn(aTHX_ "`out' of local_fds is not a hash reference, ignored");
				}
			}
			if ((svp = hv_fetch(local_fds_hv, "err", 3, FALSE))) {
				if (SvROK(*svp) && SvTYPE(SvRV(*svp)) == SVt_PVHV) {
					fd_hv = (HV*)SvRV(*svp);
					FETCH_FIELD(fd_hv, (&params->local_fds.err), fd, int, TRUE);
					FETCH_FIELD(fd_hv, (&params->local_fds.err), taskid, uint32_t, TRUE);
					FETCH_FIELD(fd_hv, (&params->local_fds.err), nodeid, uint32_t, TRUE);
				} else {
					Perl_warn(aTHX_ "`err' of local_fds is not a hash reference, ignored");
				}
			}
		} else {
			Perl_warn(aTHX_ "`local_fds' of step launch params is not a hash reference, ignored");
		}
	}
	FETCH_FIELD(hv, params, gid, uint32_t, FALSE);
	FETCH_FIELD(hv, params, multi_prog, bool, FALSE);
	FETCH_FIELD(hv, params, slurmd_debug, uint32_t, FALSE);
	FETCH_FIELD(hv, params, parallel_debug, bool, FALSE);
	FETCH_FIELD(hv, params, task_prolog, charp, FALSE);
	FETCH_FIELD(hv, params, task_epilog, charp, FALSE);
	FETCH_FIELD(hv, params, cpu_bind_type, uint16_t, FALSE);
	FETCH_FIELD(hv, params, cpu_bind, charp, FALSE);
	FETCH_FIELD(hv, params, cpu_freq_min, uint32_t, FALSE);
	FETCH_FIELD(hv, params, cpu_freq_max, uint32_t, FALSE);
	FETCH_FIELD(hv, params, cpu_freq_gov, uint32_t, FALSE);
	FETCH_FIELD(hv, params, mem_bind_type, uint16_t, FALSE);
	FETCH_FIELD(hv, params, mem_bind, charp, FALSE);
	FETCH_FIELD(hv, params, max_sockets, uint16_t, FALSE);
	FETCH_FIELD(hv, params, max_cores, uint16_t, FALSE);
	FETCH_FIELD(hv, params, max_threads, uint16_t, FALSE);
	FETCH_FIELD(hv, params, cpus_per_task, uint16_t, FALSE);
	FETCH_FIELD(hv, params, task_dist, uint16_t, FALSE);
	FETCH_FIELD(hv, params, preserve_env, bool, FALSE);
	FETCH_FIELD(hv, params, mpi_plugin_name, charp, FALSE);
	FETCH_FIELD(hv, params, open_mode, uint8_t, FALSE);
	FETCH_FIELD(hv, params, acctg_freq, charp, FALSE);
	FETCH_FIELD(hv,
params, pty, bool, FALSE); FETCH_FIELD(hv, params, ckpt_dir, charp, FALSE); FETCH_FIELD(hv, params, restart_dir, charp, FALSE); if((svp = hv_fetch(hv, "spank_job_env", 13, FALSE))) { if(SvROK(*svp) && SvTYPE(SvRV(*svp)) == SVt_PVHV) { environ_hv = (HV*)SvRV(*svp); num_keys = HvKEYS(environ_hv); params->spank_job_env_size = num_keys; Newz(0, params->spank_job_env, num_keys + 1, char*); hv_iterinit(environ_hv); i = 0; while((val = hv_iternextsv(environ_hv, &env_key, &klen))) { env_val = SvPV(val, vlen); Newz(0, (*(params->spank_job_env + i)), klen + vlen + 2, char); sprintf((*params->spank_job_env + i), "%s=%s", env_key, env_val); i ++; } } else { Perl_warn(aTHX_ "`spank_job_env' of step launch params is not a hash reference, ignored"); } } return 0; } /* * free allocated environment variable memory for slurm_step_launch_params_t */ static void _free_env(char** environ) { int i; if(! environ) return; for(i = 0; *(environ + i) ; i ++) Safefree(*(environ + i)); Safefree(environ); } /* * free allocated memory for slurm_step_launch_params_t */ void free_slurm_step_launch_params_memory(slurm_step_launch_params_t *params) { if (params->argv) Safefree (params->argv); _free_env(params->env); _free_env(params->spank_job_env); } /********** conversion functions for callback **********/ static int launch_tasks_response_msg_to_hv(launch_tasks_response_msg_t *resp_msg, HV *hv) { AV *av, *av2; int i; STORE_FIELD(hv, resp_msg, return_code, uint32_t); if (resp_msg->node_name) STORE_FIELD(hv, resp_msg, node_name, charp); STORE_FIELD(hv, resp_msg, srun_node_id, uint32_t); STORE_FIELD(hv, resp_msg, count_of_pids, uint32_t); if (resp_msg->count_of_pids > 0) { av = newAV(); av2 = newAV(); for (i = 0; i < resp_msg->count_of_pids; i ++) { av_store_uint32_t(av, i, resp_msg->local_pids[i]); av_store_uint32_t(av2, i, resp_msg->task_ids[i]); } hv_store_sv(hv, "local_pids", newRV_noinc((SV*)av)); hv_store_sv(hv, "task_ids", newRV_noinc((SV*)av2)); } return 0; } static int 
task_exit_msg_to_hv(task_exit_msg_t *exit_msg, HV *hv) { AV *av; int i; STORE_FIELD(hv, exit_msg, num_tasks, uint32_t); if (exit_msg->num_tasks > 0) { av = newAV(); for (i = 0; i < exit_msg->num_tasks; i ++) { av_store_uint32_t(av, i, exit_msg->task_id_list[i]); } hv_store_sv(hv, "task_id_list", newRV_noinc((SV*)av)); } STORE_FIELD(hv, exit_msg, return_code, uint32_t); STORE_FIELD(hv, exit_msg, job_id, uint32_t); STORE_FIELD(hv, exit_msg, step_id, uint32_t); return 0; } /********** callback related functions **********/ /* * In the C api, callbacks are associated with step_ctx->launch_state. * Since the callback functions have no parameter like "ctx" or "sls", * there is no simple way to map Perl callback to C callback. * * So, only one $step_ctx->launch() call is allowed in Perl, until * $step_ctx->launch_wait_finish(). */ static SV *task_start_cb_sv = NULL; static SV *task_finish_cb_sv = NULL; static PerlInterpreter *main_perl = NULL; static pthread_key_t cbs_key; typedef struct thread_callbacks { SV *step_complete; SV *step_signal; SV *step_timeout; SV *task_start; SV *task_finish; } thread_callbacks_t; static void set_thread_perl(void) { PerlInterpreter *thr_perl = PERL_GET_CONTEXT; if (thr_perl == NULL) { if (main_perl == NULL) { /* should never happen */ fprintf(stderr, "error: no main perl context\n"); exit(-1); } thr_perl = perl_clone(main_perl, CLONEf_COPY_STACKS | CLONEf_KEEP_PTR_TABLE); /* seems no need to call PERL_SET_CONTEXT(thr_perl); */ /* * seems perl will destroy the interpreter associated with * a thread automatically. */ } } #define GET_THREAD_CALLBACKS ((thread_callbacks_t *)pthread_getspecific(cbs_key)) #define SET_THREAD_CALLBACKS(cbs) (pthread_setspecific(cbs_key, (void *)cbs)) static void clear_thread_callbacks(void *arg) { thread_callbacks_t *cbs = (thread_callbacks_t *)arg; if (cbs->task_start) { /* segfault if called. 
		   dunno why */
		/* SvREFCNT_dec(cbs->task_start); */
	}
	if (cbs->task_finish) {
		/* SvREFCNT_dec(cbs->task_finish); */
	}
	xfree(cbs);
}

static void
set_thread_callbacks(void)
{
	CLONE_PARAMS params;
	thread_callbacks_t *cbs = GET_THREAD_CALLBACKS;

	if (cbs != NULL)
		return;

	cbs = xmalloc(sizeof(thread_callbacks_t));
	if (!cbs) {
		fprintf(stderr, "set_thread_callbacks: memory exhausted\n");
		exit(-1);
	}

	params.stashes = NULL;
	params.flags = CLONEf_COPY_STACKS | CLONEf_KEEP_PTR_TABLE;
	params.proto_perl = PERL_GET_CONTEXT;

	if (task_start_cb_sv != NULL && task_start_cb_sv != &PL_sv_undef) {
		cbs->task_start = sv_dup(task_start_cb_sv, &params);
	}
	if (task_finish_cb_sv != NULL && task_finish_cb_sv != &PL_sv_undef) {
		cbs->task_finish = sv_dup(task_finish_cb_sv, &params);
	}

	if (SET_THREAD_CALLBACKS(cbs) != 0) {
		fprintf(stderr, "set_thread_callbacks: failed to set thread specific value\n");
		exit(-1);
	}
}

void
set_slcb(HV *callbacks)
{
	SV **svp, *cb;

	svp = hv_fetch(callbacks, "task_start", 10, FALSE);
	cb = svp ? *svp : &PL_sv_undef;
	if (task_start_cb_sv == NULL) {
		task_start_cb_sv = newSVsv(cb);
	} else {
		sv_setsv(task_start_cb_sv, cb);
	}

	svp = hv_fetch(callbacks, "task_finish", 11, FALSE);
	cb = svp ?
*svp : &PL_sv_undef; if (task_finish_cb_sv == NULL) { task_finish_cb_sv = newSVsv(cb); } else { sv_setsv(task_finish_cb_sv, cb); } if (main_perl == NULL) { main_perl = PERL_GET_CONTEXT; if ( pthread_key_create(&cbs_key, clear_thread_callbacks) != 0) { fprintf(stderr, "set_slcb: failed to create cbs_key\n"); exit(-1); } } } static void step_complete_cb(srun_job_complete_msg_t *comp_msg) { HV *hv; thread_callbacks_t *cbs = NULL; set_thread_perl(); set_thread_callbacks(); cbs = GET_THREAD_CALLBACKS; if (cbs->step_complete == NULL) return; hv = newHV(); if (srun_job_complete_msg_to_hv(comp_msg, hv) < 0) { Perl_warn( aTHX_ "failed to prepare parameter for step_complete callback"); SvREFCNT_dec(hv); return; } dSP; ENTER; SAVETMPS; PUSHMARK(SP); XPUSHs(sv_2mortal(newRV_noinc((SV*)hv))); PUTBACK; call_sv(cbs->step_complete, G_SCALAR); FREETMPS; LEAVE; } static void step_signal_cb(int signo) { thread_callbacks_t *cbs = NULL; set_thread_perl(); set_thread_callbacks(); cbs = GET_THREAD_CALLBACKS; if (cbs->step_signal == NULL) return; dSP; ENTER; SAVETMPS; PUSHMARK(SP); XPUSHs(sv_2mortal(newSViv(signo))); PUTBACK; call_sv(cbs->step_signal, G_SCALAR); FREETMPS; LEAVE; } static void step_timeout_cb(srun_timeout_msg_t *timeout_msg) { HV *hv; thread_callbacks_t *cbs = NULL; set_thread_perl(); set_thread_callbacks(); cbs = GET_THREAD_CALLBACKS; if (cbs->step_timeout == NULL) return; hv = newHV(); if (srun_timeout_msg_to_hv(timeout_msg, hv) < 0) { Perl_warn( aTHX_ "failed to prepare parameter for step_timeout callback"); SvREFCNT_dec(hv); return; } dSP; ENTER; SAVETMPS; PUSHMARK(SP); XPUSHs(sv_2mortal(newRV_noinc((SV*)hv))); PUTBACK; call_sv(cbs->step_timeout, G_SCALAR); FREETMPS; LEAVE; } static void task_start_cb(launch_tasks_response_msg_t *resp_msg) { HV *hv; thread_callbacks_t *cbs = NULL; set_thread_perl(); set_thread_callbacks(); cbs = GET_THREAD_CALLBACKS; if (cbs->task_start == NULL) return; hv = newHV(); if (launch_tasks_response_msg_to_hv(resp_msg, hv) < 0) { Perl_warn( 
aTHX_ "failed to prepare parameter for task_start callback"); SvREFCNT_dec(hv); return; } dSP; ENTER; SAVETMPS; PUSHMARK(SP); XPUSHs(sv_2mortal(newRV_noinc((SV*)hv))); PUTBACK; call_sv(cbs->task_start, G_SCALAR); FREETMPS; LEAVE; } static void task_finish_cb(task_exit_msg_t *exit_msg) { HV *hv; thread_callbacks_t *cbs = NULL; set_thread_perl(); set_thread_callbacks(); cbs = GET_THREAD_CALLBACKS; if (cbs->task_finish == NULL) return; hv = newHV(); if (task_exit_msg_to_hv(exit_msg, hv) < 0) { Perl_warn( aTHX_ "failed to prepare parameter for task_exit callback"); SvREFCNT_dec(hv); return; } dSP; ENTER; SAVETMPS; PUSHMARK(SP); XPUSHs(sv_2mortal(newRV_noinc((SV*)hv))); PUTBACK; call_sv(cbs->task_finish, G_VOID); FREETMPS; LEAVE; } slurm_step_launch_callbacks_t slcb = { step_complete_cb, step_signal_cb, step_timeout_cb, task_start_cb, task_finish_cb }; slurm-slurm-15-08-7-1/contribs/perlapi/libslurm/perl/t/000077500000000000000000000000001265000126300226425ustar00rootroot00000000000000slurm-slurm-15-08-7-1/contribs/perlapi/libslurm/perl/t/00-use.t000077500000000000000000000004531265000126300240450ustar00rootroot00000000000000#!/usr/bin/perl -T use Test::More tests => 4; # 1 BEGIN { use_ok(Slurm, qw(:constant)); } # 2 my $slurm = Slurm::new(); ok(defined $slurm, "create slurm object with default configuration"); # 3 ok(defined SLURM_ERROR, "export constant"); # 4 cmp_ok(SLURM_ERROR, "==", -1, "constant value"); slurm-slurm-15-08-7-1/contribs/perlapi/libslurm/perl/t/01-error.t000077500000000000000000000007471265000126300244110ustar00rootroot00000000000000#!/usr/bin/perl -T use Test::More tests => 4; use Slurm qw(:constant); # 1 my $slurm = Slurm::new(); ok(defined $slurm, "create slurm object with default configuration"); # 2 my $errno = $slurm->get_errno(); ok(defined $errno, "get error number"); # 3 my $msg = $slurm->strerror(); ok(defined $msg, "get default error string"); # 4 my $errmsg = $slurm->strerror(SLURM_NO_CHANGE_IN_DATA); ok($errmsg eq "Data has not changed 
since time specified", "get specified error string"); slurm-slurm-15-08-7-1/contribs/perlapi/libslurm/perl/t/02-string.t000077500000000000000000000035561265000126300245700ustar00rootroot00000000000000#!/usr/bin/perl -T use Test::More tests => 16; use Slurm qw(:constant); # 1 my $slurm = Slurm::new(); ok(defined $slurm, "create slurm object with default configuration"); my ($str, $num); # 2 $str = $slurm->preempt_mode_string(PREEMPT_MODE_REQUEUE); cmp_ok($str, "eq", "REQUEUE", "preempt mode string"); # 3 $num = $slurm->preempt_mode_num("REQUEUE"); cmp_ok($num, "==", PREEMPT_MODE_REQUEUE, "preempt mode num"); # 4 $str = $slurm->job_reason_string(WAIT_TIME); cmp_ok($str, "eq", "BeginTime", "job reason string"); # 5 $str = $slurm->job_state_string(JOB_TIMEOUT); cmp_ok($str, "eq", "TIMEOUT", "job state string"); # 6 $str = $slurm->job_state_string_compact(JOB_TIMEOUT); cmp_ok($str, "eq", "TO", "job state string compact"); # 7 $num = $slurm->job_state_num("TIMEOUT"); cmp_ok($num, "==", JOB_TIMEOUT, "job state num"); # 8 $num = $slurm->job_state_num("TO"); cmp_ok($num, "==", JOB_TIMEOUT, "job state num compact"); # 9 $str = $slurm->reservation_flags_string(RESERVE_FLAG_DAILY); cmp_ok($str, "eq", "DAILY", "reservation flags string"); # 10 $str = $slurm->node_state_string(NODE_STATE_UNKNOWN | NODE_STATE_DRAIN); cmp_ok($str, "eq", "DRAINED", "node state string"); # 11 $str = $slurm->node_state_string_compact(NODE_STATE_UNKNOWN | NODE_STATE_DRAIN); cmp_ok($str, "eq", "DRAIN", "node state string compact"); # 12 $str = $slurm->private_data_string(PRIVATE_DATA_USAGE); cmp_ok($str, "eq", "usage", "private data string"); # 13 $str = $slurm->accounting_enforce_string(6); cmp_ok($str, "eq", "limits,wckeys", "accounting enforce string"); # 14 $str = $slurm->conn_type_string(SELECT_MESH); cmp_ok($str, "eq", "Mesh", "conn type string"); # 15 $str = $slurm->node_use_string(SELECT_VIRTUAL_NODE_MODE); cmp_ok($str, "eq", "VIRTUAL", "node use type string"); # 16 $str = 
$slurm->bg_block_state_string(4); cmp_ok($str, "eq", "Ready", "bg block state string"); slurm-slurm-15-08-7-1/contribs/perlapi/libslurm/perl/t/03-block.t000077500000000000000000000040041265000126300243420ustar00rootroot00000000000000#!/usr/bin/perl -T use Test::More tests => 8; use Slurm qw(:constant); # 1 my $slurm = Slurm::new(); ok(defined $slurm, "create slurm object with default configuration"); # 2 my $resp = $slurm->load_ctl_conf(); ok(ref($resp) eq "HASH", "load ctl conf") or diag("load_ctl_conf failed: " . $slurm->strerror()); # 3 my $bi_msg; SKIP: { skip "system not supported", 1 unless $resp->{select_type} eq "select/bluegene"; $bi_msg = $slurm->load_block_info(); ok(ref($bi_msg) eq "HASH", "load block info") or diag("load_block_info error: " . $slurm->strerror()); } # 3 SKIP: { skip "no block info msg", 1 unless $bi_msg; my ($fh, $print_ok); skip "failed to open temporary file", 1 unless open($fh, '+>', undef); $slurm->print_block_info_msg($fh, $bi_msg); seek($fh, 0, 0); while(<$fh>) { $print_ok = 1 if /^Bluegene Block data as of/; } close($fh); ok($print_ok, "print block info msg"); } # 4 SKIP: { skip "no block info msg", 1 unless $bi_msg; my ($fh, $print_ok); skip "failed to open temporary file", 1 unless open($fh, '+>', undef); $slurm->print_block_info($fh, $bi_msg->{block_array}->[0], 1); seek($fh, 0, 0); while(<$fh>) { $print_ok = 1 if /^BlockName=\w+/; } close($fh); ok($print_ok, "print block info"); } # 5 SKIP: { skip "no block info msg", 1 unless $bi_msg; my $str; $str = $slurm->sprint_block_info($bi_msg->{block_array}->[0]); ok($str =~ /^BlockName=\w+/, "sprint block info"); } # 6 - 7 SKIP: { # TODO skip "don't know how to test", 2; skip "no block info msg", 2 unless $bi_msg; skip "not super user", 2 if $>; my $block = $bi_msg->{block_array}->[0]; $rc = $slurm->update_block({}); $err_msg = $slurm->strerror() unless $rc == SLURM_SUCCESS; ok($rc == SLURM_SUCCESS, "update block") || diag("update_block failed: $err_msg"); $rc = 
$slurm->update_block({}); $err_msg = $slurm->strerror() unless $rc == SLURM_SUCCESS; ok($rc == SLURM_SUCCESS, "update block") || diag("update_block failed: $err_msg"); } slurm-slurm-15-08-7-1/contribs/perlapi/libslurm/perl/t/04-alloc.c000077500000000000000000000053401265000126300243260ustar00rootroot00000000000000#!/usr/bin/perl -T use Test::More tests => 10; use Slurm qw(:constant); use POSIX qw(:signal_h); my ($resp, $job_desc, $jobid, $hostlist, $callbacks, $thr, $port, $file); # 1 my $slurm = Slurm::new(); ok(defined $slurm, "create slurm object with default configuration"); $job_desc = { min_nodes => 1, user_id => $>, num_tasks => 1, name => "perlapi_test" }; # 2 $resp = $slurm->allocate_resources($job_desc); ok(defined $resp, "allocate resources") or diag("allocate_resources: " . $slurm->strerror()); $slurm->kill_job($resp->{job_id}, SIGKILL) if $resp; # 3 $resp = $slurm->allocate_resources_blocking($job_desc, 10, sub {$jobid = shift;}); $jobid = $resp->{job_id} if $resp; ok($jobid, "allocate resources blocking") or diag("allocate_resources_blocking: " . $slurm->strerror()); # 4 SKIP: { skip "resource allocation fail", 1 unless $jobid; $resp = $slurm->allocation_lookup($jobid); ok(defined $resp, "allocation lookup") or diag("allocation_lookup: " . $slurm->strerror()); } # 5 SKIP: { skip "resource allocation fail", 1 unless $jobid; $resp = $slurm->allocation_lookup_lite($jobid); ok(defined $resp, "allocation lookup lite") or diag("allocation_lookup_lite: " . 
$slurm->strerror()); } # 6 $callbacks = { ping => sub { print STDERR "ping from slurmctld, $_->{job_id}.$_->{step_id}\n"; }, job_complete => sub { print STDERR "job complete, $_->{job_id}.$_->{step_id}\n"; }, timeout => sub { print STDERR "srun timeout, $_->{job_id}.$_->{step_id}, $_->{timeout}\n"; }, user_msg => sub { print STDERR "user msg, $_->{job_id}, $_->{msg}\n";}, node_fail => sub { print STDERR "node fail, $_->{job_id}.$_->{step_id}, $_->{nodelist}\n";}, }; $thr = $slurm->allocation_msg_thr_create($port, $callbacks); ok(ref($thr) eq "Slurm::allocation_msg_thread_t" && defined $port, "allocation msg thr create") or diag("allocation_msg_thr_create: " . $slurm->strerror()); $slurm->allocation_msg_thr_destroy($thr) if $thr; # 7 SKIP: { skip "resource allocation fail", 1 unless $jobid; $resp = $slurm->sbcast_lookup($jobid); ok(defined $resp, "sbcast lookup") or diag("sbcast_lookup: " . $slurm->strerror()); } $slurm->kill_job($jobid, SIGKILL) if $jobid; # 8 $job_desc->{script} = "#!/bin/sh\nsleep 1000\n"; $resp = $slurm->submit_batch_job($job_desc); ok($resp, "submit batch job") or diag("submit_batch_job: " . $slurm->strerror()); $slurm->kill_job($resp->{job_id}, SIGKILL) if $resp; # 9 $rc = $slurm->job_will_run($job_desc); ok(defined $rc, "job will run") or diag("job_will_run: " . 
$slurm->strerror()); # 10 SKIP: { skip "do not know how to test", 1 if 1; $hl = $slurm->read_hostfile($file, 8); ok($hl eq "node0,node1,node2,node3,node4,node5,node6,node7", "read hostfile"); } slurm-slurm-15-08-7-1/contribs/perlapi/libslurm/perl/t/05-signal.t000077500000000000000000000027611265000126300245370ustar00rootroot00000000000000#!/usr/bin/perl -T use Test::More tests => 6; use Slurm qw(:constant); use POSIX qw(:signal_h); my ($job_desc, $rc, $jobid, $resp); # 1 my $slurm = Slurm::new(); ok(defined $slurm, "create slurm object with default configuration"); # 2 my %env = ('PATH' => $ENV{'PATH'}); $job_desc = { min_nodes => 1, num_tasks => 1, user_id => $>, group_id => $>, script => "#!/bin/sh\ntrap '/bin/true' SIGUSR1\nsrun sleep 1000\nsrun sleep 1000\nsrun sleep 1000\nsleep 1000", name => "perlapi_test", std_out => "/dev/null", std_err => "/dev/null", work_dir => "/tmp", environment => \%env, }; $resp = $slurm->submit_batch_job($job_desc); ok($resp, "submit batch job") or diag ("submit_batch_job: " . $slurm->strerror()); $jobid = $resp->{job_id} if $resp; sleep 2; # 3 - 6 SKIP: { skip "no job", 4 unless $jobid; $rc = $slurm->signal_job($jobid, SIGUSR1); ok($rc == SLURM_SUCCESS, "signal job") or diag("signal_job: " . $slurm->strerror()); $rc = $slurm->signal_job_step($jobid, 0, SIGUSR1); ok($rc == SLURM_SUCCESS, "signal job step") or diag("signal_job_step: " . $slurm->strerror()); $rc = $slurm->kill_job_step($jobid, 1, SIGUSR1); ok($rc == SLURM_SUCCESS || $slurm->get_errno() == ESLURM_INVALID_JOB_ID, "kill job step") or diag("kill_job_step: " . $slurm->strerror()); $rc = $slurm->kill_job($jobid, SIGUSR1, 1); ok($rc == SLURM_SUCCESS, "kill job") or diag("kill_job: " . 
$slurm->strerror()); } $slurm->kill_job($jobid, SIGKILL) if $jobid; slurm-slurm-15-08-7-1/contribs/perlapi/libslurm/perl/t/06-complete.t000077500000000000000000000022311265000126300250630ustar00rootroot00000000000000#!/usr/bin/perl -T use Test::More tests => 4; use Slurm qw(:constant); use POSIX qw(:signal_h); # 1 my $slurm = Slurm::new(); ok(defined $slurm, "create slurm object with default configuration"); my ($job_desc, $rc, $jobid, $resp); # 2 my %env = ('PATH' => $ENV{'PATH'}); $job_desc = { min_nodes => 1, num_tasks => 1, user_id => $>, group_id => $>, script => "#!/bin/sh\nsrun sleep 1000\nsleep 1000", name => "perlapi_test", std_out => "/dev/null", std_err => "/dev/null", work_dir => "/tmp", environment => \%env, }; $resp = $slurm->submit_batch_job($job_desc); ok($resp, "submit batch job") or diag ("submit_batch_job: " . $slurm->strerror()); $jobid = $resp->{job_id} if $resp; sleep 2; # 3 - 4 SKIP: { skip "no job", 4 unless $jobid; $rc = $slurm->complete_job($jobid, 123); ok($rc == SLURM_SUCCESS, "complete job") or diag("complete_job: " . $slurm->strerror()); $rc = $slurm->terminate_job_step($jobid, 0); ok($rc == SLURM_SUCCESS || $slurm->get_errno() == ESLURM_ALREADY_DONE, "terminate job step") or diag("terminate_job_step: " . 
$slurm->strerror()); } $slurm->kill_job($jobid, SIGKILL) if $jobid; slurm-slurm-15-08-7-1/contribs/perlapi/libslurm/perl/t/07-spawn.t000077500000000000000000000122301265000126300244040ustar00rootroot00000000000000#!/usr/bin/perl -T use Test::More tests => 17; use Slurm qw(:constant); use POSIX qw(:signal_h); use Devel::Peek; my ($resp, $job_desc, $jobid, $hostlist, $callbacks, $thr, $port, $file, $params, $rc, $data); # 1 my $slurm = Slurm::new(); ok(defined $slurm, "create slurm object with default configuration"); $job_desc = { min_nodes => 1, user_id => $>, num_tasks => 1, name => "perlapi_test" }; $callbacks = { ping => sub { print STDERR "ping from slurmctld, $_->{job_id}.$_->{step_id}\n"; }, job_complete => sub { print STDERR "job complete, $_->{job_id}.$_->{step_id}\n"; }, timeout => sub { print STDERR "srun timeout, $_->{job_id}.$_->{step_id}, $_->{timeout}\n"; }, user_msg => sub { print STDERR "user msg, $_->{job_id}, $_->{msg}\n";}, node_fail => sub { print STDERR "node fail, $_->{job_id}.$_->{step_id}, $_->{nodelist}\n";}, }; # 2 $thr = $slurm->allocation_msg_thr_create($port, $callbacks); ok(ref($thr) eq "Slurm::allocation_msg_thread_t" && defined $port, "allocation msg thr create") or diag("allocation_msg_thr_create: " . $slurm->strerror()); # 3 $resp = $slurm->allocate_resources_blocking($job_desc, 10, sub {$jobid = shift;}); $jobid = $resp->{job_id} if $resp; ok($jobid, "allocate resources blocking") or diag("allocate_resources_blocking: " . $slurm->strerror()); $params = { job_id => $jobid, name => "perlapi_test", min_nodes => 1, task_count => 1, }; # 4 my $ctx = $slurm->step_ctx_create($params); ok(defined $ctx, "step ctx create") or diag("step_ctx_create: " . $slurm->strerror()); $params->{node_list} = $resp->{node_list}; # 5 my $ctx2 = $slurm->step_ctx_create_no_alloc($params, 3); ok(defined $ctx2, "step ctx create no alloc") or diag("step_ctx_create_no_alloc: " . 
$slurm->strerror()); # 6 - 8 SKIP: { skip "no step ctx", 3 unless $ctx; foreach my $key (SLURM_STEP_CTX_JOBID, SLURM_STEP_CTX_STEPID, SLURM_STEP_CTX_NUM_HOSTS) { undef($data); $rc = $ctx->get($key, $data); ok($rc == SLURM_SUCCESS && defined $data, "step ctx get $key") or diag("step_ctx_get: $key, $rc, $data, " . $slurm->strerror()); } } # 9 SKIP: { skip "no step ctx", 1 unless $ctx; foreach my $key (SLURM_STEP_CTX_TASKS) { undef $data; $rc = $ctx->get($key, $data); ok($rc == SLURM_SUCCESS && ref($data) eq "ARRAY", "step ctx get TASKS") or diag("step_ctx_get: TASKS, $rc, $data, " . $slurm->strerror()); } } # 10 SKIP: { skip "no step ctx", 1 unless $ctx; foreach my $key (SLURM_STEP_CTX_TID) { undef $data; $rc = $ctx->get($key, 0, $data); ok($rc == SLURM_SUCCESS && ref($data) eq "ARRAY", "step ctx get TID") or diag("step_ctx_get: TID, $rc, $data, " . $slurm->strerror()); } } # 11 SKIP: { skip "no step ctx", 1 unless $ctx; foreach my $key (SLURM_STEP_CTX_CRED) { undef $data; $rc = $ctx->get($key, $data); ok($rc == SLURM_SUCCESS && ref($data) eq "Slurm::slurm_cred_t", "step ctx get CRED") or diag("step_ctx_get: CRED, $rc, $data, " . $slurm->strerror()); } } # 12 SKIP: { skip "no step ctx", 1 unless $ctx; foreach my $key (SLURM_STEP_CTX_SWITCH_JOB) { undef $data; $rc = $ctx->get($key, $data); ok($rc == SLURM_SUCCESS && (!defined($data) || ref($data) eq "Slurm::switch_jobinfo_t"), "step ctx get SWITCH_JOB") or diag("step_ctx_get: SWITCH_JOB, $rc, $data, " . $slurm->strerror()); } } # 13 SKIP: { skip "no step ctx", 1 unless $ctx; foreach my $key (SLURM_STEP_CTX_HOST) { undef $data; $rc = $ctx->get($key, 0, $data); ok($rc == SLURM_SUCCESS && defined $data, "step ctx get HOST") or diag("step_ctx_get: HOST, $rc, $data, " . 
$slurm->strerror()); } } # 14 SKIP: { skip "no step ctx", 1 unless $ctx; foreach my $key (SLURM_STEP_CTX_USER_MANAGED_SOCKETS) { my ($data1, $data2); $rc = $ctx->get($key, $data1, $data2); ok(($rc == SLURM_SUCCESS && defined $data1 && ref($data2) eq "ARRAY") || ($rc == SLURM_ERROR && $data1 == 0 && !defined($data2)), "step ctx get UMS") or diag("step_ctx_get: UMS, $rc, $data1, $data2" . $slurm->strerror()); } } # 15 SKIP: { skip "no step ctx", 1 unless $ctx2; my ($data1); $data1 = 1; $rc = $ctx2->daemon_per_node_hack("test", 1, \$data1); ok($rc == SLURM_SUCCESS, "daemon per node hack") or diag("step ctx daemon per node hack" . $slurm->strerror()); } # 16 $params = { argv => ["/bin/hostname"], }; $callbacks = { task_start => sub { my $msg = shift; print STDERR "\ntask_start: $msg->{node_name}, $msg->{count_of_pids}\n";}, task_finish => sub { my $msg = shift; print STDERR "\ntask_finish: " . join(", ", @{$msg->{task_id_list}}) . "\n";}, }; SKIP: { skip "no step ctx", 1 unless $ctx; $rc = $ctx->launch( $params, $callbacks); ok($rc == SLURM_SUCCESS, "step ctx launch") or diag("step_ctx_launch" . $slurm->strerror()); } # 17 SKIP: { skip "no step ctx", 1 unless $ctx; $rc = $ctx->launch_wait_start(); ok($rc == SLURM_SUCCESS, "step ctx wait start") or diag("step_ctx_wait_start: $rc, " . 
$slurm->strerror()); } if ($ctx) { $ctx->launch_fwd_signal(SIGINT); $ctx->launch_wait_finish(); $ctx->launch_abort(); } $slurm->allocation_msg_thr_destroy($thr) if $thr; $slurm->kill_job($jobid, SIGKILL) if $jobid; slurm-slurm-15-08-7-1/contribs/perlapi/libslurm/perl/t/08-conf.t000077500000000000000000000027441265000126300242130ustar00rootroot00000000000000#!/usr/bin/perl -T use Test::More tests => 9; use Slurm ':constant'; # 1 my $slurm = Slurm::new(); ok(defined $slurm, "create slurm object with default configuration"); # 2 my ($major, $minor, $micro) = $slurm->api_version(); ok(defined $micro, "api version"); # 3 my $resp = $slurm->load_ctl_conf(); ok(ref($resp) eq "HASH", "load ctl conf"); # 4 SKIP: { my ($fh, $print_ok); skip "failed to open temporary file", 1 unless open($fh, '+>', undef); $slurm->print_ctl_conf($fh, $resp); seek($fh, 0, 0); while(<$fh>) { $print_ok = 1 if /^ControlMachine\s+=\s+\w+$/; } close($fh); ok($print_ok, "print ctl conf"); } # 5 my $list = $slurm->ctl_conf_2_key_pairs($resp); ok(ref($list) eq "Slurm::List", "ctl conf 2 key pairs"); # 6 $resp = $slurm->load_slurmd_status(); ok((defined $resp || $slurm->strerror() eq "Connection refused"), "load slurmd status"); # 7 SKIP: { my ($fh, $print_ok); skip "this is not a compute node", 1 unless defined $resp; skip "failed to open temporary file", 1 unless open($fh, '+>', undef); $slurm->print_slurmd_status($fh, $resp); seek($fh, 0, 0); while(<$fh>) { $print_ok = 1 if /^Slurmd PID\s+=\s+\d+$/; } close($fh); ok($print_ok, "print slurmd status"); } # 8 TODO: { my ($fh, $print_ok); local $TODO = "do not know how to test"; ok($print_ok, "print key pairs"); } # 9 TODO: { my $update_ok; local $TODO = "do not know how to test"; ok($update_ok, "update step"); } slurm-slurm-15-08-7-1/contribs/perlapi/libslurm/perl/t/09-resource.t000077500000000000000000000023271265000126300251130ustar00rootroot00000000000000#!/usr/bin/perl -T use Test::More tests => 5; use Slurm qw(:constant); # 1 my $slurm = 
Slurm::new(); ok(defined $slurm, "create slurm object with default configuration"); # 2 my $job_desc = { min_nodes => 1, num_tasks => 1, user_id => $>, script => "#!/bin/sh\nsleep 1000\n", name => "perlapi_test", stdout => "/dev/null", stderr => "/dev/null", }; $resp = $slurm->submit_batch_job($job_desc); ok($resp, "submit batch job") or diag ("submit_batch_job: " . $slurm->strerror()); # 3 $resp = $slurm->load_jobs(0, SHOW_DETAIL); ok(ref($resp) eq "HASH", "load jobs") or diag("load_jobs: " . $slurm->strerror()); my ($job, $resrcs); foreach (@{$resp->{job_array}}) { if ($_->{job_resrcs}) { $resrcs = $_->{job_resrcs}; $job = $_; last; } } # 4, 5 SKIP: { skip "no job resources", 2 unless $resrcs; my $cnt = $slurm->job_cpus_allocated_on_node_id($resrcs, 0); ok($cnt, "job cpus allocated on node id") or diag("job_cpus_allocated_on_node_id: $cnt"); my $hl = Slurm::Hostlist::create($job->{nodes}); my $node = $hl->shift; $cnt = $slurm->job_cpus_allocated_on_node($resrcs, $node); ok($cnt, "job cpus allocated on node") or diag("job_cpus_allocated_on_node: $cnt"); } slurm-slurm-15-08-7-1/contribs/perlapi/libslurm/perl/t/10-job.t000077500000000000000000000047521265000126300240320ustar00rootroot00000000000000#!/usr/bin/perl -T use Test::More tests => 13; use Slurm qw(:constant); use POSIX qw(:signal_h); # 1 my $slurm = Slurm::new(); ok(defined $slurm, "create slurm object with default configuration"); my ($jobid, $time, $resp, $rc); # 2 my $job_desc = { min_nodes => 1, num_tasks => 1, user_id => $>, script => "#!/bin/sh\nsleep 1000\n", name => "perlapi_test", stdout => "/dev/null", stderr => "/dev/null", }; $resp = $slurm->submit_batch_job($job_desc); ok($resp, "submit batch job") or diag ("submit_batch_job: " . $slurm->strerror()); $jobid = $resp->{job_id} if $resp; # 3, 4 ,5 SKIP: { skip "submit job failure", 3 unless $jobid; $time = $slurm->get_end_time($jobid); ok(defined $time, "get end time") or diag("get_end_time: " . 
$slurm->strerror()); $time = $slurm->get_rem_time($jobid); ok($time != -1, "get rem time") or diag("get_rem_time: " . $slurm->strerror()); $rc = $slurm->job_node_ready($jobid); ok(defined $rc, "job node ready") or diag("job_node_ready: " . $slurm->strerror()); } # 6 $resp = $slurm->load_job($jobid); ok(ref($resp) eq "HASH", "load job") or diag("load_job: " . $slurm->strerror()); # 7 $resp = $slurm->load_jobs(0, 1); ok(ref($resp) eq "HASH", "load jobs") or diag("load_job: " . $slurm->strerror()); # 8 undef $rc; $rc = $slurm->notify_job($jobid, "perl api test"); ok(defined $rc, "notify job") or diag("notify_job: " . $slurm->strerror()); # 9 TODO: { local $TODO = "do not know how to test"; my $jid = $slurm->pid2jobid(1234); ok(defined $jid, "pid2jobid"); } # 10 SKIP: { my ($fh, $print_ok); skip "failed to open temporary file", 1 unless open($fh, '+>', undef); $slurm->print_job_info($fh, $resp->{job_array}->[0]); seek($fh, 0, 0); while(<$fh>) { $print_ok = 1 if /^JobId=\d+/; } close($fh); ok($print_ok, "print job info"); } # 11 SKIP: { my ($fh, $print_ok); skip "failed to open temporary file", 1 unless open($fh, '+>', undef); $slurm->print_job_info_msg($fh, $resp); seek($fh, 0, 0); while(<$fh>) { $print_ok = 1 if /^Job data as of/; } close($fh); ok($print_ok, "print job info msg"); } # 12 { my ($fh, $print_ok); my $str = $slurm->sprint_job_info($resp->{job_array}->[0], 1); $print_ok = 1 if $str =~ /^JobId=\d+/; ok($print_ok, "print job step info"); } # 13 undef $rc; $rc = $slurm->update_job( { job_id => $jobid, timelimit => 100 } ); ok(defined $rc, "update job"); $slurm->kill_job($jobid, SIGKILL) if $jobid; slurm-slurm-15-08-7-1/contribs/perlapi/libslurm/perl/t/11-step.t000077500000000000000000000043561265000126300242340ustar00rootroot00000000000000#!/usr/bin/perl -T use Test::More tests => 8; use Slurm qw(:constant); # 1 my $slurm = Slurm::new(); ok(defined $slurm, "create slurm object with default configuration"); # 2 my $resp = $slurm->get_job_steps(); ok(ref($resp) 
eq "HASH", "get job steps"); # 3 SKIP: { my ($fh, $print_ok); skip "failed to open temporary file", 1 unless open($fh, '+>', undef); $slurm->print_job_step_info_msg($fh, $resp, 1); seek($fh, 0, 0); while(<$fh>) { $print_ok = 1 if /^Job step data as/; } close($fh); ok($print_ok, "print job step info msg"); } # 4 SKIP: { my ($fh, $print_ok); skip "no steps in system", 1 unless @{$resp->{job_steps}}; skip "failed to open temporary file", 1 unless open($fh, '+>', undef); $slurm->print_job_step_info($fh, $resp->{job_steps}->[0], 1); seek($fh, 0, 0); while(<$fh>) { $print_ok = 1 if /^StepId=\d+/; } close($fh); ok($print_ok, "print job step info"); } # 5 SKIP: { my ($fh, $print_ok); skip "no steps in system", 1 unless @{$resp->{job_steps}}; my $str = $slurm->sprint_job_step_info($resp->{job_steps}->[0], 1); $print_ok = 1 if $str =~ /^StepId=\d+/; ok($print_ok, "print job step info"); } # 6 SKIP: { skip "no steps in system", 1 unless @{$resp->{job_steps}}; my $layout = $slurm->job_step_layout_get($resp->{job_steps}->[0]->{job_id}, $resp->{job_steps}->[0]->{step_id}); ok(ref($layout) eq "HASH" || $slurm->get_errno() == ESLURM_INVALID_JOB_ID, "job step layout get") or diag("job_step_layout_get: " . $slurm->strerror()); } # 7 SKIP: { skip "no steps in system", 1 unless @{$resp->{job_steps}}; my $layout = $slurm->job_step_stat($resp->{job_steps}->[0]->{job_id}, $resp->{job_steps}->[0]->{step_id}); ok(ref($layout) eq "HASH" || $slurm->get_errno() == ESLURM_INVALID_JOB_ID, "job step stat") or diag("job_step_stat: " . $slurm->strerror()); } # 8 SKIP: { skip "no steps in system", 1 unless @{$resp->{job_steps}}; my $layout = $slurm->job_step_get_pids($resp->{job_steps}->[0]->{job_id}, $resp->{job_steps}->[0]->{step_id}); ok(ref($layout) eq "HASH" || $slurm->get_errno() == ESLURM_INVALID_JOB_ID, "job step get pids") or diag("job_step_get_pids: " . 
$slurm->strerror()); } slurm-slurm-15-08-7-1/contribs/perlapi/libslurm/perl/t/12-node.t000077500000000000000000000030721265000126300242010ustar00rootroot00000000000000#!/usr/bin/perl -T use Test::More tests => 7; use Slurm qw(:constant); # 1 my $slurm = Slurm::new(); ok(defined $slurm, "create slurm object with default configuration"); # 2 my $resp = $slurm->load_node(); ok(ref($resp) eq "HASH", "load node"); # 3 SKIP: { my ($fh, $print_ok); skip "failed to open temporary file", 1 unless open($fh, '+>', undef); $slurm->print_node_info_msg($fh, $resp); seek($fh, 0, 0); while(<$fh>) { $print_ok = 1 if /^NodeName=\w+/; } close($fh); ok($print_ok, "print node info msg"); } # 4 SKIP: { my ($fh, $print_ok); skip "failed to open temporary file", 1 unless open($fh, '+>', undef); $slurm->print_node_table($fh, $resp->{node_array}->[0], 1, 1); seek($fh, 0, 0); while(<$fh>) { $print_ok = 1 if /^NodeName=\w+/; } close($fh); ok($print_ok, "print node table"); } # 5 my $str; $str = $slurm->sprint_node_table($resp->{node_array}->[0]); ok($str =~ /^NodeName=\w+/, "sprint node table"); # 6 - 7 SKIP: { skip "You are not super user", 2 if $>; my $node = $resp->{node_array}->[0]; $rc = $slurm->update_node({node_names => $node->{name}, state => NODE_STATE_DRAIN, reason => 'perlapi test'}); $err_msg = $slurm->strerror() unless $rc == SLURM_SUCCESS; ok($rc == SLURM_SUCCESS, "update node") || diag("update_node failed: $err_msg"); $rc = $slurm->update_node({node_names => $node->{name}, state => NODE_RESUME, features => 'test'}); $err_msg = $slurm->strerror() unless $rc == SLURM_SUCCESS; ok($rc == SLURM_SUCCESS, "update node") || diag("update_node failed: $err_msg"); } slurm-slurm-15-08-7-1/contribs/perlapi/libslurm/perl/t/13-topo.t000077500000000000000000000017601265000126300242400ustar00rootroot00000000000000#!/usr/bin/perl -T use Test::More tests => 4; use Slurm qw(:constant); # 1 my $slurm = Slurm::new(); ok(defined $slurm, "create slurm object with default configuration"); # 2 my $resp 
= $slurm->load_topo(); ok(ref($resp) eq "HASH", "load topo"); my $rec_cnt = @{$resp->{topo_array}}; # 3 SKIP: { my ($fh, $print_ok); skip "failed to open temporary file", 1 unless open($fh, '+>', undef); if ($rec_cnt) { $slurm->print_topo_info_msg($fh, $resp); seek($fh, 0, 0); while(<$fh>) { $print_ok = 1 if /^Switch\s+=\s+\w+$/; } } close($fh); ok($print_ok || $rec_cnt == 0, "print topo info msg"); } # 4 SKIP: { my ($fh, $print_ok); skip "no topo record available", 1 unless $rec_cnt; skip "failed to open temporary file", 1 unless open($fh, '+>', undef); $slurm->print_topo_record($fh, $resp->{topo_array}->[0], 1); seek($fh, 0, 0); while(<$fh>) { $print_ok = 1 if /^Switch\s+=\s+\w+$/; } close($fh); ok($print_ok, "print topo record"); } slurm-slurm-15-08-7-1/contribs/perlapi/libslurm/perl/t/14-select.t000077500000000000000000000065051265000126300245410ustar00rootroot00000000000000#!/usr/bin/perl -T use Test::More tests => 26; use Slurm qw(:constant); use POSIX qw(:signal_h); use Devel::Peek; # 1 my $slurm = Slurm::new(); ok(defined $slurm, "create slurm object with default configuration"); my ($resp, $rc, $type, $select_type, $jobid, $jobinfo, $nodeinfo, $data); # 2 $resp = $slurm->load_ctl_conf(); ok(ref($resp) eq "HASH", "load ctl conf"); $select_type = substr($resp->{select_type}, 7); # 3 my $job_desc = { min_nodes => 1, num_tasks => 1, user_id => $>, script => "#!/bin/sh\nsleep 1000\n", name => "perlapi_test", stdout => "/dev/null", stderr => "/dev/null", }; $resp = $slurm->submit_batch_job($job_desc); ok($resp, "submit batch job") or diag ("submit_batch_job: " . $slurm->strerror()); $jobid = $resp->{job_id} if $resp; # 4 SKIP: { skip "job submit failure", 1 unless $jobid; $resp = $slurm->load_job($jobid); ok($resp, "load job info") or diag("load_job: " . 
$slurm->strerror()); $jobinfo = $resp->{job_array}->[0]->{select_jobinfo}; } my $jobinfo_data = { SELECT_JOBDATA_GEOMETRY() => [ "ARRAY", [qw(bluegene)] ], SELECT_JOBDATA_ROTATE() => ["", [qw(bluegene)]], SELECT_JOBDATA_CONN_TYPE() => ["", [qw(bluegene)]], SELECT_JOBDATA_ALTERED() => ["", [qw(bluegene)]], SELECT_JOBDATA_REBOOT() => ["", [qw(bluegene)]], SELECT_JOBDATA_NODE_CNT() => ["", [qw(bluegene)]], SELECT_JOBDATA_RESV_ID() => ["", [qw(cray)]], SELECT_JOBDATA_BLOCK_ID() => ["", [qw(bluegene)]], SELECT_JOBDATA_NODES() => ["", [qw(bluegene)]], SELECT_JOBDATA_IONODES() => ["", [qw(bluegene)]], SELECT_JOBDATA_BLRTS_IMAGE() => ["", [qw(bluegene)]], SELECT_JOBDATA_LINUX_IMAGE() => ["", [qw(bluegene)]], SELECT_JOBDATA_MLOADER_IMAGE() => ["", [qw(bluegene)]], SELECT_JOBDATA_RAMDISK_IMAGE() => ["", [qw(bluegene)]], SELECT_JOBDATA_PTR() => ["Slurm::select_jobinfo_t", [qw(cray)]], }; # 5 - 19 foreach $type (0 .. SELECT_JOBDATA_PTR) { SKIP: { skip "job submit failure", 1 unless $jobinfo; skip "plugin not supported", 1 unless grep {$select_type eq $_} @{$jobinfo_data->{$type}->[1]}; $rc = $slurm->get_select_jobinfo($jobinfo, $type, $data); ok($rc == SLURM_SUCCESS && ref($data) eq $jobinfo_data->{$type}->[0], "get select jobinfo $type") or diag("get select jobinfo $type: $rc, " . ref($data)); } } $slurm->kill_job($jobid, SIGKILL) if $jobid; # 20 $resp = $slurm->load_node(); ok(ref($resp) eq "HASH", "load node"); $nodeinfo = $resp->{node_array}->[0]->{select_nodeinfo}; my $nodeinfo_data = { SELECT_NODEDATA_BITMAP_SIZE() => ["", [qw(bluegene)]], SELECT_NODEDATA_SUBGRP_SIZE() => ["", [qw(bluegene linear cons_res)]], SELECT_NODEDATA_SUBCNT() => ["", [qw(bluegene linear cons_res)]], SELECT_NODEDATA_BITMAP() => ["Slurm::Bitstr", [qw(bluegene)]], SELECT_NODEDATA_STR() => ["", [qw(bluegene)]], SELECT_NODEDATA_PTR() => ["Slurm::select_nodeinfo_t", [qw(linear cray cons_res)]], }; # 21 - 26 foreach $type (0 .. 
SELECT_NODEDATA_PTR) { SKIP: { skip "plugin not supported", 1 unless grep {$select_type eq $_} @{$nodeinfo_data->{$type}->[1]}; $rc = $slurm->get_select_nodeinfo($nodeinfo, $type, NODE_STATE_ALLOCATED, $data); ok ($rc == SLURM_SUCCESS && ref($data) eq $nodeinfo_data->{$type}->[0], "get select nodeinfo $type") or diag("get select nodeinfo $type: $rc, " . ref($data)); } } slurm-slurm-15-08-7-1/contribs/perlapi/libslurm/perl/t/15-partition.t000077500000000000000000000036501265000126300252720ustar00rootroot00000000000000#!/usr/bin/perl -T use Test::More tests => 9; use Slurm qw(:constant); # 1 my $slurm = Slurm::new(); ok(defined $slurm, "create slurm object with default configuration"); # 2 my $resp; $resp = $slurm->load_partitions(); ok(ref($resp) eq "HASH", "load partitions"); # 3 SKIP: { my ($fh, $print_ok); skip "failed to open temporary file", 1 unless open($fh, '+>', undef); $slurm->print_partition_info_msg($fh, $resp); seek($fh, 0, 0); while(<$fh>) { $print_ok = 1 if /^Partition data as of/; } close($fh); ok($print_ok, "print partition info msg"); } # 4 SKIP: { my ($fh, $print_ok); skip "failed to open temporary file", 1 unless open($fh, '+>', undef); $slurm->print_partition_info($fh, $resp->{partition_array}->[0]); seek($fh, 0, 0); while(<$fh>) { $print_ok = 1 if /^PartitionName=\w+/; } close($fh); ok($print_ok, "print partition info"); } # 5 my $str = $slurm->sprint_partition_info($resp->{partition_array}->[0]); ok(defined $str && $str =~ /^PartitionName=\w+/, "sprint partition info") or diag("sprint_partition_info: $str"); # 6 $resp = $slurm->load_node(); ok(ref($resp) eq "HASH", "load node"); my $node_name = $resp->{node_array}->[0]->{name}; my $part_name = "perlapi_test"; my $rc; # 7 SKIP: { skip "not super user", 1 if $>; $rc = $slurm->create_partition({name => $part_name, nodes => $node_name}); ok($rc == SLURM_SUCCESS, "create partition") || diag("create partition: " . 
$slurm->strerror()); } # 8 SKIP: { skip "not super user", 1 if $>; $rc = $slurm->update_partition({name => $part_name, flags => PART_FLAG_ROOT_ONLY}); ok($rc == SLURM_SUCCESS, "update partition") || diag("update partition: " . $slurm->strerror()); } # 9 SKIP: { skip "not super user", 1 if $>; $rc = $slurm->delete_partition({name => $part_name}); ok($rc == SLURM_SUCCESS, "delete partition") || diag("delete partition: " . $slurm->strerror()); } slurm-slurm-15-08-7-1/contribs/perlapi/libslurm/perl/t/16-reservation.t000077500000000000000000000043101265000126300256150ustar00rootroot00000000000000#!/usr/bin/perl -T use Test::More tests => 8; use Slurm qw(:constant); # 1 my $slurm = Slurm::new(); ok(defined $slurm, "create slurm object with default configuration"); # 2 my $resv; SKIP: { skip "not super user", 1 if $>; $resv = $slurm->create_reservation( { start_time => NO_VAL, duration => 100, flags => RESERVE_FLAG_OVERLAP | RESERVE_FLAG_IGN_JOBS, users => 'root', node_cnt => 1 } ); ok(defined $resv, "create reservation") || diag ("create_reservation: " . $slurm->strerror()); } # 3 SKIP: { skip "not super user", 1 if $>; my $rc = $slurm->update_reservation( { name => $resv, duration => 20 }); ok($rc == SLURM_SUCCESS, "update reservation") || diag ("update_reservation: " . 
$slurm->strerror()); } # 4 my $resp = $slurm->load_reservations(); ok(ref($resp) eq "HASH", "load reservations"); # 5 SKIP: { my ($fh, $print_ok); skip "failed to open temporary file", 1 unless open($fh, '+>', undef); $slurm->print_reservation_info_msg($fh, $resp, 1); seek($fh, 0, 0); while(<$fh>) { $print_ok = 1 if /^Reservation data as of/; } close($fh); ok($print_ok, "print reservation info msg"); } # 6 SKIP: { my ($fh, $print_ok); skip "no reservation in system", 1 unless @{$resp->{reservation_array}}; skip "failed to open temporary file", 1 unless open($fh, '+>', undef); $slurm->print_reservation_info($fh, $resp->{reservation_array}->[0], 1); seek($fh, 0, 0); while(<$fh>) { $print_ok = 1 if /^ReservationName=\w+/; } close($fh); ok($print_ok, "print reservation info"); } # 7 SKIP: { skip "no reservation in system", 1 unless @{$resp->{reservation_array}}; my $str = $slurm->sprint_reservation_info($resp->{reservation_array}->[0], 1); ok(defined $str && $str =~ /^ReservationName=\w+/, "sprint reservation info") or diag("sprint_reservation_info: $str"); } # 8 SKIP: { skip "not super user", 1 if $>; $rc = $slurm->delete_reservation({name => $resv}); # XXX: if accounting_storage/slurmdbd is configured and slurmdbd fails, delete reservation will fail. ok($rc == SLURM_SUCCESS, "delete reservation") || diag("delete_reservation" . 
$slurm->strerror()); } slurm-slurm-15-08-7-1/contribs/perlapi/libslurm/perl/t/17-ping.t000077500000000000000000000027301265000126300242160ustar00rootroot00000000000000#!/usr/bin/perl -T use Test::More tests => 9; use Slurm qw(:constant); # 1 my $slurm = Slurm::new(); ok(defined $slurm, "create slurm object with default configuration"); # 2 my $resp = $slurm->load_ctl_conf(); ok(ref($resp) eq "HASH", "load ctl conf"); # 3 my $rc = $slurm->ping(); ok($rc == SLURM_SUCCESS, "ping primary controller"); # 4 SKIP: { skip "no backup control machine configured", 1 unless $resp->{backup_controller}; $rc = $slurm->ping(2); ok($rc == SLURM_SUCCESS, "ping backup control machine") || diag ("ping backup controller: " . $slurm->strerror()); } # 5 SKIP: { skip "better not testing this", 1; #$rc = $slurm->shutdown(); ok($rc == SLURM_SUCCESS, "shutdown"); } # 6 SKIP: { skip "better not testing this", 1; skip "no backup control machine configured", 1 unless $resp->{backup_controller}; #$rc = $slurm->takeover(); ok($rc == SLURM_SUCCESS, "takeover"); } # 7 SKIP: { skip "not super user", 1 if $>; $rc = $slurm->set_debug_level(3); ok($rc == SLURM_SUCCESS, "set debug level") or diag("set_debug_level: " . $slurm->strerror()); } # 8 SKIP: { skip "not super user", 1 if $>; $rc = $slurm->set_schedlog_level(1); ok($rc == SLURM_SUCCESS, "set sched log level") || diag("set_sched_log_level" . $slurm->strerror()); } # 9 SKIP: { skip "not super user", 1 if $>; $rc = $slurm->reconfigure(); ok($rc == SLURM_SUCCESS, "reconfigure") || diag("reconfigure: " . 
$slurm->strerror()); } slurm-slurm-15-08-7-1/contribs/perlapi/libslurm/perl/t/18-suspend.t000077500000000000000000000025511265000126300247440ustar00rootroot00000000000000#!/usr/bin/perl -T use Test::More tests => 5; use Slurm qw(:constant); use POSIX qw(:signal_h); my ($resp, $jobid, $rc, $susp, $job_desc); # 1 my $slurm = Slurm::new(); ok(defined $slurm, "create slurm object with default configuration"); $job_desc = { min_nodes => 1, num_tasks => 1, user_id => $>, script => "#!/bin/sh\nsleep 1000\n", name => "perlapi_test", stdout => "/dev/null", stderr => "/dev/null", }; # 2 $resp = $slurm->submit_batch_job($job_desc); ok($resp, "submit batch job") or diag ("submit_batch_job: " . $slurm->strerror()); $jobid = $resp->{job_id} if $resp; # 3 SKIP: { skip "not super user", 1 if $>; skip "no job", 1 unless $jobid; $rc = $slurm->suspend($jobid); ok($rc == SLURM_SUCCESS || $slurm->get_errno() == ESLURM_JOB_PENDING, "suspend") and $susp = 1 or diag("suspend: " . $slurm->strerror()); } # 4 SKIP: { skip "not super user", 1 if $>; skip "not suspended", 1 unless $susp; $rc = $slurm->resume($jobid); ok($rc == SLURM_SUCCESS || $slurm->get_errno() == ESLURM_JOB_PENDING, "resume") or diag("resume: " . $slurm->strerror()); } # 5 SKIP: { skip "not super user", 1 if $>; skip "no job", 1 unless $jobid; $rc = $slurm->requeue($jobid); ok($rc == SLURM_SUCCESS, "requeue") or diag("requeue: " . 
$slurm->strerror()); } $slurm->kill_job($jobid, SIGKILL) if $jobid; slurm-slurm-15-08-7-1/contribs/perlapi/libslurm/perl/t/19-checkpoint.t000077500000000000000000000003161265000126300254100ustar00rootroot00000000000000#!/usr/bin/perl -T use Test::More tests => 1; use Slurm qw(:constant); my $slurm = Slurm::new(); ok(defined $slurm, "create slurm object with default configuration"); # TODO: do not know how to test slurm-slurm-15-08-7-1/contribs/perlapi/libslurm/perl/t/20-trigger.t000077500000000000000000000025671265000126300247260ustar00rootroot00000000000000#!/usr/bin/perl -T use Test::More tests => 5; use Slurm qw(:constant); use POSIX qw(:errno_h); # 1 my $slurm = Slurm::new(); ok(defined $slurm, "create slurm object with default configuration"); # 2 my $trig_set; SKIP: { skip "not super user", 1 if $>; my $rc = $slurm->set_trigger( { trig_type => TRIGGER_TYPE_RECONFIG, res_type => TRIGGER_RES_TYPE_NODE, program => "/bin/true", } ); ok($rc == SLURM_SUCCESS, "set trigger") and $trig_set = 1 or diag("set_trigger: " . $slurm->strerror()); } # 3 my $resp; $resp = $slurm->get_triggers(); ok(ref($resp) eq "HASH", "getting triggers"); # 4 SKIP: { skip "not super user", 1 if $>; skip "trigger not set", 1 unless $trig_set; my $rc = $slurm->pull_trigger ( {trig_res_type => TRIGGER_RES_TYPE_NODE} ); ok($rc == SLURM_ERROR && $slurm->get_errno() == EINVAL, "pull trigger") or diag("pull_trigger: " . $slurm->strerror()); } # 5 SKIP: { skip "not super user", 1 if $>; skip "trigger not set", 1 unless $trig_set; my $trig_id; foreach my $trig(@{$resp->{trigger_array}}) { next unless $trig->{program} eq "/bin/true"; $trig_id = $trig->{trig_id}; } skip "trigger not found", 1 unless $trig_id; my $rc = $slurm->clear_trigger ( {trig_id => $trig_id, user_id => 0} ); ok($rc == SLURM_SUCCESS, "clear trigger") or diag("clear_trigger: " . 
$slurm->strerror()); } slurm-slurm-15-08-7-1/contribs/perlapi/libslurm/perl/t/21-hostlist.t000077500000000000000000000016251265000126300251270ustar00rootroot00000000000000#!/usr/bin/perl -T use Test::More tests => 8; use Slurm qw(:constant); my $hostnames="node0,node3,node4,node8,linux,linux2,linux5,node4"; # 1 my $hl = Slurm::Hostlist::create($hostnames); ok(ref($hl) eq "Slurm::Hostlist", "hostlist create"); # 2 my $cnt = $hl->count(); ok ($cnt == 8, "hostlist count"); # 3 my $pos = $hl->find("linux"); ok ($pos == 4, "hostlist find"); # 4 $cnt = $hl->push("node12,node15,linux8"); ok ($cnt == 3, "hostlist push"); # 5 $cnt = $hl->push_host("linux23"); ok ($cnt == 1, "hostlist push host"); # 6 my $str = $hl->ranged_string(); ok($str eq "node[0,3-4,8],linux,linux[2,5],node[4,12,15],linux[8,23]", "hostlist ranged string") or diag("ranged_string: $str"); #7 my $hn = $hl->shift(); ok($hn eq "node0", "hostlist shift"); # 8 $hl->uniq(); $cnt = $hl->count(); # total 12, one duplicate, one shifted ok($cnt == 10, "hostlist uniq") or diag("count after uniq: $cnt"); slurm-slurm-15-08-7-1/contribs/perlapi/libslurm/perl/t/22-list.t000077500000000000000000000003221265000126300242230ustar00rootroot00000000000000#!/usr/bin/perl -T use Test::More tests => 1; use Slurm qw(:constant); my $slurm = Slurm::new(); ok(ref $slurm eq "Slurm", "create slurm object with default configuration"); # TODO: do not know how to test slurm-slurm-15-08-7-1/contribs/perlapi/libslurm/perl/t/23-bitstr.t000077500000000000000000000076751265000126300246020ustar00rootroot00000000000000#!/usr/bin/perl -T use Test::More tests => 37; use Slurm qw(:constant); my ($bm, $bm2, $rc, $cnt, $pos, $sum, $size, $ia, $str); # 1 $bm = Slurm::Bitstr::alloc(32); ok(ref($bm) eq "Slurm::Bitstr", "bit alloc"); # 2 #$bm->realloc(32); #ok($bm->size() == 32, "bit realloc"); # 3 $bm2 = $bm->copy(); ok(ref($bm2) eq "Slurm::Bitstr", "bit copy"); # 4 $rc = $bm->test(13); ok (!$rc, "bit test"); # 5 $bm->set(13); $rc = $bm->test(13); ok 
($rc, "bit set"); # 6 $bm->clear(13); $rc = $bm->test(13); ok (!$rc, "bit clear"); # 7 $bm->nset(13, 28); $rc = $bm->test(16); ok($rc, "bit nset"); #8 $bm->nclear(22, 30); $rc = $bm->test(26); ok(!$rc, "bit nclear"); # $bm fmt: "13-21" # $bm2 fmt: "" # 9 $pos = $bm->ffc(); ok($pos == 0, "bit ffc") or diag("ffc: $pos"); # 10 $pos = $bm->ffs(); ok($pos == 13, "bit ffs") or diag("ffs: $pos"); # 11 $pos = $bm->fls(); ok($pos == 21, "bit fls") or diag("fls: $pos"); # 12 $pos = $bm->nffc(3); ok($pos == 0, "bit nffc") or diag("nffc: $pos"); # 13 $pos = $bm->nffs(20); ok($pos == -1, "bit nffs") or diag("nffs: $pos"); # 14 $pos = $bm->noc(5, 16); ok($pos == 22, "bit noc") or diag("noc: $pos"); # 15 $size = $bm->size(); ok($size == 32, "bit size") or diag("size: $size"); # 16 $bm->and($bm2); $cnt = $bm->set_count(); ok($cnt == 0, "bit and") or diag("and: $cnt"); # 17 $bm->not(); $cnt = $bm->set_count(); ok($cnt == 32, "bit not") or diag("not: $cnt"); # 18 $bm->nclear(16, 31); $bm2->nset(16, 23); $bm->or($bm2); $cnt = $bm->set_count(); ok($cnt == 24, "bit or") or diag("or: $cnt"); # $bm2 fmt: "16-23" # 19 $bm->copybits($bm2); $cnt = $bm->set_count(); ok($cnt == 8, "bit copybits") or diag("copybits: $cnt"); # 20 $cnt = $bm->set_count(); ok($cnt == 8, "bit set count") or diag("set_count: $cnt"); # 21 $cnt = $bm->clear_count(); ok($cnt == 24, "bit clear count") or diag("clear_count: $cnt"); # 22 $cnt = $bm->nset_max_count(); ok($cnt == 8, "bit nset max count") or diag("nset_max_count: $cnt"); # $bm fmt: "16-23" # 24 $bm2 = $bm->rotate_copy(16, 40); $size = $bm2->size(); $pos = $bm2->ffs(); ok($size == 40 && $pos == 32, "bit rotate copy") or diag("rotate_copy: $size, $pos"); # 25 $bm->rotate(-8); $pos = $bm->ffs(); ok($pos == 8, "bit rotate") or diag("rotate: $pos"); # $bm fmt: "8-15" # 26 $str = $bm->fmt(); ok ($str eq "8-15", "bit fmt") or diag("fmt: $str"); # 27 $bm->unfmt("16-23"); $rc = $bm->test(13); ok (!$rc, "bit unfmt"); # $bm fmt: "16-23" # 28 $ia = 
Slurm::Bitstr::fmt2int($str); $size = @$ia; ok($size == 2 && $ia->[0] == 8 && $ia->[1] == 15, "bit fmt2int") or diag("fmt2int: $size, $ia->[0], $ia->[1]"); # 29 $str = $bm->fmt_hexmask(); ok($str eq "0x00FF0000", "bit fmt hexmask") or diag("fmt_hexmask: $str"); # 30 $rc = $bm->unfmt_hexmask("0x000000F0"); $cnt = $bm->set_count(); ok($rc == 0 && $cnt == 4, "bit unfmt hexmask") or diag("unfmt_hexmask: $rc, $cnt"); # $bm fmt: "4-7" # 31 $str = $bm->fmt_binmask(); ok($str eq "00000000000000000000000011110000", "bit fmt binmask") or diag("fmt_binmask: $str"); # 32 $bm->unfmt_binmask("0000000111111110000000011110001"); $cnt = $bm->set_count(); ok($cnt == 13, "bit unfmt binmask") or diag("unfmt_binmask: $cnt"); # $bm fmt: "0-0,4-7,16-23" # 33 $bm->fill_gaps(); $cnt = $bm->set_count(); ok($cnt == 24, "bit fill gaps") or diag("fill_gaps: $cnt"); # $bm fmt: "0-23" # 34 $bm2 = $bm->rotate_copy(16, 32); $rc = $bm->super_set($bm2); ok (!$rc, "bit super set") or diag("super_set: $rc"); # $bm fmt: "0-23" # $bm2 fmt: "0-7,16-31" # 35 $cnt = $bm->overlap($bm2); ok($cnt == 16, "bit overlap") or diag("overlap: $cnt"); # 36 $rc = $bm->equal($bm2); ok(!$rc, "bit equal") or diag("equal: $rc"); # 37 $bm2 = $bm->pick_cnt(8); ok($bm2 && $bm2->set_count() == 8, "pick cnt") or diag("pick_cnt: $cnt"); # 38 $bm->unfmt("3-5,12-23"); $pos = $bm->get_bit_num(8); ok($pos == 17, "bit get bit num") or diag("get_bit_num: $pos"); # 39 $bm->unfmt("3-5,12-23"); $cnt = $bm->get_pos_num(12); ok($cnt == 3, "bit get pos num") or diag("get_pos_num: $cnt"); slurm-slurm-15-08-7-1/contribs/perlapi/libslurm/perl/topo.c000066400000000000000000000050301265000126300235220ustar00rootroot00000000000000/* * topo.c - convert data between topology related messages and perl HVs */ #include <EXTERN.h> #include <perl.h> #include <XSUB.h> #include "ppport.h" #include <slurm/slurm.h> #include "slurm-perl.h" /* * convert topo_info_t to perl HV */ int topo_info_to_hv(topo_info_t *topo_info, HV *hv) { STORE_FIELD(hv, topo_info, level, uint16_t); STORE_FIELD(hv, topo_info, 
link_speed, uint32_t); if (topo_info->name) STORE_FIELD(hv, topo_info, name, charp); if (topo_info->nodes) STORE_FIELD(hv, topo_info, nodes, charp); if (topo_info->switches) STORE_FIELD(hv, topo_info, switches, charp); return 0; } /* * convert perl HV to topo_info_t */ int hv_to_topo_info(HV *hv, topo_info_t *topo_info) { memset(topo_info, 0, sizeof(topo_info_t)); FETCH_FIELD(hv, topo_info, level, uint16_t, TRUE); FETCH_FIELD(hv, topo_info, link_speed, uint32_t, TRUE); FETCH_FIELD(hv, topo_info, name, charp, FALSE); FETCH_FIELD(hv, topo_info, nodes, charp, TRUE); FETCH_FIELD(hv, topo_info, switches, charp, TRUE); return 0; } /* * convert topo_info_response_msg_t to perl HV */ int topo_info_response_msg_to_hv(topo_info_response_msg_t *topo_info_msg, HV *hv) { int i; HV* hv_info; AV* av; /* record_count implied in topo_array */ av = newAV(); for (i = 0; i < topo_info_msg->record_count; i ++) { hv_info = newHV(); if (topo_info_to_hv(topo_info_msg->topo_array + i, hv_info) < 0) { SvREFCNT_dec((SV*)hv_info); SvREFCNT_dec((SV*)av); return -1; } av_store(av, i, newRV_noinc((SV*)hv_info)); } hv_store_sv(hv, "topo_array", newRV_noinc((SV*)av)); return 0; } /* * convert perl HV to topo_info_response_msg_t */ int hv_to_topo_info_response_msg(HV *hv, topo_info_response_msg_t *topo_info_msg) { SV **svp; AV *av; int i, n; memset(topo_info_msg, 0, sizeof(topo_info_response_msg_t)); svp = hv_fetch(hv, "topo_array", 10, FALSE); if (! (svp && SvROK(*svp) && SvTYPE(SvRV(*svp)) == SVt_PVAV)) { Perl_warn (aTHX_ "topo_array is not an array reference in HV for topo_info_response_msg_t"); return -1; } av = (AV*)SvRV(*svp); n = av_len(av) + 1; topo_info_msg->record_count = n; topo_info_msg->topo_array = xmalloc(n * sizeof(topo_info_t)); for (i = 0; i < n; i ++) { svp = av_fetch(av, i, FALSE); if (! 
(svp && SvROK(*svp) && SvTYPE(SvRV(*svp)) == SVt_PVHV)) { Perl_warn (aTHX_ "element %d in topo_array is not valid", i); return -1; } if (hv_to_topo_info((HV*)SvRV(*svp), &topo_info_msg->topo_array[i]) < 0) { Perl_warn (aTHX_ "failed to convert element %d in topo_array", i); return -1; } } return 0; } slurm-slurm-15-08-7-1/contribs/perlapi/libslurm/perl/trigger.c000066400000000000000000000035231265000126300242110ustar00rootroot00000000000000/* * trigger.c - convert data between trigger related messages and perl HVs */ #include <EXTERN.h> #include <perl.h> #include <XSUB.h> #include "ppport.h" #include <slurm/slurm.h> #include "slurm-perl.h" /* * convert trigger_info_t to perl HV */ int trigger_info_to_hv(trigger_info_t *trigger_info, HV *hv) { STORE_FIELD(hv, trigger_info, trig_id, uint32_t); STORE_FIELD(hv, trigger_info, res_type, uint16_t); if (trigger_info->res_id) STORE_FIELD(hv, trigger_info, res_id, charp); STORE_FIELD(hv, trigger_info, trig_type, uint32_t); STORE_FIELD(hv, trigger_info, offset, uint16_t); STORE_FIELD(hv, trigger_info, user_id, uint32_t); if(trigger_info->program) STORE_FIELD(hv, trigger_info, program, charp); return 0; } /* * convert perl HV to trigger_info_t */ int hv_to_trigger_info(HV *hv, trigger_info_t *trigger_info) { memset(trigger_info, 0, sizeof(trigger_info_t)); FETCH_FIELD(hv, trigger_info, trig_id, uint32_t, FALSE); FETCH_FIELD(hv, trigger_info, res_type, uint16_t, FALSE); FETCH_FIELD(hv, trigger_info, res_id, charp, FALSE); FETCH_FIELD(hv, trigger_info, trig_type, uint32_t, FALSE); FETCH_FIELD(hv, trigger_info, offset, uint16_t, FALSE); FETCH_FIELD(hv, trigger_info, user_id, uint32_t, FALSE); FETCH_FIELD(hv, trigger_info, program, charp, FALSE); return 0; } /* * convert trigger_info_msg_t to perl HV */ int trigger_info_msg_to_hv(trigger_info_msg_t *trigger_info_msg, HV *hv) { int i; HV *hv_info; AV *av; /* record_count implied in trigger_array */ av = newAV(); for (i = 0; i < trigger_info_msg->record_count; i ++) { hv_info = newHV(); if 
(trigger_info_to_hv(trigger_info_msg->trigger_array + i, hv_info) < 0) { SvREFCNT_dec((SV*)hv_info); SvREFCNT_dec((SV*)av); return -1; } av_store(av, i, newRV_noinc((SV*)hv_info)); } hv_store_sv(hv, "trigger_array", newRV_noinc((SV*)av)); return 0; } slurm-slurm-15-08-7-1/contribs/perlapi/libslurm/perl/typemap000066400000000000000000000036211265000126300240030ustar00rootroot00000000000000##################################### TYPEMAP char_xfree * T_CHAR_XFREE char_free * T_CHAR_FREE uint32_t T_U_LONG uint16_t T_U_SHORT pid_t T_U_LONG bitoff_t T_IV log_facility_t T_UV slurm_t T_SLURM bitstr_t * T_PTROBJ_SLURM hostlist_t T_PTROBJ_SLURM slurm_step_ctx_t * T_PTROBJ_SLURM List T_PTROBJ_SLURM ListIterator T_PTROBJ_SLURM dynamic_plugin_data_t * T_PTROBJ_SLURM job_resources_t * T_PTROBJ_SLURM slurm_cred_t * T_PTROBJ_SLURM switch_jobinfo_t * T_PTROBJ_SLURM select_jobinfo_t * T_PTROBJ_SLURM select_nodeinfo_t * T_PTROBJ_SLURM jobacctinfo_t * T_PTROBJ_SLURM allocation_msg_thread_t * T_PTROBJ_SLURM node_info_msg_t * T_PTROBJ_SLURM job_info_msg_t * T_PTROBJ_SLURM ##################################### OUTPUT T_SLURM sv_setref_pv( $arg, \"Slurm\", (void*)$var ); T_PTROBJ_SLURM sv_setref_pv( $arg, \"${eval(`cat classmap`);\$slurm_perl_api::class_map->{$ntype}}\", (void*)$var ); T_CHAR_XFREE sv_setpv ((SV*)$arg, $var); xfree ($var); T_CHAR_FREE sv_setpv ((SV*)$arg, $var); free ($var); ##################################### INPUT T_SLURM if (sv_isobject($arg) && (SvTYPE(SvRV($arg)) == SVt_PVMG) && sv_derived_from($arg, \"Slurm\") ) { IV tmp = SvIV((SV*)SvRV($arg)); $var = INT2PTR($type,tmp); } else if(SvPOK($arg) && !strcmp(\"Slurm\", SvPV_nolen($arg))) $var = &default_slurm_object; else { Perl_croak(aTHX_ \"${Package}::$func_name() -- $var is not a blessed SV reference or correct package name\" ); } T_PTROBJ_SLURM if (sv_isobject($arg) && (SvTYPE(SvRV($arg)) == SVt_PVMG) && sv_derived_from($arg, \"${eval(`cat classmap`);\$slurm_perl_api::class_map->{$ntype}}\")) { IV tmp = 
SvIV((SV*)SvRV($arg)); $var = INT2PTR($type,tmp); } else { Perl_croak(aTHX_ \"%s: %s is not of type %s\", ${$ALIAS?\q[GvNAME(CvGV(cv))]:\qq[\"$pname\"]}, \"$var\", \"${eval(`cat classmap`);\$slurm_perl_api::class_map->{$ntype}}\"); } slurm-slurm-15-08-7-1/contribs/perlapi/libslurmdb/000077500000000000000000000000001265000126300217435ustar00rootroot00000000000000slurm-slurm-15-08-7-1/contribs/perlapi/libslurmdb/Makefile.am000066400000000000000000000071601265000126300240030ustar00rootroot00000000000000AUTOMAKE_OPTIONS = foreign # copied from pidgin # perl_dir = perl perlpath = /usr/bin/perl perl_sources = \ $(perl_dir)/Makefile.PL.in \ $(perl_dir)/ppport.h \ $(perl_dir)/Slurmdb.pm \ $(perl_dir)/Slurmdb.xs \ $(perl_dir)/slurmdb-perl.h \ $(perl_dir)/cluster.c test_sources = \ $(perl_dir)/t/00-use.t \ $(perl_dir)/t/01-clusters_get.t \ $(perl_dir)/t/02-report_cluster_account_by_user.t \ $(perl_dir)/t/03-report_cluster_user_by_account.t \ $(perl_dir)/t/04-report_job_sizes_grouped_by_top_account.t \ $(perl_dir)/t/05-report_user_top_usage.t \ $(perl_dir)/t/06-jobs_get.t \ $(perl_dir)/t/07-qos_get.t EXTRA_DIST = $(perl_sources) $(test_sources) $(perl_dir)/Makefile: $(perl_dir)/Makefile.PL @if test "x${top_srcdir}" != "x${top_builddir}"; then \ for f in ${perl_sources}; do \ ${LN_S} -f ${abs_srcdir}/$$f $$f; \ done; \ for f in ${test_sources}; do \ $(mkdir_p) `dirname $$f`; \ ${LN_S} -f ${abs_srcdir}/$$f $$f; \ done; \ fi @cd $(perl_dir) && $(perlpath) Makefile.PL $(PERL_MM_PARAMS) prefix=${prefix} INSTALL_BASE= PERL_MM_OPT= # # Note on linking logic below # # AIX needs to use LD to link. It can not use gcc. # Suse Linux compiles with gcc, but picks some other compiler to use for linking. # Since some CFLAGS may be incompatible with this other compiler, the build # may fail, as seen on BlueGene platforms. # Other Linux implementations sems to work fine with the LD specified as below # all-local: $(perl_dir)/Makefile #libslurmdb if HAVE_AIX @cd $(perl_dir) && \ if [ ! 
-f Makefile ]; then \ $(perlpath) Makefile.PL $(PERL_MM_PARAMS) prefix=${prefix} INSTALL_BASE= PERL_MM_OPT=; \ fi && \ ($(MAKE) CC="$(CC)" CCFLAGS="$(PERL_CFLAGS) -g -static $(CFLAGS) $(CPPFLAGS)" $(PERL_EXTRA_OPTS) || \ $(MAKE) CC="$(CC)" CCFLAGS="$(PERL_CFLAGS) -g -static $(CFLAGS) $(CPPFLAGS)" $(PERL_EXTRA_OPTS)) && \ cd ..; else @cd $(perl_dir) && \ if [ ! -f Makefile ]; then \ $(perlpath) Makefile.PL $(PERL_MM_PARAMS) prefix=${prefix} INSTALL_BASE= PERL_MM_OPT=; \ fi && \ ($(MAKE) CC="$(CC)" LD="$(CC) $(CFLAGS) $(LDFLAGS)" CCFLAGS="$(PERL_CFLAGS) -g -static $(CFLAGS) $(CPPFLAGS)" $(PERL_EXTRA_OPTS) || \ $(MAKE) CC="$(CC)" LD="$(CC) $(CFLAGS) $(LDFLAGS)" CCFLAGS="$(PERL_CFLAGS) -g -static $(CFLAGS) $(CPPFLAGS)" $(PERL_EXTRA_OPTS)) && \ cd ..; endif install-exec-local: @cd $(perl_dir) && \ $(MAKE) DESTDIR=$(DESTDIR) install && \ cd ..; # Evil Hack (TM) # ... which doesn't work with DESTDIR installs. FIXME? uninstall-local: @cd $(perl_dir) && \ `$(MAKE) uninstall | grep unlink | sed -e 's#/usr#${prefix}#' -e 's#unlink#rm -f#'` && \ cd ..; clean-generic: @cd $(perl_dir); \ $(MAKE) clean; \ if test "x${top_srcdir}" != "x${top_builddir}"; then \ rm -fr lib t *c *h *xs typemap classmap; \ fi; \ cd ..; @if test "x${top_srcdir}" != "x${top_builddir}"; then \ for f in ${perl_sources}; do \ $(mkdir_p) `dirname $$f`; \ ${LN_S} -f ${abs_srcdir}/$$f $$f; \ done; \ for f in ${test_sources}; do \ $(mkdir_p) `dirname $$f`; \ ${LN_S} -f ${abs_srcdir}/$$f $$f; \ done; \ fi distclean-generic: @cd $(perl_dir); \ $(MAKE) realclean; \ rm -f Makefile.PL; \ rm -f Makefile.old; \ rm -f Makefile; \ cd ..; @rm -f Makefile @if test "x${top_srcdir}" != "x${top_builddir}"; then \ for f in ${perl_sources}; do \ ${LN_S} -f ${abs_srcdir}/$$f $$f; \ done; \ for f in ${test_sources}; do \ $(mkdir_p) `dirname $$f`; \ ${LN_S} -f ${abs_srcdir}/$$f $$f; \ done; \ fi AM_CPPFLAGS = \ -DVERSION=\"$(VERSION)\" \ -I$(top_srcdir) \ -I$(top_builddir) \ $(DEBUG_CFLAGS) \ $(PERL_CFLAGS) 
slurm-slurm-15-08-7-1/contribs/perlapi/libslurmdb/Makefile.in
# Makefile.in generated by automake 1.14.1 from Makefile.am.
# @configure_input@
# Copyright (C) 1994-2013 Free Software Foundation, Inc.
# (remainder of this automake-generated file omitted: it is a mechanical
# expansion of the Makefile.am rules shown above)
slurm-slurm-15-08-7-1/contribs/perlapi/libslurmdb/perl/Makefile.PL.in
use 5.008;
use ExtUtils::MakeMaker;

if (!(-e "@prefix@/lib/libslurmdb.so") &&
    !(-e "@top_builddir@/src/db_api/.libs/libslurmdb.so")) {
    die("I can't seem to find the library files I need in your SLURM installation.
Please check that your SLURM installation has at least one of the following links:
@top_builddir@/src/db_api/.libs/libslurmdb.so
@prefix@/lib/libslurmdb.so\n");
}

# Most all the extra code is to deal with MakeMaker < 6.11 not working
# correctly to build rpms
my(
    $mm_version,
    $mm_knows_destdir,
    $mm_has_destdir,
    $mm_has_good_destdir,
    $mm_needs_destdir,
);

# Gather some information about what EU::MM offers and/or needs

# Store the version for later use
$mm_version = $ExtUtils::MakeMaker::VERSION;

# MakeMaker prior to 6.11 doesn't support DESTDIR which is needed for
# packaging with builddir!=destdir.  See bug 2388.
$mm_knows_destdir = $ExtUtils::MakeMaker::Recognized_Att_Keys{DESTDIR};
$mm_has_good_destdir = $mm_version >= 6.11;
# Add DESTDIR hack only if it's requested (and necessary)
$mm_needs_destdir = !$mm_has_good_destdir;
$mm_has_destdir = $mm_knows_destdir || $mm_needs_destdir;

$ExtUtils::MakeMaker::Recognized_Att_Keys{"DESTDIR"} = 1 if $mm_needs_destdir;

if ($mm_needs_destdir) {
    my $error = <<DESTDIR_HACK;
    to get an up-to-date version.  This should only be necessary
    if you are creating binary packages.
    ***********************************************************************
DESTDIR_HACK
    $error =~ s/^ {4}//gm;
    warn $error;
}
elsif (!$mm_has_good_destdir) {
    my $error = <<DESTDIR_BUG;
    to get an up-to-date version.  This should only be necessary
    if you are creating binary packages.
    ***********************************************************************
DESTDIR_BUG
    $error =~ s/^ {4}//gm;
    warn $error;
}

# AIX has problems with not always having the correct
# flags so we have to add some :)
my $os = lc(`uname`);
my $other_ld_flags = "-Wl,-rpath,@top_builddir@/src/db_api/.libs -Wl,-rpath,@prefix@/lib";
$other_ld_flags = " -brtl -G -bnoentry -bgcbypass:1000 -bexpfull"
    if $os =~ "aix";

WriteMakefile(
    NAME          => 'Slurmdb',
    VERSION_FROM  => 'Slurmdb.pm', # finds $VERSION
    PREREQ_PM     => {}, # e.g., Module::Name => 1.1
    ($] >= 5.005 ?    ## Add these new keywords supported since 5.005
      (ABSTRACT_FROM => 'Slurmdb.pm', # retrieve abstract from module
       AUTHOR        => 'Don Lipari <lipari@llnl.gov>') : ()),
    LIBS          => ["-L@top_builddir@/src/db_api/.libs -L@prefix@/lib -lslurmdb"], # e.g., '-lm'
    DEFINE        => '', # e.g., '-DHAVE_SOMETHING'
    INC           => "-I. -I@top_srcdir@ -I@top_srcdir@/contribs/perlapi/common -I@top_builddir@",
    # Un-comment this if you add C files to link with later:
    OBJECT        => '$(O_FILES)', # link all the C files too
    CCFLAGS       => '-g',
    dynamic_lib   => {'OTHERLDFLAGS' => $other_ld_flags},
);

if (eval {require ExtUtils::Constant; 1}) {
    # If you edit these definitions to change the constants used by this module,
    # you will need to use the generated const-c.inc and const-xs.inc
    # files to replace their "fallback" counterparts before distributing your
    # changes.
    my @names = (qw(SLURMDB_CLASSIFIED_FLAG SLURMDB_CLASS_BASE
		    SLURMDB_PURGE_ARCHIVE SLURMDB_PURGE_BASE SLURMDB_PURGE_DAYS
		    SLURMDB_PURGE_FLAGS SLURMDB_PURGE_HOURS SLURMDB_PURGE_MONTHS),
    );
    ExtUtils::Constant::WriteConstants(
                                       NAME    => 'Slurmdb',
                                       NAMES   => \@names,
                                       C_FILE  => 'const-c.inc',
                                       XS_FILE => 'const-xs.inc',
                                    );
}

# Override the install routine to add our additional install dirs and
# hack DESTDIR support into old EU::MMs.
sub MY::install {
    package MY;
    my $self = shift;
    my @code = split(/\n/, $self->SUPER::install(@_));

    init_MY_globals($self);

    foreach (@code) {
        # Write the correct path to perllocal.pod
        next if /installed into/;
        # Replace all other $(INSTALL*) vars (except $(INSTALLDIRS) of course)
        # with their $(DESTINSTALL*) counterparts
        s/\Q$(\E(INSTALL(?!DIRS)${MACRO_RE})\Q)\E/\$(DEST$1)/g;
    }
    clean_MY_globals($self);
    return join("\n", @code);
}

# Now override the constants routine to add our own macros.
sub MY::constants {
    package MY;
    my $self = shift;
    my @code = split(/\n/, $self->SUPER::constants(@_));

    init_MY_globals($self);

    foreach my $line (@code) {
        # Skip comments
        next if $line =~ /^\s*\#/;
        # Skip everything which isn't a var assignment.
        next unless line_has_macro_def($line);

        # Store the assignment string if necessary.
        set_EQ_from_line($line);

        # Add some "dummy" (PERL|SITE|VENDOR)PREFIX macros for later use (only if
        # necessary for old EU::MMs of course)
        if (line_has_macro_def($line, 'PREFIX')) {
            foreach my $r (@REPOSITORIES) {
                my $rprefix = "${r}PREFIX";
                if (!defined(get_macro($rprefix))) {
                    set_macro($rprefix, macro_ref('PREFIX'));
                    $line .= "\n" . macro_def($rprefix);
                }
            }
        }

        # fix problem with /usr(/local) being used as a prefix
        # instead of the real thing.
        if ($line =~ 'INSTALL') {
            $line =~ s/= \/usr\/local/= \$(PREFIX)/;
            $line =~ s/= \/usr/= \$(PREFIX)/;
        }

        # Add DESTDIR support if necessary
        if (line_has_macro_def($line, 'INSTALLDIRS')) {
            if (!get_macro('DESTDIR')) {
                $line .= "\n" . macro_def('DESTDIR');
            }
        }
        elsif (line_has_macro_def($line, qr/INSTALL${MACRO_RE}/)) {
            my $macro = get_macro_name_from_line($line);
            if (!get_macro('DEST' . $macro,
                           macro_ref('DESTDIR') . macro_ref($macro))) {
                $line .= "\n" . macro_def('DEST' . $macro,
                                          macro_ref('DESTDIR') . macro_ref($macro));
            }
        }
    }

    push(@code, qq{});

    clean_MY_globals($self);
    return join("\n", @code);
}

package MY;

use vars qw(
    @REPOSITORIES
    $MY_GLOBALS_ARE_SANE
    $MACRO_RE
    $EQ_RE
    $EQ
    $SELF
);

sub line_has_macro_def {
    my($line, $name) = (@_, undef);
    $name = $MACRO_RE unless defined $name;
    return $line =~ /^($name)${EQ_RE}/;
}

sub macro_def {
    my($name, $val) = (@_, undef);
    my $error_message = "Problems building report error.";
    die $error_message unless defined $name;
    die $error_message unless defined $EQ;
    $val = $SELF->{$name} unless defined $val;
    return $name . $EQ . $val;
}

sub set_EQ_from_line {
    my($line) = (@_);
    return if defined($EQ);
    $line =~ /\S(${EQ_RE})/;
    $EQ = $1;
}

# Reads the name of the macro defined on the given line.
#
# The first parameter must be the line to be expected.  If the line doesn't
# contain a macro definition, weird things may happen.  So check with
# line_has_macro_def() before!
sub get_macro_name_from_line {
    my($line) = (@_);
    $line =~ /^(${MACRO_RE})${EQ_RE}/;
    return $1;
}

sub macro_ref {
    my($name) = (@_);
    return sprintf('$(%s)', $name);
}

# Reads the value of the given macro from the current instance of EU::MM.
#
# The first parameter must be the name of a macro.
sub get_macro {
    my($name) = (@_);
    return $SELF->{$name};
}

# Sets the value of the macro with the given name to the given value in the
# current instance of EU::MM.  Just sets, doesn't write to the Makefile!
#
# The first parameter must be the macro's name, the second the value.
sub set_macro {
    my($name, $val) = (@_);
    $SELF->{$name} = $val;
}

# For some reason initializing the vars on the global scope doesn't work;
# guess it's some weird Perl behaviour in combination with bless().
sub init_MY_globals {
    my $self = shift;

    # Keep a reference to ourselves so we don't have to feed it to the helper
    # scripts.
    $SELF = $self;

    return if $MY_GLOBALS_ARE_SANE;
    $MY_GLOBALS_ARE_SANE = 1;

    @REPOSITORIES = qw(
        PERL
        SITE
        VENDOR
    );

    # Macro names follow this RE -- at least strictly enough for our purposes.
    $MACRO_RE = qr/[A-Z0-9_]+/;
    # Normally macros are assigned via FOO = bar.  But the part with the equal
    # sign might differ from platform to platform.  So we use this RE:
    $EQ_RE = qr/\s*:?=\s*/;
    # To assign our own macros we'll follow the first assignment string we find;
    # normally " = ".
    $EQ = undef;
}

# Unset $SELF to avoid any leaking memory.
sub clean_MY_globals {
    my $self = shift;
    $SELF = undef;
}
slurm-slurm-15-08-7-1/contribs/perlapi/libslurmdb/perl/Slurmdb.pm
package Slurmdb;

use 5.008;
use strict;
use warnings;
use Carp;

require Exporter;
use AutoLoader;

our @ISA = qw(Exporter);

# Items to export into callers namespace by default. Note: do not export
# names by default without a very good reason. Use EXPORT_OK instead.
# Do not simply export all your public functions/methods/constants.
# This allows declaration use Slurmdb ':all'; # If you do not need this, moving things directly into @EXPORT or @EXPORT_OK # will save memory. our %EXPORT_TAGS = ( 'all' => [ qw( SLURMDB_ADD_ASSOC SLURMDB_ADD_COORD SLURMDB_ADD_QOS SLURMDB_ADD_USER SLURMDB_ADD_WCKEY SLURMDB_ADMIN_NONE SLURMDB_ADMIN_NOTSET SLURMDB_ADMIN_OPERATOR SLURMDB_ADMIN_SUPER_USER SLURMDB_CLASSIFIED_FLAG SLURMDB_CLASS_BASE SLURMDB_CLASS_CAPABILITY SLURMDB_CLASS_CAPACITY SLURMDB_CLASS_CAPAPACITY SLURMDB_CLASS_NONE SLURMDB_EVENT_ALL SLURMDB_EVENT_CLUSTER SLURMDB_EVENT_NODE SLURMDB_MODIFY_ASSOC SLURMDB_MODIFY_QOS SLURMDB_MODIFY_USER SLURMDB_MODIFY_WCKEY SLURMDB_PROBLEM_ACCT_NO_ASSOC SLURMDB_PROBLEM_ACCT_NO_USERS SLURMDB_PROBLEM_NOT_SET SLURMDB_PROBLEM_USER_NO_ASSOC SLURMDB_PROBLEM_USER_NO_UID SLURMDB_PURGE_ARCHIVE SLURMDB_PURGE_BASE SLURMDB_PURGE_DAYS SLURMDB_PURGE_FLAGS SLURMDB_PURGE_HOURS SLURMDB_PURGE_MONTHS SLURMDB_REMOVE_ASSOC SLURMDB_REMOVE_COORD SLURMDB_REMOVE_QOS SLURMDB_REMOVE_USER SLURMDB_REMOVE_WCKEY SLURMDB_REPORT_SORT_NAME SLURMDB_REPORT_SORT_TIME SLURMDB_REPORT_TIME_HOURS SLURMDB_REPORT_TIME_HOURS_PER SLURMDB_REPORT_TIME_MINS SLURMDB_REPORT_TIME_MINS_PER SLURMDB_REPORT_TIME_PERCENT SLURMDB_REPORT_TIME_SECS SLURMDB_REPORT_TIME_SECS_PER SLURMDB_UPDATE_NOTSET ) ] ); our @EXPORT_OK = ( @{ $EXPORT_TAGS{'all'} } ); our @EXPORT = qw(); our $VERSION = '0.01'; sub AUTOLOAD { # This AUTOLOAD is used to 'autoload' constants from the constant() # XS function. my $constname; our $AUTOLOAD; ($constname = $AUTOLOAD) =~ s/.*:://; croak "&Slurmdb::constant not defined" if $constname eq 'constant'; my ($error, $val) = constant($constname); if ($error) { croak $error; } { no strict 'refs'; # Fixed between 5.005_53 and 5.005_61 #XXX if ($] >= 5.00561) { #XXX *$AUTOLOAD = sub () { $val }; #XXX } #XXX else { *$AUTOLOAD = sub { $val }; #XXX } } goto &$AUTOLOAD; } #require XSLoader; #XSLoader::load('Slurmdb', $VERSION); # XSLoader will not work for SLURM because it does not honour dl_load_flags. 
require DynaLoader; push @ISA, 'DynaLoader'; bootstrap Slurmdb $VERSION; sub dl_load_flags { if($^O eq 'aix') { 0x00 } else { 0x01 }} # Preloaded methods go here. # Autoload methods go after =cut, and are processed by the autosplit program. 1; __END__ =head1 NAME Slurmdb - Perl extension for slurmdb library =head1 SYNOPSIS use Slurmdb; =head1 DESCRIPTION A traditional Perl module that contains XSUBs of the SLURM Database API. =head2 EXPORT None by default. =head2 Exportable constants =head1 SEE ALSO http://slurm.schedmd.com/accounting.html =head1 AUTHOR Don Lipari, lipari@llnl.gov =head1 COPYRIGHT AND LICENSE Copyright (C) 2010 Lawrence Livermore National Security. Written by Don Lipari CODE-OCEC-09-009. All rights reserved. This file is part of SLURM, a resource management program. For details, see . Please also read the included file: DISCLAIMER. SLURM is free software; you can redistribute it and/or modify it under the terms of the GNU General Public License as published by the Free Software Foundation; either version 2 of the License, or (at your option) any later version. In addition, as a special exception, the copyright holders give permission to link the code of portions of this program with the OpenSSL library under certain conditions as described in each individual source file, and distribute linked combinations including the two. You must obey the GNU General Public License in all respects for all of the code used other than OpenSSL. If you modify file(s) with this exception, you may extend this exception to your version of the file(s), but you are not obligated to do so. If you do not wish to do so, delete this exception statement from your version. If you delete this exception statement from all source files in the program, then also delete it here. SLURM is distributed in the hope that it will be useful, but WITHOUT ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. 
See the GNU General Public License for more details. You should have received a copy of the GNU General Public License along with SLURM; if not, write to the Free Software Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA. =cut slurm-slurm-15-08-7-1/contribs/perlapi/libslurmdb/perl/Slurmdb.xs000066400000000000000000000154661265000126300247050ustar00rootroot00000000000000#include "EXTERN.h" #include "perl.h" #include "XSUB.h" #include "ppport.h" #include #include #include "slurmdb-perl.h" #include "const-c.inc" extern void *slurm_xmalloc(size_t, const char *, int, const char *); extern void slurmdb_destroy_assoc_cond(void *object); extern void slurmdb_destroy_cluster_cond(void *object); extern void slurmdb_destroy_job_cond(void *object); extern void slurmdb_destroy_user_cond(void *object); MODULE = Slurmdb PACKAGE = Slurmdb PREFIX=slurmdb_ INCLUDE: const-xs.inc PROTOTYPES: ENABLE void* slurmdb_connection_get() int slurmdb_connection_close(db_conn) void* db_conn SV* slurmdb_clusters_get(db_conn, conditions) void* db_conn HV* conditions INIT: AV* results; HV* rh; List list = NULL; ListIterator itr; slurmdb_cluster_cond_t *cluster_cond = (slurmdb_cluster_cond_t*) slurm_xmalloc(sizeof(slurmdb_cluster_cond_t), __FILE__, __LINE__, "slurmdb_clusters_get"); slurmdb_init_cluster_cond(cluster_cond, 0); slurmdb_cluster_rec_t *rec = NULL; if (hv_to_cluster_cond(conditions, cluster_cond) < 0) { XSRETURN_UNDEF; } results = (AV*)sv_2mortal((SV*)newAV()); CODE: list = slurmdb_clusters_get(db_conn, cluster_cond); if (list) { itr = slurm_list_iterator_create(list); while ((rec = slurm_list_next(itr))) { rh = (HV *)sv_2mortal((SV*)newHV()); if (cluster_rec_to_hv(rec, rh) < 0) { XSRETURN_UNDEF; } av_push(results, newRV((SV*)rh)); } slurm_list_destroy(list); } RETVAL = newRV((SV*)results); slurmdb_destroy_cluster_cond(cluster_cond); OUTPUT: RETVAL SV* slurmdb_report_cluster_account_by_user(db_conn, assoc_condition) void* db_conn HV* assoc_condition INIT: AV* 
results; List list = NULL; slurmdb_assoc_cond_t *assoc_cond = (slurmdb_assoc_cond_t*) slurm_xmalloc(sizeof(slurmdb_assoc_cond_t), __FILE__, __LINE__, "slurmdb_report_cluster_account_by_user"); if (hv_to_assoc_cond(assoc_condition, assoc_cond) < 0) { XSRETURN_UNDEF; } results = (AV*)sv_2mortal((SV*)newAV()); CODE: list = slurmdb_report_cluster_account_by_user(db_conn, assoc_cond); if (list) { if (report_cluster_rec_list_to_av(list, results) < 0) { XSRETURN_UNDEF; } slurm_list_destroy(list); } RETVAL = newRV((SV*)results); slurmdb_destroy_assoc_cond(assoc_cond); OUTPUT: RETVAL SV* slurmdb_report_cluster_user_by_account(db_conn, assoc_condition) void* db_conn HV* assoc_condition INIT: AV* results; List list = NULL; slurmdb_assoc_cond_t *assoc_cond = (slurmdb_assoc_cond_t*) slurm_xmalloc(sizeof(slurmdb_assoc_cond_t), __FILE__, __LINE__, "slurmdb_report_cluster_user_by_account"); if (hv_to_assoc_cond(assoc_condition, assoc_cond) < 0) { XSRETURN_UNDEF; } results = (AV*)sv_2mortal((SV*)newAV()); CODE: list = slurmdb_report_cluster_user_by_account(db_conn, assoc_cond); if (list) { if (report_cluster_rec_list_to_av(list, results) < 0) { XSRETURN_UNDEF; } slurm_list_destroy(list); } RETVAL = newRV((SV*)results); slurmdb_destroy_assoc_cond(assoc_cond); OUTPUT: RETVAL SV* slurmdb_report_job_sizes_grouped_by_top_account(db_conn, job_condition, grouping_array, flat_view) void* db_conn HV* job_condition AV* grouping_array bool flat_view INIT: AV* results; List list = NULL; List grouping_list = slurm_list_create(NULL); slurmdb_job_cond_t *job_cond = (slurmdb_job_cond_t*) slurm_xmalloc(sizeof(slurmdb_job_cond_t), __FILE__, __LINE__, "slurmdb_report_job_sizes_grouped_by_top_account"); if (hv_to_job_cond(job_condition, job_cond) < 0) { XSRETURN_UNDEF; } if (av_to_cluster_grouping_list(grouping_array, grouping_list) < 0) { XSRETURN_UNDEF; } results = (AV*)sv_2mortal((SV*)newAV()); CODE: list = slurmdb_report_job_sizes_grouped_by_top_account(db_conn, job_cond, grouping_list, 
flat_view); if (list) { if (cluster_grouping_list_to_av(list, results) < 0) { XSRETURN_UNDEF; } slurm_list_destroy(list); } RETVAL = newRV((SV*)results); slurmdb_destroy_job_cond(job_cond); slurm_list_destroy(grouping_list); OUTPUT: RETVAL SV* slurmdb_report_user_top_usage(db_conn, user_condition, group_accounts) void* db_conn HV* user_condition bool group_accounts INIT: AV* results; List list = NULL; slurmdb_user_cond_t* user_cond = (slurmdb_user_cond_t*) slurm_xmalloc(sizeof(slurmdb_user_cond_t), __FILE__, __LINE__, "slurmdb_report_user_top_usage"); user_cond->assoc_cond = (slurmdb_assoc_cond_t*) slurm_xmalloc(sizeof(slurmdb_assoc_cond_t), __FILE__, __LINE__, "slurmdb_report_user_top_usage"); if (hv_to_user_cond(user_condition, user_cond) < 0) { XSRETURN_UNDEF; } results = (AV*)sv_2mortal((SV*)newAV()); CODE: list = slurmdb_report_user_top_usage(db_conn, user_cond, group_accounts); if (list) { if (report_cluster_rec_list_to_av(list, results) < 0) { XSRETURN_UNDEF; } slurm_list_destroy(list); } RETVAL = newRV((SV*)results); slurmdb_destroy_user_cond(user_cond); OUTPUT: RETVAL SV* slurmdb_jobs_get(db_conn, conditions) void* db_conn HV* conditions INIT: AV* results; HV* rh; List list = NULL; ListIterator itr; slurmdb_job_cond_t *job_cond = (slurmdb_job_cond_t*) slurm_xmalloc(sizeof(slurmdb_job_cond_t), __FILE__, __LINE__, "slurmdb_jobs_get"); slurmdb_job_rec_t *rec = NULL; if (hv_to_job_cond(conditions, job_cond) < 0) { XSRETURN_UNDEF; } results = (AV*)sv_2mortal((SV*)newAV()); CODE: list = slurmdb_jobs_get(db_conn, job_cond); if (list) { itr = slurm_list_iterator_create(list); while ((rec = slurm_list_next(itr))) { rh = (HV *)sv_2mortal((SV*)newHV()); if (job_rec_to_hv(rec, rh) < 0) { XSRETURN_UNDEF; } av_push(results, newRV((SV*)rh)); } slurm_list_destroy(list); } RETVAL = newRV((SV*)results); slurmdb_destroy_job_cond(job_cond); OUTPUT: RETVAL SV* slurmdb_qos_get(db_conn, conditions) void* db_conn HV* conditions INIT: AV* results; HV* rh; List list = NULL, all = 
NULL; ListIterator itr; slurmdb_qos_cond_t *qos_cond = (slurmdb_qos_cond_t*) slurm_xmalloc(sizeof(slurmdb_qos_cond_t), __FILE__, __LINE__, "slurmdb_qos_get"); slurmdb_qos_rec_t *rec = NULL; if (hv_to_qos_cond(conditions, qos_cond) < 0) { XSRETURN_UNDEF; } results = (AV*)sv_2mortal((SV*)newAV()); CODE: list = slurmdb_qos_get(db_conn, qos_cond); all = slurmdb_qos_get(db_conn, NULL); if (list) { itr = slurm_list_iterator_create(list); while ((rec = slurm_list_next(itr))) { rh = (HV *)sv_2mortal((SV*)newHV()); if (qos_rec_to_hv(rec, rh, all) < 0) { XSRETURN_UNDEF; } av_push(results, newRV((SV*)rh)); } slurm_list_destroy(list); } RETVAL = newRV((SV*)results); slurmdb_destroy_qos_cond(qos_cond); OUTPUT: RETVAL UV slurmdb_find_tres_count_in_string(tres_str_in, id) char *tres_str_in int id slurm-slurm-15-08-7-1/contribs/perlapi/libslurmdb/perl/cluster.c000066400000000000000000000654441265000126300245470ustar00rootroot00000000000000/* * cluster.c - convert data between cluster related messages and perl HVs */ #include #include #include #include #include "src/common/slurm_protocol_defs.h" #include "slurmdb-perl.h" extern char* slurm_xstrdup(const char* str); extern int slurmdb_report_set_start_end_time(time_t* start, time_t* end); extern char *slurmdb_get_qos_complete_str_bitstr(List qos_list, bitstr_t *valid_qos); int av_to_cluster_grouping_list(AV* av, List grouping_list) { SV** svp; char* str = NULL; int i, elements = 0; elements = av_len(av) + 1; for (i = 0; i < elements; i ++) { if ((svp = av_fetch(av, i, FALSE))) { str = slurm_xstrdup((char*)SvPV_nolen(*svp)); slurm_list_append(grouping_list, str); } else { Perl_warn(aTHX_ "error fetching group from grouping list"); return -1; } } return 0; } int hv_to_assoc_cond(HV* hv, slurmdb_assoc_cond_t* assoc_cond) { AV* element_av; SV** svp; char* str = NULL; int i, elements = 0; time_t start_time = 0; time_t end_time = 0; if ( (svp = hv_fetch (hv, "usage_start", strlen("usage_start"), FALSE)) ) { start_time = (time_t) 
(SV2time_t(*svp)); } if ( (svp = hv_fetch (hv, "usage_end", strlen("usage_end"), FALSE)) ) { end_time = (time_t) (SV2time_t(*svp)); } slurmdb_report_set_start_end_time(&start_time, &end_time); assoc_cond->usage_start = start_time; assoc_cond->usage_end = end_time; assoc_cond->with_usage = 1; assoc_cond->with_deleted = 0; assoc_cond->with_raw_qos = 0; assoc_cond->with_sub_accts = 0; assoc_cond->without_parent_info = 0; assoc_cond->without_parent_limits = 0; FETCH_FIELD(hv, assoc_cond, with_usage, uint16_t, FALSE); FETCH_FIELD(hv, assoc_cond, with_deleted, uint16_t, FALSE); FETCH_FIELD(hv, assoc_cond, with_raw_qos, uint16_t, FALSE); FETCH_FIELD(hv, assoc_cond, with_sub_accts, uint16_t, FALSE); FETCH_FIELD(hv, assoc_cond, without_parent_info, uint16_t, FALSE); FETCH_FIELD(hv, assoc_cond, without_parent_limits, uint16_t, FALSE); FETCH_LIST_FIELD(hv, assoc_cond, acct_list); FETCH_LIST_FIELD(hv, assoc_cond, cluster_list); FETCH_LIST_FIELD(hv, assoc_cond, def_qos_id_list); FETCH_LIST_FIELD(hv, assoc_cond, id_list); FETCH_LIST_FIELD(hv, assoc_cond, parent_acct_list); FETCH_LIST_FIELD(hv, assoc_cond, partition_list); FETCH_LIST_FIELD(hv, assoc_cond, qos_list); FETCH_LIST_FIELD(hv, assoc_cond, user_list); return 0; } int hv_to_cluster_cond(HV* hv, slurmdb_cluster_cond_t* cluster_cond) { AV* element_av; char* str = NULL; int i, elements = 0; cluster_cond->classification = SLURMDB_CLASS_NONE; cluster_cond->usage_end = 0; cluster_cond->usage_start = 0; cluster_cond->with_deleted = 1; cluster_cond->with_usage = 1; FETCH_FIELD(hv, cluster_cond, classification, uint16_t, FALSE); FETCH_FIELD(hv, cluster_cond, flags, uint32_t, FALSE); FETCH_FIELD(hv, cluster_cond, usage_end, time_t , FALSE); FETCH_FIELD(hv, cluster_cond, usage_start, time_t , FALSE); FETCH_FIELD(hv, cluster_cond, with_deleted, uint16_t, FALSE); FETCH_FIELD(hv, cluster_cond, with_usage, uint16_t, FALSE); FETCH_LIST_FIELD(hv, cluster_cond, cluster_list); FETCH_LIST_FIELD(hv, cluster_cond, plugin_id_select_list); 
FETCH_LIST_FIELD(hv, cluster_cond, rpc_version_list); return 0; } int hv_to_job_cond(HV* hv, slurmdb_job_cond_t* job_cond) { AV* element_av; SV** svp; char* str = NULL; int i, elements = 0; time_t start_time = 0; time_t end_time = 0; if ( (svp = hv_fetch (hv, "step_list", strlen("step_list"), FALSE)) ) { char *jobids = (char *) (SvPV_nolen(*svp)); if (!job_cond->step_list) job_cond->step_list = slurm_list_create(slurmdb_destroy_selected_step); slurm_addto_step_list(job_cond->step_list, jobids); } if ( (svp = hv_fetch (hv, "usage_start", strlen("usage_start"), FALSE)) ) { start_time = (time_t) (SV2time_t(*svp)); } if ( (svp = hv_fetch (hv, "usage_end", strlen("usage_end"), FALSE)) ) { end_time = (time_t) (SV2time_t(*svp)); } slurmdb_report_set_start_end_time(&start_time, &end_time); job_cond->usage_start = start_time; job_cond->usage_end = end_time; job_cond->cpus_max = 0; job_cond->cpus_min = 0; job_cond->duplicates = 0; job_cond->nodes_max = 0; job_cond->nodes_min = 0; job_cond->used_nodes = NULL; job_cond->without_steps = 0; job_cond->without_usage_truncation = 0; FETCH_FIELD(hv, job_cond, cpus_max, uint32_t, FALSE); FETCH_FIELD(hv, job_cond, cpus_min, uint32_t, FALSE); FETCH_FIELD(hv, job_cond, duplicates, uint16_t, FALSE); FETCH_FIELD(hv, job_cond, exitcode, int32_t, FALSE); FETCH_FIELD(hv, job_cond, nodes_max, uint32_t, FALSE); FETCH_FIELD(hv, job_cond, nodes_min, uint32_t, FALSE); FETCH_FIELD(hv, job_cond, timelimit_max, uint32_t, FALSE); FETCH_FIELD(hv, job_cond, timelimit_min, uint32_t, FALSE); FETCH_FIELD(hv, job_cond, usage_end, time_t, FALSE); FETCH_FIELD(hv, job_cond, usage_start, time_t, FALSE); FETCH_FIELD(hv, job_cond, used_nodes, charp, FALSE); FETCH_FIELD(hv, job_cond, without_steps, uint16_t, FALSE); FETCH_FIELD(hv, job_cond, without_usage_truncation, uint16_t, FALSE); FETCH_LIST_FIELD(hv, job_cond, acct_list); FETCH_LIST_FIELD(hv, job_cond, associd_list); FETCH_LIST_FIELD(hv, job_cond, cluster_list); FETCH_LIST_FIELD(hv, job_cond, groupid_list); 
FETCH_LIST_FIELD(hv, job_cond, jobname_list); FETCH_LIST_FIELD(hv, job_cond, partition_list); FETCH_LIST_FIELD(hv, job_cond, qos_list); FETCH_LIST_FIELD(hv, job_cond, resv_list); FETCH_LIST_FIELD(hv, job_cond, resvid_list); FETCH_LIST_FIELD(hv, job_cond, state_list); FETCH_LIST_FIELD(hv, job_cond, userid_list); FETCH_LIST_FIELD(hv, job_cond, wckey_list); return 0; } int hv_to_user_cond(HV* hv, slurmdb_user_cond_t* user_cond) { AV* element_av; SV** svp; char* str = NULL; int i, elements = 0; user_cond->admin_level = 0; user_cond->with_assocs = 1; user_cond->with_coords = 0; user_cond->with_deleted = 1; user_cond->with_wckeys = 0; FETCH_FIELD(hv, user_cond, admin_level, uint16_t, FALSE); FETCH_FIELD(hv, user_cond, with_assocs, uint16_t, FALSE); FETCH_FIELD(hv, user_cond, with_coords, uint16_t, FALSE); FETCH_FIELD(hv, user_cond, with_deleted, uint16_t, FALSE); FETCH_FIELD(hv, user_cond, with_wckeys, uint16_t, FALSE); if ( (svp = hv_fetch (hv, "assoc_cond", strlen("assoc_cond"), FALSE)) ) { if(SvROK(*svp) && SvTYPE(SvRV(*svp)) == SVt_PVHV) { HV* element_hv = (HV*)SvRV(*svp); hv_to_assoc_cond(element_hv, user_cond->assoc_cond); } else { Perl_warn(aTHX_ "assoc_cond val is not a hash value reference"); return -1; } } FETCH_LIST_FIELD(hv, user_cond, def_acct_list); FETCH_LIST_FIELD(hv, user_cond, def_wckey_list); return 0; } int tres_rec_to_hv(slurmdb_tres_rec_t* rec, HV* hv) { STORE_FIELD(hv, rec, alloc_secs, uint64_t); STORE_FIELD(hv, rec, rec_count, uint32_t); STORE_FIELD(hv, rec, count, uint64_t); STORE_FIELD(hv, rec, id, uint32_t); STORE_FIELD(hv, rec, name, charp); STORE_FIELD(hv, rec, type, charp); return 0; } int report_job_grouping_to_hv(slurmdb_report_job_grouping_t* rec, HV* hv) { AV* my_av; HV* rh; slurmdb_tres_rec_t *tres_rec = NULL; ListIterator itr = NULL; /* FIXME: include the job list here (it is not NULL, as * previously thought) */ STORE_FIELD(hv, rec, min_size, uint32_t); STORE_FIELD(hv, rec, max_size, uint32_t); STORE_FIELD(hv, rec, count, uint32_t);
my_av = (AV*)sv_2mortal((SV*)newAV()); if (rec->tres_list) { itr = slurm_list_iterator_create(rec->tres_list); while ((tres_rec = slurm_list_next(itr))) { rh = (HV*)sv_2mortal((SV*)newHV()); if (tres_rec_to_hv(tres_rec, rh) < 0) { Perl_warn(aTHX_ "Failed to convert a tres_rec to a hv"); slurm_list_iterator_destroy(itr); return -1; } else { av_push(my_av, newRV((SV*)rh)); } } slurm_list_iterator_destroy(itr); } hv_store_sv(hv, "tres_list", newRV((SV*)my_av)); return 0; } int report_acct_grouping_to_hv(slurmdb_report_acct_grouping_t* rec, HV* hv) { AV* my_av; HV* rh; slurmdb_report_job_grouping_t* jgr = NULL; slurmdb_tres_rec_t *tres_rec = NULL; ListIterator itr = NULL; STORE_FIELD(hv, rec, acct, charp); STORE_FIELD(hv, rec, count, uint32_t); STORE_FIELD(hv, rec, lft, uint32_t); STORE_FIELD(hv, rec, rgt, uint32_t); my_av = (AV*)sv_2mortal((SV*)newAV()); if (rec->groups) { itr = slurm_list_iterator_create(rec->groups); while ((jgr = slurm_list_next(itr))) { rh = (HV*)sv_2mortal((SV*)newHV()); if (report_job_grouping_to_hv(jgr, rh) < 0) { Perl_warn(aTHX_ "Failed to convert a report_job_grouping to a hv"); slurm_list_iterator_destroy(itr); return -1; } else { av_push(my_av, newRV((SV*)rh)); } } slurm_list_iterator_destroy(itr); } hv_store_sv(hv, "groups", newRV((SV*)my_av)); my_av = (AV*)sv_2mortal((SV*)newAV()); if (rec->tres_list) { itr = slurm_list_iterator_create(rec->tres_list); while ((tres_rec = slurm_list_next(itr))) { rh = (HV*)sv_2mortal((SV*)newHV()); if (tres_rec_to_hv(tres_rec, rh) < 0) { Perl_warn(aTHX_ "Failed to convert a tres_rec to a hv"); slurm_list_iterator_destroy(itr); return -1; } else { av_push(my_av, newRV((SV*)rh)); } } slurm_list_iterator_destroy(itr); } hv_store_sv(hv, "tres_list", newRV((SV*)my_av)); return 0; } int report_cluster_grouping_to_hv(slurmdb_report_cluster_grouping_t* rec, HV* hv) { AV* my_av; HV* rh; slurmdb_report_acct_grouping_t* agr = NULL; slurmdb_tres_rec_t *tres_rec = NULL; ListIterator itr = NULL; STORE_FIELD(hv, rec, 
cluster, charp); STORE_FIELD(hv, rec, count, uint32_t); my_av = (AV*)sv_2mortal((SV*)newAV()); if (rec->acct_list) { itr = slurm_list_iterator_create(rec->acct_list); while ((agr = slurm_list_next(itr))) { rh = (HV*)sv_2mortal((SV*)newHV()); if (report_acct_grouping_to_hv(agr, rh) < 0) { Perl_warn(aTHX_ "Failed to convert a report_acct_grouping to a hv"); slurm_list_iterator_destroy(itr); return -1; } else { av_push(my_av, newRV((SV*)rh)); } } slurm_list_iterator_destroy(itr); } hv_store_sv(hv, "acct_list", newRV((SV*)my_av)); my_av = (AV*)sv_2mortal((SV*)newAV()); if (rec->tres_list) { itr = slurm_list_iterator_create(rec->tres_list); while ((tres_rec = slurm_list_next(itr))) { rh = (HV*)sv_2mortal((SV*)newHV()); if (tres_rec_to_hv(tres_rec, rh) < 0) { Perl_warn(aTHX_ "Failed to convert a tres_rec to a hv"); slurm_list_iterator_destroy(itr); return -1; } else { av_push(my_av, newRV((SV*)rh)); } } slurm_list_iterator_destroy(itr); } hv_store_sv(hv, "tres_list", newRV((SV*)my_av)); return 0; } int cluster_grouping_list_to_av(List list, AV* av) { HV* rh; ListIterator itr = NULL; slurmdb_report_cluster_grouping_t* rec = NULL; if (list) { itr = slurm_list_iterator_create(list); while ((rec = slurm_list_next(itr))) { rh = (HV*)sv_2mortal((SV*)newHV()); if (report_cluster_grouping_to_hv(rec, rh) < 0) { Perl_warn(aTHX_ "Failed to convert a report_cluster_grouping to a hv"); slurm_list_iterator_destroy(itr); return -1; } else { av_push(av, newRV((SV*)rh)); } } slurm_list_iterator_destroy(itr); } return 0; } int cluster_accounting_rec_to_hv(slurmdb_cluster_accounting_rec_t* ar, HV* hv) { HV* rh; STORE_FIELD(hv, ar, alloc_secs, uint64_t); STORE_FIELD(hv, ar, down_secs, uint64_t); STORE_FIELD(hv, ar, idle_secs, uint64_t); STORE_FIELD(hv, ar, over_secs, uint64_t); STORE_FIELD(hv, ar, pdown_secs, uint64_t); STORE_FIELD(hv, ar, period_start, time_t); STORE_FIELD(hv, ar, resv_secs, uint64_t); rh = (HV*)sv_2mortal((SV*)newHV()); if (tres_rec_to_hv(&ar->tres_rec, rh) < 0) { 
Perl_warn(aTHX_ "Failed to convert a tres_rec to a hv"); return -1; } hv_store_sv(hv, "tres_rec", newRV((SV*)rh)); return 0; } int cluster_rec_to_hv(slurmdb_cluster_rec_t* rec, HV* hv) { AV* my_av; HV* rh; ListIterator itr = NULL; slurmdb_cluster_accounting_rec_t* ar = NULL; my_av = (AV*)sv_2mortal((SV*)newAV()); if (rec->accounting_list) { itr = slurm_list_iterator_create(rec->accounting_list); while ((ar = slurm_list_next(itr))) { rh = (HV*)sv_2mortal((SV*)newHV()); if (cluster_accounting_rec_to_hv(ar, rh) < 0) { Perl_warn(aTHX_ "Failed to convert a cluster_accounting_rec to a hv"); slurm_list_iterator_destroy(itr); return -1; } else { av_push(my_av, newRV((SV*)rh)); } } slurm_list_iterator_destroy(itr); } hv_store_sv(hv, "accounting_list", newRV((SV*)my_av)); STORE_FIELD(hv, rec, classification, uint16_t); STORE_FIELD(hv, rec, control_host, charp); STORE_FIELD(hv, rec, control_port, uint32_t); STORE_FIELD(hv, rec, dimensions, uint16_t); STORE_FIELD(hv, rec, flags, uint32_t); STORE_FIELD(hv, rec, name, charp); STORE_FIELD(hv, rec, nodes, charp); STORE_FIELD(hv, rec, plugin_id_select, uint32_t); /* slurmdb_assoc_rec_t* root_assoc; */ STORE_FIELD(hv, rec, rpc_version, uint16_t); STORE_FIELD(hv, rec, tres_str, charp); return 0; } int report_assoc_rec_to_hv(slurmdb_report_assoc_rec_t* rec, HV* hv) { AV* my_av; HV* rh; slurmdb_tres_rec_t *tres_rec = NULL; ListIterator itr = NULL; STORE_FIELD(hv, rec, acct, charp); STORE_FIELD(hv, rec, cluster, charp); STORE_FIELD(hv, rec, parent_acct, charp); my_av = (AV*)sv_2mortal((SV*)newAV()); if (rec->tres_list) { itr = slurm_list_iterator_create(rec->tres_list); while ((tres_rec = slurm_list_next(itr))) { rh = (HV*)sv_2mortal((SV*)newHV()); if (tres_rec_to_hv(tres_rec, rh) < 0) { Perl_warn(aTHX_ "Failed to convert a tres_rec to a hv"); slurm_list_iterator_destroy(itr); return -1; } else { av_push(my_av, newRV((SV*)rh)); } } slurm_list_iterator_destroy(itr); } hv_store_sv(hv, "tres_list", newRV((SV*)my_av)); STORE_FIELD(hv, rec, 
user, charp); return 0; } int report_cluster_rec_to_hv(slurmdb_report_cluster_rec_t* rec, HV* hv) { AV* my_av; HV* rh; slurmdb_report_assoc_rec_t* ar = NULL; slurmdb_report_user_rec_t* ur = NULL; slurmdb_tres_rec_t *tres_rec = NULL; ListIterator itr = NULL; /* FIXME: do the accounting_list (add function to parse * slurmdb_accounting_rec_t) */ my_av = (AV*)sv_2mortal((SV*)newAV()); if (rec->assoc_list) { itr = slurm_list_iterator_create(rec->assoc_list); while ((ar = slurm_list_next(itr))) { rh = (HV*)sv_2mortal((SV*)newHV()); if (report_assoc_rec_to_hv(ar, rh) < 0) { Perl_warn(aTHX_ "Failed to convert a report_assoc_rec to a hv"); slurm_list_iterator_destroy(itr); return -1; } else { av_push(my_av, newRV((SV*)rh)); } } slurm_list_iterator_destroy(itr); } hv_store_sv(hv, "assoc_list", newRV((SV*)my_av)); STORE_FIELD(hv, rec, name, charp); my_av = (AV*)sv_2mortal((SV*)newAV()); if (rec->tres_list) { itr = slurm_list_iterator_create(rec->tres_list); while ((tres_rec = slurm_list_next(itr))) { rh = (HV*)sv_2mortal((SV*)newHV()); if (tres_rec_to_hv(tres_rec, rh) < 0) { Perl_warn(aTHX_ "Failed to convert a tres_rec to a hv"); slurm_list_iterator_destroy(itr); return -1; } else { av_push(my_av, newRV((SV*)rh)); } } slurm_list_iterator_destroy(itr); } hv_store_sv(hv, "tres_list", newRV((SV*)my_av)); my_av = (AV*)sv_2mortal((SV*)newAV()); if (rec->user_list) { itr = slurm_list_iterator_create(rec->user_list); while ((ur = slurm_list_next(itr))) { rh = (HV*)sv_2mortal((SV*)newHV()); if (report_user_rec_to_hv(ur, rh) < 0) { Perl_warn(aTHX_ "Failed to convert a report_user_rec to a hv"); slurm_list_iterator_destroy(itr); return -1; } else { av_push(my_av, newRV((SV*)rh)); } } slurm_list_iterator_destroy(itr); } hv_store_sv(hv, "user_list", newRV((SV*)my_av)); return 0; } int report_cluster_rec_list_to_av(List list, AV* av) { HV* rh; ListIterator itr = NULL; slurmdb_report_cluster_rec_t* rec = NULL; if (list) { itr = slurm_list_iterator_create(list); while ((rec = 
slurm_list_next(itr))) { rh = (HV*)sv_2mortal((SV*)newHV()); if (report_cluster_rec_to_hv(rec, rh) < 0) { Perl_warn(aTHX_ "Failed to convert a report_cluster_rec to a hv"); slurm_list_iterator_destroy(itr); return -1; } else { av_push(av, newRV((SV*)rh)); } } slurm_list_iterator_destroy(itr); } return 0; } int report_user_rec_to_hv(slurmdb_report_user_rec_t* rec, HV* hv) { AV* my_av; HV* rh; char* acct; slurmdb_report_assoc_rec_t* ar = NULL; slurmdb_tres_rec_t *tres_rec = NULL; ListIterator itr = NULL; my_av = (AV*)sv_2mortal((SV*)newAV()); if (rec->acct_list) { itr = slurm_list_iterator_create(rec->acct_list); while ((acct = slurm_list_next(itr))) { av_push(my_av, newSVpv(acct, strlen(acct))); } slurm_list_iterator_destroy(itr); } hv_store_sv(hv, "acct_list", newRV((SV*)my_av)); my_av = (AV*)sv_2mortal((SV*)newAV()); if (rec->assoc_list) { itr = slurm_list_iterator_create(rec->assoc_list); while ((ar = slurm_list_next(itr))) { rh = (HV*)sv_2mortal((SV*)newHV()); if (report_assoc_rec_to_hv(ar, rh) < 0) { Perl_warn(aTHX_ "Failed to convert a report_assoc_rec to a hv"); slurm_list_iterator_destroy(itr); return -1; } else { av_push(my_av, newRV((SV*)rh)); } } slurm_list_iterator_destroy(itr); } hv_store_sv(hv, "assoc_list", newRV((SV*)my_av)); STORE_FIELD(hv, rec, acct, charp); STORE_FIELD(hv, rec, name, charp); my_av = (AV*)sv_2mortal((SV*)newAV()); if (rec->tres_list) { itr = slurm_list_iterator_create(rec->tres_list); while ((tres_rec = slurm_list_next(itr))) { rh = (HV*)sv_2mortal((SV*)newHV()); if (tres_rec_to_hv(tres_rec, rh) < 0) { Perl_warn(aTHX_ "Failed to convert a tres_rec to a hv"); slurm_list_iterator_destroy(itr); return -1; } else { av_push(my_av, newRV((SV*)rh)); } } slurm_list_iterator_destroy(itr); } hv_store_sv(hv, "tres_list", newRV((SV*)my_av)); STORE_FIELD(hv, rec, uid, uid_t); return 0; } int stats_to_hv(slurmdb_stats_t *stats, HV* hv) { STORE_FIELD(hv, stats, act_cpufreq, double); STORE_FIELD(hv, stats, cpu_ave, double); STORE_FIELD(hv, stats, 
consumed_energy, double); STORE_FIELD(hv, stats, cpu_min, uint32_t); STORE_FIELD(hv, stats, cpu_min_nodeid, uint32_t); STORE_FIELD(hv, stats, cpu_min_taskid, uint32_t); STORE_FIELD(hv, stats, disk_read_ave, double); STORE_FIELD(hv, stats, disk_read_max, double); STORE_FIELD(hv, stats, disk_read_max_nodeid, uint32_t); STORE_FIELD(hv, stats, disk_read_max_taskid, uint32_t); STORE_FIELD(hv, stats, disk_write_ave, double); STORE_FIELD(hv, stats, disk_write_max, double); STORE_FIELD(hv, stats, disk_write_max_nodeid, uint32_t); STORE_FIELD(hv, stats, disk_write_max_taskid, uint32_t); STORE_FIELD(hv, stats, pages_ave, double); STORE_FIELD(hv, stats, pages_max, uint64_t); STORE_FIELD(hv, stats, pages_max_nodeid, uint32_t); STORE_FIELD(hv, stats, pages_max_taskid, uint32_t); STORE_FIELD(hv, stats, rss_ave, double); STORE_FIELD(hv, stats, rss_max, uint64_t); STORE_FIELD(hv, stats, rss_max_nodeid, uint32_t); STORE_FIELD(hv, stats, rss_max_taskid, uint32_t); STORE_FIELD(hv, stats, vsize_ave, double); STORE_FIELD(hv, stats, vsize_max, uint64_t); STORE_FIELD(hv, stats, vsize_max_nodeid, uint32_t); STORE_FIELD(hv, stats, vsize_max_taskid, uint32_t); return 0; } int step_rec_to_hv(slurmdb_step_rec_t *rec, HV* hv) { HV* stats_hv = (HV*)sv_2mortal((SV*)newHV()); stats_to_hv(&rec->stats, stats_hv); hv_store_sv(hv, "stats", newRV((SV*)stats_hv)); STORE_FIELD(hv, rec, elapsed, uint32_t); STORE_FIELD(hv, rec, end, time_t); STORE_FIELD(hv, rec, exitcode, int32_t); STORE_FIELD(hv, rec, nnodes, uint32_t); STORE_FIELD(hv, rec, nodes, charp); STORE_FIELD(hv, rec, ntasks, uint32_t); STORE_FIELD(hv, rec, pid_str, charp); STORE_FIELD(hv, rec, req_cpufreq_min, uint32_t); STORE_FIELD(hv, rec, req_cpufreq_max, uint32_t); STORE_FIELD(hv, rec, req_cpufreq_gov, uint32_t); STORE_FIELD(hv, rec, requid, uint32_t); STORE_FIELD(hv, rec, start, time_t); STORE_FIELD(hv, rec, state, uint32_t); STORE_FIELD(hv, rec, stepid, uint32_t); STORE_FIELD(hv, rec, stepname, charp); STORE_FIELD(hv, rec, suspended, 
uint32_t); STORE_FIELD(hv, rec, sys_cpu_sec, uint32_t); STORE_FIELD(hv, rec, sys_cpu_usec, uint32_t); STORE_FIELD(hv, rec, task_dist, uint16_t); STORE_FIELD(hv, rec, tot_cpu_sec, uint32_t); STORE_FIELD(hv, rec, tot_cpu_usec, uint32_t); STORE_FIELD(hv, rec, tres_alloc_str, charp); STORE_FIELD(hv, rec, user_cpu_sec, uint32_t); STORE_FIELD(hv, rec, user_cpu_usec, uint32_t); return 0; } int job_rec_to_hv(slurmdb_job_rec_t* rec, HV* hv) { slurmdb_step_rec_t *step; ListIterator itr = NULL; AV* steps_av = (AV*)sv_2mortal((SV*)newAV()); HV* stats_hv = (HV*)sv_2mortal((SV*)newHV()); HV* step_hv; stats_to_hv(&rec->stats, stats_hv); hv_store_sv(hv, "stats", newRV((SV*)stats_hv)); if (rec->steps) { itr = slurm_list_iterator_create(rec->steps); while ((step = slurm_list_next(itr))) { step_hv = (HV*)sv_2mortal((SV*)newHV()); step_rec_to_hv(step, step_hv); av_push(steps_av, newRV((SV*)step_hv)); } slurm_list_iterator_destroy(itr); } hv_store_sv(hv, "steps", newRV((SV*)steps_av)); STORE_FIELD(hv, rec, account, charp); STORE_FIELD(hv, rec, alloc_gres, charp); STORE_FIELD(hv, rec, alloc_nodes, uint32_t); STORE_FIELD(hv, rec, array_job_id, uint32_t); STORE_FIELD(hv, rec, array_max_tasks, uint32_t); STORE_FIELD(hv, rec, array_task_id, uint32_t); STORE_FIELD(hv, rec, array_task_str, charp); STORE_FIELD(hv, rec, associd, uint32_t); STORE_FIELD(hv, rec, blockid, charp); STORE_FIELD(hv, rec, cluster, charp); STORE_FIELD(hv, rec, derived_ec, uint32_t); STORE_FIELD(hv, rec, derived_es, charp); STORE_FIELD(hv, rec, elapsed, uint32_t); STORE_FIELD(hv, rec, eligible, time_t); STORE_FIELD(hv, rec, end, time_t); STORE_FIELD(hv, rec, exitcode, uint32_t); /*STORE_FIELD(hv, rec, first_step_ptr, void*);*/ STORE_FIELD(hv, rec, gid, uint32_t); STORE_FIELD(hv, rec, jobid, uint32_t); STORE_FIELD(hv, rec, jobname, charp); STORE_FIELD(hv, rec, lft, uint32_t); STORE_FIELD(hv, rec, partition, charp); STORE_FIELD(hv, rec, nodes, charp); STORE_FIELD(hv, rec, priority, uint32_t); STORE_FIELD(hv, rec, qosid, 
uint32_t); STORE_FIELD(hv, rec, req_cpus, uint32_t); STORE_FIELD(hv, rec, req_gres, charp); STORE_FIELD(hv, rec, req_mem, uint32_t); STORE_FIELD(hv, rec, requid, uint32_t); STORE_FIELD(hv, rec, resvid, uint32_t); STORE_FIELD(hv, rec, resv_name, charp); STORE_FIELD(hv, rec, show_full, uint32_t); STORE_FIELD(hv, rec, start, time_t); STORE_FIELD(hv, rec, state, uint32_t); STORE_FIELD(hv, rec, submit, time_t); STORE_FIELD(hv, rec, suspended, uint32_t); STORE_FIELD(hv, rec, sys_cpu_sec, uint32_t); STORE_FIELD(hv, rec, sys_cpu_usec, uint32_t); STORE_FIELD(hv, rec, timelimit, uint32_t); STORE_FIELD(hv, rec, tot_cpu_sec, uint32_t); STORE_FIELD(hv, rec, tot_cpu_usec, uint32_t); STORE_FIELD(hv, rec, track_steps, uint16_t); STORE_FIELD(hv, rec, tres_alloc_str, charp); STORE_FIELD(hv, rec, uid, uint32_t); STORE_FIELD(hv, rec, used_gres, charp); STORE_FIELD(hv, rec, user, charp); STORE_FIELD(hv, rec, user_cpu_sec, uint32_t); STORE_FIELD(hv, rec, user_cpu_usec, uint32_t); STORE_FIELD(hv, rec, wckey, charp); STORE_FIELD(hv, rec, wckeyid, uint32_t); return 0; } int hv_to_qos_cond(HV* hv, slurmdb_qos_cond_t* qos_cond) { AV* element_av; char* str = NULL; int i, elements = 0; FETCH_FIELD(hv, qos_cond, preempt_mode, uint16_t, FALSE); FETCH_FIELD(hv, qos_cond, with_deleted, uint16_t, FALSE); FETCH_LIST_FIELD(hv, qos_cond, description_list); FETCH_LIST_FIELD(hv, qos_cond, id_list); FETCH_LIST_FIELD(hv, qos_cond, name_list); return 0; } int qos_rec_to_hv(slurmdb_qos_rec_t* rec, HV* hv, List all_qos) { char *preempt = NULL; preempt = slurmdb_get_qos_complete_str_bitstr(all_qos, rec->preempt_bitstr); hv_store_charp(hv, "preempt", preempt); STORE_FIELD(hv, rec, description, charp); STORE_FIELD(hv, rec, id, uint32_t); STORE_FIELD(hv, rec, flags, uint32_t); STORE_FIELD(hv, rec, grace_time, uint32_t); STORE_FIELD(hv, rec, grp_jobs, uint32_t); STORE_FIELD(hv, rec, grp_submit_jobs, uint32_t); STORE_FIELD(hv, rec, grp_tres, charp); STORE_FIELD(hv, rec, grp_tres_mins, charp); STORE_FIELD(hv, rec, 
		    grp_tres_run_mins, charp);
	STORE_FIELD(hv, rec, grp_wall, uint32_t);
	STORE_FIELD(hv, rec, max_jobs_pu, uint32_t);
	STORE_FIELD(hv, rec, max_submit_jobs_pu, uint32_t);
	STORE_FIELD(hv, rec, max_tres_mins_pj, charp);
	STORE_FIELD(hv, rec, max_tres_pj, charp);
	STORE_FIELD(hv, rec, max_tres_pn, charp);
	STORE_FIELD(hv, rec, max_tres_pu, charp);
	STORE_FIELD(hv, rec, max_tres_run_mins_pu, charp);
	STORE_FIELD(hv, rec, max_wall_pj, uint32_t);
	STORE_FIELD(hv, rec, min_tres_pj, charp);
	STORE_FIELD(hv, rec, name, charp);
	STORE_FIELD(hv, rec, preempt_mode, uint16_t);
	STORE_FIELD(hv, rec, priority, uint32_t);
	STORE_FIELD(hv, rec, usage_factor, double);
	STORE_FIELD(hv, rec, usage_thres, double);
	return 0;
}
slurm-slurm-15-08-7-1/contribs/perlapi/libslurmdb/perl/ppport.h000066400000000000000000004565141265000126300244140ustar00rootroot00000000000000#if 0
<<'SKIP';
#endif
/*
----------------------------------------------------------------------

    ppport.h -- Perl/Pollution/Portability Version 3.13

    Automatically created by Devel::PPPort running under perl 5.010000.

    Do NOT edit this file directly! -- Edit PPPort_pm.PL and the
    includes in parts/inc/ instead.

    Use 'perldoc ppport.h' to view the documentation below.
----------------------------------------------------------------------

SKIP

=pod

=head1 NAME

ppport.h - Perl/Pollution/Portability version 3.13

=head1 SYNOPSIS

  perl ppport.h [options] [source files]

Searches current directory for files if no [source files] are given

  --help                      show short help

  --version                   show version

  --patch=file                write one patch file with changes
  --copy=suffix               write changed copies with suffix
  --diff=program              use diff program and options

  --compat-version=version    provide compatibility with Perl version
  --cplusplus                 accept C++ comments

  --quiet                     don't output anything except fatal errors
  --nodiag                    don't show diagnostics
  --nohints                   don't show hints
  --nochanges                 don't suggest changes
  --nofilter                  don't filter input files

  --strip                     strip all script and doc functionality
                              from ppport.h

  --list-provided             list provided API
  --list-unsupported          list unsupported API
  --api-info=name             show Perl API portability information

=head1 COMPATIBILITY

This version of F<ppport.h> is designed to support operation with Perl
installations back to 5.003, and has been tested up to 5.10.0.

=head1 OPTIONS

=head2 --help

Display a brief usage summary.

=head2 --version

Display the version of F<ppport.h>.

=head2 --patch=I<file>

If this option is given, a single patch file will be created if any
changes are suggested. This requires a working diff program to be
installed on your system.

=head2 --copy=I<suffix>

If this option is given, a copy of each file will be saved with the
given suffix that contains the suggested changes. This does not
require any external programs. Note that this does not automagically
add a dot between the original filename and the suffix. If you want
the dot, you have to include it in the option argument.

If neither C<--patch> or C<--copy> are given, the default is to
simply print the diffs for each file. This requires either
C<Text::Diff> or a C<diff> program to be installed.

=head2 --diff=I<program>

Manually set the diff program and options to use. The default is to
use C<Text::Diff>, when installed, and output unified context diffs.
=head2 --compat-version=I<version>

Tell F<ppport.h> to check for compatibility with the given Perl
version. The default is to check for compatibility with Perl version
5.003. You can use this option to reduce the output of F<ppport.h>
if you intend to be backward compatible only down to a certain Perl
version.

=head2 --cplusplus

Usually, F<ppport.h> will detect C++ style comments and replace them
with C style comments for portability reasons. Using this option
instructs F<ppport.h> to leave C++ comments untouched.

=head2 --quiet

Be quiet. Don't print anything except fatal errors.

=head2 --nodiag

Don't output any diagnostic messages. Only portability alerts will
be printed.

=head2 --nohints

Don't output any hints. Hints often contain useful portability
notes. Warnings will still be displayed.

=head2 --nochanges

Don't suggest any changes. Only give diagnostic output and hints
unless these are also deactivated.

=head2 --nofilter

Don't filter the list of input files. By default, files not looking
like source code (i.e. not *.xs, *.c, *.cc, *.cpp or *.h) are
skipped.

=head2 --strip

Strip all script and documentation functionality from F<ppport.h>.
This reduces the size of F<ppport.h> dramatically and may be useful
if you want to include F<ppport.h> in smaller modules without
increasing their distribution size too much.

The stripped F<ppport.h> will have a C<--unstrip> option that allows
you to undo the stripping, but only if an appropriate
C<Devel::PPPort> module is installed.

=head2 --list-provided

Lists the API elements for which compatibility is provided by
F<ppport.h>. Also lists if it must be explicitly requested, if it
has dependencies, and if there are hints or warnings for it.

=head2 --list-unsupported

Lists the API elements that are known not to be supported by
F<ppport.h> and below which version of Perl they probably won't be
available or work.

=head2 --api-info=I<name>

Show portability information for API elements matching I<name>. If
I<name> is surrounded by slashes, it is interpreted as a regular
expression.
=head1 DESCRIPTION In order for a Perl extension (XS) module to be as portable as possible across differing versions of Perl itself, certain steps need to be taken. =over 4 =item * Including this header is the first major one. This alone will give you access to a large part of the Perl API that hasn't been available in earlier Perl releases. Use perl ppport.h --list-provided to see which API elements are provided by ppport.h. =item * You should avoid using deprecated parts of the API. For example, using global Perl variables without the C prefix is deprecated. Also, some API functions used to have a C prefix. Using this form is also deprecated. You can safely use the supported API, as F will provide wrappers for older Perl versions. =item * If you use one of a few functions or variables that were not present in earlier versions of Perl, and that can't be provided using a macro, you have to explicitly request support for these functions by adding one or more C<#define>s in your source code before the inclusion of F. These functions or variables will be marked C in the list shown by C<--list-provided>. Depending on whether you module has a single or multiple files that use such functions or variables, you want either C or global variants. For a C function or variable (used only in a single source file), use: #define NEED_function #define NEED_variable For a global function or variable (used in multiple source files), use: #define NEED_function_GLOBAL #define NEED_variable_GLOBAL Note that you mustn't have more than one global request for the same function or variable in your project. 
Function / Variable Static Request Global Request ----------------------------------------------------------------------------------------- PL_signals NEED_PL_signals NEED_PL_signals_GLOBAL eval_pv() NEED_eval_pv NEED_eval_pv_GLOBAL grok_bin() NEED_grok_bin NEED_grok_bin_GLOBAL grok_hex() NEED_grok_hex NEED_grok_hex_GLOBAL grok_number() NEED_grok_number NEED_grok_number_GLOBAL grok_numeric_radix() NEED_grok_numeric_radix NEED_grok_numeric_radix_GLOBAL grok_oct() NEED_grok_oct NEED_grok_oct_GLOBAL load_module() NEED_load_module NEED_load_module_GLOBAL my_snprintf() NEED_my_snprintf NEED_my_snprintf_GLOBAL my_strlcat() NEED_my_strlcat NEED_my_strlcat_GLOBAL my_strlcpy() NEED_my_strlcpy NEED_my_strlcpy_GLOBAL newCONSTSUB() NEED_newCONSTSUB NEED_newCONSTSUB_GLOBAL newRV_noinc() NEED_newRV_noinc NEED_newRV_noinc_GLOBAL newSVpvn_share() NEED_newSVpvn_share NEED_newSVpvn_share_GLOBAL sv_2pv_flags() NEED_sv_2pv_flags NEED_sv_2pv_flags_GLOBAL sv_2pvbyte() NEED_sv_2pvbyte NEED_sv_2pvbyte_GLOBAL sv_catpvf_mg() NEED_sv_catpvf_mg NEED_sv_catpvf_mg_GLOBAL sv_catpvf_mg_nocontext() NEED_sv_catpvf_mg_nocontext NEED_sv_catpvf_mg_nocontext_GLOBAL sv_pvn_force_flags() NEED_sv_pvn_force_flags NEED_sv_pvn_force_flags_GLOBAL sv_setpvf_mg() NEED_sv_setpvf_mg NEED_sv_setpvf_mg_GLOBAL sv_setpvf_mg_nocontext() NEED_sv_setpvf_mg_nocontext NEED_sv_setpvf_mg_nocontext_GLOBAL vload_module() NEED_vload_module NEED_vload_module_GLOBAL vnewSVpvf() NEED_vnewSVpvf NEED_vnewSVpvf_GLOBAL warner() NEED_warner NEED_warner_GLOBAL To avoid namespace conflicts, you can change the namespace of the explicitly exported functions / variables using the C macro. Just C<#define> the macro before including C: #define DPPP_NAMESPACE MyOwnNamespace_ #include "ppport.h" The default namespace is C. =back The good thing is that most of the above can be checked by running F on your source code. See the next section for details. 
=head1 EXAMPLES To verify whether F is needed for your module, whether you should make any changes to your code, and whether any special defines should be used, F can be run as a Perl script to check your source code. Simply say: perl ppport.h The result will usually be a list of patches suggesting changes that should at least be acceptable, if not necessarily the most efficient solution, or a fix for all possible problems. If you know that your XS module uses features only available in newer Perl releases, if you're aware that it uses C++ comments, and if you want all suggestions as a single patch file, you could use something like this: perl ppport.h --compat-version=5.6.0 --cplusplus --patch=test.diff If you only want your code to be scanned without any suggestions for changes, use: perl ppport.h --nochanges You can specify a different C program or options, using the C<--diff> option: perl ppport.h --diff='diff -C 10' This would output context diffs with 10 lines of context. If you want to create patched copies of your files instead, use: perl ppport.h --copy=.new To display portability information for the C function, use: perl ppport.h --api-info=newSVpvn Since the argument to C<--api-info> can be a regular expression, you can use perl ppport.h --api-info=/_nomg$/ to display portability information for all C<_nomg> functions or perl ppport.h --api-info=/./ to display information for all known API elements. =head1 BUGS If this version of F is causing failure during the compilation of this module, please check if newer versions of either this module or C are available on CPAN before sending a bug report. If F was generated using the latest version of C and is causing failure of this module, please file a bug report using the CPAN Request Tracker at L. Please include the following information: =over 4 =item 1. The complete output from running "perl -V" =item 2. This file. =item 3. The name and version of the module you were trying to build. =item 4. 
A full log of the build that failed. =item 5. Any other information that you think could be relevant. =back For the latest version of this code, please get the C module from CPAN. =head1 COPYRIGHT Version 3.x, Copyright (c) 2004-2007, Marcus Holland-Moritz. Version 2.x, Copyright (C) 2001, Paul Marquess. Version 1.x, Copyright (C) 1999, Kenneth Albanowski. This program is free software; you can redistribute it and/or modify it under the same terms as Perl itself. =head1 SEE ALSO See L. =cut use strict; # Disable broken TRIE-optimization BEGIN { eval '${^RE_TRIE_MAXBUF} = -1' if $] >= 5.009004 && $] <= 5.009005 } my $VERSION = 3.13; my %opt = ( quiet => 0, diag => 1, hints => 1, changes => 1, cplusplus => 0, filter => 1, strip => 0, version => 0, ); my($ppport) = $0 =~ /([\w.]+)$/; my $LF = '(?:\r\n|[\r\n])'; # line feed my $HS = "[ \t]"; # horizontal whitespace # Never use C comments in this file! my $ccs = '/'.'*'; my $cce = '*'.'/'; my $rccs = quotemeta $ccs; my $rcce = quotemeta $cce; eval { require Getopt::Long; Getopt::Long::GetOptions(\%opt, qw( help quiet diag! filter! hints! changes! cplusplus strip version patch=s copy=s diff=s compat-version=s list-provided list-unsupported api-info=s )) or usage(); }; if ($@ and grep /^-/, @ARGV) { usage() if "@ARGV" =~ /^--?h(?:elp)?$/; die "Getopt::Long not found. Please don't use any options.\n"; } if ($opt{version}) { print "This is $0 $VERSION.\n"; exit 0; } usage() if $opt{help}; strip() if $opt{strip}; if (exists $opt{'compat-version'}) { my($r,$v,$s) = eval { parse_version($opt{'compat-version'}) }; if ($@) { die "Invalid version number format: '$opt{'compat-version'}'\n"; } die "Only Perl 5 is supported\n" if $r != 5; die "Invalid version number: $opt{'compat-version'}\n" if $v >= 1000 || $s >= 1000; $opt{'compat-version'} = sprintf "%d.%03d%03d", $r, $v, $s; } else { $opt{'compat-version'} = 5; } my %API = map { /^(\w+)\|([^|]*)\|([^|]*)\|(\w*)$/ ? ( $1 => { ($2 ? ( base => $2 ) : ()), ($3 ? 
( todo => $3 ) : ()), (index($4, 'v') >= 0 ? ( varargs => 1 ) : ()), (index($4, 'p') >= 0 ? ( provided => 1 ) : ()), (index($4, 'n') >= 0 ? ( nothxarg => 1 ) : ()), } ) : die "invalid spec: $_" } qw( AvFILLp|5.004050||p AvFILL||| CLASS|||n CX_CURPAD_SAVE||| CX_CURPAD_SV||| CopFILEAV|5.006000||p CopFILEGV_set|5.006000||p CopFILEGV|5.006000||p CopFILESV|5.006000||p CopFILE_set|5.006000||p CopFILE|5.006000||p CopSTASHPV_set|5.006000||p CopSTASHPV|5.006000||p CopSTASH_eq|5.006000||p CopSTASH_set|5.006000||p CopSTASH|5.006000||p CopyD|5.009002||p Copy||| CvPADLIST||| CvSTASH||| CvWEAKOUTSIDE||| DEFSV|5.004050||p END_EXTERN_C|5.005000||p ENTER||| ERRSV|5.004050||p EXTEND||| EXTERN_C|5.005000||p F0convert|||n FREETMPS||| GIMME_V||5.004000|n GIMME|||n GROK_NUMERIC_RADIX|5.007002||p G_ARRAY||| G_DISCARD||| G_EVAL||| G_NOARGS||| G_SCALAR||| G_VOID||5.004000| GetVars||| GvSV||| Gv_AMupdate||| HEf_SVKEY||5.004000| HeHASH||5.004000| HeKEY||5.004000| HeKLEN||5.004000| HePV||5.004000| HeSVKEY_force||5.004000| HeSVKEY_set||5.004000| HeSVKEY||5.004000| HeVAL||5.004000| HvNAME||| INT2PTR|5.006000||p IN_LOCALE_COMPILETIME|5.007002||p IN_LOCALE_RUNTIME|5.007002||p IN_LOCALE|5.007002||p IN_PERL_COMPILETIME|5.008001||p IS_NUMBER_GREATER_THAN_UV_MAX|5.007002||p IS_NUMBER_INFINITY|5.007002||p IS_NUMBER_IN_UV|5.007002||p IS_NUMBER_NAN|5.007003||p IS_NUMBER_NEG|5.007002||p IS_NUMBER_NOT_INT|5.007002||p IVSIZE|5.006000||p IVTYPE|5.006000||p IVdf|5.006000||p LEAVE||| LVRET||| MARK||| MULTICALL||5.009005| MY_CXT_CLONE|5.009002||p MY_CXT_INIT|5.007003||p MY_CXT|5.007003||p MoveD|5.009002||p Move||| NOOP|5.005000||p NUM2PTR|5.006000||p NVTYPE|5.006000||p NVef|5.006001||p NVff|5.006001||p NVgf|5.006001||p Newxc|5.009003||p Newxz|5.009003||p Newx|5.009003||p Nullav||| Nullch||| Nullcv||| Nullhv||| Nullsv||| ORIGMARK||| PAD_BASE_SV||| PAD_CLONE_VARS||| PAD_COMPNAME_FLAGS||| PAD_COMPNAME_GEN_set||| PAD_COMPNAME_GEN||| PAD_COMPNAME_OURSTASH||| PAD_COMPNAME_PV||| PAD_COMPNAME_TYPE||| 
PAD_RESTORE_LOCAL||| PAD_SAVE_LOCAL||| PAD_SAVE_SETNULLPAD||| PAD_SETSV||| PAD_SET_CUR_NOSAVE||| PAD_SET_CUR||| PAD_SVl||| PAD_SV||| PERL_ABS|5.008001||p PERL_BCDVERSION|5.009005||p PERL_GCC_BRACE_GROUPS_FORBIDDEN|5.008001||p PERL_HASH|5.004000||p PERL_INT_MAX|5.004000||p PERL_INT_MIN|5.004000||p PERL_LONG_MAX|5.004000||p PERL_LONG_MIN|5.004000||p PERL_MAGIC_arylen|5.007002||p PERL_MAGIC_backref|5.007002||p PERL_MAGIC_bm|5.007002||p PERL_MAGIC_collxfrm|5.007002||p PERL_MAGIC_dbfile|5.007002||p PERL_MAGIC_dbline|5.007002||p PERL_MAGIC_defelem|5.007002||p PERL_MAGIC_envelem|5.007002||p PERL_MAGIC_env|5.007002||p PERL_MAGIC_ext|5.007002||p PERL_MAGIC_fm|5.007002||p PERL_MAGIC_glob|5.009005||p PERL_MAGIC_isaelem|5.007002||p PERL_MAGIC_isa|5.007002||p PERL_MAGIC_mutex|5.009005||p PERL_MAGIC_nkeys|5.007002||p PERL_MAGIC_overload_elem|5.007002||p PERL_MAGIC_overload_table|5.007002||p PERL_MAGIC_overload|5.007002||p PERL_MAGIC_pos|5.007002||p PERL_MAGIC_qr|5.007002||p PERL_MAGIC_regdata|5.007002||p PERL_MAGIC_regdatum|5.007002||p PERL_MAGIC_regex_global|5.007002||p PERL_MAGIC_shared_scalar|5.007003||p PERL_MAGIC_shared|5.007003||p PERL_MAGIC_sigelem|5.007002||p PERL_MAGIC_sig|5.007002||p PERL_MAGIC_substr|5.007002||p PERL_MAGIC_sv|5.007002||p PERL_MAGIC_taint|5.007002||p PERL_MAGIC_tiedelem|5.007002||p PERL_MAGIC_tiedscalar|5.007002||p PERL_MAGIC_tied|5.007002||p PERL_MAGIC_utf8|5.008001||p PERL_MAGIC_uvar_elem|5.007003||p PERL_MAGIC_uvar|5.007002||p PERL_MAGIC_vec|5.007002||p PERL_MAGIC_vstring|5.008001||p PERL_QUAD_MAX|5.004000||p PERL_QUAD_MIN|5.004000||p PERL_REVISION|5.006000||p PERL_SCAN_ALLOW_UNDERSCORES|5.007003||p PERL_SCAN_DISALLOW_PREFIX|5.007003||p PERL_SCAN_GREATER_THAN_UV_MAX|5.007003||p PERL_SCAN_SILENT_ILLDIGIT|5.008001||p PERL_SHORT_MAX|5.004000||p PERL_SHORT_MIN|5.004000||p PERL_SIGNALS_UNSAFE_FLAG|5.008001||p PERL_SUBVERSION|5.006000||p PERL_UCHAR_MAX|5.004000||p PERL_UCHAR_MIN|5.004000||p PERL_UINT_MAX|5.004000||p PERL_UINT_MIN|5.004000||p 
PERL_ULONG_MAX|5.004000||p PERL_ULONG_MIN|5.004000||p PERL_UNUSED_ARG|5.009003||p PERL_UNUSED_CONTEXT|5.009004||p PERL_UNUSED_DECL|5.007002||p PERL_UNUSED_VAR|5.007002||p PERL_UQUAD_MAX|5.004000||p PERL_UQUAD_MIN|5.004000||p PERL_USE_GCC_BRACE_GROUPS|5.009004||p PERL_USHORT_MAX|5.004000||p PERL_USHORT_MIN|5.004000||p PERL_VERSION|5.006000||p PL_DBsignal|5.005000||p PL_DBsingle|||pn PL_DBsub|||pn PL_DBtrace|||pn PL_Sv|5.005000||p PL_compiling|5.004050||p PL_copline|5.009005||p PL_curcop|5.004050||p PL_curstash|5.004050||p PL_debstash|5.004050||p PL_defgv|5.004050||p PL_diehook|5.004050||p PL_dirty|5.004050||p PL_dowarn|||pn PL_errgv|5.004050||p PL_expect|5.009005||p PL_hexdigit|5.005000||p PL_hints|5.005000||p PL_last_in_gv|||n PL_laststatval|5.005000||p PL_modglobal||5.005000|n PL_na|5.004050||pn PL_no_modify|5.006000||p PL_ofs_sv|||n PL_perl_destruct_level|5.004050||p PL_perldb|5.004050||p PL_ppaddr|5.006000||p PL_rsfp_filters|5.004050||p PL_rsfp|5.004050||p PL_rs|||n PL_signals|5.008001||p PL_stack_base|5.004050||p PL_stack_sp|5.004050||p PL_statcache|5.005000||p PL_stdingv|5.004050||p PL_sv_arenaroot|5.004050||p PL_sv_no|5.004050||pn PL_sv_undef|5.004050||pn PL_sv_yes|5.004050||pn PL_tainted|5.004050||p PL_tainting|5.004050||p POP_MULTICALL||5.009005| POPi|||n POPl|||n POPn|||n POPpbytex||5.007001|n POPpx||5.005030|n POPp|||n POPs|||n PTR2IV|5.006000||p PTR2NV|5.006000||p PTR2UV|5.006000||p PTR2ul|5.007001||p PTRV|5.006000||p PUSHMARK||| PUSH_MULTICALL||5.009005| PUSHi||| PUSHmortal|5.009002||p PUSHn||| PUSHp||| PUSHs||| PUSHu|5.004000||p PUTBACK||| PerlIO_clearerr||5.007003| PerlIO_close||5.007003| PerlIO_context_layers||5.009004| PerlIO_eof||5.007003| PerlIO_error||5.007003| PerlIO_fileno||5.007003| PerlIO_fill||5.007003| PerlIO_flush||5.007003| PerlIO_get_base||5.007003| PerlIO_get_bufsiz||5.007003| PerlIO_get_cnt||5.007003| PerlIO_get_ptr||5.007003| PerlIO_read||5.007003| PerlIO_seek||5.007003| PerlIO_set_cnt||5.007003| PerlIO_set_ptrcnt||5.007003| 
PerlIO_setlinebuf||5.007003| PerlIO_stderr||5.007003| PerlIO_stdin||5.007003| PerlIO_stdout||5.007003| PerlIO_tell||5.007003| PerlIO_unread||5.007003| PerlIO_write||5.007003| Perl_signbit||5.009005|n PoisonFree|5.009004||p PoisonNew|5.009004||p PoisonWith|5.009004||p Poison|5.008000||p RETVAL|||n Renewc||| Renew||| SAVECLEARSV||| SAVECOMPPAD||| SAVEPADSV||| SAVETMPS||| SAVE_DEFSV|5.004050||p SPAGAIN||| SP||| START_EXTERN_C|5.005000||p START_MY_CXT|5.007003||p STMT_END|||p STMT_START|||p STR_WITH_LEN|5.009003||p ST||| SV_CONST_RETURN|5.009003||p SV_COW_DROP_PV|5.008001||p SV_COW_SHARED_HASH_KEYS|5.009005||p SV_GMAGIC|5.007002||p SV_HAS_TRAILING_NUL|5.009004||p SV_IMMEDIATE_UNREF|5.007001||p SV_MUTABLE_RETURN|5.009003||p SV_NOSTEAL|5.009002||p SV_SMAGIC|5.009003||p SV_UTF8_NO_ENCODING|5.008001||p SVf|5.006000||p SVt_IV||| SVt_NV||| SVt_PVAV||| SVt_PVCV||| SVt_PVHV||| SVt_PVMG||| SVt_PV||| Safefree||| Slab_Alloc||| Slab_Free||| Slab_to_rw||| StructCopy||| SvCUR_set||| SvCUR||| SvEND||| SvGAMAGIC||5.006001| SvGETMAGIC|5.004050||p SvGROW||| SvIOK_UV||5.006000| SvIOK_notUV||5.006000| SvIOK_off||| SvIOK_only_UV||5.006000| SvIOK_only||| SvIOK_on||| SvIOKp||| SvIOK||| SvIVX||| SvIV_nomg|5.009001||p SvIV_set||| SvIVx||| SvIV||| SvIsCOW_shared_hash||5.008003| SvIsCOW||5.008003| SvLEN_set||| SvLEN||| SvLOCK||5.007003| SvMAGIC_set|5.009003||p SvNIOK_off||| SvNIOKp||| SvNIOK||| SvNOK_off||| SvNOK_only||| SvNOK_on||| SvNOKp||| SvNOK||| SvNVX||| SvNV_set||| SvNVx||| SvNV||| SvOK||| SvOOK||| SvPOK_off||| SvPOK_only_UTF8||5.006000| SvPOK_only||| SvPOK_on||| SvPOKp||| SvPOK||| SvPVX_const|5.009003||p SvPVX_mutable|5.009003||p SvPVX||| SvPV_const|5.009003||p SvPV_flags_const_nolen|5.009003||p SvPV_flags_const|5.009003||p SvPV_flags_mutable|5.009003||p SvPV_flags|5.007002||p SvPV_force_flags_mutable|5.009003||p SvPV_force_flags_nolen|5.009003||p SvPV_force_flags|5.007002||p SvPV_force_mutable|5.009003||p SvPV_force_nolen|5.009003||p SvPV_force_nomg_nolen|5.009003||p 
SvPV_force_nomg|5.007002||p SvPV_force|||p SvPV_mutable|5.009003||p SvPV_nolen_const|5.009003||p SvPV_nolen|5.006000||p SvPV_nomg_const_nolen|5.009003||p SvPV_nomg_const|5.009003||p SvPV_nomg|5.007002||p SvPV_set||| SvPVbyte_force||5.009002| SvPVbyte_nolen||5.006000| SvPVbytex_force||5.006000| SvPVbytex||5.006000| SvPVbyte|5.006000||p SvPVutf8_force||5.006000| SvPVutf8_nolen||5.006000| SvPVutf8x_force||5.006000| SvPVutf8x||5.006000| SvPVutf8||5.006000| SvPVx||| SvPV||| SvREFCNT_dec||| SvREFCNT_inc_NN|5.009004||p SvREFCNT_inc_simple_NN|5.009004||p SvREFCNT_inc_simple_void_NN|5.009004||p SvREFCNT_inc_simple_void|5.009004||p SvREFCNT_inc_simple|5.009004||p SvREFCNT_inc_void_NN|5.009004||p SvREFCNT_inc_void|5.009004||p SvREFCNT_inc|||p SvREFCNT||| SvROK_off||| SvROK_on||| SvROK||| SvRV_set|5.009003||p SvRV||| SvRXOK||5.009005| SvRX||5.009005| SvSETMAGIC||| SvSHARED_HASH|5.009003||p SvSHARE||5.007003| SvSTASH_set|5.009003||p SvSTASH||| SvSetMagicSV_nosteal||5.004000| SvSetMagicSV||5.004000| SvSetSV_nosteal||5.004000| SvSetSV||| SvTAINTED_off||5.004000| SvTAINTED_on||5.004000| SvTAINTED||5.004000| SvTAINT||| SvTRUE||| SvTYPE||| SvUNLOCK||5.007003| SvUOK|5.007001|5.006000|p SvUPGRADE||| SvUTF8_off||5.006000| SvUTF8_on||5.006000| SvUTF8||5.006000| SvUVXx|5.004000||p SvUVX|5.004000||p SvUV_nomg|5.009001||p SvUV_set|5.009003||p SvUVx|5.004000||p SvUV|5.004000||p SvVOK||5.008001| SvVSTRING_mg|5.009004||p THIS|||n UNDERBAR|5.009002||p UTF8_MAXBYTES|5.009002||p UVSIZE|5.006000||p UVTYPE|5.006000||p UVXf|5.007001||p UVof|5.006000||p UVuf|5.006000||p UVxf|5.006000||p WARN_ALL|5.006000||p WARN_AMBIGUOUS|5.006000||p WARN_ASSERTIONS|5.009005||p WARN_BAREWORD|5.006000||p WARN_CLOSED|5.006000||p WARN_CLOSURE|5.006000||p WARN_DEBUGGING|5.006000||p WARN_DEPRECATED|5.006000||p WARN_DIGIT|5.006000||p WARN_EXEC|5.006000||p WARN_EXITING|5.006000||p WARN_GLOB|5.006000||p WARN_INPLACE|5.006000||p WARN_INTERNAL|5.006000||p WARN_IO|5.006000||p WARN_LAYER|5.008000||p WARN_MALLOC|5.006000||p 
WARN_MISC|5.006000||p WARN_NEWLINE|5.006000||p WARN_NUMERIC|5.006000||p WARN_ONCE|5.006000||p WARN_OVERFLOW|5.006000||p WARN_PACK|5.006000||p WARN_PARENTHESIS|5.006000||p WARN_PIPE|5.006000||p WARN_PORTABLE|5.006000||p WARN_PRECEDENCE|5.006000||p WARN_PRINTF|5.006000||p WARN_PROTOTYPE|5.006000||p WARN_QW|5.006000||p WARN_RECURSION|5.006000||p WARN_REDEFINE|5.006000||p WARN_REGEXP|5.006000||p WARN_RESERVED|5.006000||p WARN_SEMICOLON|5.006000||p WARN_SEVERE|5.006000||p WARN_SIGNAL|5.006000||p WARN_SUBSTR|5.006000||p WARN_SYNTAX|5.006000||p WARN_TAINT|5.006000||p WARN_THREADS|5.008000||p WARN_UNINITIALIZED|5.006000||p WARN_UNOPENED|5.006000||p WARN_UNPACK|5.006000||p WARN_UNTIE|5.006000||p WARN_UTF8|5.006000||p WARN_VOID|5.006000||p XCPT_CATCH|5.009002||p XCPT_RETHROW|5.009002||p XCPT_TRY_END|5.009002||p XCPT_TRY_START|5.009002||p XPUSHi||| XPUSHmortal|5.009002||p XPUSHn||| XPUSHp||| XPUSHs||| XPUSHu|5.004000||p XSRETURN_EMPTY||| XSRETURN_IV||| XSRETURN_NO||| XSRETURN_NV||| XSRETURN_PV||| XSRETURN_UNDEF||| XSRETURN_UV|5.008001||p XSRETURN_YES||| XSRETURN|||p XST_mIV||| XST_mNO||| XST_mNV||| XST_mPV||| XST_mUNDEF||| XST_mUV|5.008001||p XST_mYES||| XS_VERSION_BOOTCHECK||| XS_VERSION||| XSprePUSH|5.006000||p XS||| ZeroD|5.009002||p Zero||| _aMY_CXT|5.007003||p _pMY_CXT|5.007003||p aMY_CXT_|5.007003||p aMY_CXT|5.007003||p aTHXR_|5.009005||p aTHXR|5.009005||p aTHX_|5.006000||p aTHX|5.006000||p add_data|||n addmad||| allocmy||| amagic_call||| amagic_cmp_locale||| amagic_cmp||| amagic_i_ncmp||| amagic_ncmp||| any_dup||| ao||| append_elem||| append_list||| append_madprops||| apply_attrs_my||| apply_attrs_string||5.006001| apply_attrs||| apply||| atfork_lock||5.007003|n atfork_unlock||5.007003|n av_arylen_p||5.009003| av_clear||| av_create_and_push||5.009005| av_create_and_unshift_one||5.009005| av_delete||5.006000| av_exists||5.006000| av_extend||| av_fake||| av_fetch||| av_fill||| av_len||| av_make||| av_pop||| av_push||| av_reify||| av_shift||| av_store||| av_undef||| 
av_unshift||| ax|||n bad_type||| bind_match||| block_end||| block_gimme||5.004000| block_start||| boolSV|5.004000||p boot_core_PerlIO||| boot_core_UNIVERSAL||| boot_core_mro||| boot_core_xsutils||| bytes_from_utf8||5.007001| bytes_to_uni|||n bytes_to_utf8||5.006001| call_argv|5.006000||p call_atexit||5.006000| call_list||5.004000| call_method|5.006000||p call_pv|5.006000||p call_sv|5.006000||p calloc||5.007002|n cando||| cast_i32||5.006000| cast_iv||5.006000| cast_ulong||5.006000| cast_uv||5.006000| check_type_and_open||| check_uni||| checkcomma||| checkposixcc||| ckWARN|5.006000||p ck_anoncode||| ck_bitop||| ck_concat||| ck_defined||| ck_delete||| ck_die||| ck_eof||| ck_eval||| ck_exec||| ck_exists||| ck_exit||| ck_ftst||| ck_fun||| ck_glob||| ck_grep||| ck_index||| ck_join||| ck_lengthconst||| ck_lfun||| ck_listiob||| ck_match||| ck_method||| ck_null||| ck_open||| ck_readline||| ck_repeat||| ck_require||| ck_retarget||| ck_return||| ck_rfun||| ck_rvconst||| ck_sassign||| ck_select||| ck_shift||| ck_sort||| ck_spair||| ck_split||| ck_subr||| ck_substr||| ck_svconst||| ck_trunc||| ck_unpack||| ckwarn_d||5.009003| ckwarn||5.009003| cl_and|||n cl_anything|||n cl_init_zero|||n cl_init|||n cl_is_anything|||n cl_or|||n clear_placeholders||| closest_cop||| convert||| cop_free||| cr_textfilter||| create_eval_scope||| croak_nocontext|||vn croak|||v csighandler||5.009003|n curmad||| custom_op_desc||5.007003| custom_op_name||5.007003| cv_ckproto_len||| cv_ckproto||| cv_clone||| cv_const_sv||5.004000| cv_dump||| cv_undef||| cx_dump||5.005000| cx_dup||| cxinc||| dAXMARK|5.009003||p dAX|5.007002||p dITEMS|5.007002||p dMARK||| dMULTICALL||5.009003| dMY_CXT_SV|5.007003||p dMY_CXT|5.007003||p dNOOP|5.006000||p dORIGMARK||| dSP||| dTHR|5.004050||p dTHXR|5.009005||p dTHXa|5.006000||p dTHXoa|5.006000||p dTHX|5.006000||p dUNDERBAR|5.009002||p dVAR|5.009003||p dXCPT|5.009002||p dXSARGS||| dXSI32||| dXSTARG|5.006000||p deb_curcv||| deb_nocontext|||vn deb_stack_all||| deb_stack_n||| 
debop||5.005000| debprofdump||5.005000| debprof||| debstackptrs||5.007003| debstack||5.007003| debug_start_match||| deb||5.007003|v del_sv||| delete_eval_scope||| delimcpy||5.004000| deprecate_old||| deprecate||| despatch_signals||5.007001| destroy_matcher||| die_nocontext|||vn die_where||| die|||v dirp_dup||| div128||| djSP||| do_aexec5||| do_aexec||| do_aspawn||| do_binmode||5.004050| do_chomp||| do_chop||| do_close||| do_dump_pad||| do_eof||| do_exec3||| do_execfree||| do_exec||| do_gv_dump||5.006000| do_gvgv_dump||5.006000| do_hv_dump||5.006000| do_ipcctl||| do_ipcget||| do_join||| do_kv||| do_magic_dump||5.006000| do_msgrcv||| do_msgsnd||| do_oddball||| do_op_dump||5.006000| do_op_xmldump||| do_open9||5.006000| do_openn||5.007001| do_open||5.004000| do_pipe||| do_pmop_dump||5.006000| do_pmop_xmldump||| do_print||| do_readline||| do_seek||| do_semop||| do_shmio||| do_smartmatch||| do_spawn_nowait||| do_spawn||| do_sprintf||| do_sv_dump||5.006000| do_sysseek||| do_tell||| do_trans_complex_utf8||| do_trans_complex||| do_trans_count_utf8||| do_trans_count||| do_trans_simple_utf8||| do_trans_simple||| do_trans||| do_vecget||| do_vecset||| do_vop||| docatch_body||| docatch||| doeval||| dofile||| dofindlabel||| doform||| doing_taint||5.008001|n dooneliner||| doopen_pm||| doparseform||| dopoptoeval||| dopoptogiven||| dopoptolabel||| dopoptoloop||| dopoptosub_at||| dopoptosub||| dopoptowhen||| doref||5.009003| dounwind||| dowantarray||| dump_all||5.006000| dump_eval||5.006000| dump_exec_pos||| dump_fds||| dump_form||5.006000| dump_indent||5.006000|v dump_mstats||| dump_packsubs||5.006000| dump_sub||5.006000| dump_sv_child||| dump_trie_interim_list||| dump_trie_interim_table||| dump_trie||| dump_vindent||5.006000| dumpuntil||| dup_attrlist||| emulate_cop_io||| emulate_eaccess||| eval_pv|5.006000||p eval_sv|5.006000||p exec_failed||| expect_number||| fbm_compile||5.005000| fbm_instr||5.005000| fd_on_nosuid_fs||| feature_is_enabled||| filter_add||| filter_del||| 
filter_gets||| filter_read||| find_and_forget_pmops||| find_array_subscript||| find_beginning||| find_byclass||| find_hash_subscript||| find_in_my_stash||| find_runcv||5.008001| find_rundefsvoffset||5.009002| find_script||| find_uninit_var||| first_symbol|||n fold_constants||| forbid_setid||| force_ident||| force_list||| force_next||| force_version||| force_word||| forget_pmop||| form_nocontext|||vn form||5.004000|v fp_dup||| fprintf_nocontext|||vn free_global_struct||| free_tied_hv_pool||| free_tmps||| gen_constant_list||| get_arena||| get_av|5.006000||p get_context||5.006000|n get_cvn_flags||5.009005| get_cv|5.006000||p get_db_sub||| get_debug_opts||| get_hash_seed||| get_hv|5.006000||p get_mstats||| get_no_modify||| get_num||| get_op_descs||5.005000| get_op_names||5.005000| get_opargs||| get_ppaddr||5.006000| get_re_arg||| get_sv|5.006000||p get_vtbl||5.005030| getcwd_sv||5.007002| getenv_len||| glob_2number||| glob_2pv||| glob_assign_glob||| glob_assign_ref||| gp_dup||| gp_free||| gp_ref||| grok_bin|5.007003||p grok_hex|5.007003||p grok_number|5.007002||p grok_numeric_radix|5.007002||p grok_oct|5.007003||p group_end||| gv_AVadd||| gv_HVadd||| gv_IOadd||| gv_SVadd||| gv_autoload4||5.004000| gv_check||| gv_const_sv||5.009003| gv_dump||5.006000| gv_efullname3||5.004000| gv_efullname4||5.006001| gv_efullname||| gv_ename||| gv_fetchfile_flags||5.009005| gv_fetchfile||| gv_fetchmeth_autoload||5.007003| gv_fetchmethod_autoload||5.004000| gv_fetchmethod||| gv_fetchmeth||| gv_fetchpvn_flags||5.009002| gv_fetchpv||| gv_fetchsv||5.009002| gv_fullname3||5.004000| gv_fullname4||5.006001| gv_fullname||| gv_handler||5.007001| gv_init_sv||| gv_init||| gv_name_set||5.009004| gv_stashpvn|5.004000||p gv_stashpvs||5.009003| gv_stashpv||| gv_stashsv||| he_dup||| hek_dup||| hfreeentries||| hsplit||| hv_assert||5.009005| hv_auxinit|||n hv_backreferences_p||| hv_clear_placeholders||5.009001| hv_clear||| hv_copy_hints_hv||| hv_delayfree_ent||5.004000| hv_delete_common||| 
hv_delete_ent||5.004000| hv_delete||| hv_eiter_p||5.009003| hv_eiter_set||5.009003| hv_exists_ent||5.004000| hv_exists||| hv_fetch_common||| hv_fetch_ent||5.004000| hv_fetchs|5.009003||p hv_fetch||| hv_free_ent||5.004000| hv_iterinit||| hv_iterkeysv||5.004000| hv_iterkey||| hv_iternext_flags||5.008000| hv_iternextsv||| hv_iternext||| hv_iterval||| hv_kill_backrefs||| hv_ksplit||5.004000| hv_magic_check|||n hv_magic_uvar_xkey||| hv_magic||| hv_name_set||5.009003| hv_notallowed||| hv_placeholders_get||5.009003| hv_placeholders_p||5.009003| hv_placeholders_set||5.009003| hv_riter_p||5.009003| hv_riter_set||5.009003| hv_scalar||5.009001| hv_store_ent||5.004000| hv_store_flags||5.008000| hv_stores|5.009004||p hv_store||| hv_undef||| ibcmp_locale||5.004000| ibcmp_utf8||5.007003| ibcmp||| incl_perldb||| incline||| incpush_if_exists||| incpush||| ingroup||| init_argv_symbols||| init_debugger||| init_global_struct||| init_i18nl10n||5.006000| init_i18nl14n||5.006000| init_ids||| init_interp||| init_main_stash||| init_perllib||| init_postdump_symbols||| init_predump_symbols||| init_stacks||5.005000| init_tm||5.007002| instr||| intro_my||| intuit_method||| intuit_more||| invert||| io_close||| isALNUM||| isALPHA||| isDIGIT||| isLOWER||| isSPACE||| isUPPER||| is_an_int||| is_gv_magical_sv||| is_gv_magical||| is_handle_constructor|||n is_list_assignment||| is_lvalue_sub||5.007001| is_uni_alnum_lc||5.006000| is_uni_alnumc_lc||5.006000| is_uni_alnumc||5.006000| is_uni_alnum||5.006000| is_uni_alpha_lc||5.006000| is_uni_alpha||5.006000| is_uni_ascii_lc||5.006000| is_uni_ascii||5.006000| is_uni_cntrl_lc||5.006000| is_uni_cntrl||5.006000| is_uni_digit_lc||5.006000| is_uni_digit||5.006000| is_uni_graph_lc||5.006000| is_uni_graph||5.006000| is_uni_idfirst_lc||5.006000| is_uni_idfirst||5.006000| is_uni_lower_lc||5.006000| is_uni_lower||5.006000| is_uni_print_lc||5.006000| is_uni_print||5.006000| is_uni_punct_lc||5.006000| is_uni_punct||5.006000| is_uni_space_lc||5.006000| 
is_uni_space||5.006000| is_uni_upper_lc||5.006000| is_uni_upper||5.006000| is_uni_xdigit_lc||5.006000| is_uni_xdigit||5.006000| is_utf8_alnumc||5.006000| is_utf8_alnum||5.006000| is_utf8_alpha||5.006000| is_utf8_ascii||5.006000| is_utf8_char_slow|||n is_utf8_char||5.006000| is_utf8_cntrl||5.006000| is_utf8_common||| is_utf8_digit||5.006000| is_utf8_graph||5.006000| is_utf8_idcont||5.008000| is_utf8_idfirst||5.006000| is_utf8_lower||5.006000| is_utf8_mark||5.006000| is_utf8_print||5.006000| is_utf8_punct||5.006000| is_utf8_space||5.006000| is_utf8_string_loclen||5.009003| is_utf8_string_loc||5.008001| is_utf8_string||5.006001| is_utf8_upper||5.006000| is_utf8_xdigit||5.006000| isa_lookup||| items|||n ix|||n jmaybe||| join_exact||| keyword||| leave_scope||| lex_end||| lex_start||| linklist||| listkids||| list||| load_module_nocontext|||vn load_module|5.006000||pv localize||| looks_like_bool||| looks_like_number||| lop||| mPUSHi|5.009002||p mPUSHn|5.009002||p mPUSHp|5.009002||p mPUSHu|5.009002||p mXPUSHi|5.009002||p mXPUSHn|5.009002||p mXPUSHp|5.009002||p mXPUSHu|5.009002||p mad_free||| madlex||| madparse||| magic_clear_all_env||| magic_clearenv||| magic_clearhint||| magic_clearpack||| magic_clearsig||| magic_dump||5.006000| magic_existspack||| magic_freearylen_p||| magic_freeovrld||| magic_freeregexp||| magic_getarylen||| magic_getdefelem||| magic_getnkeys||| magic_getpack||| magic_getpos||| magic_getsig||| magic_getsubstr||| magic_gettaint||| magic_getuvar||| magic_getvec||| magic_get||| magic_killbackrefs||| magic_len||| magic_methcall||| magic_methpack||| magic_nextpack||| magic_regdata_cnt||| magic_regdatum_get||| magic_regdatum_set||| magic_scalarpack||| magic_set_all_env||| magic_setamagic||| magic_setarylen||| magic_setbm||| magic_setcollxfrm||| magic_setdbline||| magic_setdefelem||| magic_setenv||| magic_setfm||| magic_setglob||| magic_sethint||| magic_setisa||| magic_setmglob||| magic_setnkeys||| magic_setpack||| magic_setpos||| magic_setregexp||| 
magic_setsig||| magic_setsubstr||| magic_settaint||| magic_setutf8||| magic_setuvar||| magic_setvec||| magic_set||| magic_sizepack||| magic_wipepack||| magicname||| make_matcher||| make_trie_failtable||| make_trie||| malloced_size|||n malloc||5.007002|n markstack_grow||| matcher_matches_sv||| measure_struct||| memEQ|5.004000||p memNE|5.004000||p mem_collxfrm||| mess_alloc||| mess_nocontext|||vn mess||5.006000|v method_common||| mfree||5.007002|n mg_clear||| mg_copy||| mg_dup||| mg_find||| mg_free||| mg_get||| mg_length||5.005000| mg_localize||| mg_magical||| mg_set||| mg_size||5.005000| mini_mktime||5.007002| missingterm||| mode_from_discipline||| modkids||| mod||| more_bodies||| more_sv||| moreswitches||| mro_get_linear_isa_c3||5.009005| mro_get_linear_isa_dfs||5.009005| mro_get_linear_isa||5.009005| mro_isa_changed_in||| mro_meta_dup||| mro_meta_init||| mro_method_changed_in||5.009005| mul128||| mulexp10|||n my_atof2||5.007002| my_atof||5.006000| my_attrs||| my_bcopy|||n my_betoh16|||n my_betoh32|||n my_betoh64|||n my_betohi|||n my_betohl|||n my_betohs|||n my_bzero|||n my_chsize||| my_clearenv||| my_cxt_index||| my_cxt_init||| my_dirfd||5.009005| my_exit_jump||| my_exit||| my_failure_exit||5.004000| my_fflush_all||5.006000| my_fork||5.007003|n my_htobe16|||n my_htobe32|||n my_htobe64|||n my_htobei|||n my_htobel|||n my_htobes|||n my_htole16|||n my_htole32|||n my_htole64|||n my_htolei|||n my_htolel|||n my_htoles|||n my_htonl||| my_kid||| my_letoh16|||n my_letoh32|||n my_letoh64|||n my_letohi|||n my_letohl|||n my_letohs|||n my_lstat||| my_memcmp||5.004000|n my_memset|||n my_ntohl||| my_pclose||5.004000| my_popen_list||5.007001| my_popen||5.004000| my_setenv||| my_snprintf|5.009004||pvn my_socketpair||5.007003|n my_sprintf||5.009003|vn my_stat||| my_strftime||5.007002| my_strlcat|5.009004||pn my_strlcpy|5.009004||pn my_swabn|||n my_swap||| my_unexec||| my_vsnprintf||5.009004|n my||| need_utf8|||n newANONATTRSUB||5.006000| newANONHASH||| newANONLIST||| newANONSUB||| 
newASSIGNOP||| newATTRSUB||5.006000| newAVREF||| newAV||| newBINOP||| newCONDOP||| newCONSTSUB|5.004050||p newCVREF||| newDEFSVOP||| newFORM||| newFOROP||| newGIVENOP||5.009003| newGIVWHENOP||| newGP||| newGVOP||| newGVREF||| newGVgen||| newHVREF||| newHVhv||5.005000| newHV||| newIO||| newLISTOP||| newLOGOP||| newLOOPEX||| newLOOPOP||| newMADPROP||| newMADsv||| newMYSUB||| newNULLLIST||| newOP||| newPADOP||| newPMOP||| newPROG||| newPVOP||| newRANGE||| newRV_inc|5.004000||p newRV_noinc|5.004000||p newRV||| newSLICEOP||| newSTATEOP||| newSUB||| newSVOP||| newSVREF||| newSV_type||5.009005| newSVhek||5.009003| newSViv||| newSVnv||| newSVpvf_nocontext|||vn newSVpvf||5.004000|v newSVpvn_share|5.007001||p newSVpvn|5.004050||p newSVpvs_share||5.009003| newSVpvs|5.009003||p newSVpv||| newSVrv||| newSVsv||| newSVuv|5.006000||p newSV||| newTOKEN||| newUNOP||| newWHENOP||5.009003| newWHILEOP||5.009003| newXS_flags||5.009004| newXSproto||5.006000| newXS||5.006000| new_collate||5.006000| new_constant||| new_ctype||5.006000| new_he||| new_logop||| new_numeric||5.006000| new_stackinfo||5.005000| new_version||5.009000| new_warnings_bitfield||| next_symbol||| nextargv||| nextchar||| ninstr||| no_bareword_allowed||| no_fh_allowed||| no_op||| not_a_number||| nothreadhook||5.008000| nuke_stacks||| num_overflow|||n offer_nice_chunk||| oopsAV||| oopsCV||| oopsHV||| op_clear||| op_const_sv||| op_dump||5.006000| op_free||| op_getmad_weak||| op_getmad||| op_null||5.007002| op_refcnt_dec||| op_refcnt_inc||| op_refcnt_lock||5.009002| op_refcnt_unlock||5.009002| op_xmldump||| open_script||| pMY_CXT_|5.007003||p pMY_CXT|5.007003||p pTHX_|5.006000||p pTHX|5.006000||p packWARN|5.007003||p pack_cat||5.007003| pack_rec||| package||| packlist||5.008001| pad_add_anon||| pad_add_name||| pad_alloc||| pad_block_start||| pad_check_dup||| pad_compname_type||| pad_findlex||| pad_findmy||| pad_fixup_inner_anons||| pad_free||| pad_leavemy||| pad_new||| pad_peg|||n pad_push||| pad_reset||| pad_setsv||| 
pad_sv||5.009005| pad_swipe||| pad_tidy||| pad_undef||| parse_body||| parse_unicode_opts||| parser_dup||| parser_free||| path_is_absolute|||n peep||| pending_Slabs_to_ro||| perl_alloc_using|||n perl_alloc|||n perl_clone_using|||n perl_clone|||n perl_construct|||n perl_destruct||5.007003|n perl_free|||n perl_parse||5.006000|n perl_run|||n pidgone||| pm_description||| pmflag||| pmop_dump||5.006000| pmop_xmldump||| pmruntime||| pmtrans||| pop_scope||| pregcomp||5.009005| pregexec||| pregfree||| prepend_elem||| prepend_madprops||| printbuf||| printf_nocontext|||vn process_special_blocks||| ptr_table_clear||5.009005| ptr_table_fetch||5.009005| ptr_table_find|||n ptr_table_free||5.009005| ptr_table_new||5.009005| ptr_table_split||5.009005| ptr_table_store||5.009005| push_scope||| put_byte||| pv_display||5.006000| pv_escape||5.009004| pv_pretty||5.009004| pv_uni_display||5.007003| qerror||| qsortsvu||| re_compile||5.009005| re_croak2||| re_dup||| re_intuit_start||5.009005| re_intuit_string||5.006000| readpipe_override||| realloc||5.007002|n reentrant_free||| reentrant_init||| reentrant_retry|||vn reentrant_size||| ref_array_or_hash||| refcounted_he_chain_2hv||| refcounted_he_fetch||| refcounted_he_free||| refcounted_he_new||| refcounted_he_value||| refkids||| refto||| ref||5.009003| reg_check_named_buff_matched||| reg_named_buff_all||5.009005| reg_named_buff_exists||5.009005| reg_named_buff_fetch||5.009005| reg_named_buff_firstkey||5.009005| reg_named_buff_iter||| reg_named_buff_nextkey||5.009005| reg_named_buff_scalar||5.009005| reg_named_buff||| reg_namedseq||| reg_node||| reg_numbered_buff_fetch||| reg_numbered_buff_length||| reg_numbered_buff_store||| reg_qr_package||| reg_recode||| reg_scan_name||| reg_skipcomment||| reg_stringify||5.009005| reg_temp_copy||| reganode||| regatom||| regbranch||| regclass_swash||5.009004| regclass||| regcppop||| regcppush||| regcurly|||n regdump_extflags||| regdump||5.005000| regdupe_internal||| regexec_flags||5.005000| 
regfree_internal||5.009005| reghop3|||n reghop4|||n reghopmaybe3|||n reginclass||| reginitcolors||5.006000| reginsert||| regmatch||| regnext||5.005000| regpiece||| regpposixcc||| regprop||| regrepeat||| regtail_study||| regtail||| regtry||| reguni||| regwhite|||n reg||| repeatcpy||| report_evil_fh||| report_uninit||| require_pv||5.006000| require_tie_mod||| restore_magic||| rninstr||| rsignal_restore||| rsignal_save||| rsignal_state||5.004000| rsignal||5.004000| run_body||| run_user_filter||| runops_debug||5.005000| runops_standard||5.005000| rvpv_dup||| rxres_free||| rxres_restore||| rxres_save||| safesyscalloc||5.006000|n safesysfree||5.006000|n safesysmalloc||5.006000|n safesysrealloc||5.006000|n same_dirent||| save_I16||5.004000| save_I32||| save_I8||5.006000| save_aelem||5.004050| save_alloc||5.006000| save_aptr||| save_ary||| save_bool||5.008001| save_clearsv||| save_delete||| save_destructor_x||5.006000| save_destructor||5.006000| save_freeop||| save_freepv||| save_freesv||| save_generic_pvref||5.006001| save_generic_svref||5.005030| save_gp||5.004000| save_hash||| save_hek_flags|||n save_helem||5.004050| save_hints||5.005000| save_hptr||| save_int||| save_item||| save_iv||5.005000| save_lines||| save_list||| save_long||| save_magic||| save_mortalizesv||5.007001| save_nogv||| save_op||| save_padsv||5.007001| save_pptr||| save_re_context||5.006000| save_scalar_at||| save_scalar||| save_set_svflags||5.009000| save_shared_pvref||5.007003| save_sptr||| save_svref||| save_vptr||5.006000| savepvn||| savepvs||5.009003| savepv||| savesharedpvn||5.009005| savesharedpv||5.007003| savestack_grow_cnt||5.008001| savestack_grow||| savesvpv||5.009002| sawparens||| scalar_mod_type|||n scalarboolean||| scalarkids||| scalarseq||| scalarvoid||| scalar||| scan_bin||5.006000| scan_commit||| scan_const||| scan_formline||| scan_heredoc||| scan_hex||| scan_ident||| scan_inputsymbol||| scan_num||5.007001| scan_oct||| scan_pat||| scan_str||| scan_subst||| scan_trans||| 
scan_version||5.009001| scan_vstring||5.009005| scan_word||| scope||| screaminstr||5.005000| seed||5.008001| sequence_num||| sequence_tail||| sequence||| set_context||5.006000|n set_csh||| set_numeric_local||5.006000| set_numeric_radix||5.006000| set_numeric_standard||5.006000| setdefout||| setenv_getix||| share_hek_flags||| share_hek||5.004000| si_dup||| sighandler|||n simplify_sort||| skipspace0||| skipspace1||| skipspace2||| skipspace||| softref2xv||| sortcv_stacked||| sortcv_xsub||| sortcv||| sortsv_flags||5.009003| sortsv||5.007003| space_join_names_mortal||| ss_dup||| stack_grow||| start_force||| start_glob||| start_subparse||5.004000| stashpv_hvname_match||5.009005| stdize_locale||| strEQ||| strGE||| strGT||| strLE||| strLT||| strNE||| str_to_version||5.006000| strip_return||| strnEQ||| strnNE||| study_chunk||| sub_crush_depth||| sublex_done||| sublex_push||| sublex_start||| sv_2bool||| sv_2cv||| sv_2io||| sv_2iuv_common||| sv_2iuv_non_preserve||| sv_2iv_flags||5.009001| sv_2iv||| sv_2mortal||| sv_2nv||| sv_2pv_flags|5.007002||p sv_2pv_nolen|5.006000||p sv_2pvbyte_nolen|5.006000||p sv_2pvbyte|5.006000||p sv_2pvutf8_nolen||5.006000| sv_2pvutf8||5.006000| sv_2pv||| sv_2uv_flags||5.009001| sv_2uv|5.004000||p sv_add_arena||| sv_add_backref||| sv_backoff||| sv_bless||| sv_cat_decode||5.008001| sv_catpv_mg|5.004050||p sv_catpvf_mg_nocontext|||pvn sv_catpvf_mg|5.006000|5.004000|pv sv_catpvf_nocontext|||vn sv_catpvf||5.004000|v sv_catpvn_flags||5.007002| sv_catpvn_mg|5.004050||p sv_catpvn_nomg|5.007002||p sv_catpvn||| sv_catpvs|5.009003||p sv_catpv||| sv_catsv_flags||5.007002| sv_catsv_mg|5.004050||p sv_catsv_nomg|5.007002||p sv_catsv||| sv_catxmlpvn||| sv_catxmlsv||| sv_chop||| sv_clean_all||| sv_clean_objs||| sv_clear||| sv_cmp_locale||5.004000| sv_cmp||| sv_collxfrm||| sv_compile_2op||5.008001| sv_copypv||5.007003| sv_dec||| sv_del_backref||| sv_derived_from||5.004000| sv_does||5.009004| sv_dump||| sv_dup||| sv_eq||| sv_exp_grow||| 
sv_force_normal_flags||5.007001| sv_force_normal||5.006000| sv_free2||| sv_free_arenas||| sv_free||| sv_gets||5.004000| sv_grow||| sv_i_ncmp||| sv_inc||| sv_insert||| sv_isa||| sv_isobject||| sv_iv||5.005000| sv_kill_backrefs||| sv_len_utf8||5.006000| sv_len||| sv_magic_portable|5.009005|5.004000|p sv_magicext||5.007003| sv_magic||| sv_mortalcopy||| sv_ncmp||| sv_newmortal||| sv_newref||| sv_nolocking||5.007003| sv_nosharing||5.007003| sv_nounlocking||| sv_nv||5.005000| sv_peek||5.005000| sv_pos_b2u_midway||| sv_pos_b2u||5.006000| sv_pos_u2b_cached||| sv_pos_u2b_forwards|||n sv_pos_u2b_midway|||n sv_pos_u2b||5.006000| sv_pvbyten_force||5.006000| sv_pvbyten||5.006000| sv_pvbyte||5.006000| sv_pvn_force_flags|5.007002||p sv_pvn_force||| sv_pvn_nomg|5.007003||p sv_pvn||| sv_pvutf8n_force||5.006000| sv_pvutf8n||5.006000| sv_pvutf8||5.006000| sv_pv||5.006000| sv_recode_to_utf8||5.007003| sv_reftype||| sv_release_COW||| sv_replace||| sv_report_used||| sv_reset||| sv_rvweaken||5.006000| sv_setiv_mg|5.004050||p sv_setiv||| sv_setnv_mg|5.006000||p sv_setnv||| sv_setpv_mg|5.004050||p sv_setpvf_mg_nocontext|||pvn sv_setpvf_mg|5.006000|5.004000|pv sv_setpvf_nocontext|||vn sv_setpvf||5.004000|v sv_setpviv_mg||5.008001| sv_setpviv||5.008001| sv_setpvn_mg|5.004050||p sv_setpvn||| sv_setpvs|5.009004||p sv_setpv||| sv_setref_iv||| sv_setref_nv||| sv_setref_pvn||| sv_setref_pv||| sv_setref_uv||5.007001| sv_setsv_cow||| sv_setsv_flags||5.007002| sv_setsv_mg|5.004050||p sv_setsv_nomg|5.007002||p sv_setsv||| sv_setuv_mg|5.004050||p sv_setuv|5.004000||p sv_tainted||5.004000| sv_taint||5.004000| sv_true||5.005000| sv_unglob||| sv_uni_display||5.007003| sv_unmagic||| sv_unref_flags||5.007001| sv_unref||| sv_untaint||5.004000| sv_upgrade||| sv_usepvn_flags||5.009004| sv_usepvn_mg|5.004050||p sv_usepvn||| sv_utf8_decode||5.006000| sv_utf8_downgrade||5.006000| sv_utf8_encode||5.006000| sv_utf8_upgrade_flags||5.007002| sv_utf8_upgrade||5.007001| sv_uv|5.005000||p 
sv_vcatpvf_mg|5.006000|5.004000|p sv_vcatpvfn||5.004000| sv_vcatpvf|5.006000|5.004000|p sv_vsetpvf_mg|5.006000|5.004000|p sv_vsetpvfn||5.004000| sv_vsetpvf|5.006000|5.004000|p sv_xmlpeek||| svtype||| swallow_bom||| swap_match_buff||| swash_fetch||5.007002| swash_get||| swash_init||5.006000| sys_intern_clear||| sys_intern_dup||| sys_intern_init||| taint_env||| taint_proper||| tmps_grow||5.006000| toLOWER||| toUPPER||| to_byte_substr||| to_uni_fold||5.007003| to_uni_lower_lc||5.006000| to_uni_lower||5.007003| to_uni_title_lc||5.006000| to_uni_title||5.007003| to_uni_upper_lc||5.006000| to_uni_upper||5.007003| to_utf8_case||5.007003| to_utf8_fold||5.007003| to_utf8_lower||5.007003| to_utf8_substr||| to_utf8_title||5.007003| to_utf8_upper||5.007003| token_free||| token_getmad||| tokenize_use||| tokeq||| tokereport||| too_few_arguments||| too_many_arguments||| uiv_2buf|||n unlnk||| unpack_rec||| unpack_str||5.007003| unpackstring||5.008001| unshare_hek_or_pvn||| unshare_hek||| unsharepvn||5.004000| unwind_handler_stack||| update_debugger_info||| upg_version||5.009005| usage||| utf16_to_utf8_reversed||5.006001| utf16_to_utf8||5.006001| utf8_distance||5.006000| utf8_hop||5.006000| utf8_length||5.007001| utf8_mg_pos_cache_update||| utf8_to_bytes||5.006001| utf8_to_uvchr||5.007001| utf8_to_uvuni||5.007001| utf8n_to_uvchr||| utf8n_to_uvuni||5.007001| utilize||| uvchr_to_utf8_flags||5.007003| uvchr_to_utf8||| uvuni_to_utf8_flags||5.007003| uvuni_to_utf8||5.007001| validate_suid||| varname||| vcmp||5.009000| vcroak||5.006000| vdeb||5.007003| vdie_common||| vdie_croak_common||| vdie||| vform||5.006000| visit||| vivify_defelem||| vivify_ref||| vload_module|5.006000||p vmess||5.006000| vnewSVpvf|5.006000|5.004000|p vnormal||5.009002| vnumify||5.009000| vstringify||5.009000| vverify||5.009003| vwarner||5.006000| vwarn||5.006000| wait4pid||| warn_nocontext|||vn warner_nocontext|||vn warner|5.006000|5.004000|pv warn|||v watch||| whichsig||| write_no_mem||| write_to_stderr||| 
xmldump_all||| xmldump_attr||| xmldump_eval||| xmldump_form||| xmldump_indent|||v xmldump_packsubs||| xmldump_sub||| xmldump_vindent||| yyerror||| yylex||| yyparse||| yywarn||| ); if (exists $opt{'list-unsupported'}) { my $f; for $f (sort { lc $a cmp lc $b } keys %API) { next unless $API{$f}{todo}; print "$f ", '.'x(40-length($f)), " ", format_version($API{$f}{todo}), "\n"; } exit 0; } # Scan for possible replacement candidates my(%replace, %need, %hints, %warnings, %depends); my $replace = 0; my($hint, $define, $function); sub find_api { my $code = shift; $code =~ s{ / (?: \*[^*]*\*+(?:[^$ccs][^*]*\*+)* / | /[^\r\n]*) | "[^"\\]*(?:\\.[^"\\]*)*" | '[^'\\]*(?:\\.[^'\\]*)*' }{}egsx; grep { exists $API{$_} } $code =~ /(\w+)/mg; } while (<DATA>) { if ($hint) { my $h = $hint->[0] eq 'Hint' ? \%hints : \%warnings; if (m{^\s*\*\s(.*?)\s*$}) { for (@{$hint->[1]}) { $h->{$_} ||= ''; # suppress warning with older perls $h->{$_} .= "$1\n"; } } else { undef $hint } } $hint = [$1, [split /,?\s+/, $2]] if m{^\s*$rccs\s+(Hint|Warning):\s+(\w+(?:,?\s+\w+)*)\s*$}; if ($define) { if ($define->[1] =~ /\\$/) { $define->[1] .= $_; } else { if (exists $API{$define->[0]} && $define->[1] !~ /^DPPP_\(/) { my @n = find_api($define->[1]); push @{$depends{$define->[0]}}, @n if @n } undef $define; } } $define = [$1, $2] if m{^\s*#\s*define\s+(\w+)(?:\([^)]*\))?\s+(.*)}; if ($function) { if (/^}/) { if (exists $API{$function->[0]}) { my @n = find_api($function->[1]); push @{$depends{$function->[0]}}, @n if @n } undef $define; } else { $function->[1] .= $_; } } $function = [$1, ''] if m{^DPPP_\(my_(\w+)\)}; $replace = $1 if m{^\s*$rccs\s+Replace:\s+(\d+)\s+$rcce\s*$}; $replace{$2} = $1 if $replace and m{^\s*#\s*define\s+(\w+)(?:\([^)]*\))?\s+(\w+)}; $replace{$2} = $1 if m{^\s*#\s*define\s+(\w+)(?:\([^)]*\))?\s+(\w+).*$rccs\s+Replace\s+$rcce}; $replace{$1} = $2 if m{^\s*$rccs\s+Replace (\w+) with (\w+)\s+$rcce\s*$}; if (m{^\s*$rccs\s+(\w+)\s+depends\s+on\s+(\w+(\s*,\s*\w+)*)\s+$rcce\s*$}) { push 
@{$depends{$1}}, map { s/\s+//g; $_ } split /,/, $2; } $need{$1} = 1 if m{^#if\s+defined\(NEED_(\w+)(?:_GLOBAL)?\)}; } for (values %depends) { my %s; $_ = [sort grep !$s{$_}++, @$_]; } if (exists $opt{'api-info'}) { my $f; my $count = 0; my $match = $opt{'api-info'} =~ m!^/(.*)/$! ? $1 : "^\Q$opt{'api-info'}\E\$"; for $f (sort { lc $a cmp lc $b } keys %API) { next unless $f =~ /$match/; print "\n=== $f ===\n\n"; my $info = 0; if ($API{$f}{base} || $API{$f}{todo}) { my $base = format_version($API{$f}{base} || $API{$f}{todo}); print "Supported at least starting from perl-$base.\n"; $info++; } if ($API{$f}{provided}) { my $todo = $API{$f}{todo} ? format_version($API{$f}{todo}) : "5.003"; print "Support by $ppport provided back to perl-$todo.\n"; print "Support needs to be explicitly requested by NEED_$f.\n" if exists $need{$f}; print "Depends on: ", join(', ', @{$depends{$f}}), ".\n" if exists $depends{$f}; print "\n$hints{$f}" if exists $hints{$f}; print "\nWARNING:\n$warnings{$f}" if exists $warnings{$f}; $info++; } print "No portability information available.\n" unless $info; $count++; } $count or print "Found no API matching '$opt{'api-info'}'."; print "\n"; exit 0; } if (exists $opt{'list-provided'}) { my $f; for $f (sort { lc $a cmp lc $b } keys %API) { next unless $API{$f}{provided}; my @flags; push @flags, 'explicit' if exists $need{$f}; push @flags, 'depend' if exists $depends{$f}; push @flags, 'hint' if exists $hints{$f}; push @flags, 'warning' if exists $warnings{$f}; my $flags = @flags ? 
' ['.join(', ', @flags).']' : ''; print "$f$flags\n"; } exit 0; } my @files; my @srcext = qw( .xs .c .h .cc .cpp -c.inc -xs.inc ); my $srcext = join '|', map { quotemeta $_ } @srcext; if (@ARGV) { my %seen; for (@ARGV) { if (-e) { if (-f) { push @files, $_ unless $seen{$_}++; } else { warn "'$_' is not a file.\n" } } else { my @new = grep { -f } glob $_ or warn "'$_' does not exist.\n"; push @files, grep { !$seen{$_}++ } @new; } } } else { eval { require File::Find; File::Find::find(sub { $File::Find::name =~ /($srcext)$/i and push @files, $File::Find::name; }, '.'); }; if ($@) { @files = map { glob "*$_" } @srcext; } } if (!@ARGV || $opt{filter}) { my(@in, @out); my %xsc = map { /(.*)\.xs$/ ? ("$1.c" => 1, "$1.cc" => 1) : () } @files; for (@files) { my $out = exists $xsc{$_} || /\b\Q$ppport\E$/i || !/($srcext)$/i; push @{ $out ? \@out : \@in }, $_; } if (@ARGV && @out) { warning("Skipping the following files (use --nofilter to avoid this):\n| ", join "\n| ", @out); } @files = @in; } die "No input files given!\n" unless @files; my(%files, %global, %revreplace); %revreplace = reverse %replace; my $filename; my $patch_opened = 0; for $filename (@files) { unless (open IN, "<$filename") { warn "Unable to read from $filename: $!\n"; next; } info("Scanning $filename ..."); my $c = do { local $/; <IN> }; close IN; my %file = (orig => $c, changes => 0); # Temporarily remove C/XS comments and strings from the code my @ccom; $c =~ s{ ( ^$HS*\#$HS*include\b[^\r\n]+\b(?:\Q$ppport\E|XSUB\.h)\b[^\r\n]* | ^$HS*\#$HS*(?:define|elif|if(?:def)?)\b[^\r\n]* ) | ( ^$HS*\#[^\r\n]* | "[^"\\]*(?:\\.[^"\\]*)*" | '[^'\\]*(?:\\.[^'\\]*)*' | / (?: \*[^*]*\*+(?:[^$ccs][^*]*\*+)* / | /[^\r\n]* ) ) }{ defined $2 and push @ccom, $2; defined $1 ? 
$1 : "$ccs$#ccom$cce" }mgsex; $file{ccom} = \@ccom; $file{code} = $c; $file{has_inc_ppport} = $c =~ /^$HS*#$HS*include[^\r\n]+\b\Q$ppport\E\b/m; my $func; for $func (keys %API) { my $match = $func; $match .= "|$revreplace{$func}" if exists $revreplace{$func}; if ($c =~ /\b(?:Perl_)?($match)\b/) { $file{uses_replace}{$1}++ if exists $revreplace{$func} && $1 eq $revreplace{$func}; $file{uses_Perl}{$func}++ if $c =~ /\bPerl_$func\b/; if (exists $API{$func}{provided}) { $file{uses_provided}{$func}++; if (!exists $API{$func}{base} || $API{$func}{base} > $opt{'compat-version'}) { $file{uses}{$func}++; my @deps = rec_depend($func); if (@deps) { $file{uses_deps}{$func} = \@deps; for (@deps) { $file{uses}{$_} = 0 unless exists $file{uses}{$_}; } } for ($func, @deps) { $file{needs}{$_} = 'static' if exists $need{$_}; } } } if (exists $API{$func}{todo} && $API{$func}{todo} > $opt{'compat-version'}) { if ($c =~ /\b$func\b/) { $file{uses_todo}{$func}++; } } } } while ($c =~ /^$HS*#$HS*define$HS+(NEED_(\w+?)(_GLOBAL)?)\b/mg) { if (exists $need{$2}) { $file{defined $3 ? 
'needed_global' : 'needed_static'}{$2}++; } else { warning("Possibly wrong #define $1 in $filename") } } for (qw(uses needs uses_todo needed_global needed_static)) { for $func (keys %{$file{$_}}) { push @{$global{$_}{$func}}, $filename; } } $files{$filename} = \%file; } # Globally resolve NEED_'s my $need; for $need (keys %{$global{needs}}) { if (@{$global{needs}{$need}} > 1) { my @targets = @{$global{needs}{$need}}; my @t = grep $files{$_}{needed_global}{$need}, @targets; @targets = @t if @t; @t = grep /\.xs$/i, @targets; @targets = @t if @t; my $target = shift @targets; $files{$target}{needs}{$need} = 'global'; for (@{$global{needs}{$need}}) { $files{$_}{needs}{$need} = 'extern' if $_ ne $target; } } } for $filename (@files) { exists $files{$filename} or next; info("=== Analyzing $filename ==="); my %file = %{$files{$filename}}; my $func; my $c = $file{code}; my $warnings = 0; for $func (sort keys %{$file{uses_Perl}}) { if ($API{$func}{varargs}) { unless ($API{$func}{nothxarg}) { my $changes = ($c =~ s{\b(Perl_$func\s*\(\s*)(?!aTHX_?)(\)|[^\s)]*\))} { $1 . ($2 eq ')' ? 'aTHX' : 'aTHX_ ') . 
$2 }ge); if ($changes) { warning("Doesn't pass interpreter argument aTHX to Perl_$func"); $file{changes} += $changes; } } } else { warning("Uses Perl_$func instead of $func"); $file{changes} += ($c =~ s{\bPerl_$func(\s*)\((\s*aTHX_?)?\s*} {$func$1(}g); } } for $func (sort keys %{$file{uses_replace}}) { warning("Uses $func instead of $replace{$func}"); $file{changes} += ($c =~ s/\b$func\b/$replace{$func}/g); } for $func (sort keys %{$file{uses_provided}}) { if ($file{uses}{$func}) { if (exists $file{uses_deps}{$func}) { diag("Uses $func, which depends on ", join(', ', @{$file{uses_deps}{$func}})); } else { diag("Uses $func"); } } $warnings += hint($func); } unless ($opt{quiet}) { for $func (sort keys %{$file{uses_todo}}) { print "*** WARNING: Uses $func, which may not be portable below perl ", format_version($API{$func}{todo}), ", even with '$ppport'\n"; $warnings++; } } for $func (sort keys %{$file{needed_static}}) { my $message = ''; if (not exists $file{uses}{$func}) { $message = "No need to define NEED_$func if $func is never used"; } elsif (exists $file{needs}{$func} && $file{needs}{$func} ne 'static') { $message = "No need to define NEED_$func when already needed globally"; } if ($message) { diag($message); $file{changes} += ($c =~ s/^$HS*#$HS*define$HS+NEED_$func\b.*$LF//mg); } } for $func (sort keys %{$file{needed_global}}) { my $message = ''; if (not exists $global{uses}{$func}) { $message = "No need to define NEED_${func}_GLOBAL if $func is never used"; } elsif (exists $file{needs}{$func}) { if ($file{needs}{$func} eq 'extern') { $message = "No need to define NEED_${func}_GLOBAL when already needed globally"; } elsif ($file{needs}{$func} eq 'static') { $message = "No need to define NEED_${func}_GLOBAL when only used in this file"; } } if ($message) { diag($message); $file{changes} += ($c =~ s/^$HS*#$HS*define$HS+NEED_${func}_GLOBAL\b.*$LF//mg); } } $file{needs_inc_ppport} = keys %{$file{uses}}; if ($file{needs_inc_ppport}) { my $pp = ''; for $func (sort 
keys %{$file{needs}}) { my $type = $file{needs}{$func}; next if $type eq 'extern'; my $suffix = $type eq 'global' ? '_GLOBAL' : ''; unless (exists $file{"needed_$type"}{$func}) { if ($type eq 'global') { diag("Files [@{$global{needs}{$func}}] need $func, adding global request"); } else { diag("File needs $func, adding static request"); } $pp .= "#define NEED_$func$suffix\n"; } } if ($pp && ($c =~ s/^(?=$HS*#$HS*define$HS+NEED_\w+)/$pp/m)) { $pp = ''; $file{changes}++; } unless ($file{has_inc_ppport}) { diag("Needs to include '$ppport'"); $pp .= qq(#include "$ppport"\n) } if ($pp) { $file{changes} += ($c =~ s/^($HS*#$HS*define$HS+NEED_\w+.*?)^/$1$pp/ms) || ($c =~ s/^(?=$HS*#$HS*include.*\Q$ppport\E)/$pp/m) || ($c =~ s/^($HS*#$HS*include.*XSUB.*\s*?)^/$1$pp/m) || ($c =~ s/^/$pp/); } } else { if ($file{has_inc_ppport}) { diag("No need to include '$ppport'"); $file{changes} += ($c =~ s/^$HS*?#$HS*include.*\Q$ppport\E.*?$LF//m); } } # put back in our C comments my $ix; my $cppc = 0; my @ccom = @{$file{ccom}}; for $ix (0 .. $#ccom) { if (!$opt{cplusplus} && $ccom[$ix] =~ s!^//!!) { $cppc++; $file{changes} += $c =~ s/$rccs$ix$rcce/$ccs$ccom[$ix] $cce/; } else { $c =~ s/$rccs$ix$rcce/$ccom[$ix]/; } } if ($cppc) { my $s = $cppc != 1 ? 's' : ''; warning("Uses $cppc C++ style comment$s, which is not portable"); } my $s = $warnings != 1 ? 's' : ''; my $warn = $warnings ? 
" ($warnings warning$s)" : ''; info("Analysis completed$warn"); if ($file{changes}) { if (exists $opt{copy}) { my $newfile = "$filename$opt{copy}"; if (-e $newfile) { error("'$newfile' already exists, refusing to write copy of '$filename'"); } else { local *F; if (open F, ">$newfile") { info("Writing copy of '$filename' with changes to '$newfile'"); print F $c; close F; } else { error("Cannot open '$newfile' for writing: $!"); } } } elsif (exists $opt{patch} || $opt{changes}) { if (exists $opt{patch}) { unless ($patch_opened) { if (open PATCH, ">$opt{patch}") { $patch_opened = 1; } else { error("Cannot open '$opt{patch}' for writing: $!"); delete $opt{patch}; $opt{changes} = 1; goto fallback; } } mydiff(\*PATCH, $filename, $c); } else { fallback: info("Suggested changes:"); mydiff(\*STDOUT, $filename, $c); } } else { my $s = $file{changes} == 1 ? '' : 's'; info("$file{changes} potentially required change$s detected"); } } else { info("Looks good"); } } close PATCH if $patch_opened; exit 0; sub try_use { eval "use @_;"; return $@ eq '' } sub mydiff { local *F = shift; my($file, $str) = @_; my $diff; if (exists $opt{diff}) { $diff = run_diff($opt{diff}, $file, $str); } if (!defined $diff and try_use('Text::Diff')) { $diff = Text::Diff::diff($file, \$str, { STYLE => 'Unified' }); $diff = <<HEADER . $diff;
--- $file
+++ $file.patched
HEADER
} if (!defined $diff and try_use('Algorithm::Diff')) {
# TODO
} if (!defined $diff) { $diff = run_diff('diff -u', $file, $str); } if (!defined $diff) { $diff = run_diff('diff', $file, $str); } if (!defined $diff) { error("Cannot generate a diff. Please install Text::Diff or use --copy."); return; } print F $diff; } sub run_diff { my($prog, $file, $str) = @_; my $tmp = 'dppptemp'; my $suf = 'aaa'; my $diff = ''; local *F; while (-e "$tmp.$suf") { $suf++ } $tmp = "$tmp.$suf"; if (open F, ">$tmp") { print F $str; close F; if (open F, "$prog $file $tmp |") { while (<F>) { s/\Q$tmp\E/$file.patched/; $diff .= $_; } close F; unlink $tmp; return $diff; } unlink $tmp; } else { error("Cannot open '$tmp' for writing: $!"); } return undef; } sub rec_depend { my($func, $seen) = @_; return () unless exists $depends{$func}; $seen = {%{$seen||{}}}; return () if $seen->{$func}++; my %s; grep !$s{$_}++, map { ($_, rec_depend($_, $seen)) } @{$depends{$func}}; } sub parse_version { my $ver = shift; if ($ver =~ /^(\d+)\.(\d+)\.(\d+)$/) { return ($1, $2, $3); } elsif ($ver !~ /^\d+\.[\d_]+$/) { die "cannot parse version '$ver'\n"; } $ver =~ s/_//g; $ver =~ s/$/000000/; my($r,$v,$s) = $ver =~ /(\d+)\.(\d{3})(\d{3})/; $v = int $v; $s = int $s; if ($r < 5 || ($r == 5 && $v < 6)) { if ($s % 10) { die "cannot parse version '$ver'\n"; } } return ($r, $v, $s); } sub format_version { my $ver = shift; $ver =~ s/$/000000/; my($r,$v,$s) = $ver =~ /(\d+)\.(\d{3})(\d{3})/; $v = int $v; $s = int $s; if ($r < 5 || ($r == 5 && $v < 6)) { if ($s % 10) { die "invalid version '$ver'\n"; } $s /= 10; $ver = sprintf "%d.%03d", $r, $v; $s > 0 and $ver .= sprintf "_%02d", $s; return $ver; } return sprintf "%d.%d.%d", $r, $v, $s; } sub info { $opt{quiet} and return; print @_, "\n"; } sub diag { $opt{quiet} and return; $opt{diag} and print @_, "\n"; } sub warning { $opt{quiet} and return; print "*** ", @_, "\n"; } sub error { print "*** ERROR: ", @_, "\n"; } my %given_hints; my %given_warnings; sub hint { $opt{quiet} and return; my $func = shift; my $rv = 0; if (exists $warnings{$func} && !$given_warnings{$func}++) { my $warn = $warnings{$func}; $warn =~ s!^!*** !mg; print "*** WARNING: $func\n", $warn; $rv++; } if ($opt{hints} && exists $hints{$func} && !$given_hints{$func}++) { my $hint = $hints{$func}; $hint =~ s/^/   /mg; print "   --- hint for $func ---\n", $hint; } $rv; } sub usage { my($usage) = do { local(@ARGV,$/)=($0); <> } =~ /^=head\d$HS+SYNOPSIS\s*^(.*?)\s*^=/ms; my %M = ( 'I' => '*' ); 
$usage =~ s/^\s*perl\s+\S+/$^X $0/; $usage =~ s/([A-Z])<([^>]+)>/$M{$1}$2$M{$1}/g; print <<ENDUSAGE;

Usage: $usage

See perldoc $0 for details.

ENDUSAGE
exit 2; } sub strip { my $self = do { local(@ARGV, $/) = ($0); <> }; my($copy) = $self =~ /^=head\d\s+COPYRIGHT\s*^(.*?)^=\w+/ms; $copy =~ s/^(?=\S+)/ /gms; $self =~ s/^$HS+Do NOT edit.*?(?=^-)/$copy/ms; $self =~ s/^SKIP.*(?=^__DATA__)/SKIP if (\@ARGV && \$ARGV[0] eq '--unstrip') { eval { require Devel::PPPort }; \$@ and die "Cannot require Devel::PPPort, please install.\\n"; if (\$Devel::PPPort::VERSION < $VERSION) { die "$0 was originally generated with Devel::PPPort $VERSION.\\n" . "Your Devel::PPPort is only version \$Devel::PPPort::VERSION.\\n" . "Please install a newer version, or --unstrip will not work.\\n"; } Devel::PPPort::WriteFile(\$0); exit 0; } print <<END;

Sorry, but this is a stripped version of \$0.

To be able to use its original script and doc functionality, please try to regenerate this file using:

  \$^X \$0 --unstrip

END
/ms; my($pl, $c) = $self =~ /(.*^__DATA__)\r?\n(.*)/ms; open OUT, ">$0" or die "cannot strip $0: $!\n"; print OUT "$pl$c\n"; exit 0; } __DATA__ */ #ifndef _P_P_PORTABILITY_H_ #define _P_P_PORTABILITY_H_ #ifndef DPPP_NAMESPACE # define DPPP_NAMESPACE DPPP_ #endif #define DPPP_CAT2(x,y) CAT2(x,y) #define DPPP_(name) DPPP_CAT2(DPPP_NAMESPACE, name) #ifndef PERL_REVISION # if !defined(__PATCHLEVEL_H_INCLUDED__) && !(defined(PATCHLEVEL) && defined(SUBVERSION)) # define PERL_PATCHLEVEL_H_IMPLICIT # include <patchlevel.h> # endif # if !(defined(PERL_VERSION) || (defined(SUBVERSION) && defined(PATCHLEVEL))) # include <could_not_find_Perl_patchlevel.h> # endif # ifndef PERL_REVISION # define PERL_REVISION (5) /* Replace: 1 */ # define PERL_VERSION PATCHLEVEL # define PERL_SUBVERSION SUBVERSION /* Replace PERL_PATCHLEVEL with PERL_VERSION */ /* Replace: 0 */ # endif #endif #define _dpppDEC2BCD(dec) ((((dec)/100)<<8)|((((dec)%100)/10)<<4)|((dec)%10)) #define PERL_BCDVERSION ((_dpppDEC2BCD(PERL_REVISION)<<24)|(_dpppDEC2BCD(PERL_VERSION)<<12)|_dpppDEC2BCD(PERL_SUBVERSION)) /* It is very unlikely that anyone will try to use this with Perl 6 (or greater), but who knows. 
*/ #if PERL_REVISION != 5 # error ppport.h only works with Perl version 5 #endif /* PERL_REVISION != 5 */ #ifdef I_LIMITS # include <limits.h> #endif #ifndef PERL_UCHAR_MIN # define PERL_UCHAR_MIN ((unsigned char)0) #endif #ifndef PERL_UCHAR_MAX # ifdef UCHAR_MAX # define PERL_UCHAR_MAX ((unsigned char)UCHAR_MAX) # else # ifdef MAXUCHAR # define PERL_UCHAR_MAX ((unsigned char)MAXUCHAR) # else # define PERL_UCHAR_MAX ((unsigned char)~(unsigned)0) # endif # endif #endif #ifndef PERL_USHORT_MIN # define PERL_USHORT_MIN ((unsigned short)0) #endif #ifndef PERL_USHORT_MAX # ifdef USHORT_MAX # define PERL_USHORT_MAX ((unsigned short)USHORT_MAX) # else # ifdef MAXUSHORT # define PERL_USHORT_MAX ((unsigned short)MAXUSHORT) # else # ifdef USHRT_MAX # define PERL_USHORT_MAX ((unsigned short)USHRT_MAX) # else # define PERL_USHORT_MAX ((unsigned short)~(unsigned)0) # endif # endif # endif #endif #ifndef PERL_SHORT_MAX # ifdef SHORT_MAX # define PERL_SHORT_MAX ((short)SHORT_MAX) # else # ifdef MAXSHORT /* Often used in <values.h> */ # define PERL_SHORT_MAX ((short)MAXSHORT) # else # ifdef SHRT_MAX # define PERL_SHORT_MAX ((short)SHRT_MAX) # else # define PERL_SHORT_MAX ((short) (PERL_USHORT_MAX >> 1)) # endif # endif # endif #endif #ifndef PERL_SHORT_MIN # ifdef SHORT_MIN # define PERL_SHORT_MIN ((short)SHORT_MIN) # else # ifdef MINSHORT # define PERL_SHORT_MIN ((short)MINSHORT) # else # ifdef SHRT_MIN # define PERL_SHORT_MIN ((short)SHRT_MIN) # else # define PERL_SHORT_MIN (-PERL_SHORT_MAX - ((3 & -1) == 3)) # endif # endif # endif #endif #ifndef PERL_UINT_MAX # ifdef UINT_MAX # define PERL_UINT_MAX ((unsigned int)UINT_MAX) # else # ifdef MAXUINT # define PERL_UINT_MAX ((unsigned int)MAXUINT) # else # define PERL_UINT_MAX (~(unsigned int)0) # endif # endif #endif #ifndef PERL_UINT_MIN # define PERL_UINT_MIN ((unsigned int)0) #endif #ifndef PERL_INT_MAX # ifdef INT_MAX # define PERL_INT_MAX ((int)INT_MAX) # else # ifdef MAXINT /* Often used in <values.h> */ # define PERL_INT_MAX ((int)MAXINT) # else # define 
PERL_INT_MAX ((int)(PERL_UINT_MAX >> 1)) # endif # endif #endif #ifndef PERL_INT_MIN # ifdef INT_MIN # define PERL_INT_MIN ((int)INT_MIN) # else # ifdef MININT # define PERL_INT_MIN ((int)MININT) # else # define PERL_INT_MIN (-PERL_INT_MAX - ((3 & -1) == 3)) # endif # endif #endif #ifndef PERL_ULONG_MAX # ifdef ULONG_MAX # define PERL_ULONG_MAX ((unsigned long)ULONG_MAX) # else # ifdef MAXULONG # define PERL_ULONG_MAX ((unsigned long)MAXULONG) # else # define PERL_ULONG_MAX (~(unsigned long)0) # endif # endif #endif #ifndef PERL_ULONG_MIN # define PERL_ULONG_MIN ((unsigned long)0L) #endif #ifndef PERL_LONG_MAX # ifdef LONG_MAX # define PERL_LONG_MAX ((long)LONG_MAX) # else # ifdef MAXLONG # define PERL_LONG_MAX ((long)MAXLONG) # else # define PERL_LONG_MAX ((long) (PERL_ULONG_MAX >> 1)) # endif # endif #endif #ifndef PERL_LONG_MIN # ifdef LONG_MIN # define PERL_LONG_MIN ((long)LONG_MIN) # else # ifdef MINLONG # define PERL_LONG_MIN ((long)MINLONG) # else # define PERL_LONG_MIN (-PERL_LONG_MAX - ((3 & -1) == 3)) # endif # endif #endif #if defined(HAS_QUAD) && (defined(convex) || defined(uts)) # ifndef PERL_UQUAD_MAX # ifdef ULONGLONG_MAX # define PERL_UQUAD_MAX ((unsigned long long)ULONGLONG_MAX) # else # ifdef MAXULONGLONG # define PERL_UQUAD_MAX ((unsigned long long)MAXULONGLONG) # else # define PERL_UQUAD_MAX (~(unsigned long long)0) # endif # endif # endif # ifndef PERL_UQUAD_MIN # define PERL_UQUAD_MIN ((unsigned long long)0L) # endif # ifndef PERL_QUAD_MAX # ifdef LONGLONG_MAX # define PERL_QUAD_MAX ((long long)LONGLONG_MAX) # else # ifdef MAXLONGLONG # define PERL_QUAD_MAX ((long long)MAXLONGLONG) # else # define PERL_QUAD_MAX ((long long) (PERL_UQUAD_MAX >> 1)) # endif # endif # endif # ifndef PERL_QUAD_MIN # ifdef LONGLONG_MIN # define PERL_QUAD_MIN ((long long)LONGLONG_MIN) # else # ifdef MINLONGLONG # define PERL_QUAD_MIN ((long long)MINLONGLONG) # else # define PERL_QUAD_MIN (-PERL_QUAD_MAX - ((3 & -1) == 3)) # endif # endif # endif #endif /* This is 
based on code from 5.003 perl.h */ #ifdef HAS_QUAD # ifdef cray #ifndef IVTYPE # define IVTYPE int #endif #ifndef IV_MIN # define IV_MIN PERL_INT_MIN #endif #ifndef IV_MAX # define IV_MAX PERL_INT_MAX #endif #ifndef UV_MIN # define UV_MIN PERL_UINT_MIN #endif #ifndef UV_MAX # define UV_MAX PERL_UINT_MAX #endif # ifdef INTSIZE #ifndef IVSIZE # define IVSIZE INTSIZE #endif # endif # else # if defined(convex) || defined(uts) #ifndef IVTYPE # define IVTYPE long long #endif #ifndef IV_MIN # define IV_MIN PERL_QUAD_MIN #endif #ifndef IV_MAX # define IV_MAX PERL_QUAD_MAX #endif #ifndef UV_MIN # define UV_MIN PERL_UQUAD_MIN #endif #ifndef UV_MAX # define UV_MAX PERL_UQUAD_MAX #endif # ifdef LONGLONGSIZE #ifndef IVSIZE # define IVSIZE LONGLONGSIZE #endif # endif # else #ifndef IVTYPE # define IVTYPE long #endif #ifndef IV_MIN # define IV_MIN PERL_LONG_MIN #endif #ifndef IV_MAX # define IV_MAX PERL_LONG_MAX #endif #ifndef UV_MIN # define UV_MIN PERL_ULONG_MIN #endif #ifndef UV_MAX # define UV_MAX PERL_ULONG_MAX #endif # ifdef LONGSIZE #ifndef IVSIZE # define IVSIZE LONGSIZE #endif # endif # endif # endif #ifndef IVSIZE # define IVSIZE 8 #endif #ifndef PERL_QUAD_MIN # define PERL_QUAD_MIN IV_MIN #endif #ifndef PERL_QUAD_MAX # define PERL_QUAD_MAX IV_MAX #endif #ifndef PERL_UQUAD_MIN # define PERL_UQUAD_MIN UV_MIN #endif #ifndef PERL_UQUAD_MAX # define PERL_UQUAD_MAX UV_MAX #endif #else #ifndef IVTYPE # define IVTYPE long #endif #ifndef IV_MIN # define IV_MIN PERL_LONG_MIN #endif #ifndef IV_MAX # define IV_MAX PERL_LONG_MAX #endif #ifndef UV_MIN # define UV_MIN PERL_ULONG_MIN #endif #ifndef UV_MAX # define UV_MAX PERL_ULONG_MAX #endif #endif #ifndef IVSIZE # ifdef LONGSIZE # define IVSIZE LONGSIZE # else # define IVSIZE 4 /* A bold guess, but the best we can make. 
*/ # endif #endif #ifndef UVTYPE # define UVTYPE unsigned IVTYPE #endif #ifndef UVSIZE # define UVSIZE IVSIZE #endif #ifndef sv_setuv # define sv_setuv(sv, uv) \ STMT_START { \ UV TeMpUv = uv; \ if (TeMpUv <= IV_MAX) \ sv_setiv(sv, TeMpUv); \ else \ sv_setnv(sv, (double)TeMpUv); \ } STMT_END #endif #ifndef newSVuv # define newSVuv(uv) ((uv) <= IV_MAX ? newSViv((IV)uv) : newSVnv((NV)uv)) #endif #ifndef sv_2uv # define sv_2uv(sv) ((PL_Sv = (sv)), (UV) (SvNOK(PL_Sv) ? SvNV(PL_Sv) : sv_2nv(PL_Sv))) #endif #ifndef SvUVX # define SvUVX(sv) ((UV)SvIVX(sv)) #endif #ifndef SvUVXx # define SvUVXx(sv) SvUVX(sv) #endif #ifndef SvUV # define SvUV(sv) (SvIOK(sv) ? SvUVX(sv) : sv_2uv(sv)) #endif #ifndef SvUVx # define SvUVx(sv) ((PL_Sv = (sv)), SvUV(PL_Sv)) #endif /* Hint: sv_uv * Always use the SvUVx() macro instead of sv_uv(). */ #ifndef sv_uv # define sv_uv(sv) SvUVx(sv) #endif #if !defined(SvUOK) && defined(SvIOK_UV) # define SvUOK(sv) SvIOK_UV(sv) #endif #ifndef XST_mUV # define XST_mUV(i,v) (ST(i) = sv_2mortal(newSVuv(v)) ) #endif #ifndef XSRETURN_UV # define XSRETURN_UV(v) STMT_START { XST_mUV(0,v); XSRETURN(1); } STMT_END #endif #ifndef PUSHu # define PUSHu(u) STMT_START { sv_setuv(TARG, (UV)(u)); PUSHTARG; } STMT_END #endif #ifndef XPUSHu # define XPUSHu(u) STMT_START { sv_setuv(TARG, (UV)(u)); XPUSHTARG; } STMT_END #endif #ifdef HAS_MEMCMP #ifndef memNE # define memNE(s1,s2,l) (memcmp(s1,s2,l)) #endif #ifndef memEQ # define memEQ(s1,s2,l) (!memcmp(s1,s2,l)) #endif #else #ifndef memNE # define memNE(s1,s2,l) (bcmp(s1,s2,l)) #endif #ifndef memEQ # define memEQ(s1,s2,l) (!bcmp(s1,s2,l)) #endif #endif #ifndef MoveD # define MoveD(s,d,n,t) memmove((char*)(d),(char*)(s), (n) * sizeof(t)) #endif #ifndef CopyD # define CopyD(s,d,n,t) memcpy((char*)(d),(char*)(s), (n) * sizeof(t)) #endif #ifdef HAS_MEMSET #ifndef ZeroD # define ZeroD(d,n,t) memzero((char*)(d), (n) * sizeof(t)) #endif #else #ifndef ZeroD # define ZeroD(d,n,t) ((void)memzero((char*)(d), (n) * sizeof(t)), d) #endif 
#endif #ifndef PoisonWith # define PoisonWith(d,n,t,b) (void)memset((char*)(d), (U8)(b), (n) * sizeof(t)) #endif #ifndef PoisonNew # define PoisonNew(d,n,t) PoisonWith(d,n,t,0xAB) #endif #ifndef PoisonFree # define PoisonFree(d,n,t) PoisonWith(d,n,t,0xEF) #endif #ifndef Poison # define Poison(d,n,t) PoisonFree(d,n,t) #endif #ifndef Newx # define Newx(v,n,t) New(0,v,n,t) #endif #ifndef Newxc # define Newxc(v,n,t,c) Newc(0,v,n,t,c) #endif #ifndef Newxz # define Newxz(v,n,t) Newz(0,v,n,t) #endif #ifndef PERL_UNUSED_DECL # ifdef HASATTRIBUTE # if (defined(__GNUC__) && defined(__cplusplus)) || defined(__INTEL_COMPILER) # define PERL_UNUSED_DECL # else # define PERL_UNUSED_DECL __attribute__((unused)) # endif # else # define PERL_UNUSED_DECL # endif #endif #ifndef PERL_UNUSED_ARG # if defined(lint) && defined(S_SPLINT_S) /* www.splint.org */ # include <note.h> # define PERL_UNUSED_ARG(x) NOTE(ARGUNUSED(x)) # else # define PERL_UNUSED_ARG(x) ((void)x) # endif #endif #ifndef PERL_UNUSED_VAR # define PERL_UNUSED_VAR(x) ((void)x) #endif #ifndef PERL_UNUSED_CONTEXT # ifdef USE_ITHREADS # define PERL_UNUSED_CONTEXT PERL_UNUSED_ARG(my_perl) # else # define PERL_UNUSED_CONTEXT # endif #endif #ifndef NOOP # define NOOP /*EMPTY*/(void)0 #endif #ifndef dNOOP # define dNOOP extern int /*@unused@*/ Perl___notused PERL_UNUSED_DECL #endif #ifndef NVTYPE # if defined(USE_LONG_DOUBLE) && defined(HAS_LONG_DOUBLE) # define NVTYPE long double # else # define NVTYPE double # endif typedef NVTYPE NV; #endif #ifndef INT2PTR # if (IVSIZE == PTRSIZE) && (UVSIZE == PTRSIZE) # define PTRV UV # define INT2PTR(any,d) (any)(d) # else # if PTRSIZE == LONGSIZE # define PTRV unsigned long # else # define PTRV unsigned # endif # define INT2PTR(any,d) (any)(PTRV)(d) # endif # define NUM2PTR(any,d) (any)(PTRV)(d) # define PTR2IV(p) INT2PTR(IV,p) # define PTR2UV(p) INT2PTR(UV,p) # define PTR2NV(p) NUM2PTR(NV,p) # if PTRSIZE == LONGSIZE # define PTR2ul(p) (unsigned long)(p) # else # define PTR2ul(p) INT2PTR(unsigned 
long,p) # endif #endif /* !INT2PTR */ #undef START_EXTERN_C #undef END_EXTERN_C #undef EXTERN_C #ifdef __cplusplus # define START_EXTERN_C extern "C" { # define END_EXTERN_C } # define EXTERN_C extern "C" #else # define START_EXTERN_C # define END_EXTERN_C # define EXTERN_C extern #endif #if defined(PERL_GCC_PEDANTIC) # ifndef PERL_GCC_BRACE_GROUPS_FORBIDDEN # define PERL_GCC_BRACE_GROUPS_FORBIDDEN # endif #endif #if defined(__GNUC__) && !defined(PERL_GCC_BRACE_GROUPS_FORBIDDEN) && !defined(__cplusplus) # ifndef PERL_USE_GCC_BRACE_GROUPS # define PERL_USE_GCC_BRACE_GROUPS # endif #endif #undef STMT_START #undef STMT_END #ifdef PERL_USE_GCC_BRACE_GROUPS # define STMT_START (void)( /* gcc supports ``({ STATEMENTS; })'' */ # define STMT_END ) #else # if defined(VOIDFLAGS) && (VOIDFLAGS) && (defined(sun) || defined(__sun__)) && !defined(__GNUC__) # define STMT_START if (1) # define STMT_END else (void)0 # else # define STMT_START do # define STMT_END while (0) # endif #endif #ifndef boolSV # define boolSV(b) ((b) ? &PL_sv_yes : &PL_sv_no) #endif /* DEFSV appears first in 5.004_56 */ #ifndef DEFSV # define DEFSV GvSV(PL_defgv) #endif #ifndef SAVE_DEFSV # define SAVE_DEFSV SAVESPTR(GvSV(PL_defgv)) #endif /* Older perls (<=5.003) lack AvFILLp */ #ifndef AvFILLp # define AvFILLp AvFILL #endif #ifndef ERRSV # define ERRSV get_sv("@",FALSE) #endif #ifndef newSVpvn # define newSVpvn(data,len) ((data) \ ? ((len) ? newSVpv((data), (len)) : newSVpv("", 0)) \ : newSV(0)) #endif /* Hint: gv_stashpvn * This function's backport doesn't support the length parameter, but * rather ignores it. Portability can only be ensured if the length * parameter is used for speed reasons, but the length can always be * correctly computed from the string argument. 
*/ #ifndef gv_stashpvn # define gv_stashpvn(str,len,create) gv_stashpv(str,create) #endif /* Replace: 1 */ #ifndef get_cv # define get_cv perl_get_cv #endif #ifndef get_sv # define get_sv perl_get_sv #endif #ifndef get_av # define get_av perl_get_av #endif #ifndef get_hv # define get_hv perl_get_hv #endif /* Replace: 0 */ #ifndef dUNDERBAR # define dUNDERBAR dNOOP #endif #ifndef UNDERBAR # define UNDERBAR DEFSV #endif #ifndef dAX # define dAX I32 ax = MARK - PL_stack_base + 1 #endif #ifndef dITEMS # define dITEMS I32 items = SP - MARK #endif #ifndef dXSTARG # define dXSTARG SV * targ = sv_newmortal() #endif #ifndef dAXMARK # define dAXMARK I32 ax = POPMARK; \ register SV ** const mark = PL_stack_base + ax++ #endif #ifndef XSprePUSH # define XSprePUSH (sp = PL_stack_base + ax - 1) #endif #if (PERL_BCDVERSION < 0x5005000) # undef XSRETURN # define XSRETURN(off) \ STMT_START { \ PL_stack_sp = PL_stack_base + ax + ((off) - 1); \ return; \ } STMT_END #endif #ifndef PERL_ABS # define PERL_ABS(x) ((x) < 0 ? 
-(x) : (x)) #endif #ifndef dVAR # define dVAR dNOOP #endif #ifndef SVf # define SVf "_" #endif #ifndef UTF8_MAXBYTES # define UTF8_MAXBYTES UTF8_MAXLEN #endif #ifndef PERL_HASH # define PERL_HASH(hash,str,len) \ STMT_START { \ const char *s_PeRlHaSh = str; \ I32 i_PeRlHaSh = len; \ U32 hash_PeRlHaSh = 0; \ while (i_PeRlHaSh--) \ hash_PeRlHaSh = hash_PeRlHaSh * 33 + *s_PeRlHaSh++; \ (hash) = hash_PeRlHaSh; \ } STMT_END #endif #ifndef PERL_SIGNALS_UNSAFE_FLAG #define PERL_SIGNALS_UNSAFE_FLAG 0x0001 #if (PERL_BCDVERSION < 0x5008000) # define D_PPP_PERL_SIGNALS_INIT PERL_SIGNALS_UNSAFE_FLAG #else # define D_PPP_PERL_SIGNALS_INIT 0 #endif #if defined(NEED_PL_signals) static U32 DPPP_(my_PL_signals) = D_PPP_PERL_SIGNALS_INIT; #elif defined(NEED_PL_signals_GLOBAL) U32 DPPP_(my_PL_signals) = D_PPP_PERL_SIGNALS_INIT; #else extern U32 DPPP_(my_PL_signals); #endif #define PL_signals DPPP_(my_PL_signals) #endif /* Hint: PL_ppaddr * Calling an op via PL_ppaddr requires passing a context argument * for threaded builds. Since the context argument is different for * 5.005 perls, you can use aTHXR (supplied by ppport.h), which will * automatically be defined as the correct argument. 
*/ #if (PERL_BCDVERSION <= 0x5005005) /* Replace: 1 */ # define PL_ppaddr ppaddr # define PL_no_modify no_modify /* Replace: 0 */ #endif #if (PERL_BCDVERSION <= 0x5004005) /* Replace: 1 */ # define PL_DBsignal DBsignal # define PL_DBsingle DBsingle # define PL_DBsub DBsub # define PL_DBtrace DBtrace # define PL_Sv Sv # define PL_compiling compiling # define PL_copline copline # define PL_curcop curcop # define PL_curstash curstash # define PL_debstash debstash # define PL_defgv defgv # define PL_diehook diehook # define PL_dirty dirty # define PL_dowarn dowarn # define PL_errgv errgv # define PL_expect expect # define PL_hexdigit hexdigit # define PL_hints hints # define PL_laststatval laststatval # define PL_na na # define PL_perl_destruct_level perl_destruct_level # define PL_perldb perldb # define PL_rsfp_filters rsfp_filters # define PL_rsfp rsfp # define PL_stack_base stack_base # define PL_stack_sp stack_sp # define PL_statcache statcache # define PL_stdingv stdingv # define PL_sv_arenaroot sv_arenaroot # define PL_sv_no sv_no # define PL_sv_undef sv_undef # define PL_sv_yes sv_yes # define PL_tainted tainted # define PL_tainting tainting /* Replace: 0 */ #endif /* Warning: PL_expect, PL_copline, PL_rsfp, PL_rsfp_filters * Do not use this variable. It is internal to the perl parser * and may change or even be removed in the future. Note that * as of perl 5.9.5 you cannot assign to this variable anymore. */ /* TODO: cannot assign to these vars; is it worth fixing? */ #if (PERL_BCDVERSION >= 0x5009005) # define PL_expect (PL_parser ? PL_parser->expect : 0) # define PL_copline (PL_parser ? PL_parser->copline : 0) # define PL_rsfp (PL_parser ? PL_parser->rsfp : (PerlIO *) 0) # define PL_rsfp_filters (PL_parser ? 
PL_parser->rsfp_filters : (AV *) 0) #endif #ifndef dTHR # define dTHR dNOOP #endif #ifndef dTHX # define dTHX dNOOP #endif #ifndef dTHXa # define dTHXa(x) dNOOP #endif #ifndef pTHX # define pTHX void #endif #ifndef pTHX_ # define pTHX_ #endif #ifndef aTHX # define aTHX #endif #ifndef aTHX_ # define aTHX_ #endif #if (PERL_BCDVERSION < 0x5006000) # ifdef USE_THREADS # define aTHXR thr # define aTHXR_ thr, # else # define aTHXR # define aTHXR_ # endif # define dTHXR dTHR #else # define aTHXR aTHX # define aTHXR_ aTHX_ # define dTHXR dTHX #endif #ifndef dTHXoa # define dTHXoa(x) dTHXa(x) #endif #ifndef PUSHmortal # define PUSHmortal PUSHs(sv_newmortal()) #endif #ifndef mPUSHp # define mPUSHp(p,l) sv_setpvn_mg(PUSHmortal, (p), (l)) #endif #ifndef mPUSHn # define mPUSHn(n) sv_setnv_mg(PUSHmortal, (NV)(n)) #endif #ifndef mPUSHi # define mPUSHi(i) sv_setiv_mg(PUSHmortal, (IV)(i)) #endif #ifndef mPUSHu # define mPUSHu(u) sv_setuv_mg(PUSHmortal, (UV)(u)) #endif #ifndef XPUSHmortal # define XPUSHmortal XPUSHs(sv_newmortal()) #endif #ifndef mXPUSHp # define mXPUSHp(p,l) STMT_START { EXTEND(sp,1); sv_setpvn_mg(PUSHmortal, (p), (l)); } STMT_END #endif #ifndef mXPUSHn # define mXPUSHn(n) STMT_START { EXTEND(sp,1); sv_setnv_mg(PUSHmortal, (NV)(n)); } STMT_END #endif #ifndef mXPUSHi # define mXPUSHi(i) STMT_START { EXTEND(sp,1); sv_setiv_mg(PUSHmortal, (IV)(i)); } STMT_END #endif #ifndef mXPUSHu # define mXPUSHu(u) STMT_START { EXTEND(sp,1); sv_setuv_mg(PUSHmortal, (UV)(u)); } STMT_END #endif /* Replace: 1 */ #ifndef call_sv # define call_sv perl_call_sv #endif #ifndef call_pv # define call_pv perl_call_pv #endif #ifndef call_argv # define call_argv perl_call_argv #endif #ifndef call_method # define call_method perl_call_method #endif #ifndef eval_sv # define eval_sv perl_eval_sv #endif #ifndef PERL_LOADMOD_DENY # define PERL_LOADMOD_DENY 0x1 #endif #ifndef PERL_LOADMOD_NOIMPORT # define PERL_LOADMOD_NOIMPORT 0x2 #endif #ifndef PERL_LOADMOD_IMPORT_OPS # define 
PERL_LOADMOD_IMPORT_OPS 0x4 #endif /* Replace: 0 */ /* Replace perl_eval_pv with eval_pv */ #ifndef eval_pv #if defined(NEED_eval_pv) static SV* DPPP_(my_eval_pv)(char *p, I32 croak_on_error); static #else extern SV* DPPP_(my_eval_pv)(char *p, I32 croak_on_error); #endif #ifdef eval_pv # undef eval_pv #endif #define eval_pv(a,b) DPPP_(my_eval_pv)(aTHX_ a,b) #define Perl_eval_pv DPPP_(my_eval_pv) #if defined(NEED_eval_pv) || defined(NEED_eval_pv_GLOBAL) SV* DPPP_(my_eval_pv)(char *p, I32 croak_on_error) { dSP; SV* sv = newSVpv(p, 0); PUSHMARK(sp); eval_sv(sv, G_SCALAR); SvREFCNT_dec(sv); SPAGAIN; sv = POPs; PUTBACK; if (croak_on_error && SvTRUE(GvSV(errgv))) croak(SvPVx(GvSV(errgv), na)); return sv; } #endif #endif #ifndef vload_module #if defined(NEED_vload_module) static void DPPP_(my_vload_module)(U32 flags, SV *name, SV *ver, va_list *args); static #else extern void DPPP_(my_vload_module)(U32 flags, SV *name, SV *ver, va_list *args); #endif #ifdef vload_module # undef vload_module #endif #define vload_module(a,b,c,d) DPPP_(my_vload_module)(aTHX_ a,b,c,d) #define Perl_vload_module DPPP_(my_vload_module) #if defined(NEED_vload_module) || defined(NEED_vload_module_GLOBAL) void DPPP_(my_vload_module)(U32 flags, SV *name, SV *ver, va_list *args) { dTHR; dVAR; OP *veop, *imop; OP * const modname = newSVOP(OP_CONST, 0, name); /* 5.005 has a somewhat hacky force_normal that doesn't croak on SvREADONLY() if PL_compling is true. Current perls take care in ck_require() to correctly turn off SvREADONLY before calling force_normal_flags(). 
This seems a better fix than fudging PL_compling */ SvREADONLY_off(((SVOP*)modname)->op_sv); modname->op_private |= OPpCONST_BARE; if (ver) { veop = newSVOP(OP_CONST, 0, ver); } else veop = NULL; if (flags & PERL_LOADMOD_NOIMPORT) { imop = sawparens(newNULLLIST()); } else if (flags & PERL_LOADMOD_IMPORT_OPS) { imop = va_arg(*args, OP*); } else { SV *sv; imop = NULL; sv = va_arg(*args, SV*); while (sv) { imop = append_elem(OP_LIST, imop, newSVOP(OP_CONST, 0, sv)); sv = va_arg(*args, SV*); } } { const line_t ocopline = PL_copline; COP * const ocurcop = PL_curcop; const int oexpect = PL_expect; #if (PERL_BCDVERSION >= 0x5004000) utilize(!(flags & PERL_LOADMOD_DENY), start_subparse(FALSE, 0), veop, modname, imop); #else utilize(!(flags & PERL_LOADMOD_DENY), start_subparse(), modname, imop); #endif PL_expect = oexpect; PL_copline = ocopline; PL_curcop = ocurcop; } } #endif #endif #ifndef load_module #if defined(NEED_load_module) static void DPPP_(my_load_module)(U32 flags, SV *name, SV *ver, ...); static #else extern void DPPP_(my_load_module)(U32 flags, SV *name, SV *ver, ...); #endif #ifdef load_module # undef load_module #endif #define load_module DPPP_(my_load_module) #define Perl_load_module DPPP_(my_load_module) #if defined(NEED_load_module) || defined(NEED_load_module_GLOBAL) void DPPP_(my_load_module)(U32 flags, SV *name, SV *ver, ...) 
{ va_list args; va_start(args, ver); vload_module(flags, name, ver, &args); va_end(args); } #endif #endif #ifndef newRV_inc # define newRV_inc(sv) newRV(sv) /* Replace */ #endif #ifndef newRV_noinc #if defined(NEED_newRV_noinc) static SV * DPPP_(my_newRV_noinc)(SV *sv); static #else extern SV * DPPP_(my_newRV_noinc)(SV *sv); #endif #ifdef newRV_noinc # undef newRV_noinc #endif #define newRV_noinc(a) DPPP_(my_newRV_noinc)(aTHX_ a) #define Perl_newRV_noinc DPPP_(my_newRV_noinc) #if defined(NEED_newRV_noinc) || defined(NEED_newRV_noinc_GLOBAL) SV * DPPP_(my_newRV_noinc)(SV *sv) { SV *rv = (SV *)newRV(sv); SvREFCNT_dec(sv); return rv; } #endif #endif /* Hint: newCONSTSUB * Returns a CV* as of perl-5.7.1. This return value is not supported * by Devel::PPPort. */ /* newCONSTSUB from IO.xs is in the core starting with 5.004_63 */ #if (PERL_BCDVERSION < 0x5004063) && (PERL_BCDVERSION != 0x5004005) #if defined(NEED_newCONSTSUB) static void DPPP_(my_newCONSTSUB)(HV *stash, const char *name, SV *sv); static #else extern void DPPP_(my_newCONSTSUB)(HV *stash, const char *name, SV *sv); #endif #ifdef newCONSTSUB # undef newCONSTSUB #endif #define newCONSTSUB(a,b,c) DPPP_(my_newCONSTSUB)(aTHX_ a,b,c) #define Perl_newCONSTSUB DPPP_(my_newCONSTSUB) #if defined(NEED_newCONSTSUB) || defined(NEED_newCONSTSUB_GLOBAL) void DPPP_(my_newCONSTSUB)(HV *stash, const char *name, SV *sv) { U32 oldhints = PL_hints; HV *old_cop_stash = PL_curcop->cop_stash; HV *old_curstash = PL_curstash; line_t oldline = PL_curcop->cop_line; PL_curcop->cop_line = PL_copline; PL_hints &= ~HINT_BLOCK_SCOPE; if (stash) PL_curstash = PL_curcop->cop_stash = stash; newSUB( #if (PERL_BCDVERSION < 0x5003022) start_subparse(), #elif (PERL_BCDVERSION == 0x5003022) start_subparse(0), #else /* 5.003_23 onwards */ start_subparse(FALSE, 0), #endif newSVOP(OP_CONST, 0, newSVpv((char *) name, 0)), newSVOP(OP_CONST, 0, &PL_sv_no), /* SvPV(&PL_sv_no) == "" -- GMB */ newSTATEOP(0, Nullch, newSVOP(OP_CONST, 0, sv)) ); PL_hints = 
oldhints; PL_curcop->cop_stash = old_cop_stash; PL_curstash = old_curstash; PL_curcop->cop_line = oldline; } #endif #endif /* * Boilerplate macros for initializing and accessing interpreter-local * data from C. All statics in extensions should be reworked to use * this, if you want to make the extension thread-safe. See ext/re/re.xs * for an example of the use of these macros. * * Code that uses these macros is responsible for the following: * 1. #define MY_CXT_KEY to a unique string, e.g. "DynaLoader_guts" * 2. Declare a typedef named my_cxt_t that is a structure that contains * all the data that needs to be interpreter-local. * 3. Use the START_MY_CXT macro after the declaration of my_cxt_t. * 4. Use the MY_CXT_INIT macro such that it is called exactly once * (typically put in the BOOT: section). * 5. Use the members of the my_cxt_t structure everywhere as * MY_CXT.member. * 6. Use the dMY_CXT macro (a declaration) in all the functions that * access MY_CXT. */ #if defined(MULTIPLICITY) || defined(PERL_OBJECT) || \ defined(PERL_CAPI) || defined(PERL_IMPLICIT_CONTEXT) #ifndef START_MY_CXT /* This must appear in all extensions that define a my_cxt_t structure, * right after the definition (i.e. at file scope). The non-threads * case below uses it to declare the data as static. */ #define START_MY_CXT #if (PERL_BCDVERSION < 0x5004068) /* Fetches the SV that keeps the per-interpreter data. */ #define dMY_CXT_SV \ SV *my_cxt_sv = get_sv(MY_CXT_KEY, FALSE) #else /* >= perl5.004_68 */ #define dMY_CXT_SV \ SV *my_cxt_sv = *hv_fetch(PL_modglobal, MY_CXT_KEY, \ sizeof(MY_CXT_KEY)-1, TRUE) #endif /* < perl5.004_68 */ /* This declaration should be used within all functions that use the * interpreter-local data. */ #define dMY_CXT \ dMY_CXT_SV; \ my_cxt_t *my_cxtp = INT2PTR(my_cxt_t*,SvUV(my_cxt_sv)) /* Creates and zeroes the per-interpreter data. * (We allocate my_cxtp in a Perl SV so that it will be released when * the interpreter goes away.) 
*/ #define MY_CXT_INIT \ dMY_CXT_SV; \ /* newSV() allocates one more than needed */ \ my_cxt_t *my_cxtp = (my_cxt_t*)SvPVX(newSV(sizeof(my_cxt_t)-1));\ Zero(my_cxtp, 1, my_cxt_t); \ sv_setuv(my_cxt_sv, PTR2UV(my_cxtp)) /* This macro must be used to access members of the my_cxt_t structure. * e.g. MYCXT.some_data */ #define MY_CXT (*my_cxtp) /* Judicious use of these macros can reduce the number of times dMY_CXT * is used. Use is similar to pTHX, aTHX etc. */ #define pMY_CXT my_cxt_t *my_cxtp #define pMY_CXT_ pMY_CXT, #define _pMY_CXT ,pMY_CXT #define aMY_CXT my_cxtp #define aMY_CXT_ aMY_CXT, #define _aMY_CXT ,aMY_CXT #endif /* START_MY_CXT */ #ifndef MY_CXT_CLONE /* Clones the per-interpreter data. */ #define MY_CXT_CLONE \ dMY_CXT_SV; \ my_cxt_t *my_cxtp = (my_cxt_t*)SvPVX(newSV(sizeof(my_cxt_t)-1));\ Copy(INT2PTR(my_cxt_t*, SvUV(my_cxt_sv)), my_cxtp, 1, my_cxt_t);\ sv_setuv(my_cxt_sv, PTR2UV(my_cxtp)) #endif #else /* single interpreter */ #ifndef START_MY_CXT #define START_MY_CXT static my_cxt_t my_cxt; #define dMY_CXT_SV dNOOP #define dMY_CXT dNOOP #define MY_CXT_INIT NOOP #define MY_CXT my_cxt #define pMY_CXT void #define pMY_CXT_ #define _pMY_CXT #define aMY_CXT #define aMY_CXT_ #define _aMY_CXT #endif /* START_MY_CXT */ #ifndef MY_CXT_CLONE #define MY_CXT_CLONE NOOP #endif #endif #ifndef IVdf # if IVSIZE == LONGSIZE # define IVdf "ld" # define UVuf "lu" # define UVof "lo" # define UVxf "lx" # define UVXf "lX" # else # if IVSIZE == INTSIZE # define IVdf "d" # define UVuf "u" # define UVof "o" # define UVxf "x" # define UVXf "X" # endif # endif #endif #ifndef NVef # if defined(USE_LONG_DOUBLE) && defined(HAS_LONG_DOUBLE) && \ defined(PERL_PRIfldbl) /* Not very likely, but let's try anyway. 
*/ # define NVef PERL_PRIeldbl # define NVff PERL_PRIfldbl # define NVgf PERL_PRIgldbl # else # define NVef "e" # define NVff "f" # define NVgf "g" # endif #endif #ifndef SvREFCNT_inc # ifdef PERL_USE_GCC_BRACE_GROUPS # define SvREFCNT_inc(sv) \ ({ \ SV * const _sv = (SV*)(sv); \ if (_sv) \ (SvREFCNT(_sv))++; \ _sv; \ }) # else # define SvREFCNT_inc(sv) \ ((PL_Sv=(SV*)(sv)) ? (++(SvREFCNT(PL_Sv)),PL_Sv) : NULL) # endif #endif #ifndef SvREFCNT_inc_simple # ifdef PERL_USE_GCC_BRACE_GROUPS # define SvREFCNT_inc_simple(sv) \ ({ \ if (sv) \ (SvREFCNT(sv))++; \ (SV *)(sv); \ }) # else # define SvREFCNT_inc_simple(sv) \ ((sv) ? (SvREFCNT(sv)++,(SV*)(sv)) : NULL) # endif #endif #ifndef SvREFCNT_inc_NN # ifdef PERL_USE_GCC_BRACE_GROUPS # define SvREFCNT_inc_NN(sv) \ ({ \ SV * const _sv = (SV*)(sv); \ SvREFCNT(_sv)++; \ _sv; \ }) # else # define SvREFCNT_inc_NN(sv) \ (PL_Sv=(SV*)(sv),++(SvREFCNT(PL_Sv)),PL_Sv) # endif #endif #ifndef SvREFCNT_inc_void # ifdef PERL_USE_GCC_BRACE_GROUPS # define SvREFCNT_inc_void(sv) \ ({ \ SV * const _sv = (SV*)(sv); \ if (_sv) \ (void)(SvREFCNT(_sv)++); \ }) # else # define SvREFCNT_inc_void(sv) \ (void)((PL_Sv=(SV*)(sv)) ? ++(SvREFCNT(PL_Sv)) : 0) # endif #endif #ifndef SvREFCNT_inc_simple_void # define SvREFCNT_inc_simple_void(sv) STMT_START { if (sv) SvREFCNT(sv)++; } STMT_END #endif #ifndef SvREFCNT_inc_simple_NN # define SvREFCNT_inc_simple_NN(sv) (++SvREFCNT(sv), (SV*)(sv)) #endif #ifndef SvREFCNT_inc_void_NN # define SvREFCNT_inc_void_NN(sv) (void)(++SvREFCNT((SV*)(sv))) #endif #ifndef SvREFCNT_inc_simple_void_NN # define SvREFCNT_inc_simple_void_NN(sv) (void)(++SvREFCNT((SV*)(sv))) #endif /* Backwards compatibility stuff... 
:-( */ #if !defined(NEED_sv_2pv_flags) && defined(NEED_sv_2pv_nolen) # define NEED_sv_2pv_flags #endif #if !defined(NEED_sv_2pv_flags_GLOBAL) && defined(NEED_sv_2pv_nolen_GLOBAL) # define NEED_sv_2pv_flags_GLOBAL #endif /* Hint: sv_2pv_nolen * Use the SvPV_nolen() or SvPV_nolen_const() macros instead of sv_2pv_nolen(). */ #ifndef sv_2pv_nolen # define sv_2pv_nolen(sv) SvPV_nolen(sv) #endif #ifdef SvPVbyte /* Hint: SvPVbyte * Does not work in perl-5.6.1, ppport.h implements a version * borrowed from perl-5.7.3. */ #if (PERL_BCDVERSION < 0x5007000) #if defined(NEED_sv_2pvbyte) static char * DPPP_(my_sv_2pvbyte)(pTHX_ SV * sv, STRLEN * lp); static #else extern char * DPPP_(my_sv_2pvbyte)(pTHX_ SV * sv, STRLEN * lp); #endif #ifdef sv_2pvbyte # undef sv_2pvbyte #endif #define sv_2pvbyte(a,b) DPPP_(my_sv_2pvbyte)(aTHX_ a,b) #define Perl_sv_2pvbyte DPPP_(my_sv_2pvbyte) #if defined(NEED_sv_2pvbyte) || defined(NEED_sv_2pvbyte_GLOBAL) char * DPPP_(my_sv_2pvbyte)(pTHX_ SV *sv, STRLEN *lp) { sv_utf8_downgrade(sv,0); return SvPV(sv,*lp); } #endif /* Hint: sv_2pvbyte * Use the SvPVbyte() macro instead of sv_2pvbyte(). */ #undef SvPVbyte #define SvPVbyte(sv, lp) \ ((SvFLAGS(sv) & (SVf_POK|SVf_UTF8)) == (SVf_POK) \ ? ((lp = SvCUR(sv)), SvPVX(sv)) : sv_2pvbyte(sv, &lp)) #endif #else # define SvPVbyte SvPV # define sv_2pvbyte sv_2pv #endif #ifndef sv_2pvbyte_nolen # define sv_2pvbyte_nolen(sv) sv_2pv_nolen(sv) #endif /* Hint: sv_pvn * Always use the SvPV() macro instead of sv_pvn(). */ /* Hint: sv_pvn_force * Always use the SvPV_force() macro instead of sv_pvn_force(). 
*/ /* If these are undefined, they're not handled by the core anyway */ #ifndef SV_IMMEDIATE_UNREF # define SV_IMMEDIATE_UNREF 0 #endif #ifndef SV_GMAGIC # define SV_GMAGIC 0 #endif #ifndef SV_COW_DROP_PV # define SV_COW_DROP_PV 0 #endif #ifndef SV_UTF8_NO_ENCODING # define SV_UTF8_NO_ENCODING 0 #endif #ifndef SV_NOSTEAL # define SV_NOSTEAL 0 #endif #ifndef SV_CONST_RETURN # define SV_CONST_RETURN 0 #endif #ifndef SV_MUTABLE_RETURN # define SV_MUTABLE_RETURN 0 #endif #ifndef SV_SMAGIC # define SV_SMAGIC 0 #endif #ifndef SV_HAS_TRAILING_NUL # define SV_HAS_TRAILING_NUL 0 #endif #ifndef SV_COW_SHARED_HASH_KEYS # define SV_COW_SHARED_HASH_KEYS 0 #endif #if (PERL_BCDVERSION < 0x5007002) #if defined(NEED_sv_2pv_flags) static char * DPPP_(my_sv_2pv_flags)(pTHX_ SV * sv, STRLEN * lp, I32 flags); static #else extern char * DPPP_(my_sv_2pv_flags)(pTHX_ SV * sv, STRLEN * lp, I32 flags); #endif #ifdef sv_2pv_flags # undef sv_2pv_flags #endif #define sv_2pv_flags(a,b,c) DPPP_(my_sv_2pv_flags)(aTHX_ a,b,c) #define Perl_sv_2pv_flags DPPP_(my_sv_2pv_flags) #if defined(NEED_sv_2pv_flags) || defined(NEED_sv_2pv_flags_GLOBAL) char * DPPP_(my_sv_2pv_flags)(pTHX_ SV *sv, STRLEN *lp, I32 flags) { STRLEN n_a = (STRLEN) flags; return sv_2pv(sv, lp ? lp : &n_a); } #endif #if defined(NEED_sv_pvn_force_flags) static char * DPPP_(my_sv_pvn_force_flags)(pTHX_ SV * sv, STRLEN * lp, I32 flags); static #else extern char * DPPP_(my_sv_pvn_force_flags)(pTHX_ SV * sv, STRLEN * lp, I32 flags); #endif #ifdef sv_pvn_force_flags # undef sv_pvn_force_flags #endif #define sv_pvn_force_flags(a,b,c) DPPP_(my_sv_pvn_force_flags)(aTHX_ a,b,c) #define Perl_sv_pvn_force_flags DPPP_(my_sv_pvn_force_flags) #if defined(NEED_sv_pvn_force_flags) || defined(NEED_sv_pvn_force_flags_GLOBAL) char * DPPP_(my_sv_pvn_force_flags)(pTHX_ SV *sv, STRLEN *lp, I32 flags) { STRLEN n_a = (STRLEN) flags; return sv_pvn_force(sv, lp ? 
lp : &n_a); } #endif #endif #ifndef SvPV_const # define SvPV_const(sv, lp) SvPV_flags_const(sv, lp, SV_GMAGIC) #endif #ifndef SvPV_mutable # define SvPV_mutable(sv, lp) SvPV_flags_mutable(sv, lp, SV_GMAGIC) #endif #ifndef SvPV_flags # define SvPV_flags(sv, lp, flags) \ ((SvFLAGS(sv) & (SVf_POK)) == SVf_POK \ ? ((lp = SvCUR(sv)), SvPVX(sv)) : sv_2pv_flags(sv, &lp, flags)) #endif #ifndef SvPV_flags_const # define SvPV_flags_const(sv, lp, flags) \ ((SvFLAGS(sv) & (SVf_POK)) == SVf_POK \ ? ((lp = SvCUR(sv)), SvPVX_const(sv)) : \ (const char*) sv_2pv_flags(sv, &lp, flags|SV_CONST_RETURN)) #endif #ifndef SvPV_flags_const_nolen # define SvPV_flags_const_nolen(sv, flags) \ ((SvFLAGS(sv) & (SVf_POK)) == SVf_POK \ ? SvPVX_const(sv) : \ (const char*) sv_2pv_flags(sv, 0, flags|SV_CONST_RETURN)) #endif #ifndef SvPV_flags_mutable # define SvPV_flags_mutable(sv, lp, flags) \ ((SvFLAGS(sv) & (SVf_POK)) == SVf_POK \ ? ((lp = SvCUR(sv)), SvPVX_mutable(sv)) : \ sv_2pv_flags(sv, &lp, flags|SV_MUTABLE_RETURN)) #endif #ifndef SvPV_force # define SvPV_force(sv, lp) SvPV_force_flags(sv, lp, SV_GMAGIC) #endif #ifndef SvPV_force_nolen # define SvPV_force_nolen(sv) SvPV_force_flags_nolen(sv, SV_GMAGIC) #endif #ifndef SvPV_force_mutable # define SvPV_force_mutable(sv, lp) SvPV_force_flags_mutable(sv, lp, SV_GMAGIC) #endif #ifndef SvPV_force_nomg # define SvPV_force_nomg(sv, lp) SvPV_force_flags(sv, lp, 0) #endif #ifndef SvPV_force_nomg_nolen # define SvPV_force_nomg_nolen(sv) SvPV_force_flags_nolen(sv, 0) #endif #ifndef SvPV_force_flags # define SvPV_force_flags(sv, lp, flags) \ ((SvFLAGS(sv) & (SVf_POK|SVf_THINKFIRST)) == SVf_POK \ ? ((lp = SvCUR(sv)), SvPVX(sv)) : sv_pvn_force_flags(sv, &lp, flags)) #endif #ifndef SvPV_force_flags_nolen # define SvPV_force_flags_nolen(sv, flags) \ ((SvFLAGS(sv) & (SVf_POK|SVf_THINKFIRST)) == SVf_POK \ ? 
SvPVX(sv) : sv_pvn_force_flags(sv, 0, flags)) #endif #ifndef SvPV_force_flags_mutable # define SvPV_force_flags_mutable(sv, lp, flags) \ ((SvFLAGS(sv) & (SVf_POK|SVf_THINKFIRST)) == SVf_POK \ ? ((lp = SvCUR(sv)), SvPVX_mutable(sv)) \ : sv_pvn_force_flags(sv, &lp, flags|SV_MUTABLE_RETURN)) #endif #ifndef SvPV_nolen # define SvPV_nolen(sv) \ ((SvFLAGS(sv) & (SVf_POK)) == SVf_POK \ ? SvPVX(sv) : sv_2pv_flags(sv, 0, SV_GMAGIC)) #endif #ifndef SvPV_nolen_const # define SvPV_nolen_const(sv) \ ((SvFLAGS(sv) & (SVf_POK)) == SVf_POK \ ? SvPVX_const(sv) : sv_2pv_flags(sv, 0, SV_GMAGIC|SV_CONST_RETURN)) #endif #ifndef SvPV_nomg # define SvPV_nomg(sv, lp) SvPV_flags(sv, lp, 0) #endif #ifndef SvPV_nomg_const # define SvPV_nomg_const(sv, lp) SvPV_flags_const(sv, lp, 0) #endif #ifndef SvPV_nomg_const_nolen # define SvPV_nomg_const_nolen(sv) SvPV_flags_const_nolen(sv, 0) #endif #ifndef SvMAGIC_set # define SvMAGIC_set(sv, val) \ STMT_START { assert(SvTYPE(sv) >= SVt_PVMG); \ (((XPVMG*) SvANY(sv))->xmg_magic = (val)); } STMT_END #endif #if (PERL_BCDVERSION < 0x5009003) #ifndef SvPVX_const # define SvPVX_const(sv) ((const char*) (0 + SvPVX(sv))) #endif #ifndef SvPVX_mutable # define SvPVX_mutable(sv) (0 + SvPVX(sv)) #endif #ifndef SvRV_set # define SvRV_set(sv, val) \ STMT_START { assert(SvTYPE(sv) >= SVt_RV); \ (((XRV*) SvANY(sv))->xrv_rv = (val)); } STMT_END #endif #else #ifndef SvPVX_const # define SvPVX_const(sv) ((const char*)((sv)->sv_u.svu_pv)) #endif #ifndef SvPVX_mutable # define SvPVX_mutable(sv) ((sv)->sv_u.svu_pv) #endif #ifndef SvRV_set # define SvRV_set(sv, val) \ STMT_START { assert(SvTYPE(sv) >= SVt_RV); \ ((sv)->sv_u.svu_rv = (val)); } STMT_END #endif #endif #ifndef SvSTASH_set # define SvSTASH_set(sv, val) \ STMT_START { assert(SvTYPE(sv) >= SVt_PVMG); \ (((XPVMG*) SvANY(sv))->xmg_stash = (val)); } STMT_END #endif #if (PERL_BCDVERSION < 0x5004000) #ifndef SvUV_set # define SvUV_set(sv, val) \ STMT_START { assert(SvTYPE(sv) == SVt_IV || SvTYPE(sv) >= SVt_PVIV); \ 
(((XPVIV*) SvANY(sv))->xiv_iv = (IV) (val)); } STMT_END #endif #else #ifndef SvUV_set # define SvUV_set(sv, val) \ STMT_START { assert(SvTYPE(sv) == SVt_IV || SvTYPE(sv) >= SVt_PVIV); \ (((XPVUV*) SvANY(sv))->xuv_uv = (val)); } STMT_END #endif #endif #if (PERL_BCDVERSION >= 0x5004000) && !defined(vnewSVpvf) #if defined(NEED_vnewSVpvf) static SV * DPPP_(my_vnewSVpvf)(pTHX_ const char * pat, va_list * args); static #else extern SV * DPPP_(my_vnewSVpvf)(pTHX_ const char * pat, va_list * args); #endif #ifdef vnewSVpvf # undef vnewSVpvf #endif #define vnewSVpvf(a,b) DPPP_(my_vnewSVpvf)(aTHX_ a,b) #define Perl_vnewSVpvf DPPP_(my_vnewSVpvf) #if defined(NEED_vnewSVpvf) || defined(NEED_vnewSVpvf_GLOBAL) SV * DPPP_(my_vnewSVpvf)(pTHX_ const char *pat, va_list *args) { register SV *sv = newSV(0); sv_vsetpvfn(sv, pat, strlen(pat), args, Null(SV**), 0, Null(bool*)); return sv; } #endif #endif #if (PERL_BCDVERSION >= 0x5004000) && !defined(sv_vcatpvf) # define sv_vcatpvf(sv, pat, args) sv_vcatpvfn(sv, pat, strlen(pat), args, Null(SV**), 0, Null(bool*)) #endif #if (PERL_BCDVERSION >= 0x5004000) && !defined(sv_vsetpvf) # define sv_vsetpvf(sv, pat, args) sv_vsetpvfn(sv, pat, strlen(pat), args, Null(SV**), 0, Null(bool*)) #endif #if (PERL_BCDVERSION >= 0x5004000) && !defined(sv_catpvf_mg) #if defined(NEED_sv_catpvf_mg) static void DPPP_(my_sv_catpvf_mg)(pTHX_ SV * sv, const char * pat, ...); static #else extern void DPPP_(my_sv_catpvf_mg)(pTHX_ SV * sv, const char * pat, ...); #endif #define Perl_sv_catpvf_mg DPPP_(my_sv_catpvf_mg) #if defined(NEED_sv_catpvf_mg) || defined(NEED_sv_catpvf_mg_GLOBAL) void DPPP_(my_sv_catpvf_mg)(pTHX_ SV *sv, const char *pat, ...) 
{ va_list args; va_start(args, pat); sv_vcatpvfn(sv, pat, strlen(pat), &args, Null(SV**), 0, Null(bool*)); SvSETMAGIC(sv); va_end(args); } #endif #endif #ifdef PERL_IMPLICIT_CONTEXT #if (PERL_BCDVERSION >= 0x5004000) && !defined(sv_catpvf_mg_nocontext) #if defined(NEED_sv_catpvf_mg_nocontext) static void DPPP_(my_sv_catpvf_mg_nocontext)(SV * sv, const char * pat, ...); static #else extern void DPPP_(my_sv_catpvf_mg_nocontext)(SV * sv, const char * pat, ...); #endif #define sv_catpvf_mg_nocontext DPPP_(my_sv_catpvf_mg_nocontext) #define Perl_sv_catpvf_mg_nocontext DPPP_(my_sv_catpvf_mg_nocontext) #if defined(NEED_sv_catpvf_mg_nocontext) || defined(NEED_sv_catpvf_mg_nocontext_GLOBAL) void DPPP_(my_sv_catpvf_mg_nocontext)(SV *sv, const char *pat, ...) { dTHX; va_list args; va_start(args, pat); sv_vcatpvfn(sv, pat, strlen(pat), &args, Null(SV**), 0, Null(bool*)); SvSETMAGIC(sv); va_end(args); } #endif #endif #endif /* sv_catpvf_mg depends on sv_catpvf_mg_nocontext */ #ifndef sv_catpvf_mg # ifdef PERL_IMPLICIT_CONTEXT # define sv_catpvf_mg Perl_sv_catpvf_mg_nocontext # else # define sv_catpvf_mg Perl_sv_catpvf_mg # endif #endif #if (PERL_BCDVERSION >= 0x5004000) && !defined(sv_vcatpvf_mg) # define sv_vcatpvf_mg(sv, pat, args) \ STMT_START { \ sv_vcatpvfn(sv, pat, strlen(pat), args, Null(SV**), 0, Null(bool*)); \ SvSETMAGIC(sv); \ } STMT_END #endif #if (PERL_BCDVERSION >= 0x5004000) && !defined(sv_setpvf_mg) #if defined(NEED_sv_setpvf_mg) static void DPPP_(my_sv_setpvf_mg)(pTHX_ SV * sv, const char * pat, ...); static #else extern void DPPP_(my_sv_setpvf_mg)(pTHX_ SV * sv, const char * pat, ...); #endif #define Perl_sv_setpvf_mg DPPP_(my_sv_setpvf_mg) #if defined(NEED_sv_setpvf_mg) || defined(NEED_sv_setpvf_mg_GLOBAL) void DPPP_(my_sv_setpvf_mg)(pTHX_ SV *sv, const char *pat, ...) 
{ va_list args; va_start(args, pat); sv_vsetpvfn(sv, pat, strlen(pat), &args, Null(SV**), 0, Null(bool*)); SvSETMAGIC(sv); va_end(args); } #endif #endif #ifdef PERL_IMPLICIT_CONTEXT #if (PERL_BCDVERSION >= 0x5004000) && !defined(sv_setpvf_mg_nocontext) #if defined(NEED_sv_setpvf_mg_nocontext) static void DPPP_(my_sv_setpvf_mg_nocontext)(SV * sv, const char * pat, ...); static #else extern void DPPP_(my_sv_setpvf_mg_nocontext)(SV * sv, const char * pat, ...); #endif #define sv_setpvf_mg_nocontext DPPP_(my_sv_setpvf_mg_nocontext) #define Perl_sv_setpvf_mg_nocontext DPPP_(my_sv_setpvf_mg_nocontext) #if defined(NEED_sv_setpvf_mg_nocontext) || defined(NEED_sv_setpvf_mg_nocontext_GLOBAL) void DPPP_(my_sv_setpvf_mg_nocontext)(SV *sv, const char *pat, ...) { dTHX; va_list args; va_start(args, pat); sv_vsetpvfn(sv, pat, strlen(pat), &args, Null(SV**), 0, Null(bool*)); SvSETMAGIC(sv); va_end(args); } #endif #endif #endif /* sv_setpvf_mg depends on sv_setpvf_mg_nocontext */ #ifndef sv_setpvf_mg # ifdef PERL_IMPLICIT_CONTEXT # define sv_setpvf_mg Perl_sv_setpvf_mg_nocontext # else # define sv_setpvf_mg Perl_sv_setpvf_mg # endif #endif #if (PERL_BCDVERSION >= 0x5004000) && !defined(sv_vsetpvf_mg) # define sv_vsetpvf_mg(sv, pat, args) \ STMT_START { \ sv_vsetpvfn(sv, pat, strlen(pat), args, Null(SV**), 0, Null(bool*)); \ SvSETMAGIC(sv); \ } STMT_END #endif #ifndef newSVpvn_share #if defined(NEED_newSVpvn_share) static SV * DPPP_(my_newSVpvn_share)(pTHX_ const char *src, I32 len, U32 hash); static #else extern SV * DPPP_(my_newSVpvn_share)(pTHX_ const char *src, I32 len, U32 hash); #endif #ifdef newSVpvn_share # undef newSVpvn_share #endif #define newSVpvn_share(a,b,c) DPPP_(my_newSVpvn_share)(aTHX_ a,b,c) #define Perl_newSVpvn_share DPPP_(my_newSVpvn_share) #if defined(NEED_newSVpvn_share) || defined(NEED_newSVpvn_share_GLOBAL) SV * DPPP_(my_newSVpvn_share)(pTHX_ const char *src, I32 len, U32 hash) { SV *sv; if (len < 0) len = -len; if (!hash) PERL_HASH(hash, (char*) src, len); 
sv = newSVpvn((char *) src, len); sv_upgrade(sv, SVt_PVIV); SvIVX(sv) = hash; SvREADONLY_on(sv); SvPOK_on(sv); return sv; } #endif #endif #ifndef SvSHARED_HASH # define SvSHARED_HASH(sv) (0 + SvUVX(sv)) #endif #ifndef WARN_ALL # define WARN_ALL 0 #endif #ifndef WARN_CLOSURE # define WARN_CLOSURE 1 #endif #ifndef WARN_DEPRECATED # define WARN_DEPRECATED 2 #endif #ifndef WARN_EXITING # define WARN_EXITING 3 #endif #ifndef WARN_GLOB # define WARN_GLOB 4 #endif #ifndef WARN_IO # define WARN_IO 5 #endif #ifndef WARN_CLOSED # define WARN_CLOSED 6 #endif #ifndef WARN_EXEC # define WARN_EXEC 7 #endif #ifndef WARN_LAYER # define WARN_LAYER 8 #endif #ifndef WARN_NEWLINE # define WARN_NEWLINE 9 #endif #ifndef WARN_PIPE # define WARN_PIPE 10 #endif #ifndef WARN_UNOPENED # define WARN_UNOPENED 11 #endif #ifndef WARN_MISC # define WARN_MISC 12 #endif #ifndef WARN_NUMERIC # define WARN_NUMERIC 13 #endif #ifndef WARN_ONCE # define WARN_ONCE 14 #endif #ifndef WARN_OVERFLOW # define WARN_OVERFLOW 15 #endif #ifndef WARN_PACK # define WARN_PACK 16 #endif #ifndef WARN_PORTABLE # define WARN_PORTABLE 17 #endif #ifndef WARN_RECURSION # define WARN_RECURSION 18 #endif #ifndef WARN_REDEFINE # define WARN_REDEFINE 19 #endif #ifndef WARN_REGEXP # define WARN_REGEXP 20 #endif #ifndef WARN_SEVERE # define WARN_SEVERE 21 #endif #ifndef WARN_DEBUGGING # define WARN_DEBUGGING 22 #endif #ifndef WARN_INPLACE # define WARN_INPLACE 23 #endif #ifndef WARN_INTERNAL # define WARN_INTERNAL 24 #endif #ifndef WARN_MALLOC # define WARN_MALLOC 25 #endif #ifndef WARN_SIGNAL # define WARN_SIGNAL 26 #endif #ifndef WARN_SUBSTR # define WARN_SUBSTR 27 #endif #ifndef WARN_SYNTAX # define WARN_SYNTAX 28 #endif #ifndef WARN_AMBIGUOUS # define WARN_AMBIGUOUS 29 #endif #ifndef WARN_BAREWORD # define WARN_BAREWORD 30 #endif #ifndef WARN_DIGIT # define WARN_DIGIT 31 #endif #ifndef WARN_PARENTHESIS # define WARN_PARENTHESIS 32 #endif #ifndef WARN_PRECEDENCE # define WARN_PRECEDENCE 33 #endif #ifndef WARN_PRINTF # define 
WARN_PRINTF 34 #endif #ifndef WARN_PROTOTYPE # define WARN_PROTOTYPE 35 #endif #ifndef WARN_QW # define WARN_QW 36 #endif #ifndef WARN_RESERVED # define WARN_RESERVED 37 #endif #ifndef WARN_SEMICOLON # define WARN_SEMICOLON 38 #endif #ifndef WARN_TAINT # define WARN_TAINT 39 #endif #ifndef WARN_THREADS # define WARN_THREADS 40 #endif #ifndef WARN_UNINITIALIZED # define WARN_UNINITIALIZED 41 #endif #ifndef WARN_UNPACK # define WARN_UNPACK 42 #endif #ifndef WARN_UNTIE # define WARN_UNTIE 43 #endif #ifndef WARN_UTF8 # define WARN_UTF8 44 #endif #ifndef WARN_VOID # define WARN_VOID 45 #endif #ifndef WARN_ASSERTIONS # define WARN_ASSERTIONS 46 #endif #ifndef packWARN # define packWARN(a) (a) #endif #ifndef ckWARN # ifdef G_WARN_ON # define ckWARN(a) (PL_dowarn & G_WARN_ON) # else # define ckWARN(a) PL_dowarn # endif #endif #if (PERL_BCDVERSION >= 0x5004000) && !defined(warner) #if defined(NEED_warner) static void DPPP_(my_warner)(U32 err, const char *pat, ...); static #else extern void DPPP_(my_warner)(U32 err, const char *pat, ...); #endif #define Perl_warner DPPP_(my_warner) #if defined(NEED_warner) || defined(NEED_warner_GLOBAL) void DPPP_(my_warner)(U32 err, const char *pat, ...) 
{ SV *sv; va_list args; PERL_UNUSED_ARG(err); va_start(args, pat); sv = vnewSVpvf(pat, &args); va_end(args); sv_2mortal(sv); warn("%s", SvPV_nolen(sv)); } #define warner Perl_warner #define Perl_warner_nocontext Perl_warner #endif #endif /* concatenating with "" ensures that only literal strings are accepted as argument * note that STR_WITH_LEN() can't be used as argument to macros or functions that * under some configurations might be macros */ #ifndef STR_WITH_LEN # define STR_WITH_LEN(s) (s ""), (sizeof(s)-1) #endif #ifndef newSVpvs # define newSVpvs(str) newSVpvn(str "", sizeof(str) - 1) #endif #ifndef sv_catpvs # define sv_catpvs(sv, str) sv_catpvn(sv, str "", sizeof(str) - 1) #endif #ifndef sv_setpvs # define sv_setpvs(sv, str) sv_setpvn(sv, str "", sizeof(str) - 1) #endif #ifndef hv_fetchs # define hv_fetchs(hv, key, lval) hv_fetch(hv, key "", sizeof(key) - 1, lval) #endif #ifndef hv_stores # define hv_stores(hv, key, val) hv_store(hv, key "", sizeof(key) - 1, val, 0) #endif #ifndef SvGETMAGIC # define SvGETMAGIC(x) STMT_START { if (SvGMAGICAL(x)) mg_get(x); } STMT_END #endif #ifndef PERL_MAGIC_sv # define PERL_MAGIC_sv '\0' #endif #ifndef PERL_MAGIC_overload # define PERL_MAGIC_overload 'A' #endif #ifndef PERL_MAGIC_overload_elem # define PERL_MAGIC_overload_elem 'a' #endif #ifndef PERL_MAGIC_overload_table # define PERL_MAGIC_overload_table 'c' #endif #ifndef PERL_MAGIC_bm # define PERL_MAGIC_bm 'B' #endif #ifndef PERL_MAGIC_regdata # define PERL_MAGIC_regdata 'D' #endif #ifndef PERL_MAGIC_regdatum # define PERL_MAGIC_regdatum 'd' #endif #ifndef PERL_MAGIC_env # define PERL_MAGIC_env 'E' #endif #ifndef PERL_MAGIC_envelem # define PERL_MAGIC_envelem 'e' #endif #ifndef PERL_MAGIC_fm # define PERL_MAGIC_fm 'f' #endif #ifndef PERL_MAGIC_regex_global # define PERL_MAGIC_regex_global 'g' #endif #ifndef PERL_MAGIC_isa # define PERL_MAGIC_isa 'I' #endif #ifndef PERL_MAGIC_isaelem # define PERL_MAGIC_isaelem 'i' #endif #ifndef PERL_MAGIC_nkeys # define 
PERL_MAGIC_nkeys 'k' #endif #ifndef PERL_MAGIC_dbfile # define PERL_MAGIC_dbfile 'L' #endif #ifndef PERL_MAGIC_dbline # define PERL_MAGIC_dbline 'l' #endif #ifndef PERL_MAGIC_mutex # define PERL_MAGIC_mutex 'm' #endif #ifndef PERL_MAGIC_shared # define PERL_MAGIC_shared 'N' #endif #ifndef PERL_MAGIC_shared_scalar # define PERL_MAGIC_shared_scalar 'n' #endif #ifndef PERL_MAGIC_collxfrm # define PERL_MAGIC_collxfrm 'o' #endif #ifndef PERL_MAGIC_tied # define PERL_MAGIC_tied 'P' #endif #ifndef PERL_MAGIC_tiedelem # define PERL_MAGIC_tiedelem 'p' #endif #ifndef PERL_MAGIC_tiedscalar # define PERL_MAGIC_tiedscalar 'q' #endif #ifndef PERL_MAGIC_qr # define PERL_MAGIC_qr 'r' #endif #ifndef PERL_MAGIC_sig # define PERL_MAGIC_sig 'S' #endif #ifndef PERL_MAGIC_sigelem # define PERL_MAGIC_sigelem 's' #endif #ifndef PERL_MAGIC_taint # define PERL_MAGIC_taint 't' #endif #ifndef PERL_MAGIC_uvar # define PERL_MAGIC_uvar 'U' #endif #ifndef PERL_MAGIC_uvar_elem # define PERL_MAGIC_uvar_elem 'u' #endif #ifndef PERL_MAGIC_vstring # define PERL_MAGIC_vstring 'V' #endif #ifndef PERL_MAGIC_vec # define PERL_MAGIC_vec 'v' #endif #ifndef PERL_MAGIC_utf8 # define PERL_MAGIC_utf8 'w' #endif #ifndef PERL_MAGIC_substr # define PERL_MAGIC_substr 'x' #endif #ifndef PERL_MAGIC_defelem # define PERL_MAGIC_defelem 'y' #endif #ifndef PERL_MAGIC_glob # define PERL_MAGIC_glob '*' #endif #ifndef PERL_MAGIC_arylen # define PERL_MAGIC_arylen '#' #endif #ifndef PERL_MAGIC_pos # define PERL_MAGIC_pos '.' #endif #ifndef PERL_MAGIC_backref # define PERL_MAGIC_backref '<' #endif #ifndef PERL_MAGIC_ext # define PERL_MAGIC_ext '~' #endif /* That's the best we can do... 
*/ #ifndef sv_catpvn_nomg # define sv_catpvn_nomg sv_catpvn #endif #ifndef sv_catsv_nomg # define sv_catsv_nomg sv_catsv #endif #ifndef sv_setsv_nomg # define sv_setsv_nomg sv_setsv #endif #ifndef sv_pvn_nomg # define sv_pvn_nomg sv_pvn #endif #ifndef SvIV_nomg # define SvIV_nomg SvIV #endif #ifndef SvUV_nomg # define SvUV_nomg SvUV #endif #ifndef sv_catpv_mg # define sv_catpv_mg(sv, ptr) \ STMT_START { \ SV *TeMpSv = sv; \ sv_catpv(TeMpSv,ptr); \ SvSETMAGIC(TeMpSv); \ } STMT_END #endif #ifndef sv_catpvn_mg # define sv_catpvn_mg(sv, ptr, len) \ STMT_START { \ SV *TeMpSv = sv; \ sv_catpvn(TeMpSv,ptr,len); \ SvSETMAGIC(TeMpSv); \ } STMT_END #endif #ifndef sv_catsv_mg # define sv_catsv_mg(dsv, ssv) \ STMT_START { \ SV *TeMpSv = dsv; \ sv_catsv(TeMpSv,ssv); \ SvSETMAGIC(TeMpSv); \ } STMT_END #endif #ifndef sv_setiv_mg # define sv_setiv_mg(sv, i) \ STMT_START { \ SV *TeMpSv = sv; \ sv_setiv(TeMpSv,i); \ SvSETMAGIC(TeMpSv); \ } STMT_END #endif #ifndef sv_setnv_mg # define sv_setnv_mg(sv, num) \ STMT_START { \ SV *TeMpSv = sv; \ sv_setnv(TeMpSv,num); \ SvSETMAGIC(TeMpSv); \ } STMT_END #endif #ifndef sv_setpv_mg # define sv_setpv_mg(sv, ptr) \ STMT_START { \ SV *TeMpSv = sv; \ sv_setpv(TeMpSv,ptr); \ SvSETMAGIC(TeMpSv); \ } STMT_END #endif #ifndef sv_setpvn_mg # define sv_setpvn_mg(sv, ptr, len) \ STMT_START { \ SV *TeMpSv = sv; \ sv_setpvn(TeMpSv,ptr,len); \ SvSETMAGIC(TeMpSv); \ } STMT_END #endif #ifndef sv_setsv_mg # define sv_setsv_mg(dsv, ssv) \ STMT_START { \ SV *TeMpSv = dsv; \ sv_setsv(TeMpSv,ssv); \ SvSETMAGIC(TeMpSv); \ } STMT_END #endif #ifndef sv_setuv_mg # define sv_setuv_mg(sv, i) \ STMT_START { \ SV *TeMpSv = sv; \ sv_setuv(TeMpSv,i); \ SvSETMAGIC(TeMpSv); \ } STMT_END #endif #ifndef sv_usepvn_mg # define sv_usepvn_mg(sv, ptr, len) \ STMT_START { \ SV *TeMpSv = sv; \ sv_usepvn(TeMpSv,ptr,len); \ SvSETMAGIC(TeMpSv); \ } STMT_END #endif #ifndef SvVSTRING_mg # define SvVSTRING_mg(sv) (SvMAGICAL(sv) ? 
mg_find(sv, PERL_MAGIC_vstring) : NULL) #endif /* Hint: sv_magic_portable * This is a compatibility function that is only available with * Devel::PPPort. It is NOT in the perl core. * Its purpose is to mimic the 5.8.0 behaviour of sv_magic() when * it is being passed a name pointer with namlen == 0. In that * case, perl 5.8.0 and later store the pointer, not a copy of it. * The compatibility can be provided back to perl 5.004. With * earlier versions, the code will not compile. */ #if (PERL_BCDVERSION < 0x5004000) /* code that uses sv_magic_portable will not compile */ #elif (PERL_BCDVERSION < 0x5008000) # define sv_magic_portable(sv, obj, how, name, namlen) \ STMT_START { \ SV *SvMp_sv = (sv); \ char *SvMp_name = (char *) (name); \ I32 SvMp_namlen = (namlen); \ if (SvMp_name && SvMp_namlen == 0) \ { \ MAGIC *mg; \ sv_magic(SvMp_sv, obj, how, 0, 0); \ mg = SvMAGIC(SvMp_sv); \ mg->mg_len = -42; /* XXX: this is the tricky part */ \ mg->mg_ptr = SvMp_name; \ } \ else \ { \ sv_magic(SvMp_sv, obj, how, SvMp_name, SvMp_namlen); \ } \ } STMT_END #else # define sv_magic_portable(a, b, c, d, e) sv_magic(a, b, c, d, e) #endif #ifdef USE_ITHREADS #ifndef CopFILE # define CopFILE(c) ((c)->cop_file) #endif #ifndef CopFILEGV # define CopFILEGV(c) (CopFILE(c) ? gv_fetchfile(CopFILE(c)) : Nullgv) #endif #ifndef CopFILE_set # define CopFILE_set(c,pv) ((c)->cop_file = savepv(pv)) #endif #ifndef CopFILESV # define CopFILESV(c) (CopFILE(c) ? GvSV(gv_fetchfile(CopFILE(c))) : Nullsv) #endif #ifndef CopFILEAV # define CopFILEAV(c) (CopFILE(c) ? GvAV(gv_fetchfile(CopFILE(c))) : Nullav) #endif #ifndef CopSTASHPV # define CopSTASHPV(c) ((c)->cop_stashpv) #endif #ifndef CopSTASHPV_set # define CopSTASHPV_set(c,pv) ((c)->cop_stashpv = ((pv) ? savepv(pv) : Nullch)) #endif #ifndef CopSTASH # define CopSTASH(c) (CopSTASHPV(c) ? gv_stashpv(CopSTASHPV(c),GV_ADD) : Nullhv) #endif #ifndef CopSTASH_set # define CopSTASH_set(c,hv) CopSTASHPV_set(c, (hv) ? 
HvNAME(hv) : Nullch) #endif #ifndef CopSTASH_eq # define CopSTASH_eq(c,hv) ((hv) && (CopSTASHPV(c) == HvNAME(hv) \ || (CopSTASHPV(c) && HvNAME(hv) \ && strEQ(CopSTASHPV(c), HvNAME(hv))))) #endif #else #ifndef CopFILEGV # define CopFILEGV(c) ((c)->cop_filegv) #endif #ifndef CopFILEGV_set # define CopFILEGV_set(c,gv) ((c)->cop_filegv = (GV*)SvREFCNT_inc(gv)) #endif #ifndef CopFILE_set # define CopFILE_set(c,pv) CopFILEGV_set((c), gv_fetchfile(pv)) #endif #ifndef CopFILESV # define CopFILESV(c) (CopFILEGV(c) ? GvSV(CopFILEGV(c)) : Nullsv) #endif #ifndef CopFILEAV # define CopFILEAV(c) (CopFILEGV(c) ? GvAV(CopFILEGV(c)) : Nullav) #endif #ifndef CopFILE # define CopFILE(c) (CopFILESV(c) ? SvPVX(CopFILESV(c)) : Nullch) #endif #ifndef CopSTASH # define CopSTASH(c) ((c)->cop_stash) #endif #ifndef CopSTASH_set # define CopSTASH_set(c,hv) ((c)->cop_stash = (hv)) #endif #ifndef CopSTASHPV # define CopSTASHPV(c) (CopSTASH(c) ? HvNAME(CopSTASH(c)) : Nullch) #endif #ifndef CopSTASHPV_set # define CopSTASHPV_set(c,pv) CopSTASH_set((c), gv_stashpv(pv,GV_ADD)) #endif #ifndef CopSTASH_eq # define CopSTASH_eq(c,hv) (CopSTASH(c) == (hv)) #endif #endif /* USE_ITHREADS */ #ifndef IN_PERL_COMPILETIME # define IN_PERL_COMPILETIME (PL_curcop == &PL_compiling) #endif #ifndef IN_LOCALE_RUNTIME # define IN_LOCALE_RUNTIME (PL_curcop->op_private & HINT_LOCALE) #endif #ifndef IN_LOCALE_COMPILETIME # define IN_LOCALE_COMPILETIME (PL_hints & HINT_LOCALE) #endif #ifndef IN_LOCALE # define IN_LOCALE (IN_PERL_COMPILETIME ? 
IN_LOCALE_COMPILETIME : IN_LOCALE_RUNTIME)
#endif

#ifndef IS_NUMBER_IN_UV
#  define IS_NUMBER_IN_UV                0x01
#endif

#ifndef IS_NUMBER_GREATER_THAN_UV_MAX
#  define IS_NUMBER_GREATER_THAN_UV_MAX  0x02
#endif

#ifndef IS_NUMBER_NOT_INT
#  define IS_NUMBER_NOT_INT              0x04
#endif

#ifndef IS_NUMBER_NEG
#  define IS_NUMBER_NEG                  0x08
#endif

#ifndef IS_NUMBER_INFINITY
#  define IS_NUMBER_INFINITY             0x10
#endif

#ifndef IS_NUMBER_NAN
#  define IS_NUMBER_NAN                  0x20
#endif

#ifndef GROK_NUMERIC_RADIX
#  define GROK_NUMERIC_RADIX(sp, send)   grok_numeric_radix(sp, send)
#endif

#ifndef PERL_SCAN_GREATER_THAN_UV_MAX
#  define PERL_SCAN_GREATER_THAN_UV_MAX  0x02
#endif

#ifndef PERL_SCAN_SILENT_ILLDIGIT
#  define PERL_SCAN_SILENT_ILLDIGIT      0x04
#endif

#ifndef PERL_SCAN_ALLOW_UNDERSCORES
#  define PERL_SCAN_ALLOW_UNDERSCORES    0x01
#endif

#ifndef PERL_SCAN_DISALLOW_PREFIX
#  define PERL_SCAN_DISALLOW_PREFIX      0x02
#endif

#ifndef grok_numeric_radix
#if defined(NEED_grok_numeric_radix)
static bool DPPP_(my_grok_numeric_radix)(pTHX_ const char ** sp, const char * send);
static
#else
extern bool DPPP_(my_grok_numeric_radix)(pTHX_ const char ** sp, const char * send);
#endif

#ifdef grok_numeric_radix
#  undef grok_numeric_radix
#endif
#define grok_numeric_radix(a,b) DPPP_(my_grok_numeric_radix)(aTHX_ a,b)
#define Perl_grok_numeric_radix DPPP_(my_grok_numeric_radix)

#if defined(NEED_grok_numeric_radix) || defined(NEED_grok_numeric_radix_GLOBAL)
bool
DPPP_(my_grok_numeric_radix)(pTHX_ const char **sp, const char *send)
{
#ifdef USE_LOCALE_NUMERIC
#ifdef PL_numeric_radix_sv
    if (PL_numeric_radix_sv && IN_LOCALE) {
        STRLEN len;
        char* radix = SvPV(PL_numeric_radix_sv, len);
        if (*sp + len <= send && memEQ(*sp, radix, len)) {
            *sp += len;
            return TRUE;
        }
    }
#else
    /* older perls don't have PL_numeric_radix_sv so the radix
     * must manually be requested from locale.h
     */
#include <locale.h>
    dTHR; /* needed for older threaded perls */
    struct lconv *lc = localeconv();
    char *radix = lc->decimal_point;
    if (radix && IN_LOCALE) {
        STRLEN len =
strlen(radix); if (*sp + len <= send && memEQ(*sp, radix, len)) { *sp += len; return TRUE; } } #endif #endif /* USE_LOCALE_NUMERIC */ /* always try "." if numeric radix didn't match because * we may have data from different locales mixed */ if (*sp < send && **sp == '.') { ++*sp; return TRUE; } return FALSE; } #endif #endif #ifndef grok_number #if defined(NEED_grok_number) static int DPPP_(my_grok_number)(pTHX_ const char * pv, STRLEN len, UV * valuep); static #else extern int DPPP_(my_grok_number)(pTHX_ const char * pv, STRLEN len, UV * valuep); #endif #ifdef grok_number # undef grok_number #endif #define grok_number(a,b,c) DPPP_(my_grok_number)(aTHX_ a,b,c) #define Perl_grok_number DPPP_(my_grok_number) #if defined(NEED_grok_number) || defined(NEED_grok_number_GLOBAL) int DPPP_(my_grok_number)(pTHX_ const char *pv, STRLEN len, UV *valuep) { const char *s = pv; const char *send = pv + len; const UV max_div_10 = UV_MAX / 10; const char max_mod_10 = UV_MAX % 10; int numtype = 0; int sawinf = 0; int sawnan = 0; while (s < send && isSPACE(*s)) s++; if (s == send) { return 0; } else if (*s == '-') { s++; numtype = IS_NUMBER_NEG; } else if (*s == '+') s++; if (s == send) return 0; /* next must be digit or the radix separator or beginning of infinity */ if (isDIGIT(*s)) { /* UVs are at least 32 bits, so the first 9 decimal digits cannot overflow. */ UV value = *s - '0'; /* This construction seems to be more optimiser friendly. (without it gcc does the isDIGIT test and the *s - '0' separately) With it gcc on arm is managing 6 instructions (6 cycles) per digit. In theory the optimiser could deduce how far to unroll the loop before checking for overflow. 
*/ if (++s < send) { int digit = *s - '0'; if (digit >= 0 && digit <= 9) { value = value * 10 + digit; if (++s < send) { digit = *s - '0'; if (digit >= 0 && digit <= 9) { value = value * 10 + digit; if (++s < send) { digit = *s - '0'; if (digit >= 0 && digit <= 9) { value = value * 10 + digit; if (++s < send) { digit = *s - '0'; if (digit >= 0 && digit <= 9) { value = value * 10 + digit; if (++s < send) { digit = *s - '0'; if (digit >= 0 && digit <= 9) { value = value * 10 + digit; if (++s < send) { digit = *s - '0'; if (digit >= 0 && digit <= 9) { value = value * 10 + digit; if (++s < send) { digit = *s - '0'; if (digit >= 0 && digit <= 9) { value = value * 10 + digit; if (++s < send) { digit = *s - '0'; if (digit >= 0 && digit <= 9) { value = value * 10 + digit; if (++s < send) { /* Now got 9 digits, so need to check each time for overflow. */ digit = *s - '0'; while (digit >= 0 && digit <= 9 && (value < max_div_10 || (value == max_div_10 && digit <= max_mod_10))) { value = value * 10 + digit; if (++s < send) digit = *s - '0'; else break; } if (digit >= 0 && digit <= 9 && (s < send)) { /* value overflowed. skip the remaining digits, don't worry about setting *valuep. */ do { s++; } while (s < send && isDIGIT(*s)); numtype |= IS_NUMBER_GREATER_THAN_UV_MAX; goto skip_value; } } } } } } } } } } } } } } } } } } numtype |= IS_NUMBER_IN_UV; if (valuep) *valuep = value; skip_value: if (GROK_NUMERIC_RADIX(&s, send)) { numtype |= IS_NUMBER_NOT_INT; while (s < send && isDIGIT(*s)) /* optional digits after the radix */ s++; } } else if (GROK_NUMERIC_RADIX(&s, send)) { numtype |= IS_NUMBER_NOT_INT | IS_NUMBER_IN_UV; /* valuep assigned below */ /* no digits before the radix means we need digits after it */ if (s < send && isDIGIT(*s)) { do { s++; } while (s < send && isDIGIT(*s)); if (valuep) { /* integer approximation is valid - it's 0. 
*/ *valuep = 0; } } else return 0; } else if (*s == 'I' || *s == 'i') { s++; if (s == send || (*s != 'N' && *s != 'n')) return 0; s++; if (s == send || (*s != 'F' && *s != 'f')) return 0; s++; if (s < send && (*s == 'I' || *s == 'i')) { s++; if (s == send || (*s != 'N' && *s != 'n')) return 0; s++; if (s == send || (*s != 'I' && *s != 'i')) return 0; s++; if (s == send || (*s != 'T' && *s != 't')) return 0; s++; if (s == send || (*s != 'Y' && *s != 'y')) return 0; s++; } sawinf = 1; } else if (*s == 'N' || *s == 'n') { /* XXX TODO: There are signaling NaNs and quiet NaNs. */ s++; if (s == send || (*s != 'A' && *s != 'a')) return 0; s++; if (s == send || (*s != 'N' && *s != 'n')) return 0; s++; sawnan = 1; } else return 0; if (sawinf) { numtype &= IS_NUMBER_NEG; /* Keep track of sign */ numtype |= IS_NUMBER_INFINITY | IS_NUMBER_NOT_INT; } else if (sawnan) { numtype &= IS_NUMBER_NEG; /* Keep track of sign */ numtype |= IS_NUMBER_NAN | IS_NUMBER_NOT_INT; } else if (s < send) { /* we can have an optional exponent part */ if (*s == 'e' || *s == 'E') { /* The only flag we keep is sign. Blow away any "it's UV" */ numtype &= IS_NUMBER_NEG; numtype |= IS_NUMBER_NOT_INT; s++; if (s < send && (*s == '-' || *s == '+')) s++; if (s < send && isDIGIT(*s)) { do { s++; } while (s < send && isDIGIT(*s)); } else return 0; } } while (s < send && isSPACE(*s)) s++; if (s >= send) return numtype; if (len == 10 && memEQ(pv, "0 but true", 10)) { if (valuep) *valuep = 0; return IS_NUMBER_IN_UV; } return 0; } #endif #endif /* * The grok_* routines have been modified to use warn() instead of * Perl_warner(). Also, 'hexdigit' was the former name of PL_hexdigit, * which is why the stack variable has been renamed to 'xdigit'. 
*/ #ifndef grok_bin #if defined(NEED_grok_bin) static UV DPPP_(my_grok_bin)(pTHX_ const char * start, STRLEN * len_p, I32 * flags, NV * result); static #else extern UV DPPP_(my_grok_bin)(pTHX_ const char * start, STRLEN * len_p, I32 * flags, NV * result); #endif #ifdef grok_bin # undef grok_bin #endif #define grok_bin(a,b,c,d) DPPP_(my_grok_bin)(aTHX_ a,b,c,d) #define Perl_grok_bin DPPP_(my_grok_bin) #if defined(NEED_grok_bin) || defined(NEED_grok_bin_GLOBAL) UV DPPP_(my_grok_bin)(pTHX_ const char *start, STRLEN *len_p, I32 *flags, NV *result) { const char *s = start; STRLEN len = *len_p; UV value = 0; NV value_nv = 0; const UV max_div_2 = UV_MAX / 2; bool allow_underscores = *flags & PERL_SCAN_ALLOW_UNDERSCORES; bool overflowed = FALSE; if (!(*flags & PERL_SCAN_DISALLOW_PREFIX)) { /* strip off leading b or 0b. for compatibility silently suffer "b" and "0b" as valid binary numbers. */ if (len >= 1) { if (s[0] == 'b') { s++; len--; } else if (len >= 2 && s[0] == '0' && s[1] == 'b') { s+=2; len-=2; } } } for (; len-- && *s; s++) { char bit = *s; if (bit == '0' || bit == '1') { /* Write it in this wonky order with a goto to attempt to get the compiler to make the common case integer-only loop pretty tight. With gcc seems to be much straighter code than old scan_bin. */ redo: if (!overflowed) { if (value <= max_div_2) { value = (value << 1) | (bit - '0'); continue; } /* Bah. We're just overflowed. */ warn("Integer overflow in binary number"); overflowed = TRUE; value_nv = (NV) value; } value_nv *= 2.0; /* If an NV has not enough bits in its mantissa to * represent a UV this summing of small low-order numbers * is a waste of time (because the NV cannot preserve * the low-order bits anyway): we could just remember when * did we overflow and in the end just multiply value_nv by the * right amount. 
*/ value_nv += (NV)(bit - '0'); continue; } if (bit == '_' && len && allow_underscores && (bit = s[1]) && (bit == '0' || bit == '1')) { --len; ++s; goto redo; } if (!(*flags & PERL_SCAN_SILENT_ILLDIGIT)) warn("Illegal binary digit '%c' ignored", *s); break; } if ( ( overflowed && value_nv > 4294967295.0) #if UVSIZE > 4 || (!overflowed && value > 0xffffffff ) #endif ) { warn("Binary number > 0b11111111111111111111111111111111 non-portable"); } *len_p = s - start; if (!overflowed) { *flags = 0; return value; } *flags = PERL_SCAN_GREATER_THAN_UV_MAX; if (result) *result = value_nv; return UV_MAX; } #endif #endif #ifndef grok_hex #if defined(NEED_grok_hex) static UV DPPP_(my_grok_hex)(pTHX_ const char * start, STRLEN * len_p, I32 * flags, NV * result); static #else extern UV DPPP_(my_grok_hex)(pTHX_ const char * start, STRLEN * len_p, I32 * flags, NV * result); #endif #ifdef grok_hex # undef grok_hex #endif #define grok_hex(a,b,c,d) DPPP_(my_grok_hex)(aTHX_ a,b,c,d) #define Perl_grok_hex DPPP_(my_grok_hex) #if defined(NEED_grok_hex) || defined(NEED_grok_hex_GLOBAL) UV DPPP_(my_grok_hex)(pTHX_ const char *start, STRLEN *len_p, I32 *flags, NV *result) { const char *s = start; STRLEN len = *len_p; UV value = 0; NV value_nv = 0; const UV max_div_16 = UV_MAX / 16; bool allow_underscores = *flags & PERL_SCAN_ALLOW_UNDERSCORES; bool overflowed = FALSE; const char *xdigit; if (!(*flags & PERL_SCAN_DISALLOW_PREFIX)) { /* strip off leading x or 0x. for compatibility silently suffer "x" and "0x" as valid hex numbers. */ if (len >= 1) { if (s[0] == 'x') { s++; len--; } else if (len >= 2 && s[0] == '0' && s[1] == 'x') { s+=2; len-=2; } } } for (; len-- && *s; s++) { xdigit = strchr((char *) PL_hexdigit, *s); if (xdigit) { /* Write it in this wonky order with a goto to attempt to get the compiler to make the common case integer-only loop pretty tight. With gcc seems to be much straighter code than old scan_hex. 
*/ redo: if (!overflowed) { if (value <= max_div_16) { value = (value << 4) | ((xdigit - PL_hexdigit) & 15); continue; } warn("Integer overflow in hexadecimal number"); overflowed = TRUE; value_nv = (NV) value; } value_nv *= 16.0; /* If an NV has not enough bits in its mantissa to * represent a UV this summing of small low-order numbers * is a waste of time (because the NV cannot preserve * the low-order bits anyway): we could just remember when * did we overflow and in the end just multiply value_nv by the * right amount of 16-tuples. */ value_nv += (NV)((xdigit - PL_hexdigit) & 15); continue; } if (*s == '_' && len && allow_underscores && s[1] && (xdigit = strchr((char *) PL_hexdigit, s[1]))) { --len; ++s; goto redo; } if (!(*flags & PERL_SCAN_SILENT_ILLDIGIT)) warn("Illegal hexadecimal digit '%c' ignored", *s); break; } if ( ( overflowed && value_nv > 4294967295.0) #if UVSIZE > 4 || (!overflowed && value > 0xffffffff ) #endif ) { warn("Hexadecimal number > 0xffffffff non-portable"); } *len_p = s - start; if (!overflowed) { *flags = 0; return value; } *flags = PERL_SCAN_GREATER_THAN_UV_MAX; if (result) *result = value_nv; return UV_MAX; } #endif #endif #ifndef grok_oct #if defined(NEED_grok_oct) static UV DPPP_(my_grok_oct)(pTHX_ const char * start, STRLEN * len_p, I32 * flags, NV * result); static #else extern UV DPPP_(my_grok_oct)(pTHX_ const char * start, STRLEN * len_p, I32 * flags, NV * result); #endif #ifdef grok_oct # undef grok_oct #endif #define grok_oct(a,b,c,d) DPPP_(my_grok_oct)(aTHX_ a,b,c,d) #define Perl_grok_oct DPPP_(my_grok_oct) #if defined(NEED_grok_oct) || defined(NEED_grok_oct_GLOBAL) UV DPPP_(my_grok_oct)(pTHX_ const char *start, STRLEN *len_p, I32 *flags, NV *result) { const char *s = start; STRLEN len = *len_p; UV value = 0; NV value_nv = 0; const UV max_div_8 = UV_MAX / 8; bool allow_underscores = *flags & PERL_SCAN_ALLOW_UNDERSCORES; bool overflowed = FALSE; for (; len-- && *s; s++) { /* gcc 2.95 optimiser not smart enough to figure that 
this subtraction out front allows slicker code. */ int digit = *s - '0'; if (digit >= 0 && digit <= 7) { /* Write it in this wonky order with a goto to attempt to get the compiler to make the common case integer-only loop pretty tight. */ redo: if (!overflowed) { if (value <= max_div_8) { value = (value << 3) | digit; continue; } /* Bah. We're just overflowed. */ warn("Integer overflow in octal number"); overflowed = TRUE; value_nv = (NV) value; } value_nv *= 8.0; /* If an NV has not enough bits in its mantissa to * represent a UV this summing of small low-order numbers * is a waste of time (because the NV cannot preserve * the low-order bits anyway): we could just remember when * did we overflow and in the end just multiply value_nv by the * right amount of 8-tuples. */ value_nv += (NV)digit; continue; } if (digit == ('_' - '0') && len && allow_underscores && (digit = s[1] - '0') && (digit >= 0 && digit <= 7)) { --len; ++s; goto redo; } /* Allow \octal to work the DWIM way (that is, stop scanning * as soon as non-octal characters are seen, complain only iff * someone seems to want to use the digits eight and nine). 
         */
        if (digit == 8 || digit == 9) {
            if (!(*flags & PERL_SCAN_SILENT_ILLDIGIT))
                warn("Illegal octal digit '%c' ignored", *s);
        }
        break;
    }
    if (   (  overflowed && value_nv > 4294967295.0)
#if UVSIZE > 4
        || (!overflowed && value > 0xffffffff  )
#endif
        ) {
        warn("Octal number > 037777777777 non-portable");
    }
    *len_p = s - start;
    if (!overflowed) {
        *flags = 0;
        return value;
    }
    *flags = PERL_SCAN_GREATER_THAN_UV_MAX;
    if (result)
        *result = value_nv;
    return UV_MAX;
}
#endif
#endif

#if !defined(my_snprintf)
#if defined(NEED_my_snprintf)
static int DPPP_(my_my_snprintf)(char * buffer, const Size_t len, const char * format, ...);
static
#else
extern int DPPP_(my_my_snprintf)(char * buffer, const Size_t len, const char * format, ...);
#endif

#define my_snprintf DPPP_(my_my_snprintf)
#define Perl_my_snprintf DPPP_(my_my_snprintf)

#if defined(NEED_my_snprintf) || defined(NEED_my_snprintf_GLOBAL)

int
DPPP_(my_my_snprintf)(char *buffer, const Size_t len, const char *format, ...)
{
    dTHX;
    int retval;
    va_list ap;
    va_start(ap, format);
#ifdef HAS_VSNPRINTF
    retval = vsnprintf(buffer, len, format, ap);
#else
    retval = vsprintf(buffer, format, ap);
#endif
    va_end(ap);
    if (retval >= (int)len)
        Perl_croak(aTHX_ "panic: my_snprintf buffer overflow");
    return retval;
}

#endif
#endif

#ifdef NO_XSLOCKS
#  ifdef dJMPENV
#    define dXCPT            dJMPENV; int rEtV = 0
#    define XCPT_TRY_START   JMPENV_PUSH(rEtV); if (rEtV == 0)
#    define XCPT_TRY_END     JMPENV_POP;
#    define XCPT_CATCH       if (rEtV != 0)
#    define XCPT_RETHROW     JMPENV_JUMP(rEtV)
#  else
#    define dXCPT            Sigjmp_buf oldTOP; int rEtV = 0
#    define XCPT_TRY_START   Copy(top_env, oldTOP, 1, Sigjmp_buf); rEtV = Sigsetjmp(top_env, 1); if (rEtV == 0)
#    define XCPT_TRY_END     Copy(oldTOP, top_env, 1, Sigjmp_buf);
#    define XCPT_CATCH       if (rEtV != 0)
#    define XCPT_RETHROW     Siglongjmp(top_env, rEtV)
#  endif
#endif

#if !defined(my_strlcat)
#if defined(NEED_my_strlcat)
static Size_t DPPP_(my_my_strlcat)(char * dst, const char * src, Size_t size);
static
#else
extern Size_t DPPP_(my_my_strlcat)(char * dst, const char * src, Size_t size);
#endif

#define my_strlcat DPPP_(my_my_strlcat)
#define Perl_my_strlcat DPPP_(my_my_strlcat)

#if defined(NEED_my_strlcat) || defined(NEED_my_strlcat_GLOBAL)

Size_t
DPPP_(my_my_strlcat)(char *dst, const char *src, Size_t size)
{
    Size_t used, length, copy;

    used = strlen(dst);
    length = strlen(src);
    if (size > 0 && used < size - 1) {
        copy = (length >= size - used) ? size - used - 1 : length;
        memcpy(dst + used, src, copy);
        dst[used + copy] = '\0';
    }
    return used + length;
}
#endif
#endif

#if !defined(my_strlcpy)
#if defined(NEED_my_strlcpy)
static Size_t DPPP_(my_my_strlcpy)(char * dst, const char * src, Size_t size);
static
#else
extern Size_t DPPP_(my_my_strlcpy)(char * dst, const char * src, Size_t size);
#endif

#define my_strlcpy DPPP_(my_my_strlcpy)
#define Perl_my_strlcpy DPPP_(my_my_strlcpy)

#if defined(NEED_my_strlcpy) || defined(NEED_my_strlcpy_GLOBAL)

Size_t
DPPP_(my_my_strlcpy)(char *dst, const char *src, Size_t size)
{
    Size_t length, copy;

    length = strlen(src);
    if (size > 0) {
        copy = (length >= size) ? size - 1 : length;
        memcpy(dst, src, copy);
        dst[copy] = '\0';
    }
    return length;
}
#endif
#endif

#endif /* _P_P_PORTABILITY_H_ */

/* End of File ppport.h */

slurm-slurm-15-08-7-1/contribs/perlapi/libslurmdb/perl/slurmdb-perl.h

/*
 * slurmdb-perl.h - prototypes of msg-hv converting functions
 */

#ifndef _SLURMDB_PERL_H
#define _SLURMDB_PERL_H

#include

#define FETCH_LIST_FIELD(hv, ptr, field) \
    do { \
        SV** svp; \
        if ( (svp = hv_fetch (hv, #field, strlen(#field), FALSE)) ) { \
            if(SvROK(*svp) && SvTYPE(SvRV(*svp)) == SVt_PVAV) { \
                ptr->field = slurm_list_create(NULL); \
                element_av = (AV*)SvRV(*svp); \
                elements = av_len(element_av) + 1; \
                for(i = 0; i < elements; i ++) { \
                    if((svp = av_fetch(element_av, i, FALSE))) { \
                        str = slurm_xstrdup((char*)SvPV_nolen(*svp)); \
                        slurm_list_append(ptr->field, str); \
                    } else { \
                        Perl_warn(aTHX_ "error fetching \"" #field "\" from \"" #ptr "\""); \
                        return -1; \
                    } \
                } \
            } else { \
                Perl_warn(aTHX_ "\"" #field "\" of \"" #ptr "\" is not an array reference"); \
                return -1; \
            } \
        } \
    } while (0)

extern uint64_t slurmdb_find_tres_count_in_string(char *tres_str_in, int id);

extern int av_to_cluster_grouping_list(AV* av, List grouping_list);
extern int hv_to_assoc_cond(HV* hv, slurmdb_assoc_cond_t* assoc_cond);
extern int hv_to_cluster_cond(HV* hv, slurmdb_cluster_cond_t* cluster_cond);
extern int hv_to_job_cond(HV* hv, slurmdb_job_cond_t* job_cond);
extern int hv_to_user_cond(HV* hv, slurmdb_user_cond_t* user_cond);
extern int hv_to_qos_cond(HV* hv, slurmdb_qos_cond_t* qos_cond);

extern int cluster_grouping_list_to_av(List list, AV* av);
extern int cluster_rec_to_hv(slurmdb_cluster_rec_t *rec, HV* hv);
extern int report_cluster_rec_list_to_av(List list, AV* av);
extern int report_user_rec_to_hv(slurmdb_report_user_rec_t *rec, HV* hv);
extern int job_rec_to_hv(slurmdb_job_rec_t *rec, HV* hv);
extern int qos_rec_to_hv(slurmdb_qos_rec_t *rec, HV* hv, List all_qos);

#endif /* _SLURMDB_PERL_H */
/* _SLURMDB_PERL_H */ slurm-slurm-15-08-7-1/contribs/perlapi/libslurmdb/perl/t/000077500000000000000000000000001265000126300231505ustar00rootroot00000000000000slurm-slurm-15-08-7-1/contribs/perlapi/libslurmdb/perl/t/00-use.t000077500000000000000000000004541265000126300243540ustar00rootroot00000000000000#!/usr/bin/perl -T # Before `make install' is performed this script should be runnable with # `make test'. After `make install' it should work as `perl Slurmdb.t' use strict; use warnings; ######################### use Test::More tests => 1; BEGIN { use_ok('Slurmdb') }; ######################### slurm-slurm-15-08-7-1/contribs/perlapi/libslurmdb/perl/t/01-clusters_get.t000077500000000000000000000024621265000126300262650ustar00rootroot00000000000000#!/usr/bin/perl -T # Before `make install' is performed this script should be runnable with # `make test'. After `make install' it should work as `perl Slurmdb.t' use strict; use warnings; ######################### use Test::More tests => 3; BEGIN { use_ok('Slurmdb') }; ######################### # Insert your test code below, the Test::More module is use()ed here so read # its man page ( perldoc Test::More ) for help writing this test script. 
my $db_conn = Slurmdb::connection_get(); my %hv = (); my $clusters = Slurmdb::clusters_get($db_conn, \%hv); ok( $clusters != 0, 'clusters_get' ); for (my $i = 0; $i < @$clusters; $i++) { # print "accounting_list $clusters->[$i]{'accounting_list'}\n"; print "classification $clusters->[$i]{'classification'}\n"; print "control_host $clusters->[$i]{'control_host'}\n"; print "control_port $clusters->[$i]{'control_port'}\n"; print "cpu_count $clusters->[$i]{'cpu_count'}\n"; print "name $clusters->[$i]{'name'}\n"; print "nodes $clusters->[$i]{'nodes'}\n" if exists $clusters->[$i]{'nodes'}; # print "root_assoc $clusters->[$i]{'root_assoc'}\n"; print "rpc_version $clusters->[$i]{'rpc_version'}\n\n"; } my $rc = Slurmdb::connection_close(\$db_conn); ok( $rc == 0, 'connection_close' ); slurm-slurm-15-08-7-1/contribs/perlapi/libslurmdb/perl/t/02-report_cluster_account_by_user.t000077500000000000000000000061101265000126300320750ustar00rootroot00000000000000#!/usr/bin/perl -T # Before `make install' is performed this script should be runnable with # `make test'. After `make install' it should work as `perl Slurmdb.t' use strict; use warnings; ######################### use Test::More tests => 2; BEGIN { use_ok('Slurmdb') }; ######################### # Insert your test code below, the Test::More module is use()ed here so read # its man page ( perldoc Test::More ) for help writing this test script. 
my $db_conn = Slurmdb::connection_get(); my %assoc_cond = (); $assoc_cond{'usage_start'} = '1270000000'; $assoc_cond{'usage_end'} = '1273000000'; my $clusters = Slurmdb::report_cluster_account_by_user($db_conn, \%assoc_cond); for (my $i = 0; $i < @$clusters; $i++) { print "name $clusters->[$i]{'name'}\n" if exists $clusters->[$i]{'name'}; print "cpu_count $clusters->[$i]{'cpu_count'}\n" if exists $clusters->[$i]{'cpu_count'}; print "cpu_secs $clusters->[$i]{'cpu_secs'}\n" if exists $clusters->[$i]{'cpu_secs'}; for (my $j = 0; $j < @{$clusters->[$i]{'assoc_list'}}; $j++) { print "$j assoc_list acct $clusters->[$i]{'assoc_list'}[$j]{'acct'}\n" if exists $clusters->[$i]{'assoc_list'}[$j]{'acct'}; print "$j assoc_list cluster $clusters->[$i]{'assoc_list'}[$j]{'cluster'}\n" if exists $clusters->[$i]{'assoc_list'}[$j]{'cluster'}; print "$j assoc_list cpu_secs $clusters->[$i]{'assoc_list'}[$j]{'cpu_secs'}\n" if exists $clusters->[$i]{'assoc_list'}[$j]{'cpu_secs'}; print "$j assoc_list parent_acct $clusters->[$i]{'assoc_list'}[$j]{'parent_acct'}\n" if exists $clusters->[$i]{'assoc_list'}[$j]{'parent_acct'}; print "$j assoc_list user $clusters->[$i]{'assoc_list'}[$j]{'user'}\n" if exists $clusters->[$i]{'assoc_list'}[$j]{'user'}; } for (my $j = 0; $j < @{$clusters->[$i]{'user_list'}}; $j++) { print "$j user_list acct $clusters->[$i]{'user_list'}->[$j]{'acct'}\n" if exists $clusters->[$i]{'user_list'}->[$j]{'acct'}; for (my $k = 0; $k < @{$clusters->[$i]{'user_list'}->[$j]{'acct_list'}}; $k++) { print "$j $k user_list acct_list $clusters->[$i]{'user_list'}->[$j]{'acct_list'}->[$k]\n"; } for (my $k = 0; $k < @{$clusters->[$i]{'user_list'}->[$j]{'assoc_list'}}; $k++) { print "$j $k user_list assoc_list acct $clusters->[$i]{'user_list'}->[$j]{'assoc_list'}->[$k]{'acct'}\n"; print "$j $k user_list assoc_list cluster $clusters->[$i]{'user_list'}->[$j]{'assoc_list'}->[$k]{'cluster'}\n"; print "$j $k user_list assoc_list cpu_secs 
$clusters->[$i]{'user_list'}->[$j]{'assoc_list'}->[$k]{'cpu_secs'}\n"; print "$j $k user_list assoc_list parent_acct $clusters->[$i]{'user_list'}->[$j]{'assoc_list'}->[$k]{'parent_acct'}\n"; print "$j $k user_list assoc_list user $clusters->[$i]{'user_list'}->[$j]{'assoc_list'}->[$k]{'user'}\n"; } print "$j user_list cpu_secs $clusters->[$i]{'user_list'}->[$j]{'cpu_secs'}\n"; print "$j user_list name $clusters->[$i]{'user_list'}->[$j]{'name'}\n"; print "$j user_list uid $clusters->[$i]{'user_list'}->[$j]{'uid'}\n"; } print "\n"; } my $rc = Slurmdb::connection_close(\$db_conn); ok( $rc == 0, 'connection_close' ); slurm-slurm-15-08-7-1/contribs/perlapi/libslurmdb/perl/t/03-report_cluster_user_by_account.t000077500000000000000000000061101265000126300320760ustar00rootroot00000000000000#!/usr/bin/perl -T # Before `make install' is performed this script should be runnable with # `make test'. After `make install' it should work as `perl Slurmdb.t' use strict; use warnings; ######################### use Test::More tests => 2; BEGIN { use_ok('Slurmdb') }; ######################### # Insert your test code below, the Test::More module is use()ed here so read # its man page ( perldoc Test::More ) for help writing this test script. 
my $db_conn = Slurmdb::connection_get(); my %assoc_cond = (); $assoc_cond{'usage_start'} = '1270000000'; $assoc_cond{'usage_end'} = '1273000000'; my $clusters = Slurmdb::report_cluster_user_by_account($db_conn, \%assoc_cond); for (my $i = 0; $i < @$clusters; $i++) { print "name $clusters->[$i]{'name'}\n" if exists $clusters->[$i]{'name'}; print "cpu_count $clusters->[$i]{'cpu_count'}\n" if exists $clusters->[$i]{'cpu_count'}; print "cpu_secs $clusters->[$i]{'cpu_secs'}\n" if exists $clusters->[$i]{'cpu_secs'}; for (my $j = 0; $j < @{$clusters->[$i]{'assoc_list'}}; $j++) { print "$j assoc_list acct $clusters->[$i]{'assoc_list'}[$j]{'acct'}\n" if exists $clusters->[$i]{'assoc_list'}[$j]{'acct'}; print "$j assoc_list cluster $clusters->[$i]{'assoc_list'}[$j]{'cluster'}\n" if exists $clusters->[$i]{'assoc_list'}[$j]{'cluster'}; print "$j assoc_list cpu_secs $clusters->[$i]{'assoc_list'}[$j]{'cpu_secs'}\n" if exists $clusters->[$i]{'assoc_list'}[$j]{'cpu_secs'}; print "$j assoc_list parent_acct $clusters->[$i]{'assoc_list'}[$j]{'parent_acct'}\n" if exists $clusters->[$i]{'assoc_list'}[$j]{'parent_acct'}; print "$j assoc_list user $clusters->[$i]{'assoc_list'}[$j]{'user'}\n" if exists $clusters->[$i]{'assoc_list'}[$j]{'user'}; } for (my $j = 0; $j < @{$clusters->[$i]{'user_list'}}; $j++) { print "$j user_list acct $clusters->[$i]{'user_list'}->[$j]{'acct'}\n" if exists $clusters->[$i]{'user_list'}->[$j]{'acct'}; for (my $k = 0; $k < @{$clusters->[$i]{'user_list'}->[$j]{'acct_list'}}; $k++) { print "$j $k user_list acct_list $clusters->[$i]{'user_list'}->[$j]{'acct_list'}->[$k]\n"; } for (my $k = 0; $k < @{$clusters->[$i]{'user_list'}->[$j]{'assoc_list'}}; $k++) { print "$j $k user_list assoc_list acct $clusters->[$i]{'user_list'}->[$j]{'assoc_list'}->[$k]{'acct'}\n"; print "$j $k user_list assoc_list cluster $clusters->[$i]{'user_list'}->[$j]{'assoc_list'}->[$k]{'cluster'}\n"; print "$j $k user_list assoc_list cpu_secs 
$clusters->[$i]{'user_list'}->[$j]{'assoc_list'}->[$k]{'cpu_secs'}\n"; print "$j $k user_list assoc_list parent_acct $clusters->[$i]{'user_list'}->[$j]{'assoc_list'}->[$k]{'parent_acct'}\n"; print "$j $k user_list assoc_list user $clusters->[$i]{'user_list'}->[$j]{'assoc_list'}->[$k]{'user'}\n"; } print "$j user_list cpu_secs $clusters->[$i]{'user_list'}->[$j]{'cpu_secs'}\n"; print "$j user_list name $clusters->[$i]{'user_list'}->[$j]{'name'}\n"; print "$j user_list uid $clusters->[$i]{'user_list'}->[$j]{'uid'}\n"; } print "\n"; } my $rc = Slurmdb::connection_close(\$db_conn); ok( $rc == 0, 'connection_close' ); 04-report_job_sizes_grouped_by_top_account.t000077500000000000000000000041121265000126300336770ustar00rootroot00000000000000slurm-slurm-15-08-7-1/contribs/perlapi/libslurmdb/perl/t#!/usr/bin/perl -T # Before `make install' is performed this script should be runnable with # `make test'. After `make install' it should work as `perl Slurmdb.t' use strict; use warnings; ######################### use Test::More tests => 2; BEGIN { use_ok('Slurmdb') }; ######################### # Insert your test code below, the Test::More module is use()ed here so read # its man page ( perldoc Test::More ) for help writing this test script. 
my $db_conn = Slurmdb::connection_get(); my %job_cond = (); $job_cond{'usage_start'} = '1270000000'; $job_cond{'usage_end'} = '1273000000'; my @grouping = qw( 50 250 500 1000 ); my $flat_view = 0; my $clusters = Slurmdb::report_job_sizes_grouped_by_top_account($db_conn, \%job_cond, \@grouping, $flat_view); for (my $i = 0; $i < @$clusters; $i++) { print "cluster $clusters->[$i]{'cluster'}\n"; print "cpu_secs $clusters->[$i]{'cpu_secs'}\n"; for (my $j = 0; $j < @{$clusters->[$i]{'acct_list'}}; $j++) { print "$j acct_list acct $clusters->[$i]{'acct_list'}[$j]{'acct'}\n" if exists $clusters->[$i]{'acct_list'}[$j]{'acct'}; print "$j acct_list cpu_secs $clusters->[$i]{'acct_list'}[$j]{'cpu_secs'}\n" if exists $clusters->[$i]{'acct_list'}[$j]{'cpu_secs'}; print "$j acct_list lft $clusters->[$i]{'acct_list'}[$j]{'lft'}\n" if exists $clusters->[$i]{'acct_list'}[$j]{'lft'}; print "$j acct_list rgt $clusters->[$i]{'acct_list'}[$j]{'rgt'}\n" if exists $clusters->[$i]{'acct_list'}[$j]{'rgt'}; for (my $k = 0; $k < @{$clusters->[$i]{'acct_list'}->[$j]{'groups'}}; $k++) { print "$j $k acct_list groups min_size $clusters->[$i]{'acct_list'}->[$j]{'groups'}->[$k]{'min_size'}\n"; print "$j $k acct_list groups max_size $clusters->[$i]{'acct_list'}->[$j]{'groups'}->[$k]{'max_size'}\n"; print "$j $k acct_list groups jobcount $clusters->[$i]{'acct_list'}->[$j]{'groups'}->[$k]{'count'}\n"; print "$j $k acct_list groups cpu_secs $clusters->[$i]{'acct_list'}->[$j]{'groups'}->[$k]{'cpu_secs'}\n"; } } print "\n"; } my $rc = Slurmdb::connection_close(\$db_conn); ok( $rc == 0, 'connection_close' ); slurm-slurm-15-08-7-1/contribs/perlapi/libslurmdb/perl/t/05-report_user_top_usage.t000077500000000000000000000062401265000126300302030ustar00rootroot00000000000000#!/usr/bin/perl -T # Before `make install' is performed this script should be runnable with # `make test'. 
After `make install' it should work as `perl Slurmdb.t' use strict; use warnings; ######################### use Test::More tests => 2; BEGIN { use_ok('Slurmdb') }; ######################### # Insert your test code below, the Test::More module is use()ed here so read # its man page ( perldoc Test::More ) for help writing this test script. my $db_conn = Slurmdb::connection_get(); my %user_cond = (); $user_cond{'assoc_cond'}{'usage_start'} = '1270000000'; #$user_cond{'assoc_cond'}{'usage_end'} = '1273000000'; $user_cond{'with_assocs'} = 0; my $group_accounts = 0; my $clusters = Slurmdb::report_user_top_usage($db_conn, \%user_cond, $group_accounts); for (my $i = 0; $i < @$clusters; $i++) { print "name $clusters->[$i]{'name'}\n" if exists $clusters->[$i]{'name'}; print "cpu_count $clusters->[$i]{'cpu_count'}\n" if exists $clusters->[$i]{'cpu_count'}; print "cpu_secs $clusters->[$i]{'cpu_secs'}\n" if exists $clusters->[$i]{'cpu_secs'}; for (my $j = 0; $j < @{$clusters->[$i]{'assoc_list'}}; $j++) { print "$j assoc_list acct $clusters->[$i]{'assoc_list'}[$j]{'acct'}\n" if exists $clusters->[$i]{'assoc_list'}[$j]{'acct'}; print "$j assoc_list cluster $clusters->[$i]{'assoc_list'}[$j]{'cluster'}\n" if exists $clusters->[$i]{'assoc_list'}[$j]{'cluster'}; print "$j assoc_list cpu_secs $clusters->[$i]{'assoc_list'}[$j]{'cpu_secs'}\n" if exists $clusters->[$i]{'assoc_list'}[$j]{'cpu_secs'}; print "$j assoc_list parent_acct $clusters->[$i]{'assoc_list'}[$j]{'parent_acct'}\n" if exists $clusters->[$i]{'assoc_list'}[$j]{'parent_acct'}; print "$j assoc_list user $clusters->[$i]{'assoc_list'}[$j]{'user'}\n" if exists $clusters->[$i]{'assoc_list'}[$j]{'user'}; } for (my $j = 0; $j < @{$clusters->[$i]{'user_list'}}; $j++) { print "$j user_list acct $clusters->[$i]{'user_list'}->[$j]{'acct'}\n" if exists $clusters->[$i]{'user_list'}->[$j]{'acct'}; for (my $k = 0; $k < @{$clusters->[$i]{'user_list'}->[$j]{'acct_list'}}; $k++) { print "$j $k user_list acct_list 
$clusters->[$i]{'user_list'}->[$j]{'acct_list'}->[$k]\n"; } for (my $k = 0; $k < @{$clusters->[$i]{'user_list'}->[$j]{'assoc_list'}}; $k++) { print "$j $k user_list assoc_list acct $clusters->[$i]{'user_list'}->[$j]{'assoc_list'}->[$k]{'acct'}\n"; print "$j $k user_list assoc_list cluster $clusters->[$i]{'user_list'}->[$j]{'assoc_list'}->[$k]{'cluster'}\n"; print "$j $k user_list assoc_list cpu_secs $clusters->[$i]{'user_list'}->[$j]{'assoc_list'}->[$k]{'cpu_secs'}\n"; print "$j $k user_list assoc_list parent_acct $clusters->[$i]{'user_list'}->[$j]{'assoc_list'}->[$k]{'parent_acct'}\n"; print "$j $k user_list assoc_list user $clusters->[$i]{'user_list'}->[$j]{'assoc_list'}->[$k]{'user'}\n"; } print "$j user_list cpu_secs $clusters->[$i]{'user_list'}->[$j]{'cpu_secs'}\n"; print "$j user_list name $clusters->[$i]{'user_list'}->[$j]{'name'}\n"; print "$j user_list uid $clusters->[$i]{'user_list'}->[$j]{'uid'}\n"; } print "\n"; } my $rc = Slurmdb::connection_close(\$db_conn); ok( $rc == 0, 'connection_close' ); slurm-slurm-15-08-7-1/contribs/perlapi/libslurmdb/perl/t/06-jobs_get.t000077500000000000000000000021531265000126300253600ustar00rootroot00000000000000#!/usr/bin/perl -T # Before `make install' is performed this script should be runnable with # `make test'. After `make install' it should work as `perl Slurmdb.t' use strict; use warnings; ######################### use Test::More tests => 2; BEGIN { use_ok('Slurmdb') }; use Data::Dumper; ######################### # Insert your test code below, the Test::More module is use()ed here so read # its man page ( perldoc Test::More ) for help writing this test script. 
my $db_conn = Slurmdb::connection_get(); my %job_cond = (); #$job_cond{'usage_start'} = 0; #$job_cond{'usage_end'} = 0; #$job_cond{acct_list} = ["blah"]; #$job_cond{userid_list} = [1003]; #$job_cond{groupid_list} = [500]; #$job_cond{jobname_list} = ["hostname","pwd"]; #my @states = ("CA", "CD", "FAILED"); #my @state_nums = map {$slurm->job_state_num($_)} @states; #$job_cond{state_list} = \@state_nums; #$job_cond{step_list} = "2547,2549,2550.1"; $job_cond{'without_usage_truncation'} = 1; my $jobs = Slurmdb::jobs_get($db_conn, \%job_cond); print Dumper($jobs); my $rc = Slurmdb::connection_close(\$db_conn); ok( $rc == 0, 'connection_close' ); slurm-slurm-15-08-7-1/contribs/perlapi/libslurmdb/perl/t/07-qos_get.t000077500000000000000000000015541265000126300252320ustar00rootroot00000000000000#!/usr/bin/perl -T # Before `make install' is performed this script should be runnable with # `make test'. After `make install' it should work as `perl Slurmdb.t' use strict; use warnings; ######################### use Test::More tests => 2; BEGIN { use_ok('Slurmdb') }; use Data::Dumper; ######################### # Insert your test code below, the Test::More module is use()ed here so read # its man page ( perldoc Test::More ) for help writing this test script. 
my $db_conn = Slurmdb::connection_get(); my %qos_cond = (); #$qos_cond{description_list} = ["general","other"]; #$qos_cond{id_list} = ["1","2","14"]; #$qos_cond{name_list} = ["normal","special"]; #$qos_cond{with_deleted} = "1"; my $qoss = Slurmdb::qos_get($db_conn, \%qos_cond); print Dumper($qoss); my $rc = Slurmdb::connection_close(\$db_conn); ok( $rc == 0, 'connection_close' ); slurm-slurm-15-08-7-1/contribs/phpext/000077500000000000000000000000001265000126300174605ustar00rootroot00000000000000slurm-slurm-15-08-7-1/contribs/phpext/Makefile.am000066400000000000000000000016341265000126300215200ustar00rootroot00000000000000AUTOMAKE_OPTIONS = foreign php_dir=slurm_php phpize=/usr/bin/phpize if HAVE_AIX config_line=CC="$(CC)" CCFLAGS="-g -static $(CFLAGS) $(CPPFLAGS)" ./configure else config_line=CC="$(CC)" LD="$(CC) $(CFLAGS) $(LDFLAGS)" CCFLAGS="-g -static $(CFLAGS)" CFLAGS="$(CFLAGS)" CPPFLAGS="$(CPPFLAGS)" ./configure endif all-local: @cd $(php_dir) && \ if [ ! -f Makefile ]; then \ if [ ! -f configure ]; then \ $(phpize); \ fi && \ $(config_line); \ if [ ! -f Makefile ]; then \ exit 0;\ fi \ fi && \ $(MAKE); \ cd ..; install-exec-local: @cd $(php_dir) && \ if [ ! -f Makefile ]; then \ exit 0;\ fi && \ $(MAKE) INSTALL_ROOT=$(DESTDIR) install && \ cd ..; clean-generic: @cd $(php_dir); \ if [ ! -f Makefile ]; then \ exit 0;\ fi && \ $(MAKE) clean; \ cd ..; distclean-generic: @cd $(php_dir); \ if [ ! -f Makefile ]; then \ exit 0;\ fi && \ $(MAKE) clean; \ $(phpize) --clean; \ cd ..; slurm-slurm-15-08-7-1/contribs/phpext/Makefile.in000066400000000000000000000433341265000126300215340ustar00rootroot00000000000000# Makefile.in generated by automake 1.14.1 from Makefile.am. # @configure_input@ # Copyright (C) 1994-2013 Free Software Foundation, Inc. # This Makefile.in is free software; the Free Software Foundation # gives unlimited permission to copy and/or distribute it, # with or without modifications, as long as this notice is preserved. 
# This program is distributed in the hope that it will be useful, # but WITHOUT ANY WARRANTY, to the extent permitted by law; without # even the implied warranty of MERCHANTABILITY or FITNESS FOR A # PARTICULAR PURPOSE. @SET_MAKE@ VPATH = @srcdir@ am__is_gnu_make = test -n '$(MAKEFILE_LIST)' && test -n '$(MAKELEVEL)' am__make_running_with_option = \ case $${target_option-} in \ ?) ;; \ *) echo "am__make_running_with_option: internal error: invalid" \ "target option '$${target_option-}' specified" >&2; \ exit 1;; \ esac; \ has_opt=no; \ sane_makeflags=$$MAKEFLAGS; \ if $(am__is_gnu_make); then \ sane_makeflags=$$MFLAGS; \ else \ case $$MAKEFLAGS in \ *\\[\ \ ]*) \ bs=\\; \ sane_makeflags=`printf '%s\n' "$$MAKEFLAGS" \ | sed "s/$$bs$$bs[$$bs $$bs ]*//g"`;; \ esac; \ fi; \ skip_next=no; \ strip_trailopt () \ { \ flg=`printf '%s\n' "$$flg" | sed "s/$$1.*$$//"`; \ }; \ for flg in $$sane_makeflags; do \ test $$skip_next = yes && { skip_next=no; continue; }; \ case $$flg in \ *=*|--*) continue;; \ -*I) strip_trailopt 'I'; skip_next=yes;; \ -*I?*) strip_trailopt 'I';; \ -*O) strip_trailopt 'O'; skip_next=yes;; \ -*O?*) strip_trailopt 'O';; \ -*l) strip_trailopt 'l'; skip_next=yes;; \ -*l?*) strip_trailopt 'l';; \ -[dEDm]) skip_next=yes;; \ -[JT]) skip_next=yes;; \ esac; \ case $$flg in \ *$$target_option*) has_opt=yes; break;; \ esac; \ done; \ test $$has_opt = yes am__make_dryrun = (target_option=n; $(am__make_running_with_option)) am__make_keepgoing = (target_option=k; $(am__make_running_with_option)) pkgdatadir = $(datadir)/@PACKAGE@ pkgincludedir = $(includedir)/@PACKAGE@ pkglibdir = $(libdir)/@PACKAGE@ pkglibexecdir = $(libexecdir)/@PACKAGE@ am__cd = CDPATH="$${ZSH_VERSION+.}$(PATH_SEPARATOR)" && cd install_sh_DATA = $(install_sh) -c -m 644 install_sh_PROGRAM = $(install_sh) -c install_sh_SCRIPT = $(install_sh) -c INSTALL_HEADER = $(INSTALL_DATA) transform = $(program_transform_name) NORMAL_INSTALL = : PRE_INSTALL = : POST_INSTALL = : NORMAL_UNINSTALL = : 
PRE_UNINSTALL = : POST_UNINSTALL = : build_triplet = @build@ host_triplet = @host@ target_triplet = @target@ subdir = contribs/phpext DIST_COMMON = $(srcdir)/Makefile.in $(srcdir)/Makefile.am README ACLOCAL_M4 = $(top_srcdir)/aclocal.m4 am__aclocal_m4_deps = $(top_srcdir)/auxdir/ax_lib_hdf5.m4 \ $(top_srcdir)/auxdir/ax_pthread.m4 \ $(top_srcdir)/auxdir/libtool.m4 \ $(top_srcdir)/auxdir/ltoptions.m4 \ $(top_srcdir)/auxdir/ltsugar.m4 \ $(top_srcdir)/auxdir/ltversion.m4 \ $(top_srcdir)/auxdir/lt~obsolete.m4 \ $(top_srcdir)/auxdir/slurm.m4 \ $(top_srcdir)/auxdir/x_ac__system_configuration.m4 \ $(top_srcdir)/auxdir/x_ac_affinity.m4 \ $(top_srcdir)/auxdir/x_ac_aix.m4 \ $(top_srcdir)/auxdir/x_ac_blcr.m4 \ $(top_srcdir)/auxdir/x_ac_bluegene.m4 \ $(top_srcdir)/auxdir/x_ac_cflags.m4 \ $(top_srcdir)/auxdir/x_ac_cray.m4 \ $(top_srcdir)/auxdir/x_ac_curl.m4 \ $(top_srcdir)/auxdir/x_ac_databases.m4 \ $(top_srcdir)/auxdir/x_ac_debug.m4 \ $(top_srcdir)/auxdir/x_ac_dlfcn.m4 \ $(top_srcdir)/auxdir/x_ac_env.m4 \ $(top_srcdir)/auxdir/x_ac_freeipmi.m4 \ $(top_srcdir)/auxdir/x_ac_gpl_licensed.m4 \ $(top_srcdir)/auxdir/x_ac_hwloc.m4 \ $(top_srcdir)/auxdir/x_ac_iso.m4 \ $(top_srcdir)/auxdir/x_ac_json.m4 \ $(top_srcdir)/auxdir/x_ac_lua.m4 \ $(top_srcdir)/auxdir/x_ac_man2html.m4 \ $(top_srcdir)/auxdir/x_ac_munge.m4 \ $(top_srcdir)/auxdir/x_ac_ncurses.m4 \ $(top_srcdir)/auxdir/x_ac_netloc.m4 \ $(top_srcdir)/auxdir/x_ac_nrt.m4 \ $(top_srcdir)/auxdir/x_ac_ofed.m4 \ $(top_srcdir)/auxdir/x_ac_pam.m4 \ $(top_srcdir)/auxdir/x_ac_printf_null.m4 \ $(top_srcdir)/auxdir/x_ac_ptrace.m4 \ $(top_srcdir)/auxdir/x_ac_readline.m4 \ $(top_srcdir)/auxdir/x_ac_rrdtool.m4 \ $(top_srcdir)/auxdir/x_ac_setpgrp.m4 \ $(top_srcdir)/auxdir/x_ac_setproctitle.m4 \ $(top_srcdir)/auxdir/x_ac_sgi_job.m4 \ $(top_srcdir)/auxdir/x_ac_slurm_ssl.m4 \ $(top_srcdir)/auxdir/x_ac_sun_const.m4 \ $(top_srcdir)/configure.ac am__configure_deps = $(am__aclocal_m4_deps) $(CONFIGURE_DEPENDENCIES) \ $(ACLOCAL_M4) mkinstalldirs = 
$(install_sh) -d CONFIG_HEADER = $(top_builddir)/config.h $(top_builddir)/slurm/slurm.h CONFIG_CLEAN_FILES = CONFIG_CLEAN_VPATH_FILES = AM_V_P = $(am__v_P_@AM_V@) am__v_P_ = $(am__v_P_@AM_DEFAULT_V@) am__v_P_0 = false am__v_P_1 = : AM_V_GEN = $(am__v_GEN_@AM_V@) am__v_GEN_ = $(am__v_GEN_@AM_DEFAULT_V@) am__v_GEN_0 = @echo " GEN " $@; am__v_GEN_1 = AM_V_at = $(am__v_at_@AM_V@) am__v_at_ = $(am__v_at_@AM_DEFAULT_V@) am__v_at_0 = @ am__v_at_1 = SOURCES = DIST_SOURCES = am__can_run_installinfo = \ case $$AM_UPDATE_INFO_DIR in \ n|no|NO) false;; \ *) (install-info --version) >/dev/null 2>&1;; \ esac am__tagged_files = $(HEADERS) $(SOURCES) $(TAGS_FILES) $(LISP) DISTFILES = $(DIST_COMMON) $(DIST_SOURCES) $(TEXINFOS) $(EXTRA_DIST) ACLOCAL = @ACLOCAL@ AMTAR = @AMTAR@ AM_DEFAULT_VERBOSITY = @AM_DEFAULT_VERBOSITY@ AR = @AR@ AUTHD_CFLAGS = @AUTHD_CFLAGS@ AUTHD_LIBS = @AUTHD_LIBS@ AUTOCONF = @AUTOCONF@ AUTOHEADER = @AUTOHEADER@ AUTOMAKE = @AUTOMAKE@ AWK = @AWK@ BGL_LOADED = @BGL_LOADED@ BGQ_LOADED = @BGQ_LOADED@ BG_INCLUDES = @BG_INCLUDES@ BG_LDFLAGS = @BG_LDFLAGS@ BG_L_P_LOADED = @BG_L_P_LOADED@ BLCR_CPPFLAGS = @BLCR_CPPFLAGS@ BLCR_HOME = @BLCR_HOME@ BLCR_LDFLAGS = @BLCR_LDFLAGS@ BLCR_LIBS = @BLCR_LIBS@ BLUEGENE_LOADED = @BLUEGENE_LOADED@ CC = @CC@ CCDEPMODE = @CCDEPMODE@ CFLAGS = @CFLAGS@ CHECK_CFLAGS = @CHECK_CFLAGS@ CHECK_LIBS = @CHECK_LIBS@ CMD_LDFLAGS = @CMD_LDFLAGS@ CPP = @CPP@ CPPFLAGS = @CPPFLAGS@ CRAY_JOB_CPPFLAGS = @CRAY_JOB_CPPFLAGS@ CRAY_JOB_LDFLAGS = @CRAY_JOB_LDFLAGS@ CRAY_SELECT_CPPFLAGS = @CRAY_SELECT_CPPFLAGS@ CRAY_SELECT_LDFLAGS = @CRAY_SELECT_LDFLAGS@ CRAY_SWITCH_CPPFLAGS = @CRAY_SWITCH_CPPFLAGS@ CRAY_SWITCH_LDFLAGS = @CRAY_SWITCH_LDFLAGS@ CRAY_TASK_CPPFLAGS = @CRAY_TASK_CPPFLAGS@ CRAY_TASK_LDFLAGS = @CRAY_TASK_LDFLAGS@ CXX = @CXX@ CXXCPP = @CXXCPP@ CXXDEPMODE = @CXXDEPMODE@ CXXFLAGS = @CXXFLAGS@ CYGPATH_W = @CYGPATH_W@ DATAWARP_CPPFLAGS = @DATAWARP_CPPFLAGS@ DATAWARP_LDFLAGS = @DATAWARP_LDFLAGS@ DEFS = @DEFS@ DEPDIR = @DEPDIR@ DLLTOOL = @DLLTOOL@ DL_LIBS = 
@DL_LIBS@ DSYMUTIL = @DSYMUTIL@ DUMPBIN = @DUMPBIN@ ECHO_C = @ECHO_C@ ECHO_N = @ECHO_N@ ECHO_T = @ECHO_T@ EGREP = @EGREP@ EXEEXT = @EXEEXT@ FGREP = @FGREP@ FREEIPMI_CPPFLAGS = @FREEIPMI_CPPFLAGS@ FREEIPMI_LDFLAGS = @FREEIPMI_LDFLAGS@ FREEIPMI_LIBS = @FREEIPMI_LIBS@ GLIB_CFLAGS = @GLIB_CFLAGS@ GLIB_COMPILE_RESOURCES = @GLIB_COMPILE_RESOURCES@ GLIB_GENMARSHAL = @GLIB_GENMARSHAL@ GLIB_LIBS = @GLIB_LIBS@ GLIB_MKENUMS = @GLIB_MKENUMS@ GOBJECT_QUERY = @GOBJECT_QUERY@ GREP = @GREP@ GTK_CFLAGS = @GTK_CFLAGS@ GTK_LIBS = @GTK_LIBS@ H5CC = @H5CC@ H5FC = @H5FC@ HAVEMYSQLCONFIG = @HAVEMYSQLCONFIG@ HAVE_AIX = @HAVE_AIX@ HAVE_MAN2HTML = @HAVE_MAN2HTML@ HAVE_NRT = @HAVE_NRT@ HAVE_OPENSSL = @HAVE_OPENSSL@ HAVE_SOME_CURSES = @HAVE_SOME_CURSES@ HDF5_CC = @HDF5_CC@ HDF5_CFLAGS = @HDF5_CFLAGS@ HDF5_CPPFLAGS = @HDF5_CPPFLAGS@ HDF5_FC = @HDF5_FC@ HDF5_FFLAGS = @HDF5_FFLAGS@ HDF5_FLIBS = @HDF5_FLIBS@ HDF5_LDFLAGS = @HDF5_LDFLAGS@ HDF5_LIBS = @HDF5_LIBS@ HDF5_VERSION = @HDF5_VERSION@ HWLOC_CPPFLAGS = @HWLOC_CPPFLAGS@ HWLOC_LDFLAGS = @HWLOC_LDFLAGS@ HWLOC_LIBS = @HWLOC_LIBS@ INSTALL = @INSTALL@ INSTALL_DATA = @INSTALL_DATA@ INSTALL_PROGRAM = @INSTALL_PROGRAM@ INSTALL_SCRIPT = @INSTALL_SCRIPT@ INSTALL_STRIP_PROGRAM = @INSTALL_STRIP_PROGRAM@ JSON_CPPFLAGS = @JSON_CPPFLAGS@ JSON_LDFLAGS = @JSON_LDFLAGS@ LD = @LD@ LDFLAGS = @LDFLAGS@ LIBCURL = @LIBCURL@ LIBCURL_CPPFLAGS = @LIBCURL_CPPFLAGS@ LIBOBJS = @LIBOBJS@ LIBS = @LIBS@ LIBTOOL = @LIBTOOL@ LIB_LDFLAGS = @LIB_LDFLAGS@ LIPO = @LIPO@ LN_S = @LN_S@ LTLIBOBJS = @LTLIBOBJS@ MAINT = @MAINT@ MAKEINFO = @MAKEINFO@ MANIFEST_TOOL = @MANIFEST_TOOL@ MKDIR_P = @MKDIR_P@ MUNGE_CPPFLAGS = @MUNGE_CPPFLAGS@ MUNGE_DIR = @MUNGE_DIR@ MUNGE_LDFLAGS = @MUNGE_LDFLAGS@ MUNGE_LIBS = @MUNGE_LIBS@ MYSQL_CFLAGS = @MYSQL_CFLAGS@ MYSQL_LIBS = @MYSQL_LIBS@ NCURSES = @NCURSES@ NETLOC_CPPFLAGS = @NETLOC_CPPFLAGS@ NETLOC_LDFLAGS = @NETLOC_LDFLAGS@ NETLOC_LIBS = @NETLOC_LIBS@ NM = @NM@ NMEDIT = @NMEDIT@ NRT_CPPFLAGS = @NRT_CPPFLAGS@ NUMA_LIBS = @NUMA_LIBS@ OBJDUMP = @OBJDUMP@ 
OBJEXT = @OBJEXT@ OFED_CPPFLAGS = @OFED_CPPFLAGS@ OFED_LDFLAGS = @OFED_LDFLAGS@ OFED_LIBS = @OFED_LIBS@ OTOOL = @OTOOL@ OTOOL64 = @OTOOL64@ PACKAGE = @PACKAGE@ PACKAGE_BUGREPORT = @PACKAGE_BUGREPORT@ PACKAGE_NAME = @PACKAGE_NAME@ PACKAGE_STRING = @PACKAGE_STRING@ PACKAGE_TARNAME = @PACKAGE_TARNAME@ PACKAGE_URL = @PACKAGE_URL@ PACKAGE_VERSION = @PACKAGE_VERSION@ PAM_DIR = @PAM_DIR@ PAM_LIBS = @PAM_LIBS@ PATH_SEPARATOR = @PATH_SEPARATOR@ PKG_CONFIG = @PKG_CONFIG@ PKG_CONFIG_LIBDIR = @PKG_CONFIG_LIBDIR@ PKG_CONFIG_PATH = @PKG_CONFIG_PATH@ PROCTRACKDIR = @PROCTRACKDIR@ PROJECT = @PROJECT@ PTHREAD_CC = @PTHREAD_CC@ PTHREAD_CFLAGS = @PTHREAD_CFLAGS@ PTHREAD_LIBS = @PTHREAD_LIBS@ RANLIB = @RANLIB@ READLINE_LIBS = @READLINE_LIBS@ REAL_BGQ_LOADED = @REAL_BGQ_LOADED@ REAL_BG_L_P_LOADED = @REAL_BG_L_P_LOADED@ RELEASE = @RELEASE@ RRDTOOL_CPPFLAGS = @RRDTOOL_CPPFLAGS@ RRDTOOL_LDFLAGS = @RRDTOOL_LDFLAGS@ RRDTOOL_LIBS = @RRDTOOL_LIBS@ RUNJOB_LDFLAGS = @RUNJOB_LDFLAGS@ SED = @SED@ SEMAPHORE_LIBS = @SEMAPHORE_LIBS@ SEMAPHORE_SOURCES = @SEMAPHORE_SOURCES@ SET_MAKE = @SET_MAKE@ SHELL = @SHELL@ SLEEP_CMD = @SLEEP_CMD@ SLURMCTLD_PORT = @SLURMCTLD_PORT@ SLURMCTLD_PORT_COUNT = @SLURMCTLD_PORT_COUNT@ SLURMDBD_PORT = @SLURMDBD_PORT@ SLURMD_PORT = @SLURMD_PORT@ SLURM_API_AGE = @SLURM_API_AGE@ SLURM_API_CURRENT = @SLURM_API_CURRENT@ SLURM_API_MAJOR = @SLURM_API_MAJOR@ SLURM_API_REVISION = @SLURM_API_REVISION@ SLURM_API_VERSION = @SLURM_API_VERSION@ SLURM_MAJOR = @SLURM_MAJOR@ SLURM_MICRO = @SLURM_MICRO@ SLURM_MINOR = @SLURM_MINOR@ SLURM_PREFIX = @SLURM_PREFIX@ SLURM_VERSION_NUMBER = @SLURM_VERSION_NUMBER@ SLURM_VERSION_STRING = @SLURM_VERSION_STRING@ SO_LDFLAGS = @SO_LDFLAGS@ SSL_CPPFLAGS = @SSL_CPPFLAGS@ SSL_LDFLAGS = @SSL_LDFLAGS@ SSL_LIBS = @SSL_LIBS@ STRIP = @STRIP@ SUCMD = @SUCMD@ UTIL_LIBS = @UTIL_LIBS@ VERSION = @VERSION@ _libcurl_config = @_libcurl_config@ abs_builddir = @abs_builddir@ abs_srcdir = @abs_srcdir@ abs_top_builddir = @abs_top_builddir@ abs_top_srcdir = @abs_top_srcdir@ 
ac_ct_AR = @ac_ct_AR@ ac_ct_CC = @ac_ct_CC@ ac_ct_CXX = @ac_ct_CXX@ ac_ct_DUMPBIN = @ac_ct_DUMPBIN@ ac_have_man2html = @ac_have_man2html@ am__include = @am__include@ am__leading_dot = @am__leading_dot@ am__quote = @am__quote@ am__tar = @am__tar@ am__untar = @am__untar@ ax_pthread_config = @ax_pthread_config@ bindir = @bindir@ build = @build@ build_alias = @build_alias@ build_cpu = @build_cpu@ build_os = @build_os@ build_vendor = @build_vendor@ builddir = @builddir@ datadir = @datadir@ datarootdir = @datarootdir@ docdir = @docdir@ dvidir = @dvidir@ exec_prefix = @exec_prefix@ host = @host@ host_alias = @host_alias@ host_cpu = @host_cpu@ host_os = @host_os@ host_vendor = @host_vendor@ htmldir = @htmldir@ includedir = @includedir@ infodir = @infodir@ install_sh = @install_sh@ libdir = @libdir@ libexecdir = @libexecdir@ localedir = @localedir@ localstatedir = @localstatedir@ lua_CFLAGS = @lua_CFLAGS@ lua_LIBS = @lua_LIBS@ mandir = @mandir@ mkdir_p = @mkdir_p@ oldincludedir = @oldincludedir@ pdfdir = @pdfdir@ prefix = @prefix@ program_transform_name = @program_transform_name@ psdir = @psdir@ sbindir = @sbindir@ sharedstatedir = @sharedstatedir@ srcdir = @srcdir@ sysconfdir = @sysconfdir@ target = @target@ target_alias = @target_alias@ target_cpu = @target_cpu@ target_os = @target_os@ target_vendor = @target_vendor@ top_build_prefix = @top_build_prefix@ top_builddir = @top_builddir@ top_srcdir = @top_srcdir@ AUTOMAKE_OPTIONS = foreign php_dir = slurm_php phpize = /usr/bin/phpize @HAVE_AIX_FALSE@config_line = CC="$(CC)" LD="$(CC) $(CFLAGS) $(LDFLAGS)" CCFLAGS="-g -static $(CFLAGS)" CFLAGS="$(CFLAGS)" CPPFLAGS="$(CPPFLAGS)" ./configure @HAVE_AIX_TRUE@config_line = CC="$(CC)" CCFLAGS="-g -static $(CFLAGS) $(CPPFLAGS)" ./configure all: all-am .SUFFIXES: $(srcdir)/Makefile.in: @MAINTAINER_MODE_TRUE@ $(srcdir)/Makefile.am $(am__configure_deps) @for dep in $?; do \ case '$(am__configure_deps)' in \ *$$dep*) \ ( cd $(top_builddir) && $(MAKE) $(AM_MAKEFLAGS) am--refresh ) \ && { 
if test -f $@; then exit 0; else break; fi; }; \ exit 1;; \ esac; \ done; \ echo ' cd $(top_srcdir) && $(AUTOMAKE) --foreign contribs/phpext/Makefile'; \ $(am__cd) $(top_srcdir) && \ $(AUTOMAKE) --foreign contribs/phpext/Makefile .PRECIOUS: Makefile Makefile: $(srcdir)/Makefile.in $(top_builddir)/config.status @case '$?' in \ *config.status*) \ cd $(top_builddir) && $(MAKE) $(AM_MAKEFLAGS) am--refresh;; \ *) \ echo ' cd $(top_builddir) && $(SHELL) ./config.status $(subdir)/$@ $(am__depfiles_maybe)'; \ cd $(top_builddir) && $(SHELL) ./config.status $(subdir)/$@ $(am__depfiles_maybe);; \ esac; $(top_builddir)/config.status: $(top_srcdir)/configure $(CONFIG_STATUS_DEPENDENCIES) cd $(top_builddir) && $(MAKE) $(AM_MAKEFLAGS) am--refresh $(top_srcdir)/configure: @MAINTAINER_MODE_TRUE@ $(am__configure_deps) cd $(top_builddir) && $(MAKE) $(AM_MAKEFLAGS) am--refresh $(ACLOCAL_M4): @MAINTAINER_MODE_TRUE@ $(am__aclocal_m4_deps) cd $(top_builddir) && $(MAKE) $(AM_MAKEFLAGS) am--refresh $(am__aclocal_m4_deps): mostlyclean-libtool: -rm -f *.lo clean-libtool: -rm -rf .libs _libs tags TAGS: ctags CTAGS: cscope cscopelist: distdir: $(DISTFILES) @srcdirstrip=`echo "$(srcdir)" | sed 's/[].[^$$\\*]/\\\\&/g'`; \ topsrcdirstrip=`echo "$(top_srcdir)" | sed 's/[].[^$$\\*]/\\\\&/g'`; \ list='$(DISTFILES)'; \ dist_files=`for file in $$list; do echo $$file; done | \ sed -e "s|^$$srcdirstrip/||;t" \ -e "s|^$$topsrcdirstrip/|$(top_builddir)/|;t"`; \ case $$dist_files in \ */*) $(MKDIR_P) `echo "$$dist_files" | \ sed '/\//!d;s|^|$(distdir)/|;s,/[^/]*$$,,' | \ sort -u` ;; \ esac; \ for file in $$dist_files; do \ if test -f $$file || test -d $$file; then d=.; else d=$(srcdir); fi; \ if test -d $$d/$$file; then \ dir=`echo "/$$file" | sed -e 's,/[^/]*$$,,'`; \ if test -d "$(distdir)/$$file"; then \ find "$(distdir)/$$file" -type d ! 
-perm -700 -exec chmod u+rwx {} \;; \ fi; \ if test -d $(srcdir)/$$file && test $$d != $(srcdir); then \ cp -fpR $(srcdir)/$$file "$(distdir)$$dir" || exit 1; \ find "$(distdir)/$$file" -type d ! -perm -700 -exec chmod u+rwx {} \;; \ fi; \ cp -fpR $$d/$$file "$(distdir)$$dir" || exit 1; \ else \ test -f "$(distdir)/$$file" \ || cp -p $$d/$$file "$(distdir)/$$file" \ || exit 1; \ fi; \ done check-am: all-am check: check-am all-am: Makefile all-local installdirs: install: install-am install-exec: install-exec-am install-data: install-data-am uninstall: uninstall-am install-am: all-am @$(MAKE) $(AM_MAKEFLAGS) install-exec-am install-data-am installcheck: installcheck-am install-strip: if test -z '$(STRIP)'; then \ $(MAKE) $(AM_MAKEFLAGS) INSTALL_PROGRAM="$(INSTALL_STRIP_PROGRAM)" \ install_sh_PROGRAM="$(INSTALL_STRIP_PROGRAM)" INSTALL_STRIP_FLAG=-s \ install; \ else \ $(MAKE) $(AM_MAKEFLAGS) INSTALL_PROGRAM="$(INSTALL_STRIP_PROGRAM)" \ install_sh_PROGRAM="$(INSTALL_STRIP_PROGRAM)" INSTALL_STRIP_FLAG=-s \ "INSTALL_PROGRAM_ENV=STRIPPROG='$(STRIP)'" install; \ fi mostlyclean-generic: maintainer-clean-generic: @echo "This command is intended for maintainers to use" @echo "it deletes files that may require special tools to rebuild." 
clean: clean-am clean-am: clean-generic clean-libtool mostlyclean-am distclean: distclean-am -rm -f Makefile distclean-am: clean-am distclean-generic dvi: dvi-am dvi-am: html: html-am html-am: info: info-am info-am: install-data-am: install-dvi: install-dvi-am install-dvi-am: install-exec-am: install-exec-local install-html: install-html-am install-html-am: install-info: install-info-am install-info-am: install-man: install-pdf: install-pdf-am install-pdf-am: install-ps: install-ps-am install-ps-am: installcheck-am: maintainer-clean: maintainer-clean-am -rm -f Makefile maintainer-clean-am: distclean-am maintainer-clean-generic mostlyclean: mostlyclean-am mostlyclean-am: mostlyclean-generic mostlyclean-libtool pdf: pdf-am pdf-am: ps: ps-am ps-am: uninstall-am: .MAKE: install-am install-strip .PHONY: all all-am all-local check check-am clean clean-generic \ clean-libtool cscopelist-am ctags-am distclean \ distclean-generic distclean-libtool distdir dvi dvi-am html \ html-am info info-am install install-am install-data \ install-data-am install-dvi install-dvi-am install-exec \ install-exec-am install-exec-local install-html \ install-html-am install-info install-info-am install-man \ install-pdf install-pdf-am install-ps install-ps-am \ install-strip installcheck installcheck-am installdirs \ maintainer-clean maintainer-clean-generic mostlyclean \ mostlyclean-generic mostlyclean-libtool pdf pdf-am ps ps-am \ tags-am uninstall uninstall-am all-local: @cd $(php_dir) && \ if [ ! -f Makefile ]; then \ if [ ! -f configure ]; then \ $(phpize); \ fi && \ $(config_line); \ if [ ! -f Makefile ]; then \ exit 0;\ fi \ fi && \ $(MAKE); \ cd ..; install-exec-local: @cd $(php_dir) && \ if [ ! -f Makefile ]; then \ exit 0;\ fi && \ $(MAKE) INSTALL_ROOT=$(DESTDIR) install && \ cd ..; clean-generic: @cd $(php_dir); \ if [ ! -f Makefile ]; then \ exit 0;\ fi && \ $(MAKE) clean; \ cd ..; distclean-generic: @cd $(php_dir); \ if [ ! 
-f Makefile ]; then \
	  exit 0;\
	fi && \
	$(MAKE) clean; \
	$(phpize) --clean; \
	cd ..;

# Tell versions [3.59,3.63) of GNU make to not export all variables.
# Otherwise a system limit (for SysV at least) may be exceeded.
.NOEXPORT:

slurm-slurm-15-08-7-1/contribs/phpext/README

README for the php extension for SLURM.

This was made primarily for SLURMWEB to connect to slurm. Any extra
contributions are welcome.

to compile...
phpize
./configure
make

this should make modules/slurm_php.so

make install

as root should install this where your extensions are in your php install.

in your php.ini file add the line
extension=slurm_php.so

and you should be able to use the functions here.

slurm-slurm-15-08-7-1/contribs/phpext/slurm_php/

slurm-slurm-15-08-7-1/contribs/phpext/slurm_php/AUTHORS

Vermeulen Peter, nMCT Howest
Jimmy Tang, Trinity Centre for High Performance Computing, Trinity College Dublin

slurm-slurm-15-08-7-1/contribs/phpext/slurm_php/DISCLAIMER

Disclaimer

The php-slurm program, its documentation, and any other auxiliary resources
involved in building, installing and running the program, such as graphics,
Makefiles, and user interface definition files, are licensed under the GNU
General Public License. This includes, but is not limited to, all the files
in the official source distribution, as well as the source distribution
itself.

A copy of the GNU General Public License can be found in the file LICENSE in
the top directory of the official source distribution.
The license is also available in several formats through the World Wide Web, via http://www.gnu.org/licenses/licenses.html#GPL, or you can write the Free Software Foundation, Inc., 675 Mass Ave, Cambridge, MA 02139, USA. php-slurm is distributed in the hope that it will be useful, but WITHOUT ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU General Public License for more details. slurm-slurm-15-08-7-1/contribs/phpext/slurm_php/LICENSE000066400000000000000000000432541265000126300225060ustar00rootroot00000000000000 GNU GENERAL PUBLIC LICENSE Version 2, June 1991 Copyright (C) 1989, 1991 Free Software Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA Everyone is permitted to copy and distribute verbatim copies of this license document, but changing it is not allowed. Preamble The licenses for most software are designed to take away your freedom to share and change it. By contrast, the GNU General Public License is intended to guarantee your freedom to share and change free software--to make sure the software is free for all its users. This General Public License applies to most of the Free Software Foundation's software and to any other program whose authors commit to using it. (Some other Free Software Foundation software is covered by the GNU Lesser General Public License instead.) You can apply it to your programs, too. When we speak of free software, we are referring to freedom, not price. Our General Public Licenses are designed to make sure that you have the freedom to distribute copies of free software (and charge for this service if you wish), that you receive source code or can get it if you want it, that you can change the software or use pieces of it in new free programs; and that you know you can do these things. To protect your rights, we need to make restrictions that forbid anyone to deny you these rights or to ask you to surrender the rights. 
These restrictions translate to certain responsibilities for you if you distribute copies of the software, or if you modify it. For example, if you distribute copies of such a program, whether gratis or for a fee, you must give the recipients all the rights that you have. You must make sure that they, too, receive or can get the source code. And you must show them these terms so they know their rights. We protect your rights with two steps: (1) copyright the software, and (2) offer you this license which gives you legal permission to copy, distribute and/or modify the software. Also, for each author's protection and ours, we want to make certain that everyone understands that there is no warranty for this free software. If the software is modified by someone else and passed on, we want its recipients to know that what they have is not the original, so that any problems introduced by others will not reflect on the original authors' reputations. Finally, any free program is threatened constantly by software patents. We wish to avoid the danger that redistributors of a free program will individually obtain patent licenses, in effect making the program proprietary. To prevent this, we have made it clear that any patent must be licensed for everyone's free use or not licensed at all. The precise terms and conditions for copying, distribution and modification follow. GNU GENERAL PUBLIC LICENSE TERMS AND CONDITIONS FOR COPYING, DISTRIBUTION AND MODIFICATION 0. This License applies to any program or other work which contains a notice placed by the copyright holder saying it may be distributed under the terms of this General Public License. The "Program", below, refers to any such program or work, and a "work based on the Program" means either the Program or any derivative work under copyright law: that is to say, a work containing the Program or a portion of it, either verbatim or with modifications and/or translated into another language. 
(Hereinafter, translation is included without limitation in the term "modification".) Each licensee is addressed as "you". Activities other than copying, distribution and modification are not covered by this License; they are outside its scope. The act of running the Program is not restricted, and the output from the Program is covered only if its contents constitute a work based on the Program (independent of having been made by running the Program). Whether that is true depends on what the Program does. 1. You may copy and distribute verbatim copies of the Program's source code as you receive it, in any medium, provided that you conspicuously and appropriately publish on each copy an appropriate copyright notice and disclaimer of warranty; keep intact all the notices that refer to this License and to the absence of any warranty; and give any other recipients of the Program a copy of this License along with the Program. You may charge a fee for the physical act of transferring a copy, and you may at your option offer warranty protection in exchange for a fee. 2. You may modify your copy or copies of the Program or any portion of it, thus forming a work based on the Program, and copy and distribute such modifications or work under the terms of Section 1 above, provided that you also meet all of these conditions: a) You must cause the modified files to carry prominent notices stating that you changed the files and the date of any change. b) You must cause any work that you distribute or publish, that in whole or in part contains or is derived from the Program or any part thereof, to be licensed as a whole at no charge to all third parties under the terms of this License. 
c) If the modified program normally reads commands interactively when run, you must cause it, when started running for such interactive use in the most ordinary way, to print or display an announcement including an appropriate copyright notice and a notice that there is no warranty (or else, saying that you provide a warranty) and that users may redistribute the program under these conditions, and telling the user how to view a copy of this License. (Exception: if the Program itself is interactive but does not normally print such an announcement, your work based on the Program is not required to print an announcement.) These requirements apply to the modified work as a whole. If identifiable sections of that work are not derived from the Program, and can be reasonably considered independent and separate works in themselves, then this License, and its terms, do not apply to those sections when you distribute them as separate works. But when you distribute the same sections as part of a whole which is a work based on the Program, the distribution of the whole must be on the terms of this License, whose permissions for other licensees extend to the entire whole, and thus to each and every part regardless of who wrote it. Thus, it is not the intent of this section to claim rights or contest your rights to work written entirely by you; rather, the intent is to exercise the right to control the distribution of derivative or collective works based on the Program. In addition, mere aggregation of another work not based on the Program with the Program (or with a work based on the Program) on a volume of a storage or distribution medium does not bring the other work under the scope of this License. 3. 
You may copy and distribute the Program (or a work based on it, under Section 2) in object code or executable form under the terms of Sections 1 and 2 above provided that you also do one of the following: a) Accompany it with the complete corresponding machine-readable source code, which must be distributed under the terms of Sections 1 and 2 above on a medium customarily used for software interchange; or, b) Accompany it with a written offer, valid for at least three years, to give any third party, for a charge no more than your cost of physically performing source distribution, a complete machine-readable copy of the corresponding source code, to be distributed under the terms of Sections 1 and 2 above on a medium customarily used for software interchange; or, c) Accompany it with the information you received as to the offer to distribute corresponding source code. (This alternative is allowed only for noncommercial distribution and only if you received the program in object code or executable form with such an offer, in accord with Subsection b above.) The source code for a work means the preferred form of the work for making modifications to it. For an executable work, complete source code means all the source code for all modules it contains, plus any associated interface definition files, plus the scripts used to control compilation and installation of the executable. However, as a special exception, the source code distributed need not include anything that is normally distributed (in either source or binary form) with the major components (compiler, kernel, and so on) of the operating system on which the executable runs, unless that component itself accompanies the executable. 
If distribution of executable or object code is made by offering access to copy from a designated place, then offering equivalent access to copy the source code from the same place counts as distribution of the source code, even though third parties are not compelled to copy the source along with the object code. 4. You may not copy, modify, sublicense, or distribute the Program except as expressly provided under this License. Any attempt otherwise to copy, modify, sublicense or distribute the Program is void, and will automatically terminate your rights under this License. However, parties who have received copies, or rights, from you under this License will not have their licenses terminated so long as such parties remain in full compliance. 5. You are not required to accept this License, since you have not signed it. However, nothing else grants you permission to modify or distribute the Program or its derivative works. These actions are prohibited by law if you do not accept this License. Therefore, by modifying or distributing the Program (or any work based on the Program), you indicate your acceptance of this License to do so, and all its terms and conditions for copying, distributing or modifying the Program or works based on it. 6. Each time you redistribute the Program (or any work based on the Program), the recipient automatically receives a license from the original licensor to copy, distribute or modify the Program subject to these terms and conditions. You may not impose any further restrictions on the recipients' exercise of the rights granted herein. You are not responsible for enforcing compliance by third parties to this License. 7. If, as a consequence of a court judgment or allegation of patent infringement or for any other reason (not limited to patent issues), conditions are imposed on you (whether by court order, agreement or otherwise) that contradict the conditions of this License, they do not excuse you from the conditions of this License. 
If you cannot distribute so as to satisfy simultaneously your obligations under this License and any other pertinent obligations, then as a consequence you may not distribute the Program at all. For example, if a patent license would not permit royalty-free redistribution of the Program by all those who receive copies directly or indirectly through you, then the only way you could satisfy both it and this License would be to refrain entirely from distribution of the Program. If any portion of this section is held invalid or unenforceable under any particular circumstance, the balance of the section is intended to apply and the section as a whole is intended to apply in other circumstances. It is not the purpose of this section to induce you to infringe any patents or other property right claims or to contest validity of any such claims; this section has the sole purpose of protecting the integrity of the free software distribution system, which is implemented by public license practices. Many people have made generous contributions to the wide range of software distributed through that system in reliance on consistent application of that system; it is up to the author/donor to decide if he or she is willing to distribute software through any other system and a licensee cannot impose that choice. This section is intended to make thoroughly clear what is believed to be a consequence of the rest of this License. 8. If the distribution and/or use of the Program is restricted in certain countries either by patents or by copyrighted interfaces, the original copyright holder who places the Program under this License may add an explicit geographical distribution limitation excluding those countries, so that distribution is permitted only in or among countries not thus excluded. In such case, this License incorporates the limitation as if written in the body of this License. 9. 
The Free Software Foundation may publish revised and/or new versions of the General Public License from time to time. Such new versions will be similar in spirit to the present version, but may differ in detail to address new problems or concerns. Each version is given a distinguishing version number. If the Program specifies a version number of this License which applies to it and "any later version", you have the option of following the terms and conditions either of that version or of any later version published by the Free Software Foundation. If the Program does not specify a version number of this License, you may choose any version ever published by the Free Software Foundation. 10. If you wish to incorporate parts of the Program into other free programs whose distribution conditions are different, write to the author to ask for permission. For software which is copyrighted by the Free Software Foundation, write to the Free Software Foundation; we sometimes make exceptions for this. Our decision will be guided by the two goals of preserving the free status of all derivatives of our free software and of promoting the sharing and reuse of software generally. NO WARRANTY 11. BECAUSE THE PROGRAM IS LICENSED FREE OF CHARGE, THERE IS NO WARRANTY FOR THE PROGRAM, TO THE EXTENT PERMITTED BY APPLICABLE LAW. EXCEPT WHEN OTHERWISE STATED IN WRITING THE COPYRIGHT HOLDERS AND/OR OTHER PARTIES PROVIDE THE PROGRAM "AS IS" WITHOUT WARRANTY OF ANY KIND, EITHER EXPRESSED OR IMPLIED, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE. THE ENTIRE RISK AS TO THE QUALITY AND PERFORMANCE OF THE PROGRAM IS WITH YOU. SHOULD THE PROGRAM PROVE DEFECTIVE, YOU ASSUME THE COST OF ALL NECESSARY SERVICING, REPAIR OR CORRECTION. 12. 
IN NO EVENT UNLESS REQUIRED BY APPLICABLE LAW OR AGREED TO IN WRITING WILL ANY COPYRIGHT HOLDER, OR ANY OTHER PARTY WHO MAY MODIFY AND/OR REDISTRIBUTE THE PROGRAM AS PERMITTED ABOVE, BE LIABLE TO YOU FOR DAMAGES, INCLUDING ANY GENERAL, SPECIAL, INCIDENTAL OR CONSEQUENTIAL DAMAGES ARISING OUT OF THE USE OR INABILITY TO USE THE PROGRAM (INCLUDING BUT NOT LIMITED TO LOSS OF DATA OR DATA BEING RENDERED INACCURATE OR LOSSES SUSTAINED BY YOU OR THIRD PARTIES OR A FAILURE OF THE PROGRAM TO OPERATE WITH ANY OTHER PROGRAMS), EVEN IF SUCH HOLDER OR OTHER PARTY HAS BEEN ADVISED OF THE POSSIBILITY OF SUCH DAMAGES. END OF TERMS AND CONDITIONS How to Apply These Terms to Your New Programs If you develop a new program, and you want it to be of the greatest possible use to the public, the best way to achieve this is to make it free software which everyone can redistribute and change under these terms. To do so, attach the following notices to the program. It is safest to attach them to the start of each source file to most effectively convey the exclusion of warranty; and each file should have at least the "copyright" line and a pointer to where the full notice is found. Copyright (C) This program is free software; you can redistribute it and/or modify it under the terms of the GNU General Public License as published by the Free Software Foundation; either version 2 of the License, or (at your option) any later version. This program is distributed in the hope that it will be useful, but WITHOUT ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU General Public License for more details. You should have received a copy of the GNU General Public License along with this program; if not, write to the Free Software Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA. Also add information on how to contact you by electronic and paper mail. 
If the program is interactive, make it output a short notice like this when it starts in an interactive mode: Gnomovision version 69, Copyright (C) year name of author Gnomovision comes with ABSOLUTELY NO WARRANTY; for details type `show w'. This is free software, and you are welcome to redistribute it under certain conditions; type `show c' for details. The hypothetical commands `show w' and `show c' should show the appropriate parts of the General Public License. Of course, the commands you use may be called something other than `show w' and `show c'; they could even be mouse-clicks or menu items--whatever suits your program. You should also get your employer (if you work as a programmer) or your school, if any, to sign a "copyright disclaimer" for the program, if necessary. Here is a sample; alter the names: Yoyodyne, Inc., hereby disclaims all copyright interest in the program `Gnomovision' (which makes passes at compilers) written by James Hacker. , 1 April 1989 Ty Coon, President of Vice This General Public License does not permit incorporating your program into proprietary programs. If your program is a subroutine library, you may consider it more useful to permit linking proprietary applications with the library. If this is what you want to do, use the GNU Lesser General Public License instead of this License. slurm-slurm-15-08-7-1/contribs/phpext/slurm_php/README000066400000000000000000000015621265000126300223550ustar00rootroot00000000000000Slurm PHP extension =================== Requirements (tested with) * SLURM 2.2.0 * PHP 5.1.6 * APACHE (optional, but recommended) This was made primarily for SLURMWEB to connect to slurm. Any extra interactions are welcome. to compile... phpize ./configure make this should make modules/slurm_php.so make install as root should install this where your extensions are in your php install in your php.ini file add the line extension=slurm_php.so and you should be able to use the functions here. 
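The build-and-install steps described above can be sketched as one shell session. This is a sketch under stated assumptions: it presumes the PHP development tools (`phpize`) and a SLURM installation are already present, and the final `php -m` check is an added verification step, not something the README itself mandates.

```shell
# Sketch of the README's build sequence, run from contribs/phpext/slurm_php.
cd contribs/phpext/slurm_php
phpize              # generate ./configure from config.m4
./configure         # locate the PHP headers and libslurmdb
make                # builds modules/slurm_php.so

# As root: copy slurm_php.so into PHP's extension directory.
make install

# Enable the extension by adding this line to php.ini:
#   extension=slurm_php.so
# then confirm PHP loaded it:
php -m | grep -i slurm
```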
TEST CASES
==========

It is assumed that the user has both slurmctld and slurmd configured and
running, with at least 1 partition and 1 node, for these tests to pass.

Developer Notes
===============

To return the directory to a clean state do the following

~~~~
phpize --clean
~~~~

The coding style that should be adopted is
http://www.kernel.org/doc/Documentation/CodingStyle

slurm-slurm-15-08-7-1/contribs/phpext/slurm_php/RELEASE_NOTES

NOTES FOR PHP-SLURM VERSION 1.0
===============================

The goal of this PHP extension is to provide just enough functionality for a
web developer to read data from the slurm controller daemon and create a
*status* or *monitoring* application which can be viewed by the end user.

All the code has been written by 'Vermeulen Peter' with contributions from
TCHPC staff.

Installation Requirements
=========================

* SLURM 2.2.0 or newer
* PHP 5.1.6 or newer
* APACHE (optional, but recommended)

Added the following APIs
=========================

slurm_hostlist_to_array()
slurm_array_to_hostlist()
slurm_ping()
slurm_slurmd_status()
slurm_version()
slurm_print_partition_names()
slurm_get_specific_partition_info()
slurm_get_partition_node_names()
slurm_get_node_names()
slurm_get_node_elements()
slurm_get_node_element_by_name()
slurm_get_node_state_by_name()
slurm_get_node_states()
slurm_get_control_configuration_keys()
slurm_get_control_configuration_values()
slurm_load_partition_jobs()
slurm_load_job_information()

slurm-slurm-15-08-7-1/contribs/phpext/slurm_php/config.m4.in

##*****************************************************************************
## $Id: config.m4 8863 2006-08-10 18:47:55Z da $
##*****************************************************************************
# AUTHOR:
#	Danny Auble
#
# DESCRIPTION:
#	Used to build the php slurm extension
##***************************************************************************** PHP_ARG_WITH(slurm, whether to use slurm, [ --with-slurm SLURM install dir]) AC_MSG_CHECKING([for phpize in default path]) if test ! -f "/usr/bin/phpize"; then PHP_SLURM="no" AC_MSG_RESULT([NO, CANNOT MAKE SLURM_PHP]) else AC_MSG_RESULT([yes]) fi if test "$PHP_SLURM" != "no"; then SLURMLIB_PATH="@prefix@/lib @top_builddir@/src/db_api/.libs" SLURMINCLUDE_PATH="@prefix@/include" SEARCH_FOR="libslurmdb.so" # --with-libslurm -> check with-path if test -r $PHP_SLURM/; then # path given as parameter SLURM_DIR=$PHP_SLURM SLURMLIB_PATH="$SLURM_DIR/lib" else # search default path list AC_MSG_CHECKING([for libslurmdb.so in default paths]) for i in $SLURMLIB_PATH ; do if test -r $i/$SEARCH_FOR; then SLURM_DIR=$i PHP_ADD_LIBPATH($i, SLURM_PHP_SHARED_LIBADD) AC_MSG_RESULT([found in $i]) fi done fi if test -z "$SLURM_DIR"; then AC_MSG_RESULT([not found]) AC_MSG_ERROR([Please reinstall the slurm distribution]) fi PHP_ADD_INCLUDE($SLURMINCLUDE_PATH) PHP_ADD_INCLUDE(@top_srcdir@) LIBNAME=slurmdb LIBSYMBOL=slurm_acct_storage_init PHP_CHECK_LIBRARY($LIBNAME, $LIBSYMBOL, [PHP_ADD_LIBRARY($LIBNAME, , SLURM_PHP_SHARED_LIBADD) AC_DEFINE(HAVE_SLURMLIB,1,[ ])], [AC_MSG_ERROR([wrong libslurmdb version or lib not found])], [-L$SLURM_DIR -l$LIBNAME]) PHP_SUBST(SLURM_PHP_SHARED_LIBADD) AC_CHECK_HEADERS(stdbool.h) AC_DEFINE(HAVE_SLURM_PHP, 1, [Whether you have SLURM]) #PHP_EXTENSION(slurm_php, $ext_shared) PHP_NEW_EXTENSION(slurm_php, @top_srcdir@/contribs/phpext/slurm_php/slurm_php.c, $ext_shared) fi slurm-slurm-15-08-7-1/contribs/phpext/slurm_php/slurm_php.c000066400000000000000000000550751265000126300236620ustar00rootroot00000000000000/*****************************************************************************\ * slurm_php.c - php interface to slurm. 
* ***************************************************************************** * Copyright (C) 2011 - Trinity Centre for High Performance Computing * Copyright (C) 2011 - Trinity College Dublin * Written By : Vermeulen Peter * * This file is part of php-slurm, a resource management program. * Please also read the included file: DISCLAIMER. * * php-slurm is free software; you can redistribute it and/or modify it under * the terms of the GNU General Public License as published by the Free * Software Foundation; either version 2 of the License, or (at your option) * any later version. * * In addition, as a special exception, the copyright holders give permission * to link the code of portions of this program with the OpenSSL library under * certain conditions as described in each individual source file, and * distribute linked combinations including the two. You must obey the GNU * General Public License in all respects for all of the code used other than * OpenSSL. If you modify file(s) with this exception, you may extend this * exception to your version of the file(s), but you are not obligated to do * so. If you do not wish to do so, delete this exception statement from your * version. If you delete this exception statement from all source files in * the program, then also delete it here. * * php-slurm is distributed in the hope that it will be useful, but WITHOUT ANY * WARRANTY; without even the implied warranty of MERCHANTABILITY or FITNESS * FOR A PARTICULAR PURPOSE. See the GNU General Public License for more * details. * * You should have received a copy of the GNU General Public License along * with php-slurm; if not, write to the Free Software Foundation, Inc., * 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA. 
\*****************************************************************************/

/*****************************************************************************\
 *
 *  Documentation for each function can be found in the slurm_php.h file
 *
\*****************************************************************************/

#ifdef HAVE_CONFIG_H
#include "config.h"
#endif

#include "slurm_php.h"

static function_entry slurm_functions[] = {
	PHP_FE(slurm_ping, NULL)
	PHP_FE(slurm_slurmd_status, NULL)
	PHP_FE(slurm_print_partition_names, NULL)
	PHP_FE(slurm_get_specific_partition_info, NULL)
	PHP_FE(slurm_get_partition_node_names, NULL)
	PHP_FE(slurm_version, NULL)
	PHP_FE(slurm_get_node_names, NULL)
	PHP_FE(slurm_get_node_elements, NULL)
	PHP_FE(slurm_get_node_element_by_name, NULL)
	PHP_FE(slurm_get_node_state_by_name, NULL)
	PHP_FE(slurm_get_control_configuration_keys, NULL)
	PHP_FE(slurm_get_control_configuration_values, NULL)
	PHP_FE(slurm_load_job_information, NULL)
	PHP_FE(slurm_load_partition_jobs, NULL)
	PHP_FE(slurm_get_node_states, NULL)
	PHP_FE(slurm_hostlist_to_array, NULL)
	PHP_FE(slurm_array_to_hostlist, NULL)
	{ NULL, NULL, NULL }
};

zend_module_entry slurm_php_module_entry = {
#if ZEND_MODULE_API_NO >= 20010901
	STANDARD_MODULE_HEADER,
#endif
	SLURM_PHP_EXTNAME,
	slurm_functions,
	NULL, NULL, NULL, NULL, NULL,
#if ZEND_MODULE_API_NO >= 20010901
	SLURM_PHP_VERSION,
#endif
	STANDARD_MODULE_PROPERTIES
};

#ifdef COMPILE_DL_SLURM_PHP
ZEND_GET_MODULE(slurm_php)
#endif

/*****************************************************************************\
 *	HELPER FUNCTION PROTOTYPES
\*****************************************************************************/

/*
 * _parse_node_pointer - Parse a node pointer's contents into an
 *	associative zval array where the key is descriptive to the value
 *
 * IN sub_arr - array to store the contents of the node pointer
 * IN node_arr - node pointer that needs parsing
 */
static void _parse_node_pointer(zval *sub_arr, node_info_t *node_arr);

/*
 * _parse_assoc_array
- Parse a character array where the elements are * key-value pairs separated by delimiters into an associative * array * * IN char_arr - character array that needs parsing * IN delims - character array that contains the delimeters used in parsing * IN result_arr - associative array used to store the key_value pairs in */ static void _parse_assoc_array(char *char_arr, char *delims, zval *result_arr); /* * _parse_array - Parse a character array where the elements are values * separated by delimiters into a numerically indexed array * * IN char_arr - character array that needs parsing * IN delims - character array that contains the delimeters used in parsing * IN result_arr - numerically indexed array used to store the values in */ static void _parse_array(char *char_arr, char *delims, zval *rslt_arr); /* * _zend_add_valid_assoc_string - checks a character array to see if * it's NULL or not, if so an associative null is added, if not * an associative string is added. * * IN rstl_arr - array to store the associative key_value pairs in * IN key - character array used as the associative key * IN val - character array to be validated and added as value if valid */ static void _zend_add_valid_assoc_string(zval *rstl_arr, char *key, char *val); /* * _zend_add_valid_assoc_time_string - checks a unix timestamp to see if it's * 0 or not, if so an associative null is added, if not a formatted string * is added. 
 *
 * IN rstl_arr - array to store the associative key-value pairs
 * IN key - character array used as the associative key
 * IN val - time_t unix timestamp to be validated and added if valid
 * NOTE : If you'd like to change the format in which the valid strings are
 *	returned, change the TIME_FORMAT_STRING macro to the needed format
 */
static void _zend_add_valid_assoc_time_string(
	zval *rstl_arr, char *key, time_t *val);

/*****************************************************************************\
 *	TODO
 *****************************************************************************
 *	[ADJUSTING EXISTING FUNCTIONS]
 *		- _parse_node_pointer
 *			dynamic_plugin_data_t is currently not returned
 *	[EXTRA FUNCTIONS]
 *		- Functions that filter jobs on the nodes they are running on
 *		- Scheduling
 *		- ...
\*****************************************************************************/

/*****************************************************************************\
 *	HELPER FUNCTIONS
\*****************************************************************************/

static void _parse_node_pointer(zval *sub_arr, node_info_t *node_arr)
{
	zval *sub_arr_2 = NULL;

	_zend_add_valid_assoc_string(sub_arr, "Name", node_arr->name);
	_zend_add_valid_assoc_string(sub_arr, "Arch.", node_arr->arch);
	_zend_add_valid_assoc_time_string(sub_arr, "Boot Time",
					  &node_arr->boot_time);
	add_assoc_long(sub_arr, "#CPU'S", node_arr->cpus);
	add_assoc_long(sub_arr, "#Cores/CPU", node_arr->cores);

	if (node_arr->features == NULL) {
		add_assoc_null(sub_arr, "Features");
	} else {
		ALLOC_INIT_ZVAL(sub_arr_2);
		array_init(sub_arr_2);
		_parse_array(node_arr->features, ",", sub_arr_2);
		add_assoc_zval(sub_arr, "Features", sub_arr_2);
	}

	_zend_add_valid_assoc_string(sub_arr, "GRES", node_arr->gres);
	add_assoc_long(sub_arr, "State", node_arr->node_state);
	_zend_add_valid_assoc_string(sub_arr, "OS", node_arr->os);
	add_assoc_long(sub_arr, "Real Mem", node_arr->real_memory);

	if (node_arr->reason != NULL) {
		_zend_add_valid_assoc_string(sub_arr, "Reason",
					     node_arr->reason);
		_zend_add_valid_assoc_time_string(sub_arr, "Reason Timestamp",
						  &node_arr->reason_time);
		add_assoc_long(sub_arr, "Reason User Id",
			       node_arr->reason_uid);
	} else {
		add_assoc_null(sub_arr, "Reason");
		add_assoc_null(sub_arr, "Reason Timestamp");
		add_assoc_null(sub_arr, "Reason User Id");
	}

	_zend_add_valid_assoc_time_string(sub_arr, "Slurmd Startup Time",
					  &node_arr->slurmd_start_time);
	add_assoc_long(sub_arr, "#Sockets/Node", node_arr->sockets);
	add_assoc_long(sub_arr, "#Threads/Core", node_arr->threads);
	add_assoc_long(sub_arr, "TmpDisk", node_arr->tmp_disk);
	add_assoc_long(sub_arr, "Weight", node_arr->weight);
}

static void _parse_assoc_array(char *char_arr, char *delims, zval *result_arr)
{
	char *rslt = NULL;
	char *tmp;
	int i = 0;

	rslt = strtok(char_arr, delims);
	while (rslt != NULL) {
		if (i == 0) {
			tmp = rslt;
		} else if (i == 1) {
			if (strcmp(rslt, "(null)") == 0) {
				add_assoc_null(result_arr, tmp);
			} else {
				_zend_add_valid_assoc_string(result_arr,
							     tmp, rslt);
			}
		}
		i++;
		if (i == 2)
			i = 0;
		rslt = strtok(NULL, delims);
	}
}

static void _parse_array(char *char_arr, char *delims, zval *rslt_arr)
{
	char *rslt = NULL;
	char *tmp = NULL;

	rslt = strtok(char_arr, delims);
	while (rslt != NULL) {
		if (strcmp(rslt, "(null)") == 0) {
			add_next_index_null(rslt_arr);
		} else {
			tmp = slurm_xstrdup(rslt);
			add_next_index_string(rslt_arr, tmp, 1);
			xfree(tmp);
		}
		rslt = strtok(NULL, delims);
	}
}

static void _zend_add_valid_assoc_string(zval *rstl_arr, char *key, char *val)
{
	if (!val)
		add_assoc_null(rstl_arr, key);
	else
		add_assoc_string(rstl_arr, key, val, 1);
}

static void _zend_add_valid_assoc_time_string(
	zval *rstl_arr, char *key, time_t *val)
{
	char buf[80];
	struct tm *timeinfo;

	/* Dereference the timestamp; comparing the pointer itself
	 * against 0 would never match a zero timestamp. */
	if (!val || (*val == 0)) {
		add_assoc_null(rstl_arr, key);
	} else {
		timeinfo = localtime(val);
		strftime(buf, 80, TIME_FORMAT_STRING, timeinfo);
		add_assoc_string(rstl_arr, key, buf, 1);
	}
}
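The helpers above all rely on the same strtok() scheme: with "= " as the delimiter set, tokens alternate between keys and values, which is how the flat "Key=Value Key=Value ..." buffers produced by slurm_sprint_partition_info()/slurm_sprint_job_info() become associative arrays. A minimal standalone sketch of that scheme (sample input and the parse_pairs() helper are illustrative, not part of the extension):

```c
#include <string.h>

/* Split a "Key=Value Key=Value ..." buffer into parallel key/value
 * arrays, mirroring the alternating strtok() loop used by
 * _parse_assoc_array(). Modifies buf in place (as strtok does) and
 * returns the number of complete pairs stored. */
int parse_pairs(char *buf, char *keys[], char *vals[], int max)
{
	int n = 0, is_val = 0;
	char *tok = strtok(buf, "= ");

	while (tok && n < max) {
		if (!is_val)
			keys[n] = tok;	/* even tokens are keys */
		else
			vals[n++] = tok;	/* odd tokens are values */
		is_val = !is_val;
		tok = strtok(NULL, "= ");
	}
	return n;
}
```

Note that, as in _parse_assoc_array(), a value that itself contains "=" or a space would be split further; the extension accepts that limitation for the slurm_sprint_* output format.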
/*****************************************************************************\ * SLURM STATUS FUNCTIONS \*****************************************************************************/ PHP_FUNCTION(slurm_ping) { int err = SLURM_SUCCESS; array_init(return_value); err = slurm_ping(1); add_assoc_long(return_value,"Prim. Controller",err); err = slurm_ping(2); add_assoc_long(return_value,"Sec. Controller",err); } PHP_FUNCTION(slurm_slurmd_status) { int err = SLURM_SUCCESS; slurmd_status_t *status_ptr = NULL; err = slurm_load_slurmd_status(&status_ptr); if (err) { RETURN_LONG(-2); } array_init(return_value); _zend_add_valid_assoc_time_string(return_value,"Booted_at", &status_ptr->booted); _zend_add_valid_assoc_time_string(return_value,"Last_Msg", &status_ptr->last_slurmctld_msg); add_assoc_long(return_value,"Logging_Level", status_ptr->slurmd_debug); add_assoc_long(return_value,"Actual_CPU's", status_ptr->actual_cpus); add_assoc_long(return_value,"Actual_Sockets", status_ptr->actual_sockets); add_assoc_long(return_value,"Actual_Cores",status_ptr->actual_cores); add_assoc_long(return_value,"Actual_Threads", status_ptr->actual_threads); add_assoc_long(return_value,"Actual_Real_Mem", status_ptr->actual_real_mem); add_assoc_long(return_value,"Actual_Tmp_Disk", status_ptr->actual_tmp_disk); add_assoc_long(return_value,"PID",status_ptr->pid); _zend_add_valid_assoc_string(return_value, "Hostname", status_ptr->hostname); _zend_add_valid_assoc_string(return_value, "Slurm Logfile", status_ptr->slurmd_logfile); _zend_add_valid_assoc_string(return_value, "Step List", status_ptr->step_list); _zend_add_valid_assoc_string(return_value, "Version", status_ptr->version); if (status_ptr != NULL) { slurm_free_slurmd_status(status_ptr); } } PHP_FUNCTION(slurm_version) { long option = -1; if (zend_parse_parameters(ZEND_NUM_ARGS()TSRMLS_CC, "l", &option) == FAILURE) { RETURN_LONG(-3); } switch (option) { case 0: RETURN_LONG(SLURM_VERSION_MAJOR(SLURM_VERSION_NUMBER)); break; case 1: 
RETURN_LONG(SLURM_VERSION_MINOR(SLURM_VERSION_NUMBER)); break; case 2: RETURN_LONG(SLURM_VERSION_MICRO(SLURM_VERSION_NUMBER)); break; default: array_init(return_value); add_next_index_long(return_value, SLURM_VERSION_MAJOR(SLURM_VERSION_NUMBER)); add_next_index_long(return_value, SLURM_VERSION_MINOR(SLURM_VERSION_NUMBER)); add_next_index_long(return_value, SLURM_VERSION_MICRO(SLURM_VERSION_NUMBER)); break; } } /*****************************************************************************\ * SLURM PHP HOSTLIST FUNCTIONS \*****************************************************************************/ PHP_FUNCTION(slurm_hostlist_to_array) { long lngth = 0; char *host_list = NULL; hostlist_t hl = NULL; int hl_length = 0; int i=0; if (zend_parse_parameters(ZEND_NUM_ARGS()TSRMLS_CC, "s|d", &host_list, &lngth) == FAILURE) { RETURN_LONG(-3); } if ((host_list == NULL) || !strcmp(host_list, "")) { RETURN_LONG(-3); } hl = slurm_hostlist_create(host_list); hl_length = slurm_hostlist_count(hl); if (hl_length==0) { RETURN_LONG(-2); } array_init(return_value); for (i=0; irecord_count; i++) { add_next_index_string(return_value, prt_ptr->partition_array[i].name, 1); } slurm_free_partition_info_msg(prt_ptr); if (i == 0) { RETURN_LONG(-1); } } PHP_FUNCTION(slurm_get_specific_partition_info) { long lngth = 0; int err = SLURM_SUCCESS; partition_info_msg_t *prt_ptr = NULL; partition_info_t *prt_data = NULL; char *name = NULL; char *tmp = NULL; int i = 0; int y = 0; if (zend_parse_parameters(ZEND_NUM_ARGS()TSRMLS_CC, "s|d", &name, &lngth) == FAILURE) { RETURN_LONG(-3); } if ((name == NULL) || !strcmp(name, "")) { RETURN_LONG(-3); } err = slurm_load_partitions((time_t) NULL, &prt_ptr, 0); if (err) { RETURN_LONG(-2); } if (prt_ptr->record_count != 0) { for (i = 0; i < prt_ptr->record_count; i++) { if (strcmp(prt_ptr->partition_array->name, name) == 0) { prt_data = &prt_ptr->partition_array[i]; tmp = slurm_sprint_partition_info(prt_data, 1); array_init(return_value); _parse_assoc_array(tmp, 
"= ", return_value); y++; break; } } } slurm_free_partition_info_msg(prt_ptr); if (y == 0) { RETURN_LONG(-1); } } PHP_FUNCTION(slurm_get_partition_node_names) { char *prt_name = NULL; long lngth = 0; int err = SLURM_SUCCESS; partition_info_msg_t *prt_ptr = NULL; partition_info_t *prt_data = NULL; int i = 0; int y = 0; if (zend_parse_parameters(ZEND_NUM_ARGS()TSRMLS_CC, "s|d", &prt_name, &lngth) == FAILURE) { RETURN_LONG(-3); } if ((prt_name == NULL) || (strcmp(prt_name,"")==0)) { RETURN_LONG(-3); } err = slurm_load_partitions((time_t) NULL, &prt_ptr, 0); if (err) RETURN_LONG(-2); if (prt_ptr->record_count != 0) { for (i = 0; i < prt_ptr->record_count; i++) { if (!strcmp(prt_ptr->partition_array->name, prt_name)) { prt_data = &prt_ptr->partition_array[i]; array_init(return_value); add_next_index_string( return_value, prt_data->nodes, 1); y++; break; } } } slurm_free_partition_info_msg(prt_ptr); if (y == 0) RETURN_LONG(-1); } /*****************************************************************************\ * SLURM NODE CONFIGURATION READ FUNCTIONS \*****************************************************************************/ PHP_FUNCTION(slurm_get_node_names) { int err = SLURM_SUCCESS; int i = 0; node_info_msg_t *node_ptr = NULL; err = slurm_load_node((time_t) NULL, &node_ptr, 0); if (err) { RETURN_LONG(-2); } if (node_ptr->record_count > 0) { array_init(return_value); for (i = 0; i < node_ptr->record_count; i++) { add_next_index_string( return_value, node_ptr->node_array[i].name, 1); } } slurm_free_node_info_msg(node_ptr); if(i==0) { RETURN_LONG(-1); } } PHP_FUNCTION(slurm_get_node_elements) { int err = SLURM_SUCCESS; int i = 0; node_info_msg_t *node_ptr; zval *sub_arr = NULL; err = slurm_load_node((time_t) NULL, &node_ptr, 0); if (err) { RETURN_LONG(-2); } if (node_ptr->record_count > 0) { array_init(return_value); for (i = 0; i < node_ptr->record_count; i++) { ALLOC_INIT_ZVAL(sub_arr); array_init(sub_arr); _parse_node_pointer(sub_arr, &node_ptr->node_array[i]); 
add_assoc_zval(return_value, node_ptr->node_array[i].name, sub_arr); } } slurm_free_node_info_msg(node_ptr); if(i==0) { RETURN_LONG(-1); } } PHP_FUNCTION(slurm_get_node_element_by_name) { int err = SLURM_SUCCESS; int i = 0,y = 0; node_info_msg_t *node_ptr; char *node_name = NULL; long lngth; zval *sub_arr = NULL; if (zend_parse_parameters(ZEND_NUM_ARGS()TSRMLS_CC, "s|d", &node_name, &lngth) == FAILURE) { RETURN_LONG(-3); } if ((node_name == NULL) || (strcmp(node_name,"")==0)) { RETURN_LONG(-3); } err = slurm_load_node((time_t) NULL, &node_ptr, 0); if (err) { RETURN_LONG(-2); } array_init(return_value); for (i = 0; i < node_ptr->record_count; i++) { if (strcmp(node_ptr->node_array->name, node_name) == 0) { y++; ALLOC_INIT_ZVAL(sub_arr); array_init(sub_arr); _parse_node_pointer(sub_arr, &node_ptr->node_array[i]); add_assoc_zval(return_value, node_name, sub_arr); break; } } slurm_free_node_info_msg(node_ptr); if (y == 0) { RETURN_LONG(-1); } } PHP_FUNCTION(slurm_get_node_state_by_name) { int err = SLURM_SUCCESS; int i = 0,y = 0; node_info_msg_t *node_ptr; char *node_name = NULL; long lngth; if (zend_parse_parameters(ZEND_NUM_ARGS()TSRMLS_CC, "s|d", &node_name, &lngth) == FAILURE) { RETURN_LONG(-3); } if ((node_name == NULL) || (strcmp(node_name,"")==0)) { RETURN_LONG(-3); } err = slurm_load_node((time_t) NULL, &node_ptr, 0); if (err) { RETURN_LONG(-2); } for (i = 0; i < node_ptr->record_count; i++) { if (strcmp(node_ptr->node_array->name, node_name) == 0) { y++; RETURN_LONG(node_ptr->node_array[i].node_state); break; } } slurm_free_node_info_msg(node_ptr); if (i == 0) { RETURN_LONG(-1); } if (y==0) { RETURN_LONG(-1); } } PHP_FUNCTION(slurm_get_node_states) { int err = SLURM_SUCCESS; int i = 0; node_info_msg_t *node_ptr; err = slurm_load_node((time_t) NULL, &node_ptr, 0); if (err) { RETURN_LONG(-2); } array_init(return_value); for (i = 0; i < node_ptr->record_count; i++) { add_next_index_long(return_value, node_ptr->node_array[i].node_state); } 
slurm_free_node_info_msg(node_ptr); if (i == 0) { RETURN_LONG(-1); } } /*****************************************************************************\ * SLURM CONFIGURATION READ FUNCTIONS \*****************************************************************************/ PHP_FUNCTION(slurm_get_control_configuration_keys) { int err = SLURM_SUCCESS; slurm_ctl_conf_t *ctrl_conf_ptr; List lst; ListIterator iter = NULL; key_pair_t *k_p; err = slurm_load_ctl_conf((time_t) NULL, &ctrl_conf_ptr); if (err) { RETURN_LONG(-2); } lst = slurm_ctl_conf_2_key_pairs(ctrl_conf_ptr); if (!lst) { RETURN_LONG(-1); } iter = slurm_list_iterator_create(lst); array_init(return_value); while ((k_p = slurm_list_next(iter))) { add_next_index_string(return_value, k_p->name, 1); } slurm_free_ctl_conf(ctrl_conf_ptr); } PHP_FUNCTION(slurm_get_control_configuration_values) { int err = SLURM_SUCCESS; slurm_ctl_conf_t *ctrl_conf_ptr; List lst; ListIterator iter = NULL; key_pair_t *k_p; err = slurm_load_ctl_conf((time_t) NULL, &ctrl_conf_ptr); if (err) { RETURN_LONG(-2); } lst = slurm_ctl_conf_2_key_pairs(ctrl_conf_ptr); if (!lst) { RETURN_LONG(-1); } iter = slurm_list_iterator_create(lst); array_init(return_value); while ((k_p = slurm_list_next(iter))) { if (k_p->value==NULL) { add_next_index_null(return_value); } else { add_next_index_string(return_value, k_p->value, 1); } } slurm_free_ctl_conf(ctrl_conf_ptr); } /*****************************************************************************\ * SLURM JOB READ FUNCTIONS \*****************************************************************************/ PHP_FUNCTION(slurm_load_job_information) { int err = SLURM_SUCCESS; int i = 0; job_info_msg_t *job_ptr; zval *sub_arr = NULL; char *tmp; err = slurm_load_jobs((time_t) NULL, &job_ptr, 0); if (err) { RETURN_LONG(-2); } array_init(return_value); for (i = 0; i < job_ptr->record_count; i++) { ALLOC_INIT_ZVAL(sub_arr); array_init(sub_arr); _parse_assoc_array(slurm_sprint_job_info( &job_ptr->job_array[i], 1), "= ", 
sub_arr); tmp = slurm_xstrdup_printf("%u", job_ptr->job_array[i].job_id); add_assoc_zval(return_value, tmp, sub_arr); xfree(tmp); } slurm_free_job_info_msg(job_ptr); if (i == 0) { RETURN_LONG(-1); } } PHP_FUNCTION(slurm_load_partition_jobs) { int err = SLURM_SUCCESS; int i = 0; job_info_msg_t *job_ptr; zval *sub_arr = NULL; char *tmp; char *pname = NULL; long lngth; long checker = 0; if (zend_parse_parameters(ZEND_NUM_ARGS()TSRMLS_CC, "s|d", &pname, &lngth) == FAILURE) { RETURN_LONG(-3); } if ((pname == NULL) || !strcmp(pname,"")) { RETURN_LONG(-3); } err = slurm_load_jobs((time_t) NULL, &job_ptr, 0); if (err) { RETURN_LONG(-2); } array_init(return_value); for (i = 0; i < job_ptr->record_count; i++) { if (!strcmp(job_ptr->job_array->partition, pname)) { checker++; ALLOC_INIT_ZVAL(sub_arr); array_init(sub_arr); _parse_assoc_array(slurm_sprint_job_info( &job_ptr->job_array[i], 1), "= ", sub_arr); tmp = slurm_xstrdup_printf( "%u", job_ptr->job_array[i].job_id); add_assoc_zval(return_value, tmp, sub_arr); xfree(tmp); } } slurm_free_job_info_msg(job_ptr); if (i == 0) { RETURN_LONG(-1); } if (checker==0) { RETURN_LONG(-1); } } slurm-slurm-15-08-7-1/contribs/phpext/slurm_php/slurm_php.h000066400000000000000000000326121265000126300236570ustar00rootroot00000000000000/*****************************************************************************\ * slurm_php.h - php interface to slurm. * ***************************************************************************** * Copyright (C) 2011 - Trinity Centre for High Performance Computing * Copyright (C) 2011 - Trinity College Dublin * Written By : Vermeulen Peter * * This file is part of php-slurm, a resource management program. * Please also read the included file: DISCLAIMER. * * php-slurm is free software; you can redistribute it and/or modify it under * the terms of the GNU General Public License as published by the Free * Software Foundation; either version 2 of the License, or (at your option) * any later version. 
* * In addition, as a special exception, the copyright holders give permission * to link the code of portions of this program with the OpenSSL library under * certain conditions as described in each individual source file, and * distribute linked combinations including the two. You must obey the GNU * General Public License in all respects for all of the code used other than * OpenSSL. If you modify file(s) with this exception, you may extend this * exception to your version of the file(s), but you are not obligated to do * so. If you do not wish to do so, delete this exception statement from your * version. If you delete this exception statement from all source files in * the program, then also delete it here. * * php-slurm is distributed in the hope that it will be useful, but WITHOUT ANY * WARRANTY; without even the implied warranty of MERCHANTABILITY or FITNESS * FOR A PARTICULAR PURPOSE. See the GNU General Public License for more * details. * * You should have received a copy of the GNU General Public License along * with php-slurm; if not, write to the Free Software Foundation, Inc., * 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA. \*****************************************************************************/ #ifndef SLURM_PHP_H #define SLURM_PHP_H 1 #define SLURM_PHP_VERSION "1.0.1" #define SLURM_PHP_EXTNAME "slurm" /* * Adjust this value to change the format of the returned string * values. 
 *
 * For more information on formatting options :
 * http://www.java2s.com/Tutorial/C/0460__time.h/strftime.htm
 */
#define TIME_FORMAT_STRING "%c"

#include <php.h>
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <time.h>
#include <slurm/slurm.h>
#include <slurm/slurmdb.h>

#include "src/common/xmalloc.h"

extern zend_module_entry slurm_php_module_entry;

/*****************************************************************************\
 *	TYPEDEFS
\*****************************************************************************/

typedef struct key_value {
	char *name;	/* key */
	char *value;	/* value */
} key_pair_t;

/*
 * Declarations needed to avoid warnings (the functions are defined in
 * src/common/xstring.h). If you can figure out a way to make it so we
 * don't have to make these declarations, that would be awesome. I
 * didn't have time to spend on it when I was working on it. -da
 */

/*
 * strdup which uses xmalloc routines
 */
char *slurm_xstrdup(const char *str);

/*
 * strdup formatted which uses xmalloc routines
 */
char *slurm_xstrdup_printf(const char *fmt, ...)
	__attribute__ ((format (printf, 1, 2)));

/*****************************************************************************\
 *	SLURM PHP HOSTLIST FUNCTIONS
\*****************************************************************************/

/*
 * slurm_hostlist_to_array - convert a hostlist string to
 *	a numerically indexed array.
* * IN host_list - string value containing the hostlist * RET numerically indexed array containing the names of the nodes */ PHP_FUNCTION(slurm_hostlist_to_array); /* * slurm_array_to_hostlist - convert an array of nodenames into a hostlist * string * * IN node_arr - Numerically indexed array containing a nodename on each index * RET String variable containing the hostlist string */ PHP_FUNCTION(slurm_array_to_hostlist); /*****************************************************************************\ * SLURM STATUS FUNCTIONS \*****************************************************************************/ /* * slurm_ping - Issues the slurm interface to return the status of the slurm * primary and secondary controller * * RET associative array containing the status ( status = 0 if online, = -1 if * offline ) of both controllers * NOTE : the error codes and their meaning are described in the section * labelled EXTRA */ PHP_FUNCTION(slurm_ping); /* * slurm_slurmd_status - Issues the slurm interface to return the * status of the slave daemon ( running on this machine ) * * RET associative array containing the status or a negative long variable * containing an error code * NOTE : the error codes and their meaning are described in the section * labelled EXTRA */ PHP_FUNCTION(slurm_slurmd_status); /* * slurm_version - Returns the slurm version number in the requested format * * IN option - long/integer value linking to the formatting of the version * number * RET long value containing the specific formatted version number a numeric * array containing the version number or a negative long variable * containing an error code. 
* NOTE : the possible cases and their meaning are described in the section * labelled EXTRA */ PHP_FUNCTION(slurm_version); /*****************************************************************************\ * SLURM PARTITION READ FUNCTIONS \*****************************************************************************/ /* * slurm_print_partition_names - Creates and returns a numerically * indexed array containing the names of the partitions * * RET numerically indexed array containing the partitionnames or a * negative long variable containing an error code NOTE : the * error codes and their meaning are described in the section * labelled EXTRA */ PHP_FUNCTION(slurm_print_partition_names); /* * slurm_get_specific_partition_info - Searches for the requested * partition and if found it returns an associative array * containing the information about this specific partition * * IN name - a string variable containing the partitionname * OPTIONAL IN lngth - a long variable containing the length of the * partitionname * RET an associative array containing the information about a * specific partition, or a negative long value containing an * error code * NOTE : the error codes and their meaning are described in the * section labelled EXTRA */ PHP_FUNCTION(slurm_get_specific_partition_info); /* * slurm_get_partition_node_names - Searches for the requested partition and * if found it parses the nodes into a numerically indexed array, which is * then returned to the calling function. 
* * IN name - a string variable containing the partitionname * * OPTIONAL IN lngth - a long variable containing the length of the * partitionname * * RET a numerically indexed array containing the names of all the * nodes connected to this partition, or a negative long value * containing an error code * * NOTE : the error codes and their meaning are described in the * section labelled EXTRA */ PHP_FUNCTION(slurm_get_partition_node_names); /*****************************************************************************\ * SLURM NODE CONFIGURATION READ FUNCTIONS \*****************************************************************************/ /* * slurm_get_node_names - Creates and returns a numerically index array * containing the nodenames. * * RET a numerically indexed array containing the requested nodenames, * or a negative long value containing an error code * NOTE : the error codes and their meaning are described in the * section labelled EXTRA */ PHP_FUNCTION(slurm_get_node_names); /* * slurm_get_node_elements - Creates and returns an associative array * containing all the nodes indexed by nodename and as value an * associative array containing their information. * * RET an associative array containing the nodes as keys and their * information as value, or a long value containing an error code * NOTE : the error codes and their meaning are described in the * section labelled EXTRA */ PHP_FUNCTION(slurm_get_node_elements); /* * slurm_get_node_element_by_name - Searches for the requested node * and if found it parses its information into an associative * array, which is then returned to the calling function. 
* * IN name - a string variable containing the nodename * OPTIONAL IN lngth - a long variable containing the length of the nodename * RET an assocative array containing the requested information or a * long value containing an error code * NOTE : the error codes and their meaning are described in the * section labelled EXTRA */ PHP_FUNCTION(slurm_get_node_element_by_name); /* * slurm_get_node_state_by_name - Searches for the requested node and * if found it returns the state of that node * * IN name - a string variable containing the nodename * OPTIONAL IN lngth - a long variable containing the length of the nodename * RET a long value containing the state of the node [0-7] or a * negative long value containing the error code * NOTE : the error codes and their meaning are described in the * section labelled EXTRA */ PHP_FUNCTION(slurm_get_node_state_by_name); /* * slurm_get_node_states - Creates a numerically indexed array * containing the state of each node ( only the state ! ) as a * long value. This function could be used to create a summary of * the node states without having to do a lot of processing ( or * having to deal with overlapping nodes between partitions ). * * RET a numerically indexed array containing node states */ PHP_FUNCTION(slurm_get_node_states); /*****************************************************************************\ * SLURM CONFIGURATION READ FUNCTIONS \*****************************************************************************/ /* * Due to the configuration being quite large, i decided to create 2 functions * to return the keys and values separately. 
( to prevent a buffer overflow ) */ /* * slurm_get_control_configuration_keys - Retreives the configuration * from the slurm daemon and parses it into a numerically indexed * array containg the keys that link to the values ( the values * are retreived by the slurm_get_control_configuration_values * function ) * * RET a numerically indexed array containing keys that describe the * values of the configuration of the slurm daemon, or a long * value containing an error code * * NOTE : the error codes and their meaning are described in the * section labelled EXTRA */ PHP_FUNCTION(slurm_get_control_configuration_keys); /* * slurm_get_control_configuration_values - Retreives the * configuration from the slurm daemon and parses it into a * numerically indexed array containg the values that link to the * keys ( the keys are retreived by the * slurm_get_control_configuration_keys function ) * * RET a numerically indexed array containing the values of the * configuration of the slurm daemon, or a long value containing * an error code * NOTE : the error codes and their meaning are described in the * section labelled EXTRA */ PHP_FUNCTION(slurm_get_control_configuration_values); /*****************************************************************************\ * SLURM JOB READ FUNCTIONS \*****************************************************************************/ /* * slurm_load_job_information - Loads the information of all the jobs, * parses it and returns the values as an associative array where * each key is the job id linking to an associative array with * the information of the job * * RET an associative array containing the information of all jobs, or * a long value containing an error code. * * NOTE : the error codes and their meaning are described in the * section labelled EXTRA */ PHP_FUNCTION(slurm_load_job_information); /* * slurm_load_partition_jobs - Retreive the information of all the * jobs running on a single partition. 
 *
 * IN pname - the partition name as a string value
 * OPTIONAL IN lngth - a long variable containing the length of the
 *	partition name
 * RET an associative array containing the information of all the jobs
 *	running on this partition, or a long value containing an error
 *	code
 * NOTE : the error codes and their meaning are described in the
 *	section labelled EXTRA
 */
PHP_FUNCTION(slurm_load_partition_jobs);

/*****************************************************************************\
 *	EXTRA
 *****************************************************************************
 *
 *	[ERROR CODES]
 *
 *	-3 : no/incorrect variables were passed on
 *	-2 : an error occurred whilst trying to communicate
 *	     with the daemon
 *	-1 : your query produced no results
 *
 *	[VERSION FORMATTING OPTIONS]
 *
 *	0       : major of the version number
 *	1       : minor of the version number
 *	2       : micro of the version number
 *	default : full version number
 *
 *	[EXPLANATION]
 *
 *	Consider the version number 2.2.3: if we were to split this
 *	into an array with the "." sign as the delimiter, we would
 *	receive the following
 *
 *	[2] => MAJOR
 *	[2] => MINOR
 *	[3] => MICRO
 *
 *	When requesting the major you would only receive the major;
 *	when requesting the full version you would receive the array
 *	as depicted above.
* \*****************************************************************************/ #define phpext_slurm_php_ptr &slurm_php_module_entry #endif slurm-slurm-15-08-7-1/contribs/phpext/slurm_php/tests/000077500000000000000000000000001265000126300226335ustar00rootroot00000000000000slurm-slurm-15-08-7-1/contribs/phpext/slurm_php/tests/slurm_array_to_hostlist_basic.phpt000066400000000000000000000013501265000126300316630ustar00rootroot00000000000000--TEST-- Test function slurm_array_to_hostlist() by calling it with its expected arguments --CREDIT-- Jimmy Tang --SKIPIF-- --FILE-- --EXPECT-- *** Test by calling method or function with its expected arguments *** array(1) { ["HOSTLIST"]=> string(26) "host[01-02],another-host02" } slurm-slurm-15-08-7-1/contribs/phpext/slurm_php/tests/slurm_array_to_hostlist_error.phpt000066400000000000000000000014261265000126300317370ustar00rootroot00000000000000--TEST-- Test function slurm_array_to_hostlist() by calling it more than or less than its expected arguments --CREDIT-- Jimmy Tang --SKIPIF-- --FILE-- --EXPECTF-- *** Test by calling method or function with incorrect numbers of arguments *** ! ret -2 < 0 slurm_get_control_configuration_keys_basic.phpt000066400000000000000000000022701265000126300343360ustar00rootroot00000000000000slurm-slurm-15-08-7-1/contribs/phpext/slurm_php/tests--TEST-- Test function slurm_get_control_configuration_keys() by calling it with its expected arguments --CREDIT-- Peter Vermeulen --SKIPIF-- --FILE-- --EXPECT-- *** Test by calling method or function with its expected arguments *** ! 
slurm_get_control_configuration_keys : SUCCESS slurm_get_control_configuration_values_basic.phpt000066400000000000000000000024061265000126300346630ustar00rootroot00000000000000slurm-slurm-15-08-7-1/contribs/phpext/slurm_php/tests--TEST-- Test function slurm_get_control_configuration_values() by calling it with its expected arguments --CREDIT-- Peter Vermeulen --SKIPIF-- --FILE-- --EXPECT-- *** Test by calling method or function with its expected arguments *** ! slurm_get_control_configuration_values : SUCCESS slurm-slurm-15-08-7-1/contribs/phpext/slurm_php/tests/slurm_get_node_element_by_name_basic.phpt000066400000000000000000000022641265000126300331060ustar00rootroot00000000000000--TEST-- Test function slurm_get_node_element_by_name() by calling it with its expected arguments --CREDIT-- Peter Vermeulen --SKIPIF-- --FILE-- --EXPECT-- *** Test by calling method or function without any arguments *** [SLURM:ERROR] -1 : No node by that name was found on your system [SLURM:ERROR] -3 : Faulty variables ( or no variables ) where passed on [SLURM:ERROR] -3 : Faulty variables ( or no variables ) where passed on slurm-slurm-15-08-7-1/contribs/phpext/slurm_php/tests/slurm_get_node_element_by_name_error.phpt000066400000000000000000000020051265000126300331470ustar00rootroot00000000000000--TEST-- Test function slurm_get_node_element_by_name() by calling it with its expected arguments --CREDIT-- Peter Vermeulen --SKIPIF-- --FILE-- --EXPECT-- *** Test by calling method or function with its expected arguments *** [SLURM:ERROR] -1 : No node by that name was found on your system slurm-slurm-15-08-7-1/contribs/phpext/slurm_php/tests/slurm_get_node_elements_basic.phpt000066400000000000000000000016741265000126300316030ustar00rootroot00000000000000--TEST-- Test function slurm_get_node_elements() by calling it with its expected arguments --CREDIT-- Peter Vermeulen --SKIPIF-- --FILE-- --EXPECT-- *** Test by calling method or function with its expected arguments *** ! 
slurm_get_node_elements() : SUCCESS slurm-slurm-15-08-7-1/contribs/phpext/slurm_php/tests/slurm_get_node_names_basic.phpt000066400000000000000000000014451265000126300310660ustar00rootroot00000000000000--TEST-- Test function slurm_get_node_names() by calling it with its expected arguments --FILE-- --EXPECT-- *** Test by calling method or function with its expected arguments *** ! slurm_get_node_names : SUCCESS slurm-slurm-15-08-7-1/contribs/phpext/slurm_php/tests/slurm_get_node_state_by_name_basic.phpt000066400000000000000000000025641265000126300326000ustar00rootroot00000000000000--TEST-- Test function slurm_get_node_state_by_name() by calling it with its expected arguments --CREDIT-- Peter Vermeulen --SKIPIF-- --FILE-- --EXPECT-- *** Test by calling method or function with faulty arguments *** [SLURM:ERROR] -1 : No node by that name was found on your system [SLURM:ERROR] -3 : Faulty variables ( or no variables ) where passed on [SLURM:ERROR] -3 : Faulty variables ( or no variables ) where passed on slurm-slurm-15-08-7-1/contribs/phpext/slurm_php/tests/slurm_get_node_state_by_name_error.phpt000066400000000000000000000015531265000126300326450ustar00rootroot00000000000000--TEST-- Test function slurm_get_node_state_by_name() by calling it with its expected arguments --CREDIT-- Peter Vermeulen --SKIPIF-- --FILE-- --EXPECT-- *** Test by calling method or function with faulty arguments *** [SLURM:ERROR] -1 : No node by that name was found on your system slurm-slurm-15-08-7-1/contribs/phpext/slurm_php/tests/slurm_get_node_states_basic.phpt000066400000000000000000000015721265000126300312670ustar00rootroot00000000000000--TEST-- Test function slurm_get_node_states() by calling it with its expected arguments --CREDIT-- Peter Vermeulen --SKIPIF-- --FILE-- --EXPECT-- *** Test by calling method or function with correct arguments *** [SLURM:SUCCESS] : slurm_get_node_states() succesfully returned it's data 
slurm-slurm-15-08-7-1/contribs/phpext/slurm_php/tests/slurm_get_partition_node_names_basic.phpt000066400000000000000000000006471265000126300331620ustar00rootroot00000000000000--TEST-- Test function slurm_get_partition_node_names() by calling it with its expected arguments --FILE-- --EXPECT-- *** Test by calling method or function with its expected arguments *** ! slurm_get_partition_node_names ok slurm-slurm-15-08-7-1/contribs/phpext/slurm_php/tests/slurm_get_partition_node_names_error.phpt000066400000000000000000000024221265000126300332230ustar00rootroot00000000000000--TEST-- Test function slurm_get_partition_node_names() by calling it with its expected arguments --CREDIT-- Peter Vermeulen --SKIPIF-- --FILE-- --EXPECT-- *** Test by calling method or function with its expected arguments *** [SLURM:ERROR] -1 : No partition by that name was found on your system slurm-slurm-15-08-7-1/contribs/phpext/slurm_php/tests/slurm_get_specific_partition_info_basic.phpt000066400000000000000000000024501265000126300336440ustar00rootroot00000000000000--TEST-- Test function slurm_get_specific_partition_info() by calling it with its expected arguments --CREDIT-- Jimmy Tang --SKIPIF-- --FILE-- --EXPECTF-- *** Test by calling method or function with its expected arguments *** [SLURM:SUCCESS] slurm-slurm-15-08-7-1/contribs/phpext/slurm_php/tests/slurm_get_specific_partition_info_error.phpt000066400000000000000000000024411265000126300337140ustar00rootroot00000000000000--TEST-- Test function slurm_get_specific_partition_info() by calling it with its expected arguments --CREDIT-- Peter Vermeulen --SKIPIF-- --FILE-- --EXPECT-- *** Test by calling method or function with its expected arguments *** [SLURM:ERROR] -1 : No partition by that name was found on your system slurm-slurm-15-08-7-1/contribs/phpext/slurm_php/tests/slurm_hostlist_to_array_basic.phpt000066400000000000000000000012771265000126300316730ustar00rootroot00000000000000--TEST-- Test function slurm_hostlist_to_array() by calling 
it with its expected arguments --CREDIT-- Jimmy Tang --SKIPIF-- --FILE-- --EXPECT-- *** Test by calling method or function with its expected arguments *** array(3) { [0]=> string(6) "host01" [1]=> string(6) "host02" [2]=> string(14) "another-host02" } slurm-slurm-15-08-7-1/contribs/phpext/slurm_php/tests/slurm_hostlist_to_array_error.phpt000066400000000000000000000012361265000126300317360ustar00rootroot00000000000000--TEST-- Test function slurm_hostlist_to_array() by calling it more than or less than its expected arguments --CREDIT-- Jimmy Tang --SKIPIF-- --FILE-- --EXPECTF-- *** Test by calling method or function with incorrect numbers of arguments *** int(-3) slurm-slurm-15-08-7-1/contribs/phpext/slurm_php/tests/slurm_load_job_information_basic.phpt000066400000000000000000000017711265000126300322770ustar00rootroot00000000000000--TEST-- Test function slurm_load_job_information() by calling it with its expected arguments --CREDIT-- Peter Vermeulen --SKIPIF-- --FILE-- --EXPECT-- *** Test by calling method or function with its expected arguments *** ! 
slurm_load_job_information : SUCCESS slurm-slurm-15-08-7-1/contribs/phpext/slurm_php/tests/slurm_load_partition_jobs_basic.phpt000066400000000000000000000021411265000126300321360ustar00rootroot00000000000000--TEST-- Test function slurm_load_partition_jobs() by calling it with its expected arguments --CREDIT-- Peter Vermeulen --SKIPIF-- --FILE-- --EXPECTF-- *** Test by calling method or function with correct arguments *** [SLURM:SUCCESS] slurm-slurm-15-08-7-1/contribs/phpext/slurm_php/tests/slurm_load_partition_jobs_error.phpt000066400000000000000000000022341265000126300322110ustar00rootroot00000000000000--TEST-- Test function slurm_load_partition_jobs() by calling it with its expected arguments --CREDIT-- Peter Vermeulen --SKIPIF-- --FILE-- --EXPECT-- *** Test by calling method or function with faulty arguments *** [SLURM:ERROR] -1 : No jobs where found for a partition by that name [SLURM:ERROR] -3 : Faulty variables ( or no variables ) where passed on [SLURM:ERROR] -3 : Faulty variables ( or no variables ) where passed on slurm-slurm-15-08-7-1/contribs/phpext/slurm_php/tests/slurm_ping_basic.phpt000066400000000000000000000005261265000126300270530ustar00rootroot00000000000000--TEST-- Test function slurm_ping() by calling it with its expected arguments --SKIPIF-- --FILE-- --EXPECT-- int(0) slurm-slurm-15-08-7-1/contribs/phpext/slurm_php/tests/slurm_ping_error.phpt000066400000000000000000000017531265000126300271260ustar00rootroot00000000000000--TEST-- Test function slurm_ping() by calling it more than or less than its expected arguments --CREDIT-- Jimmy Tang --SKIPIF-- --FILE-- --EXPECTF-- *** Test by calling method or function with incorrect numbers of arguments *** ! slurm_ping Array == 0 ok ! slurm_ping Array == -1 ok ! slurm_ping Array == 0 ok ! 
slurm_ping Array == -1 ok slurm-slurm-15-08-7-1/contribs/phpext/slurm_php/tests/slurm_print_partition_names_basic.phpt000066400000000000000000000020401265000126300325170ustar00rootroot00000000000000--TEST-- Test function slurm_print_partition_names() by calling it with its expected arguments --CREDIT-- Jimmy Tang --SKIPIF-- --FILE-- --EXPECT-- *** Test by calling method or function with its expected arguments *** ! slurm_print_partition_names : SUCCESS slurm-slurm-15-08-7-1/contribs/phpext/slurm_php/tests/slurm_slurmd_status_basic.phpt000066400000000000000000000016461265000126300310330ustar00rootroot00000000000000--TEST-- Test function slurm_slurmd_status() by calling it with its expected arguments --CREDIT-- Peter Vermeulen --SKIPIF-- --FILE-- --EXPECT-- *** Test by calling method or function with its expected arguments *** ! slurm_slurmd_status() : SUCCESS slurm-slurm-15-08-7-1/contribs/phpext/slurm_php/tests/slurm_version_basic.phpt000066400000000000000000000016501265000126300276020ustar00rootroot00000000000000--TEST-- Test function slurm_version() by calling it with its expected arguments --CREDIT-- Jimmy Tang Peter Vermeulen --SKIPIF-- --FILE-- 0)) { echo "! slurm_version : SUCCESS"; } else if($ver == -3) { echo "[SLURM:ERROR] -3 : Faulty variables ( or no variables ) where passed on"; } else if($ver == -2) { echo "[SLURM:ERROR] -2 : Daemons not online"; } else if($ver == -1) { echo "[SLURM:ERROR] -1 : No version was found on the system"; } ?> --EXPECT-- *** Test by calling method or function with its expected arguments *** ! 
slurm_version : SUCCESS

slurm-slurm-15-08-7-1/contribs/pmi2/COPYRIGHT

			   COPYRIGHT

The following is a notice of limited availability of the code, and
disclaimer which must be included in the prologue of the code and in
all source listings of the code.

Copyright Notice
 (C) 2002 University of Chicago

Permission is hereby granted to use, reproduce, prepare derivative
works, and to redistribute to others.  This software was authored by:

Mathematics and Computer Science Division
Argonne National Laboratory, Argonne IL 60439

(and)

Department of Computer Science
University of Illinois at Urbana-Champaign

			   GOVERNMENT LICENSE

Portions of this material resulted from work developed under a
U.S. Government Contract and are subject to the following license: the
Government is granted for itself and others acting on its behalf a
paid-up, nonexclusive, irrevocable worldwide license in this computer
software to reproduce, prepare derivative works, and perform publicly
and display publicly.

			   DISCLAIMER

This computer code material was prepared, in part, as an account of
work sponsored by an agency of the United States Government.  Neither
the United States, nor the University of Chicago, nor any of their
employees, makes any warranty express or implied, or assumes any legal
liability or responsibility for the accuracy, completeness, or
usefulness of any information, apparatus, product, or process
disclosed, or represents that its use would not infringe privately
owned rights.

slurm-slurm-15-08-7-1/contribs/pmi2/Makefile.am
# Makefile for PMI2 client side library.
#
AUTOMAKE_OPTIONS = foreign

pkginclude_HEADERS = slurm/pmi2.h

noinst_HEADERS = pmi2_util.h

if WITH_GNU_LD
PMI2_VERSION_SCRIPT = \
	pmi2_version.map
PMI2_OTHER_FLAGS = \
	-Wl,--version-script=$(PMI2_VERSION_SCRIPT)
endif

libpmi2_current = 0
libpmi2_age = 0
libpmi2_rev = 0

BUILT_SOURCES = $(PMI2_VERSION_SCRIPT)

lib_LTLIBRARIES = libpmi2.la
libpmi2_la_SOURCES = pmi2_api.c pmi2_util.c slurm/pmi2.h
libpmi2_la_LDFLAGS = $(LIB_LDFLAGS) -version-info $(libpmi2_current):$(libpmi2_rev):$(libpmi2_age) \
	$(PMI2_OTHER_FLAGS)

$(PMI2_VERSION_SCRIPT) :
	(echo "{ global:"; \
	 echo " PMI2_*;"; \
	 echo " PMIX_*;"; \
	 echo " local: *;"; \
	 echo "};") > $(PMI2_VERSION_SCRIPT)

# Note: these previously referenced the undefined $(PMI_VERSION_SCRIPT),
# so the generated map file was never cleaned.
CLEANFILES = \
	$(PMI2_VERSION_SCRIPT)

DISTCLEANFILES = \
	$(PMI2_VERSION_SCRIPT)

slurm-slurm-15-08-7-1/contribs/pmi2/Makefile.in
# Makefile.in generated by automake 1.14.1 from Makefile.am.
# @configure_input@

# Copyright (C) 1994-2013 Free Software Foundation, Inc.

# This Makefile.in is free software; the Free Software Foundation
# gives unlimited permission to copy and/or distribute it,
# with or without modifications, as long as this notice is preserved.

# This program is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY, to the extent permitted by law; without
# even the implied warranty of MERCHANTABILITY or FITNESS FOR A
# PARTICULAR PURPOSE.

@SET_MAKE@

# Makefile for PMI2 client side library.
#
VPATH = @srcdir@
am__is_gnu_make = test -n '$(MAKEFILE_LIST)' && test -n '$(MAKELEVEL)'
am__make_running_with_option = \
  case $${target_option-} in \
      ?)
;; \ *) echo "am__make_running_with_option: internal error: invalid" \ "target option '$${target_option-}' specified" >&2; \ exit 1;; \ esac; \ has_opt=no; \ sane_makeflags=$$MAKEFLAGS; \ if $(am__is_gnu_make); then \ sane_makeflags=$$MFLAGS; \ else \ case $$MAKEFLAGS in \ *\\[\ \ ]*) \ bs=\\; \ sane_makeflags=`printf '%s\n' "$$MAKEFLAGS" \ | sed "s/$$bs$$bs[$$bs $$bs ]*//g"`;; \ esac; \ fi; \ skip_next=no; \ strip_trailopt () \ { \ flg=`printf '%s\n' "$$flg" | sed "s/$$1.*$$//"`; \ }; \ for flg in $$sane_makeflags; do \ test $$skip_next = yes && { skip_next=no; continue; }; \ case $$flg in \ *=*|--*) continue;; \ -*I) strip_trailopt 'I'; skip_next=yes;; \ -*I?*) strip_trailopt 'I';; \ -*O) strip_trailopt 'O'; skip_next=yes;; \ -*O?*) strip_trailopt 'O';; \ -*l) strip_trailopt 'l'; skip_next=yes;; \ -*l?*) strip_trailopt 'l';; \ -[dEDm]) skip_next=yes;; \ -[JT]) skip_next=yes;; \ esac; \ case $$flg in \ *$$target_option*) has_opt=yes; break;; \ esac; \ done; \ test $$has_opt = yes am__make_dryrun = (target_option=n; $(am__make_running_with_option)) am__make_keepgoing = (target_option=k; $(am__make_running_with_option)) pkgdatadir = $(datadir)/@PACKAGE@ pkgincludedir = $(includedir)/@PACKAGE@ pkglibdir = $(libdir)/@PACKAGE@ pkglibexecdir = $(libexecdir)/@PACKAGE@ am__cd = CDPATH="$${ZSH_VERSION+.}$(PATH_SEPARATOR)" && cd install_sh_DATA = $(install_sh) -c -m 644 install_sh_PROGRAM = $(install_sh) -c install_sh_SCRIPT = $(install_sh) -c INSTALL_HEADER = $(INSTALL_DATA) transform = $(program_transform_name) NORMAL_INSTALL = : PRE_INSTALL = : POST_INSTALL = : NORMAL_UNINSTALL = : PRE_UNINSTALL = : POST_UNINSTALL = : build_triplet = @build@ host_triplet = @host@ target_triplet = @target@ subdir = contribs/pmi2 DIST_COMMON = $(srcdir)/Makefile.in $(srcdir)/Makefile.am \ $(top_srcdir)/auxdir/depcomp $(noinst_HEADERS) \ $(pkginclude_HEADERS) README ACLOCAL_M4 = $(top_srcdir)/aclocal.m4 am__aclocal_m4_deps = $(top_srcdir)/auxdir/ax_lib_hdf5.m4 \ 
$(top_srcdir)/auxdir/ax_pthread.m4 \ $(top_srcdir)/auxdir/libtool.m4 \ $(top_srcdir)/auxdir/ltoptions.m4 \ $(top_srcdir)/auxdir/ltsugar.m4 \ $(top_srcdir)/auxdir/ltversion.m4 \ $(top_srcdir)/auxdir/lt~obsolete.m4 \ $(top_srcdir)/auxdir/slurm.m4 \ $(top_srcdir)/auxdir/x_ac__system_configuration.m4 \ $(top_srcdir)/auxdir/x_ac_affinity.m4 \ $(top_srcdir)/auxdir/x_ac_aix.m4 \ $(top_srcdir)/auxdir/x_ac_blcr.m4 \ $(top_srcdir)/auxdir/x_ac_bluegene.m4 \ $(top_srcdir)/auxdir/x_ac_cflags.m4 \ $(top_srcdir)/auxdir/x_ac_cray.m4 \ $(top_srcdir)/auxdir/x_ac_curl.m4 \ $(top_srcdir)/auxdir/x_ac_databases.m4 \ $(top_srcdir)/auxdir/x_ac_debug.m4 \ $(top_srcdir)/auxdir/x_ac_dlfcn.m4 \ $(top_srcdir)/auxdir/x_ac_env.m4 \ $(top_srcdir)/auxdir/x_ac_freeipmi.m4 \ $(top_srcdir)/auxdir/x_ac_gpl_licensed.m4 \ $(top_srcdir)/auxdir/x_ac_hwloc.m4 \ $(top_srcdir)/auxdir/x_ac_iso.m4 \ $(top_srcdir)/auxdir/x_ac_json.m4 \ $(top_srcdir)/auxdir/x_ac_lua.m4 \ $(top_srcdir)/auxdir/x_ac_man2html.m4 \ $(top_srcdir)/auxdir/x_ac_munge.m4 \ $(top_srcdir)/auxdir/x_ac_ncurses.m4 \ $(top_srcdir)/auxdir/x_ac_netloc.m4 \ $(top_srcdir)/auxdir/x_ac_nrt.m4 \ $(top_srcdir)/auxdir/x_ac_ofed.m4 \ $(top_srcdir)/auxdir/x_ac_pam.m4 \ $(top_srcdir)/auxdir/x_ac_printf_null.m4 \ $(top_srcdir)/auxdir/x_ac_ptrace.m4 \ $(top_srcdir)/auxdir/x_ac_readline.m4 \ $(top_srcdir)/auxdir/x_ac_rrdtool.m4 \ $(top_srcdir)/auxdir/x_ac_setpgrp.m4 \ $(top_srcdir)/auxdir/x_ac_setproctitle.m4 \ $(top_srcdir)/auxdir/x_ac_sgi_job.m4 \ $(top_srcdir)/auxdir/x_ac_slurm_ssl.m4 \ $(top_srcdir)/auxdir/x_ac_sun_const.m4 \ $(top_srcdir)/configure.ac am__configure_deps = $(am__aclocal_m4_deps) $(CONFIGURE_DEPENDENCIES) \ $(ACLOCAL_M4) mkinstalldirs = $(install_sh) -d CONFIG_HEADER = $(top_builddir)/config.h $(top_builddir)/slurm/slurm.h CONFIG_CLEAN_FILES = CONFIG_CLEAN_VPATH_FILES = am__vpath_adj_setup = srcdirstrip=`echo "$(srcdir)" | sed 's|.|.|g'`; am__vpath_adj = case $$p in \ $(srcdir)/*) f=`echo "$$p" | sed "s|^$$srcdirstrip/||"`;; \ *) f=$$p;; \ 
esac; am__strip_dir = f=`echo $$p | sed -e 's|^.*/||'`; am__install_max = 40 am__nobase_strip_setup = \ srcdirstrip=`echo "$(srcdir)" | sed 's/[].[^$$\\*|]/\\\\&/g'` am__nobase_strip = \ for p in $$list; do echo "$$p"; done | sed -e "s|$$srcdirstrip/||" am__nobase_list = $(am__nobase_strip_setup); \ for p in $$list; do echo "$$p $$p"; done | \ sed "s| $$srcdirstrip/| |;"' / .*\//!s/ .*/ ./; s,\( .*\)/[^/]*$$,\1,' | \ $(AWK) 'BEGIN { files["."] = "" } { files[$$2] = files[$$2] " " $$1; \ if (++n[$$2] == $(am__install_max)) \ { print $$2, files[$$2]; n[$$2] = 0; files[$$2] = "" } } \ END { for (dir in files) print dir, files[dir] }' am__base_list = \ sed '$$!N;$$!N;$$!N;$$!N;$$!N;$$!N;$$!N;s/\n/ /g' | \ sed '$$!N;$$!N;$$!N;$$!N;s/\n/ /g' am__uninstall_files_from_dir = { \ test -z "$$files" \ || { test ! -d "$$dir" && test ! -f "$$dir" && test ! -r "$$dir"; } \ || { echo " ( cd '$$dir' && rm -f" $$files ")"; \ $(am__cd) "$$dir" && rm -f $$files; }; \ } am__installdirs = "$(DESTDIR)$(libdir)" "$(DESTDIR)$(pkgincludedir)" LTLIBRARIES = $(lib_LTLIBRARIES) libpmi2_la_LIBADD = am_libpmi2_la_OBJECTS = pmi2_api.lo pmi2_util.lo libpmi2_la_OBJECTS = $(am_libpmi2_la_OBJECTS) AM_V_lt = $(am__v_lt_@AM_V@) am__v_lt_ = $(am__v_lt_@AM_DEFAULT_V@) am__v_lt_0 = --silent am__v_lt_1 = libpmi2_la_LINK = $(LIBTOOL) $(AM_V_lt) --tag=CC $(AM_LIBTOOLFLAGS) \ $(LIBTOOLFLAGS) --mode=link $(CCLD) $(AM_CFLAGS) $(CFLAGS) \ $(libpmi2_la_LDFLAGS) $(LDFLAGS) -o $@ AM_V_P = $(am__v_P_@AM_V@) am__v_P_ = $(am__v_P_@AM_DEFAULT_V@) am__v_P_0 = false am__v_P_1 = : AM_V_GEN = $(am__v_GEN_@AM_V@) am__v_GEN_ = $(am__v_GEN_@AM_DEFAULT_V@) am__v_GEN_0 = @echo " GEN " $@; am__v_GEN_1 = AM_V_at = $(am__v_at_@AM_V@) am__v_at_ = $(am__v_at_@AM_DEFAULT_V@) am__v_at_0 = @ am__v_at_1 = DEFAULT_INCLUDES = -I.@am__isrc@ -I$(top_builddir) -I$(top_builddir)/slurm depcomp = $(SHELL) $(top_srcdir)/auxdir/depcomp am__depfiles_maybe = depfiles am__mv = mv -f COMPILE = $(CC) $(DEFS) $(DEFAULT_INCLUDES) $(INCLUDES) 
$(AM_CPPFLAGS) \ $(CPPFLAGS) $(AM_CFLAGS) $(CFLAGS) LTCOMPILE = $(LIBTOOL) $(AM_V_lt) --tag=CC $(AM_LIBTOOLFLAGS) \ $(LIBTOOLFLAGS) --mode=compile $(CC) $(DEFS) \ $(DEFAULT_INCLUDES) $(INCLUDES) $(AM_CPPFLAGS) $(CPPFLAGS) \ $(AM_CFLAGS) $(CFLAGS) AM_V_CC = $(am__v_CC_@AM_V@) am__v_CC_ = $(am__v_CC_@AM_DEFAULT_V@) am__v_CC_0 = @echo " CC " $@; am__v_CC_1 = CCLD = $(CC) LINK = $(LIBTOOL) $(AM_V_lt) --tag=CC $(AM_LIBTOOLFLAGS) \ $(LIBTOOLFLAGS) --mode=link $(CCLD) $(AM_CFLAGS) $(CFLAGS) \ $(AM_LDFLAGS) $(LDFLAGS) -o $@ AM_V_CCLD = $(am__v_CCLD_@AM_V@) am__v_CCLD_ = $(am__v_CCLD_@AM_DEFAULT_V@) am__v_CCLD_0 = @echo " CCLD " $@; am__v_CCLD_1 = SOURCES = $(libpmi2_la_SOURCES) DIST_SOURCES = $(libpmi2_la_SOURCES) am__can_run_installinfo = \ case $$AM_UPDATE_INFO_DIR in \ n|no|NO) false;; \ *) (install-info --version) >/dev/null 2>&1;; \ esac HEADERS = $(noinst_HEADERS) $(pkginclude_HEADERS) am__tagged_files = $(HEADERS) $(SOURCES) $(TAGS_FILES) $(LISP) # Read a list of newline-separated strings from the standard input, # and print each of them once, without duplicates. Input order is # *not* preserved. am__uniquify_input = $(AWK) '\ BEGIN { nonempty = 0; } \ { items[$$0] = 1; nonempty = 1; } \ END { if (nonempty) { for (i in items) print i; }; } \ ' # Make sure the list of sources is unique. This is necessary because, # e.g., the same source file might be shared among _SOURCES variables # for different programs/libraries. 
am__define_uniq_tagged_files = \ list='$(am__tagged_files)'; \ unique=`for i in $$list; do \ if test -f "$$i"; then echo $$i; else echo $(srcdir)/$$i; fi; \ done | $(am__uniquify_input)` ETAGS = etags CTAGS = ctags DISTFILES = $(DIST_COMMON) $(DIST_SOURCES) $(TEXINFOS) $(EXTRA_DIST) ACLOCAL = @ACLOCAL@ AMTAR = @AMTAR@ AM_DEFAULT_VERBOSITY = @AM_DEFAULT_VERBOSITY@ AR = @AR@ AUTHD_CFLAGS = @AUTHD_CFLAGS@ AUTHD_LIBS = @AUTHD_LIBS@ AUTOCONF = @AUTOCONF@ AUTOHEADER = @AUTOHEADER@ AUTOMAKE = @AUTOMAKE@ AWK = @AWK@ BGL_LOADED = @BGL_LOADED@ BGQ_LOADED = @BGQ_LOADED@ BG_INCLUDES = @BG_INCLUDES@ BG_LDFLAGS = @BG_LDFLAGS@ BG_L_P_LOADED = @BG_L_P_LOADED@ BLCR_CPPFLAGS = @BLCR_CPPFLAGS@ BLCR_HOME = @BLCR_HOME@ BLCR_LDFLAGS = @BLCR_LDFLAGS@ BLCR_LIBS = @BLCR_LIBS@ BLUEGENE_LOADED = @BLUEGENE_LOADED@ CC = @CC@ CCDEPMODE = @CCDEPMODE@ CFLAGS = @CFLAGS@ CHECK_CFLAGS = @CHECK_CFLAGS@ CHECK_LIBS = @CHECK_LIBS@ CMD_LDFLAGS = @CMD_LDFLAGS@ CPP = @CPP@ CPPFLAGS = @CPPFLAGS@ CRAY_JOB_CPPFLAGS = @CRAY_JOB_CPPFLAGS@ CRAY_JOB_LDFLAGS = @CRAY_JOB_LDFLAGS@ CRAY_SELECT_CPPFLAGS = @CRAY_SELECT_CPPFLAGS@ CRAY_SELECT_LDFLAGS = @CRAY_SELECT_LDFLAGS@ CRAY_SWITCH_CPPFLAGS = @CRAY_SWITCH_CPPFLAGS@ CRAY_SWITCH_LDFLAGS = @CRAY_SWITCH_LDFLAGS@ CRAY_TASK_CPPFLAGS = @CRAY_TASK_CPPFLAGS@ CRAY_TASK_LDFLAGS = @CRAY_TASK_LDFLAGS@ CXX = @CXX@ CXXCPP = @CXXCPP@ CXXDEPMODE = @CXXDEPMODE@ CXXFLAGS = @CXXFLAGS@ CYGPATH_W = @CYGPATH_W@ DATAWARP_CPPFLAGS = @DATAWARP_CPPFLAGS@ DATAWARP_LDFLAGS = @DATAWARP_LDFLAGS@ DEFS = @DEFS@ DEPDIR = @DEPDIR@ DLLTOOL = @DLLTOOL@ DL_LIBS = @DL_LIBS@ DSYMUTIL = @DSYMUTIL@ DUMPBIN = @DUMPBIN@ ECHO_C = @ECHO_C@ ECHO_N = @ECHO_N@ ECHO_T = @ECHO_T@ EGREP = @EGREP@ EXEEXT = @EXEEXT@ FGREP = @FGREP@ FREEIPMI_CPPFLAGS = @FREEIPMI_CPPFLAGS@ FREEIPMI_LDFLAGS = @FREEIPMI_LDFLAGS@ FREEIPMI_LIBS = @FREEIPMI_LIBS@ GLIB_CFLAGS = @GLIB_CFLAGS@ GLIB_COMPILE_RESOURCES = @GLIB_COMPILE_RESOURCES@ GLIB_GENMARSHAL = @GLIB_GENMARSHAL@ GLIB_LIBS = @GLIB_LIBS@ GLIB_MKENUMS = @GLIB_MKENUMS@ GOBJECT_QUERY = 
@GOBJECT_QUERY@ GREP = @GREP@ GTK_CFLAGS = @GTK_CFLAGS@ GTK_LIBS = @GTK_LIBS@ H5CC = @H5CC@ H5FC = @H5FC@ HAVEMYSQLCONFIG = @HAVEMYSQLCONFIG@ HAVE_AIX = @HAVE_AIX@ HAVE_MAN2HTML = @HAVE_MAN2HTML@ HAVE_NRT = @HAVE_NRT@ HAVE_OPENSSL = @HAVE_OPENSSL@ HAVE_SOME_CURSES = @HAVE_SOME_CURSES@ HDF5_CC = @HDF5_CC@ HDF5_CFLAGS = @HDF5_CFLAGS@ HDF5_CPPFLAGS = @HDF5_CPPFLAGS@ HDF5_FC = @HDF5_FC@ HDF5_FFLAGS = @HDF5_FFLAGS@ HDF5_FLIBS = @HDF5_FLIBS@ HDF5_LDFLAGS = @HDF5_LDFLAGS@ HDF5_LIBS = @HDF5_LIBS@ HDF5_VERSION = @HDF5_VERSION@ HWLOC_CPPFLAGS = @HWLOC_CPPFLAGS@ HWLOC_LDFLAGS = @HWLOC_LDFLAGS@ HWLOC_LIBS = @HWLOC_LIBS@ INSTALL = @INSTALL@ INSTALL_DATA = @INSTALL_DATA@ INSTALL_PROGRAM = @INSTALL_PROGRAM@ INSTALL_SCRIPT = @INSTALL_SCRIPT@ INSTALL_STRIP_PROGRAM = @INSTALL_STRIP_PROGRAM@ JSON_CPPFLAGS = @JSON_CPPFLAGS@ JSON_LDFLAGS = @JSON_LDFLAGS@ LD = @LD@ LDFLAGS = @LDFLAGS@ LIBCURL = @LIBCURL@ LIBCURL_CPPFLAGS = @LIBCURL_CPPFLAGS@ LIBOBJS = @LIBOBJS@ LIBS = @LIBS@ LIBTOOL = @LIBTOOL@ LIB_LDFLAGS = @LIB_LDFLAGS@ LIPO = @LIPO@ LN_S = @LN_S@ LTLIBOBJS = @LTLIBOBJS@ MAINT = @MAINT@ MAKEINFO = @MAKEINFO@ MANIFEST_TOOL = @MANIFEST_TOOL@ MKDIR_P = @MKDIR_P@ MUNGE_CPPFLAGS = @MUNGE_CPPFLAGS@ MUNGE_DIR = @MUNGE_DIR@ MUNGE_LDFLAGS = @MUNGE_LDFLAGS@ MUNGE_LIBS = @MUNGE_LIBS@ MYSQL_CFLAGS = @MYSQL_CFLAGS@ MYSQL_LIBS = @MYSQL_LIBS@ NCURSES = @NCURSES@ NETLOC_CPPFLAGS = @NETLOC_CPPFLAGS@ NETLOC_LDFLAGS = @NETLOC_LDFLAGS@ NETLOC_LIBS = @NETLOC_LIBS@ NM = @NM@ NMEDIT = @NMEDIT@ NRT_CPPFLAGS = @NRT_CPPFLAGS@ NUMA_LIBS = @NUMA_LIBS@ OBJDUMP = @OBJDUMP@ OBJEXT = @OBJEXT@ OFED_CPPFLAGS = @OFED_CPPFLAGS@ OFED_LDFLAGS = @OFED_LDFLAGS@ OFED_LIBS = @OFED_LIBS@ OTOOL = @OTOOL@ OTOOL64 = @OTOOL64@ PACKAGE = @PACKAGE@ PACKAGE_BUGREPORT = @PACKAGE_BUGREPORT@ PACKAGE_NAME = @PACKAGE_NAME@ PACKAGE_STRING = @PACKAGE_STRING@ PACKAGE_TARNAME = @PACKAGE_TARNAME@ PACKAGE_URL = @PACKAGE_URL@ PACKAGE_VERSION = @PACKAGE_VERSION@ PAM_DIR = @PAM_DIR@ PAM_LIBS = @PAM_LIBS@ PATH_SEPARATOR = @PATH_SEPARATOR@ 
PKG_CONFIG = @PKG_CONFIG@ PKG_CONFIG_LIBDIR = @PKG_CONFIG_LIBDIR@ PKG_CONFIG_PATH = @PKG_CONFIG_PATH@ PROCTRACKDIR = @PROCTRACKDIR@ PROJECT = @PROJECT@ PTHREAD_CC = @PTHREAD_CC@ PTHREAD_CFLAGS = @PTHREAD_CFLAGS@ PTHREAD_LIBS = @PTHREAD_LIBS@ RANLIB = @RANLIB@ READLINE_LIBS = @READLINE_LIBS@ REAL_BGQ_LOADED = @REAL_BGQ_LOADED@ REAL_BG_L_P_LOADED = @REAL_BG_L_P_LOADED@ RELEASE = @RELEASE@ RRDTOOL_CPPFLAGS = @RRDTOOL_CPPFLAGS@ RRDTOOL_LDFLAGS = @RRDTOOL_LDFLAGS@ RRDTOOL_LIBS = @RRDTOOL_LIBS@ RUNJOB_LDFLAGS = @RUNJOB_LDFLAGS@ SED = @SED@ SEMAPHORE_LIBS = @SEMAPHORE_LIBS@ SEMAPHORE_SOURCES = @SEMAPHORE_SOURCES@ SET_MAKE = @SET_MAKE@ SHELL = @SHELL@ SLEEP_CMD = @SLEEP_CMD@ SLURMCTLD_PORT = @SLURMCTLD_PORT@ SLURMCTLD_PORT_COUNT = @SLURMCTLD_PORT_COUNT@ SLURMDBD_PORT = @SLURMDBD_PORT@ SLURMD_PORT = @SLURMD_PORT@ SLURM_API_AGE = @SLURM_API_AGE@ SLURM_API_CURRENT = @SLURM_API_CURRENT@ SLURM_API_MAJOR = @SLURM_API_MAJOR@ SLURM_API_REVISION = @SLURM_API_REVISION@ SLURM_API_VERSION = @SLURM_API_VERSION@ SLURM_MAJOR = @SLURM_MAJOR@ SLURM_MICRO = @SLURM_MICRO@ SLURM_MINOR = @SLURM_MINOR@ SLURM_PREFIX = @SLURM_PREFIX@ SLURM_VERSION_NUMBER = @SLURM_VERSION_NUMBER@ SLURM_VERSION_STRING = @SLURM_VERSION_STRING@ SO_LDFLAGS = @SO_LDFLAGS@ SSL_CPPFLAGS = @SSL_CPPFLAGS@ SSL_LDFLAGS = @SSL_LDFLAGS@ SSL_LIBS = @SSL_LIBS@ STRIP = @STRIP@ SUCMD = @SUCMD@ UTIL_LIBS = @UTIL_LIBS@ VERSION = @VERSION@ _libcurl_config = @_libcurl_config@ abs_builddir = @abs_builddir@ abs_srcdir = @abs_srcdir@ abs_top_builddir = @abs_top_builddir@ abs_top_srcdir = @abs_top_srcdir@ ac_ct_AR = @ac_ct_AR@ ac_ct_CC = @ac_ct_CC@ ac_ct_CXX = @ac_ct_CXX@ ac_ct_DUMPBIN = @ac_ct_DUMPBIN@ ac_have_man2html = @ac_have_man2html@ am__include = @am__include@ am__leading_dot = @am__leading_dot@ am__quote = @am__quote@ am__tar = @am__tar@ am__untar = @am__untar@ ax_pthread_config = @ax_pthread_config@ bindir = @bindir@ build = @build@ build_alias = @build_alias@ build_cpu = @build_cpu@ build_os = @build_os@ build_vendor = 
@build_vendor@ builddir = @builddir@ datadir = @datadir@ datarootdir = @datarootdir@ docdir = @docdir@ dvidir = @dvidir@ exec_prefix = @exec_prefix@ host = @host@ host_alias = @host_alias@ host_cpu = @host_cpu@ host_os = @host_os@ host_vendor = @host_vendor@ htmldir = @htmldir@ includedir = @includedir@ infodir = @infodir@ install_sh = @install_sh@ libdir = @libdir@ libexecdir = @libexecdir@ localedir = @localedir@ localstatedir = @localstatedir@ lua_CFLAGS = @lua_CFLAGS@ lua_LIBS = @lua_LIBS@ mandir = @mandir@ mkdir_p = @mkdir_p@ oldincludedir = @oldincludedir@ pdfdir = @pdfdir@ prefix = @prefix@ program_transform_name = @program_transform_name@ psdir = @psdir@ sbindir = @sbindir@ sharedstatedir = @sharedstatedir@ srcdir = @srcdir@ sysconfdir = @sysconfdir@ target = @target@ target_alias = @target_alias@ target_cpu = @target_cpu@ target_os = @target_os@ target_vendor = @target_vendor@ top_build_prefix = @top_build_prefix@ top_builddir = @top_builddir@ top_srcdir = @top_srcdir@ AUTOMAKE_OPTIONS = foreign pkginclude_HEADERS = slurm/pmi2.h noinst_HEADERS = pmi2_util.h @WITH_GNU_LD_TRUE@PMI2_VERSION_SCRIPT = \ @WITH_GNU_LD_TRUE@ pmi2_version.map @WITH_GNU_LD_TRUE@PMI2_OTHER_FLAGS = \ @WITH_GNU_LD_TRUE@ -Wl,--version-script=$(PMI2_VERSION_SCRIPT) libpmi2_current = 0 libpmi2_age = 0 libpmi2_rev = 0 BUILT_SOURCES = $(PMI2_VERSION_SCRIPT) lib_LTLIBRARIES = libpmi2.la libpmi2_la_SOURCES = pmi2_api.c pmi2_util.c slurm/pmi2.h libpmi2_la_LDFLAGS = $(LIB_LDFLAGS) -version-info $(libpmi2_current):$(libpmi2_rev):$(libpmi2_age) \ $(PMI2_OTHER_FLAGS) CLEANFILES = \ $(PMI_VERSION_SCRIPT) DISTCLEANFILES = \ $(PMI_VERSION_SCRIPT) all: $(BUILT_SOURCES) $(MAKE) $(AM_MAKEFLAGS) all-am .SUFFIXES: .SUFFIXES: .c .lo .o .obj $(srcdir)/Makefile.in: @MAINTAINER_MODE_TRUE@ $(srcdir)/Makefile.am $(am__configure_deps) @for dep in $?; do \ case '$(am__configure_deps)' in \ *$$dep*) \ ( cd $(top_builddir) && $(MAKE) $(AM_MAKEFLAGS) am--refresh ) \ && { if test -f $@; then exit 0; else break; fi; 
}; \ exit 1;; \ esac; \ done; \ echo ' cd $(top_srcdir) && $(AUTOMAKE) --foreign contribs/pmi2/Makefile'; \ $(am__cd) $(top_srcdir) && \ $(AUTOMAKE) --foreign contribs/pmi2/Makefile .PRECIOUS: Makefile Makefile: $(srcdir)/Makefile.in $(top_builddir)/config.status @case '$?' in \ *config.status*) \ cd $(top_builddir) && $(MAKE) $(AM_MAKEFLAGS) am--refresh;; \ *) \ echo ' cd $(top_builddir) && $(SHELL) ./config.status $(subdir)/$@ $(am__depfiles_maybe)'; \ cd $(top_builddir) && $(SHELL) ./config.status $(subdir)/$@ $(am__depfiles_maybe);; \ esac; $(top_builddir)/config.status: $(top_srcdir)/configure $(CONFIG_STATUS_DEPENDENCIES) cd $(top_builddir) && $(MAKE) $(AM_MAKEFLAGS) am--refresh $(top_srcdir)/configure: @MAINTAINER_MODE_TRUE@ $(am__configure_deps) cd $(top_builddir) && $(MAKE) $(AM_MAKEFLAGS) am--refresh $(ACLOCAL_M4): @MAINTAINER_MODE_TRUE@ $(am__aclocal_m4_deps) cd $(top_builddir) && $(MAKE) $(AM_MAKEFLAGS) am--refresh $(am__aclocal_m4_deps): install-libLTLIBRARIES: $(lib_LTLIBRARIES) @$(NORMAL_INSTALL) @list='$(lib_LTLIBRARIES)'; test -n "$(libdir)" || list=; \ list2=; for p in $$list; do \ if test -f $$p; then \ list2="$$list2 $$p"; \ else :; fi; \ done; \ test -z "$$list2" || { \ echo " $(MKDIR_P) '$(DESTDIR)$(libdir)'"; \ $(MKDIR_P) "$(DESTDIR)$(libdir)" || exit 1; \ echo " $(LIBTOOL) $(AM_LIBTOOLFLAGS) $(LIBTOOLFLAGS) --mode=install $(INSTALL) $(INSTALL_STRIP_FLAG) $$list2 '$(DESTDIR)$(libdir)'"; \ $(LIBTOOL) $(AM_LIBTOOLFLAGS) $(LIBTOOLFLAGS) --mode=install $(INSTALL) $(INSTALL_STRIP_FLAG) $$list2 "$(DESTDIR)$(libdir)"; \ } uninstall-libLTLIBRARIES: @$(NORMAL_UNINSTALL) @list='$(lib_LTLIBRARIES)'; test -n "$(libdir)" || list=; \ for p in $$list; do \ $(am__strip_dir) \ echo " $(LIBTOOL) $(AM_LIBTOOLFLAGS) $(LIBTOOLFLAGS) --mode=uninstall rm -f '$(DESTDIR)$(libdir)/$$f'"; \ $(LIBTOOL) $(AM_LIBTOOLFLAGS) $(LIBTOOLFLAGS) --mode=uninstall rm -f "$(DESTDIR)$(libdir)/$$f"; \ done clean-libLTLIBRARIES: -test -z "$(lib_LTLIBRARIES)" || rm -f 
$(lib_LTLIBRARIES) @list='$(lib_LTLIBRARIES)'; \ locs=`for p in $$list; do echo $$p; done | \ sed 's|^[^/]*$$|.|; s|/[^/]*$$||; s|$$|/so_locations|' | \ sort -u`; \ test -z "$$locs" || { \ echo rm -f $${locs}; \ rm -f $${locs}; \ } libpmi2.la: $(libpmi2_la_OBJECTS) $(libpmi2_la_DEPENDENCIES) $(EXTRA_libpmi2_la_DEPENDENCIES) $(AM_V_CCLD)$(libpmi2_la_LINK) -rpath $(libdir) $(libpmi2_la_OBJECTS) $(libpmi2_la_LIBADD) $(LIBS) mostlyclean-compile: -rm -f *.$(OBJEXT) distclean-compile: -rm -f *.tab.c @AMDEP_TRUE@@am__include@ @am__quote@./$(DEPDIR)/pmi2_api.Plo@am__quote@ @AMDEP_TRUE@@am__include@ @am__quote@./$(DEPDIR)/pmi2_util.Plo@am__quote@ .c.o: @am__fastdepCC_TRUE@ $(AM_V_CC)$(COMPILE) -MT $@ -MD -MP -MF $(DEPDIR)/$*.Tpo -c -o $@ $< @am__fastdepCC_TRUE@ $(AM_V_at)$(am__mv) $(DEPDIR)/$*.Tpo $(DEPDIR)/$*.Po @AMDEP_TRUE@@am__fastdepCC_FALSE@ $(AM_V_CC)source='$<' object='$@' libtool=no @AMDEPBACKSLASH@ @AMDEP_TRUE@@am__fastdepCC_FALSE@ DEPDIR=$(DEPDIR) $(CCDEPMODE) $(depcomp) @AMDEPBACKSLASH@ @am__fastdepCC_FALSE@ $(AM_V_CC@am__nodep@)$(COMPILE) -c -o $@ $< .c.obj: @am__fastdepCC_TRUE@ $(AM_V_CC)$(COMPILE) -MT $@ -MD -MP -MF $(DEPDIR)/$*.Tpo -c -o $@ `$(CYGPATH_W) '$<'` @am__fastdepCC_TRUE@ $(AM_V_at)$(am__mv) $(DEPDIR)/$*.Tpo $(DEPDIR)/$*.Po @AMDEP_TRUE@@am__fastdepCC_FALSE@ $(AM_V_CC)source='$<' object='$@' libtool=no @AMDEPBACKSLASH@ @AMDEP_TRUE@@am__fastdepCC_FALSE@ DEPDIR=$(DEPDIR) $(CCDEPMODE) $(depcomp) @AMDEPBACKSLASH@ @am__fastdepCC_FALSE@ $(AM_V_CC@am__nodep@)$(COMPILE) -c -o $@ `$(CYGPATH_W) '$<'` .c.lo: @am__fastdepCC_TRUE@ $(AM_V_CC)$(LTCOMPILE) -MT $@ -MD -MP -MF $(DEPDIR)/$*.Tpo -c -o $@ $< @am__fastdepCC_TRUE@ $(AM_V_at)$(am__mv) $(DEPDIR)/$*.Tpo $(DEPDIR)/$*.Plo @AMDEP_TRUE@@am__fastdepCC_FALSE@ $(AM_V_CC)source='$<' object='$@' libtool=yes @AMDEPBACKSLASH@ @AMDEP_TRUE@@am__fastdepCC_FALSE@ DEPDIR=$(DEPDIR) $(CCDEPMODE) $(depcomp) @AMDEPBACKSLASH@ @am__fastdepCC_FALSE@ $(AM_V_CC@am__nodep@)$(LTCOMPILE) -c -o $@ $< mostlyclean-libtool: -rm -f *.lo 
clean-libtool: -rm -rf .libs _libs install-pkgincludeHEADERS: $(pkginclude_HEADERS) @$(NORMAL_INSTALL) @list='$(pkginclude_HEADERS)'; test -n "$(pkgincludedir)" || list=; \ if test -n "$$list"; then \ echo " $(MKDIR_P) '$(DESTDIR)$(pkgincludedir)'"; \ $(MKDIR_P) "$(DESTDIR)$(pkgincludedir)" || exit 1; \ fi; \ for p in $$list; do \ if test -f "$$p"; then d=; else d="$(srcdir)/"; fi; \ echo "$$d$$p"; \ done | $(am__base_list) | \ while read files; do \ echo " $(INSTALL_HEADER) $$files '$(DESTDIR)$(pkgincludedir)'"; \ $(INSTALL_HEADER) $$files "$(DESTDIR)$(pkgincludedir)" || exit $$?; \ done uninstall-pkgincludeHEADERS: @$(NORMAL_UNINSTALL) @list='$(pkginclude_HEADERS)'; test -n "$(pkgincludedir)" || list=; \ files=`for p in $$list; do echo $$p; done | sed -e 's|^.*/||'`; \ dir='$(DESTDIR)$(pkgincludedir)'; $(am__uninstall_files_from_dir) ID: $(am__tagged_files) $(am__define_uniq_tagged_files); mkid -fID $$unique tags: tags-am TAGS: tags tags-am: $(TAGS_DEPENDENCIES) $(am__tagged_files) set x; \ here=`pwd`; \ $(am__define_uniq_tagged_files); \ shift; \ if test -z "$(ETAGS_ARGS)$$*$$unique"; then :; else \ test -n "$$unique" || unique=$$empty_fix; \ if test $$# -gt 0; then \ $(ETAGS) $(ETAGSFLAGS) $(AM_ETAGSFLAGS) $(ETAGS_ARGS) \ "$$@" $$unique; \ else \ $(ETAGS) $(ETAGSFLAGS) $(AM_ETAGSFLAGS) $(ETAGS_ARGS) \ $$unique; \ fi; \ fi ctags: ctags-am CTAGS: ctags ctags-am: $(TAGS_DEPENDENCIES) $(am__tagged_files) $(am__define_uniq_tagged_files); \ test -z "$(CTAGS_ARGS)$$unique" \ || $(CTAGS) $(CTAGSFLAGS) $(AM_CTAGSFLAGS) $(CTAGS_ARGS) \ $$unique GTAGS: here=`$(am__cd) $(top_builddir) && pwd` \ && $(am__cd) $(top_srcdir) \ && gtags -i $(GTAGS_ARGS) "$$here" cscopelist: cscopelist-am cscopelist-am: $(am__tagged_files) list='$(am__tagged_files)'; \ case "$(srcdir)" in \ [\\/]* | ?:[\\/]*) sdir="$(srcdir)" ;; \ *) sdir=$(subdir)/$(srcdir) ;; \ esac; \ for i in $$list; do \ if test -f "$$i"; then \ echo "$(subdir)/$$i"; \ else \ echo "$$sdir/$$i"; \ fi; \ done >> 
$(top_builddir)/cscope.files distclean-tags: -rm -f TAGS ID GTAGS GRTAGS GSYMS GPATH tags distdir: $(DISTFILES) @srcdirstrip=`echo "$(srcdir)" | sed 's/[].[^$$\\*]/\\\\&/g'`; \ topsrcdirstrip=`echo "$(top_srcdir)" | sed 's/[].[^$$\\*]/\\\\&/g'`; \ list='$(DISTFILES)'; \ dist_files=`for file in $$list; do echo $$file; done | \ sed -e "s|^$$srcdirstrip/||;t" \ -e "s|^$$topsrcdirstrip/|$(top_builddir)/|;t"`; \ case $$dist_files in \ */*) $(MKDIR_P) `echo "$$dist_files" | \ sed '/\//!d;s|^|$(distdir)/|;s,/[^/]*$$,,' | \ sort -u` ;; \ esac; \ for file in $$dist_files; do \ if test -f $$file || test -d $$file; then d=.; else d=$(srcdir); fi; \ if test -d $$d/$$file; then \ dir=`echo "/$$file" | sed -e 's,/[^/]*$$,,'`; \ if test -d "$(distdir)/$$file"; then \ find "$(distdir)/$$file" -type d ! -perm -700 -exec chmod u+rwx {} \;; \ fi; \ if test -d $(srcdir)/$$file && test $$d != $(srcdir); then \ cp -fpR $(srcdir)/$$file "$(distdir)$$dir" || exit 1; \ find "$(distdir)/$$file" -type d ! -perm -700 -exec chmod u+rwx {} \;; \ fi; \ cp -fpR $$d/$$file "$(distdir)$$dir" || exit 1; \ else \ test -f "$(distdir)/$$file" \ || cp -p $$d/$$file "$(distdir)/$$file" \ || exit 1; \ fi; \ done check-am: all-am check: $(BUILT_SOURCES) $(MAKE) $(AM_MAKEFLAGS) check-am all-am: Makefile $(LTLIBRARIES) $(HEADERS) installdirs: for dir in "$(DESTDIR)$(libdir)" "$(DESTDIR)$(pkgincludedir)"; do \ test -z "$$dir" || $(MKDIR_P) "$$dir"; \ done install: $(BUILT_SOURCES) $(MAKE) $(AM_MAKEFLAGS) install-am install-exec: install-exec-am install-data: install-data-am uninstall: uninstall-am install-am: all-am @$(MAKE) $(AM_MAKEFLAGS) install-exec-am install-data-am installcheck: installcheck-am install-strip: if test -z '$(STRIP)'; then \ $(MAKE) $(AM_MAKEFLAGS) INSTALL_PROGRAM="$(INSTALL_STRIP_PROGRAM)" \ install_sh_PROGRAM="$(INSTALL_STRIP_PROGRAM)" INSTALL_STRIP_FLAG=-s \ install; \ else \ $(MAKE) $(AM_MAKEFLAGS) INSTALL_PROGRAM="$(INSTALL_STRIP_PROGRAM)" \ 
install_sh_PROGRAM="$(INSTALL_STRIP_PROGRAM)" INSTALL_STRIP_FLAG=-s \ "INSTALL_PROGRAM_ENV=STRIPPROG='$(STRIP)'" install; \ fi mostlyclean-generic: clean-generic: -test -z "$(CLEANFILES)" || rm -f $(CLEANFILES) distclean-generic: -test -z "$(CONFIG_CLEAN_FILES)" || rm -f $(CONFIG_CLEAN_FILES) -test . = "$(srcdir)" || test -z "$(CONFIG_CLEAN_VPATH_FILES)" || rm -f $(CONFIG_CLEAN_VPATH_FILES) -test -z "$(DISTCLEANFILES)" || rm -f $(DISTCLEANFILES) maintainer-clean-generic: @echo "This command is intended for maintainers to use" @echo "it deletes files that may require special tools to rebuild." -test -z "$(BUILT_SOURCES)" || rm -f $(BUILT_SOURCES) clean: clean-am clean-am: clean-generic clean-libLTLIBRARIES clean-libtool \ mostlyclean-am distclean: distclean-am -rm -rf ./$(DEPDIR) -rm -f Makefile distclean-am: clean-am distclean-compile distclean-generic \ distclean-tags dvi: dvi-am dvi-am: html: html-am html-am: info: info-am info-am: install-data-am: install-pkgincludeHEADERS install-dvi: install-dvi-am install-dvi-am: install-exec-am: install-libLTLIBRARIES install-html: install-html-am install-html-am: install-info: install-info-am install-info-am: install-man: install-pdf: install-pdf-am install-pdf-am: install-ps: install-ps-am install-ps-am: installcheck-am: maintainer-clean: maintainer-clean-am -rm -rf ./$(DEPDIR) -rm -f Makefile maintainer-clean-am: distclean-am maintainer-clean-generic mostlyclean: mostlyclean-am mostlyclean-am: mostlyclean-compile mostlyclean-generic \ mostlyclean-libtool pdf: pdf-am pdf-am: ps: ps-am ps-am: uninstall-am: uninstall-libLTLIBRARIES uninstall-pkgincludeHEADERS .MAKE: all check install install-am install-strip .PHONY: CTAGS GTAGS TAGS all all-am check check-am clean clean-generic \ clean-libLTLIBRARIES clean-libtool cscopelist-am ctags \ ctags-am distclean distclean-compile distclean-generic \ distclean-libtool distclean-tags distdir dvi dvi-am html \ html-am info info-am install install-am install-data \ install-data-am 
install-dvi install-dvi-am install-exec \ install-exec-am install-html install-html-am install-info \ install-info-am install-libLTLIBRARIES install-man install-pdf \ install-pdf-am install-pkgincludeHEADERS install-ps \ install-ps-am install-strip installcheck installcheck-am \ installdirs maintainer-clean maintainer-clean-generic \ mostlyclean mostlyclean-compile mostlyclean-generic \ mostlyclean-libtool pdf pdf-am ps ps-am tags tags-am uninstall \ uninstall-am uninstall-libLTLIBRARIES \ uninstall-pkgincludeHEADERS $(PMI2_VERSION_SCRIPT) : (echo "{ global:"; \ echo " PMI2_*;"; \ echo " PMIX_*;"; \ echo " local: *;"; \ echo "};") > $(PMI2_VERSION_SCRIPT) # Tell versions [3.59,3.63) of GNU make to not export all variables. # Otherwise a system limit (for SysV at least) may be exceeded. .NOEXPORT: slurm-slurm-15-08-7-1/contribs/pmi2/README000066400000000000000000000005151265000126300177000ustar00rootroot00000000000000# # Instructions on how to compile the example programs. # #export SLURM_ROOT=slurm_install # for example SLURM_ROOT=/home/david/clusters/master/linux # # gcc -g -O0 -o testpmi2 testpmi2.c -I$SLURM_ROOT/include $SLURM_ROOT/lib/libpmi2.so # # gcc -g -O0 -o testpmixring testpmixring.c -I$SLURM_ROOT/include $SLURM_ROOT/lib/libpmi2.so # slurm-slurm-15-08-7-1/contribs/pmi2/pmi2.h000066400000000000000000000616601265000126300200470ustar00rootroot00000000000000/* -*- Mode: C; c-basic-offset:4 ; -*- */ /* * (C) 2007 by Argonne National Laboratory. * See COPYRIGHT in top-level directory.
*/ #ifndef PMI2_H_INCLUDED #define PMI2_H_INCLUDED #ifndef USE_PMI2_API /*#error This header file defines the PMI2 API, but PMI2 was not selected*/ #endif #define PMI2_MAX_KEYLEN 64 #define PMI2_MAX_VALLEN 1024 #define PMI2_MAX_ATTRVALUE 1024 #define PMI2_ID_NULL -1 #define PMII_COMMANDLEN_SIZE 6 #define PMII_MAX_COMMAND_LEN (64*1024) #if defined(__cplusplus) extern "C" { #endif static const char FULLINIT_CMD[] = "fullinit"; static const char FULLINITRESP_CMD[] = "fullinit-response"; static const char FINALIZE_CMD[] = "finalize"; static const char FINALIZERESP_CMD[] = "finalize-response"; static const char ABORT_CMD[] = "abort"; static const char JOBGETID_CMD[] = "job-getid"; static const char JOBGETIDRESP_CMD[] = "job-getid-response"; static const char JOBCONNECT_CMD[] = "job-connect"; static const char JOBCONNECTRESP_CMD[] = "job-connect-response"; static const char JOBDISCONNECT_CMD[] = "job-disconnect"; static const char JOBDISCONNECTRESP_CMD[] = "job-disconnect-response"; static const char KVSPUT_CMD[] = "kvs-put"; static const char KVSPUTRESP_CMD[] = "kvs-put-response"; static const char KVSFENCE_CMD[] = "kvs-fence"; static const char KVSFENCERESP_CMD[] = "kvs-fence-response"; static const char KVSGET_CMD[] = "kvs-get"; static const char KVSGETRESP_CMD[] = "kvs-get-response"; static const char GETNODEATTR_CMD[] = "info-getnodeattr"; static const char GETNODEATTRRESP_CMD[] = "info-getnodeattr-response"; static const char PUTNODEATTR_CMD[] = "info-putnodeattr"; static const char PUTNODEATTRRESP_CMD[] = "info-putnodeattr-response"; static const char GETJOBATTR_CMD[] = "info-getjobattr"; static const char GETJOBATTRRESP_CMD[] = "info-getjobattr-response"; static const char NAMEPUBLISH_CMD[] = "name-publish"; static const char NAMEPUBLISHRESP_CMD[] = "name-publish-response"; static const char NAMEUNPUBLISH_CMD[] = "name-unpublish"; static const char NAMEUNPUBLISHRESP_CMD[] = "name-unpublish-response"; static const char NAMELOOKUP_CMD[] = "name-lookup"; static 
const char NAMELOOKUPRESP_CMD[] = "name-lookup-response"; static const char PMIJOBID_KEY[] = "pmijobid"; static const char PMIRANK_KEY[] = "pmirank"; static const char SRCID_KEY[] = "srcid"; static const char THREADED_KEY[] = "threaded"; static const char RC_KEY[] = "rc"; static const char ERRMSG_KEY[] = "errmsg"; static const char PMIVERSION_KEY[] = "pmi-version"; static const char PMISUBVER_KEY[] = "pmi-subversion"; static const char RANK_KEY[] = "rank"; static const char SIZE_KEY[] = "size"; static const char APPNUM_KEY[] = "appnum"; static const char SPAWNERJOBID_KEY[] = "spawner-jobid"; static const char DEBUGGED_KEY[] = "debugged"; static const char PMIVERBOSE_KEY[] = "pmiverbose"; static const char ISWORLD_KEY[] = "isworld"; static const char MSG_KEY[] = "msg"; static const char JOBID_KEY[] = "jobid"; static const char KVSCOPY_KEY[] = "kvscopy"; static const char KEY_KEY[] = "key"; static const char VALUE_KEY[] = "value"; static const char FOUND_KEY[] = "found"; static const char WAIT_KEY[] = "wait"; static const char NAME_KEY[] = "name"; static const char PORT_KEY[] = "port"; static const char THRID_KEY[] = "thrid"; static const char INFOKEYCOUNT_KEY[] = "infokeycount"; static const char INFOKEY_KEY[] = "infokey%d"; static const char INFOVAL_KEY[] = "infoval%d"; static const char TRUE_VAL[] = "TRUE"; static const char FALSE_VAL[] = "FALSE"; /* Local types */ /* Parse commands are in this structure. 
Fields in this structure are dynamically allocated as necessary */ typedef struct PMI2_Keyvalpair { const char *key; const char *value; int valueLen; /* Length of a value (values may contain nulls, so we need this) */ int isCopy; /* The value is a copy (and will need to be freed) if this is true, otherwise, it is a null-terminated string in the original buffer */ } PMI2_Keyvalpair; typedef struct PMI2_Command { int nPairs; /* Number of key=value pairs */ char *command; /* Overall command buffer */ PMI2_Keyvalpair **pairs; /* Array of pointers to pairs */ int complete; } PMI2_Command; /*D PMI2_CONSTANTS - PMI2 definitions Error Codes: + PMI2_SUCCESS - operation completed successfully . PMI2_FAIL - operation failed . PMI2_ERR_NOMEM - input buffer not large enough . PMI2_ERR_INIT - PMI not initialized . PMI2_ERR_INVALID_ARG - invalid argument . PMI2_ERR_INVALID_KEY - invalid key argument . PMI2_ERR_INVALID_KEY_LENGTH - invalid key length argument . PMI2_ERR_INVALID_VAL - invalid val argument . PMI2_ERR_INVALID_VAL_LENGTH - invalid val length argument . PMI2_ERR_INVALID_LENGTH - invalid length argument . PMI2_ERR_INVALID_NUM_ARGS - invalid number of arguments . PMI2_ERR_INVALID_ARGS - invalid args argument . PMI2_ERR_INVALID_NUM_PARSED - invalid num_parsed length argument . PMI2_ERR_INVALID_KEYVALP - invalid keyvalp argument . PMI2_ERR_INVALID_SIZE - invalid size argument - PMI2_ERR_OTHER - other unspecified error D*/ #define PMI2_SUCCESS 0 #define PMI2_FAIL -1 #define PMI2_ERR_INIT 1 #define PMI2_ERR_NOMEM 2 #define PMI2_ERR_INVALID_ARG 3 #define PMI2_ERR_INVALID_KEY 4 #define PMI2_ERR_INVALID_KEY_LENGTH 5 #define PMI2_ERR_INVALID_VAL 6 #define PMI2_ERR_INVALID_VAL_LENGTH 7 #define PMI2_ERR_INVALID_LENGTH 8 #define PMI2_ERR_INVALID_NUM_ARGS 9 #define PMI2_ERR_INVALID_ARGS 10 #define PMI2_ERR_INVALID_NUM_PARSED 11 #define PMI2_ERR_INVALID_KEYVALP 12 #define PMI2_ERR_INVALID_SIZE 13 #define PMI2_ERR_OTHER 14 /* This is here to allow spawn multiple functions to compile. 
This needs to be removed once those functions are fixed for pmi2 */ /* typedef struct PMI_keyval_t { char * key; char * val; } PMI_keyval_t; */ /*@ PMI2_Connect_comm_t - connection structure used when connecting to other jobs Fields: + read - Read from a connection to the leader of the job to which this process will be connecting. Returns 0 on success or an MPI error code on failure. . write - Write to a connection to the leader of the job to which this process will be connecting. Returns 0 on success or an MPI error code on failure. . ctx - An anonymous pointer to data that may be used by the read and write members. - isMaster - Indicates which process is the "master"; may have the values 1 (is the master), 0 (is not the master), or -1 (neither is designated as the master). The two processes must agree on which process is the master, or both must select -1 (neither is the master). Notes: A typical implementation of these functions will use the read and write calls on a pre-established file descriptor (fd) between the two leading processes. This will be needed only if the PMI server cannot access the KVS spaces of another job (this may happen, for example, if each mpiexec creates the KVS spaces for the processes that it manages). @*/ typedef struct PMI2_Connect_comm { int (*read)( void *buf, int maxlen, void *ctx ); int (*write)( const void *buf, int len, void *ctx ); void *ctx; int isMaster; } PMI2_Connect_comm_t; /*S MPID_Info - Structure of an MPID info Notes: There is no reference count because 'MPI_Info' values, unlike other MPI objects, may be changed after they are passed to a routine without changing the routine''s behavior. In other words, any routine that uses an 'MPI_Info' object must make a copy or otherwise act on any info value that it needs. A linked list is used because the typical 'MPI_Info' list will be short and a simple linked list is easy to implement and to maintain. 
Similarly, a single structure rather than separate header and element structures are defined for simplicity. No separate thread lock is provided because info routines are not performance critical; they may use the single critical section lock in the 'MPIR_Process' structure when they need a thread lock. This particular form of linked list (in particular, with this particular choice of the first two members) is used because it allows us to use the same routines to manage this list as are used to manage the list of free objects (in the file 'src/util/mem/handlemem.c'). In particular, if lock-free routines for updating a linked list are provided, they can be used for managing the 'MPID_Info' structure as well. The MPI standard requires that keys can be no less that 32 characters and no more than 255 characters. There is no mandated limit on the size of values. Module: Info-DS S*/ typedef struct MPID_Info { int handle; int pobj_mutex; int ref_count; struct MPID_Info *next; char *key; char *value; } MPID_Info; #define PMI2U_Info MPID_Info /*@ PMI2_Init - initialize the Process Manager Interface Output Parameter: + spawned - spawned flag . size - number of processes in the job . rank - rank of this process in the job - appnum - which executable is this on the mpiexec commandline Return values: Returns 'MPI_SUCCESS' on success and an MPI error code on failure. Notes: Initialize PMI for this process group. The value of spawned indicates whether this process was created by 'PMI2_Spawn_multiple'. 'spawned' will be non-zero iff this process group has a parent. @*/ int PMI2_Init(int *spawned, int *size, int *rank, int *appnum); /*@ PMI2_Finalize - finalize the Process Manager Interface Return values: Returns 'MPI_SUCCESS' on success and an MPI error code on failure. Notes: Finalize PMI for this job. @*/ int PMI2_Finalize(void); /*@ PMI2_Initialized - check if PMI has been initialized Return values: Non-zero if PMI2_Initialize has been called successfully, zero otherwise. 
@*/ int PMI2_Initialized(void); /*@ PMI2_Abort - abort the process group associated with this process Input Parameters: + flag - non-zero if all processes in this job should abort, zero otherwise - error_msg - error message to be printed Return values: If the abort succeeds this function will not return. Returns an MPI error code otherwise. @*/ int PMI2_Abort(int flag, const char msg[]); /*@ PMI2_Spawn - spawn a new set of processes Input Parameters: + count - count of commands . cmds - array of command strings . argcs - size of argv arrays for each command string . argvs - array of argv arrays for each command string . maxprocs - array of maximum processes to spawn for each command string . info_keyval_sizes - array giving the number of elements in each of the 'info_keyval_vectors' . info_keyval_vectors - array of keyval vector arrays . preput_keyval_size - Number of elements in 'preput_keyval_vector' . preput_keyval_vector - array of keyvals to be pre-put in the spawned keyval space - jobIdSize - size of the buffer provided in jobId Output Parameter: + jobId - job id of the spawned processes - errors - array of errors for each command Return values: Returns 'MPI_SUCCESS' on success and an MPI error code on failure. Notes: This function spawns a set of processes into a new job. The 'count' field refers to the size of the array parameters - 'cmd', 'argvs', 'maxprocs', 'info_keyval_sizes' and 'info_keyval_vectors'. The 'preput_keyval_size' refers to the size of the 'preput_keyval_vector' array. The 'preput_keyval_vector' contains keyval pairs that will be put in the keyval space of the newly created job before the processes are started. The 'maxprocs' array specifies the desired number of processes to create for each 'cmd' string. The actual number of processes may be less than the numbers specified in maxprocs. The acceptable number of processes spawned may be controlled by ``soft'' keyvals in the info arrays. 
The ``soft'' option is specified by mpiexec in the MPI-2 standard. Environment variables may be passed to the spawned processes through PMI implementation specific 'info_keyval' parameters. @*/ int PMI2_Job_Spawn(int count, const char * cmds[], int argcs[], const char ** argvs[], const int maxprocs[], const int info_keyval_sizes[], const struct MPID_Info *info_keyval_vectors[], int preput_keyval_size, const struct MPID_Info *preput_keyval_vector[], char jobId[], int jobIdSize, int errors[]); /*@ PMI2_Job_GetId - get job id of this job Input parameters: . jobid_size - size of buffer provided in jobid Output parameters: . jobid - the job id of this job Return values: Returns 'MPI_SUCCESS' on success and an MPI error code on failure. @*/ int PMI2_Job_GetId(char jobid[], int jobid_size); /*@ PMI2_Job_GetRank - get rank of this job Output parameters: . rank - the rank of this job Return values: Returns 'PMI2_SUCCESS' on success and an PMI error code on failure. @*/ int PMI2_Job_GetRank(int* rank); /*@ PMI2_Info_GetSize - get the number of processes on the node Output parameters: . size - the number of processes on the node Return values: Returns 'PMI2_SUCCESS' on success and an PMI error code on failure. @*/ int PMI2_Info_GetSize(int* size); /*@ PMI2_Job_Connect - connect to the parallel job with ID jobid Input parameters: . jobid - job id of the job to connect to Output parameters: . conn - connection structure used to establish communication with the remote job Return values: Returns 'MPI_SUCCESS' on success and an MPI error code on failure. Notes: This just "registers" the other parallel job as part of a parallel program, and is used in the PMI2_KVS_xxx routines (see below). This is not a collective call and establishes a connection between all processes that are connected to the calling processes (on the one side) and that are connected to the named jobId on the other side. Processes that are already connected may call this routine. 
@*/ int PMI2_Job_Connect(const char jobid[], PMI2_Connect_comm_t *conn); /*@ PMI2_Job_Disconnect - disconnects from the job with ID jobid Input parameters: . jobid - job id of the job to connect to Return values: Returns 'MPI_SUCCESS' on success and an MPI error code on failure. @*/ int PMI2_Job_Disconnect(const char jobid[]); /*@ PMI2_KVS_Put - put a key/value pair in the keyval space for this job Input Parameters: + key - key - value - value Return values: Returns 'MPI_SUCCESS' on success and an MPI error code on failure. Notes: If multiple PMI2_KVS_Put calls are made with the same key between calls to PMI2_KVS_Fence, the behavior is undefined. That is, the value returned by PMI2_KVS_Get for that key after the PMI2_KVS_Fence is not defined. @*/ int PMI2_KVS_Put(const char key[], const char value[]); /*@ PMI2_KVS_Fence - commit all PMI2_KVS_Put calls made before this fence Return values: Returns 'MPI_SUCCESS' on success and an MPI error code on failure. Notes: This is a collective call across the job. It has semantics that are similar to those for MPI_Win_fence and hence is most easily implemented as a barrier across all of the processes in the job. Specifically, all PMI2_KVS_Put operations performed by any process in the same job must be visible to all processes (by using PMI2_KVS_Get) after PMI2_KVS_Fence completes. However, a PMI implementation could make this a lazy operation by not waiting for all processes to enter their corresponding PMI2_KVS_Fence until some process issues a PMI2_KVS_Get. This might be appropriate for some wide-area implementations. @*/ int PMI2_KVS_Fence(void); /*@ PMI2_KVS_Get - returns the value associated with key in the key-value space associated with the job ID jobid Input Parameters: + jobid - the job id identifying the key-value space in which to look for key. If jobid is NULL, look in the key-value space of this job. . src_pmi_id - the pmi id of the process which put this keypair. This is just a hint to the server. 
PMI2_ID_NULL should be passed if no hint is provided. . key - key - maxvalue - size of the buffer provided in value Output Parameters: + value - value associated with key - vallen - length of the returned value, or, if the length is longer than maxvalue, the negative of the required length is returned Return values: Returns 'MPI_SUCCESS' on success and an MPI error code on failure. @*/ int PMI2_KVS_Get(const char *jobid, int src_pmi_id, const char key[], char value [], int maxvalue, int *vallen); /*@ PMI2_Info_GetNodeAttr - returns the value of the attribute associated with this node Input Parameters: + name - name of the node attribute . valuelen - size of the buffer provided in value - waitfor - if non-zero, the function will not return until the attribute is available Output Parameters: + value - value of the attribute - found - non-zero indicates that the attribute was found Return values: Returns 'MPI_SUCCESS' on success and an MPI error code on failure. Notes: This provides a way, when combined with PMI2_Info_PutNodeAttr, for processes on the same node to share information without requiring a more general barrier across the entire job. If waitfor is non-zero, the function will never return with found set to zero. Predefined attributes: + memPoolType - If the process manager allocated a shared memory pool for the MPI processes in this job and on this node, return the type of that pool. Types include sysv, anonmmap and ntshm. . memSYSVid - Return the SYSV memory segment id if the memory pool type is sysv. Returned as a string. . memAnonMMAPfd - Return the FD of the anonymous mmap segment. The FD is returned as a string. - memNTName - Return the name of the Windows NT shared memory segment, file mapping object backed by system paging file. Returned as a string. @*/ int PMI2_Info_GetNodeAttr(const char name[], char value[], int valuelen, int *found, int waitfor); /*@ PMI2_Info_GetNodeAttrIntArray - returns the value of the attribute associated with this node. 
The value must be an array of integers. Input Parameters: + name - name of the node attribute - arraylen - number of elements in array Output Parameters: + array - value of attribute . outlen - number of elements returned - found - non-zero if attribute was found Return values: Returns 'MPI_SUCCESS' on success and an MPI error code on failure. Notes: Notice that, unlike PMI2_Info_GetNodeAttr, this function does not have a waitfor parameter, and will return immediately with found=0 if the attribute was not found. Predefined array attribute names: + localRanksCount - Return the number of local ranks that will be returned by the key localRanks. . localRanks - Return the ranks in MPI_COMM_WORLD of the processes that are running on this node. - cartCoords - Return the Cartesian coordinates of this process in the underlying network topology. The coordinates are indexed from zero. Value only if the Job attribute for physTopology includes cartesian. @*/ int PMI2_Info_GetNodeAttrIntArray(const char name[], int array[], int arraylen, int *outlen, int *found); /*@ PMI2_Info_PutNodeAttr - stores the value of the named attribute associated with this node Input Parameters: + name - name of the node attribute - value - the value of the attribute Return values: Returns 'MPI_SUCCESS' on success and an MPI error code on failure. Notes: For example, it might be used to share segment ids with other processes on the same SMP node. @*/ int PMI2_Info_PutNodeAttr(const char name[], const char value[]); /*@ PMI2_Info_GetJobAttr - returns the value of the attribute associated with this job Input Parameters: + name - name of the job attribute - valuelen - size of the buffer provided in value Output Parameters: + value - value of the attribute - found - non-zero indicates that the attribute was found Return values: Returns 'MPI_SUCCESS' on success and an MPI error code on failure. 
@*/ int PMI2_Info_GetJobAttr(const char name[], char value[], int valuelen, int *found); /*@ PMI2_Info_GetJobAttrIntArray - returns the value of the attribute associated with this job. The value must be an array of integers. Input Parameters: + name - name of the job attribute - arraylen - number of elements in array Output Parameters: + array - value of attribute . outlen - number of elements returned - found - non-zero if attribute was found Return values: Returns 'MPI_SUCCESS' on success and an MPI error code on failure. Predefined array attribute names: + universeSize - The size of the "universe" (defined for the MPI attribute MPI_UNIVERSE_SIZE . hasNameServ - The value hasNameServ is true if the PMI2 environment supports the name service operations (publish, lookup, and unpublish). . physTopology - Return the topology of the underlying network. The valid topology types include cartesian, hierarchical, complete, kautz, hypercube; additional types may be added as necessary. If the type is hierarchical, then additional attributes may be queried to determine the details of the topology. For example, a typical cluster has a hierarchical physical topology, consisting of two levels of complete networks - the switched Ethernet or Infiniband and the SMP nodes. Other systems, such as IBM BlueGene, have one level that is cartesian (and in virtual node mode, have a single-level physical topology). . physTopologyLevels - Return a string describing the topology type for each level of the underlying network. Only valid if the physTopology is hierarchical. The value is a comma-separated list of physical topology types (except for hierarchical). The levels are ordered starting at the top, with the network closest to the processes last. The lower level networks may connect only a subset of processes. For example, for a cartesian mesh of SMPs, the value is cartesian,complete. 
All processes are connected by the cartesian part of this, but for each complete network, only the processes on the same node are connected. . cartDims - Return a string of comma-separated values describing the dimensions of the Cartesian topology. This must be consistent with the value of cartCoords that may be returned by PMI2_Info_GetNodeAttrIntArray. These job attributes are just a start, but they provide both an example of the sort of external data that is available through the PMI interface and how extensions can be added within the same API and wire protocol. For example, adding more complex network topologies requires only adding new keys, not new routines. . isHeterogeneous - The value isHeterogeneous is true if the processes belonging to the job are running on nodes with different underlying data models. @*/ int PMI2_Info_GetJobAttrIntArray(const char name[], int array[], int arraylen, int *outlen, int *found); /*@ PMI2_Nameserv_publish - publish a name Input parameters: + service_name - string representing the service being published . info_ptr - - port - string representing the port on which to contact the service Return values: Returns 'MPI_SUCCESS' on success and an MPI error code on failure. @*/ int PMI2_Nameserv_publish(const char service_name[], const struct MPID_Info *info_ptr, const char port[]); /*@ PMI2_Nameserv_lookup - lookup a service by name Input parameters: + service_name - string representing the service being published . info_ptr - - portLen - size of buffer provided in port Output parameters: . port - string representing the port on which to contact the service Return values: Returns 'MPI_SUCCESS' on success and an MPI error code on failure. 
@*/ int PMI2_Nameserv_lookup(const char service_name[], const struct MPID_Info *info_ptr, char port[], int portLen); /*@ PMI2_Nameserv_unpublish - unpublish a name Input parameters: + service_name - string representing the service being unpublished - info_ptr - Return values: Returns 'MPI_SUCCESS' on success and an MPI error code on failure. @*/ int PMI2_Nameserv_unpublish(const char service_name[], const struct MPID_Info *info_ptr); #if defined(__cplusplus) } #endif #endif /* PMI2_H_INCLUDED */ slurm-slurm-15-08-7-1/contribs/pmi2/pmi2_api.c000066400000000000000000001751421265000126300206730ustar00rootroot00000000000000/* -*- Mode: C; c-basic-offset:4 ; -*- */ /* * (C) 2007 by Argonne National Laboratory. * See COPYRIGHT in top-level directory. * Copyright (C) 2013 Intel, Inc. */ #include "pmi2_util.h" #include "slurm/pmi2.h" #include <stdio.h> #include <stdlib.h> #include <string.h> #include <stdarg.h> #include <ctype.h> #include <unistd.h> #include <errno.h> #ifndef MAXHOSTNAME #define MAXHOSTNAME 256 #endif #define PMII_EXIT_CODE -1 #define PMI_VERSION 2 #define PMI_SUBVERSION 0 #define MAX_INT_STR_LEN 11 /* number of digits in MAX_UINT + 1 */ typedef enum { PMI2_UNINITIALIZED = 0, SINGLETON_INIT_BUT_NO_PM = 1, NORMAL_INIT_WITH_PM, SINGLETON_INIT_WITH_PM } PMI2State; static PMI2State PMI2_initialized = PMI2_UNINITIALIZED; static int PMI2_debug = 0; static int PMI2_fd = -1; static int PMI2_size = 1; static int PMI2_rank = 0; /* XXX DJG the "const"s on both of these functions and the Keyvalpair * struct are wrong in the isCopy==TRUE case! */ /* init_kv_str -- fills in keyvalpair. val is required to be a null-terminated string. isCopy is set to FALSE, so caller must free key and val memory, if necessary.
*/ static void init_kv_str(PMI2_Keyvalpair *kv, const char key[], const char val[]) { kv->key = key; kv->value = val; kv->valueLen = strlen(val); kv->isCopy = 0/*FALSE*/; } /* same as init_kv_str, but strdup's the key and val first, and sets isCopy=TRUE */ static void init_kv_strdup(PMI2_Keyvalpair *kv, const char key[], const char val[]) { /* XXX DJG could be slightly more efficient */ init_kv_str(kv, strdup(key), strdup(val)); kv->isCopy = 1/*TRUE*/; } /* same as init_kv_strdup, but converts val into a string first */ /* XXX DJG could be slightly more efficient */ static void init_kv_strdup_int(PMI2_Keyvalpair *kv, const char key[], int val) { char tmpbuf[32] = {0}; int rc = PMI2_SUCCESS; rc = snprintf(tmpbuf, sizeof(tmpbuf), "%d", val); PMI2U_Assert(rc >= 0); init_kv_strdup(kv, key, tmpbuf); } /* initializes the key with ("%s%d", key_prefix, suffix), uses a string value */ /* XXX DJG could be slightly more efficient */ static void init_kv_strdup_intsuffix(PMI2_Keyvalpair *kv, const char key_prefix[], int suffix, const char val[]) { char tmpbuf[256/*XXX HACK*/] = {0}; int rc = PMI2_SUCCESS; rc = snprintf(tmpbuf, sizeof(tmpbuf), "%s%d", key_prefix, suffix); PMI2U_Assert(rc >= 0); init_kv_strdup(kv, tmpbuf, val); } static int getPMIFD(void); static int PMIi_ReadCommandExp( int fd, PMI2_Command *cmd, const char *exp, int* rc, const char **errmsg ); static int PMIi_ReadCommand( int fd, PMI2_Command *cmd ); static int PMIi_WriteSimpleCommand( int fd, PMI2_Command *resp, const char cmd[], PMI2_Keyvalpair *pairs[], int npairs); static int PMIi_WriteSimpleCommandStr( int fd, PMI2_Command *resp, const char cmd[], ...); static int PMIi_InitIfSingleton(void); static int PMII_singinit(void); static void freepairs(PMI2_Keyvalpair** pairs, int npairs); static int getval(PMI2_Keyvalpair *const pairs[], int npairs, const char *key, const char **value, int *vallen); static int getvalint(PMI2_Keyvalpair *const pairs[], int npairs, const char *key, int *val); static int 
getvalptr(PMI2_Keyvalpair *const pairs[], int npairs, const char *key, void *val); static int getvalbool(PMI2_Keyvalpair *const pairs[], int npairs, const char *key, int *val); static int accept_one_connection(int list_sock); static int GetResponse(const char request[], const char expectedCmd[], int checkRc); static void dump_PMI2_Command(PMI2_Command *cmd); static void dump_PMI2_Keyvalpair(PMI2_Keyvalpair *kv); static void phony(void); typedef struct pending_item { struct pending_item *next; PMI2_Command *cmd; } pending_item_t; pending_item_t *pendingq_head = NULL; pending_item_t *pendingq_tail = NULL; /* phony() * Collect unused functions which make * gcc complain 'defined but not used' */ static void phony(void) { if (0) { accept_one_connection(0); GetResponse(NULL, NULL, 0); dump_PMI2_Command(NULL); PMII_singinit(); } } static inline void ENQUEUE(PMI2_Command *cmd) { pending_item_t *pi = malloc(sizeof(pending_item_t)); pi->next = NULL; pi->cmd = cmd; if (pendingq_head == NULL) { pendingq_head = pendingq_tail = pi; } else { pendingq_tail->next = pi; pendingq_tail = pi; } } static inline int SEARCH_REMOVE(PMI2_Command *cmd) { pending_item_t *pi, *prev; pi = pendingq_head; if (pi->cmd == cmd) { pendingq_head = pi->next; if (pendingq_head == NULL) pendingq_tail = NULL; free(pi); return 1; } prev = pi; pi = pi->next; for ( ; pi ; pi = pi->next) { if (pi->cmd == cmd) { prev->next = pi->next; if (prev->next == NULL) pendingq_tail = prev; free(pi); return 1; } } return 0; } /* ------------------------------------------------------------------------- */ /* PMI-2 API Routines */ /* ------------------------------------------------------------------------- */ int PMI2_Init(int *spawned, int *size, int *rank, int *appnum) { int pmi2_errno = PMI2_SUCCESS; char *p; char buf[PMI2_MAXLINE], cmdline[PMI2_MAXLINE]; char *jobid; char *pmiid; int ret; PMI2U_printf("[BEGIN]"); /* Get the value of PMI2_DEBUG from the environment if possible, since we may have set it to help debug
the setup process */ p = getenv("PMI2_DEBUG"); if (p) PMI2_debug = atoi(p); /* Get the fd for PMI commands; if none, we're a singleton */ pmi2_errno = getPMIFD(); if (pmi2_errno) PMI2U_ERR_POP(pmi2_errno); if (PMI2_fd == -1) { /* Singleton init: Process not started with mpiexec, so set size to 1, rank to 0 */ PMI2_size = 1; PMI2_rank = 0; *spawned = 0; *size = PMI2_size; *rank = PMI2_rank; *appnum = -1; PMI2_initialized = SINGLETON_INIT_BUT_NO_PM; goto fn_exit; } /* do initial PMI1 init */ ret = snprintf(buf, PMI2_MAXLINE, "cmd=init pmi_version=%d pmi_subversion=%d\n", PMI_VERSION, PMI_SUBVERSION); PMI2U_ERR_CHKANDJUMP(ret < 0, pmi2_errno, PMI2_ERR_OTHER, "**intern %s", "failed to generate init line"); ret = PMI2U_writeline(PMI2_fd, buf); PMI2U_ERR_CHKANDJUMP(ret < 0, pmi2_errno, PMI2_ERR_OTHER, "**pmi2_init_send"); ret = PMI2U_readline(PMI2_fd, buf, PMI2_MAXLINE); PMI2U_ERR_CHKANDJUMP(ret < 0, pmi2_errno, PMI2_ERR_OTHER, "**pmi2_initack %s", strerror(pmi2_errno)); PMI2U_parse_keyvals(buf); cmdline[0] = 0; PMI2U_getval("cmd", cmdline, PMI2_MAXLINE); PMI2U_ERR_CHKANDJUMP(strncmp(cmdline, "response_to_init", PMI2_MAXLINE) != 0, pmi2_errno, PMI2_ERR_OTHER, "**bad_cmd"); PMI2U_getval("rc", buf, PMI2_MAXLINE); if (strncmp(buf, "0", PMI2_MAXLINE) != 0) { char buf1[PMI2_MAXLINE]; PMI2U_getval("pmi_version", buf, PMI2_MAXLINE); PMI2U_getval("pmi_subversion", buf1, PMI2_MAXLINE); PMI2U_ERR_SETANDJUMP(pmi2_errno, PMI2_ERR_OTHER, "**pmi2_version %s %s %d %d", buf, buf1, PMI_VERSION, PMI_SUBVERSION); } PMI2U_printf("do full PMI2 init ..."); /* do full PMI2 init */ { PMI2_Keyvalpair pairs[3]; PMI2_Keyvalpair *pairs_p[] = { pairs, pairs+1, pairs+2 }; int npairs = 0; int isThreaded = 0; const char *errmsg; int rc; int found; int version, subver; const char *spawner_jobid; int spawner_jobid_len; PMI2_Command cmd = {0}; int debugged; int PMI2_pmiverbose; jobid = getenv("PMI_JOBID"); if (jobid) { init_kv_str(&pairs[npairs], PMIJOBID_KEY, jobid); ++npairs; } pmiid = getenv("PMI_ID"); 
        if (pmiid) {
            init_kv_str(&pairs[npairs], SRCID_KEY, pmiid);
            ++npairs;
        } else {
            pmiid = getenv("PMI_RANK");
            if (pmiid) {
                init_kv_str(&pairs[npairs], PMIRANK_KEY, pmiid);
                PMI2_rank = strtol(pmiid, NULL, 10);
                ++npairs;
            }
        }

        init_kv_str(&pairs[npairs], THREADED_KEY,
                    isThreaded ? "TRUE" : "FALSE");
        ++npairs;

        /* don't pass in thread id for init */
        pmi2_errno = PMIi_WriteSimpleCommand(PMI2_fd, 0, FULLINIT_CMD,
                                             pairs_p, npairs);
        if (pmi2_errno)
            PMI2U_ERR_SETANDJUMP(1, pmi2_errno, "PMIi_WriteSimpleCommand");

        /* Read auth-response */
        /* Send auth-response-complete */

        /* Read fullinit-response */
        pmi2_errno = PMIi_ReadCommandExp(PMI2_fd, &cmd, FULLINITRESP_CMD,
                                         &rc, &errmsg);
        if (pmi2_errno)
            PMI2U_ERR_SETANDJUMP(1, pmi2_errno, "PMIi_ReadCommandExp");
        PMI2U_ERR_CHKANDJUMP(rc, pmi2_errno, PMI2_ERR_OTHER,
                             "**pmi2_fullinit %s", errmsg ? errmsg : "unknown");

        found = getvalint(cmd.pairs, cmd.nPairs, PMIVERSION_KEY, &version);
        PMI2U_ERR_CHKANDJUMP(found != 1, pmi2_errno, PMI2_ERR_OTHER, "**intern");

        found = getvalint(cmd.pairs, cmd.nPairs, PMISUBVER_KEY, &subver);
        PMI2U_ERR_CHKANDJUMP(found != 1, pmi2_errno, PMI2_ERR_OTHER, "**intern");

        found = getvalint(cmd.pairs, cmd.nPairs, RANK_KEY, rank);
        PMI2U_ERR_CHKANDJUMP(found != 1, pmi2_errno, PMI2_ERR_OTHER, "**intern");

        found = getvalint(cmd.pairs, cmd.nPairs, SIZE_KEY, size);
        PMI2U_ERR_CHKANDJUMP(found != 1, pmi2_errno, PMI2_ERR_OTHER, "**intern");
        PMI2_size = *size;

        found = getvalint(cmd.pairs, cmd.nPairs, APPNUM_KEY, appnum);
        PMI2U_ERR_CHKANDJUMP(found != 1, pmi2_errno, PMI2_ERR_OTHER, "**intern");

        found = getval(cmd.pairs, cmd.nPairs, SPAWNERJOBID_KEY,
                       &spawner_jobid, &spawner_jobid_len);
        PMI2U_ERR_CHKANDJUMP(found == -1, pmi2_errno, PMI2_ERR_OTHER, "**intern");
        if (found)
            *spawned = TRUE;
        else
            *spawned = FALSE;

        debugged = 0;
        found = getvalbool(cmd.pairs, cmd.nPairs, DEBUGGED_KEY, &debugged);
        PMI2U_ERR_CHKANDJUMP(found == -1, pmi2_errno, PMI2_ERR_OTHER, "**intern");
        PMI2_debug |= debugged;

        PMI2_pmiverbose = 0;
        found = getvalbool(cmd.pairs, cmd.nPairs, PMIVERBOSE_KEY,
                           &PMI2_pmiverbose);
        PMI2U_ERR_CHKANDJUMP(found == -1, pmi2_errno, PMI2_ERR_OTHER, "**intern");

        free(cmd.command);
        freepairs(cmd.pairs, cmd.nPairs);
    }

    if (! PMI2_initialized) {
        PMI2_initialized = NORMAL_INIT_WITH_PM;
        pmi2_errno = PMI2_SUCCESS;
    }

    phony();

fn_exit:
    PMI2U_printf("[END]");
    return pmi2_errno;
fn_fail:
    goto fn_exit;
}

int PMI2_Finalize(void)
{
    int pmi2_errno = PMI2_SUCCESS;
    int rc;
    const char *errmsg;
    PMI2_Command cmd = {0};

    PMI2U_printf("[BEGIN]");

    if (PMI2_initialized > SINGLETON_INIT_BUT_NO_PM) {
        pmi2_errno = PMIi_WriteSimpleCommandStr(PMI2_fd, &cmd,
                                                FINALIZE_CMD, NULL);
        if (pmi2_errno)
            PMI2U_ERR_SETANDJUMP(1, pmi2_errno, "PMIi_WriteSimpleCommandStr");
        pmi2_errno = PMIi_ReadCommandExp(PMI2_fd, &cmd, FINALIZERESP_CMD,
                                         &rc, &errmsg);
        if (pmi2_errno)
            PMI2U_ERR_SETANDJUMP(1, pmi2_errno, "PMIi_ReadCommandExp");
        PMI2U_ERR_CHKANDJUMP(rc, pmi2_errno, PMI2_ERR_OTHER,
                             "**pmi2_finalize %s", errmsg ? errmsg : "unknown");

        free(cmd.command);
        freepairs(cmd.pairs, cmd.nPairs);

        shutdown(PMI2_fd, SHUT_RDWR);
        close(PMI2_fd);
    }

fn_exit:
    PMI2U_printf("[END]");
    return pmi2_errno;
fn_fail:
    goto fn_exit;
}

int PMI2_Initialized(void)
{
    /* Turn this into a logical value (1 or 0). This allows us to use
     * PMI2_initialized to distinguish between initialized with an PMI
     * service (e.g., via mpiexec) and the singleton init, which has no
     * PMI service */
    return (PMI2_initialized != 0);
}

int PMI2_Abort(int flag, const char msg[])
{
    if (msg)
        PMI2U_printf("aborting job:\n%s", msg);

    PMIi_WriteSimpleCommandStr(PMI2_fd, NULL, ABORT_CMD,
                               ISWORLD_KEY, flag ? TRUE_VAL : FALSE_VAL,
                               MSG_KEY, ((msg == NULL) ?
                                         "" : msg), NULL);

    exit(flag);
    return PMI2_SUCCESS;
}

int PMI2_Job_Spawn(int count, const char * cmds[],
                   int argcs[], const char ** argvs[],
                   const int maxprocs[],
                   const int info_keyval_sizes[],
                   const struct MPID_Info *info_keyval_vectors[],
                   int preput_keyval_size,
                   const struct MPID_Info *preput_keyval_vector[],
                   char jobId[], int jobIdSize,
                   int errors[])
{
    int i, rc, spawncnt, total_num_processes, num_errcodes_found;
    int found;
    const char *jid;
    int jidlen;
    char tempbuf[PMI2_MAXLINE];
    char *lead, *lag;
    int spawn_rc;
    const char *errmsg = NULL;
    PMI2_Command resp_cmd = {0};
    int pmi2_errno = 0;
    PMI2_Keyvalpair **pairs_p = NULL;
    int npairs = 0;
    int total_pairs = 0;

    PMI2U_printf("[BEGIN]");

    /* Connect to the PM if we haven't already */
    if (PMIi_InitIfSingleton() != 0)
        return -1;

    total_num_processes = 0;

    /* XXX DJG from Pavan's email:
       cmd=spawn;thrid=string;ncmds=count;preputcount=n;ppkey0=name;ppval0=string;...;\
       subcmd=spawn-exe1;maxprocs=n;argc=narg;argv0=name;\
       argv1=name;...;infokeycount=n;infokey0=key;\
       infoval0=string;...;\
       (... one subcmd for each executable ...) */

    /* FIXME overall need a better interface for building commands!
     * Need to be able to append commands, and to easily accept integer
     * valued arguments.  Memory management should stay completely out
     * of mind when writing a new PMI command impl like this! */

    /* Calculate the total number of keyval pairs that we need.
     *
     * The command writing utility adds "cmd" and "thrid" fields for us,
     * don't include them in our count. */
    total_pairs = 2; /* ncmds,preputcount */
    total_pairs += (3 * count); /* subcmd,maxprocs,argc */
    total_pairs += (2 * preput_keyval_size); /* ppkeyN,ppvalN */
    for (spawncnt = 0; spawncnt < count; ++spawncnt) {
        total_pairs += argcs[spawncnt]; /* argvN */
        if (info_keyval_sizes) {
            total_pairs += 1; /* infokeycount */
            total_pairs += 2 * info_keyval_sizes[spawncnt]; /* infokeyN,infovalN */
        }
    }

    pairs_p = malloc(total_pairs * sizeof(PMI2_Keyvalpair*));
    /* individually allocating instead of batch alloc b/c freepairs assumes it */
    for (i = 0; i < total_pairs; ++i) {
        /* FIXME we are somehow still leaking some of this memory */
        pairs_p[i] = malloc(sizeof(PMI2_Keyvalpair));
        PMI2U_Assert(pairs_p[i]);
    }

    init_kv_strdup_int(pairs_p[npairs++], "ncmds", count);

    init_kv_strdup_int(pairs_p[npairs++], "preputcount", preput_keyval_size);
    for (i = 0; i < preput_keyval_size; ++i) {
        init_kv_strdup_intsuffix(pairs_p[npairs++], "ppkey", i,
                                 preput_keyval_vector[i]->key);
        init_kv_strdup_intsuffix(pairs_p[npairs++], "ppval", i,
                                 preput_keyval_vector[i]->value);
    }

    for (spawncnt = 0; spawncnt < count; ++spawncnt) {
        total_num_processes += maxprocs[spawncnt];

        init_kv_strdup(pairs_p[npairs++], "subcmd", cmds[spawncnt]);
        init_kv_strdup_int(pairs_p[npairs++], "maxprocs", maxprocs[spawncnt]);

        init_kv_strdup_int(pairs_p[npairs++], "argc", argcs[spawncnt]);
        for (i = 0; i < argcs[spawncnt]; ++i) {
            init_kv_strdup_intsuffix(pairs_p[npairs++], "argv", i,
                                     argvs[spawncnt][i]);
        }

        if (info_keyval_sizes) {
            init_kv_strdup_int(pairs_p[npairs++], "infokeycount",
                               info_keyval_sizes[spawncnt]);
            for (i = 0; i < info_keyval_sizes[spawncnt]; ++i) {
                init_kv_strdup_intsuffix(pairs_p[npairs++], "infokey", i,
                                         info_keyval_vectors[spawncnt][i].key);
                init_kv_strdup_intsuffix(pairs_p[npairs++], "infoval", i,
                                         info_keyval_vectors[spawncnt][i].value);
            }
        }
    }

    if (npairs < total_pairs) {
        PMI2U_printf("about to fail assertion, npairs=%d total_pairs=%d",
                     npairs, total_pairs);
    }
    PMI2U_Assert(npairs == total_pairs);

    pmi2_errno =
        PMIi_WriteSimpleCommand(PMI2_fd, &resp_cmd, "spawn", pairs_p, npairs);
    if (pmi2_errno)
        PMI2U_ERR_SETANDJUMP(1, pmi2_errno, "PMIi_WriteSimpleCommand");

    freepairs(pairs_p, npairs);
    pairs_p = NULL;

    /* XXX DJG TODO release any upper level MPICH2 critical sections */
    rc = PMIi_ReadCommandExp(PMI2_fd, &resp_cmd, "spawn-response",
                             &spawn_rc, &errmsg);
    if (rc != 0) {
        return PMI2_FAIL;
    }

    /* XXX DJG TODO deal with the response */
    PMI2U_Assert(errors != NULL);

    if (jobId && jobIdSize) {
        found = getval(resp_cmd.pairs, resp_cmd.nPairs, JOBID_KEY,
                       &jid, &jidlen);
        PMI2U_ERR_CHKANDJUMP(found != 1, pmi2_errno, PMI2_ERR_OTHER, "**intern");
        MPIU_Strncpy(jobId, jid, jobIdSize);
    }

    if (PMI2U_getval("errcodes", tempbuf, PMI2_MAXLINE)) {
        num_errcodes_found = 0;
        lag = &tempbuf[0];
        do {
            lead = strchr(lag, ',');
            if (lead)
                *lead = '\0';
            errors[num_errcodes_found++] = atoi(lag);
            lag = lead + 1; /* move past the null char */
            PMI2U_Assert(num_errcodes_found <= total_num_processes);
        } while (lead != NULL);
        PMI2U_Assert(num_errcodes_found == total_num_processes);
    } else {
        /* gforker doesn't return errcodes, so we'll just pretend that
         * means that it was going to send all `0's. */
        for (i = 0; i < total_num_processes; ++i) {
            errors[i] = 0;
        }
    }

fn_fail:
    free(resp_cmd.command);
    freepairs(resp_cmd.pairs, resp_cmd.nPairs);
    if (pairs_p)
        freepairs(pairs_p, npairs);

    PMI2U_printf("[END]");
    return pmi2_errno;
}

int PMI2_Job_GetId(char jobid[], int jobid_size)
{
    int pmi2_errno = PMI2_SUCCESS;
    int found;
    const char *jid;
    int jidlen;
    int rc;
    const char *errmsg;
    PMI2_Command cmd = {0};

    PMI2U_printf("[BEGIN]");

    pmi2_errno = PMIi_WriteSimpleCommandStr(PMI2_fd, &cmd, JOBGETID_CMD, NULL);
    if (pmi2_errno)
        PMI2U_ERR_SETANDJUMP(1, pmi2_errno, "PMIi_WriteSimpleCommandStr");
    pmi2_errno = PMIi_ReadCommandExp(PMI2_fd, &cmd, JOBGETIDRESP_CMD,
                                     &rc, &errmsg);
    if (pmi2_errno)
        PMI2U_ERR_SETANDJUMP(1, pmi2_errno, "PMIi_ReadCommandExp");
    PMI2U_ERR_CHKANDJUMP(rc, pmi2_errno, PMI2_ERR_OTHER,
                         "**pmi2_jobgetid %s", errmsg ? errmsg : "unknown");

    found = getval(cmd.pairs, cmd.nPairs, JOBID_KEY, &jid, &jidlen);
    PMI2U_ERR_CHKANDJUMP(found != 1, pmi2_errno, PMI2_ERR_OTHER, "**intern");

    MPIU_Strncpy(jobid, jid, jobid_size);

fn_exit:
    free(cmd.command);
    freepairs(cmd.pairs, cmd.nPairs);
    PMI2U_printf("[END]");
    return pmi2_errno;
fn_fail:
    goto fn_exit;
}

int PMI2_Job_GetRank(int* rank)
{
    *rank = PMI2_rank;
    return PMI2_SUCCESS;
}

int PMI2_Info_GetSize(int* size)
{
    *size = PMI2_size;
    return PMI2_SUCCESS;
}

#undef FUNCNAME
#define FUNCNAME PMI2_Job_Connect
#undef FCNAME
#define FCNAME PMI2DI_QUOTE(FUNCNAME)
int PMI2_Job_Connect(const char jobid[], PMI2_Connect_comm_t *conn)
{
    int pmi2_errno = PMI2_SUCCESS;
    PMI2_Command cmd = {0};
    int found;
    int kvscopy;
    int rc;
    const char *errmsg;

    PMI2U_printf("[BEGIN]");

    pmi2_errno = PMIi_WriteSimpleCommandStr(PMI2_fd, &cmd, JOBCONNECT_CMD,
                                            JOBID_KEY, jobid, NULL);
    if (pmi2_errno)
        PMI2U_ERR_SETANDJUMP(1, pmi2_errno, "PMIi_WriteSimpleCommandStr");
    pmi2_errno = PMIi_ReadCommandExp(PMI2_fd, &cmd, JOBCONNECTRESP_CMD,
                                     &rc, &errmsg);
    if (pmi2_errno)
        PMI2U_ERR_SETANDJUMP(1, pmi2_errno, "PMIi_ReadCommandExp");
    PMI2U_ERR_CHKANDJUMP(rc, pmi2_errno, PMI2_ERR_OTHER,
                         "**pmi2_jobconnect %s", errmsg ?
                         errmsg : "unknown");

    found = getvalbool(cmd.pairs, cmd.nPairs, KVSCOPY_KEY, &kvscopy);
    PMI2U_ERR_CHKANDJUMP(found != 1, pmi2_errno, PMI2_ERR_OTHER, "**intern");

    PMI2U_ERR_CHKANDJUMP(kvscopy, pmi2_errno, PMI2_ERR_OTHER, "**notimpl");

fn_exit:
    free(cmd.command);
    freepairs(cmd.pairs, cmd.nPairs);
    PMI2U_printf("[END]");
    return pmi2_errno;
fn_fail:
    goto fn_exit;
}

int PMI2_Job_Disconnect(const char jobid[])
{
    int pmi2_errno = PMI2_SUCCESS;
    PMI2_Command cmd = {0};
    int rc;
    const char *errmsg;

    PMI2U_printf("[BEGIN]");

    pmi2_errno = PMIi_WriteSimpleCommandStr(PMI2_fd, &cmd, JOBDISCONNECT_CMD,
                                            JOBID_KEY, jobid, NULL);
    if (pmi2_errno)
        PMI2U_ERR_SETANDJUMP(1, pmi2_errno, "PMIi_WriteSimpleCommandStr");
    pmi2_errno = PMIi_ReadCommandExp(PMI2_fd, &cmd, JOBDISCONNECTRESP_CMD,
                                     &rc, &errmsg);
    if (pmi2_errno)
        PMI2U_ERR_SETANDJUMP(1, pmi2_errno, "PMIi_ReadCommandExp");
    PMI2U_ERR_CHKANDJUMP(rc, pmi2_errno, PMI2_ERR_OTHER,
                         "**pmi2_jobdisconnect %s",
                         errmsg ? errmsg : "unknown");

fn_exit:
    free(cmd.command);
    freepairs(cmd.pairs, cmd.nPairs);
    PMI2U_printf("[END]");
    return pmi2_errno;
fn_fail:
    goto fn_exit;
}

int PMIX_Ring(const char value[], int *rank, int *ranks,
              char left[], char right[], int maxvalue)
{
    int pmi2_errno = PMI2_SUCCESS;
    PMI2_Command cmd = {0};
    int rc;
    const char *errmsg;
    int found;
    const char *kvsvalue;
    int kvsvallen;

    PMI2U_printf("[BEGIN PMI2_Ring]");

    /* send message: cmd=ring_in, count=1, left=value, right=value */
    pmi2_errno = PMIi_WriteSimpleCommandStr(PMI2_fd, &cmd, RING_CMD,
                                            RING_COUNT_KEY, "1",
                                            RING_LEFT_KEY, value,
                                            RING_RIGHT_KEY, value, NULL);
    if (pmi2_errno)
        PMI2U_ERR_POP(pmi2_errno);

    /* wait for reply: cmd=ring_out, rc=0|1, count=rank,
     * left=leftval, right=rightval */
    pmi2_errno = PMIi_ReadCommandExp(PMI2_fd, &cmd, RINGRESP_CMD,
                                     &rc, &errmsg);
    if (pmi2_errno)
        PMI2U_ERR_SETANDJUMP(1, pmi2_errno, "PMIi_ReadCommandExp");
    PMI2U_ERR_CHKANDJUMP(rc, pmi2_errno, PMI2_ERR_OTHER,
                         "**pmi2_ring %s", errmsg ? errmsg : "unknown");

    /* get our rank from the count key */
    found = getvalint(cmd.pairs, cmd.nPairs, RING_COUNT_KEY, rank);
    PMI2U_ERR_CHKANDJUMP(found != 1, pmi2_errno, PMI2_ERR_OTHER, "**intern");

    /* set size of ring (just number of procs in job) */
    *ranks = PMI2_size;

    /* lookup left value and copy to caller's buffer */
    found = getval(cmd.pairs, cmd.nPairs, RING_LEFT_KEY,
                   &kvsvalue, &kvsvallen);
    PMI2U_ERR_CHKANDJUMP(found != 1, pmi2_errno, PMI2_ERR_OTHER, "**intern");
    MPIU_Strncpy(left, kvsvalue, maxvalue);

    /* lookup right value and copy to caller's buffer */
    found = getval(cmd.pairs, cmd.nPairs, RING_RIGHT_KEY,
                   &kvsvalue, &kvsvallen);
    PMI2U_ERR_CHKANDJUMP(found != 1, pmi2_errno, PMI2_ERR_OTHER, "**intern");
    MPIU_Strncpy(right, kvsvalue, maxvalue);

fn_exit:
    free(cmd.command);
    freepairs(cmd.pairs, cmd.nPairs);
    PMI2U_printf("[END PMI2_Ring]");
    return pmi2_errno;
fn_fail:
    goto fn_exit;
}

int PMI2_KVS_Put(const char key[], const char value[])
{
    int pmi2_errno = PMI2_SUCCESS;
    PMI2_Command cmd = {0};
    int rc;
    const char *errmsg;

    PMI2U_printf("[BEGIN]");

    pmi2_errno = PMIi_WriteSimpleCommandStr(PMI2_fd, &cmd, KVSPUT_CMD,
                                            KEY_KEY, key,
                                            VALUE_KEY, value, NULL);
    if (pmi2_errno)
        PMI2U_ERR_SETANDJUMP(1, pmi2_errno, "PMIi_WriteSimpleCommandStr");
    pmi2_errno = PMIi_ReadCommandExp(PMI2_fd, &cmd, KVSPUTRESP_CMD,
                                     &rc, &errmsg);
    if (pmi2_errno)
        PMI2U_ERR_SETANDJUMP(1, pmi2_errno, "PMIi_ReadCommandExp");
    PMI2U_ERR_CHKANDJUMP(rc, pmi2_errno, PMI2_ERR_OTHER,
                         "**pmi2_kvsput %s", errmsg ?
                         errmsg : "unknown");

fn_exit:
    free(cmd.command);
    freepairs(cmd.pairs, cmd.nPairs);
    PMI2U_printf("[END]");
    return pmi2_errno;
fn_fail:
    goto fn_exit;
}

int PMI2_KVS_Fence(void)
{
    int pmi2_errno = PMI2_SUCCESS;
    PMI2_Command cmd = {0};
    int rc;
    const char *errmsg;

    PMI2U_printf("[BEGIN]");

    pmi2_errno = PMIi_WriteSimpleCommandStr(PMI2_fd, &cmd, KVSFENCE_CMD, NULL);
    if (pmi2_errno)
        PMI2U_ERR_SETANDJUMP(1, pmi2_errno, "PMIi_WriteSimpleCommandStr");
    pmi2_errno = PMIi_ReadCommandExp(PMI2_fd, &cmd, KVSFENCERESP_CMD,
                                     &rc, &errmsg);
    if (pmi2_errno)
        PMI2U_ERR_SETANDJUMP(1, pmi2_errno, "PMIi_ReadCommandExp");
    PMI2U_ERR_CHKANDJUMP(rc, pmi2_errno, PMI2_ERR_OTHER,
                         "**pmi2_kvsfence %s", errmsg ? errmsg : "unknown");

fn_exit:
    free(cmd.command);
    freepairs(cmd.pairs, cmd.nPairs);
    PMI2U_printf("[END]");
    return pmi2_errno;
fn_fail:
    goto fn_exit;
}

int PMI2_KVS_Get(const char *jobid, int src_pmi_id, const char key[],
                 char value [], int maxValue, int *valLen)
{
    int pmi2_errno = PMI2_SUCCESS;
    int found, keyfound;
    const char *kvsvalue;
    int kvsvallen;
    PMI2_Command cmd = {0};
    int rc;
    int ret;
    char src_pmi_id_str[256];
    const char *errmsg;

    PMI2U_printf("[BEGIN]");

    snprintf(src_pmi_id_str, sizeof(src_pmi_id_str), "%d", src_pmi_id);

    pmi2_errno = PMIi_InitIfSingleton();
    if (pmi2_errno)
        PMI2U_ERR_SETANDJUMP(1, pmi2_errno, "PMIi_InitIfSingleton");

    pmi2_errno = PMIi_WriteSimpleCommandStr(PMI2_fd, &cmd, KVSGET_CMD,
                                            JOBID_KEY, jobid,
                                            SRCID_KEY, src_pmi_id_str,
                                            KEY_KEY, key, NULL);
    if (pmi2_errno)
        PMI2U_ERR_SETANDJUMP(1, pmi2_errno, "PMIi_WriteSimpleCommandStr");
    pmi2_errno = PMIi_ReadCommandExp(PMI2_fd, &cmd, KVSGETRESP_CMD,
                                     &rc, &errmsg);
    if (pmi2_errno)
        PMI2U_ERR_SETANDJUMP(1, pmi2_errno, "PMIi_ReadCommandExp");
    PMI2U_ERR_CHKANDJUMP(rc, pmi2_errno, PMI2_ERR_OTHER,
                         "**pmi2_kvsget %s", errmsg ? errmsg : "unknown");

    found = getvalbool(cmd.pairs, cmd.nPairs, FOUND_KEY, &keyfound);
    PMI2U_ERR_CHKANDJUMP(found != 1, pmi2_errno, PMI2_ERR_OTHER, "**intern");
    PMI2U_ERR_CHKANDJUMP(!keyfound, pmi2_errno, PMI2_ERR_OTHER,
                         "**pmi2_kvsget_notfound");

    found = getval(cmd.pairs, cmd.nPairs, VALUE_KEY, &kvsvalue, &kvsvallen);
    PMI2U_ERR_CHKANDJUMP(found != 1, pmi2_errno, PMI2_ERR_OTHER, "**intern");

    ret = MPIU_Strncpy(value, kvsvalue, maxValue);
    *valLen = ret ? -kvsvallen : kvsvallen;

fn_exit:
    free(cmd.command);
    freepairs(cmd.pairs, cmd.nPairs);
    PMI2U_printf("[END]");
    return pmi2_errno;
fn_fail:
    goto fn_exit;
}

int PMI2_Info_GetNodeAttr(const char name[], char value[], int valuelen,
                          int *flag, int waitfor)
{
    int pmi2_errno = PMI2_SUCCESS;
    int found;
    const char *kvsvalue;
    int kvsvallen;
    PMI2_Command cmd = {0};
    int rc;
    const char *errmsg;

    PMI2U_printf("[BEGIN]");

    pmi2_errno = PMIi_InitIfSingleton();
    if (pmi2_errno)
        PMI2U_ERR_SETANDJUMP(1, pmi2_errno, "PMIi_InitIfSingleton");

    pmi2_errno = PMIi_WriteSimpleCommandStr(PMI2_fd, &cmd, GETNODEATTR_CMD,
                                            KEY_KEY, name,
                                            WAIT_KEY,
                                            waitfor ? "TRUE" : "FALSE", NULL);
    if (pmi2_errno)
        PMI2U_ERR_SETANDJUMP(1, pmi2_errno, "PMIi_WriteSimpleCommandStr");
    pmi2_errno = PMIi_ReadCommandExp(PMI2_fd, &cmd, GETNODEATTRRESP_CMD,
                                     &rc, &errmsg);
    if (pmi2_errno)
        PMI2U_ERR_SETANDJUMP(1, pmi2_errno, "PMIi_ReadCommandExp");
    PMI2U_ERR_CHKANDJUMP(rc, pmi2_errno, PMI2_ERR_OTHER,
                         "**pmi2_getnodeattr %s", errmsg ?
                         errmsg : "unknown");

    found = getvalbool(cmd.pairs, cmd.nPairs, FOUND_KEY, flag);
    PMI2U_ERR_CHKANDJUMP(found != 1, pmi2_errno, PMI2_ERR_OTHER, "**intern");

    if (*flag) {
        found = getval(cmd.pairs, cmd.nPairs, VALUE_KEY,
                       &kvsvalue, &kvsvallen);
        PMI2U_ERR_CHKANDJUMP(found != 1, pmi2_errno, PMI2_ERR_OTHER, "**intern");

        MPIU_Strncpy(value, kvsvalue, valuelen);
    }

fn_exit:
    free(cmd.command);
    freepairs(cmd.pairs, cmd.nPairs);
    PMI2U_printf("[END]");
    return pmi2_errno;
fn_fail:
    goto fn_exit;
}

int PMI2_Info_GetNodeAttrIntArray(const char name[], int array[],
                                  int arraylen, int *outlen, int *flag)
{
    int pmi2_errno = PMI2_SUCCESS;
    int found;
    const char *kvsvalue;
    int kvsvallen;
    PMI2_Command cmd = {0};
    int rc;
    const char *errmsg;
    int i;
    const char *valptr;

    PMI2U_printf("[BEGIN]");

    pmi2_errno = PMIi_InitIfSingleton();
    if (pmi2_errno)
        PMI2U_ERR_SETANDJUMP(1, pmi2_errno, "PMIi_InitIfSingleton");

    pmi2_errno = PMIi_WriteSimpleCommandStr(PMI2_fd, &cmd, GETNODEATTR_CMD,
                                            KEY_KEY, name,
                                            WAIT_KEY, "FALSE", NULL);
    if (pmi2_errno)
        PMI2U_ERR_SETANDJUMP(1, pmi2_errno, "PMIi_WriteSimpleCommandStr");
    pmi2_errno = PMIi_ReadCommandExp(PMI2_fd, &cmd, GETNODEATTRRESP_CMD,
                                     &rc, &errmsg);
    if (pmi2_errno)
        PMI2U_ERR_SETANDJUMP(1, pmi2_errno, "PMIi_ReadCommandExp");
    PMI2U_ERR_CHKANDJUMP(rc, pmi2_errno, PMI2_ERR_OTHER,
                         "**pmi2_getnodeattr %s", errmsg ? errmsg : "unknown");

    found = getvalbool(cmd.pairs, cmd.nPairs, FOUND_KEY, flag);
    PMI2U_ERR_CHKANDJUMP(found != 1, pmi2_errno, PMI2_ERR_OTHER, "**intern");

    if (*flag) {
        found = getval(cmd.pairs, cmd.nPairs, VALUE_KEY,
                       &kvsvalue, &kvsvallen);
        PMI2U_ERR_CHKANDJUMP(found != 1, pmi2_errno, PMI2_ERR_OTHER, "**intern");

        valptr = kvsvalue;
        i = 0;
        rc = sscanf(valptr, "%d", &array[i]);
        PMI2U_ERR_CHKANDJUMP(rc != 1, pmi2_errno, PMI2_ERR_OTHER,
                             "**intern %s", "unable to parse intarray");
        ++i;
        while ((valptr = strchr(valptr, ',')) && i < arraylen) {
            ++valptr; /* skip over the ',' */
            rc = sscanf(valptr, "%d", &array[i]);
            PMI2U_ERR_CHKANDJUMP(rc != 1, pmi2_errno, PMI2_ERR_OTHER,
                                 "**intern %s", "unable to parse intarray");
            ++i;
        }
        *outlen = i;
    }

fn_exit:
    free(cmd.command);
    freepairs(cmd.pairs, cmd.nPairs);
    PMI2U_printf("[END]");
    return pmi2_errno;
fn_fail:
    goto fn_exit;
}

int PMI2_Info_PutNodeAttr(const char name[], const char value[])
{
    int pmi2_errno = PMI2_SUCCESS;
    PMI2_Command cmd = {0};
    int rc;
    const char *errmsg;

    PMI2U_printf("[BEGIN]");

    pmi2_errno = PMIi_WriteSimpleCommandStr(PMI2_fd, &cmd, PUTNODEATTR_CMD,
                                            KEY_KEY, name,
                                            VALUE_KEY, value, NULL);
    if (pmi2_errno)
        PMI2U_ERR_SETANDJUMP(1, pmi2_errno, "PMIi_WriteSimpleCommandStr");
    pmi2_errno = PMIi_ReadCommandExp(PMI2_fd, &cmd, PUTNODEATTRRESP_CMD,
                                     &rc, &errmsg);
    if (pmi2_errno)
        PMI2U_ERR_SETANDJUMP(1, pmi2_errno, "PMIi_ReadCommandExp");
    PMI2U_ERR_CHKANDJUMP(rc, pmi2_errno, PMI2_ERR_OTHER,
                         "**pmi2_putnodeattr %s", errmsg ?
                         errmsg : "unknown");

fn_exit:
    free(cmd.command);
    freepairs(cmd.pairs, cmd.nPairs);
    PMI2U_printf("[END]");
    return pmi2_errno;
fn_fail:
    goto fn_exit;
}

int PMI2_Info_GetJobAttr(const char name[], char value[], int valuelen,
                         int *flag)
{
    int pmi2_errno = PMI2_SUCCESS;
    int found;
    const char *kvsvalue;
    int kvsvallen;
    PMI2_Command cmd = {0};
    int rc;
    const char *errmsg;

    PMI2U_printf("[BEGIN]");

    pmi2_errno = PMIi_InitIfSingleton();
    if (pmi2_errno)
        PMI2U_ERR_SETANDJUMP(1, pmi2_errno, "PMIi_InitIfSingleton");

    pmi2_errno = PMIi_WriteSimpleCommandStr(PMI2_fd, &cmd, GETJOBATTR_CMD,
                                            KEY_KEY, name, NULL);
    if (pmi2_errno)
        PMI2U_ERR_SETANDJUMP(1, pmi2_errno, "PMIi_WriteSimpleCommandStr");
    pmi2_errno = PMIi_ReadCommandExp(PMI2_fd, &cmd, GETJOBATTRRESP_CMD,
                                     &rc, &errmsg);
    if (pmi2_errno)
        PMI2U_ERR_SETANDJUMP(1, pmi2_errno, "PMIi_ReadCommandExp");
    PMI2U_ERR_CHKANDJUMP(rc, pmi2_errno, PMI2_ERR_OTHER,
                         "**pmi2_getjobattr %s", errmsg ? errmsg : "unknown");

    found = getvalbool(cmd.pairs, cmd.nPairs, FOUND_KEY, flag);
    PMI2U_ERR_CHKANDJUMP(found != 1, pmi2_errno, PMI2_ERR_OTHER, "**intern");

    if (*flag) {
        found = getval(cmd.pairs, cmd.nPairs, VALUE_KEY,
                       &kvsvalue, &kvsvallen);
        PMI2U_ERR_CHKANDJUMP(found != 1, pmi2_errno, PMI2_ERR_OTHER, "**intern");

        MPIU_Strncpy(value, kvsvalue, valuelen);
    }

fn_exit:
    free(cmd.command);
    freepairs(cmd.pairs, cmd.nPairs);
    PMI2U_printf("[END]");
    return pmi2_errno;
fn_fail:
    goto fn_exit;
}

int PMI2_Info_GetJobAttrIntArray(const char name[], int array[], int arraylen,
                                 int *outlen, int *flag)
{
    int pmi2_errno = PMI2_SUCCESS;
    int found;
    const char *kvsvalue;
    int kvsvallen;
    PMI2_Command cmd = {0};
    int rc;
    const char *errmsg;
    int i;
    const char *valptr;

    PMI2U_printf("[BEGIN]");

    pmi2_errno = PMIi_InitIfSingleton();
    if (pmi2_errno)
        PMI2U_ERR_SETANDJUMP(1, pmi2_errno, "PMIi_InitIfSingleton");

    pmi2_errno = PMIi_WriteSimpleCommandStr(PMI2_fd, &cmd, GETJOBATTR_CMD,
                                            KEY_KEY, name, NULL);
    if (pmi2_errno)
        PMI2U_ERR_SETANDJUMP(1, pmi2_errno, "PMIi_WriteSimpleCommandStr");
    pmi2_errno = PMIi_ReadCommandExp(PMI2_fd, &cmd, GETJOBATTRRESP_CMD,
                                     &rc, &errmsg);
    if (pmi2_errno)
        PMI2U_ERR_SETANDJUMP(1, pmi2_errno, "PMIi_ReadCommandExp");
    PMI2U_ERR_CHKANDJUMP(rc, pmi2_errno, PMI2_ERR_OTHER,
                         "**pmi2_getjobattr %s", errmsg ? errmsg : "unknown");

    found = getvalbool(cmd.pairs, cmd.nPairs, FOUND_KEY, flag);
    PMI2U_ERR_CHKANDJUMP(found != 1, pmi2_errno, PMI2_ERR_OTHER, "**intern");

    if (*flag) {
        found = getval(cmd.pairs, cmd.nPairs, VALUE_KEY,
                       &kvsvalue, &kvsvallen);
        PMI2U_ERR_CHKANDJUMP(found != 1, pmi2_errno, PMI2_ERR_OTHER, "**intern");

        valptr = kvsvalue;
        i = 0;
        rc = sscanf(valptr, "%d", &array[i]);
        PMI2U_ERR_CHKANDJUMP(rc != 1, pmi2_errno, PMI2_ERR_OTHER,
                             "**intern %s", "unable to parse intarray");
        ++i;
        while ((valptr = strchr(valptr, ',')) && i < arraylen) {
            ++valptr; /* skip over the ',' */
            rc = sscanf(valptr, "%d", &array[i]);
            PMI2U_ERR_CHKANDJUMP(rc != 1, pmi2_errno, PMI2_ERR_OTHER,
                                 "**intern %s", "unable to parse intarray");
            ++i;
        }
        *outlen = i;
    }

fn_exit:
    free(cmd.command);
    freepairs(cmd.pairs, cmd.nPairs);
    PMI2U_printf("[END]");
    return pmi2_errno;
fn_fail:
    goto fn_exit;
}

int PMI2_Nameserv_publish(const char service_name[],
                          const PMI2U_Info *info_ptr, const char port[])
{
    int pmi2_errno = PMI2_SUCCESS;
    PMI2_Command cmd = {0};
    int rc;
    const char *errmsg;

    PMI2U_printf("[BEGIN]");

    /* ignoring infokey functionality for now */
    pmi2_errno = PMIi_WriteSimpleCommandStr(PMI2_fd, &cmd, NAMEPUBLISH_CMD,
                                            NAME_KEY, service_name,
                                            PORT_KEY, port,
                                            INFOKEYCOUNT_KEY, "0", NULL);
    if (pmi2_errno)
        PMI2U_ERR_SETANDJUMP(1, pmi2_errno, "PMIi_WriteSimpleCommandStr");
    pmi2_errno = PMIi_ReadCommandExp(PMI2_fd, &cmd, NAMEPUBLISHRESP_CMD,
                                     &rc, &errmsg);
    if (pmi2_errno)
        PMI2U_ERR_SETANDJUMP(1, pmi2_errno, "PMIi_ReadCommandExp");
    PMI2U_ERR_CHKANDJUMP(rc, pmi2_errno, PMI2_ERR_OTHER,
                         "**pmi2_nameservpublish %s", errmsg ?
                         errmsg : "unknown");

fn_exit:
    free(cmd.command);
    freepairs(cmd.pairs, cmd.nPairs);
    PMI2U_printf("[END]");
    return pmi2_errno;
fn_fail:
    goto fn_exit;
}

int PMI2_Nameserv_lookup(const char service_name[],
                         const PMI2U_Info *info_ptr,
                         char port[], int portLen)
{
    int pmi2_errno = PMI2_SUCCESS;
    int found;
    int rc;
    PMI2_Command cmd = {0};
    int plen;
    const char *errmsg;
    const char *found_port;

    PMI2U_printf("[BEGIN]");

    /* ignoring infos for now */
    pmi2_errno = PMIi_WriteSimpleCommandStr(PMI2_fd, &cmd, NAMELOOKUP_CMD,
                                            NAME_KEY, service_name,
                                            INFOKEYCOUNT_KEY, "0", NULL);
    if (pmi2_errno)
        PMI2U_ERR_SETANDJUMP(1, pmi2_errno, "PMIi_WriteSimpleCommandStr");
    pmi2_errno = PMIi_ReadCommandExp(PMI2_fd, &cmd, NAMELOOKUPRESP_CMD,
                                     &rc, &errmsg);
    if (pmi2_errno)
        PMI2U_ERR_SETANDJUMP(1, pmi2_errno, "PMIi_ReadCommandExp");
    PMI2U_ERR_CHKANDJUMP(rc, pmi2_errno, PMI2_ERR_OTHER,
                         "**pmi2_nameservlookup %s",
                         errmsg ? errmsg : "unknown");

    found = getval(cmd.pairs, cmd.nPairs, VALUE_KEY, &found_port, &plen);
    PMI2U_ERR_CHKANDJUMP(!found, pmi2_errno, PMI2_ERR_OTHER,
                         "**pmi2_nameservlookup %s", "not found");
    MPIU_Strncpy(port, found_port, portLen);

fn_exit:
    free(cmd.command);
    freepairs(cmd.pairs, cmd.nPairs);
    PMI2U_printf("[END]");
    return pmi2_errno;
fn_fail:
    goto fn_exit;
}

int PMI2_Nameserv_unpublish(const char service_name[],
                            const PMI2U_Info *info_ptr)
{
    int pmi2_errno = PMI2_SUCCESS;
    int rc;
    PMI2_Command cmd = {0};
    const char *errmsg;

    PMI2U_printf("[BEGIN]");

    pmi2_errno = PMIi_WriteSimpleCommandStr(PMI2_fd, &cmd, NAMEUNPUBLISH_CMD,
                                            NAME_KEY, service_name,
                                            INFOKEYCOUNT_KEY, "0", NULL);
    if (pmi2_errno)
        PMI2U_ERR_SETANDJUMP(1, pmi2_errno, "PMIi_WriteSimpleCommandStr");
    pmi2_errno = PMIi_ReadCommandExp(PMI2_fd, &cmd, NAMEUNPUBLISHRESP_CMD,
                                     &rc, &errmsg);
    if (pmi2_errno)
        PMI2U_ERR_SETANDJUMP(1, pmi2_errno, "PMIi_ReadCommandExp");
    PMI2U_ERR_CHKANDJUMP(rc, pmi2_errno, PMI2_ERR_OTHER,
                         "**pmi2_nameservunpublish %s",
                         errmsg ? errmsg : "unknown");

fn_exit:
    free(cmd.command);
    freepairs(cmd.pairs, cmd.nPairs);
    PMI2U_printf("[END]");
    return pmi2_errno;
fn_fail:
    goto fn_exit;
}

/* ------------------------------------------------------------------------- */
/* Service Routines */
/* ------------------------------------------------------------------------- */

/* ------------------------------------------------------------------------- */
/*
 * PMIi_ReadCommand - Reads an entire command from the PMI socket.  This
 * routine blocks the thread until the command is read.
 *
 * PMIi_WriteSimpleCommand - Write a simple command to the PMI socket; this
 * allows printf - style arguments.  This blocks the thread until the buffer
 * has been written (for fault-tolerance, we may want to keep it around
 * in case of PMI failure).
 *
 * PMIi_WaitFor - Wait for a particular PMI command request to complete.
 */
/* ------------------------------------------------------------------------- */

/* frees all of the keyvals pointed to by a keyvalpair* array and the
 * array itself */
static void freepairs(PMI2_Keyvalpair** pairs, int npairs)
{
    int i;

    if (!pairs)
        return;

    for (i = 0; i < npairs; ++i)
        if (pairs[i]->isCopy) {
            /* FIXME casts are here to suppress legitimate constness warnings */
            free((void *)pairs[i]->key);
            free((void *)pairs[i]->value);
            free(pairs[i]);
        }
    free(pairs);
}

/* getval & friends -- these functions search the pairs list for a
 * matching key, set val appropriately and return 1.  If no matching
 * key is found, 0 is returned.
 * If the value is invalid, -1 is returned */

static int getval(PMI2_Keyvalpair *const pairs[], int npairs, const char *key,
                  const char **value, int *vallen)
{
    int i;

    for (i = 0; i < npairs; ++i)
        if (strncmp(key, pairs[i]->key, PMI2_MAX_KEYLEN) == 0) {
            *value = pairs[i]->value;
            *vallen = pairs[i]->valueLen;
            return 1;
        }

    return 0;
}

static int getvalint(PMI2_Keyvalpair *const pairs[], int npairs,
                     const char *key, int *val)
{
    int found;
    const char *value;
    int vallen;
    int ret;
    /* char *endptr; */

    found = getval(pairs, npairs, key, &value, &vallen);
    if (found != 1)
        return found;

    if (vallen == 0)
        return -1;

    ret = sscanf(value, "%d", val);
    if (ret != 1)
        return -1;

    /* *val = strtoll(value, &endptr, 0); */
    /* if (endptr - value != vallen) */
    /*     return -1; */

    return 1;
}

static int getvalptr(PMI2_Keyvalpair *const pairs[], int npairs,
                     const char *key, void *val)
{
    int found;
    const char *value;
    int vallen;
    int ret;
    void **val_ = val;
    /* char *endptr; */

    found = getval(pairs, npairs, key, &value, &vallen);
    if (found != 1)
        return found;

    if (vallen == 0)
        return -1;

    ret = sscanf(value, "%p", val_);
    if (ret != 1)
        return -1;

    /* *val_ = (void *)(PMI2R_Upint)strtoll(value, &endptr, 0); */
    /* if (endptr - value != vallen) */
    /*     return -1; */

    return 1;
}

static int getvalbool(PMI2_Keyvalpair *const pairs[], int npairs,
                      const char *key, int *val)
{
    int found;
    const char *value;
    int vallen;

    found = getval(pairs, npairs, key, &value, &vallen);
    if (found != 1)
        return found;

    if (strlen("TRUE") == vallen && !strncmp(value, "TRUE", vallen))
        *val = 1/*TRUE*/;
    else if (strlen("FALSE") == vallen && !strncmp(value, "FALSE", vallen))
        *val = 0/*FALSE*/;
    else
        return -1;

    return 1;
}

/* parse_keyval(cmdptr, len, key, val, vallen)
   Scans through buffer specified by cmdptr looking for the first key
   and value.

   IN/OUT cmdptr - IN: pointer to buffer;
                   OUT: pointer to byte after the ';' terminating the value
   IN/OUT len    - IN: length of buffer;
                   OUT: length of buffer not read
   OUT key       - pointer to null-terminated string containing the key
   OUT val       - pointer to string containing the value
   OUT vallen    - length of the value string

   This function will modify the buffer passed through cmdptr to insert
   '\0' following the key, and to replace escaped ';;' with ';'.
*/
static int parse_keyval(char **cmdptr, int *len, char **key, char **val,
                        int *vallen)
{
    int pmi2_errno = PMI2_SUCCESS;
    char *c = *cmdptr;
    char *d;

    /*PMI2U_printf("[BEGIN]");*/

    /* find key */
    *key = c; /* key is at the start of the buffer */
    while (*len && *c != '=') {
        --*len;
        ++c;
    }
    PMI2U_ERR_CHKANDJUMP(*len == 0, pmi2_errno, PMI2_ERR_OTHER, "**bad_keyval");
    PMI2U_ERR_CHKANDJUMP((c - *key) > PMI2_MAX_KEYLEN, pmi2_errno,
                         PMI2_ERR_OTHER, "**bad_keyval");
    *c = '\0'; /* terminate the key string */

    /* skip over the '=' */
    --*len;
    ++c;

    /* find val */
    *val = d = c; /* val is next */
    while (*len) {
        if (*c == ';') { /* handle escaped ';' */
            if (*(c+1) != ';')
                break;
            else {
                --*len;
                ++c;
            }
        }
        --*len;
        *(d++) = *(c++);
    }
    PMI2U_ERR_CHKANDJUMP(*len == 0, pmi2_errno, PMI2_ERR_OTHER, "**bad_keyval");
    PMI2U_ERR_CHKANDJUMP((d - *val) > PMI2_MAX_VALLEN, pmi2_errno,
                         PMI2_ERR_OTHER, "**bad_keyval");
    *c = '\0'; /* terminate the val string */
    *vallen = d - *val;

    *cmdptr = c+1; /* skip over the ';' */
    --*len;

fn_exit:
    /*PMI2U_printf("[END]");*/
    return pmi2_errno;
fn_fail:
    goto fn_exit;
}

static int create_keyval(PMI2_Keyvalpair **kv, const char *key,
                         const char *val, int vallen)
{
    int pmi2_errno = PMI2_SUCCESS;
    int key_len = strlen(key);
    char *key_p;
    char *value_p;
    PMI2U_CHKMEM_DECL(3);

    /*PMI2U_printf("[BEGIN]");*/
    /*PMI2U_printf("[BEGIN] create_keyval(%p, %s, %s, %d)",
                   kv, key, val, vallen);*/

    PMI2U_CHKMEM_MALLOC(*kv, PMI2_Keyvalpair *, sizeof(PMI2_Keyvalpair),
                        pmi2_errno, "pair");

    PMI2U_CHKMEM_MALLOC(key_p, char *, key_len+1, pmi2_errno, "key");
MPIU_Strncpy(key_p, key, key_len+1); key_p[key_len] = '\0'; PMI2U_CHKMEM_MALLOC(value_p, char *, vallen+1, pmi2_errno, "value"); memcpy(value_p, val, vallen); value_p[vallen] = '\0'; (*kv)->key = key_p; (*kv)->value = value_p; (*kv)->valueLen = vallen; (*kv)->isCopy = 1/*TRUE*/; fn_exit: PMI2U_CHKMEM_COMMIT(); /*PMI2U_printf("[END]");*/ return pmi2_errno; fn_fail: PMI2U_CHKMEM_REAP(); goto fn_exit; } /* Note that we fill in the fields in a command that is provided. We may want to share these routines with the PMI version 2 server */ int PMIi_ReadCommand( int fd, PMI2_Command *cmd ) { int pmi2_errno = PMI2_SUCCESS; char cmd_len_str[PMII_COMMANDLEN_SIZE+1]; int cmd_len, remaining_len, vallen = 0; char *c, *cmd_buf = NULL; char *key, *val = NULL; ssize_t nbytes; ssize_t offset; int num_pairs; int pair_index; char *command = NULL; int nPairs; int found; PMI2_Keyvalpair **pairs = NULL; PMI2_Command *target_cmd; PMI2U_printf("[BEGIN]"); memset(cmd_len_str, 0, sizeof(cmd_len_str)); #ifdef MPICH_IS_THREADED MPIU_THREAD_CHECK_BEGIN; { MPID_Thread_mutex_lock(&mutex); while (blocked && !cmd->complete) MPID_Thread_cond_wait(&cond, &mutex); if (cmd->complete) { MPID_Thread_mutex_unlock(&mutex); goto fn_exit; } blocked = 1/*TRUE*/; MPID_Thread_mutex_unlock(&mutex); } MPIU_THREAD_CHECK_END; #endif do { /* get length of cmd */ offset = 0; do { do { nbytes = read(fd, &cmd_len_str[offset], PMII_COMMANDLEN_SIZE - offset); } while (nbytes == -1 && errno == EINTR); PMI2U_ERR_CHKANDJUMP(nbytes <= 0, pmi2_errno, PMI2_ERR_OTHER, "**read %s", strerror(errno)); offset += nbytes; } while (offset < PMII_COMMANDLEN_SIZE); cmd_len = atoi(cmd_len_str); cmd_buf = malloc(cmd_len+1); if (!cmd_buf) PMI2U_CHKMEM_SETERR(pmi2_errno, cmd_len+1, "cmd_buf"); memset(cmd_buf, 0, cmd_len+1); /* get command */ offset = 0; do { do { nbytes = read(fd, &cmd_buf[offset], cmd_len - offset); } while (nbytes == -1 && errno == EINTR); PMI2U_ERR_CHKANDJUMP(nbytes <= 0, pmi2_errno, PMI2_ERR_OTHER, "**read %s", 
strerror(errno)); offset += nbytes; } while (offset < cmd_len); PMI2U_printf("PMI received (cmdlen %d): %s", cmd_len, cmd_buf); /* count number of "key=val;" */ c = cmd_buf; remaining_len = cmd_len; num_pairs = 0; while (remaining_len > 0) { while (remaining_len && *c != ';') { --remaining_len; ++c; } if (*c == ';' && *(c+1) == ';') { remaining_len -= 2; c += 2; } else { ++num_pairs; --remaining_len; ++c; } } c = cmd_buf; remaining_len = cmd_len; pmi2_errno = parse_keyval(&c, &remaining_len, &key, &val, &vallen); if (pmi2_errno) PMI2U_ERR_POP(pmi2_errno); PMI2U_ERR_CHKANDJUMP(strncmp(key, "cmd", PMI2_MAX_KEYLEN) != 0, pmi2_errno, PMI2_ERR_OTHER, "**bad_cmd"); command = malloc(vallen+1); if (!command) PMI2U_CHKMEM_SETERR(pmi2_errno, vallen+1, "command"); memcpy(command, val, vallen); command[vallen] = '\0'; /* terminate the heap copy, not the source buffer */ nPairs = num_pairs-1; /* num_pairs-1 because the first pair is the command */ pairs = malloc(sizeof(PMI2_Keyvalpair *) * nPairs); if (!pairs) PMI2U_CHKMEM_SETERR(pmi2_errno, sizeof(PMI2_Keyvalpair *) * nPairs, "pairs"); pair_index = 0; while (remaining_len > 0) { PMI2_Keyvalpair *pair; pmi2_errno = parse_keyval(&c, &remaining_len, &key, &val, &vallen); if (pmi2_errno) PMI2U_ERR_POP(pmi2_errno); pmi2_errno = create_keyval(&pair, key, val, vallen); if (pmi2_errno) PMI2U_ERR_POP(pmi2_errno); pairs[pair_index] = pair; ++pair_index; } found = getvalptr(pairs, nPairs, THRID_KEY, &target_cmd); if (!found) /* if there's no thrid specified, assume it's for you */ target_cmd = cmd; else if (PMI2_debug && SEARCH_REMOVE(target_cmd) == 0) { int i; PMI2U_printf("command=%s", command); for (i = 0; i < nPairs; ++i) dump_PMI2_Keyvalpair(pairs[i]); } target_cmd->command = command; target_cmd->nPairs = nPairs; target_cmd->pairs = pairs; target_cmd->complete = 1/*TRUE*/; if (cmd_buf) free(cmd_buf); cmd_buf = NULL; } while (!cmd->complete); #ifdef MPICH_IS_THREADED MPIU_THREAD_CHECK_BEGIN; { MPID_Thread_mutex_lock(&mutex);
blocked = 0/*FALSE*/; MPID_Thread_cond_broadcast(&cond); MPID_Thread_mutex_unlock(&mutex); } MPIU_THREAD_CHECK_END; #endif fn_exit: PMI2U_printf("[END]"); return pmi2_errno; fn_fail: if (cmd_buf) free(cmd_buf); goto fn_exit; } /* PMIi_ReadCommandExp -- reads a command, checks that it matches the * expected command string exp, and parses the return code */ int PMIi_ReadCommandExp( int fd, PMI2_Command *cmd, const char *exp, int* rc, const char **errmsg ) { int pmi2_errno = PMI2_SUCCESS; int found; int msglen; PMI2U_printf("[BEGIN]"); pmi2_errno = PMIi_ReadCommand(fd, cmd); if (pmi2_errno) PMI2U_ERR_POP(pmi2_errno); PMI2U_ERR_CHKANDJUMP(strncmp(cmd->command, exp, strlen(exp)) != 0, pmi2_errno, PMI2_ERR_OTHER, "**bad_cmd"); found = getvalint(cmd->pairs, cmd->nPairs, RC_KEY, rc); PMI2U_ERR_CHKANDJUMP(found != 1, pmi2_errno, PMI2_ERR_OTHER, "**intern"); found = getval(cmd->pairs, cmd->nPairs, ERRMSG_KEY, errmsg, &msglen); PMI2U_ERR_CHKANDJUMP(found == -1, pmi2_errno, PMI2_ERR_OTHER, "**intern"); if (!found) *errmsg = NULL; fn_exit: PMI2U_printf("[END]"); return pmi2_errno; fn_fail: goto fn_exit; } int PMIi_WriteSimpleCommand( int fd, PMI2_Command *resp, const char cmd[], PMI2_Keyvalpair *pairs[], int npairs) { int pmi2_errno = PMI2_SUCCESS; char cmdbuf[PMII_MAX_COMMAND_LEN]; char cmdlenbuf[PMII_COMMANDLEN_SIZE+1]; char *c = cmdbuf; int ret; int remaining_len = PMII_MAX_COMMAND_LEN; int cmdlen; int i; ssize_t nbytes; ssize_t offset; int pair_index; PMI2U_printf("[BEGIN]"); /* leave space for length field */ memset(c, ' ', PMII_COMMANDLEN_SIZE); c += PMII_COMMANDLEN_SIZE; PMI2U_ERR_CHKANDJUMP(strlen(cmd) > PMI2_MAX_VALLEN, pmi2_errno, PMI2_ERR_OTHER, "**cmd_too_long"); /* Subtract PMII_COMMANDLEN_SIZE to keep certain * implementations of snprintf() from segfaulting * when zeroing out the buffer. * PMII_COMMANDLEN_SIZE must be added back later * so that the protocol message size sent out is * correct.
*/ remaining_len -= PMII_COMMANDLEN_SIZE; ret = snprintf(c, remaining_len, "cmd=%s;", cmd); PMI2U_ERR_CHKANDJUMP(ret >= remaining_len, pmi2_errno, PMI2_ERR_OTHER, "**intern %s", "Ran out of room for command"); c += ret; remaining_len -= ret; #ifdef MPICH_IS_THREADED MPIU_THREAD_CHECK_BEGIN; if (resp) { ret = snprintf(c, remaining_len, "thrid=%p;", resp); PMI2U_ERR_CHKANDJUMP(ret >= remaining_len, pmi2_errno, PMI2_ERR_OTHER, "**intern %s", "Ran out of room for command"); c += ret; remaining_len -= ret; } MPIU_THREAD_CHECK_END; #endif for (pair_index = 0; pair_index < npairs; ++pair_index) { /* write key= */ PMI2U_ERR_CHKANDJUMP(strlen(pairs[pair_index]->key) > PMI2_MAX_KEYLEN, pmi2_errno, PMI2_ERR_OTHER, "**key_too_long"); ret = snprintf(c, remaining_len, "%s=", pairs[pair_index]->key); PMI2U_ERR_CHKANDJUMP(ret >= remaining_len, pmi2_errno, PMI2_ERR_OTHER, "**intern %s", "Ran out of room for command"); c += ret; remaining_len -= ret; /* write value and escape ;'s as ;; */ PMI2U_ERR_CHKANDJUMP(pairs[pair_index]->valueLen > PMI2_MAX_VALLEN, pmi2_errno, PMI2_ERR_OTHER, "**val_too_long"); for (i = 0; i < pairs[pair_index]->valueLen; ++i) { if (pairs[pair_index]->value[i] == ';') { *c = ';'; ++c; --remaining_len; } *c = pairs[pair_index]->value[i]; ++c; --remaining_len; } /* append ; */ *c = ';'; ++c; --remaining_len; } /* prepend the buffer length stripping off the trailing '\0' * Add back the PMII_COMMANDLEN_SIZE to get the correct * protocol size. 
*/ cmdlen = PMII_MAX_COMMAND_LEN - (remaining_len + PMII_COMMANDLEN_SIZE); ret = snprintf(cmdlenbuf, sizeof(cmdlenbuf), "%d", cmdlen); PMI2U_ERR_CHKANDJUMP(ret >= PMII_COMMANDLEN_SIZE, pmi2_errno, PMI2_ERR_OTHER, "**intern %s", "Command length won't fit in length buffer"); memcpy(cmdbuf, cmdlenbuf, ret); cmdbuf[cmdlen+PMII_COMMANDLEN_SIZE] = '\0'; /* silence valgrind warnings in PMI2U_printf */ PMI2U_printf("PMI sending: %s", cmdbuf); #ifdef MPICH_IS_THREADED MPIU_THREAD_CHECK_BEGIN; { MPID_Thread_mutex_lock(&mutex); while (blocked) MPID_Thread_cond_wait(&cond, &mutex); blocked = 1/*TRUE*/; MPID_Thread_mutex_unlock(&mutex); } MPIU_THREAD_CHECK_END; #endif if (PMI2_debug) ENQUEUE(resp); offset = 0; do { do { nbytes = write(fd, &cmdbuf[offset], cmdlen + PMII_COMMANDLEN_SIZE - offset); } while (nbytes == -1 && errno == EINTR); PMI2U_ERR_CHKANDJUMP(nbytes <= 0, pmi2_errno, PMI2_ERR_OTHER, "**write %s", strerror(errno)); offset += nbytes; } while (offset < cmdlen + PMII_COMMANDLEN_SIZE); #ifdef MPICH_IS_THREADED MPIU_THREAD_CHECK_BEGIN; { MPID_Thread_mutex_lock(&mutex); blocked = 0/*FALSE*/; MPID_Thread_cond_broadcast(&cond); MPID_Thread_mutex_unlock(&mutex); } MPIU_THREAD_CHECK_END; #endif fn_fail: goto fn_exit; fn_exit: PMI2U_printf("[END]"); return pmi2_errno; } int PMIi_WriteSimpleCommandStr(int fd, PMI2_Command *resp, const char cmd[], ...) 
{ int pmi2_errno = PMI2_SUCCESS; va_list ap; PMI2_Keyvalpair *pairs; PMI2_Keyvalpair **pairs_p; int npairs; int i; const char *key; const char *val; PMI2U_CHKMEM_DECL(2); PMI2U_printf("[BEGIN]"); npairs = 0; va_start(ap, cmd); while ((key = va_arg(ap, const char*))) { val = va_arg(ap, const char*); ++npairs; } va_end(ap); /* allocates n+1 pairs in case npairs is 0, avoiding unnecessary warning logs */ PMI2U_CHKMEM_MALLOC(pairs, PMI2_Keyvalpair*, (sizeof(PMI2_Keyvalpair) * (npairs+1)), pmi2_errno, "pairs"); PMI2U_CHKMEM_MALLOC(pairs_p, PMI2_Keyvalpair**, (sizeof(PMI2_Keyvalpair*) * (npairs+1)), pmi2_errno, "pairs_p"); i = 0; va_start(ap, cmd); while ((key = va_arg(ap, const char *))) { val = va_arg(ap, const char *); pairs_p[i] = &pairs[i]; pairs[i].key = key; pairs[i].value = val; if (val == NULL) pairs[i].valueLen = 0; else pairs[i].valueLen = strlen(val); pairs[i].isCopy = 0/*FALSE*/; ++i; } va_end(ap); pmi2_errno = PMIi_WriteSimpleCommand(fd, resp, cmd, pairs_p, npairs); if (pmi2_errno) PMI2U_ERR_POP(pmi2_errno); fn_exit: PMI2U_printf("[END]"); PMI2U_CHKMEM_FREEALL(); return pmi2_errno; fn_fail: goto fn_exit; } /* * This code allows a program to contact a host/port for the PMI socket. */ #include #include /* sockaddr_in (Internet) */ #include /* TCP_NODELAY */ #include /* sockaddr_un (Unix) */ #include /* defs of gethostbyname */ #include /* fcntl, F_GET/SETFL */ #include /* This is really IP!? 
*/ #ifndef TCP #define TCP 0 #endif /* stub for connecting to a specified host/port instead of using a specified fd inherited from a parent process */ static int PMII_Connect_to_pm( char *hostname, int portnum ) { struct hostent *hp; struct sockaddr_in sa; int fd; int optval = 1; int q_wait = 1; hp = gethostbyname( hostname ); if (!hp) { PMI2U_printf("Unable to get host entry for %s", hostname ); return -1; } memset( (void *)&sa, 0, sizeof(sa) ); /* POSIX might define h_addr_list only and not define h_addr */ #ifdef HAVE_H_ADDR_LIST memcpy( (void *)&sa.sin_addr, (void *)hp->h_addr_list[0], hp->h_length); #else memcpy( (void *)&sa.sin_addr, (void *)hp->h_addr, hp->h_length); #endif sa.sin_family = hp->h_addrtype; sa.sin_port = htons( (unsigned short) portnum ); fd = socket( AF_INET, SOCK_STREAM, TCP ); if (fd < 0) { PMI2U_printf("Unable to get AF_INET socket" ); return -1; } if (setsockopt( fd, IPPROTO_TCP, TCP_NODELAY, (char *)&optval, sizeof(optval) )) { perror( "Error calling setsockopt:" ); } /* We wait here for the connection to succeed */ if (connect( fd, (struct sockaddr *)&sa, sizeof(sa) ) < 0) { switch (errno) { case ECONNREFUSED: PMI2U_printf("connect failed with connection refused" ); /* (close socket, get new socket, try again) */ if (q_wait) close(fd); return -1; case EINPROGRESS: /* (nonblocking) - select for writing. */ break; case EISCONN: /* (already connected) */ break; case ETIMEDOUT: /* timed out */ PMI2U_printf("connect failed with timeout" ); return -1; default: PMI2U_printf("connect failed with errno %d", errno ); return -1; } } return fd; } /* ------------------------------------------------------------------------- */ /* * Singleton Init. * * MPI-2 allows processes to become MPI processes and then make MPI calls, * such as MPI_Comm_spawn, that require a process manager (this is different * from the much simpler case of allowing MPI programs to run with an * MPI_COMM_WORLD of size 1 without an mpiexec or process manager).
* * The process starts when either the client or the process manager contacts * the other. If the client starts, it sends a singinit command and * waits for the server to respond with its own singinit command. * If the server starts, it sends a singinit command and waits for the * client to respond with its own singinit command. * * client sends singinit with these required values * pmi_version= * pmi_subversion= * * and these optional values * stdio=[yes|no] * authtype=[none|shared|] * authstring= * * server sends singinit with the same required and optional values as * above. * * At this point, the protocol is now the same in both cases, and has the * following components: * * server sends singinit_info with these required fields * versionok=[yes|no] * stdio=[yes|no] * kvsname= * * The client then issues the init command (see PMII_getmaxes) * * cmd=init pmi_version= pmi_subversion= * * and expects to receive a * * cmd=response_to_init rc=0 pmi_version= pmi_subversion= * * (This is the usual init sequence). * */ /* ------------------------------------------------------------------------- */ /* This is a special routine used to re-initialize PMI when it is in the singleton init case. That is, the executable was started without mpiexec, and PMI2_Init returned as if there was only one process. Note that PMI routines should not call PMII_singinit; they should call PMIi_InitIfSingleton(), which both connects to the process manager and sets up the initial KVS connection entry. */ static int PMII_singinit(void) { return 0; } /* Promote PMI to a fully initialized version if it was started as a singleton init */ static int PMIi_InitIfSingleton(void) { return 0; } static int accept_one_connection(int list_sock) { int gotit, new_sock; struct sockaddr_in from; socklen_t len; len = sizeof(from); gotit = 0; while ( ! gotit ) { new_sock = accept(list_sock, (struct sockaddr *)&from, &len); if (new_sock == -1) { if (errno == EINTR) continue; /* interrupted?
If so, try again */ else { PMI2U_printf("accept failed in accept_one_connection"); exit (-1); } } else gotit = 1; } return new_sock; } /* Get the FD to use for PMI operations. If a port is used, rather than a pre-established FD (i.e., via pipe), this routine will handle the initial handshake. */ static int getPMIFD(void) { int pmi2_errno = PMI2_SUCCESS; char *p; /* Set the default */ PMI2_fd = -1; p = getenv("PMI_FD"); if (p) { PMI2_fd = atoi(p); goto fn_exit; } p = getenv( "PMI_PORT" ); if (p) { int portnum; char hostname[MAXHOSTNAME+1]; char *pn, *ph; /* Connect to the indicated port (in format hostname:portnumber) and get the fd for the socket */ /* Split p into host and port */ pn = p; ph = hostname; while (*pn && *pn != ':' && (ph - hostname) < MAXHOSTNAME) { *ph++ = *pn++; } *ph = 0; PMI2U_ERR_CHKANDJUMP(*pn != ':', pmi2_errno, PMI2_ERR_OTHER, "**pmi2_port %s", p); portnum = atoi( pn+1 ); /* FIXME: Check for valid integer after : */ /* This routine only gets the fd to use to talk to the process manager. The handshake below is used to setup the initial values */ PMI2_fd = PMII_Connect_to_pm( hostname, portnum ); PMI2U_ERR_CHKANDJUMP(PMI2_fd < 0, pmi2_errno, PMI2_ERR_OTHER, "**connect_to_pm %s %d", hostname, portnum); } /* OK to return success for singleton init */ fn_exit: return pmi2_errno; fn_fail: goto fn_exit; } /* ----------------------------------------------------------------------- */ /* * This function is used to request information from the server and check * that the response uses the expected command name. On a successful * return from this routine, additional PMI2U_getval calls may be used * to access information about the returned value. * * If checkRc is true, this routine also checks that the rc value returned * was 0. If not, it uses the "msg" value to report on the reason for * the failure. 
*/ static int GetResponse( const char request[], const char expectedCmd[], int checkRc ) { int err = 0; return err; } static void dump_PMI2_Keyvalpair(PMI2_Keyvalpair *kv) { PMI2U_printf("key = %s", kv->key); PMI2U_printf("value = %s", kv->value); PMI2U_printf("valueLen = %d", kv->valueLen); PMI2U_printf("isCopy = %s", kv->isCopy ? "TRUE" : "FALSE"); } static void dump_PMI2_Command(PMI2_Command *cmd) { int i; PMI2U_printf("cmd = %s", cmd->command); PMI2U_printf("nPairs = %d", cmd->nPairs); for (i = 0; i < cmd->nPairs; ++i) dump_PMI2_Keyvalpair(cmd->pairs[i]); } #if 0 /* Currently disabled * *_connect_to_stepd() * * If the user requests PMI2_CONNECT_TO_SERVER do * connect over the PMI2_SUN_PATH unix socket. */ static int _connect_to_stepd(int s) { struct sockaddr_un addr; int cc; char *usock; char *p; int myrank; int n; usock = getenv("PMI2_SUN_PATH"); if (usock == NULL) return -1; cc = socket(PF_UNIX, SOCK_STREAM, 0); if (cc < 0) { perror("socket()"); return -1; } memset(&addr, 0, sizeof(struct sockaddr_un)); addr.sun_family = AF_UNIX; sprintf(addr.sun_path, "%s", usock); if (connect(cc, (struct sockaddr *)&addr, sizeof(struct sockaddr_un)) != 0) { perror("connect()"); close(cc); return -1; } /* The very first thing we have to tell the pmi * server is our rank, so he can associate our * file descriptor with our rank. */ p = getenv("PMI_RANK"); if (p == NULL) { fprintf(stderr, "%s: failed to get PMI_RANK from env\n", __func__); close(cc); return -1; } myrank = atoi(p); n = write(cc, &myrank, sizeof(int)); if (n != sizeof(int)) { perror("write()"); close(cc); return -1; } /* close() all socket and return * the new. */ close(s); return cc; } #endif
slurm-slurm-15-08-7-1/contribs/pmi2/pmi2_util.c
/* -*- Mode: C; c-basic-offset:4 ; -*- */ /* * (C) 2001 by Argonne National Laboratory. * See COPYRIGHT in top-level directory.
*/ /* Allow fprintf to logfile */ /* style: allow:fprintf:1 sig:0 */ /* Utility functions associated with PMI implementation, but not part of the PMI interface itself. Reading and writing on pipes, signals, and parsing key=value messages */ #include #include #include #include #include #include "pmi2_util.h" #define MAXVALLEN 1024 #define MAXKEYLEN 32 /* These are not the keyvals in the keyval space that is part of the PMI specification. They are just part of this implementation's internal utilities. */ struct PMI2U_keyval_pairs { char key[MAXKEYLEN]; char value[MAXVALLEN]; }; static struct PMI2U_keyval_pairs PMI2U_keyval_tab[64] = { { { 0 }, { 0 } } }; static int PMI2U_keyval_tab_idx = 0; /* This is used to prepend printed output. Set the initial value to "unset" */ static char PMI2U_print_id[PMI2_IDSIZE] = "unset"; void PMI2U_Set_rank(int PMI_rank) { snprintf(PMI2U_print_id, PMI2_IDSIZE, "cli_%d", PMI_rank); } void PMI2U_SetServer(void) { strncpy(PMI2U_print_id, "server", PMI2_IDSIZE); } #define MAX_READLINE 1024 /* * Return the next newline-terminated string of maximum length maxlen. * This is a buffered version, and reads from fd as necessary. A */ int PMI2U_readline(int fd, char *buf, int maxlen) { static char readbuf[MAX_READLINE]; static char *nextChar = 0, *lastChar = 0; /* lastChar is really one past last char */ int curlen, n; char *p, ch; /* Note: On the client side, only one thread at a time should be calling this, and there should only be a single fd. 
Server side code should not use this routine (see the replacement version in src/pm/util/pmiserv.c) */ /*PMI2U_Assert(nextChar == lastChar || fd == lastfd);*/ p = buf; curlen = 1; /* Make room for the null */ while (curlen < maxlen) { if (nextChar == lastChar) { do { n = read(fd, readbuf, sizeof(readbuf) - 1); } while (n == -1 && errno == EINTR); if (n == 0) { /* EOF */ break; } else if (n < 0) { if (curlen == 1) { curlen = 0; } break; } nextChar = readbuf; lastChar = readbuf + n; /* Add a null at the end just to make it easier to print the read buffer */ readbuf[n] = 0; /* FIXME: Make this an optional output */ /* printf( "Readline %s\n", readbuf ); */ } ch = *nextChar++; *p++ = ch; curlen++; if (ch == '\n') break; } /* We null terminate the string for convenience in printing */ *p = 0; PMI2U_printf("PMI received: %s", buf); /* Return the number of characters, not counting the null */ return curlen - 1; } int PMI2U_writeline(int fd, char *buf) { int size = strlen(buf), n; if (buf[size - 1] != '\n') /* error: no newline at end */ PMI2U_printf("write_line: message string doesn't end in newline: :%s:", buf); else { PMI2U_printf("PMI sending: %s", buf); do { n = write(fd, buf, size); } while (n == -1 && errno == EINTR); if (n < 0) { PMI2U_printf("write_line error; fd=%d buf=:%s:", fd, buf); return (-1); } if (n < size) PMI2U_printf("write_line failed to write entire message"); } return 0; } /* * Given an input string st, parse it into internal storage that can be * queried by routines such as PMI2U_getval. 
*/ int PMI2U_parse_keyvals(char *st) { char *p, *keystart, *valstart; int offset; if (!st) return (-1); PMI2U_keyval_tab_idx = 0; p = st; while (1) { while (*p == ' ') p++; /* got non-blank */ if (*p == '=') { PMI2U_printf("PMI2U_parse_keyvals: unexpected = at character %ld in %s", (long int) (p - st), st); return (-1); } if (*p == '\n' || *p == '\0') return (0); /* normal exit */ /* got normal character */ keystart = p; /* remember where key started */ while (*p != ' ' && *p != '=' && *p != '\n' && *p != '\0') p++; if (*p == ' ' || *p == '\n' || *p == '\0') { PMI2U_printf("PMI2U_parse_keyvals: unexpected key delimiter at character %ld in %s", (long int) (p - st), st); return (-1); } /* Null terminate the key */ *p = 0; /* store key */ strncpy(PMI2U_keyval_tab[PMI2U_keyval_tab_idx].key, keystart, MAXKEYLEN); PMI2U_keyval_tab[PMI2U_keyval_tab_idx].key[MAXKEYLEN-1] = '\0'; valstart = ++p; /* start of value */ while (*p != ' ' && *p != '\n' && *p != '\0') p++; /* store value */ strncpy(PMI2U_keyval_tab[PMI2U_keyval_tab_idx].value, valstart, MAXVALLEN); offset = p - valstart; /* When compiled with -fPIC, the pgcc compiler generates incorrect code if "p - valstart" is used instead of using the intermediate offset */ PMI2U_keyval_tab[PMI2U_keyval_tab_idx].value[offset] = '\0'; PMI2U_keyval_tab_idx++; if (*p == ' ') continue; if (*p == '\n' || *p == '\0') return (0); /* value has been set to empty */ } } void PMI2U_dump_keyvals(void) { int i; for (i = 0; i < PMI2U_keyval_tab_idx; i++) PMI2U_printf(" %s=%s", PMI2U_keyval_tab[i].key, PMI2U_keyval_tab[i].value); } char *PMI2U_getval(const char *keystr, char *valstr, int vallen) { int i; for (i = 0; i < PMI2U_keyval_tab_idx; i++) { if (strcmp(keystr, PMI2U_keyval_tab[i].key) == 0) { MPIU_Strncpy(valstr, PMI2U_keyval_tab[i].value, vallen); PMI2U_keyval_tab[i].value[vallen-1] = '\0'; return valstr; } } valstr[0] = '\0'; return NULL ; } void PMI2U_chgval(const char *keystr, char *valstr) { int i; for (i = 0; i < 
PMI2U_keyval_tab_idx; i++) { if (strcmp(keystr, PMI2U_keyval_tab[i].key) == 0) { strncpy(PMI2U_keyval_tab[i].value, valstr, MAXVALLEN); PMI2U_keyval_tab[i].value[MAXVALLEN - 1] = '\0'; } } } /* This code is borrowed from mpich2-1.5/src/pm/util/safestr2.c. The reason is to keep the same code logic around strncpy() as in the original PMI2 implementation. @ MPIU_Strncpy - Copy a string with a maximum length Input Parameters: + instr - String to copy - maxlen - Maximum total length of 'outstr' Output Parameter: . outstr - String to copy into Notes: This routine is the routine that you wish 'strncpy' was. In copying 'instr' to 'outstr', it stops when either the end of 'instr' (the null character) is seen or the maximum length 'maxlen' is reached. Unlike 'strncpy', it does not add enough nulls to 'outstr' after copying 'instr' in order to move precisely 'maxlen' characters. Thus, this routine may be used anywhere 'strcpy' is used, without any performance cost related to large values of 'maxlen'. If there is insufficient space in the destination, the destination is still null-terminated, to avoid potential failures in routines that neglect to check the error code return from this routine. Module: Utility @*/ int MPIU_Strncpy(char *dest, const char *src, size_t n) { char *d_ptr = dest; const char *s_ptr = src; register int i; if (n == 0) return 0; i = (int)n; while (*s_ptr && i-- > 0) { *d_ptr++ = *s_ptr++; } if (i > 0) { *d_ptr = 0; return 0; } else { /* Force a null at the end of the string (gives better safety in case the user fails to check the error code) */ dest[n-1] = 0; /* We may want to force an error message here, at least in the debugging version */ /* printf( "failure in copying %s with length %d\n", src, n ); */ return 1; } }
slurm-slurm-15-08-7-1/contribs/pmi2/pmi2_util.h
/* -*- Mode: C; c-basic-offset:4 ; -*- */ /* * (C) 2007 by Argonne National Laboratory.
* See COPYRIGHT in top-level directory. */ #ifndef PMI2UTIL_H_INCLUDED #define PMI2UTIL_H_INCLUDED #include #include /* maximum sizes for arrays */ #define PMI2_MAXLINE 1024 #define PMI2_IDSIZE 32 #define TRUE 1 #define FALSE 0 #ifdef HAVE__FUNCTION__ #define PMI2U_FUNC __FUNCTION__ #elif defined(HAVE_CAP__FUNC__) #define PMI2U_FUNC __FUNC__ #elif defined(HAVE__FUNC__) #define PMI2U_FUNC __func__ #else #define PMI2U_FUNC __FILE__ #endif #ifdef DEBUG #define PMI2U_printf(x...) do { \ char logstr[1024]; \ snprintf(logstr, 1024, x); \ fprintf(stderr, "[%s (%d): %s] %s\n", \ __FILE__, __LINE__, __FUNCTION__, logstr); \ } while (0) #else #define PMI2U_printf(x...) #endif #define PMI2U_Assert(a_) do { \ if (!(a_)) { \ PMI2U_printf("ASSERT( %s )", #a_); \ } \ } while (0) #define PMI2U_ERR_POP(err) do { \ pmi2_errno = err; \ PMI2U_printf("err. %d", pmi2_errno); \ goto fn_fail; \ } while (0) #define PMI2U_ERR_SETANDJUMP(err, class, x...) do { \ char errstr[1024]; \ snprintf(errstr, 1024, x); \ PMI2U_printf("err. %s", errstr);\ pmi2_errno = class; \ goto fn_fail; \ } while (0) #define PMI2U_ERR_CHKANDJUMP(cond, err, class, x...) 
do { \ if (cond) PMI2U_ERR_SETANDJUMP(err, class, x); \ } while (0) #define PMI2U_CHKMEM_SETERR(rc_, nbytes_, name_) do { \ PMI2U_printf("ERROR: memory allocation of %lu bytes failed for %s", \ (long unsigned int) nbytes_, name_); \ rc_ = PMI2_ERR_NOMEM; \ goto fn_exit; \ } while(0) /* Persistent memory that we may want to recover if something goes wrong */ #define PMI2U_CHKMEM_DECL(n_) \ void* pmi2u_chkmem_stk_[n_] = {0}; \ int pmi2u_chkmem_stk_sp_= 0; \ const int pmi2u_chkmem_stk_sz_ = n_ #define PMI2U_CHKMEM_REAP() \ while (pmi2u_chkmem_stk_sp_ > 0) { \ free ((void*)( pmi2u_chkmem_stk_[--pmi2u_chkmem_stk_sp_] )); \ } #define PMI2U_CHKMEM_COMMIT() pmi2u_chkmem_stk_sp_ = 0 #define PMI2U_CHKMEM_MALLOC(pointer_,type_,nbytes_,rc_,name_) do { \ pointer_ = (type_)malloc(nbytes_); \ if (pointer_ && (pmi2u_chkmem_stk_sp_< pmi2u_chkmem_stk_sz_)) { \ pmi2u_chkmem_stk_[pmi2u_chkmem_stk_sp_++] = pointer_; \ } else { \ PMI2U_CHKMEM_SETERR(rc_,nbytes_,name_); \ goto fn_fail; \ } \ } while(0) #define PMI2U_CHKMEM_FREEALL() \ while (pmi2u_chkmem_stk_sp_ > 0) { \ free ((void*)( pmi2u_chkmem_stk_[--pmi2u_chkmem_stk_sp_] )); \ } /* prototypes for PMIU routines */ void PMI2U_Set_rank( int PMI_rank ); void PMI2U_SetServer( void ); int PMI2U_readline( int fd, char *buf, int max ); int PMI2U_writeline( int fd, char *buf ); int PMI2U_parse_keyvals( char *st ); void PMI2U_dump_keyvals( void ); char *PMI2U_getval( const char *keystr, char *valstr, int vallen ); void PMI2U_chgval( const char *keystr, char *valstr ); int MPIU_Strncpy(char *, const char *, size_t); #endif /* PMI2UTIL_H_INCLUDED */
slurm-slurm-15-08-7-1/contribs/pmi2/slurm/pmi2.h
/* -*- Mode: C; c-basic-offset:4 ; -*- */ /* * (C) 2007 by Argonne National Laboratory. * See COPYRIGHT in top-level directory.
*/ #ifndef PMI2_H_INCLUDED #define PMI2_H_INCLUDED #ifndef USE_PMI2_API /*#error This header file defines the PMI2 API, but PMI2 was not selected*/ #endif #define PMI2_MAX_KEYLEN 64 #define PMI2_MAX_VALLEN 1024 #define PMI2_MAX_ATTRVALUE 1024 #define PMI2_ID_NULL -1 #define PMII_COMMANDLEN_SIZE 6 #define PMII_MAX_COMMAND_LEN (64*1024) #if defined(__cplusplus) extern "C" { #endif static const char FULLINIT_CMD[] = "fullinit"; static const char FULLINITRESP_CMD[] = "fullinit-response"; static const char FINALIZE_CMD[] = "finalize"; static const char FINALIZERESP_CMD[] = "finalize-response"; static const char ABORT_CMD[] = "abort"; static const char JOBGETID_CMD[] = "job-getid"; static const char JOBGETIDRESP_CMD[] = "job-getid-response"; static const char JOBCONNECT_CMD[] = "job-connect"; static const char JOBCONNECTRESP_CMD[] = "job-connect-response"; static const char JOBDISCONNECT_CMD[] = "job-disconnect"; static const char JOBDISCONNECTRESP_CMD[] = "job-disconnect-response"; static const char KVSPUT_CMD[] = "kvs-put"; static const char KVSPUTRESP_CMD[] = "kvs-put-response"; static const char KVSFENCE_CMD[] = "kvs-fence"; static const char KVSFENCERESP_CMD[] = "kvs-fence-response"; static const char KVSGET_CMD[] = "kvs-get"; static const char KVSGETRESP_CMD[] = "kvs-get-response"; static const char GETNODEATTR_CMD[] = "info-getnodeattr"; static const char GETNODEATTRRESP_CMD[] = "info-getnodeattr-response"; static const char PUTNODEATTR_CMD[] = "info-putnodeattr"; static const char PUTNODEATTRRESP_CMD[] = "info-putnodeattr-response"; static const char GETJOBATTR_CMD[] = "info-getjobattr"; static const char GETJOBATTRRESP_CMD[] = "info-getjobattr-response"; static const char NAMEPUBLISH_CMD[] = "name-publish"; static const char NAMEPUBLISHRESP_CMD[] = "name-publish-response"; static const char NAMEUNPUBLISH_CMD[] = "name-unpublish"; static const char NAMEUNPUBLISHRESP_CMD[] = "name-unpublish-response"; static const char NAMELOOKUP_CMD[] = "name-lookup"; static 
const char NAMELOOKUPRESP_CMD[] = "name-lookup-response"; static const char RING_CMD[] = "ring"; static const char RINGRESP_CMD[] = "ring-response"; static const char PMIJOBID_KEY[] = "pmijobid"; static const char PMIRANK_KEY[] = "pmirank"; static const char SRCID_KEY[] = "srcid"; static const char THREADED_KEY[] = "threaded"; static const char RC_KEY[] = "rc"; static const char ERRMSG_KEY[] = "errmsg"; static const char PMIVERSION_KEY[] = "pmi-version"; static const char PMISUBVER_KEY[] = "pmi-subversion"; static const char RANK_KEY[] = "rank"; static const char SIZE_KEY[] = "size"; static const char APPNUM_KEY[] = "appnum"; static const char SPAWNERJOBID_KEY[] = "spawner-jobid"; static const char DEBUGGED_KEY[] = "debugged"; static const char PMIVERBOSE_KEY[] = "pmiverbose"; static const char ISWORLD_KEY[] = "isworld"; static const char MSG_KEY[] = "msg"; static const char JOBID_KEY[] = "jobid"; static const char KVSCOPY_KEY[] = "kvscopy"; static const char KEY_KEY[] = "key"; static const char VALUE_KEY[] = "value"; static const char FOUND_KEY[] = "found"; static const char WAIT_KEY[] = "wait"; static const char NAME_KEY[] = "name"; static const char PORT_KEY[] = "port"; static const char THRID_KEY[] = "thrid"; static const char INFOKEYCOUNT_KEY[] = "infokeycount"; static const char INFOKEY_KEY[] = "infokey%d"; static const char INFOVAL_KEY[] = "infoval%d"; static const char RING_COUNT_KEY[] = "ring-count"; static const char RING_LEFT_KEY[] = "ring-left"; static const char RING_RIGHT_KEY[] = "ring-right"; static const char TRUE_VAL[] = "TRUE"; static const char FALSE_VAL[] = "FALSE"; /* Local types */ /* Parse commands are in this structure. 
Fields in this structure are dynamically allocated as necessary */ typedef struct PMI2_Keyvalpair { const char *key; const char *value; int valueLen; /* Length of a value (values may contain nulls, so we need this) */ int isCopy; /* The value is a copy (and will need to be freed) if this is true, otherwise, it is a null-terminated string in the original buffer */ } PMI2_Keyvalpair; typedef struct PMI2_Command { int nPairs; /* Number of key=value pairs */ char *command; /* Overall command buffer */ PMI2_Keyvalpair **pairs; /* Array of pointers to pairs */ int complete; } PMI2_Command; /*D PMI2_CONSTANTS - PMI2 definitions Error Codes: + PMI2_SUCCESS - operation completed successfully . PMI2_FAIL - operation failed . PMI2_ERR_NOMEM - input buffer not large enough . PMI2_ERR_INIT - PMI not initialized . PMI2_ERR_INVALID_ARG - invalid argument . PMI2_ERR_INVALID_KEY - invalid key argument . PMI2_ERR_INVALID_KEY_LENGTH - invalid key length argument . PMI2_ERR_INVALID_VAL - invalid val argument . PMI2_ERR_INVALID_VAL_LENGTH - invalid val length argument . PMI2_ERR_INVALID_LENGTH - invalid length argument . PMI2_ERR_INVALID_NUM_ARGS - invalid number of arguments . PMI2_ERR_INVALID_ARGS - invalid args argument . PMI2_ERR_INVALID_NUM_PARSED - invalid num_parsed length argument . PMI2_ERR_INVALID_KEYVALP - invalid keyvalp argument . PMI2_ERR_INVALID_SIZE - invalid size argument - PMI2_ERR_OTHER - other unspecified error D*/ #define PMI2_SUCCESS 0 #define PMI2_FAIL -1 #define PMI2_ERR_INIT 1 #define PMI2_ERR_NOMEM 2 #define PMI2_ERR_INVALID_ARG 3 #define PMI2_ERR_INVALID_KEY 4 #define PMI2_ERR_INVALID_KEY_LENGTH 5 #define PMI2_ERR_INVALID_VAL 6 #define PMI2_ERR_INVALID_VAL_LENGTH 7 #define PMI2_ERR_INVALID_LENGTH 8 #define PMI2_ERR_INVALID_NUM_ARGS 9 #define PMI2_ERR_INVALID_ARGS 10 #define PMI2_ERR_INVALID_NUM_PARSED 11 #define PMI2_ERR_INVALID_KEYVALP 12 #define PMI2_ERR_INVALID_SIZE 13 #define PMI2_ERR_OTHER 14 /* This is here to allow spawn multiple functions to compile. 
This needs to be removed once those functions are fixed for pmi2 */ /* typedef struct PMI_keyval_t { char * key; char * val; } PMI_keyval_t; */ /*@ PMI2_Connect_comm_t - connection structure used when connecting to other jobs Fields: + read - Read from a connection to the leader of the job to which this process will be connecting. Returns 0 on success or an MPI error code on failure. . write - Write to a connection to the leader of the job to which this process will be connecting. Returns 0 on success or an MPI error code on failure. . ctx - An anonymous pointer to data that may be used by the read and write members. - isMaster - Indicates which process is the "master"; may have the values 1 (is the master), 0 (is not the master), or -1 (neither is designated as the master). The two processes must agree on which process is the master, or both must select -1 (neither is the master). Notes: A typical implementation of these functions will use the read and write calls on a pre-established file descriptor (fd) between the two leading processes. This will be needed only if the PMI server cannot access the KVS spaces of another job (this may happen, for example, if each mpiexec creates the KVS spaces for the processes that it manages). @*/ typedef struct PMI2_Connect_comm { int (*read)( void *buf, int maxlen, void *ctx ); int (*write)( const void *buf, int len, void *ctx ); void *ctx; int isMaster; } PMI2_Connect_comm_t; /*S MPID_Info - Structure of an MPID info Notes: There is no reference count because 'MPI_Info' values, unlike other MPI objects, may be changed after they are passed to a routine without changing the routine''s behavior. In other words, any routine that uses an 'MPI_Info' object must make a copy or otherwise act on any info value that it needs. A linked list is used because the typical 'MPI_Info' list will be short and a simple linked list is easy to implement and to maintain. 
Similarly, a single structure rather than separate header and element structures is defined for simplicity. No separate thread lock is provided because info routines are not performance critical; they may use the single critical section lock in the 'MPIR_Process' structure when they need a thread lock. This particular form of linked list (in particular, with this particular choice of the first two members) is used because it allows us to use the same routines to manage this list as are used to manage the list of free objects (in the file 'src/util/mem/handlemem.c'). In particular, if lock-free routines for updating a linked list are provided, they can be used for managing the 'MPID_Info' structure as well. The MPI standard requires that keys can be no less than 32 characters and no more than 255 characters. There is no mandated limit on the size of values. Module: Info-DS S*/ typedef struct MPID_Info { int handle; int pobj_mutex; int ref_count; struct MPID_Info *next; char *key; char *value; } MPID_Info; #define PMI2U_Info MPID_Info /*@ PMI2_Init - initialize the Process Manager Interface Output Parameter: + spawned - spawned flag . size - number of processes in the job . rank - rank of this process in the job - appnum - which executable is this on the mpiexec command line Return values: Returns 'MPI_SUCCESS' on success and an MPI error code on failure. Notes: Initialize PMI for this process group. The value of spawned indicates whether this process was created by 'PMI2_Spawn_multiple'. 'spawned' will be non-zero iff this process group has a parent. @*/ int PMI2_Init(int *spawned, int *size, int *rank, int *appnum); /*@ PMI2_Finalize - finalize the Process Manager Interface Return values: Returns 'MPI_SUCCESS' on success and an MPI error code on failure. Notes: Finalize PMI for this job. @*/ int PMI2_Finalize(void); /*@ PMI2_Initialized - check if PMI has been initialized Return values: Non-zero if PMI2_Init has been called successfully, zero otherwise.
@*/ int PMI2_Initialized(void); /*@ PMI2_Abort - abort the process group associated with this process Input Parameters: + flag - non-zero if all processes in this job should abort, zero otherwise - error_msg - error message to be printed Return values: If the abort succeeds this function will not return. Returns an MPI error code otherwise. @*/ int PMI2_Abort(int flag, const char msg[]); /*@ PMI2_Spawn - spawn a new set of processes Input Parameters: + count - count of commands . cmds - array of command strings . argcs - size of argv arrays for each command string . argvs - array of argv arrays for each command string . maxprocs - array of maximum processes to spawn for each command string . info_keyval_sizes - array giving the number of elements in each of the 'info_keyval_vectors' . info_keyval_vectors - array of keyval vector arrays . preput_keyval_size - Number of elements in 'preput_keyval_vector' . preput_keyval_vector - array of keyvals to be pre-put in the spawned keyval space - jobIdSize - size of the buffer provided in jobId Output Parameter: + jobId - job id of the spawned processes - errors - array of errors for each command Return values: Returns 'MPI_SUCCESS' on success and an MPI error code on failure. Notes: This function spawns a set of processes into a new job. The 'count' field refers to the size of the array parameters - 'cmd', 'argvs', 'maxprocs', 'info_keyval_sizes' and 'info_keyval_vectors'. The 'preput_keyval_size' refers to the size of the 'preput_keyval_vector' array. The 'preput_keyval_vector' contains keyval pairs that will be put in the keyval space of the newly created job before the processes are started. The 'maxprocs' array specifies the desired number of processes to create for each 'cmd' string. The actual number of processes may be less than the numbers specified in maxprocs. The acceptable number of processes spawned may be controlled by ``soft'' keyvals in the info arrays. 
The ``soft'' option is specified by mpiexec in the MPI-2 standard. Environment variables may be passed to the spawned processes through PMI implementation specific 'info_keyval' parameters. @*/ int PMI2_Job_Spawn(int count, const char * cmds[], int argcs[], const char ** argvs[], const int maxprocs[], const int info_keyval_sizes[], const struct MPID_Info *info_keyval_vectors[], int preput_keyval_size, const struct MPID_Info *preput_keyval_vector[], char jobId[], int jobIdSize, int errors[]); /*@ PMI2_Job_GetId - get job id of this job Input parameters: . jobid_size - size of buffer provided in jobid Output parameters: . jobid - the job id of this job Return values: Returns 'MPI_SUCCESS' on success and an MPI error code on failure. @*/ int PMI2_Job_GetId(char jobid[], int jobid_size); /*@ PMI2_Job_GetRank - get the rank of this process in the job Output parameters: . rank - the rank of this process Return values: Returns 'PMI2_SUCCESS' on success and a PMI error code on failure. @*/ int PMI2_Job_GetRank(int* rank); /*@ PMI2_Info_GetSize - get the number of processes on the node Output parameters: . size - the number of processes on the node Return values: Returns 'PMI2_SUCCESS' on success and a PMI error code on failure. @*/ int PMI2_Info_GetSize(int* size); /*@ PMI2_Job_Connect - connect to the parallel job with ID jobid Input parameters: . jobid - job id of the job to connect to Output parameters: . conn - connection structure used to establish communication with the remote job Return values: Returns 'MPI_SUCCESS' on success and an MPI error code on failure. Notes: This just "registers" the other parallel job as part of a parallel program, and is used in the PMI2_KVS_xxx routines (see below). This is not a collective call and establishes a connection between all processes that are connected to the calling processes (on the one side) and that are connected to the named jobId on the other side. Processes that are already connected may call this routine.
@*/ int PMI2_Job_Connect(const char jobid[], PMI2_Connect_comm_t *conn); /*@ PMI2_Job_Disconnect - disconnects from the job with ID jobid Input parameters: . jobid - job id of the job to disconnect from Return values: Returns 'MPI_SUCCESS' on success and an MPI error code on failure. @*/ int PMI2_Job_Disconnect(const char jobid[]); /*@ PMIX_Ring - execute ring exchange over processes in group Input Parameters: + value - input string - maxvalue - max size of input and output strings Output Parameters: + rank - returns caller's rank within ring . ranks - returns number of procs within ring . left - buffer to receive value provided by (rank - 1) % ranks - right - buffer to receive value provided by (rank + 1) % ranks Return values: Returns 'MPI_SUCCESS' on success and an MPI error code on failure. Notes: This function is collective, but not necessarily synchronous, across all processes in the process group to which the calling process belongs. All processes in the group must call this function, but a process may return before all processes have called the function. @*/ #define HAVE_PMIX_RING 1 /* so one can conditionally compile with this function */ int PMIX_Ring(const char value[], int *rank, int *ranks, char left[], char right[], int maxvalue); /*@ PMI2_KVS_Put - put a key/value pair in the keyval space for this job Input Parameters: + key - key - value - value Return values: Returns 'MPI_SUCCESS' on success and an MPI error code on failure. Notes: If multiple PMI2_KVS_Put calls are made with the same key between calls to PMI2_KVS_Fence, the behavior is undefined. That is, the value returned by PMI2_KVS_Get for that key after the PMI2_KVS_Fence is not defined. @*/ int PMI2_KVS_Put(const char key[], const char value[]); /*@ PMI2_KVS_Fence - commit all PMI2_KVS_Put calls made before this fence Return values: Returns 'MPI_SUCCESS' on success and an MPI error code on failure. Notes: This is a collective call across the job.
It has semantics that are similar to those for MPI_Win_fence and hence is most easily implemented as a barrier across all of the processes in the job. Specifically, all PMI2_KVS_Put operations performed by any process in the same job must be visible to all processes (by using PMI2_KVS_Get) after PMI2_KVS_Fence completes. However, a PMI implementation could make this a lazy operation by not waiting for all processes to enter their corresponding PMI2_KVS_Fence until some process issues a PMI2_KVS_Get. This might be appropriate for some wide-area implementations. @*/ int PMI2_KVS_Fence(void); /*@ PMI2_KVS_Get - returns the value associated with key in the key-value space associated with the job ID jobid Input Parameters: + jobid - the job id identifying the key-value space in which to look for key. If jobid is NULL, look in the key-value space of this job. . src_pmi_id - the pmi id of the process which put this keypair. This is just a hint to the server. PMI2_ID_NULL should be passed if no hint is provided. . key - key - maxvalue - size of the buffer provided in value Output Parameters: + value - value associated with key - vallen - length of the returned value, or, if the length is longer than maxvalue, the negative of the required length is returned Return values: Returns 'MPI_SUCCESS' on success and an MPI error code on failure. @*/ int PMI2_KVS_Get(const char *jobid, int src_pmi_id, const char key[], char value [], int maxvalue, int *vallen); /*@ PMI2_Info_GetNodeAttr - returns the value of the attribute associated with this node Input Parameters: + name - name of the node attribute . valuelen - size of the buffer provided in value - waitfor - if non-zero, the function will not return until the attribute is available Output Parameters: + value - value of the attribute - found - non-zero indicates that the attribute was found Return values: Returns 'MPI_SUCCESS' on success and an MPI error code on failure. 
Notes: This provides a way, when combined with PMI2_Info_PutNodeAttr, for processes on the same node to share information without requiring a more general barrier across the entire job. If waitfor is non-zero, the function will never return with found set to zero. Predefined attributes: + memPoolType - If the process manager allocated a shared memory pool for the MPI processes in this job and on this node, return the type of that pool. Types include sysv, anonmmap and ntshm. . memSYSVid - Return the SYSV memory segment id if the memory pool type is sysv. Returned as a string. . memAnonMMAPfd - Return the FD of the anonymous mmap segment. The FD is returned as a string. - memNTName - Return the name of the Windows NT shared memory segment, file mapping object backed by system paging file. Returned as a string. @*/ int PMI2_Info_GetNodeAttr(const char name[], char value[], int valuelen, int *found, int waitfor); /*@ PMI2_Info_GetNodeAttrIntArray - returns the value of the attribute associated with this node. The value must be an array of integers. Input Parameters: + name - name of the node attribute - arraylen - number of elements in array Output Parameters: + array - value of attribute . outlen - number of elements returned - found - non-zero if attribute was found Return values: Returns 'MPI_SUCCESS' on success and an MPI error code on failure. Notes: Notice that, unlike PMI2_Info_GetNodeAttr, this function does not have a waitfor parameter, and will return immediately with found=0 if the attribute was not found. Predefined array attribute names: + localRanksCount - Return the number of local ranks that will be returned by the key localRanks. . localRanks - Return the ranks in MPI_COMM_WORLD of the processes that are running on this node. - cartCoords - Return the Cartesian coordinates of this process in the underlying network topology. The coordinates are indexed from zero. Value only if the Job attribute for physTopology includes cartesian. 
@*/ int PMI2_Info_GetNodeAttrIntArray(const char name[], int array[], int arraylen, int *outlen, int *found); /*@ PMI2_Info_PutNodeAttr - stores the value of the named attribute associated with this node Input Parameters: + name - name of the node attribute - value - the value of the attribute Return values: Returns 'MPI_SUCCESS' on success and an MPI error code on failure. Notes: For example, it might be used to share segment ids with other processes on the same SMP node. @*/ int PMI2_Info_PutNodeAttr(const char name[], const char value[]); /*@ PMI2_Info_GetJobAttr - returns the value of the attribute associated with this job Input Parameters: + name - name of the job attribute - valuelen - size of the buffer provided in value Output Parameters: + value - value of the attribute - found - non-zero indicates that the attribute was found Return values: Returns 'MPI_SUCCESS' on success and an MPI error code on failure. @*/ int PMI2_Info_GetJobAttr(const char name[], char value[], int valuelen, int *found); /*@ PMI2_Info_GetJobAttrIntArray - returns the value of the attribute associated with this job. The value must be an array of integers. Input Parameters: + name - name of the job attribute - arraylen - number of elements in array Output Parameters: + array - value of attribute . outlen - number of elements returned - found - non-zero if attribute was found Return values: Returns 'MPI_SUCCESS' on success and an MPI error code on failure. Predefined array attribute names: + universeSize - The size of the "universe" (defined for the MPI attribute MPI_UNIVERSE_SIZE . hasNameServ - The value hasNameServ is true if the PMI2 environment supports the name service operations (publish, lookup, and unpublish). . physTopology - Return the topology of the underlying network. The valid topology types include cartesian, hierarchical, complete, kautz, hypercube; additional types may be added as necessary. 
If the type is hierarchical, then additional attributes may be queried to determine the details of the topology. For example, a typical cluster has a hierarchical physical topology, consisting of two levels of complete networks - the switched Ethernet or Infiniband and the SMP nodes. Other systems, such as IBM BlueGene, have one level that is cartesian (and in virtual node mode, have a single-level physical topology). . physTopologyLevels - Return a string describing the topology type for each level of the underlying network. Only valid if the physTopology is hierarchical. The value is a comma-separated list of physical topology types (except for hierarchical). The levels are ordered starting at the top, with the network closest to the processes last. The lower level networks may connect only a subset of processes. For example, for a cartesian mesh of SMPs, the value is cartesian,complete. All processes are connected by the cartesian part of this, but for each complete network, only the processes on the same node are connected. . cartDims - Return a string of comma-separated values describing the dimensions of the Cartesian topology. This must be consistent with the value of cartCoords that may be returned by PMI2_Info_GetNodeAttrIntArray. These job attributes are just a start, but they provide both an example of the sort of external data that is available through the PMI interface and how extensions can be added within the same API and wire protocol. For example, adding more complex network topologies requires only adding new keys, not new routines. . isHeterogeneous - The value isHeterogeneous is true if the processes belonging to the job are running on nodes with different underlying data models. @*/ int PMI2_Info_GetJobAttrIntArray(const char name[], int array[], int arraylen, int *outlen, int *found); /*@ PMI2_Nameserv_publish - publish a name Input parameters: + service_name - string representing the service being published . 
info_ptr - - port - string representing the port on which to contact the service Return values: Returns 'MPI_SUCCESS' on success and an MPI error code on failure. @*/ int PMI2_Nameserv_publish(const char service_name[], const struct MPID_Info *info_ptr, const char port[]); /*@ PMI2_Nameserv_lookup - lookup a service by name Input parameters: + service_name - string representing the service being published . info_ptr - - portLen - size of buffer provided in port Output parameters: . port - string representing the port on which to contact the service Return values: Returns 'MPI_SUCCESS' on success and an MPI error code on failure. @*/ int PMI2_Nameserv_lookup(const char service_name[], const struct MPID_Info *info_ptr, char port[], int portLen); /*@ PMI2_Nameserv_unpublish - unpublish a name Input parameters: + service_name - string representing the service being unpublished - info_ptr - Return values: Returns 'MPI_SUCCESS' on success and an MPI error code on failure. @*/ int PMI2_Nameserv_unpublish(const char service_name[], const struct MPID_Info *info_ptr); #if defined(__cplusplus) } #endif #endif /* PMI2_H_INCLUDED */ slurm-slurm-15-08-7-1/contribs/pmi2/testpmi2.c000066400000000000000000000102231265000126300207300ustar00rootroot00000000000000 /*****************************************************************************\ * testpmi2.c ***************************************************************************** * Copyright (C) 2014 SchedMD LLC * Written by David Bigagli * * This file is part of SLURM, a resource management program. * For details, see . * Please also read the included file: DISCLAIMER. * * SLURM is free software; you can redistribute it and/or modify it under * the terms of the GNU General Public License as published by the Free * Software Foundation; either version 2 of the License, or (at your option) * any later version. 
* * In addition, as a special exception, the copyright holders give permission * to link the code of portions of this program with the OpenSSL library under * certain conditions as described in each individual source file, and * distribute linked combinations including the two. You must obey the GNU * General Public License in all respects for all of the code used other than * OpenSSL. If you modify file(s) with this exception, you may extend this * exception to your version of the file(s), but you are not obligated to do * so. If you do not wish to do so, delete this exception statement from your * version. If you delete this exception statement from all source files in * the program, then also delete it here. * * SLURM is distributed in the hope that it will be useful, but WITHOUT ANY * WARRANTY; without even the implied warranty of MERCHANTABILITY or FITNESS * FOR A PARTICULAR PURPOSE. See the GNU General Public License for more * details. * * You should have received a copy of the GNU General Public License along * with SLURM; if not, write to the Free Software Foundation, Inc., * 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA. 
\*****************************************************************************/
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <sys/time.h>
#include <unistd.h>
#include <slurm/pmi2.h>

static char *mrand(int, int);

int main(int argc, char **argv)
{
	int rank;
	int size;
	int appnum;
	int spawned;
	int flag;
	int len;
	int i;
	struct timeval tv;
	struct timeval tv2;
	char jobid[128];
	char key[128];
	char val[128];
	char buf[128];

	{
		/* Disabled debugger hook: change the condition to
		 * (x == 1) to spin here until a debugger clears x. */
		int x = 1;
		while (x == 0) {
			sleep(2);
		}
	}
	gettimeofday(&tv, NULL);
	srand(tv.tv_sec);

	PMI2_Init(&spawned, &size, &rank, &appnum);
	PMI2_Job_GetId(jobid, sizeof(jobid));

	memset(val, 0, sizeof(val));
	PMI2_Info_GetJobAttr("mpi_reserved_ports", val, PMI2_MAX_ATTRVALUE, &flag);
	sprintf(key, "mpi_reserved_ports");
	PMI2_KVS_Put(key, val);

	memset(val, 0, sizeof(val));
	sprintf(buf, "PMI_netinfo_of_task");
	PMI2_Info_GetJobAttr(buf, val, PMI2_MAX_ATTRVALUE, &flag);
	sprintf(key, "%s", buf);
	PMI2_KVS_Put(key, val);

	memset(val, 0, sizeof(val));
	sprintf(key, "david@%d", rank);
	sprintf(val, "%s", mrand(97, 122));
	PMI2_KVS_Put(key, val);

	PMI2_KVS_Fence();

	for (i = 0; i < size; i++) {
		memset(val, 0, sizeof(val));
		sprintf(key, "PMI_netinfo_of_task");
		PMI2_KVS_Get(jobid, PMI2_ID_NULL, key, val, sizeof(val), &len);
		printf("rank: %d key:%s val:%s\n", rank, key, val);

		memset(val, 0, sizeof(val));
		sprintf(key, "david@%d", rank);
		PMI2_KVS_Get(jobid, PMI2_ID_NULL, key, val, sizeof(val), &len);
		printf("rank: %d key:%s val:%s\n", rank, key, val);

		memset(val, 0, sizeof(val));
		sprintf(key, "mpi_reserved_ports");
		PMI2_KVS_Get(jobid, PMI2_ID_NULL, key, val, sizeof(val), &len);
		printf("rank: %d key:%s val:%s\n", rank, key, val);
	}

	PMI2_Finalize();
	gettimeofday(&tv2, NULL);
	printf("%f\n", ((tv2.tv_sec - tv.tv_sec) * 1000.0
			+ (tv2.tv_usec - tv.tv_usec) / 1000.0));

	return 0;
}

/* Generate a random string of characters
 * between min and Max.
 */
static char *mrand(int m, int M)
{
	int i;
	static char buf[64];

	memset(buf, 0, sizeof(buf));
	for (i = 0; i < 16; i++)
		buf[i] = rand() % (M - m + 1) + m;
	return buf;
}
slurm-slurm-15-08-7-1/contribs/pmi2/testpmi2_put.c000066400000000000000000000017771265000126300216340ustar00rootroot00000000000000#include <stdio.h>
#include <stdlib.h>
#include <slurm/pmi2.h>

int main(int argc, char **argv)
{
	int spawned, size, rank, appnum;
	int ret;
	char jobid[50];
	int msg = 0;
	char val[20] = "0\n";
	int len = 0;

	ret = PMI2_Init(&spawned, &size, &rank, &appnum);
	if (ret != PMI2_SUCCESS) {
		perror("PMI2_Init failed");
		return 1;
	}
	PMI2_Job_GetId(jobid, sizeof(jobid));
	printf("spawned=%d, size=%d, rank=%d, appnum=%d, jobid=%s\n",
	       spawned, size, rank, appnum, jobid);
	fflush(stdout);

	PMI2_KVS_Fence();

	/* broadcast msg=42 from proc 0 */
	if (rank == 0) {
		msg = 42;
		snprintf(val, sizeof(val), "%d\n", msg);
		PMI2_KVS_Put("msg", val);
		printf("%d> send %d\n", rank, msg);
		fflush(stdout);
	}

	PMI2_KVS_Fence();

	PMI2_KVS_Get(jobid, PMI2_ID_NULL, "msg", val, sizeof(val), &len);
	msg = atoi(val);
	printf("%d> got %d\n", rank, msg);
	fflush(stdout);

	PMI2_Finalize();
	return 0;
}
slurm-slurm-15-08-7-1/contribs/pmi2/testpmixring.c000066400000000000000000000025021265000126300217190ustar00rootroot00000000000000
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <sys/time.h>
//#include <sys/types.h>
#include <unistd.h>
#include <slurm/pmi2.h>

/*
 * To build:
 *
 * gcc -g -O0 -o testpmixring testpmixring.c -I/include -Wl,-rpath,/lib -L/lib -lpmi2
 *
 * To run:
 *
 * srun -n8 -m block ./testpmixring
 * srun -n8 -m cyclic ./testpmixring
 */
int main(int argc, char **argv)
{
	int spawned, size, rank, appnum;
	struct timeval tv, tv2;
	int ring_rank, ring_size;
	char jobid[128];
	char val[128];
	char buf[128];
	char left[128];
	char right[128];

	{
		/* Debugger attach hook: blocks here until a debugger
		 * attaches and clears x. */
		int x = 1;
		while (x) {
			fprintf(stderr, "attachme %d\n", getpid());
			sleep(2);
		}
	}
	gettimeofday(&tv, NULL);

	PMI2_Init(&spawned, &size, &rank, &appnum);
	PMI2_Job_GetId(jobid, sizeof(jobid));

	/* test PMIX_Ring */
	snprintf(val, sizeof(val), "pmi_rank=%d", rank);
	PMIX_Ring(val, &ring_rank,
		  &ring_size, left, right, 128);
	printf("pmi_rank:%d ring_rank:%d ring_size:%d left:%s mine:%s right:%s\n",
	       rank, ring_rank, ring_size, left, val, right);

	PMI2_Finalize();
	gettimeofday(&tv2, NULL);
	printf("%f\n", ((tv2.tv_sec - tv.tv_sec) * 1000.0
			+ (tv2.tv_usec - tv.tv_usec) / 1000.0));

	return 0;
}
slurm-slurm-15-08-7-1/contribs/ptrace.patch000066400000000000000000000102241265000126300204460ustar00rootroot00000000000000
The Linux kernels must implement ptrace semantics required by the TotalView debugger. In order to initiate a parallel job under debugger control, a resource manager or job launch utility must be able to start all tasks in a stopped state, notify TotalView, and then allow TotalView debugger servers to attach to all tasks. This functionality requires the ability to:

* Detach from a traced process and leave the process stopped
* Attach to a stopped process

Most newer versions of the Linux kernel support this functionality. For some older Linux kernels, both of the above are impossible without the following patch by Vic Zandy (see his initial posting and follow up). Further discussion of Vic's patch can be found in this thread. The main objections to the patch seemed to be:

* It causes a behavior change for ptrace()
* No apparent agreement on the "right" thing to do when attaching to a stopped process.

[Any Etnus or Quadrics references to patches available?]

On a vanilla 2.6.4 kernel, the ptrace() test is able to accomplish part 1 of the test (detaching from a process and leaving the process stopped), but part 2 (attaching to a stopped process) appears to still be failing. It is possible to work around a failure of step 2. Technically, it is not the attach that hangs when doing a ptrace() of a stopped process, but the subsequent call to waitpid(). Since the process was already stopped, the waitpid() never returns. Perhaps Etnus has implemented a workaround for this behavior.
Testing: The following piece of test code can be used to verify that the proper functionality exists in the current kernel: testptrace.c. This test should succeed when run without any arguments.

------------------------------------------------------------------------

Linux Kernel: Re: [PATCH] ptrace on stopped processes (2.4)
From: Vic Zandy@cs.wisc.edu
Date: Mar 18 2002

This is a repost of the ptrace patch to 2.4 kernels we've discussed in recent months. Since the last post, I have updated it to linux 2.4.18 (no changes) and tested it with subterfuge and uml. Subterfuge seems to be unaffected. UML needs minor modifications; I've discussed them with Jeff Dike and (I believe) he is happy. I believe I have addressed everyone's concerns.

The patch fixes these two bugs:

1. gdb and other tools cannot attach to a stopped process. The wait that follows the PTRACE_ATTACH will block indefinitely.

2. It is not possible to use PTRACE_DETACH to leave a process stopped, because ptrace ignores SIGSTOPs sent by the tracing process.

Vic

--- /home/vic/p/linux-2.4.18.orig/kernel/ptrace.c	Wed Mar 13 13:14:54 2002
+++ /home/vic/p/linux-2.4.18/kernel/ptrace.c	Mon Mar 18 21:58:11 2002
@@ -54,6 +54,7 @@
 int ptrace_attach(struct task_struct *task)
 {
+	int stopped;
 	task_lock(task);
 	if (task->pid <= 1)
 		goto bad;
@@ -90,7 +91,13 @@
 	}
 	write_unlock_irq(&tasklist_lock);

+	stopped = (task->state == TASK_STOPPED);
 	send_sig(SIGSTOP, task, 1);
+	/* If it was stopped when we got here,
+	   clear the pending SIGSTOP. */
+	if (stopped)
+		wake_up_process(task);
+
 	return 0;

 bad:
--- /home/vic/p/linux-2.4.18.orig/arch/i386/kernel/signal.c	Wed Mar 13 13:16:44 2002
+++ /home/vic/p/linux-2.4.18/arch/i386/kernel/signal.c	Wed Mar 13 16:31:38 2002
@@ -620,9 +620,9 @@
 			continue;
 		current->exit_code = 0;

-		/* The debugger continued. Ignore SIGSTOP. */
-		if (signr == SIGSTOP)
-			continue;
+		/* The debugger continued.
*/
+		if (signr == SIGSTOP && current->ptrace & PT_PTRACED)
+			continue; /* ignore SIGSTOP */

 		/* Update the siginfo structure. Is this good? */
 		if (signr != info.si_signo) {
-

------------------------------------------------------------------------
slurm-slurm-15-08-7-1/contribs/sgather/000077500000000000000000000000001265000126300176055ustar00rootroot00000000000000slurm-slurm-15-08-7-1/contribs/sgather/Makefile.am000066400000000000000000000010731265000126300216420ustar00rootroot00000000000000# Makefile for sgather
AUTOMAKE_OPTIONS = foreign

man1_MANS = sgather.1
bin_SCRIPTS = sgather

install-binSCRIPTS: $(bin_SCRIPTS)
	@$(NORMAL_INSTALL)
	test -z "$(DESTDIR)$(bindir)" || $(mkdir_p) "$(DESTDIR)$(bindir)"
	@list='$(bin_SCRIPTS)'; for p in $$list; do \
	  cp $(top_srcdir)/contribs/sgather/$$p $(DESTDIR)$(bindir)/$$p; \
	  chmod 755 $(DESTDIR)$(bindir)/$$p;\
	done

uninstall-binSCRIPTS:
	@$(NORMAL_UNINSTALL)
	@list='$(bin_SCRIPTS)'; for p in $$list; do \
	  echo " rm -f '$(DESTDIR)$(bindir)/$$p'"; \
	  rm -f "$(DESTDIR)$(bindir)/$$p"; \
	done

clean:
slurm-slurm-15-08-7-1/contribs/sgather/Makefile.in000066400000000000000000000511441265000126300216570ustar00rootroot00000000000000# Makefile.in generated by automake 1.14.1 from Makefile.am. # @configure_input@ # Copyright (C) 1994-2013 Free Software Foundation, Inc. # This Makefile.in is free software; the Free Software Foundation # gives unlimited permission to copy and/or distribute it, # with or without modifications, as long as this notice is preserved. # This program is distributed in the hope that it will be useful, # but WITHOUT ANY WARRANTY, to the extent permitted by law; without # even the implied warranty of MERCHANTABILITY or FITNESS FOR A # PARTICULAR PURPOSE. @SET_MAKE@ VPATH = @srcdir@ am__is_gnu_make = test -n '$(MAKEFILE_LIST)' && test -n '$(MAKELEVEL)' am__make_running_with_option = \ case $${target_option-} in \ ?)
;; \ *) echo "am__make_running_with_option: internal error: invalid" \ "target option '$${target_option-}' specified" >&2; \ exit 1;; \ esac; \ has_opt=no; \ sane_makeflags=$$MAKEFLAGS; \ if $(am__is_gnu_make); then \ sane_makeflags=$$MFLAGS; \ else \ case $$MAKEFLAGS in \ *\\[\ \ ]*) \ bs=\\; \ sane_makeflags=`printf '%s\n' "$$MAKEFLAGS" \ | sed "s/$$bs$$bs[$$bs $$bs ]*//g"`;; \ esac; \ fi; \ skip_next=no; \ strip_trailopt () \ { \ flg=`printf '%s\n' "$$flg" | sed "s/$$1.*$$//"`; \ }; \ for flg in $$sane_makeflags; do \ test $$skip_next = yes && { skip_next=no; continue; }; \ case $$flg in \ *=*|--*) continue;; \ -*I) strip_trailopt 'I'; skip_next=yes;; \ -*I?*) strip_trailopt 'I';; \ -*O) strip_trailopt 'O'; skip_next=yes;; \ -*O?*) strip_trailopt 'O';; \ -*l) strip_trailopt 'l'; skip_next=yes;; \ -*l?*) strip_trailopt 'l';; \ -[dEDm]) skip_next=yes;; \ -[JT]) skip_next=yes;; \ esac; \ case $$flg in \ *$$target_option*) has_opt=yes; break;; \ esac; \ done; \ test $$has_opt = yes am__make_dryrun = (target_option=n; $(am__make_running_with_option)) am__make_keepgoing = (target_option=k; $(am__make_running_with_option)) pkgdatadir = $(datadir)/@PACKAGE@ pkgincludedir = $(includedir)/@PACKAGE@ pkglibdir = $(libdir)/@PACKAGE@ pkglibexecdir = $(libexecdir)/@PACKAGE@ am__cd = CDPATH="$${ZSH_VERSION+.}$(PATH_SEPARATOR)" && cd install_sh_DATA = $(install_sh) -c -m 644 install_sh_PROGRAM = $(install_sh) -c install_sh_SCRIPT = $(install_sh) -c INSTALL_HEADER = $(INSTALL_DATA) transform = $(program_transform_name) NORMAL_INSTALL = : PRE_INSTALL = : POST_INSTALL = : NORMAL_UNINSTALL = : PRE_UNINSTALL = : POST_UNINSTALL = : build_triplet = @build@ host_triplet = @host@ target_triplet = @target@ subdir = contribs/sgather DIST_COMMON = $(srcdir)/Makefile.in $(srcdir)/Makefile.am ACLOCAL_M4 = $(top_srcdir)/aclocal.m4 am__aclocal_m4_deps = $(top_srcdir)/auxdir/ax_lib_hdf5.m4 \ $(top_srcdir)/auxdir/ax_pthread.m4 \ $(top_srcdir)/auxdir/libtool.m4 \ $(top_srcdir)/auxdir/ltoptions.m4 
\ $(top_srcdir)/auxdir/ltsugar.m4 \ $(top_srcdir)/auxdir/ltversion.m4 \ $(top_srcdir)/auxdir/lt~obsolete.m4 \ $(top_srcdir)/auxdir/slurm.m4 \ $(top_srcdir)/auxdir/x_ac__system_configuration.m4 \ $(top_srcdir)/auxdir/x_ac_affinity.m4 \ $(top_srcdir)/auxdir/x_ac_aix.m4 \ $(top_srcdir)/auxdir/x_ac_blcr.m4 \ $(top_srcdir)/auxdir/x_ac_bluegene.m4 \ $(top_srcdir)/auxdir/x_ac_cflags.m4 \ $(top_srcdir)/auxdir/x_ac_cray.m4 \ $(top_srcdir)/auxdir/x_ac_curl.m4 \ $(top_srcdir)/auxdir/x_ac_databases.m4 \ $(top_srcdir)/auxdir/x_ac_debug.m4 \ $(top_srcdir)/auxdir/x_ac_dlfcn.m4 \ $(top_srcdir)/auxdir/x_ac_env.m4 \ $(top_srcdir)/auxdir/x_ac_freeipmi.m4 \ $(top_srcdir)/auxdir/x_ac_gpl_licensed.m4 \ $(top_srcdir)/auxdir/x_ac_hwloc.m4 \ $(top_srcdir)/auxdir/x_ac_iso.m4 \ $(top_srcdir)/auxdir/x_ac_json.m4 \ $(top_srcdir)/auxdir/x_ac_lua.m4 \ $(top_srcdir)/auxdir/x_ac_man2html.m4 \ $(top_srcdir)/auxdir/x_ac_munge.m4 \ $(top_srcdir)/auxdir/x_ac_ncurses.m4 \ $(top_srcdir)/auxdir/x_ac_netloc.m4 \ $(top_srcdir)/auxdir/x_ac_nrt.m4 \ $(top_srcdir)/auxdir/x_ac_ofed.m4 \ $(top_srcdir)/auxdir/x_ac_pam.m4 \ $(top_srcdir)/auxdir/x_ac_printf_null.m4 \ $(top_srcdir)/auxdir/x_ac_ptrace.m4 \ $(top_srcdir)/auxdir/x_ac_readline.m4 \ $(top_srcdir)/auxdir/x_ac_rrdtool.m4 \ $(top_srcdir)/auxdir/x_ac_setpgrp.m4 \ $(top_srcdir)/auxdir/x_ac_setproctitle.m4 \ $(top_srcdir)/auxdir/x_ac_sgi_job.m4 \ $(top_srcdir)/auxdir/x_ac_slurm_ssl.m4 \ $(top_srcdir)/auxdir/x_ac_sun_const.m4 \ $(top_srcdir)/configure.ac am__configure_deps = $(am__aclocal_m4_deps) $(CONFIGURE_DEPENDENCIES) \ $(ACLOCAL_M4) mkinstalldirs = $(install_sh) -d CONFIG_HEADER = $(top_builddir)/config.h $(top_builddir)/slurm/slurm.h CONFIG_CLEAN_FILES = CONFIG_CLEAN_VPATH_FILES = am__vpath_adj_setup = srcdirstrip=`echo "$(srcdir)" | sed 's|.|.|g'`; am__vpath_adj = case $$p in \ $(srcdir)/*) f=`echo "$$p" | sed "s|^$$srcdirstrip/||"`;; \ *) f=$$p;; \ esac; am__strip_dir = f=`echo $$p | sed -e 's|^.*/||'`; am__install_max = 40 am__nobase_strip_setup = \ 
srcdirstrip=`echo "$(srcdir)" | sed 's/[].[^$$\\*|]/\\\\&/g'` am__nobase_strip = \ for p in $$list; do echo "$$p"; done | sed -e "s|$$srcdirstrip/||" am__nobase_list = $(am__nobase_strip_setup); \ for p in $$list; do echo "$$p $$p"; done | \ sed "s| $$srcdirstrip/| |;"' / .*\//!s/ .*/ ./; s,\( .*\)/[^/]*$$,\1,' | \ $(AWK) 'BEGIN { files["."] = "" } { files[$$2] = files[$$2] " " $$1; \ if (++n[$$2] == $(am__install_max)) \ { print $$2, files[$$2]; n[$$2] = 0; files[$$2] = "" } } \ END { for (dir in files) print dir, files[dir] }' am__base_list = \ sed '$$!N;$$!N;$$!N;$$!N;$$!N;$$!N;$$!N;s/\n/ /g' | \ sed '$$!N;$$!N;$$!N;$$!N;s/\n/ /g' am__uninstall_files_from_dir = { \ test -z "$$files" \ || { test ! -d "$$dir" && test ! -f "$$dir" && test ! -r "$$dir"; } \ || { echo " ( cd '$$dir' && rm -f" $$files ")"; \ $(am__cd) "$$dir" && rm -f $$files; }; \ } am__installdirs = "$(DESTDIR)$(bindir)" "$(DESTDIR)$(man1dir)" SCRIPTS = $(bin_SCRIPTS) AM_V_P = $(am__v_P_@AM_V@) am__v_P_ = $(am__v_P_@AM_DEFAULT_V@) am__v_P_0 = false am__v_P_1 = : AM_V_GEN = $(am__v_GEN_@AM_V@) am__v_GEN_ = $(am__v_GEN_@AM_DEFAULT_V@) am__v_GEN_0 = @echo " GEN " $@; am__v_GEN_1 = AM_V_at = $(am__v_at_@AM_V@) am__v_at_ = $(am__v_at_@AM_DEFAULT_V@) am__v_at_0 = @ am__v_at_1 = SOURCES = DIST_SOURCES = am__can_run_installinfo = \ case $$AM_UPDATE_INFO_DIR in \ n|no|NO) false;; \ *) (install-info --version) >/dev/null 2>&1;; \ esac man1dir = $(mandir)/man1 NROFF = nroff MANS = $(man1_MANS) am__tagged_files = $(HEADERS) $(SOURCES) $(TAGS_FILES) $(LISP) DISTFILES = $(DIST_COMMON) $(DIST_SOURCES) $(TEXINFOS) $(EXTRA_DIST) ACLOCAL = @ACLOCAL@ AMTAR = @AMTAR@ AM_DEFAULT_VERBOSITY = @AM_DEFAULT_VERBOSITY@ AR = @AR@ AUTHD_CFLAGS = @AUTHD_CFLAGS@ AUTHD_LIBS = @AUTHD_LIBS@ AUTOCONF = @AUTOCONF@ AUTOHEADER = @AUTOHEADER@ AUTOMAKE = @AUTOMAKE@ AWK = @AWK@ BGL_LOADED = @BGL_LOADED@ BGQ_LOADED = @BGQ_LOADED@ BG_INCLUDES = @BG_INCLUDES@ BG_LDFLAGS = @BG_LDFLAGS@ BG_L_P_LOADED = @BG_L_P_LOADED@ BLCR_CPPFLAGS = 
@BLCR_CPPFLAGS@ BLCR_HOME = @BLCR_HOME@ BLCR_LDFLAGS = @BLCR_LDFLAGS@ BLCR_LIBS = @BLCR_LIBS@ BLUEGENE_LOADED = @BLUEGENE_LOADED@ CC = @CC@ CCDEPMODE = @CCDEPMODE@ CFLAGS = @CFLAGS@ CHECK_CFLAGS = @CHECK_CFLAGS@ CHECK_LIBS = @CHECK_LIBS@ CMD_LDFLAGS = @CMD_LDFLAGS@ CPP = @CPP@ CPPFLAGS = @CPPFLAGS@ CRAY_JOB_CPPFLAGS = @CRAY_JOB_CPPFLAGS@ CRAY_JOB_LDFLAGS = @CRAY_JOB_LDFLAGS@ CRAY_SELECT_CPPFLAGS = @CRAY_SELECT_CPPFLAGS@ CRAY_SELECT_LDFLAGS = @CRAY_SELECT_LDFLAGS@ CRAY_SWITCH_CPPFLAGS = @CRAY_SWITCH_CPPFLAGS@ CRAY_SWITCH_LDFLAGS = @CRAY_SWITCH_LDFLAGS@ CRAY_TASK_CPPFLAGS = @CRAY_TASK_CPPFLAGS@ CRAY_TASK_LDFLAGS = @CRAY_TASK_LDFLAGS@ CXX = @CXX@ CXXCPP = @CXXCPP@ CXXDEPMODE = @CXXDEPMODE@ CXXFLAGS = @CXXFLAGS@ CYGPATH_W = @CYGPATH_W@ DATAWARP_CPPFLAGS = @DATAWARP_CPPFLAGS@ DATAWARP_LDFLAGS = @DATAWARP_LDFLAGS@ DEFS = @DEFS@ DEPDIR = @DEPDIR@ DLLTOOL = @DLLTOOL@ DL_LIBS = @DL_LIBS@ DSYMUTIL = @DSYMUTIL@ DUMPBIN = @DUMPBIN@ ECHO_C = @ECHO_C@ ECHO_N = @ECHO_N@ ECHO_T = @ECHO_T@ EGREP = @EGREP@ EXEEXT = @EXEEXT@ FGREP = @FGREP@ FREEIPMI_CPPFLAGS = @FREEIPMI_CPPFLAGS@ FREEIPMI_LDFLAGS = @FREEIPMI_LDFLAGS@ FREEIPMI_LIBS = @FREEIPMI_LIBS@ GLIB_CFLAGS = @GLIB_CFLAGS@ GLIB_COMPILE_RESOURCES = @GLIB_COMPILE_RESOURCES@ GLIB_GENMARSHAL = @GLIB_GENMARSHAL@ GLIB_LIBS = @GLIB_LIBS@ GLIB_MKENUMS = @GLIB_MKENUMS@ GOBJECT_QUERY = @GOBJECT_QUERY@ GREP = @GREP@ GTK_CFLAGS = @GTK_CFLAGS@ GTK_LIBS = @GTK_LIBS@ H5CC = @H5CC@ H5FC = @H5FC@ HAVEMYSQLCONFIG = @HAVEMYSQLCONFIG@ HAVE_AIX = @HAVE_AIX@ HAVE_MAN2HTML = @HAVE_MAN2HTML@ HAVE_NRT = @HAVE_NRT@ HAVE_OPENSSL = @HAVE_OPENSSL@ HAVE_SOME_CURSES = @HAVE_SOME_CURSES@ HDF5_CC = @HDF5_CC@ HDF5_CFLAGS = @HDF5_CFLAGS@ HDF5_CPPFLAGS = @HDF5_CPPFLAGS@ HDF5_FC = @HDF5_FC@ HDF5_FFLAGS = @HDF5_FFLAGS@ HDF5_FLIBS = @HDF5_FLIBS@ HDF5_LDFLAGS = @HDF5_LDFLAGS@ HDF5_LIBS = @HDF5_LIBS@ HDF5_VERSION = @HDF5_VERSION@ HWLOC_CPPFLAGS = @HWLOC_CPPFLAGS@ HWLOC_LDFLAGS = @HWLOC_LDFLAGS@ HWLOC_LIBS = @HWLOC_LIBS@ INSTALL = @INSTALL@ INSTALL_DATA = @INSTALL_DATA@ 
INSTALL_PROGRAM = @INSTALL_PROGRAM@ INSTALL_SCRIPT = @INSTALL_SCRIPT@ INSTALL_STRIP_PROGRAM = @INSTALL_STRIP_PROGRAM@ JSON_CPPFLAGS = @JSON_CPPFLAGS@ JSON_LDFLAGS = @JSON_LDFLAGS@ LD = @LD@ LDFLAGS = @LDFLAGS@ LIBCURL = @LIBCURL@ LIBCURL_CPPFLAGS = @LIBCURL_CPPFLAGS@ LIBOBJS = @LIBOBJS@ LIBS = @LIBS@ LIBTOOL = @LIBTOOL@ LIB_LDFLAGS = @LIB_LDFLAGS@ LIPO = @LIPO@ LN_S = @LN_S@ LTLIBOBJS = @LTLIBOBJS@ MAINT = @MAINT@ MAKEINFO = @MAKEINFO@ MANIFEST_TOOL = @MANIFEST_TOOL@ MKDIR_P = @MKDIR_P@ MUNGE_CPPFLAGS = @MUNGE_CPPFLAGS@ MUNGE_DIR = @MUNGE_DIR@ MUNGE_LDFLAGS = @MUNGE_LDFLAGS@ MUNGE_LIBS = @MUNGE_LIBS@ MYSQL_CFLAGS = @MYSQL_CFLAGS@ MYSQL_LIBS = @MYSQL_LIBS@ NCURSES = @NCURSES@ NETLOC_CPPFLAGS = @NETLOC_CPPFLAGS@ NETLOC_LDFLAGS = @NETLOC_LDFLAGS@ NETLOC_LIBS = @NETLOC_LIBS@ NM = @NM@ NMEDIT = @NMEDIT@ NRT_CPPFLAGS = @NRT_CPPFLAGS@ NUMA_LIBS = @NUMA_LIBS@ OBJDUMP = @OBJDUMP@ OBJEXT = @OBJEXT@ OFED_CPPFLAGS = @OFED_CPPFLAGS@ OFED_LDFLAGS = @OFED_LDFLAGS@ OFED_LIBS = @OFED_LIBS@ OTOOL = @OTOOL@ OTOOL64 = @OTOOL64@ PACKAGE = @PACKAGE@ PACKAGE_BUGREPORT = @PACKAGE_BUGREPORT@ PACKAGE_NAME = @PACKAGE_NAME@ PACKAGE_STRING = @PACKAGE_STRING@ PACKAGE_TARNAME = @PACKAGE_TARNAME@ PACKAGE_URL = @PACKAGE_URL@ PACKAGE_VERSION = @PACKAGE_VERSION@ PAM_DIR = @PAM_DIR@ PAM_LIBS = @PAM_LIBS@ PATH_SEPARATOR = @PATH_SEPARATOR@ PKG_CONFIG = @PKG_CONFIG@ PKG_CONFIG_LIBDIR = @PKG_CONFIG_LIBDIR@ PKG_CONFIG_PATH = @PKG_CONFIG_PATH@ PROCTRACKDIR = @PROCTRACKDIR@ PROJECT = @PROJECT@ PTHREAD_CC = @PTHREAD_CC@ PTHREAD_CFLAGS = @PTHREAD_CFLAGS@ PTHREAD_LIBS = @PTHREAD_LIBS@ RANLIB = @RANLIB@ READLINE_LIBS = @READLINE_LIBS@ REAL_BGQ_LOADED = @REAL_BGQ_LOADED@ REAL_BG_L_P_LOADED = @REAL_BG_L_P_LOADED@ RELEASE = @RELEASE@ RRDTOOL_CPPFLAGS = @RRDTOOL_CPPFLAGS@ RRDTOOL_LDFLAGS = @RRDTOOL_LDFLAGS@ RRDTOOL_LIBS = @RRDTOOL_LIBS@ RUNJOB_LDFLAGS = @RUNJOB_LDFLAGS@ SED = @SED@ SEMAPHORE_LIBS = @SEMAPHORE_LIBS@ SEMAPHORE_SOURCES = @SEMAPHORE_SOURCES@ SET_MAKE = @SET_MAKE@ SHELL = @SHELL@ SLEEP_CMD = @SLEEP_CMD@ 
SLURMCTLD_PORT = @SLURMCTLD_PORT@ SLURMCTLD_PORT_COUNT = @SLURMCTLD_PORT_COUNT@ SLURMDBD_PORT = @SLURMDBD_PORT@ SLURMD_PORT = @SLURMD_PORT@ SLURM_API_AGE = @SLURM_API_AGE@ SLURM_API_CURRENT = @SLURM_API_CURRENT@ SLURM_API_MAJOR = @SLURM_API_MAJOR@ SLURM_API_REVISION = @SLURM_API_REVISION@ SLURM_API_VERSION = @SLURM_API_VERSION@ SLURM_MAJOR = @SLURM_MAJOR@ SLURM_MICRO = @SLURM_MICRO@ SLURM_MINOR = @SLURM_MINOR@ SLURM_PREFIX = @SLURM_PREFIX@ SLURM_VERSION_NUMBER = @SLURM_VERSION_NUMBER@ SLURM_VERSION_STRING = @SLURM_VERSION_STRING@ SO_LDFLAGS = @SO_LDFLAGS@ SSL_CPPFLAGS = @SSL_CPPFLAGS@ SSL_LDFLAGS = @SSL_LDFLAGS@ SSL_LIBS = @SSL_LIBS@ STRIP = @STRIP@ SUCMD = @SUCMD@ UTIL_LIBS = @UTIL_LIBS@ VERSION = @VERSION@ _libcurl_config = @_libcurl_config@ abs_builddir = @abs_builddir@ abs_srcdir = @abs_srcdir@ abs_top_builddir = @abs_top_builddir@ abs_top_srcdir = @abs_top_srcdir@ ac_ct_AR = @ac_ct_AR@ ac_ct_CC = @ac_ct_CC@ ac_ct_CXX = @ac_ct_CXX@ ac_ct_DUMPBIN = @ac_ct_DUMPBIN@ ac_have_man2html = @ac_have_man2html@ am__include = @am__include@ am__leading_dot = @am__leading_dot@ am__quote = @am__quote@ am__tar = @am__tar@ am__untar = @am__untar@ ax_pthread_config = @ax_pthread_config@ bindir = @bindir@ build = @build@ build_alias = @build_alias@ build_cpu = @build_cpu@ build_os = @build_os@ build_vendor = @build_vendor@ builddir = @builddir@ datadir = @datadir@ datarootdir = @datarootdir@ docdir = @docdir@ dvidir = @dvidir@ exec_prefix = @exec_prefix@ host = @host@ host_alias = @host_alias@ host_cpu = @host_cpu@ host_os = @host_os@ host_vendor = @host_vendor@ htmldir = @htmldir@ includedir = @includedir@ infodir = @infodir@ install_sh = @install_sh@ libdir = @libdir@ libexecdir = @libexecdir@ localedir = @localedir@ localstatedir = @localstatedir@ lua_CFLAGS = @lua_CFLAGS@ lua_LIBS = @lua_LIBS@ mandir = @mandir@ mkdir_p = @mkdir_p@ oldincludedir = @oldincludedir@ pdfdir = @pdfdir@ prefix = @prefix@ program_transform_name = @program_transform_name@ psdir = @psdir@ sbindir = 
@sbindir@ sharedstatedir = @sharedstatedir@ srcdir = @srcdir@ sysconfdir = @sysconfdir@ target = @target@ target_alias = @target_alias@ target_cpu = @target_cpu@ target_os = @target_os@ target_vendor = @target_vendor@ top_build_prefix = @top_build_prefix@ top_builddir = @top_builddir@ top_srcdir = @top_srcdir@ # Makefile for sgather AUTOMAKE_OPTIONS = foreign man1_MANS = sgather.1 bin_SCRIPTS = sgather all: all-am .SUFFIXES: $(srcdir)/Makefile.in: @MAINTAINER_MODE_TRUE@ $(srcdir)/Makefile.am $(am__configure_deps) @for dep in $?; do \ case '$(am__configure_deps)' in \ *$$dep*) \ ( cd $(top_builddir) && $(MAKE) $(AM_MAKEFLAGS) am--refresh ) \ && { if test -f $@; then exit 0; else break; fi; }; \ exit 1;; \ esac; \ done; \ echo ' cd $(top_srcdir) && $(AUTOMAKE) --foreign contribs/sgather/Makefile'; \ $(am__cd) $(top_srcdir) && \ $(AUTOMAKE) --foreign contribs/sgather/Makefile .PRECIOUS: Makefile Makefile: $(srcdir)/Makefile.in $(top_builddir)/config.status @case '$?' in \ *config.status*) \ cd $(top_builddir) && $(MAKE) $(AM_MAKEFLAGS) am--refresh;; \ *) \ echo ' cd $(top_builddir) && $(SHELL) ./config.status $(subdir)/$@ $(am__depfiles_maybe)'; \ cd $(top_builddir) && $(SHELL) ./config.status $(subdir)/$@ $(am__depfiles_maybe);; \ esac; $(top_builddir)/config.status: $(top_srcdir)/configure $(CONFIG_STATUS_DEPENDENCIES) cd $(top_builddir) && $(MAKE) $(AM_MAKEFLAGS) am--refresh $(top_srcdir)/configure: @MAINTAINER_MODE_TRUE@ $(am__configure_deps) cd $(top_builddir) && $(MAKE) $(AM_MAKEFLAGS) am--refresh $(ACLOCAL_M4): @MAINTAINER_MODE_TRUE@ $(am__aclocal_m4_deps) cd $(top_builddir) && $(MAKE) $(AM_MAKEFLAGS) am--refresh $(am__aclocal_m4_deps): mostlyclean-libtool: -rm -f *.lo clean-libtool: -rm -rf .libs _libs install-man1: $(man1_MANS) @$(NORMAL_INSTALL) @list1='$(man1_MANS)'; \ list2=''; \ test -n "$(man1dir)" \ && test -n "`echo $$list1$$list2`" \ || exit 0; \ echo " $(MKDIR_P) '$(DESTDIR)$(man1dir)'"; \ $(MKDIR_P) "$(DESTDIR)$(man1dir)" || exit 1; \ { for i in 
$$list1; do echo "$$i"; done; \ if test -n "$$list2"; then \ for i in $$list2; do echo "$$i"; done \ | sed -n '/\.1[a-z]*$$/p'; \ fi; \ } | while read p; do \ if test -f $$p; then d=; else d="$(srcdir)/"; fi; \ echo "$$d$$p"; echo "$$p"; \ done | \ sed -e 'n;s,.*/,,;p;h;s,.*\.,,;s,^[^1][0-9a-z]*$$,1,;x' \ -e 's,\.[0-9a-z]*$$,,;$(transform);G;s,\n,.,' | \ sed 'N;N;s,\n, ,g' | { \ list=; while read file base inst; do \ if test "$$base" = "$$inst"; then list="$$list $$file"; else \ echo " $(INSTALL_DATA) '$$file' '$(DESTDIR)$(man1dir)/$$inst'"; \ $(INSTALL_DATA) "$$file" "$(DESTDIR)$(man1dir)/$$inst" || exit $$?; \ fi; \ done; \ for i in $$list; do echo "$$i"; done | $(am__base_list) | \ while read files; do \ test -z "$$files" || { \ echo " $(INSTALL_DATA) $$files '$(DESTDIR)$(man1dir)'"; \ $(INSTALL_DATA) $$files "$(DESTDIR)$(man1dir)" || exit $$?; }; \ done; } uninstall-man1: @$(NORMAL_UNINSTALL) @list='$(man1_MANS)'; test -n "$(man1dir)" || exit 0; \ files=`{ for i in $$list; do echo "$$i"; done; \ } | sed -e 's,.*/,,;h;s,.*\.,,;s,^[^1][0-9a-z]*$$,1,;x' \ -e 's,\.[0-9a-z]*$$,,;$(transform);G;s,\n,.,'`; \ dir='$(DESTDIR)$(man1dir)'; $(am__uninstall_files_from_dir) tags TAGS: ctags CTAGS: cscope cscopelist: distdir: $(DISTFILES) @srcdirstrip=`echo "$(srcdir)" | sed 's/[].[^$$\\*]/\\\\&/g'`; \ topsrcdirstrip=`echo "$(top_srcdir)" | sed 's/[].[^$$\\*]/\\\\&/g'`; \ list='$(DISTFILES)'; \ dist_files=`for file in $$list; do echo $$file; done | \ sed -e "s|^$$srcdirstrip/||;t" \ -e "s|^$$topsrcdirstrip/|$(top_builddir)/|;t"`; \ case $$dist_files in \ */*) $(MKDIR_P) `echo "$$dist_files" | \ sed '/\//!d;s|^|$(distdir)/|;s,/[^/]*$$,,' | \ sort -u` ;; \ esac; \ for file in $$dist_files; do \ if test -f $$file || test -d $$file; then d=.; else d=$(srcdir); fi; \ if test -d $$d/$$file; then \ dir=`echo "/$$file" | sed -e 's,/[^/]*$$,,'`; \ if test -d "$(distdir)/$$file"; then \ find "$(distdir)/$$file" -type d ! 
-perm -700 -exec chmod u+rwx {} \;; \ fi; \ if test -d $(srcdir)/$$file && test $$d != $(srcdir); then \ cp -fpR $(srcdir)/$$file "$(distdir)$$dir" || exit 1; \ find "$(distdir)/$$file" -type d ! -perm -700 -exec chmod u+rwx {} \;; \ fi; \ cp -fpR $$d/$$file "$(distdir)$$dir" || exit 1; \ else \ test -f "$(distdir)/$$file" \ || cp -p $$d/$$file "$(distdir)/$$file" \ || exit 1; \ fi; \ done check-am: all-am check: check-am all-am: Makefile $(SCRIPTS) $(MANS) installdirs: for dir in "$(DESTDIR)$(bindir)" "$(DESTDIR)$(man1dir)"; do \ test -z "$$dir" || $(MKDIR_P) "$$dir"; \ done install: install-am install-exec: install-exec-am install-data: install-data-am uninstall: uninstall-am install-am: all-am @$(MAKE) $(AM_MAKEFLAGS) install-exec-am install-data-am installcheck: installcheck-am install-strip: if test -z '$(STRIP)'; then \ $(MAKE) $(AM_MAKEFLAGS) INSTALL_PROGRAM="$(INSTALL_STRIP_PROGRAM)" \ install_sh_PROGRAM="$(INSTALL_STRIP_PROGRAM)" INSTALL_STRIP_FLAG=-s \ install; \ else \ $(MAKE) $(AM_MAKEFLAGS) INSTALL_PROGRAM="$(INSTALL_STRIP_PROGRAM)" \ install_sh_PROGRAM="$(INSTALL_STRIP_PROGRAM)" INSTALL_STRIP_FLAG=-s \ "INSTALL_PROGRAM_ENV=STRIPPROG='$(STRIP)'" install; \ fi mostlyclean-generic: clean-generic: distclean-generic: -test -z "$(CONFIG_CLEAN_FILES)" || rm -f $(CONFIG_CLEAN_FILES) -test . = "$(srcdir)" || test -z "$(CONFIG_CLEAN_VPATH_FILES)" || rm -f $(CONFIG_CLEAN_VPATH_FILES) maintainer-clean-generic: @echo "This command is intended for maintainers to use" @echo "it deletes files that may require special tools to rebuild." 
clean-am: clean-generic clean-libtool mostlyclean-am distclean: distclean-am -rm -f Makefile distclean-am: clean-am distclean-generic dvi: dvi-am dvi-am: html: html-am html-am: info: info-am info-am: install-data-am: install-man install-dvi: install-dvi-am install-dvi-am: install-exec-am: install-binSCRIPTS install-html: install-html-am install-html-am: install-info: install-info-am install-info-am: install-man: install-man1 install-pdf: install-pdf-am install-pdf-am: install-ps: install-ps-am install-ps-am: installcheck-am: maintainer-clean: maintainer-clean-am -rm -f Makefile maintainer-clean-am: distclean-am maintainer-clean-generic mostlyclean: mostlyclean-am mostlyclean-am: mostlyclean-generic mostlyclean-libtool pdf: pdf-am pdf-am: ps: ps-am ps-am: uninstall-am: uninstall-binSCRIPTS uninstall-man uninstall-man: uninstall-man1 .MAKE: install-am install-strip .PHONY: all all-am check check-am clean clean-generic clean-libtool \ cscopelist-am ctags-am distclean distclean-generic \ distclean-libtool distdir dvi dvi-am html html-am info info-am \ install install-am install-binSCRIPTS install-data \ install-data-am install-dvi install-dvi-am install-exec \ install-exec-am install-html install-html-am install-info \ install-info-am install-man install-man1 install-pdf \ install-pdf-am install-ps install-ps-am install-strip \ installcheck installcheck-am installdirs maintainer-clean \ maintainer-clean-generic mostlyclean mostlyclean-generic \ mostlyclean-libtool pdf pdf-am ps ps-am tags-am uninstall \ uninstall-am uninstall-binSCRIPTS uninstall-man uninstall-man1 install-binSCRIPTS: $(bin_SCRIPTS) @$(NORMAL_INSTALL) test -z "$(DESTDIR)$(bindir)" || $(mkdir_p) "$(DESTDIR)$(bindir)" @list='$(bin_SCRIPTS)'; for p in $$list; do \ cp $(top_srcdir)/contribs/sgather/$$p $(DESTDIR)$(bindir)/$$p; \ chmod 755 $(DESTDIR)$(bindir)/$$p;\ done uninstall-binSCRIPTS: @$(NORMAL_UNINSTALL) @list='$(bin_SCRIPTS)'; for p in $$list; do \ echo " rm -f '$(DESTDIR)$(bindir)/$$p'"; \ rm -f 
"$(DESTDIR)$(bindir)/$$p"; \ done clean: # Tell versions [3.59,3.63) of GNU make to not export all variables. # Otherwise a system limit (for SysV at least) may be exceeded. .NOEXPORT: slurm-slurm-15-08-7-1/contribs/sgather/sgather000077500000000000000000000214231265000126300211720ustar00rootroot00000000000000#!/bin/bash # # sgather - a counterpart of "sbcast" # # Copyright (c) 2013, ZIH, TU Dresden, Federal Republic of Germany # # Author: Matthias Jurenz # Version 1.0 # # Change history: # version date author comment # 1.0 2013-10-29 jurenz released initial version # 2013-10-28 jurenz do not perform a remote copy if the destination # file is node-global (i.e. /scratch/* or /home/*) # # # "global" variables # SCONTROL="scontrol" SRUN="srun" SCP="scp" RM="rm" VERSION="1.0" # Define node-global file systems for which a remote copy is not required # Specify any number of file system paths as desired here GLOBAL_FILE[0]="/home/*" GLOBAL_FILE[1]="/scratch/*" # # show_help - display help message # function show_help { cat << EOF Usage: sgather [OPTIONS] SOURCE DEST -C, --compress compress the file being transmitted -f, --force ignore nonexistent source file -F, --fanout=num specify message fanout -k, --keep do not remove source file after transmission -p, --preserve preserve modes and times of source file -r, --recursive copy directories recursively -t, --timeout=secs specify message timeout (seconds) -v, --verbose provide detailed event logging -V, --version print version information and exit Help options: --help show this help message --usage display brief usage message EOF } # # show_usage - display brief usage message # function show_usage { echo "Usage: sgather [-CfFkprtvV] SOURCE DEST" } # # verbose - print a verbose message to stdout # function verbose { if [ $_SGATHER_VERBOSE -ge $1 ]; then if [ -z "$_SGATHER_SPAWNED" ]; then prefix="sgather:" else prefix="sgather($(hostname)):" fi echo "$prefix $2" fi } # # "main" # if [ -z "$_SGATHER_SPAWNED" ]; then if [ ! 
-z ${SGATHER_COMPRESS+x} ]; then _SGATHER_COMPRESS=true; else _SGATHER_COMPRESS=false; fi if [ ! -z ${SGATHER_FORCE+x} ]; then _SGATHER_FORCE=true; else _SGATHER_FORCE=false; fi if [ ! -z ${SGATHER_FANOUT+x} ]; then _SGATHER_FANOUT=$SGATHER_FANOUT; else _SGATHER_FANOUT=8; fi if [ ! -z ${SGATHER_KEEP+x} ]; then _SGATHER_KEEP=true; else _SGATHER_KEEP=false; fi if [ ! -z ${SGATHER_PRESERVE+x} ]; then _SGATHER_PRESERVE=true; else _SGATHER_PRESERVE=false; fi if [ ! -z ${SGATHER_RECURSIVE+x} ]; then _SGATHER_RECURSIVE=true; else _SGATHER_RECURSIVE=false; fi if [ ! -z ${SGATHER_TIMEOUT+x} ]; then _SGATHER_TIMEOUT=$SGATHER_TIMEOUT; else _SGATHER_TIMEOUT=60; fi _SGATHER_VERBOSE=0 _SGATHER_SOURCE= _SGATHER_DEST= _SGATHER_REMOTE_COPY=true _SGATHER_BATCHHOST= # parse command line options # opt_error=0 opts=$(getopt -n "$0" --options "CfF:hkprt:vV" --long "compress,force,fanout:,help,keep,preserve,recursive,timeout:,usage,verbose,version" -- "$@") || opt_error=$? eval set -- "$opts" while [ $opt_error -eq 0 -a $# -gt 0 ]; do case "$1" in -C|--compress) _SGATHER_COMPRESS=true shift ;; -f|--force) _SGATHER_FORCE=true shift ;; -F|--fanout) _SGATHER_FANOUT=$2 shift 2 ;; -h|--help) show_help exit 0 ;; -k|--keep) _SGATHER_KEEP=true shift ;; -p|--preserve) _SGATHER_PRESERVE=true shift ;; -r|--recursive) _SGATHER_RECURSIVE=true shift ;; -t|--timeout) _SGATHER_TIMEOUT=$2 shift 2 ;; --usage) show_usage exit 0 ;; -v|--verbose) _SGATHER_VERBOSE=$(( $_SGATHER_VERBOSE + 1 )) shift ;; -V|--version) echo "sgather $VERSION ($($SRUN --version))" exit $? ;; --) shift if [ $# -ne 2 ]; then echo "Need two file names, have $# names" >&2 opt_error=1 else _SGATHER_SOURCE="$1" # convert relative to absolute destination path, if necessary case $2 in /*) _SGATHER_DEST="$2" ;; *) _SGATHER_DEST="$PWD/$2" ;; esac fi break ;; esac done # verify given fanout # if [ $opt_error -eq 0 ] && !
[ $opt_error -eq 0 -a $_SGATHER_FANOUT -eq $_SGATHER_FANOUT -a $_SGATHER_FANOUT -gt 0 -a $_SGATHER_FANOUT -le 8 ] 2>/dev/null; then echo "$0: invalid fanout -- '$_SGATHER_FANOUT'" >&2 opt_error=1 fi # verify given timeout # if [ $opt_error -eq 0 ] && ! [ $opt_error -eq 0 -a $_SGATHER_TIMEOUT -eq $_SGATHER_TIMEOUT -a $_SGATHER_TIMEOUT -gt 0 ] 2>/dev/null; then echo "$0: invalid timeout -- '$_SGATHER_TIMEOUT'" >&2 opt_error=1 fi if [ $opt_error -ne 0 ]; then echo "Try \"sgather --help\" for more information" >&2 exit $opt_error fi verbose 1 "-----------------------------" verbose 1 "compress = $_SGATHER_COMPRESS" verbose 1 "force = $_SGATHER_FORCE" verbose 1 "fanout = $_SGATHER_FANOUT" verbose 1 "keep source = $_SGATHER_KEEP" verbose 1 "preserve = $_SGATHER_PRESERVE" verbose 1 "recursive = $_SGATHER_RECURSIVE" verbose 1 "timeout = $_SGATHER_TIMEOUT" verbose 1 "verbose = $_SGATHER_VERBOSE" verbose 1 "source = $_SGATHER_SOURCE" verbose 1 "dest = $_SGATHER_DEST" verbose 1 "-----------------------------" # check whether we're within a SLURM job # if [ -z $SLURM_JOBID ]; then echo "$0: error: Command only valid from within SLURM job" >&2 exit 1 fi verbose 1 "jobid = $SLURM_JOBID" verbose 1 "node_cnt = $SLURM_NNODES" verbose 1 "node_list = $SLURM_NODELIST" # check whether the destination file is node-global # (->no remote copying is necessary) # inx=0 while [ $inx -lt ${#GLOBAL_FILE[@]} ] do case $_SGATHER_DEST in ${GLOBAL_FILE[$inx]}) _SGATHER_REMOTE_COPY=false ;; esac inx=$((inx+1)) done verbose 1 "remote copy = $_SGATHER_REMOTE_COPY" # determine the batch host node via scontrol # tmp=$($SCONTROL show job $SLURM_JOBID | grep BatchHost) || exit $? 
_SGATHER_BATCHHOST=$(echo $tmp | cut -d '=' -f 2) # export control environment variables for subsequent call to itself # export _SGATHER_COMPRESS export _SGATHER_FORCE export _SGATHER_FANOUT export _SGATHER_KEEP export _SGATHER_PRESERVE export _SGATHER_RECURSIVE export _SGATHER_TIMEOUT export _SGATHER_VERBOSE export _SGATHER_SOURCE export _SGATHER_DEST export _SGATHER_REMOTE_COPY export _SGATHER_BATCHHOST export _SGATHER_SPAWNED=1 # spawn this script to all job nodes # # either in one step, if the destination file is node-global (remote copying # isn't necessary) ... # if ! $_SGATHER_REMOTE_COPY; then $SRUN --ntasks=$SLURM_NNODES --ntasks-per-node=1 $0 || exit $? # ... or in multiple steps with regard to $_SGATHER_FANOUT # else nodelist=$($SCONTROL show hostnames $SLURM_NODELIST | sort) set $nodelist nodesublist="" nodesubcnt=0 while [ $# -gt 0 ]; do if [ -z $nodesublist ]; then nodesublist=$1 else nodesublist="$nodesublist,$1" fi nodesubcnt=$(( $nodesubcnt + 1 )) shift if [ $# -eq 0 -o $nodesubcnt -eq $_SGATHER_FANOUT ]; then $SRUN --nodelist="$nodesublist" --ntasks=$nodesubcnt --ntasks-per-node=1 $0 || exit $? nodesublist="" nodesubcnt=0 fi done fi else #_SGATHER_SPAWNED # check whether the source file exists # if [ !
-e $_SGATHER_SOURCE ]; then verbose 0 "error: Can't open $_SGATHER_SOURCE" >&2 if $_SGATHER_FORCE; then verbose 0 "$_SGATHER_SOURCE ignored" >&2 exit 0 else exit 1 fi fi # compose scp command # # prepend destination node name to destination file, if remote copying # is necessary # if $_SGATHER_REMOTE_COPY && [ $(hostname) != $_SGATHER_BATCHHOST ]; then _SGATHER_DEST="$_SGATHER_BATCHHOST:$_SGATHER_DEST" fi # append source node name to destination file _SGATHER_DEST="$_SGATHER_DEST.$(hostname)" scp_cmd="$SCP" if [ $_SGATHER_VERBOSE -eq 0 ]; then scp_cmd="$SCP -q" elif [ $_SGATHER_VERBOSE -ge 2 ]; then scp_cmd="$SCP -v" fi if $_SGATHER_RECURSIVE; then scp_cmd="$scp_cmd -r" fi if $_SGATHER_COMPRESS; then scp_cmd="$scp_cmd -C" fi if [ $_SGATHER_TIMEOUT -ne 0 ]; then scp_cmd="$scp_cmd -o ConnectTimeout=$_SGATHER_TIMEOUT" fi if $_SGATHER_PRESERVE; then scp_cmd="$scp_cmd -p" fi scp_cmd="$scp_cmd $_SGATHER_SOURCE $_SGATHER_DEST" # run scp # verbose 1 "executing \"$scp_cmd\"" $scp_cmd || exit $? # remove source file, if desired # if ! $_SGATHER_KEEP; then verbose 1 "removing $_SGATHER_SOURCE" $RM -r $_SGATHER_SOURCE || exit $? fi fi exit 0 slurm-slurm-15-08-7-1/contribs/sgather/sgather.1000066400000000000000000000054721265000126300213340ustar00rootroot00000000000000.TH SGATHER "1" "October 2013" "sgather 1.0" "ZIH Slurm extensions" .SH "NAME" sgather \- transmit a file from the nodes allocated to a SLURM job. .SH "SYNOPSIS" \fBsgather\fR [\-CfFkprtvV] SOURCE DEST .SH "DESCRIPTION" \fBsgather\fR is used to transmit a file from all nodes allocated to the currently active SLURM job. This command should only be executed from within a SLURM batch job or within the shell spawned after a SLURM job's resource allocation. \fBSOURCE\fR should be the fully qualified pathname for the file copy to be fetched from each node. \fBSOURCE\fR should be on a file system local to that node. 
\fBDEST\fR is the name of the file to be created on the current node where the source node name will be appended. Note that parallel file systems \fImay\fR provide better performance than \fBsgather\fR can provide, although performance will vary by file size, degree of parallelism, and network type. .SH "OPTIONS" .TP \fB\-C\fR, \fB\-\-compress\fR Compress the file being transmitted. .TP \fB\-f\fR, \fB\-\-force\fR Ignore nonexistent source file. .TP \fB\-F\fR \fInumber\fR, \fB\-\-fanout\fR=\fInumber\fR Specify the fanout of messages used for file transfer. Maximum value is currently eight. .TP \fB\-k\fR, \fB\-\-keep\fR Do not remove the source file after transmission. .TP \fB\-p\fR, \fB\-\-preserve\fR Preserves modification times, access times, and modes from the original file. .TP \fB\-r\fR, \fB\-\-recursive\fR Copy directories recursively. .TP \fB\-t\fR \fIseconds\fR, \fB\-\-timeout\fR=\fIseconds\fR Specify the message timeout in seconds. The default value is 60 seconds. .TP \fB\-v\fR, \fB\-\-verbose\fR Provide detailed event logging through program execution. .TP \fB\-V\fR, \fB\-\-version\fR Print version information and exit. .SH "ENVIRONMENT VARIABLES" .PP Some \fBsgather\fR options may be set via environment variables. These environment variables, along with their corresponding options, are listed below. (Note: Command line options will always override these settings.) .TP 20 \fBSGATHER_COMPRESS\fR \fB\-C, \-\-compress\fR .TP \fBSGATHER_FANOUT\fR \fB\-F\fR \fInumber\fR, \fB\-\-fanout\fR=\fInumber\fR .TP \fBSGATHER_FORCE\fR \fB\-f, \-\-force\fR .TP \fBSGATHER_KEEP\fR \fB\-k, \-\-keep\fR .TP \fBSGATHER_PRESERVE\fR \fB\-p, \-\-preserve\fR .TP \fBSGATHER_RECURSIVE\fR \fB\-r, \-\-recursive\fR .TP \fBSGATHER_TIMEOUT\fR \fB\-t\fR \fIseconds\fR, \fB\-\-timeout\fR=\fIseconds\fR .SH "EXAMPLE" Using a batch script, execute a program that produces \fB/tmp/my.data\fR on all nodes allocated to the SLURM job and then transmit these files to the batch node. 
.nf > cat my.job #!/bin/bash srun my.prog --output /tmp/my.data sgather /tmp/my.data all_data > sbatch \-\-nodes=8 my.job srun: jobid 12345 submitted .fi .SH "COPYING" Copyright (C) 2013-2013 ZIH, TU Dresden, Federal Republic of Germany. .SH "SEE ALSO" \fBsbcast\fR(1) slurm-slurm-15-08-7-1/contribs/sgi/000077500000000000000000000000001265000126300167325ustar00rootroot00000000000000slurm-slurm-15-08-7-1/contribs/sgi/Makefile.am000066400000000000000000000006341265000126300207710ustar00rootroot00000000000000# # Makefile for sgi programs # AUTOMAKE_OPTIONS = foreign if HAVE_NETLOC EXTRA_DIST = README.txt bin_PROGRAMS = netloc_to_topology netloc_to_topology_SOURCES = netloc_to_topology.c netloc_to_topology_CPPFLAGS = $(NETLOC_CPPFLAGS) $(HWLOC_CPPFLAGS) netloc_to_topology_LDFLAGS = $(NETLOC_LDFLAGS) $(NETLOC_LIBS) $(HWLOC_LDFLAGS) $(HWLOC_LIBS) else EXTRA_DIST = \ netloc_to_topology.c \ README.txt endif slurm-slurm-15-08-7-1/contribs/sgi/Makefile.in000066400000000000000000000664551265000126300210170ustar00rootroot00000000000000# Makefile.in generated by automake 1.14.1 from Makefile.am. # @configure_input@ # Copyright (C) 1994-2013 Free Software Foundation, Inc. # This Makefile.in is free software; the Free Software Foundation # gives unlimited permission to copy and/or distribute it, # with or without modifications, as long as this notice is preserved. # This program is distributed in the hope that it will be useful, # but WITHOUT ANY WARRANTY, to the extent permitted by law; without # even the implied warranty of MERCHANTABILITY or FITNESS FOR A # PARTICULAR PURPOSE. @SET_MAKE@ # # Makefile for sgi programs # VPATH = @srcdir@ am__is_gnu_make = test -n '$(MAKEFILE_LIST)' && test -n '$(MAKELEVEL)' am__make_running_with_option = \ case $${target_option-} in \ ?) 
;; \ *) echo "am__make_running_with_option: internal error: invalid" \ "target option '$${target_option-}' specified" >&2; \ exit 1;; \ esac; \ has_opt=no; \ sane_makeflags=$$MAKEFLAGS; \ if $(am__is_gnu_make); then \ sane_makeflags=$$MFLAGS; \ else \ case $$MAKEFLAGS in \ *\\[\ \ ]*) \ bs=\\; \ sane_makeflags=`printf '%s\n' "$$MAKEFLAGS" \ | sed "s/$$bs$$bs[$$bs $$bs ]*//g"`;; \ esac; \ fi; \ skip_next=no; \ strip_trailopt () \ { \ flg=`printf '%s\n' "$$flg" | sed "s/$$1.*$$//"`; \ }; \ for flg in $$sane_makeflags; do \ test $$skip_next = yes && { skip_next=no; continue; }; \ case $$flg in \ *=*|--*) continue;; \ -*I) strip_trailopt 'I'; skip_next=yes;; \ -*I?*) strip_trailopt 'I';; \ -*O) strip_trailopt 'O'; skip_next=yes;; \ -*O?*) strip_trailopt 'O';; \ -*l) strip_trailopt 'l'; skip_next=yes;; \ -*l?*) strip_trailopt 'l';; \ -[dEDm]) skip_next=yes;; \ -[JT]) skip_next=yes;; \ esac; \ case $$flg in \ *$$target_option*) has_opt=yes; break;; \ esac; \ done; \ test $$has_opt = yes am__make_dryrun = (target_option=n; $(am__make_running_with_option)) am__make_keepgoing = (target_option=k; $(am__make_running_with_option)) pkgdatadir = $(datadir)/@PACKAGE@ pkgincludedir = $(includedir)/@PACKAGE@ pkglibdir = $(libdir)/@PACKAGE@ pkglibexecdir = $(libexecdir)/@PACKAGE@ am__cd = CDPATH="$${ZSH_VERSION+.}$(PATH_SEPARATOR)" && cd install_sh_DATA = $(install_sh) -c -m 644 install_sh_PROGRAM = $(install_sh) -c install_sh_SCRIPT = $(install_sh) -c INSTALL_HEADER = $(INSTALL_DATA) transform = $(program_transform_name) NORMAL_INSTALL = : PRE_INSTALL = : POST_INSTALL = : NORMAL_UNINSTALL = : PRE_UNINSTALL = : POST_UNINSTALL = : build_triplet = @build@ host_triplet = @host@ target_triplet = @target@ @HAVE_NETLOC_TRUE@bin_PROGRAMS = netloc_to_topology$(EXEEXT) subdir = contribs/sgi DIST_COMMON = $(srcdir)/Makefile.in $(srcdir)/Makefile.am \ $(top_srcdir)/auxdir/depcomp ACLOCAL_M4 = $(top_srcdir)/aclocal.m4 am__aclocal_m4_deps = $(top_srcdir)/auxdir/ax_lib_hdf5.m4 \ 
$(top_srcdir)/auxdir/ax_pthread.m4 \ $(top_srcdir)/auxdir/libtool.m4 \ $(top_srcdir)/auxdir/ltoptions.m4 \ $(top_srcdir)/auxdir/ltsugar.m4 \ $(top_srcdir)/auxdir/ltversion.m4 \ $(top_srcdir)/auxdir/lt~obsolete.m4 \ $(top_srcdir)/auxdir/slurm.m4 \ $(top_srcdir)/auxdir/x_ac__system_configuration.m4 \ $(top_srcdir)/auxdir/x_ac_affinity.m4 \ $(top_srcdir)/auxdir/x_ac_aix.m4 \ $(top_srcdir)/auxdir/x_ac_blcr.m4 \ $(top_srcdir)/auxdir/x_ac_bluegene.m4 \ $(top_srcdir)/auxdir/x_ac_cflags.m4 \ $(top_srcdir)/auxdir/x_ac_cray.m4 \ $(top_srcdir)/auxdir/x_ac_curl.m4 \ $(top_srcdir)/auxdir/x_ac_databases.m4 \ $(top_srcdir)/auxdir/x_ac_debug.m4 \ $(top_srcdir)/auxdir/x_ac_dlfcn.m4 \ $(top_srcdir)/auxdir/x_ac_env.m4 \ $(top_srcdir)/auxdir/x_ac_freeipmi.m4 \ $(top_srcdir)/auxdir/x_ac_gpl_licensed.m4 \ $(top_srcdir)/auxdir/x_ac_hwloc.m4 \ $(top_srcdir)/auxdir/x_ac_iso.m4 \ $(top_srcdir)/auxdir/x_ac_json.m4 \ $(top_srcdir)/auxdir/x_ac_lua.m4 \ $(top_srcdir)/auxdir/x_ac_man2html.m4 \ $(top_srcdir)/auxdir/x_ac_munge.m4 \ $(top_srcdir)/auxdir/x_ac_ncurses.m4 \ $(top_srcdir)/auxdir/x_ac_netloc.m4 \ $(top_srcdir)/auxdir/x_ac_nrt.m4 \ $(top_srcdir)/auxdir/x_ac_ofed.m4 \ $(top_srcdir)/auxdir/x_ac_pam.m4 \ $(top_srcdir)/auxdir/x_ac_printf_null.m4 \ $(top_srcdir)/auxdir/x_ac_ptrace.m4 \ $(top_srcdir)/auxdir/x_ac_readline.m4 \ $(top_srcdir)/auxdir/x_ac_rrdtool.m4 \ $(top_srcdir)/auxdir/x_ac_setpgrp.m4 \ $(top_srcdir)/auxdir/x_ac_setproctitle.m4 \ $(top_srcdir)/auxdir/x_ac_sgi_job.m4 \ $(top_srcdir)/auxdir/x_ac_slurm_ssl.m4 \ $(top_srcdir)/auxdir/x_ac_sun_const.m4 \ $(top_srcdir)/configure.ac am__configure_deps = $(am__aclocal_m4_deps) $(CONFIGURE_DEPENDENCIES) \ $(ACLOCAL_M4) mkinstalldirs = $(install_sh) -d CONFIG_HEADER = $(top_builddir)/config.h $(top_builddir)/slurm/slurm.h CONFIG_CLEAN_FILES = CONFIG_CLEAN_VPATH_FILES = am__installdirs = "$(DESTDIR)$(bindir)" PROGRAMS = $(bin_PROGRAMS) am__netloc_to_topology_SOURCES_DIST = netloc_to_topology.c 
@HAVE_NETLOC_TRUE@am_netloc_to_topology_OBJECTS = netloc_to_topology-netloc_to_topology.$(OBJEXT) netloc_to_topology_OBJECTS = $(am_netloc_to_topology_OBJECTS) netloc_to_topology_LDADD = $(LDADD) AM_V_lt = $(am__v_lt_@AM_V@) am__v_lt_ = $(am__v_lt_@AM_DEFAULT_V@) am__v_lt_0 = --silent am__v_lt_1 = netloc_to_topology_LINK = $(LIBTOOL) $(AM_V_lt) --tag=CC \ $(AM_LIBTOOLFLAGS) $(LIBTOOLFLAGS) --mode=link $(CCLD) \ $(AM_CFLAGS) $(CFLAGS) $(netloc_to_topology_LDFLAGS) \ $(LDFLAGS) -o $@ AM_V_P = $(am__v_P_@AM_V@) am__v_P_ = $(am__v_P_@AM_DEFAULT_V@) am__v_P_0 = false am__v_P_1 = : AM_V_GEN = $(am__v_GEN_@AM_V@) am__v_GEN_ = $(am__v_GEN_@AM_DEFAULT_V@) am__v_GEN_0 = @echo " GEN " $@; am__v_GEN_1 = AM_V_at = $(am__v_at_@AM_V@) am__v_at_ = $(am__v_at_@AM_DEFAULT_V@) am__v_at_0 = @ am__v_at_1 = DEFAULT_INCLUDES = -I.@am__isrc@ -I$(top_builddir) -I$(top_builddir)/slurm depcomp = $(SHELL) $(top_srcdir)/auxdir/depcomp am__depfiles_maybe = depfiles am__mv = mv -f COMPILE = $(CC) $(DEFS) $(DEFAULT_INCLUDES) $(INCLUDES) $(AM_CPPFLAGS) \ $(CPPFLAGS) $(AM_CFLAGS) $(CFLAGS) LTCOMPILE = $(LIBTOOL) $(AM_V_lt) --tag=CC $(AM_LIBTOOLFLAGS) \ $(LIBTOOLFLAGS) --mode=compile $(CC) $(DEFS) \ $(DEFAULT_INCLUDES) $(INCLUDES) $(AM_CPPFLAGS) $(CPPFLAGS) \ $(AM_CFLAGS) $(CFLAGS) AM_V_CC = $(am__v_CC_@AM_V@) am__v_CC_ = $(am__v_CC_@AM_DEFAULT_V@) am__v_CC_0 = @echo " CC " $@; am__v_CC_1 = CCLD = $(CC) LINK = $(LIBTOOL) $(AM_V_lt) --tag=CC $(AM_LIBTOOLFLAGS) \ $(LIBTOOLFLAGS) --mode=link $(CCLD) $(AM_CFLAGS) $(CFLAGS) \ $(AM_LDFLAGS) $(LDFLAGS) -o $@ AM_V_CCLD = $(am__v_CCLD_@AM_V@) am__v_CCLD_ = $(am__v_CCLD_@AM_DEFAULT_V@) am__v_CCLD_0 = @echo " CCLD " $@; am__v_CCLD_1 = SOURCES = $(netloc_to_topology_SOURCES) DIST_SOURCES = $(am__netloc_to_topology_SOURCES_DIST) am__can_run_installinfo = \ case $$AM_UPDATE_INFO_DIR in \ n|no|NO) false;; \ *) (install-info --version) >/dev/null 2>&1;; \ esac am__tagged_files = $(HEADERS) $(SOURCES) $(TAGS_FILES) $(LISP) # Read a list of newline-separated strings 
from the standard input, # and print each of them once, without duplicates. Input order is # *not* preserved. am__uniquify_input = $(AWK) '\ BEGIN { nonempty = 0; } \ { items[$$0] = 1; nonempty = 1; } \ END { if (nonempty) { for (i in items) print i; }; } \ ' # Make sure the list of sources is unique. This is necessary because, # e.g., the same source file might be shared among _SOURCES variables # for different programs/libraries. am__define_uniq_tagged_files = \ list='$(am__tagged_files)'; \ unique=`for i in $$list; do \ if test -f "$$i"; then echo $$i; else echo $(srcdir)/$$i; fi; \ done | $(am__uniquify_input)` ETAGS = etags CTAGS = ctags DISTFILES = $(DIST_COMMON) $(DIST_SOURCES) $(TEXINFOS) $(EXTRA_DIST) ACLOCAL = @ACLOCAL@ AMTAR = @AMTAR@ AM_DEFAULT_VERBOSITY = @AM_DEFAULT_VERBOSITY@ AR = @AR@ AUTHD_CFLAGS = @AUTHD_CFLAGS@ AUTHD_LIBS = @AUTHD_LIBS@ AUTOCONF = @AUTOCONF@ AUTOHEADER = @AUTOHEADER@ AUTOMAKE = @AUTOMAKE@ AWK = @AWK@ BGL_LOADED = @BGL_LOADED@ BGQ_LOADED = @BGQ_LOADED@ BG_INCLUDES = @BG_INCLUDES@ BG_LDFLAGS = @BG_LDFLAGS@ BG_L_P_LOADED = @BG_L_P_LOADED@ BLCR_CPPFLAGS = @BLCR_CPPFLAGS@ BLCR_HOME = @BLCR_HOME@ BLCR_LDFLAGS = @BLCR_LDFLAGS@ BLCR_LIBS = @BLCR_LIBS@ BLUEGENE_LOADED = @BLUEGENE_LOADED@ CC = @CC@ CCDEPMODE = @CCDEPMODE@ CFLAGS = @CFLAGS@ CHECK_CFLAGS = @CHECK_CFLAGS@ CHECK_LIBS = @CHECK_LIBS@ CMD_LDFLAGS = @CMD_LDFLAGS@ CPP = @CPP@ CPPFLAGS = @CPPFLAGS@ CRAY_JOB_CPPFLAGS = @CRAY_JOB_CPPFLAGS@ CRAY_JOB_LDFLAGS = @CRAY_JOB_LDFLAGS@ CRAY_SELECT_CPPFLAGS = @CRAY_SELECT_CPPFLAGS@ CRAY_SELECT_LDFLAGS = @CRAY_SELECT_LDFLAGS@ CRAY_SWITCH_CPPFLAGS = @CRAY_SWITCH_CPPFLAGS@ CRAY_SWITCH_LDFLAGS = @CRAY_SWITCH_LDFLAGS@ CRAY_TASK_CPPFLAGS = @CRAY_TASK_CPPFLAGS@ CRAY_TASK_LDFLAGS = @CRAY_TASK_LDFLAGS@ CXX = @CXX@ CXXCPP = @CXXCPP@ CXXDEPMODE = @CXXDEPMODE@ CXXFLAGS = @CXXFLAGS@ CYGPATH_W = @CYGPATH_W@ DATAWARP_CPPFLAGS = @DATAWARP_CPPFLAGS@ DATAWARP_LDFLAGS = @DATAWARP_LDFLAGS@ DEFS = @DEFS@ DEPDIR = @DEPDIR@ DLLTOOL = @DLLTOOL@ DL_LIBS = @DL_LIBS@ 
DSYMUTIL = @DSYMUTIL@ DUMPBIN = @DUMPBIN@ ECHO_C = @ECHO_C@ ECHO_N = @ECHO_N@ ECHO_T = @ECHO_T@ EGREP = @EGREP@ EXEEXT = @EXEEXT@ FGREP = @FGREP@ FREEIPMI_CPPFLAGS = @FREEIPMI_CPPFLAGS@ FREEIPMI_LDFLAGS = @FREEIPMI_LDFLAGS@ FREEIPMI_LIBS = @FREEIPMI_LIBS@ GLIB_CFLAGS = @GLIB_CFLAGS@ GLIB_COMPILE_RESOURCES = @GLIB_COMPILE_RESOURCES@ GLIB_GENMARSHAL = @GLIB_GENMARSHAL@ GLIB_LIBS = @GLIB_LIBS@ GLIB_MKENUMS = @GLIB_MKENUMS@ GOBJECT_QUERY = @GOBJECT_QUERY@ GREP = @GREP@ GTK_CFLAGS = @GTK_CFLAGS@ GTK_LIBS = @GTK_LIBS@ H5CC = @H5CC@ H5FC = @H5FC@ HAVEMYSQLCONFIG = @HAVEMYSQLCONFIG@ HAVE_AIX = @HAVE_AIX@ HAVE_MAN2HTML = @HAVE_MAN2HTML@ HAVE_NRT = @HAVE_NRT@ HAVE_OPENSSL = @HAVE_OPENSSL@ HAVE_SOME_CURSES = @HAVE_SOME_CURSES@ HDF5_CC = @HDF5_CC@ HDF5_CFLAGS = @HDF5_CFLAGS@ HDF5_CPPFLAGS = @HDF5_CPPFLAGS@ HDF5_FC = @HDF5_FC@ HDF5_FFLAGS = @HDF5_FFLAGS@ HDF5_FLIBS = @HDF5_FLIBS@ HDF5_LDFLAGS = @HDF5_LDFLAGS@ HDF5_LIBS = @HDF5_LIBS@ HDF5_VERSION = @HDF5_VERSION@ HWLOC_CPPFLAGS = @HWLOC_CPPFLAGS@ HWLOC_LDFLAGS = @HWLOC_LDFLAGS@ HWLOC_LIBS = @HWLOC_LIBS@ INSTALL = @INSTALL@ INSTALL_DATA = @INSTALL_DATA@ INSTALL_PROGRAM = @INSTALL_PROGRAM@ INSTALL_SCRIPT = @INSTALL_SCRIPT@ INSTALL_STRIP_PROGRAM = @INSTALL_STRIP_PROGRAM@ JSON_CPPFLAGS = @JSON_CPPFLAGS@ JSON_LDFLAGS = @JSON_LDFLAGS@ LD = @LD@ LDFLAGS = @LDFLAGS@ LIBCURL = @LIBCURL@ LIBCURL_CPPFLAGS = @LIBCURL_CPPFLAGS@ LIBOBJS = @LIBOBJS@ LIBS = @LIBS@ LIBTOOL = @LIBTOOL@ LIB_LDFLAGS = @LIB_LDFLAGS@ LIPO = @LIPO@ LN_S = @LN_S@ LTLIBOBJS = @LTLIBOBJS@ MAINT = @MAINT@ MAKEINFO = @MAKEINFO@ MANIFEST_TOOL = @MANIFEST_TOOL@ MKDIR_P = @MKDIR_P@ MUNGE_CPPFLAGS = @MUNGE_CPPFLAGS@ MUNGE_DIR = @MUNGE_DIR@ MUNGE_LDFLAGS = @MUNGE_LDFLAGS@ MUNGE_LIBS = @MUNGE_LIBS@ MYSQL_CFLAGS = @MYSQL_CFLAGS@ MYSQL_LIBS = @MYSQL_LIBS@ NCURSES = @NCURSES@ NETLOC_CPPFLAGS = @NETLOC_CPPFLAGS@ NETLOC_LDFLAGS = @NETLOC_LDFLAGS@ NETLOC_LIBS = @NETLOC_LIBS@ NM = @NM@ NMEDIT = @NMEDIT@ NRT_CPPFLAGS = @NRT_CPPFLAGS@ NUMA_LIBS = @NUMA_LIBS@ OBJDUMP = @OBJDUMP@ OBJEXT = 
@OBJEXT@ OFED_CPPFLAGS = @OFED_CPPFLAGS@ OFED_LDFLAGS = @OFED_LDFLAGS@ OFED_LIBS = @OFED_LIBS@ OTOOL = @OTOOL@ OTOOL64 = @OTOOL64@ PACKAGE = @PACKAGE@ PACKAGE_BUGREPORT = @PACKAGE_BUGREPORT@ PACKAGE_NAME = @PACKAGE_NAME@ PACKAGE_STRING = @PACKAGE_STRING@ PACKAGE_TARNAME = @PACKAGE_TARNAME@ PACKAGE_URL = @PACKAGE_URL@ PACKAGE_VERSION = @PACKAGE_VERSION@ PAM_DIR = @PAM_DIR@ PAM_LIBS = @PAM_LIBS@ PATH_SEPARATOR = @PATH_SEPARATOR@ PKG_CONFIG = @PKG_CONFIG@ PKG_CONFIG_LIBDIR = @PKG_CONFIG_LIBDIR@ PKG_CONFIG_PATH = @PKG_CONFIG_PATH@ PROCTRACKDIR = @PROCTRACKDIR@ PROJECT = @PROJECT@ PTHREAD_CC = @PTHREAD_CC@ PTHREAD_CFLAGS = @PTHREAD_CFLAGS@ PTHREAD_LIBS = @PTHREAD_LIBS@ RANLIB = @RANLIB@ READLINE_LIBS = @READLINE_LIBS@ REAL_BGQ_LOADED = @REAL_BGQ_LOADED@ REAL_BG_L_P_LOADED = @REAL_BG_L_P_LOADED@ RELEASE = @RELEASE@ RRDTOOL_CPPFLAGS = @RRDTOOL_CPPFLAGS@ RRDTOOL_LDFLAGS = @RRDTOOL_LDFLAGS@ RRDTOOL_LIBS = @RRDTOOL_LIBS@ RUNJOB_LDFLAGS = @RUNJOB_LDFLAGS@ SED = @SED@ SEMAPHORE_LIBS = @SEMAPHORE_LIBS@ SEMAPHORE_SOURCES = @SEMAPHORE_SOURCES@ SET_MAKE = @SET_MAKE@ SHELL = @SHELL@ SLEEP_CMD = @SLEEP_CMD@ SLURMCTLD_PORT = @SLURMCTLD_PORT@ SLURMCTLD_PORT_COUNT = @SLURMCTLD_PORT_COUNT@ SLURMDBD_PORT = @SLURMDBD_PORT@ SLURMD_PORT = @SLURMD_PORT@ SLURM_API_AGE = @SLURM_API_AGE@ SLURM_API_CURRENT = @SLURM_API_CURRENT@ SLURM_API_MAJOR = @SLURM_API_MAJOR@ SLURM_API_REVISION = @SLURM_API_REVISION@ SLURM_API_VERSION = @SLURM_API_VERSION@ SLURM_MAJOR = @SLURM_MAJOR@ SLURM_MICRO = @SLURM_MICRO@ SLURM_MINOR = @SLURM_MINOR@ SLURM_PREFIX = @SLURM_PREFIX@ SLURM_VERSION_NUMBER = @SLURM_VERSION_NUMBER@ SLURM_VERSION_STRING = @SLURM_VERSION_STRING@ SO_LDFLAGS = @SO_LDFLAGS@ SSL_CPPFLAGS = @SSL_CPPFLAGS@ SSL_LDFLAGS = @SSL_LDFLAGS@ SSL_LIBS = @SSL_LIBS@ STRIP = @STRIP@ SUCMD = @SUCMD@ UTIL_LIBS = @UTIL_LIBS@ VERSION = @VERSION@ _libcurl_config = @_libcurl_config@ abs_builddir = @abs_builddir@ abs_srcdir = @abs_srcdir@ abs_top_builddir = @abs_top_builddir@ abs_top_srcdir = @abs_top_srcdir@ ac_ct_AR = 
@ac_ct_AR@ ac_ct_CC = @ac_ct_CC@ ac_ct_CXX = @ac_ct_CXX@ ac_ct_DUMPBIN = @ac_ct_DUMPBIN@ ac_have_man2html = @ac_have_man2html@ am__include = @am__include@ am__leading_dot = @am__leading_dot@ am__quote = @am__quote@ am__tar = @am__tar@ am__untar = @am__untar@ ax_pthread_config = @ax_pthread_config@ bindir = @bindir@ build = @build@ build_alias = @build_alias@ build_cpu = @build_cpu@ build_os = @build_os@ build_vendor = @build_vendor@ builddir = @builddir@ datadir = @datadir@ datarootdir = @datarootdir@ docdir = @docdir@ dvidir = @dvidir@ exec_prefix = @exec_prefix@ host = @host@ host_alias = @host_alias@ host_cpu = @host_cpu@ host_os = @host_os@ host_vendor = @host_vendor@ htmldir = @htmldir@ includedir = @includedir@ infodir = @infodir@ install_sh = @install_sh@ libdir = @libdir@ libexecdir = @libexecdir@ localedir = @localedir@ localstatedir = @localstatedir@ lua_CFLAGS = @lua_CFLAGS@ lua_LIBS = @lua_LIBS@ mandir = @mandir@ mkdir_p = @mkdir_p@ oldincludedir = @oldincludedir@ pdfdir = @pdfdir@ prefix = @prefix@ program_transform_name = @program_transform_name@ psdir = @psdir@ sbindir = @sbindir@ sharedstatedir = @sharedstatedir@ srcdir = @srcdir@ sysconfdir = @sysconfdir@ target = @target@ target_alias = @target_alias@ target_cpu = @target_cpu@ target_os = @target_os@ target_vendor = @target_vendor@ top_build_prefix = @top_build_prefix@ top_builddir = @top_builddir@ top_srcdir = @top_srcdir@ AUTOMAKE_OPTIONS = foreign @HAVE_NETLOC_FALSE@EXTRA_DIST = \ @HAVE_NETLOC_FALSE@ netloc_to_topology.c \ @HAVE_NETLOC_FALSE@ README.txt @HAVE_NETLOC_TRUE@EXTRA_DIST = README.txt @HAVE_NETLOC_TRUE@netloc_to_topology_SOURCES = netloc_to_topology.c @HAVE_NETLOC_TRUE@netloc_to_topology_CPPFLAGS = $(NETLOC_CPPFLAGS) $(HWLOC_CPPFLAGS) @HAVE_NETLOC_TRUE@netloc_to_topology_LDFLAGS = $(NETLOC_LDFLAGS) $(NETLOC_LIBS) $(HWLOC_LDFLAGS) $(HWLOC_LIBS) all: all-am .SUFFIXES: .SUFFIXES: .c .lo .o .obj $(srcdir)/Makefile.in: @MAINTAINER_MODE_TRUE@ $(srcdir)/Makefile.am $(am__configure_deps) @for 
dep in $?; do \ case '$(am__configure_deps)' in \ *$$dep*) \ ( cd $(top_builddir) && $(MAKE) $(AM_MAKEFLAGS) am--refresh ) \ && { if test -f $@; then exit 0; else break; fi; }; \ exit 1;; \ esac; \ done; \ echo ' cd $(top_srcdir) && $(AUTOMAKE) --foreign contribs/sgi/Makefile'; \ $(am__cd) $(top_srcdir) && \ $(AUTOMAKE) --foreign contribs/sgi/Makefile .PRECIOUS: Makefile Makefile: $(srcdir)/Makefile.in $(top_builddir)/config.status @case '$?' in \ *config.status*) \ cd $(top_builddir) && $(MAKE) $(AM_MAKEFLAGS) am--refresh;; \ *) \ echo ' cd $(top_builddir) && $(SHELL) ./config.status $(subdir)/$@ $(am__depfiles_maybe)'; \ cd $(top_builddir) && $(SHELL) ./config.status $(subdir)/$@ $(am__depfiles_maybe);; \ esac; $(top_builddir)/config.status: $(top_srcdir)/configure $(CONFIG_STATUS_DEPENDENCIES) cd $(top_builddir) && $(MAKE) $(AM_MAKEFLAGS) am--refresh $(top_srcdir)/configure: @MAINTAINER_MODE_TRUE@ $(am__configure_deps) cd $(top_builddir) && $(MAKE) $(AM_MAKEFLAGS) am--refresh $(ACLOCAL_M4): @MAINTAINER_MODE_TRUE@ $(am__aclocal_m4_deps) cd $(top_builddir) && $(MAKE) $(AM_MAKEFLAGS) am--refresh $(am__aclocal_m4_deps): install-binPROGRAMS: $(bin_PROGRAMS) @$(NORMAL_INSTALL) @list='$(bin_PROGRAMS)'; test -n "$(bindir)" || list=; \ if test -n "$$list"; then \ echo " $(MKDIR_P) '$(DESTDIR)$(bindir)'"; \ $(MKDIR_P) "$(DESTDIR)$(bindir)" || exit 1; \ fi; \ for p in $$list; do echo "$$p $$p"; done | \ sed 's/$(EXEEXT)$$//' | \ while read p p1; do if test -f $$p \ || test -f $$p1 \ ; then echo "$$p"; echo "$$p"; else :; fi; \ done | \ sed -e 'p;s,.*/,,;n;h' \ -e 's|.*|.|' \ -e 'p;x;s,.*/,,;s/$(EXEEXT)$$//;$(transform);s/$$/$(EXEEXT)/' | \ sed 'N;N;N;s,\n, ,g' | \ $(AWK) 'BEGIN { files["."] = ""; dirs["."] = 1 } \ { d=$$3; if (dirs[d] != 1) { print "d", d; dirs[d] = 1 } \ if ($$2 == $$4) files[d] = files[d] " " $$1; \ else { print "f", $$3 "/" $$4, $$1; } } \ END { for (d in files) print "f", d, files[d] }' | \ while read type dir files; do \ if test "$$dir" = .; then 
dir=; else dir=/$$dir; fi; \ test -z "$$files" || { \ echo " $(INSTALL_PROGRAM_ENV) $(LIBTOOL) $(AM_LIBTOOLFLAGS) $(LIBTOOLFLAGS) --mode=install $(INSTALL_PROGRAM) $$files '$(DESTDIR)$(bindir)$$dir'"; \ $(INSTALL_PROGRAM_ENV) $(LIBTOOL) $(AM_LIBTOOLFLAGS) $(LIBTOOLFLAGS) --mode=install $(INSTALL_PROGRAM) $$files "$(DESTDIR)$(bindir)$$dir" || exit $$?; \ } \ ; done uninstall-binPROGRAMS: @$(NORMAL_UNINSTALL) @list='$(bin_PROGRAMS)'; test -n "$(bindir)" || list=; \ files=`for p in $$list; do echo "$$p"; done | \ sed -e 'h;s,^.*/,,;s/$(EXEEXT)$$//;$(transform)' \ -e 's/$$/$(EXEEXT)/' \ `; \ test -n "$$list" || exit 0; \ echo " ( cd '$(DESTDIR)$(bindir)' && rm -f" $$files ")"; \ cd "$(DESTDIR)$(bindir)" && rm -f $$files clean-binPROGRAMS: @list='$(bin_PROGRAMS)'; test -n "$$list" || exit 0; \ echo " rm -f" $$list; \ rm -f $$list || exit $$?; \ test -n "$(EXEEXT)" || exit 0; \ list=`for p in $$list; do echo "$$p"; done | sed 's/$(EXEEXT)$$//'`; \ echo " rm -f" $$list; \ rm -f $$list netloc_to_topology$(EXEEXT): $(netloc_to_topology_OBJECTS) $(netloc_to_topology_DEPENDENCIES) $(EXTRA_netloc_to_topology_DEPENDENCIES) @rm -f netloc_to_topology$(EXEEXT) $(AM_V_CCLD)$(netloc_to_topology_LINK) $(netloc_to_topology_OBJECTS) $(netloc_to_topology_LDADD) $(LIBS) mostlyclean-compile: -rm -f *.$(OBJEXT) distclean-compile: -rm -f *.tab.c @AMDEP_TRUE@@am__include@ @am__quote@./$(DEPDIR)/netloc_to_topology-netloc_to_topology.Po@am__quote@ .c.o: @am__fastdepCC_TRUE@ $(AM_V_CC)$(COMPILE) -MT $@ -MD -MP -MF $(DEPDIR)/$*.Tpo -c -o $@ $< @am__fastdepCC_TRUE@ $(AM_V_at)$(am__mv) $(DEPDIR)/$*.Tpo $(DEPDIR)/$*.Po @AMDEP_TRUE@@am__fastdepCC_FALSE@ $(AM_V_CC)source='$<' object='$@' libtool=no @AMDEPBACKSLASH@ @AMDEP_TRUE@@am__fastdepCC_FALSE@ DEPDIR=$(DEPDIR) $(CCDEPMODE) $(depcomp) @AMDEPBACKSLASH@ @am__fastdepCC_FALSE@ $(AM_V_CC@am__nodep@)$(COMPILE) -c -o $@ $< .c.obj: @am__fastdepCC_TRUE@ $(AM_V_CC)$(COMPILE) -MT $@ -MD -MP -MF $(DEPDIR)/$*.Tpo -c -o $@ `$(CYGPATH_W) '$<'` 
@am__fastdepCC_TRUE@ $(AM_V_at)$(am__mv) $(DEPDIR)/$*.Tpo $(DEPDIR)/$*.Po @AMDEP_TRUE@@am__fastdepCC_FALSE@ $(AM_V_CC)source='$<' object='$@' libtool=no @AMDEPBACKSLASH@ @AMDEP_TRUE@@am__fastdepCC_FALSE@ DEPDIR=$(DEPDIR) $(CCDEPMODE) $(depcomp) @AMDEPBACKSLASH@ @am__fastdepCC_FALSE@ $(AM_V_CC@am__nodep@)$(COMPILE) -c -o $@ `$(CYGPATH_W) '$<'` .c.lo: @am__fastdepCC_TRUE@ $(AM_V_CC)$(LTCOMPILE) -MT $@ -MD -MP -MF $(DEPDIR)/$*.Tpo -c -o $@ $< @am__fastdepCC_TRUE@ $(AM_V_at)$(am__mv) $(DEPDIR)/$*.Tpo $(DEPDIR)/$*.Plo @AMDEP_TRUE@@am__fastdepCC_FALSE@ $(AM_V_CC)source='$<' object='$@' libtool=yes @AMDEPBACKSLASH@ @AMDEP_TRUE@@am__fastdepCC_FALSE@ DEPDIR=$(DEPDIR) $(CCDEPMODE) $(depcomp) @AMDEPBACKSLASH@ @am__fastdepCC_FALSE@ $(AM_V_CC@am__nodep@)$(LTCOMPILE) -c -o $@ $< netloc_to_topology-netloc_to_topology.o: netloc_to_topology.c @am__fastdepCC_TRUE@ $(AM_V_CC)$(CC) $(DEFS) $(DEFAULT_INCLUDES) $(INCLUDES) $(netloc_to_topology_CPPFLAGS) $(CPPFLAGS) $(AM_CFLAGS) $(CFLAGS) -MT netloc_to_topology-netloc_to_topology.o -MD -MP -MF $(DEPDIR)/netloc_to_topology-netloc_to_topology.Tpo -c -o netloc_to_topology-netloc_to_topology.o `test -f 'netloc_to_topology.c' || echo '$(srcdir)/'`netloc_to_topology.c @am__fastdepCC_TRUE@ $(AM_V_at)$(am__mv) $(DEPDIR)/netloc_to_topology-netloc_to_topology.Tpo $(DEPDIR)/netloc_to_topology-netloc_to_topology.Po @AMDEP_TRUE@@am__fastdepCC_FALSE@ $(AM_V_CC)source='netloc_to_topology.c' object='netloc_to_topology-netloc_to_topology.o' libtool=no @AMDEPBACKSLASH@ @AMDEP_TRUE@@am__fastdepCC_FALSE@ DEPDIR=$(DEPDIR) $(CCDEPMODE) $(depcomp) @AMDEPBACKSLASH@ @am__fastdepCC_FALSE@ $(AM_V_CC@am__nodep@)$(CC) $(DEFS) $(DEFAULT_INCLUDES) $(INCLUDES) $(netloc_to_topology_CPPFLAGS) $(CPPFLAGS) $(AM_CFLAGS) $(CFLAGS) -c -o netloc_to_topology-netloc_to_topology.o `test -f 'netloc_to_topology.c' || echo '$(srcdir)/'`netloc_to_topology.c netloc_to_topology-netloc_to_topology.obj: netloc_to_topology.c @am__fastdepCC_TRUE@ $(AM_V_CC)$(CC) $(DEFS) $(DEFAULT_INCLUDES) 
$(INCLUDES) $(netloc_to_topology_CPPFLAGS) $(CPPFLAGS) $(AM_CFLAGS) $(CFLAGS) -MT netloc_to_topology-netloc_to_topology.obj -MD -MP -MF $(DEPDIR)/netloc_to_topology-netloc_to_topology.Tpo -c -o netloc_to_topology-netloc_to_topology.obj `if test -f 'netloc_to_topology.c'; then $(CYGPATH_W) 'netloc_to_topology.c'; else $(CYGPATH_W) '$(srcdir)/netloc_to_topology.c'; fi` @am__fastdepCC_TRUE@ $(AM_V_at)$(am__mv) $(DEPDIR)/netloc_to_topology-netloc_to_topology.Tpo $(DEPDIR)/netloc_to_topology-netloc_to_topology.Po @AMDEP_TRUE@@am__fastdepCC_FALSE@ $(AM_V_CC)source='netloc_to_topology.c' object='netloc_to_topology-netloc_to_topology.obj' libtool=no @AMDEPBACKSLASH@ @AMDEP_TRUE@@am__fastdepCC_FALSE@ DEPDIR=$(DEPDIR) $(CCDEPMODE) $(depcomp) @AMDEPBACKSLASH@ @am__fastdepCC_FALSE@ $(AM_V_CC@am__nodep@)$(CC) $(DEFS) $(DEFAULT_INCLUDES) $(INCLUDES) $(netloc_to_topology_CPPFLAGS) $(CPPFLAGS) $(AM_CFLAGS) $(CFLAGS) -c -o netloc_to_topology-netloc_to_topology.obj `if test -f 'netloc_to_topology.c'; then $(CYGPATH_W) 'netloc_to_topology.c'; else $(CYGPATH_W) '$(srcdir)/netloc_to_topology.c'; fi` mostlyclean-libtool: -rm -f *.lo clean-libtool: -rm -rf .libs _libs ID: $(am__tagged_files) $(am__define_uniq_tagged_files); mkid -fID $$unique tags: tags-am TAGS: tags tags-am: $(TAGS_DEPENDENCIES) $(am__tagged_files) set x; \ here=`pwd`; \ $(am__define_uniq_tagged_files); \ shift; \ if test -z "$(ETAGS_ARGS)$$*$$unique"; then :; else \ test -n "$$unique" || unique=$$empty_fix; \ if test $$# -gt 0; then \ $(ETAGS) $(ETAGSFLAGS) $(AM_ETAGSFLAGS) $(ETAGS_ARGS) \ "$$@" $$unique; \ else \ $(ETAGS) $(ETAGSFLAGS) $(AM_ETAGSFLAGS) $(ETAGS_ARGS) \ $$unique; \ fi; \ fi ctags: ctags-am CTAGS: ctags ctags-am: $(TAGS_DEPENDENCIES) $(am__tagged_files) $(am__define_uniq_tagged_files); \ test -z "$(CTAGS_ARGS)$$unique" \ || $(CTAGS) $(CTAGSFLAGS) $(AM_CTAGSFLAGS) $(CTAGS_ARGS) \ $$unique GTAGS: here=`$(am__cd) $(top_builddir) && pwd` \ && $(am__cd) $(top_srcdir) \ && gtags -i $(GTAGS_ARGS) "$$here" 
cscopelist: cscopelist-am cscopelist-am: $(am__tagged_files) list='$(am__tagged_files)'; \ case "$(srcdir)" in \ [\\/]* | ?:[\\/]*) sdir="$(srcdir)" ;; \ *) sdir=$(subdir)/$(srcdir) ;; \ esac; \ for i in $$list; do \ if test -f "$$i"; then \ echo "$(subdir)/$$i"; \ else \ echo "$$sdir/$$i"; \ fi; \ done >> $(top_builddir)/cscope.files distclean-tags: -rm -f TAGS ID GTAGS GRTAGS GSYMS GPATH tags distdir: $(DISTFILES) @srcdirstrip=`echo "$(srcdir)" | sed 's/[].[^$$\\*]/\\\\&/g'`; \ topsrcdirstrip=`echo "$(top_srcdir)" | sed 's/[].[^$$\\*]/\\\\&/g'`; \ list='$(DISTFILES)'; \ dist_files=`for file in $$list; do echo $$file; done | \ sed -e "s|^$$srcdirstrip/||;t" \ -e "s|^$$topsrcdirstrip/|$(top_builddir)/|;t"`; \ case $$dist_files in \ */*) $(MKDIR_P) `echo "$$dist_files" | \ sed '/\//!d;s|^|$(distdir)/|;s,/[^/]*$$,,' | \ sort -u` ;; \ esac; \ for file in $$dist_files; do \ if test -f $$file || test -d $$file; then d=.; else d=$(srcdir); fi; \ if test -d $$d/$$file; then \ dir=`echo "/$$file" | sed -e 's,/[^/]*$$,,'`; \ if test -d "$(distdir)/$$file"; then \ find "$(distdir)/$$file" -type d ! -perm -700 -exec chmod u+rwx {} \;; \ fi; \ if test -d $(srcdir)/$$file && test $$d != $(srcdir); then \ cp -fpR $(srcdir)/$$file "$(distdir)$$dir" || exit 1; \ find "$(distdir)/$$file" -type d ! 
-perm -700 -exec chmod u+rwx {} \;; \ fi; \ cp -fpR $$d/$$file "$(distdir)$$dir" || exit 1; \ else \ test -f "$(distdir)/$$file" \ || cp -p $$d/$$file "$(distdir)/$$file" \ || exit 1; \ fi; \ done check-am: all-am check: check-am all-am: Makefile $(PROGRAMS) installdirs: for dir in "$(DESTDIR)$(bindir)"; do \ test -z "$$dir" || $(MKDIR_P) "$$dir"; \ done install: install-am install-exec: install-exec-am install-data: install-data-am uninstall: uninstall-am install-am: all-am @$(MAKE) $(AM_MAKEFLAGS) install-exec-am install-data-am installcheck: installcheck-am install-strip: if test -z '$(STRIP)'; then \ $(MAKE) $(AM_MAKEFLAGS) INSTALL_PROGRAM="$(INSTALL_STRIP_PROGRAM)" \ install_sh_PROGRAM="$(INSTALL_STRIP_PROGRAM)" INSTALL_STRIP_FLAG=-s \ install; \ else \ $(MAKE) $(AM_MAKEFLAGS) INSTALL_PROGRAM="$(INSTALL_STRIP_PROGRAM)" \ install_sh_PROGRAM="$(INSTALL_STRIP_PROGRAM)" INSTALL_STRIP_FLAG=-s \ "INSTALL_PROGRAM_ENV=STRIPPROG='$(STRIP)'" install; \ fi mostlyclean-generic: clean-generic: distclean-generic: -test -z "$(CONFIG_CLEAN_FILES)" || rm -f $(CONFIG_CLEAN_FILES) -test . = "$(srcdir)" || test -z "$(CONFIG_CLEAN_VPATH_FILES)" || rm -f $(CONFIG_CLEAN_VPATH_FILES) maintainer-clean-generic: @echo "This command is intended for maintainers to use" @echo "it deletes files that may require special tools to rebuild." 
clean: clean-am clean-am: clean-binPROGRAMS clean-generic clean-libtool mostlyclean-am distclean: distclean-am -rm -rf ./$(DEPDIR) -rm -f Makefile distclean-am: clean-am distclean-compile distclean-generic \ distclean-tags dvi: dvi-am dvi-am: html: html-am html-am: info: info-am info-am: install-data-am: install-dvi: install-dvi-am install-dvi-am: install-exec-am: install-binPROGRAMS install-html: install-html-am install-html-am: install-info: install-info-am install-info-am: install-man: install-pdf: install-pdf-am install-pdf-am: install-ps: install-ps-am install-ps-am: installcheck-am: maintainer-clean: maintainer-clean-am -rm -rf ./$(DEPDIR) -rm -f Makefile maintainer-clean-am: distclean-am maintainer-clean-generic mostlyclean: mostlyclean-am mostlyclean-am: mostlyclean-compile mostlyclean-generic \ mostlyclean-libtool pdf: pdf-am pdf-am: ps: ps-am ps-am: uninstall-am: uninstall-binPROGRAMS .MAKE: install-am install-strip .PHONY: CTAGS GTAGS TAGS all all-am check check-am clean \ clean-binPROGRAMS clean-generic clean-libtool cscopelist-am \ ctags ctags-am distclean distclean-compile distclean-generic \ distclean-libtool distclean-tags distdir dvi dvi-am html \ html-am info info-am install install-am install-binPROGRAMS \ install-data install-data-am install-dvi install-dvi-am \ install-exec install-exec-am install-html install-html-am \ install-info install-info-am install-man install-pdf \ install-pdf-am install-ps install-ps-am install-strip \ installcheck installcheck-am installdirs maintainer-clean \ maintainer-clean-generic mostlyclean mostlyclean-compile \ mostlyclean-generic mostlyclean-libtool pdf pdf-am ps ps-am \ tags tags-am uninstall uninstall-am uninstall-binPROGRAMS # Tell versions [3.59,3.63) of GNU make to not export all variables. # Otherwise a system limit (for SysV at least) may be exceeded. 
.NOEXPORT:

slurm-slurm-15-08-7-1/contribs/sgi/README.txt

Copyright (C) 2014 Silicon Graphics International Corp.
All rights reserved.

The SGI hypercube topology plugin for SLURM enables SLURM to understand
the hypercube topologies on some SGI ICE InfiniBand clusters. With this
understanding about where nodes are physically located in relation to
each other, SLURM can make better decisions about which sets of nodes to
allocate to jobs.

The plugin requires a properly set up topology.conf file. This is built
using the contribs/sgi/netloc_to_topology program, which in turn uses the
OpenMPI group's netloc and hwloc tools. Please execute the following steps:

1) Ensure that hwloc and netloc are installed on every node in your cluster.

2) Create a temporary directory in a shared filesystem available to each
   node in your cluster. In this example we'll call it
   /data/slurm/cluster_data/.

3) Create a subdirectory called hwloc, i.e. /data/slurm/cluster_data/hwloc/.

4) Create the following script in /data/slurm/cluster_data/create.sh

   #!/bin/sh
   HN=`hostname`
   hwloc-ls /data/slurm/cluster_data/hwloc/$HN.xml

5) Run the script on each compute node

   $ cexec /data/slurm/cluster_data/create.sh

6) Ensure that hwloc output files got put into
   /data/slurm/cluster_data/hwloc/. If you have any nodes down right now,
   their missing data may cause you problems later.

7) Run netloc discovery on the primary InfiniBand fabric

   $ cd /data/slurm/cluster_data/
   $ netloc_ib_gather_raw --out-dir ib-raw --sudo --force-subnet mlx4_0:1
   $ netloc_ib_extract_dats

8) Run netloc_to_topology to turn the netloc and hwloc data into a SLURM
   topology.conf.

   $ netloc_to_topology -d /data/slurm/cluster_data/

   netloc_to_topology assumes an InfiniBand fabric ID of
   "fe80:0000:0000:0000". If you have a different fabric ID, then you'll
   need to specify it with the "-f" option. You can find the fabric ID
   with `ibv_devinfo -v`. E.g.
   $ ibv_devinfo -v

Look down the results and for the HCA and port that you want to key off
of, look at its GID field. E.g.

   GID[ 0]:   fec0:0000:0000:0000:f452:1403:0047:36d1

Use the first four couplets:

   $ netloc_to_topology -d /data/slurm/cluster_data/ -f fec0:0000:0000:0000

9) Copy the resulting topology.conf file into SLURM's location for
   configuration files. The following command copies it to the compute
   nodes. Make sure to copy it to the node(s) running slurmctld as well.

   $ cpush topology.conf /etc/slurm/topology.conf

10) Restart SLURM

slurm-slurm-15-08-7-1/contribs/sgi/netloc_to_topology.c

/*****************************************************************************
 *  Copyright (C) 2014 Silicon Graphics International Corp.
 *  All rights reserved.
 ****************************************************************************/
#if HAVE_CONFIG_H
#  include "config.h"
#endif

#define _GNU_SOURCE
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <hwloc.h>

#ifdef HAVE_NETLOC_NOSUB
#  include <netloc_map.h>
#else
#  include <netloc/netloc_map.h>
#endif

typedef struct node_group {
	char *node_name;
	int node_name_len;
	int cpus;
	int memory;
	int cores_per_socket;
	int threads_per_core;
} node_group;

typedef struct switch_name {
	const char *sw_name;
	unsigned long physical_id;
} switch_name;

// Parse the command line arguments and update variables appropriately
static int parse_args(int argc, char ** argv);
// Check the directory parameters to make sure they are formatted correctly
static int check_directory_parameters();
// initialize NetLoc topology to be used to lookup NetLoc information
static netloc_topology_t setup_topology(char *data_uri);
// initialize NetLoc map to be used to lookup HwLoc information
static netloc_map_t setup_map(char *data_uri);
// Generate a topology.conf file based on NetLoc topology and save it to file
static int generate_topology_file(netloc_topology_t *topology,
				  netloc_map_t *map);
// Loop through and parse all of the switches and
// their connections
static int loop_through_switches(netloc_topology_t *topology,
				 netloc_map_t *map,
				 netloc_dt_lookup_table_t *switches);
// Loop through and parse all of the edges for a switch
static int loop_through_edges(netloc_topology_t *topology, netloc_map_t *map,
			      netloc_node_t *node, const char *src_name,
			      FILE *f_temp);
// Add a switch connection and its link speed to the switch list
static int add_switch_connection(netloc_edge_t **edges, int idx, int num_edges,
				 const char *src_name, const char *dst_name,
				 char *switch_str);
// calculate the link speed for an edge between two switches
static int calculate_link_speed(netloc_edge_t *edge);
// Add a node connection to the node list
static int add_node_connection( netloc_topology_t *topology, netloc_map_t *map,
				netloc_edge_t *edge, char *node_str );
// Find a node group that matches the specifications given
static int find_node_group( int cpus, int cores_per_socket,
			    int threads_per_core, int memory,
			    const char *dst_name);
// Make a new node group in the table and fill in information
static void make_new_node_group( int cpus, int cores_per_socket,
				 int threads_per_core, int memory,
				 const char *dst_name);
// Save Topology data of network to topology.conf file
static int save_topology_data_to_file();
// Gets the name and the hw_loc topology for a NetLoc node
static int get_node_name_and_topology(netloc_topology_t *topology,
				      netloc_map_t *map, netloc_node_t *node,
				      const char **name,
				      hwloc_topology_t *hw_topo);
// Gets the name of a switch in the network
static int get_switch_name( netloc_topology_t *topology, netloc_map_t *map,
			    netloc_node_t *node, const char **name );
// Find a switch_name that matches the Physical ID given
static int find_switch_name( netloc_node_t *node );
// Compares switch_name with all of the names in the table
static int check_unique_switch_name( char *sw_name);
// Make a new switch_name entry in the table and fill in information
static int make_new_switch_name( netloc_topology_t *topology,
				 netloc_map_t *map, netloc_node_t *node,
				 const char **name );

#define NETLOC_DIR "netloc"

const char * ARG_OUTDIR        = "--outdir";
const char * ARG_SHORT_OUTDIR  = "-o";
const char * ARG_DATADIR       = "--datadir";
const char * ARG_SHORT_DATADIR = "-d";
const char * ARG_VERBOSE       = "--verbose";
const char * ARG_SHORT_VERBOSE = "-v";
const char * ARG_FABRIC        = "--fabric";
const char * ARG_SHORT_FABRIC  = "-f";
const char * ARG_HELP          = "--help";
const char * ARG_SHORT_HELP    = "-h";

static char * outdir = NULL;
static char * datadir = NULL;
static char * fabric = "fe80:0000:0000:0000";
static int verbose = 0;
static int max_nodes = 0, max_switches = 0;
static node_group *node_group_table = NULL;
static int node_group_cnt = 0;
static int node_groups_max = 32;
static switch_name **switch_name_table = NULL;
static int switch_name_cnt = 0;
static int switch_name_max = 256;
static char *file_location = NULL, *file_location_temp= NULL;

int main(int argc, char ** argv)
{
	int ret;
	netloc_topology_t topology;
	netloc_map_t map;

	// Parse the command line arguments and update variables appropriately
	if( 0 != parse_args(argc, argv) ) {
		printf( "Usage: %s\n"
			"\t%s|%s \n"
			"\t[%s|%s ]\n"
			"\t[%s|%s ]\n"
			"\t[%s|%s] [--help|-h]\n",
			argv[0],
			ARG_DATADIR, ARG_SHORT_DATADIR,
			ARG_OUTDIR, ARG_SHORT_OUTDIR,
			ARG_FABRIC, ARG_SHORT_FABRIC,
			ARG_VERBOSE, ARG_SHORT_VERBOSE);
		printf(" Default %-10s = current working directory\n",
		       ARG_OUTDIR);
		return NETLOC_ERROR;
	}

	asprintf(&file_location, "%stopology.conf", outdir);
	asprintf(&file_location_temp, "%s.temp", file_location);

	// initialize NetLoc topology to be used to lookup NetLoc information
	topology = setup_topology(datadir);
	(verbose) ? printf("Successfully Created Network Topology \n") : 0 ;

	// initialize NetLoc map to be used to lookup HwLoc information
	map = setup_map(datadir);
	(verbose) ?
		printf("Successfully Created Network Map\n") : 0 ;

	node_group_table = malloc( sizeof(node_group) * node_groups_max );
	switch_name_table = malloc( sizeof(switch_name *) * switch_name_max );

	// Generate a topology.conf file based on NetLoc topology and save to file
	ret = generate_topology_file(&topology, &map);
	if( NETLOC_SUCCESS == ret )
		printf("\nDone generating topology.conf file from NetLoc data\n");
	else
		printf("Error: Couldn't Create topology.conf file from NetLoc data\n");

	netloc_detach(topology);
	netloc_map_destroy(map);

	return ret;
}

// Parse the command line arguments and update variables appropriately
static int parse_args(int argc, char ** argv)
{
	int i, ret = NETLOC_SUCCESS;

	for(i = 1; i < argc; ++i ) {
		// --outdir
		if( ( 0 == strncmp(ARG_OUTDIR, argv[i], strlen(ARG_OUTDIR)) ) ||
		    (0 == strncmp(ARG_SHORT_OUTDIR, argv[i],
				  strlen(ARG_SHORT_OUTDIR))) ) {
			++i;
			if( i >= argc ) {
				fprintf(stderr,
					"Error: Must supply an argument to %s\n",
					ARG_OUTDIR );
				return NETLOC_ERROR;
			}
			outdir = strdup(argv[i]);
		}
		// --datadir (directory with hwloc and netloc input data directories)
		else if( 0 == strncmp(ARG_DATADIR, argv[i], strlen(ARG_DATADIR)) ||
			 0 == strncmp(ARG_SHORT_DATADIR, argv[i],
				      strlen(ARG_SHORT_DATADIR)) ) {
			++i;
			if( i >= argc ) {
				fprintf(stderr,
					"Error: Must supply an argument to %s "
					"(input data directory)\n", ARG_DATADIR );
				return NETLOC_ERROR;
			}
			datadir = strdup(argv[i]);
		}
		// verbose output
		else if( 0 == strncmp(ARG_VERBOSE, argv[i], strlen(ARG_VERBOSE)) ||
			 (0 == strncmp(ARG_SHORT_VERBOSE, argv[i],
				       strlen(ARG_SHORT_VERBOSE)))){
			verbose = 1;
		}
		// Help
		else if( 0 == strncmp(ARG_HELP, argv[i], strlen(ARG_HELP)) ||
			 0 == strncmp(ARG_SHORT_HELP, argv[i],
				      strlen(ARG_SHORT_HELP)) ) {
			return NETLOC_ERROR;
		}
		else if (0 == strcmp(ARG_FABRIC, argv[i]) ||
			 0 == strcmp(ARG_SHORT_FABRIC, argv[i])) {
			i++;
			if (i >= argc) {
				fprintf(stderr,
					"Error: Must supply an argument to %s (fabric ID)\n",
					ARG_FABRIC);
				return NETLOC_ERROR;
			}
			fabric = strdup(argv[i]);
		}
		// Unknown options throw warnings
		else {
fprintf(stderr, "Warning: Unknown argument of <%s>\n", argv[i]); return NETLOC_ERROR; } } // Check the directory parameters to make sure they are formatted correctly ret = check_directory_parameters(); return ret; } // Check the directory parameters to make sure they are formatted correctly static int check_directory_parameters() { int ret = NETLOC_SUCCESS; // Check Output Directory Parameter if( NULL == outdir || strlen(outdir) <= 0 ) { if( NULL != outdir ) free(outdir); // Default: current working directory outdir = strdup("."); } if( '/' != outdir[strlen(outdir)-1] ) { outdir = (char *)realloc(outdir, sizeof(char) * (strlen(outdir)+1)); outdir[strlen(outdir)+1] = '\0'; outdir[strlen(outdir)] = '/'; } // Check Input Data Directory Parameter if( NULL == datadir || strlen(datadir) <= 0 ) { fprintf(stderr, "Error: Must supply an argument to %s|%s (input data" " directory)\n", ARG_DATADIR, ARG_SHORT_DATADIR ); return NETLOC_ERROR; } else if( '/' != datadir[strlen(datadir)-1] ) { datadir = (char *)realloc(datadir, sizeof(char) * (strlen(datadir)+1)); datadir[strlen(datadir)+1] = '\0'; datadir[strlen(datadir)] = '/'; } // Display Parsed Arguments (verbose) ? printf(" Input Data Directory: %s\n", datadir) : 0 ; (verbose) ? 
printf(" Output Directory : %s\n", outdir) : 0 ; return ret; } // initialize NetLoc topology to be used to lookup NetLoc information static netloc_topology_t setup_topology(char *data_uri) { int ret; netloc_topology_t topology; netloc_network_t *tmp_network = NULL; char *search_uri = NULL; // Setup a Network connection tmp_network = netloc_dt_network_t_construct(); tmp_network->network_type = NETLOC_NETWORK_TYPE_INFINIBAND; tmp_network->subnet_id = strdup(fabric); asprintf(&search_uri, "file://%s%s", data_uri, NETLOC_DIR); ret = netloc_find_network(search_uri, tmp_network); free(search_uri); if (NETLOC_SUCCESS != ret) { fprintf(stderr, "Error: netloc_find_network return error (%d)\n" "\tConsider passing a different IB fabric ID with -f\n", ret); exit(ret); } // Attach to the topology context ret = netloc_attach(&topology, *tmp_network); netloc_dt_network_t_destruct(tmp_network); if( NETLOC_SUCCESS != ret ) { fprintf(stderr, "Error: netloc_attach returned an error (%d)\n", ret); exit(ret); } return topology; } // initialize NetLoc map to be used to lookup HwLoc information static netloc_map_t setup_map(char *data_uri) { int err; netloc_map_t map; char *path; err = netloc_map_create(&map); if (err) { fprintf(stderr, "Failed to create the map\n"); exit(EXIT_FAILURE); } asprintf(&path, "%shwloc", data_uri); err = netloc_map_load_hwloc_data(map, path); free(path); if (err) { fprintf(stderr, "Failed to load hwloc data\n"); exit(EXIT_FAILURE); } asprintf(&path, "file://%s%s", data_uri, NETLOC_DIR); err = netloc_map_load_netloc_data(map, path); free(path); if (err) { fprintf(stderr, "Failed to load netloc data\n"); exit(EXIT_FAILURE); } err = netloc_map_build(map, 0); if (err) { fprintf(stderr, "Failed to build map data\n"); exit(EXIT_FAILURE); } return map; } // Generate a topology.conf file based on NetLoc topology and save it to file static int generate_topology_file(netloc_topology_t *topology, netloc_map_t *map) { int ret; netloc_dt_lookup_table_t switches = NULL; // 
Get all of the switches ret = netloc_get_all_switch_nodes(*topology, &switches); if( NETLOC_SUCCESS != ret ) { fprintf(stderr, "Error: get_all_switch_nodes returned %d\n", ret); return ret; } // Loop through and parse all of the switches and their connections ret = loop_through_switches(topology, map, &switches); if( NETLOC_SUCCESS != ret ) { fprintf(stderr, "Error: loop_through_switches returned %d\n", ret); return ret; } // Save Topology data of network to topology.conf file save_topology_data_to_file(); // Cleanup netloc_lookup_table_destroy(switches); free(switches); free(file_location); free(file_location_temp); int i; for ( i = 0; i < node_group_cnt; i++) free(node_group_table[i].node_name); free(node_group_table); for ( i = 0; i < switch_name_cnt; i++) free(switch_name_table[i]); free(switch_name_table); return NETLOC_SUCCESS; } // Loop through and parse all of the switches and their connections static int loop_through_switches(netloc_topology_t *topology, netloc_map_t *map, netloc_dt_lookup_table_t *switches) { int ret; netloc_dt_lookup_table_iterator_t hti = NULL; FILE *f_temp = fopen(file_location_temp, "w"); /* Loop through all of the switches */ hti = netloc_dt_lookup_table_iterator_t_construct(*switches); while (!netloc_lookup_table_iterator_at_end(hti)) { const char * key = netloc_lookup_table_iterator_next_key(hti); if (NULL == key) {break;} netloc_node_t *node = (netloc_node_t *) netloc_lookup_table_access(*switches, key); if (NETLOC_NODE_TYPE_SWITCH != node->node_type) { fprintf(stderr, "Error: Returned unexpected node: %s\n", netloc_pretty_print_node_t(node)); return NETLOC_ERROR; } // Get the Switch Name const char *src_name; ret = get_switch_name(topology, map, node, &src_name); if (NETLOC_SUCCESS != ret) { if (verbose) { fprintf(stderr, "Did not find data for any nodes attached to switch %s\n", netloc_pretty_print_node_t(node)); } continue; } // Loop through and parse all of the edges for a switch loop_through_edges(topology, map, node, 
src_name, f_temp); } // Cleanup fclose(f_temp); netloc_dt_lookup_table_iterator_t_destruct(hti); return NETLOC_SUCCESS; } // Loop through and parse all of the edges for a switch static int loop_through_edges(netloc_topology_t *topology, netloc_map_t *map, netloc_node_t *node, const char *src_name, FILE *f_temp) { int ret, i, num_edges, nodes_cnt = 0, switches_cnt = 0; netloc_edge_t **edges = NULL; size_t slen = 4096; char *switch_str = malloc(sizeof(char) * slen); char *node_str = malloc(sizeof(char) * slen); strcpy(switch_str, ""); strcpy(node_str, ""); // Get all of the edges ret = netloc_get_all_edges(*topology, node, &num_edges, &edges); if (NETLOC_SUCCESS != ret) { fprintf(stderr, "Error: get_all_edges_by_id returned %d for" " node %s\n", ret, node->description); return ret; } (verbose) ? printf("\nFound Switch: %s - %s which has %d edges \n", src_name, node->physical_id, num_edges) : 0; // Loop through all of the edges for (i = 0; i < num_edges; i++) { (verbose) ? printf("\tEdge %2d - Speed: %s, Width: %s - " , i, edges[i]->speed, edges[i]->width) : 0; if (NETLOC_NODE_TYPE_SWITCH == edges[i]->dest_node->node_type) { // get the dest_node name const char *dst_name; ret = get_switch_name( topology, map, edges[i]->dest_node, &dst_name); if (NETLOC_SUCCESS != ret) { if (verbose) { fprintf(stderr, "Did not find data for any nodes attached to switch %s\n", netloc_pretty_print_node_t(node)); } continue; } // Add name and link_speed to switch_str ret = add_switch_connection(edges, i, num_edges, src_name, dst_name, switch_str); if (NETLOC_SUCCESS == ret) {switches_cnt++;} } else if (NETLOC_NODE_TYPE_HOST == edges[i]->dest_node->node_type) { // if edge goes to a node, add name to node_str and put in a group ret = add_node_connection(topology, map, edges[i], node_str); if (NETLOC_SUCCESS == ret) {nodes_cnt++;} } else { fprintf(stderr, "Error: Returned unexpected node: %s\n", netloc_pretty_print_node_t(edges[i]->dest_node)); return NETLOC_ERROR; } } // update maximum 
totals needed later max_switches = MAX(switches_cnt, max_switches); max_nodes = MAX(max_nodes, nodes_cnt); // Erase any trailing commas assert(0 < strlen(switch_str) && slen > strlen(switch_str)); assert(0 < strlen(node_str) && slen > strlen(node_str)); switch_str[strlen(switch_str) - 1] = '\0'; node_str[strlen(node_str) - 1] = '\0'; // combine strings together and output to tolopogy file fprintf(f_temp, "SwitchName=%s Switches=%s Nodes=%s\n", src_name, switch_str, node_str); free(switch_str); free(node_str); return NETLOC_SUCCESS; } // Add a switch connection and its link speed to the switch list static int add_switch_connection(netloc_edge_t **edges, int idx, int num_edges, const char *src_name, const char *dst_name, char *switch_str) { netloc_node_t* dn = edges[idx]->dest_node; char * pch = strstr(switch_str, dst_name); int i, total_link_speed = 0; unsigned long current_ID = dn->physical_id_int; // Print out node information (verbose) ? printf("Dst:%9s - (%s - %s) [%20s][%18lu]/[%7s] - (%d edges)\n", dst_name, netloc_decode_network_type(dn->network_type), netloc_decode_node_type(dn->node_type), dn->physical_id, dn->physical_id_int, dn->logical_id, dn->num_edges) : 0; // Check to see if this switch is already on the switch connection list if (pch != NULL) {return NETLOC_ERROR;} // Total up the link speed for all the connections between the two switches for (i = idx; i < num_edges; i++) { // If the IDs match then the connections go to the same switch if (edges[i]->dest_node->physical_id_int == current_ID) { int link_speed = calculate_link_speed(edges[i]); if (0 >= link_speed) { fprintf(stderr, "\nError: invalid connection width %s or " "speed %s between %s and %s\n", edges[idx]->width, edges[idx]->speed, src_name, dst_name); return NETLOC_ERROR; } total_link_speed += link_speed; } } // Put the switch and its link_speed on the switch string sprintf(switch_str, "%s%s-%d,", switch_str, dst_name, total_link_speed); return NETLOC_SUCCESS; } // calculate the link speed 
for an edge between two switches static int calculate_link_speed(netloc_edge_t *edge) { // calculate the link speed between the two switches int link_speed = atoi(edge->width); if (link_speed < 1 || (link_speed > 24 ) ){ return -1; } if ( strcasecmp(edge->speed, "SDR" ) == 0 ) link_speed *= 2; else if ( strcasecmp(edge->speed, "DDR" ) == 0 ) link_speed *= 4; else if ( strcasecmp(edge->speed, "QDR" ) == 0 ) link_speed *= 8; else if ( strcasecmp(edge->speed, "FDR-10" ) == 0 ) link_speed *= 10; else if ( strcasecmp(edge->speed, "FDR" ) == 0 ) link_speed *= 14; else if ( strcasecmp(edge->speed, "EDR" ) == 0 ) link_speed *= 25; else if ( strcasecmp(edge->speed, "HDR" ) == 0 ) link_speed *= 50; else{ return -1; } return link_speed; } // Add a node connection to the node list static int add_node_connection(netloc_topology_t *topology, netloc_map_t *map, netloc_edge_t *edge, char *node_str) { int ret; hwloc_topology_t dst_hw_topo; const char *dst_name; ret = get_node_name_and_topology(topology, map, edge->dest_node, &dst_name, &dst_hw_topo); if (NETLOC_SUCCESS != ret) {return NETLOC_ERROR;} (verbose) ? 
printf( "Dst:%9s - ", dst_name) : 0; sprintf(node_str, "%s%s,",node_str, dst_name); // get and calculate needed node information hwloc_obj_t hw_obj = hwloc_get_root_obj(dst_hw_topo); int cpus = hwloc_get_nbobjs_by_type(dst_hw_topo, HWLOC_OBJ_PU); int sockets = hwloc_get_nbobjs_by_type(dst_hw_topo, HWLOC_OBJ_SOCKET); int cores = hwloc_get_nbobjs_by_type(dst_hw_topo, HWLOC_OBJ_CORE); int cores_per_socket = cores / sockets; int threads_per_core = cpus / cores; int memory = hw_obj->memory.total_memory/1024/1024; // Find a node group that matches the specifications given ret = find_node_group(cpus, cores_per_socket, threads_per_core, memory, dst_name); // if couldn't find a matching node group, create a new one if (ret == node_group_cnt) { // Make a new node group in the table and fill in information make_new_node_group(cpus, cores_per_socket, threads_per_core, memory, dst_name); } netloc_node_t* dn = edge->dest_node; ( verbose ) ? printf("(%s - %s) [%20s][%18lu]/[%7s] - (%d edges)\n", netloc_decode_network_type(dn->network_type), netloc_decode_node_type(dn->node_type), dn->physical_id, dn->physical_id_int, dn->logical_id, dn->num_edges) : 0; return NETLOC_SUCCESS; } // Find a node group that matches the specifications given static int find_node_group( int cpus, int cores_per_socket, int threads_per_core, int memory, const char *dst_name) { int j; for ( j=0; j < node_group_cnt; j++){ // Check to make sure all of the numbers are the same if ((node_group_table[j].cpus == cpus) && (node_group_table[j].memory == memory) && (node_group_table[j].cores_per_socket == cores_per_socket) && (node_group_table[j].threads_per_core == threads_per_core)){ // Make node_name string bigger if there isn't enough space if ((strlen(node_group_table[j].node_name) + strlen(dst_name) + 3) >= node_group_table[j].node_name_len ){ node_group_table[j].node_name_len *= 2; char *temp_node_name = (char *) realloc( node_group_table[j].node_name, sizeof(char) * node_group_table[j].node_name_len); if 
(temp_node_name == NULL) { printf("Error (re)allocating memory - node_name string\n"); exit(-1); } node_group_table[j].node_name = temp_node_name; } sprintf(node_group_table[j].node_name, "%s,%s", node_group_table[j].node_name, dst_name); return j; } } return j; } // Make a new node group in the table and fill in information static void make_new_node_group( int cpus, int cores_per_socket, int threads_per_core, int memory, const char *dst_name) { node_group_table[node_group_cnt].node_name = malloc( sizeof(char) * 2048); node_group_table[node_group_cnt].node_name_len = 2048; strcpy(node_group_table[node_group_cnt].node_name, dst_name); node_group_table[node_group_cnt].cpus = cpus; node_group_table[node_group_cnt].memory = memory; node_group_table[node_group_cnt].cores_per_socket = cores_per_socket; node_group_table[node_group_cnt].threads_per_core = threads_per_core; node_group_cnt++; // if there aren't any more empty groups, make new ones if ( node_group_cnt >= node_groups_max){ node_groups_max *= 2; node_group *temp_node_group = realloc(node_group_table, sizeof(node_group) * node_groups_max); if ( temp_node_group == NULL){ printf("Error (re)allocating memory for more node groups"); exit(-1); } node_group_table = temp_node_group; } } // Save Topology data of network to topology.conf file int save_topology_data_to_file() { int j; // open up files to save data to topology.conf FILE *f = fopen(file_location, "w"); FILE *f_temp = fopen(file_location_temp, "r"); if ( (f == NULL) || (f_temp == NULL) ){ printf("Error opening file!\n"); exit(1); } // print hypercube topology configuration information for reference fprintf(f,"#############################################################" "#####\n# SLURM's network topology configuration file for use with the" " topology/hypercube plugin\n#########################################" "#########################\n# Hypcube topology information:\n# Maximum " "Number of Dimensions: %d \n# Maximum Number of Nodes per Switch: %d\n" 
"\n##################################################################\n" ,max_switches, max_nodes); /* * Print out compute nodes info and partitions nodes list for slurm.conf * in case the user wants to use this tool to fill in their node list for * that config file. */ fprintf(f, "# Compute Nodes information for slurm.conf:\n"); for ( j=0; j < node_group_cnt; j++){ fprintf(f,"# NodeName=%s CPUs=%d RealMemory=%d CoresPerSocket=%d " "ThreadsPerCore=%d State=UNKNOWN\n", node_group_table[j].node_name, node_group_table[j].cpus, node_group_table[j].memory, node_group_table[j].cores_per_socket, node_group_table[j].threads_per_core); } fprintf(f,"\n###########################################################" "#######\n# Partition nodes list for slurm.conf: \n" "# Nodes=" ); for ( j=0; j < node_group_cnt-1; j++){ fprintf(f, "%s,", node_group_table[j].node_name ); } fprintf(f, "%s \n", node_group_table[j].node_name ); // copy switch information from temp file to topology.conf fprintf(f, "\n#########################################################" "#########\n# Switch Hypercube Topology Information: \n"); char ch; while ( ( ch = fgetc(f_temp) ) != EOF ) fputc(ch, f); // Cleanup fclose(f); fclose(f_temp); remove(file_location_temp); return NETLOC_SUCCESS; } // Gets the name and the hw_loc topology for a NetLoc node static int get_node_name_and_topology( netloc_topology_t *topology, netloc_map_t *map, netloc_node_t *node, const char **name, hwloc_topology_t *hw_topo) { netloc_map_port_t port = NULL; hwloc_obj_t hw_obj = NULL; netloc_map_server_t server = NULL; int ret; ret = netloc_map_netloc2port(*map, *topology, node, NULL, &port); if( NETLOC_SUCCESS != ret ) { if (verbose) { printf( "\n Error: netloc_map_netloc2port could not find" " port info for %s\n", netloc_pretty_print_node_t(node) ); } return ret; } ret = netloc_map_port2hwloc(port, hw_topo, &hw_obj); if( NETLOC_SUCCESS != ret ) { fprintf(stderr, "Error: netloc_map_port2hwloc returned an error"); return ret; } ret = 
netloc_map_hwloc2server(*map, *hw_topo, &server); if( NETLOC_SUCCESS != ret ) { fprintf(stderr, "Error: netloc_map_hwloc2server returned an error"); return ret; } ret = netloc_map_server2name(server, name); if( NETLOC_SUCCESS != ret ) { fprintf(stderr, "Error: netloc_map_server2name returned an error"); return ret; } return NETLOC_SUCCESS; } // Gets the name of a switch in the network static int get_switch_name(netloc_topology_t *topology, netloc_map_t *map, netloc_node_t *node, const char **name) { // Find a switch_name that matches the Physical ID given int ret = find_switch_name(node); // If there already a switch_name assigned to the physical ID if (ret != switch_name_cnt) { *name = switch_name_table[ret]->sw_name; } // Else if couldn't find a matching switch_name create a new one else{ // Make a switch_name entry in the table and fill in information ret = make_new_switch_name(topology, map, node, name); if (NETLOC_SUCCESS != ret) {return ret;} switch_name *sw_name_entry = malloc(sizeof(switch_name)); sw_name_entry->sw_name = *name; sw_name_entry->physical_id = node->physical_id_int; switch_name_table[switch_name_cnt] = sw_name_entry; switch_name_cnt++; // If no more room for more switch_names, then make more space if (switch_name_cnt == switch_name_max) { switch_name_max *= 2; switch_name **temp_switch_name_table = realloc( switch_name_table, sizeof(switch_name) * switch_name_max); if (temp_switch_name_table == NULL){ printf("Error (re)allocating memory for more switch_names"); exit(-1); } switch_name_table = temp_switch_name_table; } } return NETLOC_SUCCESS; } // Find a switch_name that matches the Physical ID given static int find_switch_name( netloc_node_t *node ) { int j; for ( j=0; j < switch_name_cnt; j++){ // Check to see if the numbers are the same if ( switch_name_table[j]->physical_id == node->physical_id_int ) { return j; } } return j; } // Compares switch_name with all of the names in the table static int check_unique_switch_name( char *sw_name) { 
int j; for ( j=0; j < switch_name_cnt; j++){ // Check to see if the names are the same if ( strcmp( switch_name_table[j]->sw_name, sw_name ) == 0 ) { break; } } // if the name already exists return 0, else return 1 if ( j < switch_name_cnt ) return NETLOC_ERROR; else return NETLOC_SUCCESS; } // Make a new switch_name entry in the table and fill in information static int make_new_switch_name(netloc_topology_t *topology, netloc_map_t *map, netloc_node_t *node, const char **name ) { int ret, i, num_edges; netloc_edge_t **edges = NULL; const char *node_name; //Get all of the edges ret = netloc_get_all_edges(*topology, node, &num_edges, &edges); if (NETLOC_SUCCESS != ret) { fprintf(stderr, "Error: netloc_get_all_edges returned %d for" " node %s\n", ret, netloc_pretty_print_node_t(node)); return ret; } // get the node name of the first host connected to the switch for (i = 0; i < num_edges; i++) { if (NETLOC_NODE_TYPE_HOST == edges[i]->dest_node->node_type) { hwloc_topology_t dst_hw_topo; ret = get_node_name_and_topology( topology, map, edges[i]->dest_node, &node_name, &dst_hw_topo); if (NETLOC_SUCCESS == ret) {break;} } } /* * If we couldn't find hwloc data for any host attached to the switch, * let's issue a warning but otherwise assume that the switch won't be * used */ if (num_edges == i) { if (verbose) { fprintf(stderr, "Skipping switch because no data was available for attached nodes:\n" "\t%s\n", netloc_pretty_print_node_t(node)); } return NETLOC_ERROR_EMPTY; } // Use the node name to create the switch name char * temp_node_name = strdup(node_name); char * temp_name = strtok (temp_node_name,"n"); char * sw_name; int switch_cnt = 0; asprintf( &sw_name, "%ss%d", temp_name, switch_cnt); // Check to see if the switch name is unique, change it if it isn't while (check_unique_switch_name(sw_name) == NETLOC_ERROR) { free(sw_name); switch_cnt++; asprintf( &sw_name, "%ss%d", temp_name, switch_cnt); } free(temp_node_name); *name = sw_name; return NETLOC_SUCCESS; } 
slurm-slurm-15-08-7-1/contribs/sjobexit/
slurm-slurm-15-08-7-1/contribs/sjobexit/Makefile.am
#
# Makefile for job exit code management scripts

AUTOMAKE_OPTIONS = foreign

bin_SCRIPTS = sjobexitmod

sjobexitmod:

_perldir=$(exec_prefix)`perl -e 'use Config; $$T=$$Config{installsitearch}; $$P=$$Config{installprefix}; $$P1="$$P/local"; $$T =~ s/$$P1//; $$T =~ s/$$P//; print $$T;'`

install-binSCRIPTS: $(bin_SCRIPTS)
	@$(NORMAL_INSTALL)
	test -z "$(DESTDIR)$(bindir)" || $(mkdir_p) "$(DESTDIR)$(bindir)"
	@list='$(bin_SCRIPTS)'; for p in $$list; do \
	  echo "sed 's%use lib .*%use lib qw(${_perldir});%' $(top_srcdir)/contribs/sjobexit/$$p.pl > $(DESTDIR)$(bindir)/$$p"; \
	  sed "s%use lib .*%use lib qw(${_perldir});%" $(top_srcdir)/contribs/sjobexit/$$p.pl >$(DESTDIR)$(bindir)/$$p; \
	  chmod 755 $(DESTDIR)$(bindir)/$$p;\
	done

uninstall-binSCRIPTS:
	@$(NORMAL_UNINSTALL)
	@list='$(bin_SCRIPTS)'; for p in $$list; do \
	  echo " rm -f '$(DESTDIR)$(bindir)/$$p'"; \
	  rm -f "$(DESTDIR)$(bindir)/$$p"; \
	done

clean:
slurm-slurm-15-08-7-1/contribs/sjobexit/Makefile.in
# Makefile.in generated by automake 1.14.1 from Makefile.am.
# @configure_input@

# Copyright (C) 1994-2013 Free Software Foundation, Inc.

# This Makefile.in is free software; the Free Software Foundation
# gives unlimited permission to copy and/or distribute it,
# with or without modifications, as long as this notice is preserved.

# This program is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY, to the extent permitted by law; without
# even the implied warranty of MERCHANTABILITY or FITNESS FOR A
# PARTICULAR PURPOSE.
@SET_MAKE@ # # Makefile for job exit code management scripts VPATH = @srcdir@ am__is_gnu_make = test -n '$(MAKEFILE_LIST)' && test -n '$(MAKELEVEL)' am__make_running_with_option = \ case $${target_option-} in \ ?) ;; \ *) echo "am__make_running_with_option: internal error: invalid" \ "target option '$${target_option-}' specified" >&2; \ exit 1;; \ esac; \ has_opt=no; \ sane_makeflags=$$MAKEFLAGS; \ if $(am__is_gnu_make); then \ sane_makeflags=$$MFLAGS; \ else \ case $$MAKEFLAGS in \ *\\[\ \ ]*) \ bs=\\; \ sane_makeflags=`printf '%s\n' "$$MAKEFLAGS" \ | sed "s/$$bs$$bs[$$bs $$bs ]*//g"`;; \ esac; \ fi; \ skip_next=no; \ strip_trailopt () \ { \ flg=`printf '%s\n' "$$flg" | sed "s/$$1.*$$//"`; \ }; \ for flg in $$sane_makeflags; do \ test $$skip_next = yes && { skip_next=no; continue; }; \ case $$flg in \ *=*|--*) continue;; \ -*I) strip_trailopt 'I'; skip_next=yes;; \ -*I?*) strip_trailopt 'I';; \ -*O) strip_trailopt 'O'; skip_next=yes;; \ -*O?*) strip_trailopt 'O';; \ -*l) strip_trailopt 'l'; skip_next=yes;; \ -*l?*) strip_trailopt 'l';; \ -[dEDm]) skip_next=yes;; \ -[JT]) skip_next=yes;; \ esac; \ case $$flg in \ *$$target_option*) has_opt=yes; break;; \ esac; \ done; \ test $$has_opt = yes am__make_dryrun = (target_option=n; $(am__make_running_with_option)) am__make_keepgoing = (target_option=k; $(am__make_running_with_option)) pkgdatadir = $(datadir)/@PACKAGE@ pkgincludedir = $(includedir)/@PACKAGE@ pkglibdir = $(libdir)/@PACKAGE@ pkglibexecdir = $(libexecdir)/@PACKAGE@ am__cd = CDPATH="$${ZSH_VERSION+.}$(PATH_SEPARATOR)" && cd install_sh_DATA = $(install_sh) -c -m 644 install_sh_PROGRAM = $(install_sh) -c install_sh_SCRIPT = $(install_sh) -c INSTALL_HEADER = $(INSTALL_DATA) transform = $(program_transform_name) NORMAL_INSTALL = : PRE_INSTALL = : POST_INSTALL = : NORMAL_UNINSTALL = : PRE_UNINSTALL = : POST_UNINSTALL = : build_triplet = @build@ host_triplet = @host@ target_triplet = @target@ subdir = contribs/sjobexit DIST_COMMON = $(srcdir)/Makefile.in 
$(srcdir)/Makefile.am ACLOCAL_M4 = $(top_srcdir)/aclocal.m4 am__aclocal_m4_deps = $(top_srcdir)/auxdir/ax_lib_hdf5.m4 \ $(top_srcdir)/auxdir/ax_pthread.m4 \ $(top_srcdir)/auxdir/libtool.m4 \ $(top_srcdir)/auxdir/ltoptions.m4 \ $(top_srcdir)/auxdir/ltsugar.m4 \ $(top_srcdir)/auxdir/ltversion.m4 \ $(top_srcdir)/auxdir/lt~obsolete.m4 \ $(top_srcdir)/auxdir/slurm.m4 \ $(top_srcdir)/auxdir/x_ac__system_configuration.m4 \ $(top_srcdir)/auxdir/x_ac_affinity.m4 \ $(top_srcdir)/auxdir/x_ac_aix.m4 \ $(top_srcdir)/auxdir/x_ac_blcr.m4 \ $(top_srcdir)/auxdir/x_ac_bluegene.m4 \ $(top_srcdir)/auxdir/x_ac_cflags.m4 \ $(top_srcdir)/auxdir/x_ac_cray.m4 \ $(top_srcdir)/auxdir/x_ac_curl.m4 \ $(top_srcdir)/auxdir/x_ac_databases.m4 \ $(top_srcdir)/auxdir/x_ac_debug.m4 \ $(top_srcdir)/auxdir/x_ac_dlfcn.m4 \ $(top_srcdir)/auxdir/x_ac_env.m4 \ $(top_srcdir)/auxdir/x_ac_freeipmi.m4 \ $(top_srcdir)/auxdir/x_ac_gpl_licensed.m4 \ $(top_srcdir)/auxdir/x_ac_hwloc.m4 \ $(top_srcdir)/auxdir/x_ac_iso.m4 \ $(top_srcdir)/auxdir/x_ac_json.m4 \ $(top_srcdir)/auxdir/x_ac_lua.m4 \ $(top_srcdir)/auxdir/x_ac_man2html.m4 \ $(top_srcdir)/auxdir/x_ac_munge.m4 \ $(top_srcdir)/auxdir/x_ac_ncurses.m4 \ $(top_srcdir)/auxdir/x_ac_netloc.m4 \ $(top_srcdir)/auxdir/x_ac_nrt.m4 \ $(top_srcdir)/auxdir/x_ac_ofed.m4 \ $(top_srcdir)/auxdir/x_ac_pam.m4 \ $(top_srcdir)/auxdir/x_ac_printf_null.m4 \ $(top_srcdir)/auxdir/x_ac_ptrace.m4 \ $(top_srcdir)/auxdir/x_ac_readline.m4 \ $(top_srcdir)/auxdir/x_ac_rrdtool.m4 \ $(top_srcdir)/auxdir/x_ac_setpgrp.m4 \ $(top_srcdir)/auxdir/x_ac_setproctitle.m4 \ $(top_srcdir)/auxdir/x_ac_sgi_job.m4 \ $(top_srcdir)/auxdir/x_ac_slurm_ssl.m4 \ $(top_srcdir)/auxdir/x_ac_sun_const.m4 \ $(top_srcdir)/configure.ac am__configure_deps = $(am__aclocal_m4_deps) $(CONFIGURE_DEPENDENCIES) \ $(ACLOCAL_M4) mkinstalldirs = $(install_sh) -d CONFIG_HEADER = $(top_builddir)/config.h $(top_builddir)/slurm/slurm.h CONFIG_CLEAN_FILES = CONFIG_CLEAN_VPATH_FILES = am__vpath_adj_setup = srcdirstrip=`echo "$(srcdir)" 
| sed 's|.|.|g'`; am__vpath_adj = case $$p in \ $(srcdir)/*) f=`echo "$$p" | sed "s|^$$srcdirstrip/||"`;; \ *) f=$$p;; \ esac; am__strip_dir = f=`echo $$p | sed -e 's|^.*/||'`; am__install_max = 40 am__nobase_strip_setup = \ srcdirstrip=`echo "$(srcdir)" | sed 's/[].[^$$\\*|]/\\\\&/g'` am__nobase_strip = \ for p in $$list; do echo "$$p"; done | sed -e "s|$$srcdirstrip/||" am__nobase_list = $(am__nobase_strip_setup); \ for p in $$list; do echo "$$p $$p"; done | \ sed "s| $$srcdirstrip/| |;"' / .*\//!s/ .*/ ./; s,\( .*\)/[^/]*$$,\1,' | \ $(AWK) 'BEGIN { files["."] = "" } { files[$$2] = files[$$2] " " $$1; \ if (++n[$$2] == $(am__install_max)) \ { print $$2, files[$$2]; n[$$2] = 0; files[$$2] = "" } } \ END { for (dir in files) print dir, files[dir] }' am__base_list = \ sed '$$!N;$$!N;$$!N;$$!N;$$!N;$$!N;$$!N;s/\n/ /g' | \ sed '$$!N;$$!N;$$!N;$$!N;s/\n/ /g' am__uninstall_files_from_dir = { \ test -z "$$files" \ || { test ! -d "$$dir" && test ! -f "$$dir" && test ! -r "$$dir"; } \ || { echo " ( cd '$$dir' && rm -f" $$files ")"; \ $(am__cd) "$$dir" && rm -f $$files; }; \ } am__installdirs = "$(DESTDIR)$(bindir)" SCRIPTS = $(bin_SCRIPTS) AM_V_P = $(am__v_P_@AM_V@) am__v_P_ = $(am__v_P_@AM_DEFAULT_V@) am__v_P_0 = false am__v_P_1 = : AM_V_GEN = $(am__v_GEN_@AM_V@) am__v_GEN_ = $(am__v_GEN_@AM_DEFAULT_V@) am__v_GEN_0 = @echo " GEN " $@; am__v_GEN_1 = AM_V_at = $(am__v_at_@AM_V@) am__v_at_ = $(am__v_at_@AM_DEFAULT_V@) am__v_at_0 = @ am__v_at_1 = SOURCES = DIST_SOURCES = am__can_run_installinfo = \ case $$AM_UPDATE_INFO_DIR in \ n|no|NO) false;; \ *) (install-info --version) >/dev/null 2>&1;; \ esac am__tagged_files = $(HEADERS) $(SOURCES) $(TAGS_FILES) $(LISP) DISTFILES = $(DIST_COMMON) $(DIST_SOURCES) $(TEXINFOS) $(EXTRA_DIST) ACLOCAL = @ACLOCAL@ AMTAR = @AMTAR@ AM_DEFAULT_VERBOSITY = @AM_DEFAULT_VERBOSITY@ AR = @AR@ AUTHD_CFLAGS = @AUTHD_CFLAGS@ AUTHD_LIBS = @AUTHD_LIBS@ AUTOCONF = @AUTOCONF@ AUTOHEADER = @AUTOHEADER@ AUTOMAKE = @AUTOMAKE@ AWK = @AWK@ BGL_LOADED = 
@BGL_LOADED@ BGQ_LOADED = @BGQ_LOADED@ BG_INCLUDES = @BG_INCLUDES@ BG_LDFLAGS = @BG_LDFLAGS@ BG_L_P_LOADED = @BG_L_P_LOADED@ BLCR_CPPFLAGS = @BLCR_CPPFLAGS@ BLCR_HOME = @BLCR_HOME@ BLCR_LDFLAGS = @BLCR_LDFLAGS@ BLCR_LIBS = @BLCR_LIBS@ BLUEGENE_LOADED = @BLUEGENE_LOADED@ CC = @CC@ CCDEPMODE = @CCDEPMODE@ CFLAGS = @CFLAGS@ CHECK_CFLAGS = @CHECK_CFLAGS@ CHECK_LIBS = @CHECK_LIBS@ CMD_LDFLAGS = @CMD_LDFLAGS@ CPP = @CPP@ CPPFLAGS = @CPPFLAGS@ CRAY_JOB_CPPFLAGS = @CRAY_JOB_CPPFLAGS@ CRAY_JOB_LDFLAGS = @CRAY_JOB_LDFLAGS@ CRAY_SELECT_CPPFLAGS = @CRAY_SELECT_CPPFLAGS@ CRAY_SELECT_LDFLAGS = @CRAY_SELECT_LDFLAGS@ CRAY_SWITCH_CPPFLAGS = @CRAY_SWITCH_CPPFLAGS@ CRAY_SWITCH_LDFLAGS = @CRAY_SWITCH_LDFLAGS@ CRAY_TASK_CPPFLAGS = @CRAY_TASK_CPPFLAGS@ CRAY_TASK_LDFLAGS = @CRAY_TASK_LDFLAGS@ CXX = @CXX@ CXXCPP = @CXXCPP@ CXXDEPMODE = @CXXDEPMODE@ CXXFLAGS = @CXXFLAGS@ CYGPATH_W = @CYGPATH_W@ DATAWARP_CPPFLAGS = @DATAWARP_CPPFLAGS@ DATAWARP_LDFLAGS = @DATAWARP_LDFLAGS@ DEFS = @DEFS@ DEPDIR = @DEPDIR@ DLLTOOL = @DLLTOOL@ DL_LIBS = @DL_LIBS@ DSYMUTIL = @DSYMUTIL@ DUMPBIN = @DUMPBIN@ ECHO_C = @ECHO_C@ ECHO_N = @ECHO_N@ ECHO_T = @ECHO_T@ EGREP = @EGREP@ EXEEXT = @EXEEXT@ FGREP = @FGREP@ FREEIPMI_CPPFLAGS = @FREEIPMI_CPPFLAGS@ FREEIPMI_LDFLAGS = @FREEIPMI_LDFLAGS@ FREEIPMI_LIBS = @FREEIPMI_LIBS@ GLIB_CFLAGS = @GLIB_CFLAGS@ GLIB_COMPILE_RESOURCES = @GLIB_COMPILE_RESOURCES@ GLIB_GENMARSHAL = @GLIB_GENMARSHAL@ GLIB_LIBS = @GLIB_LIBS@ GLIB_MKENUMS = @GLIB_MKENUMS@ GOBJECT_QUERY = @GOBJECT_QUERY@ GREP = @GREP@ GTK_CFLAGS = @GTK_CFLAGS@ GTK_LIBS = @GTK_LIBS@ H5CC = @H5CC@ H5FC = @H5FC@ HAVEMYSQLCONFIG = @HAVEMYSQLCONFIG@ HAVE_AIX = @HAVE_AIX@ HAVE_MAN2HTML = @HAVE_MAN2HTML@ HAVE_NRT = @HAVE_NRT@ HAVE_OPENSSL = @HAVE_OPENSSL@ HAVE_SOME_CURSES = @HAVE_SOME_CURSES@ HDF5_CC = @HDF5_CC@ HDF5_CFLAGS = @HDF5_CFLAGS@ HDF5_CPPFLAGS = @HDF5_CPPFLAGS@ HDF5_FC = @HDF5_FC@ HDF5_FFLAGS = @HDF5_FFLAGS@ HDF5_FLIBS = @HDF5_FLIBS@ HDF5_LDFLAGS = @HDF5_LDFLAGS@ HDF5_LIBS = @HDF5_LIBS@ HDF5_VERSION = @HDF5_VERSION@ 
HWLOC_CPPFLAGS = @HWLOC_CPPFLAGS@ HWLOC_LDFLAGS = @HWLOC_LDFLAGS@ HWLOC_LIBS = @HWLOC_LIBS@ INSTALL = @INSTALL@ INSTALL_DATA = @INSTALL_DATA@ INSTALL_PROGRAM = @INSTALL_PROGRAM@ INSTALL_SCRIPT = @INSTALL_SCRIPT@ INSTALL_STRIP_PROGRAM = @INSTALL_STRIP_PROGRAM@ JSON_CPPFLAGS = @JSON_CPPFLAGS@ JSON_LDFLAGS = @JSON_LDFLAGS@ LD = @LD@ LDFLAGS = @LDFLAGS@ LIBCURL = @LIBCURL@ LIBCURL_CPPFLAGS = @LIBCURL_CPPFLAGS@ LIBOBJS = @LIBOBJS@ LIBS = @LIBS@ LIBTOOL = @LIBTOOL@ LIB_LDFLAGS = @LIB_LDFLAGS@ LIPO = @LIPO@ LN_S = @LN_S@ LTLIBOBJS = @LTLIBOBJS@ MAINT = @MAINT@ MAKEINFO = @MAKEINFO@ MANIFEST_TOOL = @MANIFEST_TOOL@ MKDIR_P = @MKDIR_P@ MUNGE_CPPFLAGS = @MUNGE_CPPFLAGS@ MUNGE_DIR = @MUNGE_DIR@ MUNGE_LDFLAGS = @MUNGE_LDFLAGS@ MUNGE_LIBS = @MUNGE_LIBS@ MYSQL_CFLAGS = @MYSQL_CFLAGS@ MYSQL_LIBS = @MYSQL_LIBS@ NCURSES = @NCURSES@ NETLOC_CPPFLAGS = @NETLOC_CPPFLAGS@ NETLOC_LDFLAGS = @NETLOC_LDFLAGS@ NETLOC_LIBS = @NETLOC_LIBS@ NM = @NM@ NMEDIT = @NMEDIT@ NRT_CPPFLAGS = @NRT_CPPFLAGS@ NUMA_LIBS = @NUMA_LIBS@ OBJDUMP = @OBJDUMP@ OBJEXT = @OBJEXT@ OFED_CPPFLAGS = @OFED_CPPFLAGS@ OFED_LDFLAGS = @OFED_LDFLAGS@ OFED_LIBS = @OFED_LIBS@ OTOOL = @OTOOL@ OTOOL64 = @OTOOL64@ PACKAGE = @PACKAGE@ PACKAGE_BUGREPORT = @PACKAGE_BUGREPORT@ PACKAGE_NAME = @PACKAGE_NAME@ PACKAGE_STRING = @PACKAGE_STRING@ PACKAGE_TARNAME = @PACKAGE_TARNAME@ PACKAGE_URL = @PACKAGE_URL@ PACKAGE_VERSION = @PACKAGE_VERSION@ PAM_DIR = @PAM_DIR@ PAM_LIBS = @PAM_LIBS@ PATH_SEPARATOR = @PATH_SEPARATOR@ PKG_CONFIG = @PKG_CONFIG@ PKG_CONFIG_LIBDIR = @PKG_CONFIG_LIBDIR@ PKG_CONFIG_PATH = @PKG_CONFIG_PATH@ PROCTRACKDIR = @PROCTRACKDIR@ PROJECT = @PROJECT@ PTHREAD_CC = @PTHREAD_CC@ PTHREAD_CFLAGS = @PTHREAD_CFLAGS@ PTHREAD_LIBS = @PTHREAD_LIBS@ RANLIB = @RANLIB@ READLINE_LIBS = @READLINE_LIBS@ REAL_BGQ_LOADED = @REAL_BGQ_LOADED@ REAL_BG_L_P_LOADED = @REAL_BG_L_P_LOADED@ RELEASE = @RELEASE@ RRDTOOL_CPPFLAGS = @RRDTOOL_CPPFLAGS@ RRDTOOL_LDFLAGS = @RRDTOOL_LDFLAGS@ RRDTOOL_LIBS = @RRDTOOL_LIBS@ RUNJOB_LDFLAGS = @RUNJOB_LDFLAGS@ SED = 
@SED@ SEMAPHORE_LIBS = @SEMAPHORE_LIBS@ SEMAPHORE_SOURCES = @SEMAPHORE_SOURCES@ SET_MAKE = @SET_MAKE@ SHELL = @SHELL@ SLEEP_CMD = @SLEEP_CMD@ SLURMCTLD_PORT = @SLURMCTLD_PORT@ SLURMCTLD_PORT_COUNT = @SLURMCTLD_PORT_COUNT@ SLURMDBD_PORT = @SLURMDBD_PORT@ SLURMD_PORT = @SLURMD_PORT@ SLURM_API_AGE = @SLURM_API_AGE@ SLURM_API_CURRENT = @SLURM_API_CURRENT@ SLURM_API_MAJOR = @SLURM_API_MAJOR@ SLURM_API_REVISION = @SLURM_API_REVISION@ SLURM_API_VERSION = @SLURM_API_VERSION@ SLURM_MAJOR = @SLURM_MAJOR@ SLURM_MICRO = @SLURM_MICRO@ SLURM_MINOR = @SLURM_MINOR@ SLURM_PREFIX = @SLURM_PREFIX@ SLURM_VERSION_NUMBER = @SLURM_VERSION_NUMBER@ SLURM_VERSION_STRING = @SLURM_VERSION_STRING@ SO_LDFLAGS = @SO_LDFLAGS@ SSL_CPPFLAGS = @SSL_CPPFLAGS@ SSL_LDFLAGS = @SSL_LDFLAGS@ SSL_LIBS = @SSL_LIBS@ STRIP = @STRIP@ SUCMD = @SUCMD@ UTIL_LIBS = @UTIL_LIBS@ VERSION = @VERSION@ _libcurl_config = @_libcurl_config@ abs_builddir = @abs_builddir@ abs_srcdir = @abs_srcdir@ abs_top_builddir = @abs_top_builddir@ abs_top_srcdir = @abs_top_srcdir@ ac_ct_AR = @ac_ct_AR@ ac_ct_CC = @ac_ct_CC@ ac_ct_CXX = @ac_ct_CXX@ ac_ct_DUMPBIN = @ac_ct_DUMPBIN@ ac_have_man2html = @ac_have_man2html@ am__include = @am__include@ am__leading_dot = @am__leading_dot@ am__quote = @am__quote@ am__tar = @am__tar@ am__untar = @am__untar@ ax_pthread_config = @ax_pthread_config@ bindir = @bindir@ build = @build@ build_alias = @build_alias@ build_cpu = @build_cpu@ build_os = @build_os@ build_vendor = @build_vendor@ builddir = @builddir@ datadir = @datadir@ datarootdir = @datarootdir@ docdir = @docdir@ dvidir = @dvidir@ exec_prefix = @exec_prefix@ host = @host@ host_alias = @host_alias@ host_cpu = @host_cpu@ host_os = @host_os@ host_vendor = @host_vendor@ htmldir = @htmldir@ includedir = @includedir@ infodir = @infodir@ install_sh = @install_sh@ libdir = @libdir@ libexecdir = @libexecdir@ localedir = @localedir@ localstatedir = @localstatedir@ lua_CFLAGS = @lua_CFLAGS@ lua_LIBS = @lua_LIBS@ mandir = @mandir@ mkdir_p = @mkdir_p@ 
oldincludedir = @oldincludedir@ pdfdir = @pdfdir@ prefix = @prefix@ program_transform_name = @program_transform_name@ psdir = @psdir@ sbindir = @sbindir@ sharedstatedir = @sharedstatedir@ srcdir = @srcdir@ sysconfdir = @sysconfdir@ target = @target@ target_alias = @target_alias@ target_cpu = @target_cpu@ target_os = @target_os@ target_vendor = @target_vendor@ top_build_prefix = @top_build_prefix@ top_builddir = @top_builddir@ top_srcdir = @top_srcdir@ AUTOMAKE_OPTIONS = foreign bin_SCRIPTS = sjobexitmod _perldir = $(exec_prefix)`perl -e 'use Config; $$T=$$Config{installsitearch}; $$P=$$Config{installprefix}; $$P1="$$P/local"; $$T =~ s/$$P1//; $$T =~ s/$$P//; print $$T;'` all: all-am .SUFFIXES: $(srcdir)/Makefile.in: @MAINTAINER_MODE_TRUE@ $(srcdir)/Makefile.am $(am__configure_deps) @for dep in $?; do \ case '$(am__configure_deps)' in \ *$$dep*) \ ( cd $(top_builddir) && $(MAKE) $(AM_MAKEFLAGS) am--refresh ) \ && { if test -f $@; then exit 0; else break; fi; }; \ exit 1;; \ esac; \ done; \ echo ' cd $(top_srcdir) && $(AUTOMAKE) --foreign contribs/sjobexit/Makefile'; \ $(am__cd) $(top_srcdir) && \ $(AUTOMAKE) --foreign contribs/sjobexit/Makefile .PRECIOUS: Makefile Makefile: $(srcdir)/Makefile.in $(top_builddir)/config.status @case '$?' 
in \ *config.status*) \ cd $(top_builddir) && $(MAKE) $(AM_MAKEFLAGS) am--refresh;; \ *) \ echo ' cd $(top_builddir) && $(SHELL) ./config.status $(subdir)/$@ $(am__depfiles_maybe)'; \ cd $(top_builddir) && $(SHELL) ./config.status $(subdir)/$@ $(am__depfiles_maybe);; \ esac; $(top_builddir)/config.status: $(top_srcdir)/configure $(CONFIG_STATUS_DEPENDENCIES) cd $(top_builddir) && $(MAKE) $(AM_MAKEFLAGS) am--refresh $(top_srcdir)/configure: @MAINTAINER_MODE_TRUE@ $(am__configure_deps) cd $(top_builddir) && $(MAKE) $(AM_MAKEFLAGS) am--refresh $(ACLOCAL_M4): @MAINTAINER_MODE_TRUE@ $(am__aclocal_m4_deps) cd $(top_builddir) && $(MAKE) $(AM_MAKEFLAGS) am--refresh $(am__aclocal_m4_deps): mostlyclean-libtool: -rm -f *.lo clean-libtool: -rm -rf .libs _libs tags TAGS: ctags CTAGS: cscope cscopelist: distdir: $(DISTFILES) @srcdirstrip=`echo "$(srcdir)" | sed 's/[].[^$$\\*]/\\\\&/g'`; \ topsrcdirstrip=`echo "$(top_srcdir)" | sed 's/[].[^$$\\*]/\\\\&/g'`; \ list='$(DISTFILES)'; \ dist_files=`for file in $$list; do echo $$file; done | \ sed -e "s|^$$srcdirstrip/||;t" \ -e "s|^$$topsrcdirstrip/|$(top_builddir)/|;t"`; \ case $$dist_files in \ */*) $(MKDIR_P) `echo "$$dist_files" | \ sed '/\//!d;s|^|$(distdir)/|;s,/[^/]*$$,,' | \ sort -u` ;; \ esac; \ for file in $$dist_files; do \ if test -f $$file || test -d $$file; then d=.; else d=$(srcdir); fi; \ if test -d $$d/$$file; then \ dir=`echo "/$$file" | sed -e 's,/[^/]*$$,,'`; \ if test -d "$(distdir)/$$file"; then \ find "$(distdir)/$$file" -type d ! -perm -700 -exec chmod u+rwx {} \;; \ fi; \ if test -d $(srcdir)/$$file && test $$d != $(srcdir); then \ cp -fpR $(srcdir)/$$file "$(distdir)$$dir" || exit 1; \ find "$(distdir)/$$file" -type d ! 
-perm -700 -exec chmod u+rwx {} \;; \ fi; \ cp -fpR $$d/$$file "$(distdir)$$dir" || exit 1; \ else \ test -f "$(distdir)/$$file" \ || cp -p $$d/$$file "$(distdir)/$$file" \ || exit 1; \ fi; \ done check-am: all-am check: check-am all-am: Makefile $(SCRIPTS) installdirs: for dir in "$(DESTDIR)$(bindir)"; do \ test -z "$$dir" || $(MKDIR_P) "$$dir"; \ done install: install-am install-exec: install-exec-am install-data: install-data-am uninstall: uninstall-am install-am: all-am @$(MAKE) $(AM_MAKEFLAGS) install-exec-am install-data-am installcheck: installcheck-am install-strip: if test -z '$(STRIP)'; then \ $(MAKE) $(AM_MAKEFLAGS) INSTALL_PROGRAM="$(INSTALL_STRIP_PROGRAM)" \ install_sh_PROGRAM="$(INSTALL_STRIP_PROGRAM)" INSTALL_STRIP_FLAG=-s \ install; \ else \ $(MAKE) $(AM_MAKEFLAGS) INSTALL_PROGRAM="$(INSTALL_STRIP_PROGRAM)" \ install_sh_PROGRAM="$(INSTALL_STRIP_PROGRAM)" INSTALL_STRIP_FLAG=-s \ "INSTALL_PROGRAM_ENV=STRIPPROG='$(STRIP)'" install; \ fi mostlyclean-generic: clean-generic: distclean-generic: -test -z "$(CONFIG_CLEAN_FILES)" || rm -f $(CONFIG_CLEAN_FILES) -test . = "$(srcdir)" || test -z "$(CONFIG_CLEAN_VPATH_FILES)" || rm -f $(CONFIG_CLEAN_VPATH_FILES) maintainer-clean-generic: @echo "This command is intended for maintainers to use" @echo "it deletes files that may require special tools to rebuild." 
clean-am: clean-generic clean-libtool mostlyclean-am distclean: distclean-am -rm -f Makefile distclean-am: clean-am distclean-generic dvi: dvi-am dvi-am: html: html-am html-am: info: info-am info-am: install-data-am: install-dvi: install-dvi-am install-dvi-am: install-exec-am: install-binSCRIPTS install-html: install-html-am install-html-am: install-info: install-info-am install-info-am: install-man: install-pdf: install-pdf-am install-pdf-am: install-ps: install-ps-am install-ps-am: installcheck-am: maintainer-clean: maintainer-clean-am -rm -f Makefile maintainer-clean-am: distclean-am maintainer-clean-generic mostlyclean: mostlyclean-am mostlyclean-am: mostlyclean-generic mostlyclean-libtool pdf: pdf-am pdf-am: ps: ps-am ps-am: uninstall-am: uninstall-binSCRIPTS .MAKE: install-am install-strip .PHONY: all all-am check check-am clean clean-generic clean-libtool \ cscopelist-am ctags-am distclean distclean-generic \ distclean-libtool distdir dvi dvi-am html html-am info info-am \ install install-am install-binSCRIPTS install-data \ install-data-am install-dvi install-dvi-am install-exec \ install-exec-am install-html install-html-am install-info \ install-info-am install-man install-pdf install-pdf-am \ install-ps install-ps-am install-strip installcheck \ installcheck-am installdirs maintainer-clean \ maintainer-clean-generic mostlyclean mostlyclean-generic \ mostlyclean-libtool pdf pdf-am ps ps-am tags-am uninstall \ uninstall-am uninstall-binSCRIPTS sjobexitmod: install-binSCRIPTS: $(bin_SCRIPTS) @$(NORMAL_INSTALL) test -z "$(DESTDIR)$(bindir)" || $(mkdir_p) "$(DESTDIR)$(bindir)" @list='$(bin_SCRIPTS)'; for p in $$list; do \ echo "sed 's%use lib .*%use lib qw(${_perldir});%' $(top_srcdir)/contribs/sjobexit/$$p.pl > $(DESTDIR)$(bindir)/$$p"; \ sed "s%use lib .*%use lib qw(${_perldir});%" $(top_srcdir)/contribs/sjobexit/$$p.pl >$(DESTDIR)$(bindir)/$$p; \ chmod 755 $(DESTDIR)$(bindir)/$$p;\ done uninstall-binSCRIPTS: @$(NORMAL_UNINSTALL) @list='$(bin_SCRIPTS)'; for 
p in $$list; do \ echo " rm -f '$(DESTDIR)$(bindir)/$$p'"; \ rm -f "$(DESTDIR)$(bindir)/$$p"; \ done clean: # Tell versions [3.59,3.63) of GNU make to not export all variables. # Otherwise a system limit (for SysV at least) may be exceeded. .NOEXPORT: slurm-slurm-15-08-7-1/contribs/sjobexit/sjobexitmod.pl000077500000000000000000000120341265000126300226660ustar00rootroot00000000000000#! /usr/bin/perl # # # sjobexitmod # # Author: Phil Eckert # Date: 10/28/2010 # Last Modified: 10/28/2010 # BEGIN { # Just dump the man page in *roff format and exit if --roff specified. foreach my $arg (@ARGV) { if ($arg eq "--") { last; } elsif ($arg eq "--roff") { use Pod::Man; my $parser = Pod::Man->new (section => 1); $parser->parse_from_file($0, \*STDOUT); exit 0; } } } use strict; use Getopt::Long 2.24 qw(:config no_ignore_case); use autouse 'Pod::Usage' => qw(pod2usage); use File::Basename; my ( $base, $help, $cluster, $code, $execute_line, $jobid, $list, $man, $reason ); # # Format for listing job. # my $list_format = "JobID,Account,NNodes,NodeList,State,ExitCode,DerivedExitCode,Comment"; # # Get options. # getoptions(); my $rval; # # Execute the utility. # $rval = `$execute_line 2>&1`; # # Determine if successful. # my $status = $?; if ($status == 0) { printf("\n Modification of job $jobid was successful.\n\n"); exit(0); } else { printf("\n $rval\n"); exit($status); } sub getoptions { my $argct = $#ARGV; # # Set default partition name. # GetOptions( 'help|h|?' => \$help, 'man' => \$man, 'e=s' => \$code, 'r=s' => \$reason, 'c=s' => \$cluster, 'l' => \$list, ) or usage(); # # Fix the exit code (if set) to reflect the # fact that it represents the leftmost 8 bits # of the integer field. # $code = 256 * ($code & 0xFF) if ($code); # # Display a simple help package. # usage() if ($help); show_man() if ($man); # # Make sure there is a job id, and make sure it is numeric. 
# if (!($jobid = shift(@ARGV)) || !isnumber($jobid)) { printf("\n Job Id needed.\n\n"); usage(); } # # List option was selected. # if ($list) { die(" \n wrong use of list option, format is ' $base -l JobId'\n\n") if ($argct != 1); system(" sacct -X -j $jobid -o $list_format"); exit(0); } # # Check for required options. # if (!$reason && !$code) { printf("\n Either reason string or exit code required.\n\n"); exit(1); } # # Build execute line from the options that are set. # $execute_line = "sacctmgr -i modify job jobid=$jobid set"; $execute_line .= " Comment=\"$reason\"" if ($reason); $execute_line .= " DerivedExitCode=$code" if ($code); $execute_line .= " Cluster=$cluster" if ($cluster); return; } # # Simple check to see if number is an integer, # return 0 if it is not, else return 1. # sub isnumber { my ($var) = @_; if ($var !~ /\D+/) { return(1); #if it is just a number. } else { return(0); #if it is not just a number. } } sub usage { my $base = basename($0); printf("\ Usage: $base [-e <exit_code>] [-r <reason_string>] [-c <cluster_name>] JobId $base -l JobId $base [-h] $base [-man] -e Modify the derived exit code to new value. -r Modify the job's comment field to new value. -c Name of cluster (optional). -l List information for a completed job. -h Show usage. JobId The identification number of the job. -man Show man page. 
\n"); exit; } sub show_man { if ($< == 0) { # Cannot invoke perldoc as root my $id = eval { getpwnam("nobody") }; $id = eval { getpwnam("nouser") } unless defined $id; $id = -2 unless defined $id; $< = $id; printf("\n You can not do this as root!\n\n"); exit 1; } $> = $<; # Disengage setuid $ENV{PATH} = "/bin:/usr/bin"; # Untaint PATH delete @ENV{'IFS', 'CDPATH', 'ENV', 'BASH_ENV'}; if ($0 =~ /^([-\/\w\.]+)$/) { $0 = $1; } # Untaint $0 else { die "Illegal characters were found in \$0 ($0)\n"; } pod2usage(-exitstatus => 0, -verbose => 2); return; } __END__ =head1 NAME B<sjobexitmod> - Modifies a completed job in the slurmdbd =head1 SYNOPSIS sjobexitmod [-e exit_code] [-r reason_string] [-c cluster_name] JobId sjobexitmod -l JobId sjobexitmod -h sjobexitmod -man =head1 DESCRIPTION sjobexitmod is a wrapper which effectively does the same operation as using the sacctmgr utility to modify certain aspects of a completed job. sacctmgr -i modify job jobid=1286 set DerivedExitCode=1 Comment="code error" or to list certain aspects of a completed job. sacct -o jobid,derivedexitcode,comment,cluster =head1 OPTIONS =over 4 =item B<-h> A usage summary message is displayed, and sjobexitmod terminates. =item B<-man> Show the man page for this utility. =item B<-c> I<cluster_name> The name of the cluster the job ran on. =item B<-e> I<exit_code> The exit code (DerivedExitCode) to be used. =item B<-l> I<JobId> List selected attributes of a completed job. =item B<-r> I<reason_string> The reason (Comment) for job termination. =item B<JobId> The numeric job id. =back =head1 EXIT CONDITIONS If there is an error, sjobexitmod returns either the exit status returned by sacctmgr, or a non-zero value. =head1 AUTHOR Written by Philip D. 
Eckert =head1 REPORTING BUGS Report bugs to =head1 SEE ALSO sacctmgr,sacct slurm-slurm-15-08-7-1/contribs/sjstat000077500000000000000000000356001265000126300174120ustar00rootroot00000000000000#!/usr/bin/perl ############################################################################### # # sjstat - List attributes of jobs under SLURM control # ############################################################################### # Copyright (C) 2007 The Regents of the University of California. # Copyright (C) 2008-2009 Lawrence Livermore National Security. # Produced at Lawrence Livermore National Laboratory (cf, DISCLAIMER). # Written by Phil Eckert . # CODE-OCEC-09-009. All rights reserved. # # This file is part of SLURM, a resource management program. # For details, see . # Please also read the included file: DISCLAIMER. # # SLURM is free software; you can redistribute it and/or modify it under # the terms of the GNU General Public License as published by the Free # Software Foundation; either version 2 of the License, or (at your option) # any later version. # # In addition, as a special exception, the copyright holders give permission # to link the code of portions of this program with the OpenSSL library under # certain conditions as described in each individual source file, and # distribute linked combinations including the two. You must obey the GNU # General Public License in all respects for all of the code used other than # OpenSSL. If you modify file(s) with this exception, you may extend this # exception to your version of the file(s), but you are not obligated to do # so. If you do not wish to do so, delete this exception statement from your # version. If you delete this exception statement from all source files in # the program, then also delete it here. # # SLURM is distributed in the hope that it will be useful, but WITHOUT ANY # WARRANTY; without even the implied warranty of MERCHANTABILITY or FITNESS # FOR A PARTICULAR PURPOSE. 
See the GNU General Public License for more # details. # # You should have received a copy of the GNU General Public License along # with SLURM; if not, write to the Free Software Foundation, Inc., # 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA. # # Based off code with permission copyright 2006, 2007 Cluster Resources, Inc. ############################################################################### # # Man page stuff. # BEGIN { # Just dump the man page in *roff format and exit if --roff specified. foreach my $arg (@ARGV) { if ($arg eq "--") { last; } elsif ($arg eq "--roff") { use Pod::Man; my $parser = Pod::Man->new (section => 1); $parser->parse_from_file($0, \*STDOUT); exit 0; } } } use strict; use Getopt::Long 2.24 qw(:config no_ignore_case); use autouse 'Pod::Usage' => qw(pod2usage); # # Global Variables. # my ($help, $man, $pool, $running, $verbose); my (%MaxNodes, %MaxTime); # # Check SLURM status. # isslurmup(); # # See if bluegene system. # my $bglflag = 1 if (`scontrol show config | grep -i bluegene`); # # Get user options. # get_options(); # # Get partition information from scontrol, used # currently in conjunction with the sinfo data.. # do_scontrol_part(); # # Get and display the sinfo data. # do_sinfo(); # # If the -c option was entered, stop here. # exit if ($pool); # # Get and display the squeue data. # do_squeue(); exit; # # Do usable for bluegene # sub Usable { my ($tot, $out) = @_; $tot *= 1024.0 if ($tot =~ /K/); $out *= 1024.0 if ($out =~ /K/); my $usable = $tot - $out; if ($usable > 1024.0) { $usable /= 1024.0; $usable .= 'K'; } return($usable); } # # Get the SLURM partitions information. # sub do_sinfo { my (@s_part, @s_mem, @s_cpu, @s_feat, @s_active, @s_idle, @s_out, @s_total, @s_usable); # # Get the partition and node info. 
# my $options = "\"%9P %7m %.4c %.22F %f\""; my $ct = 0; my @sin = `sinfo -e -o $options`; foreach my $tmp (@sin) { next if ($tmp =~ /^PARTITION/); chomp $tmp; my @line = split(' ',$tmp); $s_part[$ct] = $line[0]; $s_mem[$ct] = $line[1]; $s_cpu[$ct] = $line[2]; # # Split the status into various components. # my @fields = split(/\//, $line[3]); $s_active[$ct] = $fields[0]; $s_idle[$ct] = $fields[1]; $s_out[$ct] = $fields[2]; $s_total[$ct] = $fields[3]; if ($bglflag) { $s_usable[$ct] = Usable($s_total[$ct], $s_out[$ct]); } else { $s_usable[$ct] = $s_total[$ct] - $s_out[$ct]; } $s_feat[$ct] = ($line[4] .= " "); $s_feat[$ct] =~ s/\(null\)//g; $ct++; } printf("\nScheduling pool data:\n"); if ($verbose) { printf("----------------------------------------------------------------------------------\n"); printf(" Total Usable Free Node Time Other \n"); printf("Pool Memory Cpus Nodes Nodes Nodes Limit Limit traits \n"); printf("----------------------------------------------------------------------------------\n"); } else { printf("-------------------------------------------------------------\n"); printf("Pool Memory Cpus Total Usable Free Other Traits \n"); printf("-------------------------------------------------------------\n"); } for (my $i = 0; $i < $ct; $i++) { if ($verbose) { my $p = $s_part[$i]; $p =~ s/\*//; printf("%-9s %7dMb %5s %6s %7s %6s %6s %10s %-s\n", $s_part[$i], $s_mem[$i], $s_cpu[$i], $s_total[$i], $s_usable[$i], $s_idle[$i], $MaxNodes{$p}, $MaxTime{$p}, $s_feat[$i]); } else { printf("%-9s %7dMb %5s %6s %6s %6s %-s\n", $s_part[$i], $s_mem[$i], $s_cpu[$i], $s_total[$i], $s_usable[$i], $s_idle[$i], $s_feat[$i]); } } printf("\n"); return; } # # Get the SLURM queues. # sub do_squeue { my (@s_job, @s_user, @s_nodes, @s_status, @s_begin, @s_limit, @s_start, @s_pool, @s_used, @s_master); # # Base options on whether this partition is node or process scheduled. 
# my ($type, $options); my $rval = system("scontrol show config | grep cons_res >> /dev/null"); if ($rval) { $type = "Nodes"; $options = "\"%8i %8u %.6D %2t %S %.12l %.9P %.11M %1000R\""; } else { $type = "Procs"; $options = "\"%8i %8u %.6C %2t %S %.12l %.9P %.11M %1000R\""; } # # Get the job information. # my $ct = 0; my $pat = "tr -s '[' '\000' |cut -d'-' -f 1 | cut -d',' -f 1"; my @sout = `squeue -o $options`; foreach my $tmp (@sout) { next if ($tmp =~ /^JOBID/); next if ($running && $tmp =~ / PD /); chomp $tmp; my @line = split(' ', $tmp); $s_job[$ct] = $line[0]; $s_user[$ct] = $line[1]; $s_nodes[$ct] = $line[2]; $s_status[$ct] = $line[3]; $line[4] =~ s/^.....//; $line[4] = "N/A" if ($line[3] =~ /PD/); $s_begin[$ct] = $line[4]; $s_limit[$ct] = $line[5]; if ($line[5] eq "UNLIMITED") { $s_limit[$ct] = $line[5]; } else { $s_limit[$ct] = convert_time($line[5]); } $s_pool[$ct] = $line[6]; $s_used[$ct] = $line[7]; # # Only keep the master node from the nodes list. # $line[8] =~ s/\[([0-9.]*).*/$1/; $s_master[$ct] = $line[8]; $ct++; } printf("Running job data:\n"); if ($verbose) { printf("---------------------------------------------------------------------------------------------------\n"); printf(" Time Time Time \n"); printf("JobID User $type Pool Status Used Limit Started Master/Other \n"); printf("---------------------------------------------------------------------------------------------------\n"); } else { printf("----------------------------------------------------------------------\n"); printf("JobID User $type Pool Status Used Master/Other \n"); printf("----------------------------------------------------------------------\n"); } for (my $i = 0; $i < $ct; $i++) { if ($verbose) { printf("%-8s %-8s %6s %-9s %-7s %10s %11s %14s %.12s\n", $s_job[$i], $s_user[$i], $s_nodes[$i], $s_pool[$i], $s_status[$i], $s_used[$i], $s_limit[$i], $s_begin[$i], $s_master[$i]); } else { printf("%-8s %-8s %6s %-9s %-7s %10s %.12s\n", $s_job[$i], $s_user[$i], $s_nodes[$i], 
$s_pool[$i], $s_status[$i], $s_used[$i], $s_master[$i]); } } printf("\n"); return; } # # Get the SLURM partitions. # sub do_scontrol_part { # # Get all partition data. Don't need it all now, but # it may be useful later. # my @scon = `scontrol show part`; my $part; foreach my $tmp (@scon) { chomp $tmp; my @line = split(' ',$tmp); ($part) = ($tmp =~ m/PartitionName=(\S+)/) if ($tmp =~ /PartitionName=/); ($MaxTime{$part}) = ($tmp =~ m/MaxTime=(\S+)\s+/) if ($tmp =~ /MaxTime=/); ($MaxNodes{$part}) = ($tmp =~ m/MaxNodes=(\S+)\s+/) if ($tmp =~ /MaxNodes=/); $MaxTime{$part} =~ s/UNLIMITED/UNLIM/ if ($MaxTime{$part}); $MaxNodes{$part} =~ s/UNLIMITED/UNLIM/ if ($MaxNodes{$part}); } return; } # # Show the man page. # sub show_man { if ($< == 0) { # Cannot invoke perldoc as root my $id = eval { getpwnam("nobody") }; $id = eval { getpwnam("nouser") } unless defined $id; $id = -2 unless defined $id; $< = $id; printf("\n You can not do this as root!\n\n"); exit 1; } $> = $<; # Disengage setuid $ENV{PATH} = "/bin:/usr/bin"; # Untaint PATH delete @ENV{'IFS', 'CDPATH', 'ENV', 'BASH_ENV'}; if ($0 =~ /^([-\/\w\.]+)$/) { $0 = $1; } # Untaint $0 else { die "Illegal characters were found in \$0 ($0)\n"; } pod2usage(-exitstatus => 0, -verbose => 2); return; } # # Convert the time to a better format. # sub convert_time { my $val = shift(@_); my $tmp; my @field = split(/-|:/, $val); if (@field == 4) { $tmp = ($field[0]*24)+$field[1] . ':'.$field[2] . ':' . $field[3]; } else { $tmp = sprintf("%8s",$val); } return($tmp); } # # Get options. # sub get_options { GetOptions( 'help|h|?' => \$help, 'man' => \$man, 'v' => \$verbose, 'r' => \$running, 'c' => \$pool, ) or usage(1); show_man() if ($man); usage(0) if ($help); return; } # # Usage. # sub usage { my $eval = shift(@_); # # Print usage instructions and exit. # print STDERR "\nUsage: sjstat [-h] [-c] [-man] [-r] [-v]\n"; printf("\ -h shows usage. -c shows computing resources info only. -man shows man page. -r shows only running jobs. 
-v is for the verbose mode.\n Output is very similar to that of squeue. \n\n"); exit($eval); } # # Determine if SLURM is available. # sub isslurmup { my $out = `scontrol show part 2>&1`; if ($?) { printf("\n SLURM is not communicating.\n\n"); exit(1); } return; } __END__ =head1 NAME B<sjstat> - List attributes of jobs under SLURM control =head1 SYNOPSIS B<sjstat> [B<-h>] [B<-c>] [B<-r>] [B<-v>] =head1 DESCRIPTION The B<sjstat> command is used to display statistics of jobs under control of SLURM. The output is designed to give information on the resource usage and availability, as well as information about jobs that are currently active on the machine. This output is built using the SLURM utilities, sinfo, squeue and scontrol; the man pages for these utilities will provide more information and greater depth of understanding. =head1 OPTIONS =over 4 =item B<-h> Display a brief help message. =item B<-c> Display the computing resource information only. =item B<-man> Show the man page. =item B<-r> Display only the running jobs. =item B<-v> Display more verbose information. =back =head1 EXAMPLE The following is a basic request for status. 
> sjstat Scheduling pool data: ------------------------------------------------------------ Pool Memory Cpus Total Usable Free Other Traits ------------------------------------------------------------ pdebug 15000Mb 8 32 32 24 (null) pbatch* 15000Mb 8 1072 1070 174 (null) Running job data: ------------------------------------------------------------------- JobID User Nodes Pool Status Used Master/Other ------------------------------------------------------------------- 395 mary 1000 pbatch PD 0:00 (JobHeld) 396 mary 1000 pbatch PD 0:00 (JobHeld) 375 sam 1000 pbatch CG 0:00 (JobHeld) 388 fred 32 pbatch R 25:27 atlas89 361 harry 512 pbatch R 1:01:12 atlas618 1077742 sally 8 pdebug R 20:16 atlas18 The Scheduling data contains information pertaining to the: Pool a set of nodes Memory the amount of memory on each node Cpus the number of cpus on each node Total the total number of nodes in the pool Usable total usable nodes in the pool Free total nodes that are currently free The Running job data contains information pertaining to the: JobID the SLURM job id User owner of the job Nodes nodes required, or in use by the job (Note: On cpu scheduled machines, this field will be labeled "Procs" to show the number of processors the job is using.) Pool the Pool required or in use by the job Status current status of the job Used Wallclock time used by the job Master/Other Either the Master (head) node used by the job, or may indicate further status of a pending, or completing job. The common status values are: R The job is running PD The job is Pending CG The job is Completing These are states reported by SLURM and more elaborate documentation can be found in the squeue/sinfo man pages. An example of the -v option. 
Scheduling pool data: ----------------------------------------------------------------------------- Total Usable Free Node Time Other Pool Memory Cpus Nodes Nodes Nodes Limit Limit Traits ----------------------------------------------------------------------------- pdebug 15000Mb 8 32 32 24 16 30 (null) pbatch* 15000Mb 8 1072 1070 174 UNLIM UNLIM (null) Running job data: --------------------------------------------------------------------------------------------------- Time Time Time JobID User Nodes Pool Status Used Limit Started Master/Other --------------------------------------------------------------------------------------------------- 38562 tom 4 pbatch PD 0:00 1:00:00 01-14T18:11:22 (JobHeld) The added fields to the "Scheduling pool data" are: Node Limit SLURM imposed node limit. Time Limit SLURM imposed time limit, value in minutes. The added fields to the "Running job data" are: Limit Time limit of job. Start Start time of job. =head1 REPORTING BUGS Report bugs to =cut slurm-slurm-15-08-7-1/contribs/skilling.c000066400000000000000000000105741265000126300201370ustar00rootroot00000000000000//+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++ // Filename: hilbert.c // // Purpose: Hilbert and Linked-list utility procedures for BayeSys3. // // History: TreeSys.c 17 Apr 1996 - 31 Dec 2002 // Peano.c 10 Apr 2001 - 11 Jan 2003 // merged 1 Feb 2003 // Arith debug 28 Aug 2003 // Hilbert.c 14 Oct 2003 // 2 Dec 2003 //----------------------------------------------------------------------------- /* Copyright (c) 1996-2003 Maximum Entropy Data Consultants Ltd, 114c Milton Road, Cambridge CB4 1XE, England This library is free software; you can redistribute it and/or modify it under the terms of the GNU Lesser General Public License as published by the Free Software Foundation; either version 2.1 of the License, or (at your option) any later version. 
This library is distributed in the hope that it will be useful, but WITHOUT ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU Lesser General Public License for more details. You should have received a copy of the GNU Lesser General Public License along with this library; if not, write to the Free Software Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA #include "license.txt" */ #include <stdio.h> #include <stdlib.h> typedef unsigned int coord_t; // char,short,int for up to 8,16,32 bits per word static void TransposetoAxes( coord_t* X, // I O position [n] int b, // I # bits int n) // I dimension { coord_t M, P, Q, t; int i; // Gray decode by H ^ (H/2) t = X[n-1] >> 1; for( i = n-1; i; i-- ) X[i] ^= X[i-1]; X[0] ^= t; // Undo excess work M = 2 << (b - 1); for( Q = 2; Q != M; Q <<= 1 ) { P = Q - 1; for( i = n-1; i; i-- ) if( X[i] & Q ) X[0] ^= P; // invert else{ t = (X[0] ^ X[i]) & P; X[0] ^= t; X[i] ^= t; } // exchange if( X[0] & Q ) X[0] ^= P; // invert } } static void AxestoTranspose( coord_t* X, // I O position [n] int b, // I # bits int n) // I dimension { coord_t P, Q, t; int i; // Inverse undo for( Q = 1 << (b - 1); Q > 1; Q >>= 1 ) { P = Q - 1; if( X[0] & Q ) X[0] ^= P; // invert for( i = 1; i < n; i++ ) if( X[i] & Q ) X[0] ^= P; // invert else{ t = (X[0] ^ X[i]) & P; X[0] ^= t; X[i] ^= t; } // exchange } // Gray encode (inverse of decode) for( i = 1; i < n; i++ ) X[i] ^= X[i-1]; t = X[n-1]; for( i = 1; i < b; i <<= 1 ) X[n-1] ^= X[n-1] >> i; t ^= X[n-1]; for( i = n-2; i >= 0; i-- ) X[i] ^= t; } /* This is a sample use of Skilling's functions above. * You will need to modify the code if the value of BITS or DIMS is changed. 
* The output of this can be used to order the node name entries in slurm.conf */ #define BITS 5 /* number of bits used to store the axis values, size of Hilbert space */ #define DIMS 3 /* number of dimensions in the Hilbert space */ main(int argc, char **argv) { int i, H; coord_t X[DIMS]; // any position in 32x32x32 cube for BITS=5 if (argc != (DIMS + 1)) { printf("Usage %s X Y Z\n", argv[0]); exit(1); } for (i=0; i<DIMS; i++) X[i] = atoi(argv[i+1]); AxestoTranspose(X, BITS, DIMS); H = ((X[2]>>0 & 1) << 0) + ((X[1]>>0 & 1) << 1) + ((X[0]>>0 & 1) << 2) + ((X[2]>>1 & 1) << 3) + ((X[1]>>1 & 1) << 4) + ((X[0]>>1 & 1) << 5) + ((X[2]>>2 & 1) << 6) + ((X[1]>>2 & 1) << 7) + ((X[0]>>2 & 1) << 8) + ((X[2]>>3 & 1) << 9) + ((X[1]>>3 & 1) << 10) + ((X[0]>>3 & 1) << 11) + ((X[2]>>4 & 1) << 12) + ((X[1]>>4 & 1) << 13) + ((X[0]>>4 & 1) << 14); printf("Hilbert integer = %d (%d%d%d%d%d%d%d%d%d%d%d%d%d%d%d)\n", H, X[0]>>4 & 1, X[1]>>4 & 1, X[2]>>4 & 1, X[0]>>3 & 1, X[1]>>3 & 1, X[2]>>3 & 1, X[0]>>2 & 1, X[1]>>2 & 1, X[2]>>2 & 1, X[0]>>1 & 1, X[1]>>1 & 1, X[2]>>1 & 1, X[0]>>0 & 1, X[1]>>0 & 1, X[2]>>0 & 1); #if 0 /* Used for validation purposes */ TransposetoAxes(X, BITS, DIMS); // Hilbert transpose for 5 bits and 3 dimensions printf("Axis coordinates = %d %d %d\n", X[0], X[1], X[2]); #endif } slurm-slurm-15-08-7-1/contribs/slurm_completion_help/000077500000000000000000000000001265000126300225535ustar00rootroot00000000000000slurm-slurm-15-08-7-1/contribs/slurm_completion_help/README.md000066400000000000000000000103351265000126300240340ustar00rootroot00000000000000slurm-helper ============ Bunch of helper files for the Slurm resource manager Vim syntax file --------------- The Vim syntax file renders the Slurm batch submission scripts easier to read and to spot errors in the submission options. As submission scripts are indeed shell scripts, and all Slurm options are actually Shell comments, it can be difficult to spot errors in the options. This syntax file allows Vim to understand the Slurm options and highlight them accordingly. 
Whenever possible, the syntax rules check the validity of the options and put in a special color what is not recognized as a valid option, or valid parameter values. __Installation__ Under Linux or MacOS, simply copy the file into the directory .vim/after/syntax/sh/ or whatever shell other than ``sh`` you prefer. For system-wide use with bash, put the file in /etc/bash_completion.d/ The syntax file is then read and applied on a Shell script after the usual syntax file has been processed. __Known issues__ * Some regexes needed to validate options or parameter values are not exactly correct, but should work in most cases. * Any new option unknown to the syntax file will be spotted as an error. * On a Debian system (Ubuntu) you may see messages like... _get_comp_words_by_ref: command not found after a tab. Based on http://askubuntu.com/questions/33440/tab-completion-doesnt-work-for-commands you need to alter your /etc/bash.bashrc to make this work correctly. Bash completion --------------- The Bash completion script offers completion for Slurm commands. 
At present the following Slurm commands are considered:

* scontrol
* sreport

__Installation__

Simply source the script in your .bashrc or .profile

__Examples__

    root@frontend:~ # squeue --
    --account --iterate --qos --usage --clusters --jobs --sort --user
    --format --nodes --start --verbose --help --noheader --state --version
    --hide --partition --steps
    root@frontend:~ # squeue --us
    --usage --user
    root@frontend:~ # squeue --user
    user1 user2 user3 user4
    root@frontend:~ # scontrol
    abort delete pidinfo requeue shutdown update checkpoint hold ping
    resume suspend version completing listpids reconfigure setdebug
    takeover create notify release show uhold
    root@frontend:~ # scontrol update
    jobid= nodename= partitionname= reservationname= step=
    root@frontend:~ # scontrol update nodename=
    root@frontend:~ # scontrol update nodename=node
    node01 node03 node05 node07 node09 node11 node13 node15 node17 node19
    node02 node04 node06 node08 node10 node12 node14 node16 node18 node20
    root@frontend:~ # scontrol update nodename=node12
    features= reason= weight= gres= state=
    root@frontend:~ # scontrol update nodename=node12 state=
    alloc down fail idle mixed power_up
    allocated drain failing maint power_down resume
    root@frontend:~ # scontrol update nodename=node12 state=resume
    root@frontend:~ # squeue --format "%
    %a(Account) %E(dependency) %i(id) %M(time) %s(selecplugin)
    %A(NTasks) %e(end) %I(Ncores/socket) %N(alloc_nodes) %t(state)
    %b(gres) %f(features) %j(name) %n(reqnodes) %T(state)
    %c(mincpu) %G(gID) %k(comment) %O(contiguous) %U(uID)
    %C(Ncpus) %g(group) %l(limit) %p(priority) %u(user)
    %d(minTmp) %H(Nsockets) %L(timeleft) %r(reason) %v(reservation)
    %D(NNodes) %h(shared) %m(mem) %R(reason) %x(excnodes

slurm-slurm-15-08-7-1/contribs/slurm_completion_help/slurm.vim

"""""""""""""""""""""""""""""""""""""""""""""""""""""""""""""""""""""""""""""""
"
" Vim syntax file for completion for Slurm
"
""""""""""""""""""""""""""""""""""""""""""""""""""""""""""""""""""""""""""""""" " Copyright (C) 2012 Damien Franois. " Written by Damien Franois. . " " This file is part of SLURM, a resource management program. " For details, see . " Please also read the included file: DISCLAIMER. " " SLURM is free software; you can redistribute it and/or modify it under " the terms of the GNU General Public License as published by the Free " Software Foundation; either version 2 of the License, or (at your option) " any later version. " " In addition, as a special exception, the copyright holders give permission " to link the code of portions of this program with the OpenSSL library under " certain conditions as described in each individual source file, and " distribute linked combinations including the two. You must obey the GNU " General Public License in all respects for all of the code used other than " OpenSSL. If you modify file(s) with this exception, you may extend this " exception to your version of the file(s), but you are not obligated to do " so. If you do not wish to do so, delete this exception statement from your " version. If you delete this exception statement from all source files in " the program, then also delete it here. " " SLURM is distributed in the hope that it will be useful, but WITHOUT ANY " WARRANTY; without even the implied warranty of MERCHANTABILITY or FITNESS " FOR A PARTICULAR PURPOSE. See the GNU General Public License for more " details. " " You should have received a copy of the GNU General Public License along " with SLURM; if not, write to the Free Software Foundation, Inc., " 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA. " """"""""""""""""""""""""""""""""""""""""""""""""""""""""""""""""""""""""""""""" " handling /bin/sh with is_kornshell/is_sh {{{1 " b:is_sh is set when "#! /bin/sh" is found; " However, it often is just a masquerade by bash (typically Linux) " or kornshell (typically workstations with Posix "sh"). 
" So, when the user sets "is_bash" or "is_kornshell", " a b:is_sh is converted into b:is_bash/b:is_kornshell, " respectively. if !exists("b:is_kornshell") && !exists("b:is_bash") if exists("g:is_posix") && !exists("g:is_kornshell") let g:is_kornshell= g:is_posix endif if exists("g:is_kornshell") let b:is_kornshell= 1 if exists("b:is_sh") unlet b:is_sh endif elseif exists("g:is_bash") let b:is_bash= 1 if exists("b:is_sh") unlet b:is_sh endif else let b:is_sh= 1 endif endif " Slurm: {{{1 " =================== " Slurm SBATCH comments are one liners beginning with #SBATCH and containing " the keyword (i.e.SBATCH), one option (here only options starting with -- are " considered), and one optional value. syn region shSlurmComment start="^#SBATCH" end="\n" oneline contains=shSlurmKeyword,shSlurmOption,shSlurmValue " all shSlurmString are suspect; they probably could be narrowed down to more " specific regular expressions. Typical example is --mail-type or --begin syn match shSlurmKeyword contained '#SBATCH\s*' syn match shSlurmOption contained '--account=' nextgroup=shSlurmString syn match shSlurmOption contained '--acctg-freq=' nextgroup=shSlurmNumber syn match shSlurmOption contained '--extra-node-info=' nextgroup=shSlurmNodeInfo syn match shSlurmOption contained '--socket-per-node=' nextgroup=shSlurmNumber syn match shSlurmOption contained '--cores-per-socket=' nextgroup=shSlurmNumber syn match shSlurmOption contained '--threads-per-core=' nextgroup=shSlurmNumber syn match shSlurmOption contained '--begin=' nextgroup=shSlurmString syn match shSlurmOption contained '--checkpoint=' nextgroup=shSlurmString syn match shSlurmOption contained '--checkpoint-dir=' nextgroup=shSlurmString syn match shSlurmOption contained '--comment=' nextgroup=shSlurmIdentifier syn match shSlurmOption contained '--constraint=' nextgroup=shSlurmString syn match shSlurmOption contained '--contiguous' syn match shSlurmOption contained '--cpu-bind==' nextgroup=shSlurmString syn match shSlurmOption 
contained '--cpus-per-task=' nextgroup=shSlurmNumber syn match shSlurmOption contained '--dependency=' nextgroup=shSlurmString syn match shSlurmOption contained '--workdir=' nextgroup=shSlurmString syn match shSlurmOption contained '--error=' nextgroup=shSlurmString syn match shSlurmOption contained '--exclusive' syn match shSlurmOption contained '--nodefile=' nextgroup=shSlurmString syn match shSlurmOption contained '--get-user-env' syn match shSlurmOption contained '--get-user-env=' nextgroup=shSlurmEnv syn match shSlurmOption contained '--gid=' nextgroup=shSlurmString syn match shSlurmOption contained '--hint=' nextgroup=shSlurmHint syn match shSlurmOption contained '--immediate' nextgroup=shSlurmNumber syn match shSlurmOption contained '--input=' nextgroup=shSlurmString syn match shSlurmOption contained '--job-name=' nextgroup=shSlurmString syn match shSlurmOption contained '--job-id=' nextgroup=shSlurmNumber syn match shSlurmOption contained '--no-kill' syn match shSlurmOption contained '--licences=' nextgroup=shSlurmString syn match shSlurmOption contained '--distribution=' nextgroup=shSlurmDist syn match shSlurmOption contained '--mail-user=' nextgroup=shSlurmEmail syn match shSlurmOption contained '--mail-type=' nextgroup=shSlurmString syn match shSlurmOption contained '--mem=' nextgroup=shSlurmNumber syn match shSlurmOption contained '--mem-per-cpu=' nextgroup=shSlurmNumber syn match shSlurmOption contained '--mem-bind=' nextgroup=shSlurmNumber syn match shSlurmOption contained '--mincores=' nextgroup=shSlurmNumber syn match shSlurmOption contained '--mincpus=' nextgroup=shSlurmNumber syn match shSlurmOption contained '--minsockets=' nextgroup=shSlurmNumber syn match shSlurmOption contained '--minthreads=' nextgroup=shSlurmNumber syn match shSlurmOption contained '--nodes=' nextgroup=shSlurmInterval syn match shSlurmOption contained '--ntasks=' nextgroup=shSlurmNumber syn match shSlurmOption contained '--network=' nextgroup=shSlurmString syn match 
shSlurmOption contained '--nice' syn match shSlurmOption contained '--nice=' nextgroup=shSlurmNumber syn match shSlurmOption contained '--no-requeue' syn match shSlurmOption contained '--ntasks-per-core=' nextgroup=shSlurmNumber syn match shSlurmOption contained '--ntasks-per-socket=' nextgroup=shSlurmNumber syn match shSlurmOption contained '--ntasls-per-node=' nextgroup=shSlurmNumber syn match shSlurmOption contained '--overcommit' syn match shSlurmOption contained '--output=' nextgroup=shSlurmString syn match shSlurmOption contained '--open-mode=' nextgroup=shSlurmMode syn match shSlurmOption contained '--partition=' nextgroup=shSlurmString syn match shSlurmOption contained '--propagate' syn match shSlurmOption contained '--propagate=' nextgroup=shSlurmPropag syn match shSlurmOption contained '--quiet' syn match shSlurmOption contained '--requeue' syn match shSlurmOption contained '--reservation=' nextgroup=shSlurmString syn match shSlurmOption contained '--share' syn match shSlurmOption contained '--signal=' nextgroup=shSlurmString syn match shSlurmOption contained '--time=' nextgroup=shSlurmDuration syn match shSlurmOption contained '--tasks-per-node=' nextgroup=shSlurmNumber syn match shSlurmOption contained '--tmp=' nextgroup=shSlurmString syn match shSlurmOption contained '--uid=' nextgroup=shSlurmString syn match shSlurmOption contained '--nodelist=' nextgroup=shSlurmString syn match shSlurmOption contained '--wckey=' nextgroup=shSlurmString syn match shSlurmOption contained '--wrap=' nextgroup=shSlurmString syn match shSlurmOption contained '--exclude=' nextgroup=shSlurmString syn region shSlurmValue start="=" end="$" contains=shSlurmNoshSlurmEnvdeInfo,shSlurmString,shSlurmMailType,shSlurmIdentifier,shSlurmEnv,shSlurmHint,shSlurmMode,shSlurmPropag,shSlurmInterval,shSlurmDist,shSlurmEmail syn match shSlurmNumber contained '\d\d*' syn match shSlurmDuration contained '\d\d*\(:\d\d\)\{,2}' syn match shSlurmNodeInfo contained '\d\d*\(:\d\d*\)\{,2}' syn match 
shSlurmDuration contained '\d\d*-\d\=\d\(:\d\d\)\{,2}' syn match shSlurmInterval contained '\d\d*\(-\d*\)\=' syn match shSlurmString contained '.*' syn match shSlurmEnv contained '\d*L\=S\=' syn keyword shSlurmHint contained compute_bound memory_bound nomultithread multithread syn keyword shSlurmMode contained append truncate syn keyword shSlurmPropag contained ALL AS CORE CPU DATA FSIZE MEMLOCK NOFILE CPROC RSS STACK syn keyword shSlurmDist contained block cyclic arbitrary syn match shSlurmDist contained 'plane\(=.*\)\=' syn match shSlurmEmail contained '[-a-zA-Z0-9.+]*@[-a-zA-Z0-9.+]*' "Anything that is not recognized is marked as error hi def link shSlurmComment Error "The #SBATCH keyword hi def link shSlurmKeyword Function "The option hi def link shSlurmOption Operator "The values hi def link shSlurmDuration Special hi def link shSlurmString Special hi def link shSlurmMailType Special hi def link shSlurmNumber Special hi def link shSlurmSep Special hi def link shSlurmNodeInfo Special hi def link shSlurmEnv Special hi def link shSlurmHint Special hi def link shSlurmMode Special hi def link shSlurmPropag Special hi def link shSlurmInterval Special hi def link shSlurmDist Special hi def link shSlurmEmail Special slurm-slurm-15-08-7-1/contribs/slurm_completion_help/slurm_completion.sh000066400000000000000000001711401265000126300265060ustar00rootroot00000000000000############################################################################### # # Bash completion for Slurm # ############################################################################### # Copyright (C) 2012 Damien François. # Written by Damien François. . # # This file is part of SLURM, a resource management program. # For details, see . # Please also read the included file: DISCLAIMER. 
# # SLURM is free software; you can redistribute it and/or modify it under # the terms of the GNU General Public License as published by the Free # Software Foundation; either version 2 of the License, or (at your option) # any later version. # # In addition, as a special exception, the copyright holders give permission # to link the code of portions of this program with the OpenSSL library under # certain conditions as described in each individual source file, and # distribute linked combinations including the two. You must obey the GNU # General Public License in all respects for all of the code used other than # OpenSSL. If you modify file(s) with this exception, you may extend this # exception to your version of the file(s), but you are not obligated to do # so. If you do not wish to do so, delete this exception statement from your # version. If you delete this exception statement from all source files in # the program, then also delete it here. # # SLURM is distributed in the hope that it will be useful, but WITHOUT ANY # WARRANTY; without even the implied warranty of MERCHANTABILITY or FITNESS # FOR A PARTICULAR PURPOSE. See the GNU General Public License for more # details. # # You should have received a copy of the GNU General Public License along # with SLURM; if not, write to the Free Software Foundation, Inc., # 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA. 
# ############################################################################### function compute_set_diff(){ res="" for i in $1; do [[ "$2" =~ ${i%%=*} ]] && continue res="$i $res" done echo $res } _split_long_opt() { [[ $cur == = || $cur == : ]] && cur="" [[ $prev == = ]] && prev=${COMP_WORDS[$cword-2]} } function find_first_partial_occurence(){ res="" for item1 in $1; do for item2 in $2; do if [[ $item2 == "$item1=" ]]; then res="$item1" break fi done if [[ $res != "" ]]; then break fi done echo $res } function find_first_occurence(){ res="" for item1 in $1; do for item2 in $2; do if [[ $item1 = $item2 ]]; then res="$item1" break fi done if [[ $res != "" ]]; then break fi done echo $res } function offer (){ remainings=$(compute_set_diff "$1" "${COMP_WORDS[*]}") COMPREPLY=( $( compgen -W "$remainings" -- $cur ) ) if [[ "$1" == *=* || "$1" == *%* || "$1" == *:* ]]; then #echo "NO SPACE $1" >> loglog compopt -o nospace fi } function offer_list () { curlist=${cur%,*} curitem=${cur##*,} if [[ $curlist == $curitem ]] then COMPREPLY=( $( compgen -W "${1}" -- $cur ) ) ; return elif [[ $cur == *, ]] ; then compvalues="" for i in $1;do [[ $cur =~ $i ]] && continue compvalues="$i $compvalues " done uniqueprefix=1 prefix=${compvalues:0:1} for i in $compvalues;do [[ ${i:0:1} == $prefix ]] || uniqueprefix=0 done if [[ $uniqueprefix == 1 ]] then compvalues="" for i in $1;do [[ $cur =~ $i ]] && continue compvalues="$compvalues $curlist,$i" done fi COMPREPLY=( $( compgen -W "${compvalues}" -- "" ) ) ; return else compvalues="" for i in $1;do [[ $cur =~ $i ]] && continue compvalues="$compvalues $curlist,$i" done COMPREPLY=( $( compgen -W "${compvalues}" -- $cur ) ) ; fi } function offer_many () { availablevalues="" for i in $1;do [[ $cur =~ $i ]] && continue availablevalues="$i $availablevalues" done # Check that there is no unique prefix for all remaining options (God knows why I have to do this. 
Must be missing something) # TODO when all suboptions start with the same prefix, it is not working great uniqueprefix=1 prefix=${availablevalues:0:1} for i in $availablevalues;do [[ ${i:0:1} == $prefix ]] || uniqueprefix=0 done #if [[ "$1" == *'\"'% ]]; #then # compopt -o nospace #fi #added for --format in squeue if [[ ${COMP_WORDS[COMP_CWORD-1]} == "$argname" ]]; then # echo "The first value is about to be entered" >> loglog cur="" COMPREPLY=( $( compgen -W "${1}" -- $cur ) ) ; return fi if [[ ${COMP_WORDS[COMP_CWORD-1]} == '=' && "$cur" != *,* ]]; then # echo "A supplementary value is being entered" >> loglog COMPREPLY=( $( compgen -W "${1}" -- $cur ) ) ; return fi if [[ ${cur:${#cur}-1:1} == "," && $uniqueprefix == 0 ]]; then echo "A supplementary value is about to be entered and there is a no unique suffix" >> loglog compvalues="" for i in $1;do [[ $cur =~ $i ]] && continue compvalues="$i $compvalues" done cur="" COMPREPLY=( $( compgen -W "${compvalues}" -- $cur ) ) ; return fi if [[ "$cur" =~ "," ]] ; then echo "A supplementary value is about to be entered and there is a unique prefix or we are in the middle of one" >> loglog compvalues="" for i in $1;do [[ $cur =~ $i ]] && continue compvalues="$compvalues ${cur%,*},$i" #compvalues="$compvalues $i" done COMPREPLY=( $( compgen -W "${compvalues}" -- $cur ) ) ; # This is lame, we show complete list rather than last element return fi return 255 } function param () { argname="$1" [[ ${COMP_WORDS[COMP_CWORD]} == "=" && ${COMP_WORDS[COMP_CWORD-1]} == $1 ]] && return 0 [[ ${COMP_WORDS[COMP_CWORD-1]} == "=" && ${COMP_WORDS[COMP_CWORD-2]} == $1 ]] && return 0 [[ ${COMP_WORDS[COMP_CWORD-1]} == $1 ]] && return 0 return 255 } function _jobs() { echo $( scontrol -o show jobs | cut -d' ' -f 1 | cut -d'=' -f 2 ) ; } function _wckeys() { echo $(sacctmgr -p -n list wckeys | cut -d'|' -f1) ; } function _qos() { echo $(sacctmgr -p -n list qos | cut -d'|' -f1) ; } function _clusters() { echo $(sacctmgr -p -n list clusters | cut 
-d'|' -f1) ; } function _jobnames() { echo $( scontrol -o show jobs | cut -d' ' -f 2 | cut -d'=' -f 2 ) ; } function _partitions() { echo $(scontrol show partitions|grep PartitionName|cut -c 15- |cut -f 1 -d' '|paste -s -d ' ') ; } function _nodes() { echo $(scontrol show nodes | grep NodeName | cut -c 10- | cut -f 1 -d' ' | paste -s -d ' ') ; } function _accounts() { echo $(sacctmgr -pn list accounts | cut -d'|' -f1 | paste -s -d' ') ; } function _licenses() { echo $(scontrol show config| grep Licenses | sed 's/Licenses *=//'| paste -s -d' ') ; } function _nodes() { echo $(scontrol show nodes | grep NodeName | cut -c 10- | cut -f 1 -d' ' | paste -s -d ' ') ; } function _features() { echo $(scontrol -o show nodes|cut -d' ' -f7|sed 's/Features=//'|sort -u|tr -d '()'|paste -d, -s) ; } function _users() { echo $(sacctmgr -pn list users | cut -d'|' -f1) ; } function _reservations() { echo $(scontrol -o show reservations | cut -d' ' -f1 | cut -d= -f2) ; } function _gres() { echo $(scontrol show config | grep GresTypes | cut -d= -f2) } function _jobname() { echo $(scontrol show -o jobs | cut -d' ' -f 2 | sed 's/Name=//') } function _resource() { echo $(sacctmgr -pn list resource | cut -d'|' -f1 | paste -s -d' ') } function _step() { echo $( scontrol -o show step | cut -d' ' -f 1 | cut -d'=' -f 2 ) ; } _sacctmgr() { _get_comp_words_by_ref cur prev words cword _split_long_opt local subopts="" local commands="add archive create delete dump list load modify show " local shortoptions="-h -i -n -p -P -Q -r -s -v -V" local longoptions="--help --immediate --noheader --parsable \ --parsable2 --quiet --readonly --associations --verbose --version" local assocparams="clusters= accounts= users= partition= " local assocbasedparams="defaultqos= fairshare= gracetime=\ grpcpumins= grpcpurunmins= grpcpus=\ grpjobs= grpmemory= grpnodes= grpsubmitjobs=\ grpwall= maxcpumins= maxcpus= maxjobs= maxnodes=\ maxsubmitjobs= maxwall= qoslevel=" local qosflags="DenyOneLimit EnforceUsageThreshold 
NoReserve\ PartitionMaxNodes PartitionMinNodes PartitionQos\ PartitionTimeLimit" local qospreempt="cluster cancel checkpoint requeue suspend" local clusflags="aix bgl bgq bluegene crayxt frontend multipleslumd\ sunconstellation xcpu" # Check whether we are in the middle of an option. If so serve them. remainings=$(compute_set_diff "$longoptions" "${COMP_WORDS[*]}") [[ $cur == - ]] && { offer "$shortoptions" ; return ; } [[ $cur == --* ]] && { offer "$remainings" ; return ; } # Search for a command in the argument list (first occurence) # the command might be in any position because of the options command=$(find_first_occurence "${COMP_WORDS[*]}" "$commands") # If no command has been entered, serve the list of valid commands [[ $command == "" ]] && { offer "$commands" ; return ; } # Load command has a specific syntax. Treat it first [[ $command == "load" ]] && { _filedir ; return ; } case $command in add|create) objects="account cluster coordinator qos user " object=$(find_first_occurence "${COMP_WORDS[*]}" "$objects") case $object in account) params="cluster= description= name= organization= parent=\ rawusage= withassoc withcoord withdeleted" if param "cluster" ; then offer_list "$(_clusters)" ; elif param "parent" ; then offer_list "$(_accounts)" ; else offer "$params" fi ;; cluster) params="classification= flags= name= rpc= wolimits" if param "flags" ; then offer_list "$clusflags" ; else offer "$params" fi ;; coordinator) params="accounts= names=" if param "names" ; then offer_list "$(_users)" ; elif param "accounts" ; then offer_list "$(_accounts)" ; else offer "$params" fi ;; qos) params="flags= gracetime= grpcpumins= grpcpurunmins= grpcpus=\ grpjobs= grpnodes= grpsubmitjobs= grpwall= maxcpumins=\ maxcpus= maxcpusperuser= maxjobs= maxnodes= mincpus=\ maxnodesperuser= maxsubmitjobs= maxwall= name= preempt=\ preemptmode= priority= usagefactor= usagethreshold=\ withdeleted" if param "preemptmode" ; then offer_list "$qospreempt" ; elif param "flags" ; then 
offer_list "$qosflags" ; elif param "preempt" ; then offer_list "$(_qos)" ; else offer "$params" fi ;; user) params="account= adminlevel= cluster= defaultaccount=\ defaultwckey= name= partition= rawusage= wckey= withassoc withcoord withdeleted" if param "defaultaccount" ; then offer_list "$(_accounts)" ; elif param "account" ; then offer_list "$(_accounts)"; elif param "adminlevel" ; then offer_list "none operator admin" ; elif param "cluster" ; then offer_list "$(_cluster)" ; elif param "defaultwckey" ; then offer_list "$(_wckey)" ; else offer "$params" fi ;; *) offer "$objects" ;; esac ;; archive) objects="dump load" object=$(find_first_occurence "${COMP_WORDS[*]}" "$objects") case $object in dump) _filedir ;; load) _filedir ;; *) offer "$objects" ;; esac ;; delete) objects="account cluster coordinator qos user" object=$(find_first_occurence "${COMP_WORDS[*]}" "$objects") case $object in account) if [[ "${COMP_WORDS[*]}" != *where* ]] ; then offer "where" ; return ;fi params="cluster= description= name= organization= parent=" if param "cluster" ; then offer_list "$(_clusters)" ; elif param "parent" ; then offer_list "$(_accounts)" ; elif param "name" ; then offer_list "$(_accounts)" ; else offer "$params" fi ;; cluster) if [[ "${COMP_WORDS[*]}" != *where* ]] ; then offer "where" ; return ;fi params="classification= flags= name= rpc= $assocbasedparams" if param "flags" ; then offer_list "$clusflags" ; elif param "defaultqos" ; then offer_list "$(_qos)" ; else offer "$params" fi ;; coordinator) if [[ "${COMP_WORDS[*]}" != *where* ]] ; then offer "where" ; return ;fi params="accounts= names=" if param "names" ; then offer_list "$(_users)" ; elif param "accounts" ; then offer_list "$(_accounts)" ; else offer "$params" fi ;; user) params="account= adminlevel= cluster= defaultaccount=\ defaultwckey= name= wckeys= withassoc" if [[ "${COMP_WORDS[*]}" != *where* ]] ; then offer "where" ; return ;fi if param "defaultaccount" ; then offer_list "$(_accounts)" ; elif param 
"account" ; then offer_list "$(_accounts)"; elif param "adminlevel" ; then offer_list "none operator admin" ; elif param "cluster" ; then offer_list "$(_cluster)" ; elif param "wckeys" ; then offer_list "$(_wckeys)" ; elif param "defaultwckey" ; then offer_list "$(_wckey)" ; else offer "$params" ; fi ;; *) offer "$objects" ;; esac ;; list|show) objects="account association cluster configuration \ event problem qos resource transaction user wckey" object=$(find_first_occurence "${COMP_WORDS[*]}" "$objects") case $object in account) if [[ "${COMP_WORDS[*]}" != *where* ]] ; then offer "where" ; return ;fi params="cluster= description= name= organization= parent=\ withassoc withcoord withdeleted $assocparams\ $assocbasedparams" if param "cluster" ; then offer_list "$(_clusters)" ; elif param "parent" ; then offer_list "$(_accounts)" ; elif param "users" ; then offer_list "$(_users)" ; elif param "partition" ; then offer_list "$(_partition)" ; elif param "defaultqos" ; then offer_list "$(_qos)" ; elif param "name" ; then offer_list "$(_accounts)" ; else offer "$params" fi ;; association) if [[ "${COMP_WORDS[*]}" != *where* ]] ; then offer "where" ; return ;fi params="$assocparams onlydefaults tree withdeleted withsubaccounts\ wolimits wopinfo woplimits" if param "clusters" ; then offer_list "$(_clusters)" ; elif param "accounts" ; then offer_list "$(_accounts)" ; elif param "users" ; then offer_list "$(_users)" ; elif param "partition" ; then offer_list "$(_partitions)" ; else offer "$params" fi ;; cluster) if [[ "${COMP_WORDS[*]}" != *where* ]] ; then offer "where" ; return ;fi params="classification= flags= name= rpc= $assocbasedparams\ wolimits" if param "flags" ; then offer_list "$clusflags" ; elif param "defaultqos" ; then offer_list "$(_qos)" ; else offer "$params" fi ;; event) if [[ "${COMP_WORDS[*]}" != *where* ]] ; then offer "where" ; return ;fi params="all_clusters all_time clusters= end= event= maxcpu=\ mincpus= nodes= reason= start= states= user= " if param 
"clusters" ; then offer_list "$(_clusters)" ; elif param "nodes" ; then offer_list "$(_nodes)" ; elif param "event" ; then offer_list "cluster node" ; elif param "states" ; then offer_list "alloc allocated down drain\ fail failing idle mixed maint power_down power_up\ resume" ; elif param "users" ; then offer_list "$(_users)" ; else offer "$params" fi ;; qos) if [[ "${COMP_WORDS[*]}" != *where* ]] ; then offer "where" ; return ;fi params="flags= gracetime= grpcpumins= grpcpus= grpcpurunmins=\ grpjobs= grpnodes= grpsubmitjobs= grpwall= id= maxcpumins=\ maxcpusmins= maxcpus= maxjobs= maxnodes= mincpus=\ maxnodesperuser= maxsubmitjobs= maxwall= name= preempt=\ preemptmode= priority= rawusage= usagefactor=\ usagethreshold= withdeleted" if param "preemptmode" ; then offer_list "cluster cancel\ checkpoint requeue\ suspend" ; elif param "flags" ; then offer_list "$qosflags" ; elif param "preempt" ; then offer_list "$(_qos)" ; else offer "$params" fi ;; resource) if [[ "${COMP_WORDS[*]}" != *where* ]] ; then offer "where" ; return ;fi params="cluster= count= description= flags= servertype= name=\ precentallowed= server= type=" if param "name" ; then offer_list "$(_resource)" ; elif param "cluster" ; then offer_list "$(_clusters)" ; else offer "$params" fi ;; transaction) if [[ "${COMP_WORDS[*]}" != *where* ]] ; then offer "where" ; return ;fi params="accounts= action= actor= clusters= endtime= startime=\ users= withassoc" if param "accounts" ; then offer_list "$(_accounts)" ; elif param "actor" ; then offer_list "$(_users)" ; elif param "clusters" ; then offer_list "$(_clusters)" ; else offer "$params" fi ;; user) if [[ "${COMP_WORDS[*]}" != *where* ]] ; then offer "where" ; return ;fi params="account= adminlevel= cluster= defaultaccount=\ defaultwckey= name= partition= wckeys= withassoc withcoord \ withdelted" if [[ "${COMP_WORDS[*]}" != *where* ]] ; then offer "where" ; return ;fi if param "defaultaccount" ; then offer_list "$(_accounts)" ; elif param "account" ; then 
offer_list "$(_accounts)"; elif param "adminlevel" ; then offer_list "none operator admin" ; elif param "cluster" ; then offer_list "$(_cluster)" ; elif param "wckeys" ; then offer_list "$(_wckeys)" ; elif param "defaultwckey" ; then offer_list "$(_wckey)" ; else offer "$params" ; fi ;; *) offer "$objects" ;; esac ;; modify) objects="account cluster job qos user" object=$(find_first_occurence "${COMP_WORDS[*]}" "$objects") case $object in account) if [[ "${COMP_WORDS[*]}" != *where* ]] ; then offer "where" ; return ;fi params="cluster= description= name= organization= parent=\ rawusage= $assocbasedparams" if param "cluster" ; then offer_list "$(_clusters)" ; elif param "parent" ; then offer_list "$(_accounts)" ; elif param "name" ; then offer_list "$(_accounts)" ; else offer "$params set" fi ;; cluster) if [[ "${COMP_WORDS[*]}" != *where* ]] ; then offer "where" ; return ;fi params="classification= flags= name= rpc= $assocbasedparams" if param "flags" ; then offer_list "$clusflags" ; elif param "defaultqos" ; then offer_list "$(_qos)" ; else offer "$params set" fi ;; qos) if [[ "${COMP_WORDS[*]}" != *where* ]] ; then offer "where" ; return ;fi params="flags= gracetime= grpcpumins= grpcpurunmins= grpcpus=\ grpjobs= grpnodes= grpsubmitjobs= grpwall= maxcpumins=\ maxcpus= maxcpusperuser= maxjobs= maxnodes= mincpus=\ maxnodesperuser= maxsubmitjobs= maxwall= name= preempt=\ preemptmode= priority= rawusage= usagefactor=\ usagethreshold= withdeleted" if param "flags" ; then offer_list "$qosflags" ; elif param "name" ; then offer_list "$(_qos)" ; elif param "preemptmode" ; then offer_list "$qospreempt" ; elif param "preempt" ; then offer_list "$(_qos)" ; else offer "$params set" fi ;; user) if [[ "${COMP_WORDS[*]}" != *where* ]] ; then offer "where" ; return ;fi params="account= adminlevel= cluster= defaultaccount=\ defaultwckey= name= partition= rawusage= wckeys= withassoc" if [[ "${COMP_WORDS[*]}" != *where* ]] ; then offer "where" ; return ;fi if param "defaultaccount" 
; then offer_list "$(_accounts)" ; elif param "account" ; then offer_list "$(_accounts)"; elif param "adminlevel" ; then offer_list "none operator admin" ; elif param "cluster" ; then offer_list "$(_cluster)" ; elif param "wckeys" ; then offer_list "$(_wckeys)" ; elif param "defaultwckey" ; then offer_list "$(_wckey)" ; else offer "$params" ; fi ;; *) offer "$objects" ;; esac ;; esac } complete -F _sacctmgr sacctmgr _sreport() { _get_comp_words_by_ref cur prev words cword _split_long_opt local subopts="" local commands="cluster job user reservation" local shortoptions="-a -n -h -p -P -Q -t -v -V" local longoptions="--all_clusters --help --noheader --parsable\ --parsable2 --quiet --verbose --version" # Check whether we are in the middle of an option. If so serve them. remainings=$(compute_set_diff "$longoptions" "${COMP_WORDS[*]}") [[ $cur == - ]] && { offer "$shortoptions" ; return ; } [[ $cur == --* ]] && { offer "$remainings" ; return ; } # Search for a command in the argument list (first occurence) # the command might be in any position because of the options command=$(find_first_occurence "${COMP_WORDS[*]}" "$commands") # If no command has been entered, serve the list of valid commands [[ $command == "" ]] && { offer "$commands" ; return ; } opts_all="All_Clusters Clusters= End= Format= Start=" case $command in user) objects="TopUsage" object=$(find_first_occurence "${COMP_WORDS[*]}" "$objects") case $object in TopUsage) params="$opts_all Accounts= Group TopCount= Users=" if param "Clusters" ; then offer_list "$(_clusters)" ; elif param "Format" ; then offer_list "Account Cluster Login\ Proper User" ; elif param "Accounts" ; then offer_list "$(_accounts)" ; elif param "Users" ; then offer_list "$(_users)" ; else offer "$params" fi ;; *) offer "$objects" ;; esac ;; reservation) objects="Utilization" object=$(find_first_occurence "${COMP_WORDS[*]}" "$objects") case $object in Utilization) params="$opts_all Names= Nodes= Accounts= Group TopCount= Users= " if param 
"Clusters" ; then offer_list "$(_clusters)" ; elif param "Format" ; then offer_list "Allocated Associations \ Clusters CPUCount CPUTime End Flags Idle Name Nodes\ ReservationId Start TotalTime"; elif param "Nodes" ; then offer_list "$(_nodes)" ; else offer "$params" fi ;; *) offer "$objects" ;; esac ;; job) objects="SizesByAccount SizesByAccountAndWckey SizesByWckey" object=$(find_first_occurence "${COMP_WORDS[*]}" "$objects") case $object in SizesByAccount|SizesByAccountAndWckey) params="$opts_all Accounts= FlatView GID= Grouping= \ Jobs= Nodes= Partitions= PrintJobCount Users= Wckeys=" if param "Clusters" ; then offer_list "$(_clusters)" ; elif param "Format" ; then offer_list "Account Cluster" ; elif param "Accounts" ; then offer_list "$(_accounts)" ; elif param "GID" ; then _gids ; elif param "Users" ; then offer_list "$(_users)" ; elif param "Wckeys" ; then offer_list "$(_wckeys)" ; else offer "$params" fi ;; SizesByWckey) params="$opts_all Accounts= FlatView GID= Grouping= \ Jobs= Nodes= OPartitions= PrintJobCount Users= Wckeys=" if param "Clusters" ; then offer_list "$(_clusters)" ; elif param "Format" ; then offer_list "Wckey Cluster" ; elif param "Accounts" ; then offer_list "$(_accounts)" ; elif param "GID" ; then _gids ; elif param "Users" ; then offer_list "$(_users)" ; elif param "Wckeys" ; then offer_list "$(_wckeys)" ; else offer "$params" fi ;; *) offer "$objects" ;; esac ;; cluster) objects="AccountUtilizationByUser UserUtilizationByAccount \ UserUtilizationByWCKey Utilization WCKeyUtilizationByUser" object=$(find_first_occurence "${COMP_WORDS[*]}" "$objects") case $object in Utilization) params="$opts_all Names= Nodes=" if param "Clusters" ; then offer_list "$(_clusters)" ; elif param "Format" ; then offer_list "Allocated Cluster \ CPUCount Down Idle Overcommited PlannedDown Reported Reserved"; elif param "Nodes" ; then offer_list "$(_nodes)" ; else offer "$params" fi ;; AccountUtilizationByUser|UserUtilizationByAccount) params="$opts_all 
                Accounts= Tree Users= Wckeys="
            if param "Clusters" ; then offer_list "$(_clusters)"
            elif param "Format" ; then offer_list "Accounts Cluster CPUCount \
                Login Proper Used"
            elif param "Accounts" ; then offer_list "$(_accounts)"
            elif param "Users" ; then offer_list "$(_users)"
            elif param "Wckeys" ; then offer_list "$(_wckeys)"
            else offer "$params"
            fi
            ;;
        UserUtilizationByWCKey|WCKeyUtilizationByUser)
            params="$opts_all Accounts= Tree Users= Wckeys="
            if param "Clusters" ; then offer_list "$(_clusters)"
            elif param "Format" ; then offer_list "Cluster CPUCount Login \
                Proper Used Wckey"
            elif param "Accounts" ; then offer_list "$(_accounts)"
            elif param "Users" ; then offer_list "$(_users)"
            elif param "Wckeys" ; then offer_list "$(_wckeys)"
            else offer "$params"
            fi
            ;;
        *) offer "$objects" ;;
        esac
        ;;
    esac
}
complete -F _sreport sreport

_scontrol() {
    local cur=${COMP_WORDS[COMP_CWORD]}
    local prev=${COMP_WORDS[COMP_CWORD-1]}
    local commands="abort checkpoint cluster create completing delete details \
                    errnumstr help hold notify oneliner \
                    pidinfo listpids ping quit reboot_nodes reconfigure release \
                    requeue requeuehold resume schedloglevel \
                    script setdebug setdebugflags show shutdown suspend \
                    takeover uhold update verbose version wait_job"
    local shortoptions="-a -d -h -M -o -Q -v -V"
    local longoptions="--all --details --help --hide --cluster --oneliner \
                       --quiet --verbose --version"

    # Check whether we are in the middle of an option. If so serve them.
    remainings=$(compute_set_diff "$longoptions" "${COMP_WORDS[*]}")
    [[ $cur == - ]] && { offer "$shortoptions" ; return ; }
    [[ $cur == -- ]] && { offer "$remainings" ; return ; }
    [[ $cur == --* ]] && { offer "$(sed 's/<[^>]*>//g' <<< $remainings)" ; return ; }

    # Search for a command in the argument list (first occurrence);
    # the command might be in any position because of the options.
    command=$(find_first_occurence "${COMP_WORDS[*]}" "$commands")

    # If no command has been entered, serve the list of valid commands.
    [[ $command == "" ]] && { offer "$commands" ; return ; }

    # Otherwise, process the command.
    case $command in
    shutdown) # scontrol shutdown object
        offer "slurmctld controller"
        ;;
    setdebug) # scontrol setdebug value
        offer "quiet info warning error debug debug2 debug3 debug4 debug5" # FIXME
        ;;
    uhold | suspend | release | requeue | resume | hold)
        offer "$(_jobs)"
        ;;
    #TODO notify
    checkpoint) # scontrol checkpoint create jobid [parameter1=value1,...]
        # This one has unusual ordering: the object comes before the command.
        # command subcommand argument
        #TODO add support for additional options, cf. the man page
        objects="able create disable enable error restart requeue vacate"
        if [[ $prev == checkpoint ]]; then offer "$objects"
        elif [[ $objects == *$prev* ]]; then offer "$(_jobs)"
        else echo todo #TODO
        fi
        ;;
    show) # scontrol show object [id]
        objects="aliases config block daemons frontend hostlist hostlistsorted \
                 hostnames job nodes partitions reservations slurmd steps \
                 submp topology"

        # Search for the current object in the argument list
        object=$(find_first_occurence "${COMP_WORDS[*]}" "$objects")

        # If no object has yet been (fully) typed in, serve the list of objects
        [[ $object == "" ]] && { offer "$objects" ; return ; }

        # Otherwise, offer the ids depending on the object
        if param "job" ; then offer "$(_jobs)" ; fi
        if param "nodes" ; then offer_list "$(_nodes)" ; fi
        if param "partitions" ; then offer "$(_partitions)" ; fi
        if param "reservations" ; then offer "$(_reservations)" ; fi
        #TODO if object "steps"
        ;;
    delete) # scontrol delete objectname=id
        parameters="partitionname= reservationname="
        # If a parameter has been fully typed in, serve the corresponding
        # values; otherwise, serve the list of parameters.
        if param "partitionname" ; then offer_many "$(_partitions)"
        elif param "reservationname" ; then offer_many "$(_reservations)"
        else offer "$parameters"
        fi
        ;;
    update)
        parameters="jobid= step= nodename= partitionname= \
                    reservationname="
        param=$(find_first_partial_occurence "${COMP_WORDS[*]}" "$parameters")
        [[ $param == "" ]] && { offer "$parameters" ; return ; }

        # If a parameter has been fully typed in, serve the corresponding
        # values, if it is the first one.
        if param "jobid" ; then offer_many "$(_jobs)" ; return
        elif param "nodename" ; then offer_many "$(_nodes)" ; return
        elif param "partitionname" ; then offer_many "$(_partitions)" ; return
        elif param "reservationname" ; then offer_many "$(_reservations)" ; return
        elif param "step" ; then offer_many "$(_step)" ; return
        fi

        # Otherwise, process the others based on the first one
        case $param in
        jobid)
            local parameters="account= conn-type= \
                contiguous= dependency= \
                eligibletime=yyyy-mm-dd excnodelist= \
                features= geometry= gres= \
                jobid= licenses= \
                mincpusnode= minmemorycpu= \
                minmemorynode= \
                mintmpdisknode= name= \
                nice[=delta] nodelist= \
                numcpus= \
                numtasks= partition= \
                priority= qos= reqcores= \
                reqnodelist= reqsockets= \
                reqthreads= requeue=<0|1> \
                reservationname= rotate= \
                shared= starttime=yyyy-mm-dd \
                switches=[@] \
                timelimit=[d-]h:m:s userid= \
                wckey="
            remainings=$(compute_set_diff "$parameters" "${COMP_WORDS[*]}")

            # If a new named argument is about to be entered, serve the list of options
            [[ $cur == "" && $prev != "=" ]] && { offer "$remainings" ; return ; }

            # Test all potential arguments and serve corresponding values
            if param "account" ; then offer_many "$(_accounts)"
            elif param "excnodelist" ; then offer_many "$(_nodes)"
            elif param "nodelist" ; then offer_many "$(_nodes)"
            elif param "features" ; then offer_many "$(_features)"
            elif param "gres" ; then offer_many "$(_gres)"
            elif param "licenses" ; then offer_many "$(_licenses)"
            elif param "partition" ; then offer_many "$(_partitions)"
            elif param "reservationname" ; then offer_many "$(_reservations)"
            elif param "qos" ; then offer_many "$(_qos)"
            elif param "wckey" ; then offer_many "$(_wckeys)"
            elif param "conn-type" ; then offer_many "MESH TORUS NAV"
            elif param "rotate" ; then offer_many "yes no"
            elif param "shared" ; then offer_many "yes no"
            else offer "$(sed 's/\=[^ ]*/\=/g' <<< $remainings)"
            fi
            ;;
        nodename)
            local parameters="features= gres= \
                reason= state= weight="
            remainings=$(compute_set_diff "$parameters" \
                                          "${COMP_WORDS[*]}")

            # If a new named argument is about to be entered, serve the list of options
            [[ $cur == "" && $prev != "=" ]] && { offer "$remainings" ; return ; }

            # Test all potential arguments and serve corresponding values
            if param "features" ; then offer_many "$(_features)"
            elif param "gres" ; then offer_many "$(_gres)"
            elif param "state" ; then offer_many "noresp drain fail future \
                resume power_down \
                power_up undrain"
            else offer "$(sed 's/\=[^ ]*/\=/g' <<< $remainings)"
            fi
            ;;
        partitionname)
            local parameters="allowgroups= allocnodes= \
                alternate= default=yes|no \
                defaulttime=d-h:m:s|unlimited defmempercpu= \
                defmempernode= disablerootjobs=yes|no \
                gracetime= hidden=yes|no \
                maxmempercpu= maxmempernode= \
                maxnodes= maxtime=d-h:m:s|unlimited \
                minnodes= nodes= \
                preemptmode=off|cancel|checkpoint|requeue|suspend \
                priority=count rootonly=yes|no reqresv= \
                shared=yes|no|exclusive|force \
                state=up|down|drain|inactive"
            remainings=$(compute_set_diff "$parameters" "${COMP_WORDS[*]}")

            # If a new named argument is about to be entered, serve the list of options
            [[ $cur == "" && $prev != "=" ]] && { offer "$remainings" ; return ; }

            # Test all potential arguments and serve corresponding values
            if param "allocnodes" ; then offer_many "$(_nodes)"
            elif param "nodes" ; then offer_many "$(_nodes)"
            elif param "alternate" ; then offer_many "$(_partitions)"
            elif param "default" ; then offer_many "yes no"
            elif param "preemptmode" ; then offer_many "off cancel checkpoint \
                requeue suspend"
            elif param "shared" ; then offer_many "yes no exclusive force"
            elif param "state" ; then offer_many "up down drain inactive"
            elif param "disablerootjobs" ; then offer_many "yes no"
            elif param "hidden" ; then offer_many "yes no"
            elif param "rootonly" ; then offer_many "yes no"
            elif param "reqresv" ; then offer_many "yes no"
            else offer "$(sed 's/\=[^ ]*/\=/g' <<< $remainings)"
            fi
            ;;
        reservationname)
            local parameters="accounts= corecnt= \
                duration=[days-]hours:minutes:seconds \
                endtime=yyyy-mm-dd[thh:mm[:ss]] \
                features= \
                flags=maint,overlap,ignore_jobs,daily,weekly \
                licenses= nodecnt= \
                nodes= users= \
                partitionname= \
                starttime=yyyy-mm-dd[thh:mm[:ss]]"
            remainings=$(compute_set_diff "$parameters" "${COMP_WORDS[*]}")

            # If a new named argument is about to be entered, serve the list of options
            [[ $cur == "" && $prev != "=" ]] && { offer "$remainings" ; return ; }

            # Test all potential arguments and serve corresponding values
            if param "accounts" ; then offer_many "$(_accounts)"
            elif param "licenses" ; then offer_many "$(_licenses)"
            elif param "nodes" ; then offer_many "$(_nodes)"
            elif param "features" ; then offer_many "$(_features)"
            elif param "users" ; then offer_many "$(_users)"
            elif param "flags" ; then offer_many "daily first_cores \
                ignore_jobs license_only \
                maint overlap part_nodes \
                spec_nodes static_alloc \
                time_float weekly"
            elif param "partitionname" ; then offer_many "$(_partitions)"
            else offer "$(sed 's/\=[^ ]*/\=/g' <<< $remainings)"
            fi
            ;;
        step)
            local parameters="stepid=[.] \
                CompFile= TimeLimit=
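The completion functions above all follow one pattern: scan `COMP_WORDS` for the first word that matches a known command, and either offer the command list or the values valid for that command. The sketch below distills that pattern into a self-contained script that can be run non-interactively, the way bash's completion machinery would invoke it. The command name `mycmd` and the `_offer` helper are hypothetical stand-ins, not part of the Slurm script (which uses `offer`, `offer_list`, and `find_first_occurence` helpers defined elsewhere in the file):

```shell
#!/bin/bash
# Minimal sketch of the "find command, then offer values" completion pattern.
# _offer is a simplified stand-in for the offer/offer_list helpers above.
_offer() { COMPREPLY=( $(compgen -W "$1" -- "$cur") ); }

_mycmd() {
    local cur=${COMP_WORDS[COMP_CWORD]}
    local commands="show delete update"
    local command="" w c

    # Find the first word matching a known command; it may appear in any
    # position because options can precede it.
    for w in "${COMP_WORDS[@]}"; do
        for c in $commands; do
            [[ $w == "$c" ]] && { command=$c; break 2; }
        done
    done

    # No command yet: serve the list of valid commands.
    [[ -z $command ]] && { _offer "$commands"; return; }

    # Otherwise serve the values valid for that command.
    case $command in
    show) _offer "job nodes partitions" ;;
    *)    _offer "" ;;
    esac
}

# Drive the function non-interactively, as bash would on TAB after "mycmd sh":
COMP_WORDS=(mycmd sh); COMP_CWORD=1
_mycmd
echo "${COMPREPLY[@]}"   # -> show
```

In an interactive shell the function would be wired up with `complete -F _mycmd mycmd`, just as the script registers `_sacctmgr`, `_sreport`, and `_scontrol` above.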
EI endstream endobj 232 0 obj <> stream 0 0 0 2 48 45 d1 48 0 0 43 0 2 cm BI /IM true /W 48 /H 43 /BPC 1 /F /CCF /DP <> ID &looaooTkӓU EI endstream endobj 233 0 obj <> stream 0 0 0 -16 50 47 d1 50 0 0 63 0 -16 cm BI /IM true /W 50 /H 63 /BPC 1 /F /CCF /DP <> ID &|RpAd)I||/7 /I$@ EI endstream endobj 234 0 obj <> stream 0 0 0 -16 48 45 d1 48 0 0 61 0 -16 cm BI /IM true /W 48 /H 61 /BPC 1 /F /CCF /DP <> ID &(ɪ@^?A3Oa׏?T EI endstream endobj 235 0 obj <> stream 0 0 0 2 48 45 d1 48 0 0 43 0 2 cm BI /IM true /W 48 /H 43 /BPC 1 /F /CCF /DP <> ID &.8|."x31.K/mp~O?~a_Oa?  EI endstream endobj 243 0 obj <> stream 0 0 0 -18 45 47 d1 45 0 0 65 0 -18 cm BI /IM true /W 45 /H 65 /BPC 1 /F /CCF /DP <> ID &OOA>x_ ] k?| џӿ ~}[ .;VkO! EI endstream endobj 244 0 obj <> stream 0 0 0 -16 52 47 d1 52 0 0 63 0 -16 cm BI /IM true /W 52 /H 63 /BPC 1 /F /CCF /DP <> ID & O @ EI endstream endobj 245 0 obj <> stream 0 0 0 -16 48 45 d1 48 0 0 61 0 -16 cm BI /IM true /W 48 /H 61 /BPC 1 /F /CCF /DP <> ID &>(~MR^8KK___UF|€ EI endstream endobj 246 0 obj <> stream 101 0 0 0 0 0 d1 endstream endobj 247 0 obj <> stream 0 0 0 0 49 68 d1 49 0 0 68 0 0 cm BI /IM true /W 49 /H 68 /BPC 1 /F /CCF /DP <> ID &0@;a@L:Y5X]`.+k ɀft,> {0IH=0T ,@5 EI endstream endobj 260 0 obj <> stream 0 0 0 -2 35 37 d1 35 0 0 39 0 -2 cm BI /IM true /W 35 /H 39 /BPC 1 /F /CCF /DP <> ID &!P!tpO_%#!d@xKZ@N (c8H? 
θ_@ EI endstream endobj 261 0 obj <> stream 0 0 0 -2 42 37 d1 42 0 0 39 0 -2 cm BI /IM true /W 42 /H 39 /BPC 1 /F /CCF /DP <> ID &Cah<>QPaa5H_@ EI endstream endobj 262 0 obj <> stream 44 0 0 0 0 0 d1 endstream endobj 263 0 obj <> stream 0 0 0 -2 44 37 d1 44 0 0 39 0 -2 cm BI /IM true /W 44 /H 39 /BPC 1 /F /CCF /DP <> ID &䆰N~N2Ћkj  EI endstream endobj 264 0 obj <> stream 0 0 0 -2 44 37 d1 44 0 0 39 0 -2 cm BI /IM true /W 44 /H 39 /BPC 1 /F /CCF /DP <> ID &5G0fc^,8u\0  EI endstream endobj 273 0 obj <> stream 0 0 0 11 41 19 d1 41 0 0 8 0 11 cm BI /IM true /W 41 /H 8 /BPC 1 /F /CCF /DP <> ID &j  EI endstream endobj 274 0 obj <> stream 0 0 0 -63 43 0 d1 43 0 0 63 0 -63 cm BI /IM true /W 43 /H 63 /BPC 1 /F /CCF /DP <> ID &߃l5>{<7|7{ᇿ6_w. k ,5 !|@ EI endstream endobj 275 0 obj <> stream 0 0 0 32 14 45 d1 14 0 0 13 0 32 cm BI /IM true /W 14 /H 13 /BPC 1 /F /CCF /DP <> ID &_- EI endstream endobj 276 0 obj <> stream 70 0 0 0 0 0 d1 endstream endobj 277 0 obj <> stream 0 0 0 0 45 61 d1 45 0 0 61 0 0 cm BI /IM true /W 45 /H 61 /BPC 1 /F /CCF /DP <> ID &pO!&D{dIw @@ EI endstream endobj 278 0 obj <> stream 0 0 0 -2 12 61 d1 12 0 0 63 0 -2 cm BI /IM true /W 12 /H 63 /BPC 1 /F /CCF /DP <> ID &ۂ&/X'ɪ @ EI endstream endobj 279 0 obj <> stream 0 0 0 -9 41 70 d1 41 0 0 79 0 -9 cm BI /IM true /W 41 /H 79 /BPC 1 /F /CCF /DP <> ID &ү~߼?~ ?߽0~oP EI endstream endobj 280 0 obj <> stream 38 0 0 0 0 0 d1 endstream endobj 281 0 obj <> stream 0 0 0 -18 35 45 d1 35 0 0 63 0 -18 cm BI /IM true /W 35 /H 63 /BPC 1 /F /CCF /DP <> ID &> stream 0 0 0 -18 43 47 d1 43 0 0 65 0 -18 cm BI /IM true /W 43 /H 65 /BPC 1 /F /CCF /DP <> ID &1'. 
7M}&OM /zw_ 4Ka @ EI endstream endobj 283 0 obj <> stream 0 0 0 -18 48 45 d1 48 0 0 63 0 -18 cm BI /IM true /W 48 /H 63 /BPC 1 /F /CCF /DP <> ID &pᬆoP$CLa?707ǿ EI endstream endobj 284 0 obj <> stream 0 0 0 -18 43 47 d1 43 0 0 65 0 -18 cm BI /IM true /W 43 /H 65 /BPC 1 /F /CCF /DP <> ID & 1@@|/A7&޿Km.׃aج0_@DA>I&K޸0]\xkk ,,/ EI endstream endobj 285 0 obj <> stream 0 0 0 -16 43 47 d1 43 0 0 63 0 -16 cm BI /IM true /W 43 /H 63 /BPC 1 /F /CCF /DP <> ID &c( =>Cp釯oI~\>]~0^!u ax&x EI endstream endobj 286 0 obj <> stream 0 0 0 -18 43 47 d1 43 0 0 65 0 -18 cm BI /IM true /W 43 /H 65 /BPC 1 /F /CCF /DP <> ID &o{M/p5<@ EI endstream endobj 287 0 obj <> stream 0 0 0 -18 43 47 d1 43 0 0 65 0 -18 cm BI /IM true /W 43 /H 65 /BPC 1 /F /CCF /DP <> ID & 1o00Apax5џc\x}kN@ EI endstream endobj 292 0 obj <> stream 0 0 0 2 14 45 d1 14 0 0 43 0 2 cm BI /IM true /W 14 /H 43 /BPC 1 /F /CCF /DP <> ID &_-'&l,5 EI endstream endobj 293 0 obj <> stream 88 0 0 0 0 0 d1 endstream endobj 294 0 obj <> stream 104 0 0 0 0 0 d1 endstream endobj 295 0 obj <> stream 0 0 0 -18 32 67 d1 32 0 0 85 0 -18 cm BI /IM true /W 32 /H 85 /BPC 1 /F /CCF /DP <> ID &!"'}O@0  EI endstream endobj 296 0 obj <> stream 0 0 0 -16 45 45 d1 45 0 0 61 0 -16 cm BI /IM true /W 45 /H 61 /BPC 1 /F /CCF /DP <> ID &ȾAG& EI endstream endobj 297 0 obj <> stream 0 0 0 -16 47 45 d1 47 0 0 61 0 -16 cm BI /IM true /W 47 /H 61 /BPC 1 /F /CCF /DP <> ID &dXA.o/__ }0~~)Rjp EI endstream endobj 298 0 obj <> stream 0 0 0 -16 46 45 d1 46 0 0 61 0 -16 cm BI /IM true /W 46 /H 61 /BPC 1 /F /CCF /DP <> ID &8w&G_^_x_AzxK(V: MPP EI endstream endobj 299 0 obj <> stream 0 0 0 2 50 45 d1 50 0 0 43 0 2 cm BI /IM true /W 50 /H 43 /BPC 1 /F /CCF /DP <> ID &k.% ;k EI endstream endobj 300 0 obj <> stream 0 0 0 -18 43 47 d1 43 0 0 65 0 -18 cm BI /IM true /W 43 /H 65 /BPC 1 /F /CCF /DP <> ID &c(= Np7.?륮xY .(!~x}>^Ar=5, EI endstream endobj 301 0 obj <> stream 0 0 0 -18 43 47 d1 43 0 0 65 0 -18 cm BI /IM true /W 43 /H 65 /BPC 1 /F /CCF /DP <> ID &I}~߾}<#[ u"@O_ 
aKkka` EI endstream endobj 306 0 obj <> stream 67 0 0 0 0 0 d1 endstream endobj 307 0 obj <> stream 36 0 0 0 0 0 d1 endstream endobj 308 0 obj <> stream 149 0 0 0 0 0 d1 endstream endobj 309 0 obj <> stream 94 0 0 0 0 0 d1 endstream endobj 310 0 obj <> stream 0 0 0 -75 39 26 d1 39 0 0 101 0 -75 cm BI /IM true /W 39 /H 101 /BPC 1 /F /CCF /DP <> ID &:AX_/_kKZ__kZ/_@ EI endstream endobj 319 0 obj <> stream 0 0 0 -69 72 0 d1 72 0 0 69 0 -69 cm BI /IM true /W 72 /H 69 /BPC 1 /F /CCF /DP <> ID #`5P v߇1{??'Y EI endstream endobj 320 0 obj <> stream 0 0 0 -45 39 2 d1 39 0 0 47 0 -45 cm BI /IM true /W 39 /H 47 /BPC 1 /F /CCF /DP <> ID &™ 0}~`\ ~R܆)op}￷]Xzn EI endstream endobj 321 0 obj <> stream 78 0 0 0 0 0 d1 endstream endobj 322 0 obj <> stream 0 0 0 -45 47 2 d1 47 0 0 47 0 -45 cm BI /IM true /W 47 /H 47 /BPC 1 /F /CCF /DP <> ID & `0h a ;A?ɯ>pOo.Cd  EI endstream endobj 323 0 obj <> stream 0 0 0 -63 31 2 d1 31 0 0 65 0 -63 cm BI /IM true /W 31 /H 65 /BPC 1 /F /CCF /DP <> ID &>0C}߿;"y5_!{ﰾ  EI endstream endobj 324 0 obj <> stream 0 0 0 -45 51 2 d1 51 0 0 47 0 -45 cm BI /IM true /W 51 /H 47 /BPC 1 /F /CCF /DP <> ID &sChXz ߇p;߿{}Uoئ` EI endstream endobj 325 0 obj <> stream 0 0 0 -45 40 2 d1 40 0 0 47 0 -45 cm BI /IM true /W 40 /H 47 /BPC 1 /F /CCF /DP <> ID &l>}rjB=s4t oNbb0 EI endstream endobj 326 0 obj <> stream 0 0 0 -39 66 -12 d1 66 0 0 27 0 -39 cm BI /IM true /W 66 /H 27 /BPC 1 /F /CCF /DP <> ID &?4@ EI endstream endobj 327 0 obj <> stream 119 0 0 0 0 0 d1 endstream endobj 328 0 obj <> stream 0 0 0 -71 60 3 d1 60 0 0 74 0 -71 cm BI /IM true /W 60 /H 74 /BPC 1 /F /CCF /DP <> ID &`6B F` 6 o  u ` XAh-tT| 9@i!k߰av{3A1?i EI endstream endobj 329 0 obj <> stream 79 0 0 0 0 0 d1 endstream endobj 330 0 obj <> stream 0 0 0 -67 34 0 d1 34 0 0 67 0 -67 cm BI /IM true /W 34 /H 67 /BPC 1 /F /CCF /DP <> ID &l/MB  EI endstream endobj 331 0 obj <> stream 0 0 0 -75 6 26 d1 6 0 0 101 0 -75 cm BI /IM true /W 6 /H 101 /BPC 1 /F /CCF /DP <> ID &?_ EI endstream endobj 332 0 obj <> stream 0 0 0 
-67 41 0 d1 41 0 0 67 0 -67 cm BI /IM true /W 41 /H 67 /BPC 1 /F /CCF /DP <> ID &=Qa{7߶>{7.@ vvK0\ !@ EI endstream endobj 333 0 obj <> stream 0 0 0 -67 43 3 d1 43 0 0 70 0 -67 cm BI /IM true /W 43 /H 70 /BPC 1 /F /CCF /DP <> ID &bB Fz/I ~ %렺,.Ap <|<0%[` ج0[  EI endstream endobj 346 0 obj <> stream 0 0 0 0 23 30 d1 23 0 0 30 0 0 cm BI /IM true /W 23 /H 30 /BPC 1 /F /CCF /DP <> ID &` ɪY<7Qxwj  EI endstream endobj 347 0 obj <> stream 0 0 0 0 21 30 d1 21 0 0 30 0 0 cm BI /IM true /W 21 /H 30 /BPC 1 /F /CCF /DP <> ID 6 C{UBS'B<+C5T:@ EI endstream endobj 348 0 obj <> stream 0 0 0 8 24 30 d1 24 0 0 22 0 8 cm BI /IM true /W 24 /H 22 /BPC 1 /F /CCF /DP <> ID &ia5>'|4_<0a@@ EI endstream endobj 349 0 obj <> stream 26 0 0 0 0 0 d1 endstream endobj 350 0 obj <> stream 0 0 0 8 26 30 d1 26 0 0 22 0 8 cm BI /IM true /W 26 /H 22 /BPC 1 /F /CCF /DP <> ID 4 wG\9( EI endstream endobj 351 0 obj <> stream 25 0 0 0 0 0 d1 endstream endobj 352 0 obj <> stream 0 0 0 8 23 41 d1 23 0 0 33 0 8 cm BI /IM true /W 23 /H 33 /BPC 1 /F /CCF /DP <> ID (5`}O ~*@ EI endstream endobj 353 0 obj <> stream 0 0 0 0 21 30 d1 21 0 0 30 0 0 cm BI /IM true /W 21 /H 30 /BPC 1 /F /CCF /DP <> ID &EɪY(5X EI endstream endobj 354 0 obj <> stream 0 0 0 8 21 30 d1 21 0 0 22 0 8 cm BI /IM true /W 21 /H 22 /BPC 1 /F /CCF /DP <> ID &48@>K]I~%+k@@ EI endstream endobj 355 0 obj <> stream 0 0 0 -5 21 34 d1 21 0 0 39 0 -5 cm BI /IM true /W 21 /H 39 /BPC 1 /F /CCF /DP <> ID &0!ɪ{x EI endstream endobj 356 0 obj <> stream 0 0 0 2 21 30 d1 21 0 0 28 0 2 cm BI /IM true /W 21 /H 28 /BPC 1 /F /CCF /DP <> ID &AU_ EI endstream endobj 357 0 obj <> stream 0 0 0 8 20 30 d1 20 0 0 22 0 8 cm BI /IM true /W 20 /H 22 /BPC 1 /F /CCF /DP <> ID &8 zzt|AdX᧿~ @@ EI endstream endobj 358 0 obj <> stream 0 0 0 8 19 30 d1 19 0 0 22 0 8 cm BI /IM true /W 19 /H 22 /BPC 1 /F /CCF /DP <> ID :;@&A ;_\{ EI endstream endobj 359 0 obj <> stream 0 0 0 8 23 30 d1 23 0 0 22 0 8 cm BI /IM true /W 23 /H 22 /BPC 1 /F /CCF /DP <> ID &h:قe EI endstream endobj 360 0 
obj <> stream 0 0 0 8 23 30 d1 23 0 0 22 0 8 cm BI /IM true /W 23 /H 22 /BPC 1 /F /CCF /DP <> ID &1|}Bj EI endstream endobj 361 0 obj <> stream 0 0 0 23 8 30 d1 8 0 0 7 0 23 cm BI /IM true /W 8 /H 7 /BPC 1 /F /CCF /DP <> ID &k EI endstream endobj 362 0 obj <> stream 0 0 0 8 21 30 d1 21 0 0 22 0 8 cm BI /IM true /W 21 /H 22 /BPC 1 /F /CCF /DP <> ID &>8A=<#|'Ԛap[ € EI endstream endobj 363 0 obj <> stream 0 0 0 8 23 30 d1 23 0 0 22 0 8 cm BI /IM true /W 23 /H 22 /BPC 1 /F /CCF /DP <> ID (&`@ EI endstream endobj 364 0 obj <> stream 0 0 0 0 21 30 d1 21 0 0 30 0 0 cm BI /IM true /W 21 /H 30 /BPC 1 /F /CCF /DP <> ID &Gjxj  EI endstream endobj 365 0 obj <> stream 0 0 0 0 23 30 d1 23 0 0 30 0 0 cm BI /IM true /W 23 /H 30 /BPC 1 /F /CCF /DP <> ID &Z| gx_j  EI endstream endobj 366 0 obj <> stream 0 0 0 0 23 30 d1 23 0 0 30 0 0 cm BI /IM true /W 23 /H 30 /BPC 1 /F /CCF /DP <> ID (&`+ EI endstream endobj 367 0 obj <> stream 0 0 0 8 8 30 d1 8 0 0 22 0 8 cm BI /IM true /W 8 /H 22 /BPC 1 /F /CCF /DP <> ID &k>E  EI endstream endobj 369 0 obj <> stream 0 0 0 0 21 30 d1 21 0 0 30 0 0 cm BI /IM true /W 21 /H 30 /BPC 1 /F /CCF /DP <> ID &X &/"# j  EI endstream endobj 370 0 obj <> stream 0 0 0 0 22 30 d1 22 0 0 30 0 0 cm BI /IM true /W 22 /H 30 /BPC 1 /F /CCF /DP <> ID :8w&o zQ @ EI endstream endobj 371 0 obj <> stream 0 0 0 0 17 30 d1 17 0 0 30 0 0 cm BI /IM true /W 17 /H 30 /BPC 1 /F /CCF /DP <> ID &g  EI endstream endobj 372 0 obj <> stream 0 0 0 0 21 30 d1 21 0 0 30 0 0 cm BI /IM true /W 21 /H 30 /BPC 1 /F /CCF /DP <> ID &>8A?H75&]z`,0 EI endstream endobj 373 0 obj <> stream 0 0 0 0 21 30 d1 21 0 0 30 0 0 cm BI /IM true /W 21 /H 30 /BPC 1 /F /CCF /DP <> ID &3(O@߄|50XP5L@@ EI endstream endobj 374 0 obj <> stream 0 0 0 0 21 30 d1 21 0 0 30 0 0 cm BI /IM true /W 21 /H 30 /BPC 1 /F /CCF /DP <> ID &95Xg߰dC[}o>߿<&0^<5  EI endstream endobj 375 0 obj <> stream 30 0 0 0 0 0 d1 endstream endobj 376 0 obj <> stream 0 0 0 0 21 30 d1 21 0 0 30 0 0 cm BI /IM true /W 21 /H 30 /BPC 1 /F /CCF /DP <> 
ID &3*}!PMWmmπ EI endstream endobj 377 0 obj <> stream 0 0 0 0 23 30 d1 23 0 0 30 0 0 cm BI /IM true /W 23 /H 30 /BPC 1 /F /CCF /DP <> ID 4? _8 EI endstream endobj 378 0 obj <> stream 0 0 0 -1 19 30 d1 19 0 0 31 0 -1 cm BI /IM true /W 19 /H 31 /BPC 1 /F /CCF /DP <> ID &woGxO@ EI endstream endobj 379 0 obj <> stream 0 0 0 7 23 22 d1 23 0 0 15 0 7 cm BI /IM true /W 23 /H 15 /BPC 1 /F /CCF /DP <> ID &E,( EI endstream endobj 380 0 obj <> stream 0 0 0 8 23 30 d1 23 0 0 22 0 8 cm BI /IM true /W 23 /H 22 /BPC 1 /F /CCF /DP <> ID &Qɪ-zX|'z8?|A EI endstream endobj 381 0 obj <> stream 0 0 0 0 23 30 d1 23 0 0 30 0 0 cm BI /IM true /W 23 /H 30 /BPC 1 /F /CCF /DP <> ID &guOx~MP wǿ᙮?<  EI endstream endobj 382 0 obj <> stream 0 0 0 0 22 30 d1 22 0 0 30 0 0 cm BI /IM true /W 22 /H 30 /BPC 1 /F /CCF /DP <> ID &bpf}^O~qԚP EI endstream endobj 383 0 obj <> stream 0 0 0 0 24 30 d1 24 0 0 30 0 0 cm BI /IM true /W 24 /H 30 /BPC 1 /F /CCF /DP <> ID &ů&<__ xo~߰ ~T` EI endstream endobj 384 0 obj <> stream 0 0 0 0 23 30 d1 23 0 0 30 0 0 cm BI /IM true /W 23 /H 30 /BPC 1 /F /CCF /DP <> ID &PT,@@ EI endstream endobj 385 0 obj <> stream 0 0 0 8 23 41 d1 23 0 0 33 0 8 cm BI /IM true /W 23 /H 33 /BPC 1 /F /CCF /DP <> ID &<{_~~`MR EI endstream endobj 386 0 obj <> stream 0 0 0 -1 17 14 d1 17 0 0 15 0 -1 cm BI /IM true /W 17 /H 15 /BPC 1 /F /CCF /DP <> ID &ؿ_A EI endstream endobj 387 0 obj <> stream 0 0 0 0 23 30 d1 23 0 0 30 0 0 cm BI /IM true /W 23 /H 30 /BPC 1 /F /CCF /DP <> ID &95X3aN? 
!ɪ@ EI endstream endobj 388 0 obj <> stream 0 0 0 8 23 42 d1 23 0 0 34 0 8 cm BI /IM true /W 23 /H 34 /BPC 1 /F /CCF /DP <> ID &PP=85Z⶿x1H>>״9P EI endstream endobj 389 0 obj <> stream 0 0 0 0 23 30 d1 23 0 0 30 0 0 cm BI /IM true /W 23 /H 30 /BPC 1 /F /CCF /DP <> ID &8ɪ  g@@ EI endstream endobj 390 0 obj <> stream 0 0 0 0 23 30 d1 23 0 0 30 0 0 cm BI /IM true /W 23 /H 30 /BPC 1 /F /CCF /DP <> ID &c<:}τ?\5o  EI endstream endobj 391 0 obj <> stream 0 0 0 0 19 30 d1 19 0 0 30 0 0 cm BI /IM true /W 19 /H 30 /BPC 1 /F /CCF /DP <> ID &]d[Eɮ EI endstream endobj 392 0 obj <> stream 0 0 0 0 21 30 d1 21 0 0 30 0 0 cm BI /IM true /W 21 /H 30 /BPC 1 /F /CCF /DP <> ID &h8A<<6ɪ  F/x EI endstream endobj 393 0 obj <> stream 0 0 0 0 21 30 d1 21 0 0 30 0 0 cm BI /IM true /W 21 /H 30 /BPC 1 /F /CCF /DP <> ID &NP8?w~\u7[ EI endstream endobj 394 0 obj <> stream 0 0 0 0 26 30 d1 26 0 0 30 0 0 cm BI /IM true /W 26 /H 30 /BPC 1 /F /CCF /DP <> ID &‹Aj aO |€ EI endstream endobj 395 0 obj <> stream 31 0 0 0 0 0 d1 endstream endobj 396 0 obj <> stream 0 0 0 8 23 30 d1 23 0 0 22 0 8 cm BI /IM true /W 23 /H 22 /BPC 1 /F /CCF /DP <> ID & 6 qwɪU EI endstream endobj 397 0 obj <> stream 0 0 0 0 23 30 d1 23 0 0 30 0 0 cm BI /IM true /W 23 /H 30 /BPC 1 /F /CCF /DP <> ID jͅk  EI endstream endobj 398 0 obj <> stream 0 0 0 0 22 30 d1 22 0 0 30 0 0 cm BI /IM true /W 22 /H 30 /BPC 1 /F /CCF /DP <> ID &8MRX>}!TP EI endstream endobj 399 0 obj <> stream 0 0 0 0 22 30 d1 22 0 0 30 0 0 cm BI /IM true /W 22 /H 30 /BPC 1 /F /CCF /DP <> ID 4?B׸^so~C_p EI endstream endobj 400 0 obj <> stream 0 0 0 0 26 30 d1 26 0 0 30 0 0 cm BI /IM true /W 26 /H 30 /BPC 1 /F /CCF /DP <> ID &l[k[?a8 € EI endstream endobj 401 0 obj <> stream 24 0 0 0 0 0 d1 endstream endobj 402 0 obj <> stream 0 0 0 8 22 30 d1 22 0 0 22 0 8 cm BI /IM true /W 22 /H 22 /BPC 1 /F /CCF /DP <> ID j߆d9oo|7~/ EI endstream endobj 403 0 obj <> stream 0 0 0 0 26 30 d1 26 0 0 30 0 0 cm BI /IM true /W 26 /H 30 /BPC 1 /F /CCF /DP <> ID &PQiMR8_//8Rj EI endstream 
endobj 404 0 obj <> stream 0 0 0 0 23 30 d1 23 0 0 30 0 0 cm BI /IM true /W 23 /H 30 /BPC 1 /F /CCF /DP <> ID &Y|} EI endstream endobj 405 0 obj <> stream 0 0 0 -1 21 30 d1 21 0 0 31 0 -1 cm BI /IM true /W 21 /H 31 /BPC 1 /F /CCF /DP <> ID &l/߿6#q|0  EI endstream endobj 406 0 obj <> stream 0 0 0 0 26 30 d1 26 0 0 30 0 0 cm BI /IM true /W 26 /H 30 /BPC 1 /F /CCF /DP <> ID & =^> qh>MRa@@ EI endstream endobj 407 0 obj <> stream 0 0 0 0 26 30 d1 26 0 0 30 0 0 cm BI /IM true /W 26 /H 30 /BPC 1 /F /CCF /DP <> ID &g&.S _~_A&@ EI endstream endobj 408 0 obj <> stream 0 0 0 0 21 30 d1 21 0 0 30 0 0 cm BI /IM true /W 21 /H 30 /BPC 1 /F /CCF /DP <> ID &hP>| x'5k 0Kk|a?5A\xkk @ EI endstream endobj 409 0 obj <> stream 0 0 0 -5 15 34 d1 15 0 0 39 0 -5 cm BI /IM true /W 15 /H 39 /BPC 1 /F /CCF /DP <> ID ? EI endstream endobj 410 0 obj <> stream 34 0 0 0 0 0 d1 endstream endobj 411 0 obj <> stream 0 0 0 12 21 17 d1 21 0 0 5 0 12 cm BI /IM true /W 21 /H 5 /BPC 1 /F /CCF /DP <> ID &E( EI endstream endobj 412 0 obj <> stream 0 0 0 -5 15 34 d1 15 0 0 39 0 -5 cm BI /IM true /W 15 /H 39 /BPC 1 /F /CCF /DP <> ID j EI endstream endobj 413 0 obj <> stream 0 0 0 0 21 30 d1 21 0 0 30 0 0 cm BI /IM true /W 21 /H 30 /BPC 1 /F /CCF /DP <> ID &f-<߇jŹ*}@'ɪk\{[ j  EI endstream endobj 414 0 obj <> stream 0 0 0 0 21 30 d1 21 0 0 30 0 0 cm BI /IM true /W 21 /H 30 /BPC 1 /F /CCF /DP <> ID &?jMSK@ EI endstream endobj 415 0 obj <> stream 0 0 0 0 23 30 d1 23 0 0 30 0 0 cm BI /IM true /W 23 /H 30 /BPC 1 /F /CCF /DP <> ID &4?O_a@ EI endstream endobj 416 0 obj <> stream 0 0 0 8 23 30 d1 23 0 0 22 0 8 cm BI /IM true /W 23 /H 22 /BPC 1 /F /CCF /DP <> ID &l=h _~  EI endstream endobj 417 0 obj <> stream 0 0 0 0 23 30 d1 23 0 0 30 0 0 cm BI /IM true /W 23 /H 30 /BPC 1 /F /CCF /DP <> ID &3=pOZW޿!MW} EI endstream endobj 418 0 obj <> stream 0 0 0 23 10 38 d1 10 0 0 15 0 23 cm BI /IM true /W 10 /H 15 /BPC 1 /F /CCF /DP <> ID &rjx>߅ @@ EI endstream endobj 419 0 obj <> stream 0 0 0 0 9 17 d1 9 0 0 17 0 0 cm BI /IM true 
/W 9 /H 17 /BPC 1 /F /CCF /DP <> ID 6aɭ߿a EI endstream endobj 440 0 obj <> stream 0 0 0 0 6 90 d1 6 0 0 90 0 0 cm BI /IM true /W 6 /H 90 /BPC 1 /F /CCF /DP <> ID &Mu EI endstream endobj 452 0 obj <> stream 0 0 0 -18 37 37 d1 37 0 0 55 0 -18 cm BI /IM true /W 37 /H 55 /BPC 1 /F /CCF /DP <> ID &bP EI endstream endobj 453 0 obj <> stream 0 0 0 -2 50 37 d1 50 0 0 39 0 -2 cm BI /IM true /W 50 /H 39 /BPC 1 /F /CCF /DP <> ID &ђ iþj`τ/4 pB=(5A EI endstream endobj 454 0 obj <> stream 0 0 0 -2 36 37 d1 36 0 0 39 0 -2 cm BI /IM true /W 36 /H 39 /BPC 1 /F /CCF /DP <> ID &@@@E z^]` @ EI endstream endobj 455 0 obj <> stream 0 0 0 -12 38 37 d1 38 0 0 49 0 -12 cm BI /IM true /W 38 /H 49 /BPC 1 /F /CCF /DP <> ID &XA<'>axPCMP_ڀ EI endstream endobj 456 0 obj <> stream 0 0 0 -18 43 37 d1 43 0 0 55 0 -18 cm BI /IM true /W 43 /H 55 /BPC 1 /F /CCF /DP <> ID &N8΃' ?_k_a|>r?( EI endstream endobj 501 0 obj <> stream 0 0 0 -34 62 -12 d1 62 0 0 22 0 -34 cm BI /IM true /W 62 /H 22 /BPC 1 /F /CCF /DP <> ID &p> -|'??> oa@>`} {P EI endstream endobj 506 0 obj <>stream AdobedC    $.' ",#(7),01444'9=82<.342 @" }!1AQa"q2#BR$3br %&'()*456789:CDEFGHIJSTUVWXYZcdefghijstuvwxyz?((((lG M$1ԚH%He]GOV9Qu ⤣2)c7"H#r#"Q-p4m4`0,$u-QEQEQEQEQEQEQEQEQEQEQEQEQEQEQEQEQEQEQEQEQEQEQEQEQEQEQEQEQEQEQEQEQEQEQEQEQEQEQEQEQEQEQE4_[Em,&[aEoWasg!j2,.#ld4waĒӎ;WI֮-.$kw):yo<55c]\[[4kb%$ܣqA M2#.$rOw(j3=[i!E,5:L~2k+=N-T/q!u,A/ |&ҵ/E[+̯^9NE׆%Xkkjamw1czVFKekd`Msm$Qǐ@ힽơ-'KmtԆ d_vtk$|HӥbyR} njY~ xZN=>M^5f0$۷9+CWЭj+NIn CY#yZYe$B;#RsStm{L]iwkq9Yu ؊Ѣ(((((((((((((((((((((((((((((((((((((((((tO 8UD'ЎO5&&HErvep;e5|3{l40Km8*|=֐!Pz{槣h>{IZZMd}4"?6 \ԳxwM.2^k &l<}^ӹ/ j1xO1/m0CV~u>$UQk̳k7qprOxkK}/GU%.n+ϠG>%\;6=] ~l~'k߂!ۡt HvOz5QEQEQEQEQEQEQEQEQEQEQEQEQEQEQEQEQEQEQEQEQEQEQEQEQEQEQEQEQEQEQEQEQEQEQEQEQEQEQEQEQEU-SHӵ6졻<}G|=r6z<~jM#Ͱv!O}[Eӵamڭ|c@<j)|;͢-I3ņ q8#d%G`dyZkZ Q0ٱ`Wz4 m2 (Vh$qEQڳ5oh:^_ѽt6h#вX{Z5暚t|&3Si%4tdQa+Ng ^TV2O:-N3UCšCiv6#cTXRpѻ)y-l%Ӓ+I~(Ϲҏh#Jm,i!qqV=3O5qlc 1ۥS|i.ifFsKq$Dv*@? 
hW:\M߻%29˄%χn]M->lK0 Mei>?->< \iKrnrG>QrrO=3r팖pk$h'6rk7۫3LKd 9f ʻt U}SgjX>Iiq~,Hw9c[蚶y+ua&d.8@-T^k\bd&ܛ>0^2W'O I.yp%Fpmʃ׮}pxW_OԬt#c (EgfBI UgkVQ~åVPTt#ͺ薾m"YVA),|+V7YZ5/uy2I~`p9MnU."IH`J zjH"@K}f)W7Đj|%?ٗnF917O_|8fPK$Ļp\qO!ukU!.c}eV].<=C|EkTPL̑³43!9$VI d+_>&iI kKg`)3pF}kHyZk>#,bvb'9OXFKdZ[q8W[kviICwR_G#>|ZM{ǵ76gQHw,FZVqiHufdN?GxP5M'Qҍ`'e6 :ÑmjZψ}&h5yc{n#" 6AE D'hU  fI)tz6z/4șw#6?، e:~|4_N\|~*֏Q|`̲]Bmr9 >[ǶPXG.эY\=@fW&2i[ZY. s- dby3`&KiFG[Uυog+$DګEڿGk_ XjHnl.YF-¸ sΧo[GK85[y r <P3n Ԃ"| Hԓ\.$v,*#ŕ,:իݵ5ի*1"Ӧ0~Ns:Ԛݕ ۛkaq8X 'imó#NLp0&ҵEY]F[]BHl >+%A6,kpmnݟ !7wQ]oMc hՊ0 0QGq?.Gi:Sb 1kWk`ٷ-@Ͻ؀3Oa F׮`!SNF[-*qDbn/-])3ޯCz#N-Dխ[5eU pǀ;_x/YCl0øַl3Ѵ mB+d@(b3w'U/u 8~M$' []K^y+fx0c'|2c/`EHM1p[YTO{Wzܗ2Y'΅X!sd: "A"3c R%P#rW/[69M UVOe2aeaqzU6[ƱDQ%QEQEQEQEQEQEQEQEQEQEQEQEQEQEQEQEQEQEQEQEQEQEQEQEQEQEQEQEQEQEQEQEQEQEQEQEQEQEQEQEQEQEQEQEQEQEQEQEQEEs+k,:!eHː8=JlRXEML#@g )QEQEQEQEQEQEQEQEQEQEQEQEQEQEQEQEQEQEQEQEQEQEQEQEQEQEQEQEQEQEQEQEQEQEQEQEqxQ[MN/ -5rN.,)Ojeu "VO \.rхp=OvW?;WL[5/; 2oURp0˴*ogԼ+0kfYp:rkRԵӵ+Mmgpml袹Ou M`}\}`$1ckikxI@Qܠ??*<%|XGX-.FpdP @"zkƌQFYkGuM_P[ӠMxL^~es :9q]QEUK-NQ{Ja`uSҩhk316}e+k$=Zآ+ڜ~+ðxvkbݴ6JSoosK]債RG[op:w9dE!>(KHU@I=6PUdє"(K,pB")gw8 $TVWڍWs$ȇ!QEQEQEQEQEQEQEQEQEQEQEQEQEQEQEQEQEQEQEQEQEQEQEQEQEQEQEQEQEQEQEQEQEQEQEQEQEQEbx(Vƨ %'|9|n!Gkb$m(nuVт`mf@'q9}zZ hd%ysvP5(J)`E0 â`A;;V ?u큟xy8P6Qo"G)׮vul}@#?|1s/8ޯK+D7i GN{zT$g KTQ,m0H@OCҍ?Cg]iuz}ݣ`]u^Oݦ-hhb*.88;8xV _\(V{w,]5cWxwÞ".u+1=-0"}qŲek4xv'Xᶖ}7ȯKHSGBuX(n4*ƈ|I⺯\Ooh5mkyײ,7?7$r9xT]jI H{MAYteA#瞕Kᕾd,X#^Yܳ +y(pCt<-wZMw,i## NN;j֛Q}vMC[Mo?t>,c'# |w|&ZtYj kwH$u}E] COw_TF~jSKD/'5t!_R9?ZxBiW8oe+bYǩ>Ai<[XXE9^ʸ18 1ixEŵV VnRDby=r,sWt\K_]Ku۬Kݴ*1+R:,0ƫIu zmcHKs;¬aIOU|Mgx|]Rh'AŸ-9L]l>[\GرpQVቔ=98772?ZPIwOb-B%߂ p8+𕿍u8En o|'Z"v-Vs5xBK1 t5[DGE?6+SX i<-%VeOm L6 I<9W/jFY%,+sN3կ J/qwi%Y!(Y$Rp~g2(ʊ((((((((((((((((((((((((((((((((((((((wwBѮn.@P)d#s#{dW?xRxJCLeRhW1Ӵ4K=kWVF0qsP~ %/lD]Cd2<jxSY:٥k{t7Qe_ jmR3xf-E4Ų&q\}p}w^'%iMR!w9}qWj^#Z}7_r"+\g8Nt_Ug;]*or$1ުQ0si&_kjVzmZetVBx=GxÖ 2fb63\LI|$uSOesY!1p:$EPK6::uCJ[{ ?KDq,y ⻳uBΐ3T3}7?3\?+ZݎIP3 
ѰÅNXs*Í#SYfYrȎ`?QXƿ|)5>Ms$є>b ;1ߏL{iO+wD~_Ak Hd 'iۃj뷊5O s<&kG)ZEwHRs|E+ h1hm,XdrAKg=ׂ,e7W4K,X~$_BxvMu,aђsvcY񶍫6vVEo)@@Cr zOg;-.+k{v]\,mo!'˜8]rFkX[ ʣ# C' ݤdڔx@0#$uO>QeeX6"ʎQ$0@<0 GXO GKiu,S';wLp^{_7tquw*<# ~cq@3[4mO>{+24o"NH$v hI!xBi#"Ė}]X5 7hp{dڭV~$UU흾6\H>uĉTڹI'QxOhRxhC *2#_lizofot7%z9-<t[ftmkq?J,#w 5Y77!.,͜ [XX XS's⺏X\ڕqkY"D2TkHC2Cb,]i9Q']+V@lPjc۸FvwEhƥ٪%$p70`H3UoF <F@WQ{ |2|A07͋G,j{5cĚ ;_k4mrn_%rj&w+;W6o+麆Ռ$irB#8lqal[o1ʒ_aU4O GXKmj?gY+e[D899|'x'[tS&sCZ"pT$NzG+=sOZkô:a+92j>A\̡|~z]cX:nS% CD~?n{c2IԖ-L~be'WiZjVZ%33:Arưxii:hqv5)uX.IWNYFjԅ6[;u346H;=p=\VizwOs7Ѧ[:~x9^3\F$ y';IZnwis%NN||߮3mRe5נQEQEQEQEQEQEQEQEQEQEQEQEQEQEQEQEQEQEQEQEQEQEQEQEQEQEQEQEQEQEQEQEQEQEQEQEQEQEQEQEQEQEQEqׄ[xZkHaoI2Ib:.9,Ḏy fB CPXxCUVqy[afg H#ǥx:/iWBn,;LHbHn}kcZ-uKV61i{W?x{Zd0i D?%A\䩦ce }bI#]W;Ȕn?LUcPMpΗ ^Ycܟz? k~"mI5}>4˔l\%Co #𶣭:)-jAwi/|+&o{,  `0*zgԯ5 QXa@IUTݒ2rr>_>ִ+ln会4xdpGך tmn:6{;o:1)]0GkMjM宥w۩DךJaXSSb@Urxv(s%\mͦkv^¡?xuǽ6N ׈t|?JM_Ú^kVoe9M܇\ Mtze͵,P4ʂ =s-lcI|c5QHy$(>?CNѴKvs$Ys;1X0oZwQi$.77'֭i!K;MZ-N$xZ٧B_ ;VqV4 Qkk3AycURrOV$8BDU#OoY=ĔYڄ}_n?l[?zfZ[hC4{"/O] ˟j?vx+.fg] ˟j?vx+.fg] ˟j?vx+.fg] ˟j?vx+.fg] ˟j?vx+.fg] ˟j?vx+.fg] ˟j?vx+.fg] ˟j?vx+.fg] ˟j?vx+.fg] ˟j?vx+.fg] ˟j?vx+.fg] ˟j?vx+.fg] ˟j?vx+.fg] ˟j?vx+.fg] ˟j?vx+.fg] ˟j?vx+.fg] ˟j?vx+.fg] ˟j?vx+.fg] ˟j?vx+.fg] ˟j?vx+.fg] ˟j?vx+.fg] ˟j?vx+.fg] ˟j?vx+.fg] ˟j?vx+.fg] ˟j?vx+.fg] ˟j?vx+.fg] ˟j?vx+.fg] ˟j?vx+.fg] ˟j?vx+.fg] ˟j?vx+.fg] ˟j?vx+.fg] ˟j?vx+.fg] ˟j?vx+.fg] ˟j?vx+.fg] ˟j?vx+.fg] ˟j?vx+.fg] ˟j?vx+.fg] ˟j?vx+.fg] ˟j?vx+.fg] ˟j?vx+.fg] ˟j?vx+.fg] ˟j?vx+.fg] ˟j?vx+.fg] ˟j?vx+.fg] ˟j?vx+.fg] ˟j?vx+.fg] ˟j?vx+.fg] ˟j?vx+.fg] ˟j?vx+.fg] 
˟j?vx+.fH!lHA3V}}abZϬ?Q+O>i>߱Gح?(~bXأV}}abZϬ?Q+O>i>߱Gح?(~bXأV}}abZϬ?Q+O>i>߱Gح?(~bXأV}}abZϬ?Q+O>i>߱Gح?(~bXأV}}abZϬ?Q+O>i>߱Gح?(~bXأV}}abZϬ?Q+O>i>߱Gح?(~bXأV}}abZϬ?Q+O>i>߱Gح?(~bXأV}}abZϬ?Q+O>i>߱Gح?(~bXأV}}abZϬ?Q+O>i>߱Gح?(~bXأV}}abZϬ?Q+O>i>߱Gح?(~bXأV}}abZϬ?Q+O>i>߱Gح?(~bXأV}}abZϬ?Q+O>i>߱Gح?(~bXأV}}abZϬ?Q+O>i>߱Gح?(~bXأV}}abZϬ?Q+O>i>߱Gح?(~bXأV}}abZϬ?Q+O>i>߱Gح?(~bXأV}}abZϬ?Q+O>i>߱Gح?(~bXأV}}abZϬ?Q+O>i>߱Gح?(~bXأV}}abZϬ?Q+O>i>߱Gح?(~bXأV}}abZϬ?Q+O>i>߱Gح?(~bXأV}}abZϬ?Q+O>i>߱Gح?(~bXأV}}abZϬ?Q+O>i>߱Gح?(~bXأV}}abZϬ?Q+O>i>߱Gح?(~bXأV}}abZϬ?Q+O>i>߱Gح?(~bXأV}}abZϬ?Q+O>i>߱Gح?(~bXأV}}abZϬ?Q+O>i>߱Gح?(~yů^ M۔ʕ95c?^^~1G/?ף}t >]?񏮟z?yOc?^^~1G/?ף}t >]?񏮟z?yOc?^^~1G/?ף}t >]?񏮟z?yOc?^^~1G/?ף}t >]?񏮟z?yOc?^^~1G/?ף}t >]?񏮟z?yOc?^^~1G/?ף}t >]?񏮟z?yOc?^^~1G/?ף}t >]?񏮟z?yOc?^^~1G/?ף}t >]?񏮟z?yOc?^^~1G/?ף}t >]?񏮟z?yOc?^^~1G/?ף}t >]?񏮟z?yOc?^^~1G/?ף}t >]?񏮟z?yOc?^^~1G/?ף}t >]?񏮟z?yOc?^^~1G/?ף}t >]?񏮟z?yOc?^^~1G/?ף}t >]?񏮟z?yOc?^^~1G/?ף}t >]?񏮟z?yOc?^^~1G/?ף}t >]?񏮟z?yOc?^^~1G/?ף}t >]?񏮟z?yOc?^^~1G/?ף}t >]?񏮟z?yOc?^^~1G/?ף}t >]?񏮟z?yOc?^^~1G/?ף}t >]?񏮟z?yOc?^^~1G/?ף}t >]?񏮟z?yOc?^^~1G/?ף}t >]?񏮟z?yOc?^^~1G/?ף}t >]?񏮟z?yOc?^^~1G/?ף}t >]?񏮟z?yOc?^^~1G/?ף}t >]?񏮟zxF񬶲jtjᱜA\(MOG(MOG(MOG(MOG(MOG(MOG(MOG(MOG(MOG(MOG(MOG(MOG(MOG(MOG(MOG(MOG(MOG(MOG(MOG(MOG(MOG(MOG(MOG(MOG(MOG(MOG(MOG(MOG(MOG(MOG(MOG(MOG(MOG(MOG(MOG(MOG(MOG(MOG(MOG(MOG(MOG(MOG(MOG(MOG(MOG(MOG(MOG(MOG(MOG(MOG(MOG(MOG(MOG(MOG(MOG(MOG(MOG(MOG(MOG(MOG(MOG(MOG(MOG(MOG(MOG(MOG(MOG(MOG(MOG(MOG(MOG(MOG(MOG(MOG(MOG(MOG(MOG(MOG(MOG(MOG(MOG(MOG(MOG(MOG(MOG(MOG(MOG(MOG(MOG(MOG(MOG(MOG(MOG(MOG(MOG(MOG(MOG(MOG(MOG(MOG(MOG(MOG(MOG(MOG(MOG(MOG(MOG(MOG(MOG(MOG(MOG(MOG(MOG(MOG(MOG(MOG(MOG(MOG(MOG(MOG(MOG(MOG(MOG(MOG(MOG(MOG(MOG(MOG(MOG(MOG(MOG(MOG(MOG(MOG(MOG(MOG(MOG(MOG(MOG(MOG(MOG(MOG(MOG(MOG(MOG(MOG(MOG(MOG(MOG(MOG(MOG(MOG(MOG(MOG(MOG(MOG(MOG(MOG(MOG(MOG(MOG(MOG(MOG(MOG(MOG(MOG(MOG(MOG(MOG(MOG(MOG(MOG(MOG(MOG(MOG(MOG(MOG(MOG(MOG(MOG(MOG(MOG(MOG(MOG(MOG(MOG(MOG(MOG(MOG(MOG(MOG(MOG(MOG(MOG(MOG(MOG(MOG(MOG(MO]7/*M<+C6Ti_oO ͣ|_UxWm¿3hG&F?4 7Q fѿ>/*M<+C6Ti_oO ͣ|_UxWm¿3hG&F?4 7Q 
fѿ>/*M<+C6Ti_oO ͣ|_UxWm¿3hG&F?4 7Q fѿ>/*M<+C6Ti_oO ͣ|_UxWm¿3hG&F?4 7Q fѿ>/*M<+C6Ti_oO ͣ|_UxWm¿3hG&F?4 7Q fѿ>/*M<+C6Ti_oO ͣ|_UxWm¿3hG&F?4 7Q fѿ>/*M<+C6Ti_oO ͣ|_UxWm¿3hG&F?4 7Q fѿ>/*M<+C6Ti_oO ͣ|_UxWm¿3hG&F?4 7Q fѿ>/*M<+C6Ti_oO ͣ|_UxWm¿3hG&F?4 7Q fѿ>/*M<+C6Ti_oO ͣ|_UxWm¿3hG&F?4 7Q fѿ>/*M<+C6Ti_oO ͣ|_UxWm¿3hG&F?4 7Q fѿ>/*M<+C6Ti_oO ͣ|_UxWm¿3hG&F?4 7Q fѿ>/*M<+C6Ti_oO ͣ|_UxWm¿3hG&F?4 7Q fѿ>/*M<+C6Ti_oO ͣ|_UxWm¿3hG&F?4 7Q fѿ>/*M<+C6Ti_oO ͣ|_UxWm¿3hG&F?4 7Q fѿ>/*M<+C6Ti_oO ͣ|_UxWm¿3hG&F?4 7Q fѿ>/*M<+C6Ti_oO ͣ|_UxWm¿3hG&F?4 7Q fѿ>/*M<+C6Ti_oO ͣ|_UxWm¿3hG&F?4 7Q fѿ>/*M<+C6Ti_oO ͣ|_UxWm¿3hG&F?4 7Q fѿ>/*M<+C6Ti_oO ͣ|_UxWm¿3hG&F?4 7Q fѿ>/*M<+C6Ti_oO ͣ|_UxWm¿3hG&F?4 7Q fѿ>/*M<+C6Ti_oO ͣ|_UxWm¿3hG&F?4 7Q fѿ>/*M<+C6Ti_oO ͣ|_UxWm¿3hG&F?4 7Q fѿ>/*M<+C6Ti_oO ͣ|_UxWm¿3hG&F?4 7Q fѿ>/*M<+C6Ti_o5 h~$x_XӢh$m}d¯W BG*пm}7 _/M4¯W BG*пm}7 _/M4¯W BG*пm}7 _/M4¯W BG*пm}7 _/M4¯W BG*пm}7 _/M4¯W BG*пm}7 _/M4¯W BG*пm}7 _/M4¯W BG*пm}7 _/M4¯W BG*пm}7 _/M4¯W BG*пm}7 _/M4¯W BG*пm}7 _/M4¯W BG*пm}7 _/M4¯W BG*пm}7 _/M4¯W BG*пm}7 _/M4¯W BG*пm}7 _/M4¯W BG*пm}7 _/M4¯W BG*пm}7 _/M4¯W BG*пm}7 _/M4¯W BG*пm}7 _/M4¯W BG*пm}7 _/M4¯W BG*пm}7 _/M4¯W BG*пm}7 _/M4¯W BG*пm}7 _/M4¯W BG*пm}7 _/M4¯W BG*пm}7 _/M4¯W BG*пm}7 _/M4¯W BG*пm}7 _/M4¯W BG*пm}7 _/M4¯W BG*пm}7 _/M4¯W BG*пm}7 _/M4¯W BG*пm}7 _/M4¯W BG*пm}7 _/M4¯W BG*пm}7 _/M4¯W BG*пm}7 _/M4¯W BG*пm}7 _/M4¯W BG*пm}7 _/M4¯W BG*пm}7 _/M4¯W BG*пm}7 _/M4¯W BG*пm}7 _/M4¯W BG*пm}7 _/M4¯W BG*пm}7 _/M4¯W BG*пm}7 _/M4¯W BG*пm}7 _/M4¯W BG*пm}7 _/M4¯W BG*пm}7 _/M4¯W BG*пm}7 _/M4¯W BG*пm}7 _/M4¯W BG*пm}7 _/M4¯W BG*пm}7 _/M4¯W BG*пm}7,޵KjO.4UXHÿ''^kw)?]fW~1'lߌj (w)?]fW~1'lߌj (w)?]fW~1'lߌj (w)?]fW~1'lߌj (w)?]fW~1'lߌj (w)?]fW~1'lߌj (w)?]fW~1'lߌj (w)?]fW~1'lߌj (w)?]fW~1'lߌj (w)?]fW~1'lߌj (w)?]fW~1'lߌj (w)?]fW~1'lߌj (w)?]fW~1'lߌj (w)?]fW~1'lߌj (w)?]fW~1'lߌj (w)?]fW~1'lߌj (w)?]fW~1'lߌj (w)?]fW~1'lߌj (w)?]fW~1'lߌj (w)?]fW~1'lߌj (w)?]fW~1'lߌj (w)?]fW~1'lߌj (w)?]fW~1'lߌj (w)?]fW~1'lߌj (w)?]fW~1'lߌj (w)?]fW~1'lߌj (w)?]fW~1'lߌj (w)?]fW~1'lߌj (w)?]fW~1'lߌj (w)?]fW~1'lߌj (w)?]fW~1'lߌj 
slurm-slurm-15-08-7-1/doc/html/slurm_logo.png
slurm-slurm-15-08-7-1/doc/html/slurm_ug_agenda.shtml

Slurm User Group Meeting 2015

Registration

The conference cost is

  • $250 per person for early registration by 31 July 2015
  • $350 per person for standard registration by 31 August 2015
  • $600 per person for late registration starting 1 September 2015

This includes presentations, tutorials, lunch and snacks on both days, plus dinner on Tuesday evening.
Register here.

Agenda

Hosted by The George Washington University

The 2015 Slurm User Group Meeting will be held on September 15 and 16 in Washington, DC. The meeting will include an assortment of tutorials, technical presentations, and site reports. The schedule and abstracts are shown below.

The meeting will be held at the Marvin Center Grand Ballroom, 800 21st St NW, Washington, DC 20052.

Hotel Information

The official George Washington University Hotels

Schedule

15 September 2015

Time          | Theme       | Speaker                 | Title
08:00 - 08:30 | Registration
08:30 - 08:45 | Welcome     | Wickberg                | Welcome
08:45 - 09:30 | Keynote     | Putman                  | 10-years of computing and atmospheric research at NASA: 1 day per day
09:30 - 10:00 | Technical   | Jette, Auble, Georgiou  | Overview of Slurm Version 15.08
10:00 - 10:15 | Break
10:15 - 11:00 | Technical   | Christiansen, Auble     | Trackable Resources (TRES)
11:00 - 11:30 | Technical   | Auble, Perry            | Message Aggregation
11:30 - 12:00 | Technical   | Jette and Wickberg      | Burst Buffer Support
12:00 - 12:15 | Technical   | Auble                   | Quality Of Service Attached to a Partition
12:15 - 13:15 | Lunch
13:15 - 13:45 | Technical   | Jette                   | Power Management Support for Cray Systems
13:45 - 14:15 | Technical   | Hautreux                | Slurm Layouts Framework
14:15 - 14:45 | Technical   | Georgiou, Hautreux      | Power adaptive scheduling based on layouts
14:45 - 15:15 | Technical   | Jacobsen, Botts, Canon  | Never port your code again: Docker deployment with Slurm
15:15 - 15:30 | Break
15:30 - 16:00 | Technical   | Silla                   | Increasing cluster throughput with Slurm and rCUDA
16:00 - 16:30 | Technical   | Markwardt               | Running Virtual Machines using Slurm
16:30 - 17:00 | Technical   | Lu, Zhang, et al.       | Extending Slurm with Support for SR-IOV and IVShmem
19:00         | Dinner      |                         | Old Ebbitt Grill (Partial Atrium 1), 675 15th Street, NW, Washington, DC 20005

16 September 2015

Time          | Theme       | Speaker                 | Title
08:00 - 08:30 | Technical   | Schultz, Perry          | Support for heterogeneous resources and MPMD model
08:30 - 09:00 | Technical   | Rajagopal, Glesser      | Towards a multi-constraints resources selection within Slurm
09:00 - 09:30 | Technical   | Glesser, Georgiou       | Improving Job Scheduling by using Machine Learning
09:30 - 10:00 | Technical   | Chakraborty, et al.     | Enhancing Startup Performance of Parallel Applications in Slurm
10:00 - 10:15 | Break
10:15 - 10:45 | Technical   | Haymore                 | Profile-driven testbed
10:45 - 11:15 | Technical   | Benini, Trofinoff       | Workload Simulator
11:15 - 11:45 | Technical   | Auble, Georgiou         | Slurm Roadmap
11:45 - 12:15 | Technical   | Christiansen and Jette  | Federated Cluster Scheduling
12:15 - 13:15 | Lunch
13:15 - 13:45 | Technical   | Jacobsen, Botts         | Experiences of Native SLURM on the NERSC Edison Cray XC30
13:45 - 14:05 | Site Report | Cox                     | Brigham Young University
14:05 - 14:25 | Site Report | Desantis                | University of South Florida
14:25 - 14:45 | Site Report | Pfaff                   | NASA Center for Climate Simulation (NCCS)
14:45 - 15:05 | Site Report | Krause                  | Jülich Supercomputing Center
15:05 - 15:20 | Break
15:20 - 15:40 | Site Report | Toro, Hernandez         | Slurm Experiences on GUANE-1
15:40 - 16:00 | Site Report | Wickberg                | The George Washington University
16:00 - 17:00 | Closing     | Auble                   | Closing discussions


Abstracts

15 September 2015

Keynote: 10-years of computing and atmospheric research at NASA: 1 day per day

William Putman (NASA Center for Climate Simulation, NCCS)

Global weather and climate models have evolved dramatically from their origins as basic mathematical models in the 1950s and 1960s. The growth and availability of more advanced computing over the latter part of the twentieth century led to interactive atmosphere and land models and eventually fully coupled ocean/atmosphere Earth system models. The twenty-first century has seen these global climate/weather models evolve into massive numerical missions, including more components of the Earth system and representing more processes at much finer scales. Throughout this evolution, scientists have been willing to explore the boundaries of computational capacity to push these models beyond their limitations. Oftentimes scientists are willing to explore at a rate of just a single simulation day per day to see features never before seen in these models.

At Goddard’s NASA Center for Climate Simulation (NCCS) and the Global Modeling and Assimilation Office (GMAO) the development of the Goddard Earth Observing System model (GEOS-5) over the last 10-years serves as a microcosm of this evolution. From spiraling storms in the tropics, to the fidelity of clouds over the North Pacific, this global atmospheric model has evolved into an Earth system simulator depicting global weather and climate at resolutions never before explored on a global scale. This evolution takes us from Hurricane Katrina in 2005 to Superstorm Sandy in 2012. We will explore stratocumulus clouds across the North Pacific and tornadoes in the Midwest United States. With branches along the way to explore fine particles across the globe and the first global view of waves of carbon dioxide leaving their sources and engulfing the world.

Overview of Slurm Version 15.08

Morris Jette and Danny Auble (SchedMD)
Yiannis Georgiou (Bull)

This presentation will describe a multitude of new capabilities provided in Slurm version 15.08 (released August 2015) which are not covered in a separate talk. These enhancements include:

  • Resource allocation optimization for both Dragonfly and SGI Hypercube networks
  • Dedication of nodes to a single user, with the ability to run multiple jobs per node
  • Archiving job accounting information to Elasticsearch with its powerful analytic tools
  • Job dependencies joined with OR operator
  • Automatic replacement of resources in advanced reservation with idle resources
  • sbcast support for file transfers based upon job step allocation
  • Additional options automatically distributing a job’s tasks across allocated nodes (pack/no_pack options)
  • Reservation of hyperthreads (or cores) for system use
  • Permit QOS based preemption with job suspend/resume

Trackable Resources (TRES)

Brian Christiansen and Danny Auble (SchedMD)

Resource accounting has largely been built on CPU utilization. As workloads become more heterogeneous, we see some that take up all the memory on a node but only a few CPUs. The user is charged for only a fraction of the node, even though only a job with small memory requirements would be able to use the remaining CPUs. The same kind of issue can exist on nodes with accelerators or system licenses. Energy is also becoming of interest, to account for against power caps. Basically, any resource that has a limit can now be tracked by Slurm. Each of these resources is tracked separately in the Slurm database, making this information available to tools like sreport. A variety of system metrics can be displayed to help determine where the bottleneck is in terms of hardware for the workload of the system. This presentation will describe the design and implementation of this functionality as well as specific information about its configuration and use.
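A minimal configuration sketch of TRES tracking (the GRES and license names are assumptions for illustration):

```shell
# slurm.conf excerpt: in addition to the TRES tracked by default
# (cpu, energy, mem, node), also account for GPUs and a license.
AccountingStorageTRES=gres/gpu,license/matlab

# Then, for example, report cluster utilization broken down by TRES:
sreport cluster utilization --tres=cpu,gres/gpu start=2015-09-01
```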

Burst Buffer Support

Morris Jette (SchedMD) and
Tim Wickberg (The George Washington University)

Slurm version 15.08 includes support for burst buffers, a shared high-speed storage resource. Slurm provides support for allocating these resources, staging files in, scheduling compute nodes for jobs using these resources, then staging files out. Burst buffers can also be used as temporary storage during a job’s lifetime, without file staging. Slurm also supports the concept of a persistent burst buffer, which is not associated with any specific job. A typical use of persistent burst buffers is to maintain datasets used by multiple programs. Slurm support for burst buffers is provided using a plugin mechanism so that various infrastructures may be easily configured. Two plugins are currently available: one for Cray (DataWarp) systems and a second which relies upon scripts to provide a generic interface. This presentation will describe the design and implementation of Slurm’s burst buffer support as well as specific information about its configuration and use.
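A hedged sketch of a batch script using DataWarp burst buffer directives (capacity, paths, and application name are illustrative; exact directives depend on the site's burst_buffer plugin configuration):

```shell
#!/bin/bash
#SBATCH -N 2 -t 30
# "#DW" directives are parsed by Slurm's burst_buffer/cray plugin:
#DW jobdw type=scratch capacity=100GiB access_mode=striped
#DW stage_in  source=/home/user/input       destination=$DW_JOB_STRIPED/input  type=file
#DW stage_out source=$DW_JOB_STRIPED/output destination=/home/user/output      type=file
srun ./my_app $DW_JOB_STRIPED/input $DW_JOB_STRIPED/output
```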

Message Aggregation

Martin Perry (Bull)
Danny Auble (SchedMD)

In efforts to support bigger systems we look at potential scaling issues. One of these involves messages that do not originate with the slurmctld and can create a many-to-one scenario. A couple of these exist in previous versions of Slurm: the epilog complete and node registration messages, which each slurmd sends whenever it is ready. What has been added is the ability to "route" these messages through other slurmds, gathering them up so that only a small subset of the messages is delivered to the slurmctld. This drastically reduces the number of connections the slurmctld has to service and respond to, and limits the contention on locks within the slurmctld. This presentation will describe the design and implementation of this functionality as well as specific information about its configuration and use.
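A minimal configuration sketch (the window sizes below are illustrative values, not recommendations):

```shell
# slurm.conf excerpt: each forwarding node holds messages until 24
# have accumulated or 100 ms have elapsed, whichever comes first,
# then sends them upstream as a single aggregated message.
MsgAggregationParams=WindowMsgs=24,WindowTime=100
```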

Quality Of Service Attached to a Partition

Danny Auble (SchedMD)

A partition can now have an associated Quality Of Service (QOS). This allows a partition to have all the limits available to a QOS. If a limit is set in both, the partition QOS will take precedence over the job's QOS unless the job's QOS has the 'OverPartQOS' flag set. This also allows for truly floating partitions: a partition can have access to all the nodes in the system, while a GrpCPU limit set in the partition QOS caps how many CPUs can be used at once, and it does not matter which ones. This can also improve utilization, since nodes are not carved off for debugging or similar purposes. This presentation will describe the design and implementation of this functionality as well as specific information about its configuration and use.
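A hedged sketch of a floating partition built this way (the QOS name, node list, and limit are assumptions):

```shell
# Create a QOS carrying the group CPU limit:
sacctmgr add qos floating set GrpCPUs=512

# slurm.conf excerpt: attach the QOS to a partition spanning all
# nodes; at most 512 CPUs may be in use at once, regardless of
# which nodes they come from.
PartitionName=floating Nodes=node[001-100] QOS=floating
```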

Power Management Support for Cray Systems

Morris Jette (SchedMD)

Power consumption has become a critical factor in high performance computer management. Slurm version 15.08 provides an integrated power management system for power capping on Cray systems. The mode of operation is to take the configured power cap for the system and distribute it across the compute nodes under Slurm control. Initially that power is distributed evenly across all compute nodes. Slurm then monitors actual power consumption and redistributes power as appropriate. Specifically, Slurm lowers the power caps on nodes using less than their cap and redistributes that power across the other nodes. The thresholds at which a node’s power cap is raised or lowered are configurable, as is the rate of change of the power cap. In addition, starting a job on a node immediately triggers resetting the node’s power cap to a higher level. A variety of configuration parameters are available to control the rate of change permitted in a node’s power cap, triggers for changing a node’s power cap, and how power caps are managed across resources allocated to each job. This presentation will describe the design and implementation of Slurm’s power management support as well as specific information about its configuration and use.
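A hedged slurm.conf sketch of enabling this support (the wattage, interval, and rate values are illustrative only):

```shell
# slurm.conf excerpt
PowerPlugin=power/cray
# System-wide cap of 2 MW, rebalanced every 60 seconds; the rate
# parameters bound how much a node's cap may change per interval.
PowerParameters=cap_watts=2000000,balance_interval=60,decrease_rate=50,increase_rate=20
```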

Slurm Layouts Framework, latest evolutions

Matthieu Hautreux (CEA)

Looking at HPC and data center trends of past years, pressure has mounted on the ability to make the most of the available resources while minimizing the associated exploitation costs. New workload and/or system characteristics have been studied or even added in resource managers, showing their benefits when selecting the best places to spread an ever increasing demand of IT tasks.

The Slurm layouts framework aims at providing a new and generic way to describe resource characteristics, related collateral resources as well as the relations between them. By giving Slurm an extensible stack of consistent layers to represent the different aspects of systems, the framework goal is to ease the management of multiple objectives scheduling and to better integrate Slurm with its future hosting systems for smart interactions.

This talk will present the latest evolutions in the layouts framework: the newly introduced API and the layout consistency logic.

Power adaptive scheduling based on layouts

Yiannis Georgiou (BULL)
Matthieu Hautreux (CEA)

The power consumption of a supercomputer needs to be adjusted based on a varying power budget or electricity availability. As a consequence, Slurm has to be adequately adapted in order to efficiently schedule jobs with optimized performance while limiting power usage whenever needed. Based on last year's prototype and theoretical studies, along with the latest evolutions of the layouts framework within Slurm, we have developed power adaptive scheduling that uses layouts for the description of the power characteristics and the dynamic calculation of the varying power budget. This presentation will provide a description of the developments' internals, along with various use cases for administrators and users.

Never port your code again: Docker deployment with Slurm

Douglas Jacobsen, James Botts, Shane Canon (NERSC, Lawrence Berkeley National Laboratory)

Linux container technology has been transforming many aspects of software engineering, testing, and delivery. The promise of this technology is extreme portability, reproducibility of code and, for a large-scale HPC facility, ease of new application deployment. The application of Docker containers to scientific codes could enable portability of applications between HPC centers, repeatability of data analysis, and increased ease-of-use. The authors have developed Shifter, a software mechanism for importing Docker and other user-defined images to scalably and securely run them across thousands of nodes. An additional benefit of the Shifter approach is improved I/O performance when starting large applications with many shared-library or other dependencies. Using Slurm plugin capabilities, Shifter is tightly integrated into Slurm enabling a seamless user experience.
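A hedged sketch of the user experience this integration aims for (the image and command names are illustrative of Shifter at NERSC, not generic Slurm options):

```shell
# Import a Docker image through the Shifter image gateway:
shifterimg pull ubuntu:14.04

# Request the image at submission time via the Shifter Slurm plugin:
sbatch --image=docker:ubuntu:14.04 job.sh

# Inside job.sh, launch tasks within the container runtime:
srun shifter ./my_analysis
```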

Running Virtual Machines using Slurm

Ulf Markwardt (Technische Universität Dresden)

The diversity of user requests in high-throughput computing is sometimes easier to meet with user-specific operating systems. Some communities even need a certain version of e.g. Scientific Linux in order to create reproducible results. From this we derive a demand for running virtual machines on compute nodes. This is justified by the low overhead of virtualization infrastructure in terms of CPU usage and memory footprint. The challenge, then, is to manage these virtual nodes with the same batch system as the real nodes, with minimal requirements on the users.

We are proposing a lightweight infrastructure based on Slurm to manage virtual machines according to users' demands. This infrastructure runs in a test cluster, and is about to be installed in our new Bull petaflop system. We will present our experiences, as well as the working scheme, benchmarks and figures.

Challenges and Designs of Extending Slurm with Support for SR-IOV and IVShmem

Xiaoyi Lu, Jie Zhang, Sourav Chakraborty, Hari Subramoni, Mark Arnold, Jonathan Perkins, Dhabaleswar Panda (Ohio State University)

Significant growth has been witnessed during the last few years in HPC clusters with multi-/many-core processors, accelerators, and high-performance interconnects (e.g. InfiniBand). To alleviate the cost burden, sharing HPC cluster resources to end users through virtualization is becoming more and more attractive. Due to the lower performance of virtualized I/O devices, the adoption of virtualization in the HPC domain still remains low. The recently introduced Single Root I/O Virtualization (SR-IOV) technique for InfiniBand and High Speed Ethernet provides native I/O virtualization capabilities and is changing the landscape of HPC virtualization. However, achieving near native throughput for HPC applications that use both MPI point-to-point and collective operations on the virtualized systems presents a new set of challenges for the designers of high performance middleware such as Slurm, MPI libraries, etc.

First of all, our earlier studies have shown that SR-IOV lacks locality-aware communication support, which leads to performance overheads for inter-VM communication within the same physical node. In this context, another novel feature, Inter-VM Shared Memory (IVShmem), is proposed to support shared-memory-backed intra-node, inter-VM communication. Our enhanced MVAPICH2 MPI library can fully take advantage of SR-IOV and IVShmem to deliver near-native performance for HPC applications. Through these studies, we find a significant requirement to manage and isolate the virtualized resources of SR-IOV and IVShmem in order to support running multiple concurrent MPI jobs, and such isolation is hard to achieve with the MPI library alone. Furthermore, modern multi-core architectures give users the flexibility to choose from various VM subscription policies, ranging from one VM per node, to one VM per CPU socket and one VM per CPU core. The choices allow for finer-grained resource management and scheduling, depending on the resource requirements of various applications and workloads. These issues lead us to the following broad challenges:

  • Can Slurm be extended to support SR-IOV and IVShmem for running concurrent MPI jobs efficiently?
  • Can critical HPC resources be efficiently shared among multiple users by using extended Slurm with support for SR-IOV and IVShmem based virtualization?
  • Can SR-IOV and IVShmem enabled Slurm and MPI library provide bare-metal performance for end HPC applications?

In this talk, we will first discuss all these technical requirements and challenges of extending Slurm with support for SR-IOV and IVShmem. Then, we will present the alternative designs of enhancing Slurm with virtualization-oriented capabilities such as job submission to dynamically created virtual machines with SR-IOV and IVShmem resources on InfiniBand clusters. Some preliminary performance evaluation results will be shared with the Slurm community.

16 September 2015

Increasing cluster throughput with Slurm and rCUDA

Federico Silla (Technical University of Valencia, Spain)

In this presentation we will introduce a modified version of Slurm supporting the use of the remote GPU virtualization mechanism, using the rCUDA framework as an example. Furthermore, we will present an extensive performance evaluation carried out on a 16-node, 16-GPU cluster using workloads of up to 400 jobs composed of a mix of 8 different applications. Results show that by combining the use of rCUDA with a modified version of Slurm, cluster throughput is increased by up to 45%. Similar reductions are attained in overall power consumption. Additionally, GPU utilization is noticeably increased. The use of fewer GPUs than nodes has also been considered. In this case, results show that cluster throughput is maintained when the rCUDA middleware is used, thanks to the combined ability of rCUDA and Slurm to share GPUs among jobs.

Towards a multi-constraints resources selection within Slurm

Dineshkumar Rajagopal, David Glesser, Yiannis Georgiou (Bull)

The selection of resources within Slurm is currently done through the select plugins. Those plugins are efficient and scalable but are not easily extensible to take into account new and multiple types of constraints, such as power or temperature. This presentation will investigate a new flexible method of resource selection based on the layouts framework, which has the ability to support multiple resource constraints within the selection algorithms. It will provide a description of the prototype development upon Slurm and show performance evaluation results in terms of efficiency and scalability.

Improving Job Scheduling by using Machine Learning

David Glesser, Yiannis Georgiou (BULL)
Denis Trystram (INRIA)

More and more data are produced within Slurm by monitoring the system and the jobs. The methods studied in the field of big data, including Machine Learning, could be used to improve the scheduling. This talk will investigate the following question: to what extent can Machine Learning techniques be used to improve job scheduling? We will focus on two main approaches. In the first, based on an online supervised learning algorithm, we try to predict the execution time of jobs in order to improve backfilling. In the second, a particular 'Learning2Rank' algorithm is implemented within Slurm as a priority plugin to sort jobs in order to optimize a given objective.

Enhancing Startup Performance of Parallel Applications in Slurm

Sourav Chakraborty, Hari Subramoni, Jonathan Perkins, Adam Moody and Dhabaleswar K. Panda (Ohio State University)

As system sizes continue to grow, the time taken to launch a parallel application on a large number of cores becomes an important factor affecting overall system performance. Slurm is a popular choice to launch parallel applications written in Message Passing Interface (MPI), Partitioned Global Address Space (PGAS) and other programming models. Most of these libraries use the Process Management Interface (PMI) to communicate with the process manager and bootstrap themselves. The current PMI protocol suffers from several bottlenecks due to its design and implementation, which adversely affect the performance and scalability of launching parallel applications at large scale.

In our earlier work, we identified several of these bottlenecks and evaluated different designs to address them. We also showed how the proposed designs can improve performance and scalability of the startup mechanism of MPI and hybrid MPI+PGAS applications. Some of these designs are already available as part of the MVAPICH2 MPI library and pre-release version of Slurm. In this work we present these designs to the Slurm community. We also present some newer designs and how they can accelerate startup of large scale MPI and PGAS applications.
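The startup cost pattern behind these bottlenecks can be illustrated with a small standalone sketch. The toy key-value store below stands in for the PMI server's KVS (a real MPI or PGAS job would issue PMI_KVS_Put/PMI_KVS_Get calls against Slurm's PMI library instead): every process publishes one endpoint and then fetches every peer's endpoint, so the number of requests the process manager must serve grows quadratically with the job size. All names and the request accounting here are illustrative, not Slurm code.

```c
#include <stdio.h>
#include <string.h>

/* Toy stand-in for the PMI key-value store. */
#define MAX_PROCS 64
#define KEYLEN 32
#define VALLEN 32

static char kvs_key[MAX_PROCS][KEYLEN];
static char kvs_val[MAX_PROCS][VALLEN];
static int  kvs_count;
static long total_requests;  /* requests the process manager must serve */

static void kvs_put(const char *key, const char *val)
{
	snprintf(kvs_key[kvs_count], KEYLEN, "%s", key);
	snprintf(kvs_val[kvs_count], VALLEN, "%s", val);
	kvs_count++;
	total_requests++;
}

static const char *kvs_get(const char *key)
{
	total_requests++;
	for (int i = 0; i < kvs_count; i++)
		if (strcmp(kvs_key[i], key) == 0)
			return kvs_val[i];
	return NULL;
}

/* Count KVS traffic for a PMI-1 style bootstrap of nprocs processes. */
long simulate_bootstrap(int nprocs)
{
	char key[KEYLEN], val[VALLEN];

	kvs_count = 0;
	total_requests = 0;

	/* Phase 1: each rank publishes its own endpoint (one put each). */
	for (int rank = 0; rank < nprocs; rank++) {
		snprintf(key, KEYLEN, "addr-%d", rank);
		snprintf(val, VALLEN, "ep-%d", rank);
		kvs_put(key, val);
	}
	/* A barrier/commit would synchronize all ranks here. */

	/* Phase 2: each rank fetches every peer's endpoint, so the server
	 * fields nprocs * (nprocs - 1) get requests in total. */
	for (int rank = 0; rank < nprocs; rank++)
		for (int peer = 0; peer < nprocs; peer++) {
			if (peer == rank)
				continue;
			snprintf(key, KEYLEN, "addr-%d", peer);
			kvs_get(key);
		}
	return total_requests;  /* nprocs + nprocs * (nprocs - 1) */
}
```

Doubling the job size roughly quadruples the KVS traffic (8 processes generate 8 + 56 = 64 requests, 16 generate 16 + 240 = 256), which is why allgather-style exchanges over PMI come to dominate launch time at scale.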

Experiences using the Adaptable Profile-driven Testbed (Apt) and Slurm to dynamically provision & schedule Bare Metal HPC resources

Brian Haymore (University of Utah)

In a traditional HPC environment, cluster resources have a fixed configuration into which usage has to fit. In collaboration with both the Apt project (http://www.flux.utah.edu/project/apt) at Utah and SchedMD, CHPC has been exploring ways to utilize dynamic provisioning of a set of resources that are being shared by a number of very different missions, to better deliver HPC resources to researchers.

We are using tools within the Apt facility to dynamically provision bare metal compute, network and storage resources to meet the needs of a job. The extensible dynamic "cloud" support built into Slurm is being used to manage and schedule the workloads. Key areas of interest have been robustness and ease of support of the system, job turnaround, effectiveness in handling "bursting", and resource contention.

Results on our current implementation have been positive and there are usage models for which we see clear benefit to this manner of operation. We have been in operational status for about 8 months now with only minor issues. There are, however, several areas in which improvements in the integration between Slurm and Apt would lead to fewer failed job starts and therefore a more robust system. In addition, we have identified usage cases for which dynamic provisioning of HPC resources may be of great value, for example, supporting compliance-regulated content and integrating software-defined network controls.

Using the Barcelona Slurm Workload Simulator at CSCS--Porting, Re-engineering and Set up

Massimo Benini, Stephen Trofinoff and Gilles Fourestey (CSCS)

Several years ago, the BSC produced a beta version of a Slurm workload simulator. Consisting of some modifications to the Slurm code base at the time plus some additional daemons and tools, it can simulate the scheduling of various workloads in reduced time. Thus, it can give its user an idea of how Slurm will perform, given its current configuration, under different workloads. For any site that uses Slurm, especially sites such as CSCS where there are varied and sometimes complex Slurm installations, this tool has the potential to give valuable insight into what works best and what does not. New development, however, had ceased while the tool was still in beta. This report/presentation aims to discuss some of the technical details and challenges encountered as part of CSCS’s effort to utilize this tool.

Slurm Roadmap

Morris Jette and Danny Auble (SchedMD)
Yiannis Georgiou (Bull)

This presentation will describe new capabilities planned in future releases of Slurm:

  • Xeon Phi Knights Landing support
  • Greater control over a computer’s power consumption including power floor and controlling the rate of change (Cray only)
  • Control over a job’s frequency limits based upon its QOS
  • Inter-cluster job management, supporting jobs submitted to multiple clusters and started on the first resources actually available, as well as job dependencies between jobs on different clusters
  • Dynamic runtime settings environment
  • Support of VM and containers management within Slurm (HPC, Cloud/Big Data)
  • Deploy Big Data workflow upon HPC infrastructure

Federated Cluster Scheduling

Brian Christiansen and Morris Jette (SchedMD)

Slurm has provided limited support for resource management across multiple clusters, but with notable limitations. We have designed Slurm enhancements to eliminate these limitations in a scalable and reliable fashion while increasing both system utilization and responsiveness. This design allows jobs to be submitted to multiple clusters with their execution host determination delayed until execution can actually begin, which optimizes responsiveness in the face of workload changes. Unique enterprise-wide job IDs will be used to permit rapid enterprise-wide job operations such as job dependencies, status reports, and cancellation. Finally, each cluster operates with a great deal of autonomy. A limited number of inter-cluster operations are coordinated directly between the Slurm daemons managing each individual cluster. A single centralized daemon is only required to provide initial message routing information when the individual clusters start. We anticipate the overhead of this design to be sufficiently low for Slurm to retain the ability to execute hundreds of jobs per second per cluster. An overview of the design will be presented along with an analysis of its capabilities.

Support for heterogeneous resources and MPMD model

Rod Schultz, Martin Perry, Bill Brophy, Doug Parisek, Yiannis Georgiou (BULL)
Matthieu Hautreux (CEA)

Slurm, in its current stable versions, supports the SPMD model (Single Program Multiple Data) as well as a limited MPMD model (Multiple Program Multiple Data). By limited MPMD support, we mean that although users can specify different binaries to be used within a parallel job, all the tasks are currently associated with the same resource requirements. These approaches are not well suited to managing complex jobs with different, heterogeneous resource requirements per task. For example, a user wishing to leverage different types of hardware resources inside the same MPI application, with part of the code running on GPUs, another part on standard CPUs with 2GB per core, and a last part on CPUs with 8GB per core, has to request the most complete set of resources for each task, wasting some of the hardware on tasks that do not need all of it. In some cases, the total configuration required to run such a job does not even exist, as the nodes of the cluster may not all provide every hardware feature. The presentation will provide the current progress of our studies and developments towards the support of heterogeneous resources and the MPMD model within Slurm.
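As a point of reference, Slurm's existing limited MPMD support is exposed through srun's --multi-prog option, which maps task ranks to different binaries from a configuration file. The file and program names below are illustrative:

```
# mpmd.conf: <task ranks> <executable> [arguments]
0      ./master
1-15   ./worker
```

launched as srun -n16 --multi-prog mpmd.conf. All 16 tasks still draw from a single per-task resource specification (CPUs, memory, GRES), which is precisely the restriction that heterogeneous MPMD support aims to lift.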

Experiences of Native SLURM on the NERSC Edison Cray XC30

Douglas Jacobsen, James Botts (NERSC, Lawrence Berkeley National Laboratory)

The authors deployed Native SLURM on a Cray test system earlier this year and have been able to evaluate most aspects of running SLURM in this environment. Leveraging that experience, the authors deployed Native SLURM on the NERSC production XC30, edison, during a day of dedicated time. The primary goal was to measure the effectiveness of Native SLURM on a large, 5572 compute node, production system.

A simulation of the production workload was used to load the system to typical NERSC production levels, and measurements were taken of system efficiency and responsiveness. The SLURM configuration was tuned during this run and much was learned about how to effectively run Native SLURM on a large system.

Brigham Young University Site Report

Ryan Cox (Brigham Young University)

  • BYU’s Slurm configuration, including information about our Lua job_submit script
  • Development work to catch ssh-launched processes and adopt them into Slurm for resource tracking/enforcement
  • New web-based tools for account coordinators to manage their accounts, soon to be available on github
  • A new tool to gather job performance data, visualize the data on a web page, and automatically alert admins about anomalies
  • Questions and answers about Fair Tree

University of South Florida Site Report

John Desantis (University of South Florida)

Research Computing, an organization within Information Technology, University of South Florida, was facing several issues with the production scheduler:

  • Long delays in dispatching jobs;
  • Scheduler processes continually consuming large amounts of CPU resources;
  • Reservations needing to be scheduled via cron jobs;
  • Lots of fragmentation in resources;
  • Difficult to explain priority calculation to users;
  • Complicated preemption policies.

Research Computing began looking at Slurm as a replacement scheduling system in late July, 2014. An alpha test cluster was deployed on old hardware and Slurm testers were recruited. Within 1 month, the benefits of Slurm over the previous scheduler were clear:
  • Ease of deployment and upgrades;
  • Rich accounting system;
  • Predictable preemption;
  • Multifactor plugins (MaxJobAge, FavorSmallJobs, etc) work as expected;
  • QOS system controlled jobs as configured;
  • Priorities via QOS and other multifactor weights are easily explainable to users;
  • Low system overhead for scheduler processes.

Research Computing moved its full production environment to Slurm in March 2015. In addition to providing insights into the benefits of Slurm, this presentation will discuss implementation issues and solutions.

NASA Center for Climate Simulation (NCCS) Site Report

Bruce Pfaff (NASA Center for Climate Simulation)

The NASA Center for Climate Simulation (NCCS) converted to Slurm in the fall of 2013. Since then, we have migrated from version 2.x to 14.x and have converted from a partition centric scheduling regime to a QOS centric regime.

While serving a diverse user base, and supporting a variety of local modeling and scientific research codes, we have attempted to achieve our scheduling goals by using the existing Slurm features, without making extensive local modifications.

This site report will review our migration, user experiences, and our current QOS based approach to scheduling, as well as configuration changes made since then to support multiple hardware upgrades and running with multiple versions of the OS.

Jülich Supercomputing Center Site Report

Dorian Krause (Jülich Supercomputing Center)

The Jülich Supercomputing Center (JSC) in the research center Jülich operates several top-class supercomputers with a number of different workload and resource management systems. Since 2013, JSC has been evaluating Slurm as a workload manager for future cluster systems in combination with JSC’s custom software stack. In autumn 2014 the first user-accessible cluster with the Slurm workload manager was deployed. Currently JSC is in the process of installing its next-generation, 2-Petaflops-peak, general-purpose supercomputer, which will be the first large-scale production system at JSC to leverage Slurm. In this site report we will discuss our batch system setup, experiences with Slurm gained since 2013, and possibly outline our ideas for the evolution of our Slurm deployments.

Slurm Experiences on GUANE-1 (GPUS Unified Advanced Environment for Supercomputing)

Gilberto Javier Diaz Toro, Carlos Jaime Barrios Hernandez (Universidad Industrial de Santander)

GUANE-1 is the main supercomputing platform based on NVIDIA GPUs, with 128 TESLA cards. The platform was launched into operation in 2012 (initially with 64 GPUs) and was upgraded in 2013 to improve processing, network and storage performance while keeping the same power consumption. To date, GUANE-1 offers 205 TFLOPS of peak performance.

In December 2014, after an extensive evaluation procedure, GUANE-1 began running Slurm in cohabitation with the OAR workload manager. Experience over these 6 months shows a clear user preference (today there are 687 active users, 70% of whom use Slurm). On the other hand, with respect to management and support, the maintenance process, fault-tolerance implementation, in-house plugin implementation, interconnection with other platforms (and interoperability in Grid or large-scale platforms) and scalability allow good performance and relatively easy operation of the platform.

This proposal presents performance evaluation, experiences and open questions about using Slurm in cohabitation with other workload schedulers, as well as standalone experiences performed on the SC3UIS platforms, mainly the GUANE-1 supercomputing infrastructure.

The George Washington University Site Report

Tim Wickberg (The George Washington University)

The George Washington University is proud to host the 2015 user group meeting in Washington DC. We present a brief overview of our use of Slurm on Colonial One, our University-wide shared HPC cluster. We present a detailed overview of our use and configuration of the "fairshare" priority model to assign resources across disparate participating schools, colleges, and research centers, as well as some novel uses of the scheduler for non-traditional tasks such as file system backups.

Last modified 17 August 2015

slurm-slurm-15-08-7-1/doc/html/slurm_ug_cfp.shtml

CALL FOR ABSTRACTS

Slurm User Group Meeting 2015
15-16 September 2015
Washington DC, USA

You are invited to submit an abstract of a tutorial, technical presentation or site report to be given at the Slurm User Group Meeting 2015. This event is sponsored and organized by The George Washington University and SchedMD. It will be held in Washington DC, USA on 15-16 September 2015.

This international event is open to everyone who wants to:

  • Learn more about Slurm, a highly scalable Resource Manager and Job Scheduler
  • Share their knowledge and experience with other users and administrators
  • Get detailed information about the latest features and developments
  • Share requirements and discuss future developments

Everyone who wants to present their own usage, developments, site report, or tutorial about Slurm is invited to send an abstract to slugc@schedmd.com.

Important Dates:
1 June 2015: Abstracts due
15 June 2015: Notification of acceptance
15-16 September 2015: Slurm User Group Meeting 2015

Program Committee:
Yiannis Georgiou (Bull)
Brian Gilmer (Cray)
Matthieu Hautreux (CEA)
Morris Jette (SchedMD)
Bruce Pfaff (NASA Goddard Space Flight Center)
Tim Wickberg (The George Washington University)

Last modified 20 March 2015

slurm-slurm-15-08-7-1/doc/html/slurmctld_plugstack.shtml

Slurmctld Generic Plugin Programmer Guide

Overview

This document describes the slurmctld daemon's generic plugins and the API that defines them. It is intended as a resource for programmers wishing to write their own slurmctld generic plugins. This is version 100 of the API.

The slurmctld generic plugin must conform to the Slurm Plugin API with the following specifications:

const char plugin_name[]="full text name"

A free-formatted ASCII text string that identifies the plugin.

const char plugin_type[]="major/minor"

The major type must be "slurmctld_plugstack." The minor type can be any suitable name for the type of slurmctld package. Slurm can be configured to use multiple slurmctld_plugstack plugins if desired.

const uint32_t plugin_version
If specified, identifies the version of Slurm used to build this plugin, and any attempt to load the plugin from a different version of Slurm will result in an error. If not specified, then the plugin may be loaded by Slurm commands and daemons from any version; however, this may result in difficult-to-diagnose failures due to changes in the arguments to plugin functions or changes in other Slurm functions used by the plugin.

API Functions

int init (void)

Description:
Called when the plugin is loaded, before any other functions are called. Put global initialization here.

Returns:
SLURM_SUCCESS on success, or
SLURM_ERROR on failure.

void fini (void)

Description:
Called when the plugin is removed. Clear any allocated storage here.

Returns: None.

Note: These init and fini functions are not the same as those described in the dlopen (3) system library. The C run-time system co-opts those symbols for its own initialization. The system _init() is called before the SLURM init(), and the SLURM fini() is called before the system's _fini().

Only the init and fini functions of the plugin will be called. The init function will be called when the slurmctld daemon begins accepting RPCs. The fini function will be called when the slurmctld daemon stops accepting RPCs. In the case of the backup slurmctld daemon, the init and fini functions may be called multiple times (when it assumes control functions and then when it relinquishes them to the primary slurmctld daemon).
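Putting the pieces above together, a minimal slurmctld_plugstack plugin might look like the sketch below. The two #define lines stand in for values normally provided by slurm/slurm_errno.h and slurm/slurm.h when building inside the Slurm source tree, and the "demo" minor type and plugin name are hypothetical.

```c
#include <stdint.h>
#include <string.h>

#define SLURM_SUCCESS 0                 /* from slurm/slurm_errno.h */
#define SLURM_VERSION_NUMBER 0x0f0807   /* 15.08.7, illustrative */

/* Required plugin identification symbols. */
const char plugin_name[] = "Demo slurmctld plugstack plugin";
const char plugin_type[] = "slurmctld_plugstack/demo";
const uint32_t plugin_version = SLURM_VERSION_NUMBER;

/* Called when the slurmctld daemon begins accepting RPCs. */
int init(void)
{
	/* Global initialization goes here. */
	return SLURM_SUCCESS;
}

/* Called when the slurmctld daemon stops accepting RPCs. */
void fini(void)
{
	/* Clear any storage allocated in init(). */
}
```

The plugin is built as a shared object and loaded by slurmctld according to the slurm.conf configuration; only the symbols above are required by this API version.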

Last modified 27 March 2015

slurm-slurm-15-08-7-1/doc/html/slurmstyles.css
div.figure { /*float: right;*/ margin: 0 0 15px 20px; padding: 15px; /*border: 1px solid gray;*/ text-align: center; font-style: italic; overflow: auto; } pre { /*width: 90%;*/ margin: 0 0 15px 20px; padding: 15px; border: 1px solid gray; background-color: #ddd; white-space: pre; overflow: auto; } a:visited { /*color: #6633FF;*/ color: #581C90; } a.nav:visited { /*color: #6633FF;*/ color: #581C90; } /* When printing, eliminate the navigation column on the left, * and make the content section fill the width of the page. */ @media print { #container { border: 0px solid gray; } #content { margin-left: 0px; border: 0px solid gray; width: 95%; } #cse { display: none } #footer { display: none } #footer2 { display: none } #navigation { display: none } } /** * Default Theme, v2. * */ /* Slight reset to make the preview have ample padding. */ .cse .gsc-control-cse, .gsc-control-cse { margin-top: 13px; margin-bottom: 13px; } .cse .gsc-control-wrapper-cse, .gsc-control-wrapper-cse { width: 100%; } .cse .gsc-branding, .gsc-branding { display: none; } .cse .gsc-control-cse div, .gsc-control-cse div { position: normal; } /* Selector for entire element. 
*/ .cse .gsc-control-cse, .gsc-control-cse { background-color: #fff; border: 1px solid #fff; } .cse .gsc-control-cse:after, .gsc-control-cse:after { content:"."; display:block; height:0; clear:both; visibility:hidden; } .cse .gsc-resultsHeader, .gsc-resultsHeader { border: block; } table.gsc-search-box td.gsc-input { padding-right: 24px; } .gsc-search-box-tools .gsc-search-box .gsc-input { padding-right: 12px; } input.gsc-input { font-size: 16px; padding: 4px 9px; border: 1px solid #D9D9D9; width: 99%; } .gsc-input-box { border: 1px solid #D9D9D9; background: #fff; height: 25px; } .gsc-search-box .gsc-input>input:hover, .gsc-input-box-hover { border: 1px solid #b9b9b9; border-top-color: #a0a0a0; -moz-box-shadow: inset 0 1px 2px rgba(0,0,0,.1); -webkit-box-shadow: inset 0 1px 2px rgba(0,0,0,.1); box-shadow: inset 0 1px 2px rgba(0,0,0,.1); outline: none; } .gsc-search-box .gsc-input>input:focus, .gsc-input-box-focus { border: 1px solid #4d90fe; -moz-box-shadow: inset 0 1px 2px rgba(0,0,0,.3); -webkit-box-shadow: inset 0 1px 2px rgba(0,0,0,.3); box-shadow: inset 0 1px 2px rgba(0,0,0,.3); outline: none; } /* Search button */ .cse input.gsc-search-button, input.gsc-search-button { font-family: inherit; font-size: 11px; font-weight: bold; color: #fff; padding: 0 8px; height: 29px; min-width: 54px; border: 1px solid #666666; border-radius: 2px; -moz-border-radius: 2px; -webkit-border-radius: 2px; border-color: #3079ed; background-color: #4d90fe; background-image: -webkit-gradient(linear,left top,left bottom,from(#4d90fe),to(#4787ed)); background-image: -webkit-linear-gradient(top,#4d90fe,#4787ed); background-image: -moz-linear-gradient(top,#4d90fe,#4787ed); background-image: -ms-linear-gradient(top,#4d90fe,#4787ed); background-image: -o-linear-gradient(top,#4d90fe,#4787ed); background-image: linear-gradient(top,#4d90fe,#4787ed); filter: progid:DXImageTransform.Microsoft.gradient(startColorStr='#4d90fe',EndColorStr='#4787ed'); } .cse input.gsc-search-button:hover, 
input.gsc-search-button:hover { border-color: #2f5bb7; background-color: #357ae8; background-image: -webkit-gradient(linear,left top,left bottom,from(#4d90fe),to(#357ae8)); background-image: -webkit-linear-gradient(top,#4d90fe,#357ae8); background-image: -moz-linear-gradient(top,#4d90fe,#357ae8); background-image: -ms-linear-gradient(top,#4d90fe,#357ae8); background-image: -o-linear-gradient(top,#4d90fe,#357ae8); background-image: linear-gradient(top,#4d90fe,#357ae8); filter: progid:DXImageTransform.Microsoft.gradient(startColorStr='#4d90fe',EndColorStr='#357ae8'); } .cse input.gsc-search-button:focus, input.gsc-search-button:focus { box-shadow:inset 0 0 0 1px rgba(255,255,255,0.5); -webkit-box-shadow:inset 0 0 0 1px rgba(255,255,255,0.5); -moz-box-shadow:inset 0 0 0 1px rgba(255,255,255,0.5); } .cse .gsc-search-button input.gsc-search-button-v2, input.gsc-search-button-v2 { width: 13px; height: 13px; padding: 6px 27px; min-width: 13px; margin-top: 2px; } .gsc-refinementHeader { text-decoration: none; font-weight: bold; color: #666; } .gsc-refinementHeader.gsc-refinementhActive { text-decoration: none; color: #DD4B39; } .gsc-refinementHeader.gsc-refinementhInactive { text-decoration: none; cursor: pointer; } .gsc-refinementHeader.gsc-refinementhInactive>span:hover { text-decoration: underline; } .gsc-refinementhActive>span { border-bottom: 3px solid; padding-bottom: 2px; } .gsc-refinementsArea { margin-top: 0; padding-bottom: 4px; padding-top: 10px; } /* Foont size for refinements */ .gsc-tabsArea { font-size: 11px; } /* For searcher tabs */ .gsc-tabsArea > .gsc-tabHeader { height: 27px; } .gsc-tabsArea > div { height: 30px; overflow: auto; } /* No spacers needed for keneddy refinements */ .gsc-tabsArea .gs-spacer { display: none; } .gsc-tabsArea .gs-spacer-opera { display: none; } .gsc-tabsArea { margin-top: 12px; margin-bottom: 0; height: 29px; border-bottom: 1px solid #CCC; } /* Refinement tab properties */ .gsc-tabHeader { display: inline-block; padding: 0 8px 
1px 8px; margin-right: 0px; margin-top: 0px; font-weight: bold; height: 27px; line-height: 27px; min-width: 54px; text-align: center; } /* Active refinement tab properties */ .gsc-tabHeader.gsc-tabhActive { border: 1px solid #ccc; border-bottom-color: #fff; color: #202020; } /* Inactive refinement tab properties */ .gsc-tabHeader.gsc-tabhInactive { background: #fff; color: #666; border-left: 0; border-right: 0; border-top: 0; } /* Inner wrapper for an image result */ .gsc-imageResult-column, .gsc-imageResult-classic { padding: .25em; border: 1px solid #fff; margin-bottom: 1em; } /* Inner wrapper for a result */ .gsc-webResult.gsc-result { padding: .25em; border: 1px solid #fff; margin-bottom: 0; } /* Inner wrapper for a result */ .cse .gsc-webResult.gsc-result { border: 1px solid #fff; margin-bottom: 0; } /* Wrapper for a result. */ .gsc-webResult .gsc-result { padding: 10px 0 10px 0; } /* Result hover event styling */ .cse .gsc-webResult.gsc-result:hover, .gsc-webResult.gsc-result:hover, .gsc-webResult.gsc-result.gsc-promotion:hover, .gsc-results .gsc-imageResult-classic:hover, .gsc-results .gsc-imageResult-column:hover { border: 1px solid #fff; } .gs-web-image-box, .gs-promotion-image-box { padding: 2px 0; } .gs-promotion-image-box img.gs-promotion-image { max-width: 50px; } .gs-promotion-image-box img.gs-promotion-image, .gs-promotion-image-box { width: 50px; } .gs-web-image-box img.gs-image { max-width: 70px; max-height: 70px; } .gs-web-image-box-landscape img.gs-image { max-width: 70px; max-height: 50px; } .gs-web-image-box-portrait img.gs-image { max-width: 50px; max-height: 120px; } .gs-image-box.gs-web-image-box.gs-web-image-box-landscape { width: 80px; } .gs-image-box.gs-web-image-box.gs-web-image-box-portrait { width: 60px; height: 50px; overflow: hidden; } .gs-web-image-box { text-align: inherit; } .gs-promotion-image-box img.gs-promotion-image { border: 1px solid #ebebeb; } /*Promotion Settings*/ /* The entire promo */ .cse 
.gsc-webResult.gsc-result.gsc-promotion, .gsc-webResult.gsc-result.gsc-promotion { background-color: #F6F6F6; margin-top: 5px; margin-bottom: 10px; } .gsc-result-info { margin-top: 0; margin-bottom: 0; padding: 8px; padding-bottom: 10px; } .gs-promotion-text-cell .gs-visibleUrl, .gs-promotion-text-cell .gs-snippet { font-size: 13px; } .gsc-table-result, .gsc-thumbnail-inside, .gsc-url-top { padding-left: 8px; padding-right: 8px; } .gs-promotion-table { margin-left: 8px; margin-right: 8px; } .gs-promotion table { padding-left: 8px; padding-right: 8px; } table.gs-promotion-table-snippet-with-image{ padding-left: 0; padding-right: 0; } .gs-promotion-text-cell { margin-left: 8px; margin-right: 8px; } .gs-promotion-text-cell-with-image { padding-left: 10px; padding-right: 10px; vertical-align: top; } /* Promotion links */ .cse .gs-promotion a.gs-title:link, .gs-promotion a.gs-title:link, .cse .gs-promotion a.gs-title:link *, .gs-promotion a.gs-title:link *, .cse .gs-promotion .gs-snippet a:link, .gs-promotion .gs-snippet a:link { color: #15C; } .cse .gs-promotion a.gs-title:visited, .gs-promotion a.gs-title:visited, .cse .gs-promotion a.gs-title:visited *, .gs-promotion a.gs-title:visited *, .cse .gs-promotion .gs-snippet a:visited, .gs-promotion .gs-snippet a:visited { color: #15C; } .cse .gs-promotion a.gs-title:hover, .gs-promotion a.gs-title:hover, .cse .gs-promotion a.gs-title:hover *, .gs-promotion a.gs-title:hover *, .cse .gs-promotion .gs-snippet a:hover, .gs-promotion .gs-snippet a:hover { color: #15C; } .cse .gs-promotion a.gs-title:active, .gs-promotion a.gs-title:active, .cse .gs-promotion a.gs-title:active *, .gs-promotion a.gs-title:active *, .cse .gs-promotion .gs-snippet a:active, .gs-promotion .gs-snippet a:active { color: #15C; } /* Promotion snippet */ .cse .gs-promotion .gs-snippet, .gs-promotion .gs-snippet, .cse .gs-promotion .gs-title .gs-promotion-title-right, .gs-promotion .gs-title .gs-promotion-title-right, .cse .gs-promotion .gs-title 
.gs-promotion-title-right *, .gs-promotion .gs-title .gs-promotion-title-right * { color: #000; } /* Promotion url */ .cse .gs-promotion .gs-visibleUrl, .gs-promotion .gs-visibleUrl { color: #093; } /* Style for auto-completion table * .gsc-completion-selected : styling for a suggested query which the user has moused-over * .gsc-completion-container : styling for the table which contains the completions */ .gsc-completion-selected { background: #EEE; } .gsc-completion-container { font-family: Arial, sans-serif; font-size: 16px; background: white; border: 1px solid #CCC; border-top-color: #D9D9D9; margin: 0; } .gsc-completion-title { color: #15C; } .gsc-completion-snippet { color: #000; } /* Full URL */ .gs-webResult div.gs-visibleUrl-short, .gs-promotion div.gs-visibleUrl-short { display: none; } .gs-webResult div.gs-visibleUrl-long, .gs-promotion div.gs-visibleUrl-long { display: block; } /* Keneddy shows url at the top of the snippet, after title */ .gsc-url-top { display: block; } .gsc-url-bottom { display: none; } /* Keneddy shows thumbnail inside the snippet, under title and url */ .gsc-thumbnail-left { display: none; } .gsc-thumbnail-inside { display: block; } .gsc-result .gs-title { height: 1.2em; } .gs-result .gs-title, .gs-result .gs-title * { color: #15C; } .gs-result a.gs-visibleUrl, .gs-result .gs-visibleUrl { color: #093; text-decoration: none; padding-bottom: 2px; } .gsc-results .gsc-cursor-box { margin: 10px; } .gsc-results .gsc-cursor-box .gsc-cursor-page { text-decoration: none; } .gsc-results .gsc-cursor-box .gsc-cursor-page:hover { text-decoration: underline; } .gsc-results .gsc-cursor-box .gsc-cursor-current-page { text-decoration: none; color: #DD4B39; } .gsc-preview-reviews, .gsc-control-cse .gs-snippet, .gsc-control-cse .gs-promotion em, .gsc-control-cse .gs-snippet, .gsc-control-cse .gs-promotion em { color: #333; } .gsc-control-cse-zh_CN .gs-snippet b, .gsc-control-cse-zh_CN .gs-promotion em, .gsc-control-cse-zh_TW .gs-snippet b, 
.gsc-control-cse-zh_TW .gs-promotion em { color: #C03; } .gsc-snippet-metadata, .gsc-role, .gsc-tel, .gsc-org, .gsc-location, .gsc-reviewer, .gsc-author { color: #666; } .gsc-wrapper.gsc-thinWrapper { border-right: 1px solid #e9e9e9; } .gs-spelling a { color: #15C; } .gs-spelling { color: #333; padding-left: 7px; padding-right: 7px; } .gs-snippet { margin-top: 1px; } div.gsc-clear-button { background-image: url('//www.google.com/uds/css/v2/clear.png'); } div.gsc-clear-button:hover { background-image: url('//www.google.com/uds/css/v2/clear-hover.png'); } .gsc-preview-reviews ul { padding-left: 0; padding-right: 0; } .gsc-completion-container .gsc-completion-icon-cell { width: 42px; height: 42px; padding-right: 10px; } td.gsc-branding-text, td.gcsc-branding-text { color: #666; } .gcsc-branding { padding-top: 4px; padding-left: 8px; padding-right: 8px; } .gsc-adBlock { padding-bottom: 5px; } .gsc-table-cell-snippet-close, .gsc-table-cell-snippet-open { padding-left: 0; padding-right: 0; } .gsc-selected-option-container { background-color: whiteSmoke; background-image: linear-gradient(top,whiteSmoke,#F1F1F1); background-image: -webkit-linear-gradient(top,whiteSmoke,#F1F1F1); background-image: -moz-linear-gradient(top,whiteSmoke,#F1F1F1); background-image: -ms-linear-gradient(top,whiteSmoke,#F1F1F1); background-image: -o-linear-gradient(top,whiteSmoke,#F1F1F1); } slurm-slurm-15-08-7-1/doc/html/sponsors.gif000066400000000000000000000056511265000126300204200ustar00rootroot00000000000000
Sun Constellation Administrator Guide

Overview

This document describes the unique features of Slurm on Sun Constellation computers. You should be familiar with Slurm's mode of operation on Linux clusters before studying the relatively few differences in Sun Constellation system operation described in this document.

Slurm's primary mode of operation is designed for use on clusters with nodes configured in a one-dimensional space. A topology plugin was developed to optimize resource allocations in three dimensions. Changes were also made to hostlist parsing to support hostnames of the appropriate format.

Configuration

Two variables must be defined in the config.h file: HAVE_SUN_CONST and SYSTEM_DIMENSIONS=4 (more on that value later). This can be accomplished in several different ways depending upon how Slurm is being built.

  1. Execute the configure command with the option --enable-sun-const OR
  2. Execute the rpmbuild command with the option --with sun_const OR
  3. Add %with_sun_const 1 to your ~/.rpmmacros file.

Node names must have a four-digit suffix identifying their location (this is why SYSTEM_DIMENSIONS is configured to be 4). The first three digits specify the node's zero-origin position in the X-, Y- and Z-dimensions respectively. This is followed by one digit specifying the node's sequence number at that coordinate (e.g. "tux0123" for X=0, Y=1, Z=2, sequence_number=3; "tux1234" for X=1, Y=2, Z=3, sequence_number=4). The coordinate location should be zero-origin (starting at X=0, Y=0, Z=0). The sequence number should also start at zero and can include upper case letters for higher values, supporting up to 36 nodes at a specific coordinate (e.g. 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, A, B, C, D, ... Z). To avoid confusion, we recommend that the node name prefix consist of lower case letters. Numerically sequential node names may be specified in Slurm commands and configuration files using the system name prefix with the end-points enclosed in square brackets and separated by a "-". For example "tux[0000-000B]" represents the twelve nodes with sequence numbers from 0 to B, all at coordinate X=0, Y=0 and Z=0. Alternately, rectangular prisms of node names can be specified using the system name prefix with the end-points enclosed in square brackets and separated by an "x". For example "tux[0000x0111]" represents the eight nodes in a block with endpoints at "tux0000" and "tux0111" (tux0000, tux0001, tux0010, tux0011, tux0100, tux0101, tux0110 and tux0111). Viewed another way, these eight nodes have sequence numbers 0 or 1 and have four distinct coordinates (000, 001, 010 and 011). While node names of this form are required for Slurm's internal use, they need not match the name returned by the hostname -s command. See man slurm.conf for details on how to use the NodeName, NodeAddr and NodeHostName configuration parameters for flexibility in this matter.
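The naming convention above can be decomposed with a few lines of C. The helper below (parse_node_name) is hypothetical and not part of Slurm; it simply splits the four-character suffix into zero-origin coordinates and a base-36 sequence number:

```c
#include <ctype.h>
#include <string.h>

/* Hypothetical helper, not part of Slurm: split a node name such as
 * "tux012B" into zero-origin X, Y, Z coordinates and a base-36 sequence
 * number, per the naming convention described above.
 * Returns 0 on success, -1 on a malformed name. */
static int parse_node_name(const char *name, int *x, int *y, int *z, int *seq)
{
	size_t len = strlen(name);
	if (len < 4)
		return -1;			/* need a four-character suffix */
	const char *sfx = name + len - 4;
	if (!isdigit((unsigned char) sfx[0]) ||
	    !isdigit((unsigned char) sfx[1]) ||
	    !isdigit((unsigned char) sfx[2]))
		return -1;
	*x = sfx[0] - '0';
	*y = sfx[1] - '0';
	*z = sfx[2] - '0';
	if (isdigit((unsigned char) sfx[3]))
		*seq = sfx[3] - '0';
	else if (isupper((unsigned char) sfx[3]))
		*seq = 10 + (sfx[3] - 'A');	/* A..Z -> 10..35 */
	else
		return -1;
	return 0;
}
```

For example, "tux001B" parses to X=0, Y=0, Z=1 with sequence number 11.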

Next you need to select from two options for the resource selection plugin (the SelectType option in Slurm's slurm.conf configuration file):

  1. select/cons_res - Performs a best-fit algorithm based upon a one-dimensional space to allocate whole nodes, sockets, or cores to jobs based upon other configuration parameters.
  2. select/linear - Performs a best-fit algorithm based upon a one-dimensional space to allocate whole nodes to jobs.

In order for select/cons_res or select/linear to allocate resources that are physically nearby in four-dimensional space, the nodes must be specified in Slurm's slurm.conf configuration file in such a fashion that those nearby in slurm.conf (managed internally by Slurm as a one-dimensional space) are also nearby in the physical four-dimensional space.

Slurm can automatically perform that conversion using a Hilbert curve. Set TopologyPlugin=topology/3d_torus in Slurm's slurm.conf configuration file for nodes to be reordered appropriately. First a three-dimensional Hilbert curve is constructed through all coordinates in the system such that every coordinate in the ordered list is physically adjacent to the next. The node list is then re-ordered to follow that Hilbert curve while preserving each node's sequence number (i.e. the curve is not built through that fourth dimension). If the number of nodes at each coordinate varies, it may be necessary to put separate node definition lines in the slurm.conf file. If that is the case, put them in numeric order for the topology/3d_torus plugin to function properly.

Alternately, configure TopologyPlugin=topology/none and construct your own node ordering sequence as desired in slurm.conf. Note that each node must be listed exactly once and consecutive nodes should be nearby in three-dimensional space. The open source code used by Slurm to generate the Hilbert curve is included in the distribution at contribs/skilling.c in the event that you wish to experiment with it to generate your own node ordering. Two examples of Slurm configuration files are shown below:

# slurm.conf for Sun Constellation system of size 4x4x4
# with eight nodes at each coordinate (512 nodes total)

# Configuration parameters removed here

# Automatically orders nodes following a Hilbert curve
NodeName=DEFAULT CPUs=8 RealMemory=2048 State=Unknown
NodeName=tux[0000x3337]
PartitionName=debug Nodes=tux[0000x3337] Default=Yes State=UP
# slurm.conf for Sun Constellation system of size 1x2x2
# with a different count of nodes at each coordinate

# Configuration parameters removed here

# Manual ordering of nodes following a space-filling curve
NodeName=DEFAULT CPUs=8 RealMemory=2048 State=Unknown
NodeName=tux[0000-0007]  #  8 nodes at 0,0,0
NodeName=tux[0010-001B]  # 12 nodes at 0,0,1
NodeName=tux[0100-0107]  #  8 nodes at 0,1,0
NodeName=tux[0110-0115]  #  6 nodes at 0,1,1
PartitionName=DEFAULT Default=Yes State=UP
PartitionName=debug Nodes=tux[0000-0007,0010-001B,0100-0107,0110-0115]

Tools

The node names output by the scontrol show nodes command will be ordered as defined (sequentially along the Hilbert curve) rather than in numeric order (e.g. "tux0010" may follow "tux1010" rather than "tux0000"). The output of the smap and sview commands will also display nodes ordered by the Hilbert curve so that nodes appearing adjacent in the display will be physically adjacent. This permits the locality of a job, partition or reservation to be easily determined. In order to locate specific nodes with the sview command, select Actions, Search and Node(s) Name then enter the desired node names. The output of other Slurm commands (e.g. sinfo and squeue) will use a Slurm hostlist expression with the node names numerically ordered. Slurm partitions should contain nodes which are defined sequentially by that ordering for optimal performance.

Last modified 4 August 2009

slurm-slurm-15-08-7-1/doc/html/switchplugins.shtml000066400000000000000000001116361265000126300220220ustar00rootroot00000000000000

Slurm Switch Plugin API

Overview

This document describes Slurm switch (interconnect) plugins and the API that defines them. It is intended as a resource to programmers wishing to write their own Slurm switch plugins. Note that many of the API functions are used by only one of the daemons. For example, the slurmctld daemon builds a job step's switch credential (switch_p_build_jobinfo) while the slurmd daemon enables and disables that credential for the job step's tasks on a particular node (switch_p_job_init, etc.).

Slurm switch plugins are Slurm plugins that implement the Slurm switch or interconnect API described herein. They must conform to the Slurm Plugin API with the following specifications:

const char plugin_type[]
The major type must be "switch." The minor type can be any recognizable abbreviation for the type of switch. We recommend, for example:

  • none—A plugin that implements the API without providing any actual switch service. This is the case for Ethernet and Myrinet interconnects.
  • nrt—IBM Network Resource Table API.

const char plugin_name[]
Some descriptive name for the plugin. There is no requirement with respect to its format.

const uint32_t plugin_version
If specified, identifies the version of Slurm used to build this plugin, and any attempt to load the plugin from a different version of Slurm will result in an error. If not specified, then the plugin may be loaded by Slurm commands and daemons from any version; however, this may result in difficult-to-diagnose failures due to changes in the arguments to plugin functions or changes in other Slurm functions used by the plugin.

Data Objects

The implementation must support two opaque data classes. One is used as a job step's switch "credential." This class must encapsulate all job step specific information necessary for the operation of the API specification below. The second is a node's switch state record. Both data classes are referred to in Slurm code using an anonymous pointer (void *).

The implementation must maintain (though not necessarily directly export) an enumerated errno to allow Slurm to discover as practically as possible the reason for any failed API call. Plugin-specific enumerated integer values should be used when appropriate. It is desirable that these values be mapped into the range ESLURM_SWITCH_MIN and ESLURM_SWITCH_MAX as defined in slurm/slurm_errno.h. The error number should be returned by the function switch_p_get_errno() and this error number can be converted to an appropriate string description using the switch_p_strerror() function described below.

These values must not be used as return values in integer-valued functions in the API. The proper error return value from integer-valued functions is SLURM_ERROR. The implementation should endeavor to provide useful and pertinent information by whatever means is practical. In some cases this means an errno for each credential, since plugins must be re-entrant. If a plugin maintains a global errno in place of or in addition to a per-credential errno, it is not required to enforce mutual exclusion on it. Successful API calls are not required to reset any errno to a known value. However, the initial value of any errno, prior to any error condition arising, should be SLURM_SUCCESS.
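One common way to satisfy these requirements is a per-credential errno field guarded by the credential's magic number. The sketch below is illustrative only: the HYPO_* names are invented, and the SLURM_*/ESLURM_* values are stand-ins for the definitions a real plugin takes from slurm/slurm_errno.h.

```c
/* Stand-in values; real plugins use the definitions from slurm headers. */
#define SLURM_SUCCESS      0
#define SLURM_ERROR       -1
#define ESLURM_SWITCH_MIN  3000		/* assumed value for illustration */
#define HYPO_MAGIC         0x1234abcd

typedef struct hypo_jobinfo {
	unsigned int magic;	/* validated on every API call */
	int errnum;		/* per-credential errno, starts at SLURM_SUCCESS */
} hypo_jobinfo_t;

/* A hypothetical API function: returns SLURM_ERROR on failure and records
 * a plugin-specific code in the credential, never returning the code itself. */
static int hypo_do_something(hypo_jobinfo_t *j)
{
	if (j->magic != HYPO_MAGIC)
		return SLURM_ERROR;
	/* ... an internal failure occurred; record why ... */
	j->errnum = ESLURM_SWITCH_MIN + 1;
	return SLURM_ERROR;
}

/* The plugin's switch_p_get_errno() would report the recorded code. */
static int hypo_get_errno(const hypo_jobinfo_t *j)
{
	return j->errnum;
}
```

Because the errno lives in the credential itself, the plugin stays re-entrant without needing mutual exclusion on a global value.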

API Functions

The following functions must appear. Functions which are not implemented should be stubbed.

int init (void)

Description:
Called when the plugin is loaded, before any other functions are called. Put global initialization here.

Returns:
SLURM_SUCCESS on success, or
SLURM_ERROR on failure.

void fini (void)

Description:
Called when the plugin is removed. Clear any allocated storage here.

Returns: None.

Note: These init and fini functions are not the same as those described in the dlopen (3) system library. The C run-time system co-opts those symbols for its own initialization. The system _init() is called before the SLURM init(), and the SLURM fini() is called before the system's _fini().
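Putting the pieces together, the boilerplate for a minimal no-op plugin (in the style of switch/none) might look like the sketch below. The version encoding is an assumption; a real plugin takes SLURM_SUCCESS and SLURM_VERSION_NUMBER from the Slurm headers.

```c
#include <stdint.h>

/* Stand-ins for values normally supplied by Slurm headers (assumed). */
#define SLURM_SUCCESS 0
#define SLURM_VERSION_NUMBER 0x0f0807	/* assumed encoding of 15.08.7 */

const char plugin_name[] = "switch NONE plugin";
const char plugin_type[] = "switch/none";
const uint32_t plugin_version = SLURM_VERSION_NUMBER;

/* Called once when the plugin is loaded, before any other function. */
int init(void)
{
	/* global initialization goes here */
	return SLURM_SUCCESS;
}

/* Called once when the plugin is removed. */
void fini(void)
{
	/* release anything allocated in init() */
}
```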

Global Switch State Functions

int switch_p_libstate_save (char *dir_name);

Description: Save any global switch state to a file within the specified directory. The actual file name used is plugin specific. It is recommended that the global switch state contain a magic number for validation purposes. This function is called by the slurmctld daemon on shutdown. Note that if the slurmctld daemon fails, this function will not be called. The plugin may save state independently and/or make use of the switch_p_job_step_allocated function to restore state.

Arguments: dir_name    (input) fully-qualified pathname of a directory into which user SlurmUser (as defined in slurm.conf) can create a file and write state information into that file. Cannot be NULL.

Returns: SLURM_SUCCESS if successful. On failure, the plugin should return SLURM_ERROR and set the errno to an appropriate value to indicate the reason for failure.

int switch_p_libstate_restore(char *dir_name, bool recover);

Description: Restore any global switch state from a file within the specified directory. The actual file name used is plugin specific. It is recommended that any magic number associated with the global switch state be verified. This function is called by the slurmctld daemon on startup.

Arguments:
dir_name    (input) fully-qualified pathname of a directory containing a state information file from which user SlurmUser (as defined in slurm.conf) can read. Cannot be NULL.
recover    (input) true if restarting with state preserved, false if no state is to be recovered.

Returns: SLURM_SUCCESS if successful. On failure, the plugin should return SLURM_ERROR and set the errno to an appropriate value to indicate the reason for failure.

int switch_p_libstate_clear (void);

Description: Clear switch state information.

Returns: SLURM_SUCCESS if successful. On failure, the plugin should return SLURM_ERROR and set the errno to an appropriate value to indicate the reason for failure.

Node's Switch State Monitoring Functions

Nodes will register with current switch state information when the slurmd daemon is initiated. The slurmctld daemon will also request that slurmd supply current switch state information on a periodic basis.

int switch_p_clear_node_state (void);

Description: Initialize node state. If any switch state has previously been established for a job step, it will be cleared. This will be used to establish a "clean" state for the switch on the node upon which it is executed.

Returns: SLURM_SUCCESS if successful. On failure, the plugin should return SLURM_ERROR and set the errno to an appropriate value to indicate the reason for failure.

int switch_p_alloc_node_info(switch_node_info_t *switch_node);

Description: Allocate storage for a node's switch state record. It is recommended that the record contain a magic number for validation purposes.

Arguments: switch_node    (output) location for writing location of node's switch state record.

Returns: SLURM_SUCCESS if successful. On failure, the plugin should return SLURM_ERROR and set the errno to an appropriate value to indicate the reason for failure.

int switch_p_build_node_info(switch_node_info_t switch_node);

Description: Fill in a previously allocated switch state record for the node on which this function is executed. It is recommended that the magic number be validated.

Returns: SLURM_SUCCESS if successful. On failure, the plugin should return SLURM_ERROR and set the errno to an appropriate value to indicate the reason for failure.

int switch_p_pack_node_info (switch_node_info_t switch_node, Buf buffer);

Description: Pack the data associated with a node's switch state into a buffer for network transmission.

Arguments:
switch_node    (input) an existing node's switch state record.
buffer    (input/output) buffer onto which the switch state information is appended.

Returns: The number of bytes written should be returned if successful. On failure, the plugin should return SLURM_ERROR and set the errno to an appropriate value to indicate the reason for failure.

int switch_p_unpack_node_info (switch_node_info_t switch_node, Buf buffer);

Description: Unpack the data associated with a node's switch state record from a buffer.

Arguments:
switch_node    (input/output) a previously allocated node switch state record to be filled in with data read from the buffer.
buffer    (input/output) buffer from which the record's contents are read.

Returns: SLURM_SUCCESS if successful. On failure, the plugin should return SLURM_ERROR and set the errno to an appropriate value to indicate the reason for failure.
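The pack/unpack pairing can be sketched with an ordinary byte array standing in for Slurm's Buf type (real plugins append to a Buf with helpers such as pack32()/unpack32()). The record layout and names below are invented for the example; note how the magic number is validated before the rest of the buffer is trusted.

```c
#include <stdint.h>
#include <string.h>

#define NODE_MAGIC 0xcafe0001u		/* invented for this example */

typedef struct {
	uint32_t magic;
	uint32_t adapter_count;
} toy_node_state_t;

/* Serialize the record; returns the number of bytes written. */
static size_t toy_pack_node_info(const toy_node_state_t *n, unsigned char *buf)
{
	memcpy(buf, &n->magic, 4);
	memcpy(buf + 4, &n->adapter_count, 4);
	return 8;
}

/* Deserialize the record; returns 0 on success, -1 if the magic number
 * does not validate. */
static int toy_unpack_node_info(toy_node_state_t *n, const unsigned char *buf)
{
	memcpy(&n->magic, buf, 4);
	if (n->magic != NODE_MAGIC)	/* validate before trusting the rest */
		return -1;
	memcpy(&n->adapter_count, buf + 4, 4);
	return 0;
}
```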

void switch_p_free_node_info (switch_node_info_t switch_node);

Description: Release the storage associated with a node's switch state record.

Arguments: switch_node    (input/output) a previously allocated node switch state record.

Returns: None

char * switch_p_sprintf_node_info (switch_node_info_t switch_node, char *buf, size_t size);

Description: Print the contents of a node's switch state record to a buffer.

Arguments:
switch_node    (input) a node's switch state record.
buf    (input/output) pointer to the buffer into which the switch state record is to be written.
size    (input) size of buf in bytes.

Returns: Location of buffer, same as buf.

Job's Switch Credential Management Functions

int switch_p_alloc_jobinfo(switch_jobinfo_t *switch_job, uint32_t job_id, uint32_t step_id);

Description: Allocate storage for a job step's switch credential. It is recommended that the credential contain a magic number for validation purposes.

Arguments:
switch_job    (output) location for writing the location of the job step's switch credential.
job_id    (input) the job id for this job step, NO_VAL if not set.
step_id    (input) the step id for this job step, NO_VAL if not set.

Returns: SLURM_SUCCESS if successful. On failure, the plugin should return SLURM_ERROR and set the errno to an appropriate value to indicate the reason for failure.

int switch_p_build_jobinfo (switch_jobinfo_t switch_job, slurm_step_layout_t *step_layout, char *network);

Description: Build a job's switch credential. It is recommended that the credential's magic number be validated.

Arguments:
switch_job    (input/output) Job's switch credential to be updated
step_layout    (input) the layout of the step with at least the node_list, tasks and tids set.
network    (input) Job step's network specification from srun command.

Returns: SLURM_SUCCESS if successful. On failure, the plugin should return SLURM_ERROR and set the errno to an appropriate value to indicate the reason for failure.

switch_jobinfo_t switch_p_copy_jobinfo (switch_jobinfo_t switch_job);

Description: Allocate storage for a job's switch credential and copy an existing credential to that location.

Arguments: switch_job    (input) an existing job step switch credential.

Returns: A newly allocated job step switch credential containing a copy of the function argument.

void switch_p_free_jobinfo (switch_jobinfo_t switch_job);

Description: Release the storage associated with a job's switch credential.

Arguments: switch_job    (input) an existing job step switch credential.

Returns: None

int switch_p_pack_jobinfo (switch_jobinfo_t switch_job, Buf buffer);

Description: Pack the data associated with a job step's switch credential into a buffer for network transmission.

Arguments:
switch_job    (input) an existing job step switch credential.
buffer    (input/output) buffer onto which the credential's contents are appended.

Returns: The number of bytes written should be returned if successful. On failure, the plugin should return SLURM_ERROR and set the errno to an appropriate value to indicate the reason for failure.

int switch_p_unpack_jobinfo (switch_jobinfo_t switch_job, Buf buffer);

Description: Unpack the data associated with a job's switch credential from a buffer.

Arguments:
switch_job    (input/output) a previously allocated job step switch credential to be filled in with data read from the buffer.
buffer    (input/output) buffer from which the credential's contents are read.

Returns: SLURM_SUCCESS if successful. On failure, the plugin should return SLURM_ERROR and set the errno to an appropriate value to indicate the reason for failure.

int switch_p_get_jobinfo (switch_jobinfo_t switch_job, int data_type, void *data);

Description: Get some specific data from a job's switch credential.

Arguments:
switch_job    (input) a job's switch credential.
data_type    (input) identification as to the type of data requested. The interpretation of this value is plugin dependent.
data    (output) filled in with the desired data. The form of this data is dependent upon the value of data_type and the plugin.

Returns: SLURM_SUCCESS if successful. On failure, the plugin should return SLURM_ERROR and set the errno to an appropriate value to indicate the reason for failure.

int switch_p_job_step_complete (switch_jobinfo_t switch_job, char *nodelist);

Description: Note that the job step associated with the specified nodelist has completed execution.

Arguments:
switch_job    (input) The completed job step's switch credential.
nodelist    (input) A list of nodes on which the job step has completed. This may contain expressions to specify node ranges. (e.g. "linux[1-20]" or "linux[2,4,6,8]").

Returns: SLURM_SUCCESS if successful. On failure, the plugin should return SLURM_ERROR and set the errno to an appropriate value to indicate the reason for failure.

int switch_p_job_step_part_comp (switch_jobinfo_t switch_job, char *nodelist);

Description: Note that the job step has completed execution on the specified node list. The job step is not necessarily completed on all nodes, but switch resources associated with it on the specified nodes are no longer in use.

Arguments:
switch_job    (input) The completed job's switch credential.
nodelist    (input) A list of nodes on which the job step has completed. This may contain expressions to specify node ranges. (e.g. "linux[1-20]" or "linux[2,4,6,8]").

Returns: SLURM_SUCCESS if successful. On failure, the plugin should return SLURM_ERROR and set the errno to an appropriate value to indicate the reason for failure.

bool switch_p_part_comp (void);

Description: Indicate if the switch plugin should process partial job step completions (i.e. switch_g_job_step_part_comp). Support of partial completions is compute intensive, so it should be avoided unless switch resources are in short supply (e.g. switch/federation).

Returns: True if partial step completions are to be recorded. False if only full job step completions are to be noted.

void switch_p_print_jobinfo(FILE *fp, switch_jobinfo_t switch_job);

Description: Print the contents of a job's switch credential to a file.

Arguments:
fp    (input) pointer to an open file.
switch_job    (input) a job's switch credential.

Returns: None.

char *switch_p_sprint_jobinfo(switch_jobinfo_t switch_job, char *buf, size_t size);

Description: Print the contents of a job's switch credential to a buffer.

Arguments:
switch_job    (input) a job's switch credential.
buf    (input/output) pointer to buffer into which the job credential information is to be written.
size    (input) size of buf in bytes

Returns: location of buffer, same as buf.

int switch_p_get_data_jobinfo(switch_jobinfo_t switch_job, int key, void *resulting_data);

Description: Get data from a job step's switch credential.

Arguments:
switch_job    (input) a job step's switch credential.
key    (input) identification of the type of data to be retrieved from the switch credential. NOTE: The interpretation of this key is dependent upon the switch type.
resulting_data    (input/output) pointer to where the requested data should be stored.

Returns: SLURM_SUCCESS if successful. On failure, the plugin should return SLURM_ERROR and set the errno to an appropriate value to indicate the reason for failure.

Node Specific Switch Management Functions

int switch_p_node_init (void);

Description: This function is run from the top level slurmd only once per slurmd run. It may be used, for instance, to perform some one-time interconnect setup or spawn an error handling thread.

Arguments: None

Returns: SLURM_SUCCESS if successful. On failure, the plugin should return SLURM_ERROR and set the errno to an appropriate value to indicate the reason for failure.

int switch_p_node_fini (void);

Description: This function is called once as slurmd exits (slurmd will wait for this function to return before continuing the exit process).

Arguments: None

Returns: SLURM_SUCCESS if successful. On failure, the plugin should return SLURM_ERROR and set the errno to an appropriate value to indicate the reason for failure.

Job Step Management Functions

=========================================================================
Process 1 (root)        Process 2 (root, user)  |  Process 3 (user task)
                                                |
switch_p_job_preinit                            |
fork ------------------ switch_p_job_init       |
waitpid                 setuid, chdir, etc.     |
                        fork N procs -----------+--- switch_p_job_attach
                        wait all                |    exec mpi process
                        switch_p_job_fini*      |
switch_p_job_postfini                           |
=========================================================================
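The sequencing above can be sketched in plain POSIX C with trivial stand-ins for the switch calls (the real functions take credentials and job structures, and the forks of the user tasks in process 2 are elided here):

```c
#include <sys/wait.h>
#include <unistd.h>

/* Stand-ins for the plugin functions; each would do real work. */
static int job_preinit(void)  { return 0; }	/* process 1, as root */
static int job_init(void)     { return 0; }	/* process 2 */
static int job_fini(void)     { return 0; }	/* process 2, after tasks exit */
static int job_postfini(void) { return 0; }	/* process 1, as root */

/* Drive one job step through the preinit/init/fini/postfini sequence. */
static int run_step(void)
{
	if (job_preinit() != 0)
		return -1;
	pid_t pid = fork();
	if (pid < 0)
		return -1;
	if (pid == 0) {			/* process 2: job step manager */
		job_init();
		/* ... setuid, fork/exec N user tasks, wait for them all ... */
		job_fini();
		_exit(0);
	}
	int status;
	waitpid(pid, &status, 0);	/* process 1 waits for process 2 */
	job_postfini();
	return WIFEXITED(status) ? WEXITSTATUS(status) : -1;
}
```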

int switch_p_job_preinit (switch_jobinfo_t switch_job);

Description: Preinit is run as root in the first slurmd process, the so-called job step manager. This function can be used to perform any initialization that needs to be performed in the same process as switch_p_job_fini().

Arguments: switch_job    (input) a job's switch credential.

Returns: SLURM_SUCCESS if successful. On failure, the plugin should return SLURM_ERROR and set the errno to an appropriate value to indicate the reason for failure.

int switch_p_job_init (stepd_step_rec_t *job, uid_t uid);

Description: Initialize interconnect on node for a job. This function is run from the second slurmd process (some interconnect implementations may require the switch_p_job_init functions to be executed from a separate process than the process executing switch_p_job_fini() [e.g. Quadrics Elan]).

Arguments:
job    (input) structure representing the slurmstepd's view of the job step.

Returns: SLURM_SUCCESS if successful. On failure, the plugin should return SLURM_ERROR and set the errno to an appropriate value to indicate the reason for failure.

int switch_p_job_attach ( switch_jobinfo_t switch_job, char ***env, uint32_t nodeid, uint32_t procid, uint32_t nnodes, uint32_t nprocs, uint32_t rank );

Description: Attach process to interconnect (Called from within the process, so it is appropriate to set interconnect specific environment variables here).

Arguments:
switch_job    (input) a job's switch credential.
env    (input/output) the environment variables to be set upon job step initiation. Switch specific environment variables are added as needed.
nodeid    (input) zero-origin id of this node.
procid    (input) zero-origin process id local to slurmd and not equivalent to the global task id or MPI rank.
nnodes    (input) count of nodes allocated to this job step.
nprocs    (input) total count of processes or tasks to be initiated for this job step.
rank    (input) zero-origin id of this task.

Returns: SLURM_SUCCESS if successful. On failure, the plugin should return SLURM_ERROR and set the errno to an appropriate value to indicate the reason for failure.
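A sketch of how the env argument might be extended follows. The hand-rolled function below (env_append) is purely illustrative; in practice plugins typically use Slurm's environment-array helpers, and the variable name in the usage example is invented.

```c
#include <stdio.h>
#include <stdlib.h>
#include <string.h>

/* Illustrative only: grow a NULL-terminated environment array by one
 * "NAME=value" entry, as switch_p_job_attach() might when exporting an
 * interconnect-specific variable.  Returns 0 on success, -1 on failure. */
static int env_append(char ***env, const char *name, const char *value)
{
	size_t n = 0;
	while ((*env)[n] != NULL)	/* find the terminating NULL */
		n++;
	char **grown = realloc(*env, (n + 2) * sizeof(char *));
	if (!grown)
		return -1;
	*env = grown;
	size_t len = strlen(name) + strlen(value) + 2;	/* '=' and '\0' */
	grown[n] = malloc(len);
	if (!grown[n])
		return -1;
	snprintf(grown[n], len, "%s=%s", name, value);
	grown[n + 1] = NULL;		/* keep the array NULL-terminated */
	return 0;
}
```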

int switch_p_job_fini (switch_jobinfo_t switch_job);

Description: This function is run from the same process as switch_p_job_init() after all job tasks have exited. It is *not* run as root, because the process in question has already setuid to the job step owner.

Arguments: switch_job    (input) a job step's switch credential.

Returns: SLURM_SUCCESS if successful. On failure, the plugin should return SLURM_ERROR and set the errno to an appropriate value to indicate the reason for failure.

int switch_p_job_postfini ( stepd_step_rec_t *job );

Description: This function is run from the initial slurmd process (same process as switch_p_job_preinit()), and is run as root. Any cleanup routines that need to be run with root privileges should be run from this function.

Arguments:
job    (input) structure representing the slurmstepd's view of the job step.

Returns: SLURM_SUCCESS if successful. On failure, the plugin should return SLURM_ERROR and set the errno to an appropriate value to indicate the reason for failure.

int switch_p_job_step_allocated (switch_jobinfo_t switch_job, char *nodelist);

Description: Note that the identified job step was active at restart time. This function can be used to restore global switch state information based upon job steps known to be active at restart time. Use of this function is preferred over having the switch plugin save and restore switch state itself. Direct use of job step switch information eliminates the possibility of inconsistent state information between the switch and job steps.

Arguments:
switch_job    (input) a job's switch credential.
nodelist    (input) the nodes allocated to a job step.

Returns: SLURM_SUCCESS if successful. On failure, the plugin should return SLURM_ERROR and set the errno to an appropriate value to indicate the reason for failure.

Job Management Suspend/Resume Functions

int switch_p_job_suspend_test(switch_jobinfo_t *switch_job);

Description: Determine if a specific job step can be preempted.

Arguments:
switch_job    (input) a job step's switch credential.

Returns: SLURM_SUCCESS if the job step can be preempted and SLURM_ERROR otherwise.

void switch_p_job_suspend_info_get(switch_jobinfo_t *switch_job, void **suspend_info);

Description: Pack any information needed for a job step to be preempted into an opaque data structure.
NOTE: Use switch_p_job_suspend_info_free() to free the opaque data structure.

Arguments:
switch_job    (input) a job step's switch credential.
suspend_info    (input/output) information needed for a job to be preempted. This should be NULL for the first call; data about job steps will be added to the opaque data structure on each additional function call (i.e. for each additional job step).

void switch_p_job_suspend_info_pack(void *suspend_info, Buf buffer);

Description: Pack the information needed for a job to be preempted into a buffer.

Arguments:
suspend_info    (input) information needed for a job to be preempted, including information for all steps in that job.
buffer    (input/output) the buffer that has suspend_info added to it.

int switch_p_job_suspend_info_unpack(void **suspend_info, Buf buffer);

Description: Unpack the information needed for a job to be preempted from a buffer.
NOTE: Use switch_p_job_suspend_info_free() to free the opaque data structure.

Arguments:
suspend_info    (output) information needed for a job to be preempted, including information for all steps in that job.
buffer    (input/output) the buffer that has suspend_info extracted from it.

Returns: SLURM_SUCCESS if the suspend_info data was successfully read from buffer and SLURM_ERROR otherwise.
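
The pack and unpack functions must be symmetric: unpack must read fields in exactly the order pack wrote them. Below is a standalone sketch assuming a hypothetical suspend_info layout (a step count plus step IDs) and minimal stand-ins for Slurm's Buf type and pack32()/unpack32() helpers; a real plugin uses the versions from src/common/pack.h.

```c
#include <assert.h>
#include <stdint.h>
#include <stdlib.h>
#include <string.h>

#define SLURM_SUCCESS 0
#define SLURM_ERROR  (-1)

/* Minimal stand-ins for Slurm's Buf type and the pack32()/unpack32()
 * helpers from src/common/pack.h; a real plugin uses those directly. */
typedef struct { uint8_t data[256]; size_t off; } buf_t;

static void pack32(uint32_t val, buf_t *buffer)
{
	memcpy(buffer->data + buffer->off, &val, sizeof(val));
	buffer->off += sizeof(val);
}

static int unpack32(uint32_t *val, buf_t *buffer)
{
	memcpy(val, buffer->data + buffer->off, sizeof(*val));
	buffer->off += sizeof(*val);
	return SLURM_SUCCESS;
}

/* Hypothetical suspend_info layout: a count of job steps plus their IDs. */
typedef struct { uint32_t step_cnt; uint32_t step_id[8]; } suspend_info_t;

void switch_p_job_suspend_info_pack(void *suspend_info, buf_t *buffer)
{
	suspend_info_t *si = suspend_info;

	pack32(si->step_cnt, buffer);
	for (uint32_t i = 0; i < si->step_cnt; i++)
		pack32(si->step_id[i], buffer);
}

int switch_p_job_suspend_info_unpack(void **suspend_info, buf_t *buffer)
{
	suspend_info_t *si = calloc(1, sizeof(*si));

	if (unpack32(&si->step_cnt, buffer) != SLURM_SUCCESS ||
	    si->step_cnt > 8)
		goto unpack_error;
	for (uint32_t i = 0; i < si->step_cnt; i++)
		if (unpack32(&si->step_id[i], buffer) != SLURM_SUCCESS)
			goto unpack_error;
	*suspend_info = si;
	return SLURM_SUCCESS;

unpack_error:
	free(si);
	*suspend_info = NULL;
	return SLURM_ERROR;
}
```

Validating step_cnt before the read loop guards against a truncated or corrupted buffer, mirroring the unpack_error convention used throughout Slurm's unpack code.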

int switch_p_job_suspend(void *suspend_info, int max_wait);

Description: Suspend a job's use of switch resources. This may reset MPI timeout values and/or release switch resources.

Arguments:
suspend_info    (input) information needed for a job to be preempted, including information for all steps in that job.
max_wait    (input) maximum time interval to wait for the operation to complete, in seconds.

Returns: SLURM_SUCCESS if job's switch resources suspended and SLURM_ERROR otherwise.

int switch_p_job_resume(void *suspend_info, int max_wait);

Description: Resume a job's use of switch resources. This may reset MPI timeout values and/or restore switch resources.

Arguments:
suspend_info    (input) information needed for a job to be resumed, including information for all steps in that job.
max_wait    (input) maximum time interval to wait for the operation to complete, in seconds.

Returns: SLURM_SUCCESS if job's switch resources resumed and SLURM_ERROR otherwise.

void switch_p_job_suspend_info_free(void *suspend_info);

Description: Free the resources allocated to store job suspend/resume information as generated by the switch_p_job_suspend_info_get() and switch_p_job_suspend_info_unpack() functions.

Arguments:
suspend_info    (input) information needed for a job to be preempted, including information for all steps in that job.
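
When suspend_info is a single flat allocation, the free function reduces to free(); a minimal sketch (a plugin whose structure contains nested allocations must release those first):

```c
#include <assert.h>
#include <stdlib.h>

/* Release the opaque suspend/resume structure. free(NULL) is a no-op,
 * so callers need not check for an empty handle. */
void switch_p_job_suspend_info_free(void *suspend_info)
{
	free(suspend_info);
}
```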

Job Step Management Suspend/Resume Functions

int switch_p_job_step_pre_suspend (stepd_step_rec_t *jobstep);

Description: Perform any job step pre-suspend functionality (done before the application PIDs are stopped).

Arguments:
jobstep    (input) structure representing the slurmstepd's view of the job step.

Returns: SLURM_SUCCESS if the job step can be suspended and SLURM_ERROR otherwise.

int switch_p_job_step_post_suspend (stepd_step_rec_t *jobstep);

Description: Perform any job step post-suspend functionality (done after the application PIDs are stopped).

Arguments:
jobstep    (input) structure representing the slurmstepd's view of the job step.

Returns: SLURM_SUCCESS if the job step has been suspended and SLURM_ERROR otherwise.

int switch_p_job_step_pre_resume (stepd_step_rec_t *jobstep);

Description: Perform any job step pre-resume functionality (done before the application PIDs are re-started).

Arguments:
jobstep    (input) structure representing the slurmstepd's view of the job step.

Returns: SLURM_SUCCESS if the job step can be resumed and SLURM_ERROR otherwise.

int switch_p_job_step_post_resume (stepd_step_rec_t *jobstep);

Description: Perform any job step post-resume functionality (done after the application PIDs are re-started).

Arguments:
jobstep    (input) structure representing the slurmstepd's view of the job step.

Returns: SLURM_SUCCESS if the job step has been resumed and SLURM_ERROR otherwise.
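
A plugin with no per-step suspend/resume work to do can stub all four hooks, in the style of the switch/none plugin. A minimal sketch (stepd_step_rec_t is left opaque here; real plugins get it from src/slurmd/slurmstepd/slurmstepd_job.h):

```c
#include <assert.h>

#define SLURM_SUCCESS 0	/* stand-in for the value in slurm/slurm_errno.h */

typedef struct stepd_step_rec stepd_step_rec_t;	/* opaque stand-in */

int switch_p_job_step_pre_suspend(stepd_step_rec_t *jobstep)
{
	return SLURM_SUCCESS;	/* nothing to do before PIDs are stopped */
}

int switch_p_job_step_post_suspend(stepd_step_rec_t *jobstep)
{
	return SLURM_SUCCESS;	/* nothing to do after PIDs are stopped */
}

int switch_p_job_step_pre_resume(stepd_step_rec_t *jobstep)
{
	return SLURM_SUCCESS;	/* nothing to do before PIDs restart */
}

int switch_p_job_step_post_resume(stepd_step_rec_t *jobstep)
{
	return SLURM_SUCCESS;	/* nothing to do after PIDs restart */
}
```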

Error Handling Functions

int switch_p_get_errno (void);

Description: Return the number of a switch specific error.

Arguments: None

Returns: Error number for the last failure encountered by the switch plugin.

char *switch_p_strerror(int errnum);

Description: Return a string description of a switch specific error code.

Arguments: errnum    (input) a switch specific error code.

Returns: Pointer to string describing the error or NULL if no description found in this plugin.
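
A common pattern is to record the last failure in a plugin-local variable and translate only the plugin's own codes, returning NULL for anything else. A sketch with hypothetical error codes (the names and values below are illustrative, not part of the Slurm API):

```c
#include <assert.h>
#include <stddef.h>
#include <string.h>

/* Hypothetical switch-specific error codes; a real plugin numbers its
 * codes above Slurm's generic errno values to avoid collisions. */
enum { ESWITCH_CREDENTIAL = 3001 };

static int plugin_errno = 0;	/* last failure recorded by this plugin */

int switch_p_get_errno(void)
{
	return plugin_errno;
}

char *switch_p_strerror(int errnum)
{
	switch (errnum) {
	case ESWITCH_CREDENTIAL:
		return "Invalid switch credential";
	default:
		return NULL;	/* not one of this plugin's codes */
	}
}
```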

Last modified 27 March 2015

slurm-slurm-15-08-7-1/doc/html/taskplugins.shtml

Slurm Task Plugin Programmer Guide

Overview

This document describes Slurm task management plugins and the API that defines them. It is intended as a resource to programmers wishing to write their own Slurm task management plugins.

Slurm task management plugins are Slurm plugins that implement the Slurm task management API described herein. They would typically be used to control task affinity (i.e. binding tasks to processors). They must conform to the Slurm Plugin API with the following specifications:

const char plugin_type[]
The major type must be "task." The minor type can be any recognizable abbreviation for the type of task management. We recommend, for example:

  • affinity—A plugin that implements task binding to processors. The actual mechanism used for task binding depends upon the available infrastructure, as determined by the "configure" program when Slurm is built and by the value of TaskPluginParam as defined in slurm.conf (the Slurm configuration file).
  • cgroup—Use Linux cgroups for binding tasks to resources.
  • none—A plugin that implements the API without providing any services. This is the default behavior and provides no task binding.
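
The minor type chosen by the plugin author is what administrators select via TaskPlugin in slurm.conf; for example, enabling the affinity plugin with core-level binding might look like:

```
# slurm.conf excerpt: bind tasks to cores using the affinity plugin
TaskPlugin=task/affinity
TaskPluginParam=Cores
```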

const char plugin_name[]
Some descriptive name for the plugin. There is no requirement with respect to its format.

const uint32_t plugin_version
If specified, identifies the version of Slurm used to build this plugin; any attempt to load the plugin from a different version of Slurm will result in an error. If not specified, the plugin may be loaded by Slurm commands and daemons from any version, but this may result in difficult-to-diagnose failures due to changes in the arguments to plugin functions or in other Slurm functions used by the plugin.
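
Putting the three required symbols together, a task/none-style plugin's declarations might look like this. SLURM_VERSION_NUMBER normally comes from slurm/slurm.h; a stand-in value is defined here so the sketch is self-contained.

```c
#include <assert.h>
#include <stdint.h>

/* Stand-in for the macro provided by slurm/slurm.h; 15.08.7 would be
 * encoded as (15 << 16) | (8 << 8) | 7. */
#define SLURM_VERSION_NUMBER 0x0f0807

const char plugin_name[]      = "task NONE plugin";
const char plugin_type[]      = "task/none";
const uint32_t plugin_version = SLURM_VERSION_NUMBER;
```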

Data Objects

The implementation must maintain (though not necessarily directly export) an enumerated errno to allow Slurm to discover as practically as possible the reason for any failed API call. These values must not be used as return values in integer-valued functions in the API. The proper error return value from integer-valued functions is SLURM_ERROR.

API Functions

The following functions must appear. Functions which are not implemented should be stubbed.

int init (void)

Description:
Called when the plugin is loaded, before any other functions are called. Put global initialization here.

Returns:
SLURM_SUCCESS on success, or
SLURM_ERROR on failure.

void fini (void)

Description:
Called when the plugin is removed. Clear any allocated storage here.

Returns: None.

Note: These init and fini functions are not the same as those described in the dlopen (3) system library. The C run-time system co-opts those symbols for its own initialization. The system _init() is called before the Slurm init(), and the Slurm fini() is called before the system's _fini().
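
A minimal init/fini pair, assuming the plugin has no global state to manage:

```c
#include <assert.h>
#include <stdio.h>

#define SLURM_SUCCESS 0	/* stand-in for the value in slurm/slurm_errno.h */

/* Called once when the plugin is loaded; global setup belongs here. */
extern int init(void)
{
	fprintf(stderr, "task plugin loaded\n");
	return SLURM_SUCCESS;
}

/* Called once when the plugin is removed; release global storage here. */
extern void fini(void)
{
	/* nothing allocated in init(), so nothing to free */
}
```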

int task_p_slurmd_batch_request (uint32_t job_id, batch_job_launch_msg_t *req);

Description: Prepare to launch a batch job. Establish node, socket, and core resource availability for it. Executed by the slurmd daemon as user root.

Arguments:
job_id   (input) ID of the job to be started.
req   (input/output) Batch job launch request specification. See src/common/slurm_protocol_defs.h for the data structure definition.

Returns: SLURM_SUCCESS if successful. On failure, the plugin should return SLURM_ERROR and set the errno to an appropriate value to indicate the reason for failure.

int task_p_slurmd_launch_request (uint32_t job_id, launch_tasks_request_msg_t *req, uint32_t node_id);

Description: Prepare to launch a job. Establish node, socket, and core resource availability for it. Executed by the slurmd daemon as user root.

Arguments:
job_id   (input) ID of the job to be started.
req   (input/output) Task launch request specification including node, socket, and core specifications. See src/common/slurm_protocol_defs.h for the data structure definition.
node_id   (input) ID of the node on which resources are being acquired (zero origin).

Returns: SLURM_SUCCESS if successful. On failure, the plugin should return SLURM_ERROR and set the errno to an appropriate value to indicate the reason for failure.

int task_p_slurmd_reserve_resources (uint32_t job_id, launch_tasks_request_msg_t *req, uint32_t node_id);

Description: Reserve resources for the initiation of a job. Executed by the slurmd daemon as user root.

Arguments:
job_id   (input) ID of the job being started.
req   (input) Task launch request specification including node, socket, and core specifications. See src/common/slurm_protocol_defs.h for the data structure definition.
node_id   (input) ID of the node on which the resources are being acquired (zero origin).

Returns: SLURM_SUCCESS if successful. On failure, the plugin should return SLURM_ERROR and set the errno to an appropriate value to indicate the reason for failure.

int task_p_slurmd_suspend_job (uint32_t job_id);

Description: Temporarily release resources previously reserved for a job. Executed by the slurmd daemon as user root.

Arguments: job_id   (input) ID of the job which is being suspended.

Returns: SLURM_SUCCESS if successful. On failure, the plugin should return SLURM_ERROR and set the errno to an appropriate value to indicate the reason for failure.

int task_p_slurmd_resume_job (uint32_t job_id);

Description: Reclaim resources which were previously released using the task_p_slurmd_suspend_job function. Executed by the slurmd daemon as user root.

Arguments: job_id   (input) ID of the job which is being resumed.

Returns: SLURM_SUCCESS if successful. On failure, the plugin should return SLURM_ERROR and set the errno to an appropriate value to indicate the reason for failure.

int task_p_slurmd_release_resources (uint32_t job_id);

Description: Release resources previously reserved for a job. Executed by the slurmd daemon as user root.

Arguments: job_id   (input) ID of the job which has completed.

Returns: SLURM_SUCCESS if successful. On failure, the plugin should return SLURM_ERROR and set the errno to an appropriate value to indicate the reason for failure.

int task_p_pre_setuid (stepd_step_rec_t *job);

Description: task_p_pre_setuid() is called before setting the UID of the user who will launch the job. Executed by the slurmstepd program as user root.

Arguments: job   (input) pointer to the job to be initiated. See src/slurmd/slurmstepd/slurmstepd_job.h for the data structure definition.

Returns: SLURM_SUCCESS if successful. On failure, the plugin should return SLURM_ERROR and set the errno to an appropriate value to indicate the reason for failure.

int task_p_pre_launch_priv (stepd_step_rec_t *job);

Description: task_p_pre_launch_priv() is called by each forked task just after the fork. Note that no particular task-related information is available in the job structure at that time. Executed by the slurmstepd program as user root.

Arguments: job   (input) pointer to the job to be initiated. See src/slurmd/slurmstepd/slurmstepd_job.h for the data structure definition.

Returns: SLURM_SUCCESS if successful. On failure, the plugin should return SLURM_ERROR and set the errno to an appropriate value to indicate the reason for failure.

int task_p_pre_launch (stepd_step_rec_t *job);

Description: task_p_pre_launch() is called prior to exec of the application task. Executed by the slurmstepd program as the job's owner. It is followed by the TaskProlog program (as configured in slurm.conf) and the --task-prolog script (from the srun command line).

Arguments: job   (input) pointer to the job to be initiated. See src/slurmd/slurmstepd/slurmstepd_job.h for the data structure definition.

Returns: SLURM_SUCCESS if successful. On failure, the plugin should return SLURM_ERROR and set the errno to an appropriate value to indicate the reason for failure.
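
For the affinity minor type, this is a natural place to apply a CPU binding. A standalone Linux sketch using sched_setaffinity(2); the cpu_id argument and the bind_task_to_cpu helper are hypothetical — in a real plugin the CPU would be derived from the step layout carried in stepd_step_rec_t.

```c
#define _GNU_SOURCE
#include <assert.h>
#include <sched.h>

#define SLURM_SUCCESS 0
#define SLURM_ERROR  (-1)

/* Pin the calling task to a single CPU. Hypothetical helper, not a
 * Slurm API: cpu_id would come from the job step layout. */
static int bind_task_to_cpu(int cpu_id)
{
	cpu_set_t mask;

	CPU_ZERO(&mask);
	CPU_SET(cpu_id, &mask);
	if (sched_setaffinity(0, sizeof(mask), &mask) < 0)
		return SLURM_ERROR;
	return SLURM_SUCCESS;
}
```

Because task_p_pre_launch() runs in the forked task, the binding applies to exactly one application process and is inherited by anything it execs.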

int task_p_post_term (stepd_step_rec_t *job, slurmd_task_p_info_t *task);

Description: task_p_post_term() is called after termination of the job step. Executed by the slurmstepd program as the job's owner. It is preceded by the --task-epilog script (from the srun command line) followed by the TaskEpilog program (as configured in slurm.conf).

Arguments:
job   (input) pointer to the job which has terminated. See src/slurmd/slurmstepd/slurmstepd_job.h for the data structure definition.
task   (input) pointer to the task which has terminated. See src/slurmd/slurmstepd/slurmstepd_job.h for the data structure definition.

Returns: SLURM_SUCCESS if successful. On failure, the plugin should return SLURM_ERROR and set the errno to an appropriate value to indicate the reason for failure.

int task_p_post_step (stepd_step_rec_t *job);

Description: task_p_post_step() is called after termination of all the tasks of the job step. Executed by the slurmstepd program as user root.

Arguments: job   (input) pointer to the job which has terminated. See src/slurmd/slurmstepd/slurmstepd_job.h for the data structure definition.

Returns: SLURM_SUCCESS if successful. On failure, the plugin should return SLURM_ERROR and set the errno to an appropriate value to indicate the reason for failure.

Last modified 27 March 2015

slurm-slurm-15-08-7-1/doc/html/team.shtml

Slurm Team

Slurm development has been a joint effort of many companies and organizations around the world. Over 170 individuals have contributed to the project. Lead Slurm developers are:

  • Danny Auble (SchedMD)
  • Morris Jette (SchedMD)

Slurm contributors include:

  • Barcelona Supercomputing Center
  • Bull
  • CEA
  • Cray
  • Fred Hutchinson Cancer Research Center
  • HP
  • Intel
  • Lawrence Livermore National Laboratory
  • Los Alamos National Laboratory
  • National Energy Research Scientific Computing Center (NERSC)
  • National University of Defense Technology (NUDT, China)
  • NVIDIA
  • Oak Ridge National Laboratory
  • SchedMD
  • SGI
  • Swiss National Supercomputing Centre


  • Daniel Ahlin (KTH, Sweden)
  • Ramiro Alba (Centre Tecnològic de Tranferència de Calor, Spain)
  • Amjad Majid Ali (Colorado State University)
  • Pär Andersson (National Supercomputer Centre, Sweden)
  • Don Albert (Bull)
  • Ernest Artiaga (Barcelona Supercomputing Center, Spain)
  • Axel Auweter (Leibniz Supercomputing Centre, Germany)

  • Jason W. Bacon
  • Leith Bade (Australian National University)
  • Troy Baer (The University of Tennessee, Knoxville)
  • Susanne Balle (HP)
  • Dominik Bartkiewicz (University of Warsaw, Poland)
  • Ralph Bean (Rochester Institute of Technology)
  • Carlos Bederián (Ciudad Universitaria, Argentina)
  • Alexander Bersenev (Institute of Mathematics and Mechanics, Russia)
  • David Bigagli (SchedMD)
  • Nicolas Bigaouette
  • Anton Blanchard (Samba)
  • Yoann Blein (Bull)
  • Janne Blomqvist (Aalto University, Finland)
  • David Bremer (Lawrence Livermore National Laboratory)
  • Jon Bringhurst (Los Alamos National Laboratory)
  • Franco Broi (ION)
  • Bill Brophy (Bull)
  • John Brunelle (Harvard University FAS Research Computing)
  • Andrew E. Bruno (University at Buffalo)

  • Luis Cabellos (Instituto de Fisica de Cantabria, Spain)
  • Thomas Cadeau (Bull)
  • Kilian Cavalotti (Stanford)
  • Hongjia Cao (National University of Defense Technology, China)
  • Jimmy Cao (Greenplum/EMC)
  • Nate Coraor (Penn State University)
  • Ralph Castain (Intel, Greenplum/EMC, Los Alamos National Laboratory)
  • Sourav Chakraborty (The Ohio State University)
  • François Chevallier (CEA)
  • Daniel Christians (HP)
  • Brian Christiansen (SchedMD)
  • Gilles Civario (Bull)
  • Chuck Clouston (Bull)
  • J.T. Conklin
  • Trevor Cooper (San Diego Supercomputer Center)
  • Ryan Cox (Brigham Young University)

  • Yuri D'Elia (Center for Biomedicine, EURAC Research, Italy)
  • Ino de Bruijn (SciLifeLab, Sweden)
  • Francois Diakhate (CEA, France)
  • Daniele Didomizio (EURAC Research, Italy)
  • Joseph Donaghy (Lawrence Livermore National Laboratory)
  • Chris Dunlap (Lawrence Livermore National Laboratory)

  • Phil Eckert (Lawrence Livermore National Laboratory)
  • Joey Ekstrom (Lawrence Livermore National Laboratory/Brigham Young University)
  • Andrew Elwell
  • Josh England (TGS Management Corporation)
  • Kent Engström (National Supercomputer Centre, Sweden)

  • Roland Fehrenbacher (Q-Leap Networks, Germany)
  • Carles Fenoy (Barcelona Supercomputing Center, Spain)
  • Broi Franco (ION)
  • Damien François (Université catholique de Louvain, Belgium)

  • Jim Garlick (Lawrence Livermore National Laboratory)
  • Didier Gazen (Laboratoire d'Aerologie, France)
  • Raphael Geissert (Debian)
  • Yiannis Georgiou (Bull)
  • David Gloe (Cray)
  • Armin Größlinger (University of Passau, Germany)
  • Mark Grondona (Lawrence Livermore National Laboratory)
  • Dmitri Gribenko
  • Andriy Grytsenko (Massive Solutions Limited, Ukraine)
  • Michael Gutteridge (Fred Hutchinson Cancer Research Center)

  • Anders Halager (Aarhus University, Denmark)
  • Chris Harwell (D. E. Shaw Research)
  • Takao Hatazaki (HP)
  • Matthieu Hautreux (CEA, France)
  • Dave Henseler (Cray)
  • John Hensley (University of Michigan)
  • Chris Holmes (HP)
  • David Höppner
  • Axel Huebl (TU Dresden, Germany)
  • Nathan Huff (North Dakota State University)
  • Doug Hughes (D. E. Shaw Research)
  • Michel Hummel (Thales Group)

  • Martins Innus (University of Buffalo)

  • David Jackson (Adaptive Computing)
  • Alec Jensen (SchedMD)
  • Jacob Jenson (SchedMD)
  • Klaus Joas (University Karlsruhe, Germany)
  • Greg Johnson (Los Alamos National Laboratory)
  • Magnus Jonsson (Umeå University, Sweden)
  • Nicolas Joly (Institut Pasteur, France)
  • Matthias Jurenz (ZIH, TU Dresden, Germany)

  • Jason King (Lawrence Livermore National Laboratory)
  • Yury Kiryanov (Intel)
  • Aaron Knister (Environmental Protection Agency, UMBC)
  • Marlys Kohnke (Cray)
  • Jens Svalgaard Kohrt (University of Southern Denmark)
  • Dorian Krause (Forschungszentrum Juelich GmbH, Germany)
  • Nancy Kritkausky (Bull)
  • Roman Kurakin (Institute of Natural Science and Ecology, Russia)

  • Sam Lang
  • Puenlap Lee (Bull)
  • Dennis Leepow
  • Veronique Legrand (Institut Pasteur, France)
  • Olli-Pekka Lehto (CSC-IT Center for Science Ltd., Finland)
  • Piotr Lesnicki (Bull)
  • Bernard Li (Genome Sciences Centre, Canada)
  • Eric Lin (Bull)
  • David Linden (HP)
  • Pierre Lindenbaum (L'Insitut du Thorax, France)
  • Donald Lipari (Lawrence Livermore National Laboratory)

  • Komoto Masahiro
  • L. Shawn Matott (University at Buffalo)
  • Steven McDougall (SiCortex)
  • Donna Mecozzi (Lawrence Livermore National Laboratory)
  • Sergey Meirovich
  • Bjørn-Helge Mevik (University of Oslo, Norway)
  • Stuart Midgley (Down Under GeoSolutions)
  • Levi Morrison (Brigham Young University)
  • Chris Morrone (Lawrence Livermore National Laboratory)
  • Pere Munt (Barcelona Supercomputing Center, Spain)

  • Denis Nadeau
  • Jon Nelson (Dyn)
  • Mark Nelson (IBM)
  • Jim Nordby (Cray)
  • Michal Novotny (Masaryk University, Czech Republic)

  • Bryan O'Sullivan (Pathscale)
  • Gennaro Oliva (Institute of High Performance Computing and Networking, Italy)
  • Alan Orth (International Livestock Research Institute, Kenya)

  • Juan Pancorbo (Leibniz-Rechenzentrum, Germany)
  • David Parks (Cray)
  • Chrysovalantis Paschoulas (Juelich Supercomputing Centre, Germany)
  • Rémi Palancher
  • Alejandro Lucero Palau (Barcelona Supercomputing Center, Spain)
  • Daniel Palermo (HP)
  • Martin Perry (Bull)
  • Dan Phung (Lawrence Livermore National Laboratory/Columbia University)
  • Ashley Pittman (Quadrics, UK)
  • Josko Plazonic (Princeton University)
  • Artem Polyakov (ISP SB RAS, Russia)
  • Ludovic Prevost (NEC, France)

  • Vijay Ramasubramanian (University of Maryland)
  • Krishnakumar Ravi[KK] (HP)
  • Michael Raymond (SGI)
  • Chris Read
  • Petter Reinholdtsen (University of Oslo, Norway)
  • Gerrit Renker (Swiss National Supercomputing Centre)
  • Andy Riebs (HP)
  • Asier Roa (Barcelona Supercomputing Center, Spain)
  • Manuel Rodríguez-Pascual (CIEMAT, Spain)
  • Andy Roosen (University of Delaware)
  • Miguel Ros (Barcelona Supercomputing Center, Spain)
  • Beat Rubischon (DALCO AG, Switzerland)
  • Simon Ruderich
  • Dan Rusak (Bull)
  • Eygene Ryabinkin (Kurchatov Institute, Russia)

  • Federico Sacerdoti (D. E. Shaw Research)
  • Aleksej Saushev
  • Uwe Sauter (High Performance Computing Center Stuttgart, Germany)
  • Chris Scheller (University of Michigan)
  • Alejandro (Alex) Sanchez (SchedMD)
  • Rod Schultz (Bull)
  • Samuel Senoner (Vienna University of Technology, Austria)
  • David Singleton
  • Filip Skalski (University of Warsaw, Poland)
  • Jason Sollom (Cray)
  • Eric Soyez (Science+Computing)
  • Marcin Stolarek
  • Tyler Strickland (University of Florida)
  • Jeff Squyres (LAM MPI)
  • Deric Sullivan (Government of Canada)
  • Nina Suvanphim (Cray)

  • Prashanth Tamraparni (HP, India)
  • Koji Tanaka (Indiana University)
  • Jimmy Tang (Trinity College, Ireland)
  • Kevin Tew (Lawrence Livermore National Laboratory/Brigham Young University)
  • John Thiltges (University of Nebraska-Lincoln)
  • Adam Todorski (Rensselaer Polytechnic Institute)
  • Stephen Trofinoff (Swiss National Supercomputing Centre)

  • Garrison Vaughan

  • Pythagoras Watson (Lawrence Livermore National Laboratory)
  • Daniel M. Weeks (Rensselaer Polytechnic Institute)
  • Nathan Weeks (Iowa State University)
  • Andy Wettstein (University of Chicago)
  • Tim Wickberg (SchedMD)
  • Chandler Wilkerson (Rice University)
  • Ramiro Brito Willmersdorf (Universidade Federal de Pernambuco, Brazil)
  • Jay Windley (Linux NetworX)
  • Eric Winter
  • Anne-Marie Wunderlin (Bull)

  • Yair Yarom (The Hebrew University of Jerusalem, Israel)
  • Nathan Yee (SchedMD)

Last modified 26 October 2015

slurm-slurm-15-08-7-1/doc/html/testimonials.shtml

Customer Testimonials


"With Oxford providing HPC not just to researchers within the University, but to local businesses and in collaborative projects, such as the T2K and NQIT projects, the SLURM scheduler really was the best option to ensure different service level agreements can be supported. If you look at the Top500 list of the World's fastest supercomputers, they're now starting to move to SLURM. The scheduler was specifically requested by the University to support GPUs and the heterogeneous estate of different CPUs, which the previous TORQUE scheduler couldn't, so this forms quite an important part of the overall HPC facility."

Julian Fielden, Managing Director at OCF

"In 2010, when we embarked upon our mission to port Slurm to our Cray XT and XE systems, we discovered first-hand the high quality software engineering that has gone into the creation of this product. From its very core Slurm has been designed to be extensible and flexible. Moreover, as our work progressed, we discovered the high level of technical expertise possessed by SchedMD who was very quick to respond to our questions with insightful advice, suggestions and clarifications. In the end we arrived at a solution that more than satisfied our needs. The project was so successful we have now migrated all our production science systems to Slurm, including our 20 cabinet Cray XT5 system. The ease with which we have made this transition is testament to the robustness and high quality of the product but also to the no-fuss installation and configuration procedure and the high quality documentation. We have no qualms about recommending Slurm to any facility, large or small, who wish to make the break from the various commercial options available today"

Colin McMurtrie, Head of Systems, Swiss National Supercomputing Centre

"Thank you for Slurm! It is one of the nicest pieces of free software for managing HPC clusters we have come across in a long time. Both of our Blue Genes are running Slurm and it works fantastically well. It's the most flexible, useful scheduling tool I've ever run across."

Adam Todorski, Computational Center for Nanotechnology Innovations, Rensselaer Polytechnic Institute

"Awesome! I just read the manual, set it up and it works great. I tell you, I've used Sun Grid Engine, Torque, PBS Pro and there's nothing like Slurm."

Aaron Knister, Environmental Protection Agency

"Today our largest IBM computers, BlueGene/L and Purple, ranked #1 and #3 respectively on the November 2005 Top500 list, use SLURM. This decision reduces large job launch times from tens of minutes to seconds. This effectively provides us with millions of dollars worth of additional compute resources without additional cost. It also allows our computational scientists to use their time more effectively. Slurm is scalable to very large numbers of processors, another essential ingredient for use at LLNL. This means larger computer systems can be used than otherwise possible with a commensurate increase in the scale of problems that can be solved. Slurm's scalability has eliminated resource management from being a concern for computers of any foreseeable size. It is one of the best things to happen to massively parallel computing."

Dona Crawford, Associate Director, Lawrence Livermore National Laboratory

"We are extremely pleased with Slurm and strongly recommend it to others because it is mature, the developers are highly responsive and it just works."

Jeffrey M. Squyres, Pervasive Technology Labs at Indiana University

"We adopted Slurm as our resource manager over two years ago when it was at the 0.3.x release level. Since then it has become an integral and important component of our production research services. Its stability, flexibility and performance has allowed us to significantly increase the quality of experience we offer to our researchers."

Dr. Greg Wettstein, Ph.D., North Dakota State University

"SLURM is the coolest thing since the invention of UNIX... We now can control who can log into [compute nodes] or at least can control which ones to allow logging into. This will be a tremendous help for users who are developing their apps."

Dennis Gurgul, Research Computing, Partners Health Care

"SLURM is a great product that I'd recommend to anyone setting up a cluster, or looking to reduce their costs by abandoning an existing commercial resource manager."

Josh Lothian, National Center for Computational Sciences, Oak Ridge National Laboratory

"SLURM is under active development, is easy to use, works quite well, and most important to your harried author, it hasn't been a nightmare to configure or manage. (Strong praise, that.) I would rank Slurm as the best of the three open source batching systems available, by rather a large margin."

Bryan O'Sullivan, Pathscale

"SLURM scales perfectly to the size of MareNostrum without noticeable performance degradation; the daemons running on the compute nodes are light enough to not interfere with the applications' processes and the status reports are accurate and concise, allowing us to spot possible anomalies in a single sight."

Ernest Artiaga, Barcelona Supercomputing Center

"SLURM was a great help for us in implementing our own very concise job management system on top of it which could be tailored precisely to our needs, and which at the same time is very simple to use for our customers. In general, we are impressed with the stability, scalability, and performance of Slurm. Furthermore, Slurm is very easy to configure and use. The fact that SLURM is open-source software with a free license is also advantageous for us in terms of cost-benefit considerations."

Dr. Wilfried Juling, Director, Scientific Supercomputing Center, University of Karlsruhe

"I had missed Slurm initially when looking for software for a cluster and ended up installing Torque. When I found out about Slurm later, it took me only a couple of days to go from knowing nothing about it to having a SLURM cluster that ran better than the Torque one. I just wanted to say that your focus on more "secondary" stuff in cluster software, like security, usability and ease of getting started is *really* appreciated."

Christian Hudson, ApSTAT Technologies

"SLURM has been adopted as the parallel allocation infrastructure used in HP's premier cluster stack, XC System Software. Slurm has permitted easy scaling of parallel applications on cluster systems with thousands of processors, and has also proven itself to be highly portable and efficient between interconnects including Quadrics, QsNet, Myrinet, Infiniband and Gigabit Ethernet."

Bill Celmaster, XC Program Manager, Hewlett-Packard Company

Last modified 14 April 2015

slurm-slurm-15-08-7-1/doc/html/topo_ex1.gif (binary GIF image data omitted)
slurm-slurm-15-08-7-1/doc/html/topo_ex2.gif (binary GIF image data omitted)
qŠ$DZ5C(FFLb# @#4щ(F5ZoVy0"clS%6J`SG[$|Gq_HĄ1sLhD'V &- ~:!"T.f$XtlP_+R4JPrB$°/U2h([qD:fCɁQ&Z*|$ Spe^T}츢$N I?9 ["\--a<' ()ӝCzD,MD̍[#Gnyy#>[D Rt" 4ZPi;B؃X8=R?6ѭ` S!Ȗ3@ tī@&\ c;-=Á;@ȠҋQ(,!"V91{&Đȩک9* 7,0F: ,B$! 4ESÞ!6T>=!4A$D\ ;` D IL)F)P1<TڤI8kCj) +0 XZ\|ǂ.K?@,RgD.̧$`0["'C%#m9H?ၛa:=pD1 > ccHX#HcGh:ĻK MP LbMRx 1XbIdǍ1 I?L7TFbɖ=ĉn= rj,,6=37)MizA0@D)uDKY;|fJ[i4›5qAaˍq{ٛtň Bh!@vk*v2MpL˛?-z˸ܺ. gL;Ą*+?򌰇bꋊ`⢺i(+|4xo6| A,髡&OO4)_J6AMCH@F(E=R7ջ}9оMcHd$#zBJ"+%)MIb*M'M0g̰h ^_1XYbJLCۃ .mT0E1%'z 4IRNXAQˍHWӍ8HiӡL5=;J T]TLEDU./`TQ!b44IQCȕ1`:P(xDٴ;Si9]"VTcE֖3O='LUɨ0`O"al9NhDdĕ6PSmܗe5<άjw֑0*t*CW1gc`R'QkEtLK /h^T`Ar)UZ{;QY/ut| &&DlQ8b5%,Ny ؀^-(=k1HeLGD}U ؇xy,}GPYIוk^,ۇXrB5TDb <(Oh +3p+WKzV-6S]XUtDUŵ\qǝȍ2.U κYB-ҳQU6 6zΉntjS 55ޝ4]j]|:\  XH+iV;e@K|8)>kPـ `߅>zv.>h7vhY黼0h7G:㋡YCܵC?WY` t%`_: 2+h O0]ف -"%3퍋U,$h4 ^+bA Du(`.m>b[p +`hxf(ZCw[\$SWB@\͆>`NPgze׃&@EFpUh3vdF.6-ZFE[&D|FBtVh7بhH SPgPzs Z>fEV 82qewtt@C| v(KXeZ$D1) DH)">) }h^DI:KTaSd%^ye@M:?Ӏd!͸؁tjHAYF2`y=F cQ$֗-H`۴Zh%>>+މ}ek̕{kJdz`T` r\C-M~Q= s`o,,;ԁ5)k;XrH}27sBsa:TSrNR^Һ)'|B #Yx@q 5݃G:dJ{Hfzps"<YE~Z4 PR@)'8nJ|&w*/BČ/́pƐ"G,IìS0l%̘2gҬYRlxHNx^X)A"KR2OO1H=^G@RiJJ(MS⍘u' #GN]axf}WB iNLզ\8&/ݕG.m)9w(=b!!B^UwjU/$ UyJ"[cOS딮Fxn{D'jl ]zr#nLY_VNq(ڛ[wU4 8 kTP$dX')} vAMG>LX^AQJoDe„A)x6&-CW0zFAUm.uhYa凔2QZy%$4KY>Ģ}M.vUBDd5gz(TI0I!ķh}iuAՄW8sΥ`đ81w b{MSBd}I>PzIR~x*byUMtꟁnH_{S:%*M´)r8Ha')~PMqbe3MԤϩ@UrYǐL)4Ғ:&AjHR/W*8{R#ȨQn^铢i閴A 0˚nX,Vk)8Ssر}V-{0SD uP(E+MT]!0b@]auI(T<)N9Z:@W]Fān͌L\t[纎A\zz:Χ-_D,k-iхG4啻P HëFQ5(Yfw͟qT`O(DPELvTWE5Dj:EͼQbNEI$G*1:WǶG{TYŪqm+BV*`7ȷ".Dӝ_;ܩ}ÁŋG Ԏ1YI58B=7 Jpз~[}.51LkϠ 5tyP<B ԥĈ8?Gpp`ø= 뢓9 ooJT,k{O$Ռv@l}#V Îd 9P8ᰓ@xE s/@b$Xא%B+0z AHn-q'5Wv8>M (hAZUy{KxJx<( W}y֣;+`B`i3$лœvIA[31%3i9ޣH)ӑ]sJTor_,aG"#@+mo` $HAFm4: 1uh%.4nC1e"H&)xe#@T6=,U8hG%%QIK38(j">heh3{\1'K` ʖ=< lKj(+ B,+hQg`mS`< D9#Іј\@,ƺVG9ۗp@C fЮ.FjqR=ot2.f.IVhٖ`bL3F056!H'x@I hXW W*LHT5*jc7BK8yC 1˥ ٚ@n8tp␈ekTp5"U:#XYX8f!Bz UG@ށ5"2OP25Ov:T vX: ddV/T֒?wu3N3[,QB=MIo }6d#N{v%= ̞=:Ô߉ $L(B0oeg~I:cp 1v+\/H&#j dƷ4UO,ę0QcM-egԥ.\ښ5x_0 7Н%d@[K4mSxB3^p%)Ct,Io6]3&(bːp2c/EOOQVC 
ӭ#''B/+ֵ֞kM)0,s~FZnabgNMIy(CUHOK3^Ҡ3rِYeOD=3:4t\S"(^QByN[Ǎ#:3G AD UNO-1QC@efw`X~ c,Y5}c1[BH`_1ԛ >DT  luT|lڰu i%-\)F i'\#)U!s,][\ٌ zD^MR}M0aԛ ݏUn֨DuiƘua\m 7aIO9{ G$)LJF\"SLB͠gX"V T15^@Oơ*jtRĐZ0D ŠЉ#" V/F\/i>(p aU`xEri(eU<CFL#qR߸cL8bbxd)(m|%VEJ唂yqhp }`&bE^FedL|G2F"OB)KWhRhL5V%8%3O N$b`5zcaW2L %Q"ji4 uэiL!ZY)B"D 3MZF\BD$AMdeAAZ ^gdMԑJjEU%5ږfqf\䤀fUT6oREGI -ֆ1o+4ޑ@$f,#wsFt>F,)eC"\0^fFZxdiaj^P߻Y_WIIhhES꽗@OK1AZdYg*c] 'f vn܀RhLH.Q`H4 1TaL8_~}֦MH!~E I'u:Z(>v̌^NV⨒j̎FfR~KDXbT2O^eVz.FUb$=H|bMz[DhHTeL2<52AP,jcf*8i " ȪMjg.n:B"ߞR"_vTc&BK<^ǵ*`.+n0X>1"Oh+^puLZU*6,]FkIHi18? kޫn Mʒh͹@.~E, !5zߞeYMs*~S8Sl*Dl!15*!^pXoĄ"rIl0:K^*8(j U0WJ]meI%Ll  .v"̖DʘA-n٤|,M$LFEѩ$*]iũ6z2iН(ld陱X|rcދ|w皧O,%i8]e+.ݚ'B,6ej'nn )!D}F+| p]kxyIʷ:pխWp8'~G)e2lZɶkَرn-}5nl+#-ԪKʒU'V,IS̯jd=QnHHekA lj>.\[wKE\K`GX1Xʉ(8-ߖ b=q.X$mEqF Dn)LT-q\f\! %!#Q+rPt(ؕZpgdrɥ Ε23 /+3M_bg('ʭ)xp.M 4`{j޼1N FH>yE.2#v)pi:O7['H9/ gtjCl<;,k0)O1qH {XͤeF0rOҸdE;dhp|a!hG4p]0 5DzaYkMwW10@ svlKPX01dO"^S5tn2XI ->o#$+ʴPGfYly:Vg[$+/} 0gYS   -!;9:KE^HtŹPc 0mT #]K/7g9D%'2Nh 0qhzz8K|&?j&{ .{5CH֏ qK 3t|ŒcTx^J5fz'?/*qٷ/ Xt(EAEFs#x!h;1%[FӔ bJ)8\ pcF9vdH#Iro,/L1#JM!LV$RHhPCnA╂lZ 3L}lX7Jb $Ho!cu1%#S~p`5^H1 V#Di*FrʬϾ9w#u\[6[sloJLF,Vu֦J;}>`b|wݳa+L6.ى'tf/oGfkj3 |PP͇ |SL+p6 ȓRHiΒ>ZANB]OGKV3[憣n{mwq VtAK zq~q)XM_Ĝ'Xftx'1ǜcYD@7O/syݐ>ҁ}o/8v'l M>u/?ٗ=y'l]x_e^wә{(at9bB/6}PC4!@/C_ aØ%a`G:6x/A0BA78 S8 gK */<,8~E4fae]袊J X~P@BчeD&QKc`X "H-BXH12CGud xIq$'IrPֱ $O98Yle ; ˢr-?b\ +W SU,jLrۥ35h/ 1yLmIl7AkNּf6 nFӛR89NyrĜD7Mvޓ(\f<"zqgA1eS\? =A//ZQ~p4{J!AjT$\ i}8?W HXR(҂Ԩ 7̡ E[rBwj?!CxKġB~r9jUbU(@JWt F! L5%KTEU"2 nT~y qɮ3m%Y%˻V&EQz+.O<5 t%[XP~ʮ*JTڃPqIYS]yNo& IÇU&xH rI/ӕrȝtzԛ }1T8Ȟs"p/zsߣtO#|? j}?B>6 AMۏ);_m}Yopo/N ހ3$?&n 9jfhzb q`j p". 
, EGb N#dR`9i b0Rjx?fVa  xp!!f0VxaKߪL 9%P-+p6pc!d PD3 -aR dTfO ccGh&4 LZp1ޯLfc.Bd ?=%29p/Gh3C 8KG3nK%9N/bR+*RB9TAL9ѣ"8d>@%$>ǔRC<3G5:= DP%:*$4R*:#~u%SC23l*CM+LO*F2]5%V$ "n1>>^?y[0̣]:HY.(_=43V7rTK!tP=21Hó43CD_T_SII7Y%psZqOmXL%tV׎]Y%bELEc1 'EF[7Vޔ%+:I4T*UDS5"bnbY<Ӱ`G@eqSOlUKGv %U5*S/#@tot3*?e6hA^G`u d.76NVMJc#/Ƶ Vm Y!udrq<$hSC%Jp 0WjuXkc)3 "bE4>WX2)w&6h%N/:)"[%TwtyKw[eW)8CW{Yh!WjtMQ[)wE _cy%Sotdwr4/`oe7U46K} Ue6kg7g7x}TotxiCl%ȴ61Wi#lTOfI1s"UR"}o@N0i~іf3BW#}Fօsd sIN4jօwS]IhOF9R#׉iRhW6ug61T)hQksunBBvoօ=zwI=qw HX֐RvB9Wwt) s-%X iX|~7$֘.xw~Krt+wG/xq:Yxi)5Q⑕Nv׋#7{MR,f;SkYQ]}Op@rq/w#* B9BNRY՛% 3_ zvqW3Yt+)zq}\î5[ؖwQdq9w ZBEWN#ϸ+ ږ0~آhcZ3^k]8FOR zE%XN([8m /Pp my4#^Y@[ziב)% #7Z)f٧mCQw?#E;QrI;DQ|* =bqUm+{yXZq ]%W% V ;]=MI9% 8˵Ð=}w#.zWqKqiO[ Nyz mZڑ^کڅ7y#q=i\XYOQ:~ 钫U[?"?2ETG6+j$AqW)?[۷jtxY6Kne^;"z=5p_y#81<}{yɵ # E{\gq^ރ6 UZ9 >߁hUy0'"$RޝVb+S4$: @{߉ԉ0e>z<]TW#]{]I=Iݥ5uã]ދ͟UӉ޴G?þ>} iȃ}K}4ݹ9]6O}ţ7G?}5%U\w;7}~:-!9 9֕؟U~_ą>S.80!B)F0)F1ƍ;zxa;N$GUFX1ȱEkڼ3 >Fq%*W6#)F(3ԩTJ(Ȟ>nƥ6͓;xҙЮ HkyX2YCq4(/GF ⥮-5N7K)#]g(;kӘQ$kv_K#GeύS޽y_W8SpGG #cF]7K5$S%ׄR=vyDJFzi 'Gvc8\,6!F 銘;`g(Ih[>kC}9/3G'iڗ-:*).l!(1C' F;9vSW8$@^&(0 У9(|w 5w|? F$iߣGN&ʩChAWJ9kǰd6N\QӺv)g#`KuD.vl33UJiid΅5:ŏ r 繾)%_,2JXn4B[$ix,Fe4ArIe\v>*:Ql8A M)ܭ~[`l ʖ$*1!/sY g8&]qq hCvcA pس]oS s^w2.۝R6)䓋XRDI@c R\YYs LtaԌ5LF.};VJC>c[IiH2f,ґXve8Ѫtg%E#B'?cbvʈgֳzTti&f.'Ԧ'ӓKĺ"/p*/n*jn04hRN|֖M5=P-zfpvpSYЁv+btͪY#H,S1;LVT_|ln·̨4h:; ]xm.u87JJ ;^M{.[e=J);ʎ3O/-CqsLUOv uO3_{=}^}>K`9pbpVrcw7$!cyseUQU]I"TGAxiƒBmHH.8eQn,sB!>Նh?IYіlH>tji2gDqS4qI80jThfTR| 'V?5EqMFjs h}z~A+QψxoǂgRj}_tD"GgtD2QdfV,ֶ$mF7BeWtX#rm7^f0w=]n&19Iنx|GWfprAG/ g`D‰h}Ȃ)]5p+"%tFGQYdzz#(Qmۘq҄iǑs|72׋Ǵi|,2)P&(rp8{ Y'Th׈YfIԗ4W&ȍ́Xx{G7BL2Xyᤃ #kʷn9tG"YGDr _ͨW#8j撋(geZV\-y Ԓc7$MA`j4\i1}>j'쁂ͱxi&ⅻIh 4FTGvH]etW#腝!/9Ff }jy22ZiritH4jp  p ! #$#0-z,J/ʡ@5i;' :60 #0 3JDz2"/j 1Sj(*ʤ7zIֶ/_RZ0k pJ'Btz0JC~l:njtNꨎzj Lp੕znPz`pڨJZL઴کNZ*ʬ qɪn@_ʭ:ΚNZڨ Jκ:ꯛ ګ* [X2˱pʠ wб#_@ $;b!{в+b{,Z_ ? 
* 5F{4k;KuLpNJm`!'˵+[1{LG^۲bqp; ki۲u`vp mo۶Q˱Z;[;zP>˷&u@!kF `:kbp7+1\NAkD;$˴{?;kkK`ckxK+q;;[|K(+˱~+ ${7۸˵u`-[KK(K; ;۱+p+۷j$+* < ۽̱kPCr˾ , | ,  Lk 3l \= <lD\!#%')T?\/18\5,7 Aa =( !Krl,  B<2\qB<|G 0 wy{}+쿑î<|èl<ʗL#ܱP? κ 0D g` =.!-! ^ᶽ _`p %>-k$}=@L`-j 0h[N ^a Ú^~ߐ j8bN_`Lpp, >p.rn ۴Cbl}c^g x>};5 I#&nꤾ)~_nα6nގ˱뫾g^dV?pܱ " F4c.z  .~x^00h>BkUNݱlߎaԔ-< -g@ >`ϱM ]-~@ԀNљ.8x>/`'0'nْP9;=p5.].fsS_q? F-ln#n뾱G\_`*ܱ` < Қf/a|֕Ხa[`?`aδO FNߴ#[wBQ>mon ~ϛ lܟ@+=@тP&a  >Ps(y Ȅ+n"__T3$!M*#!DSJhSM(}tY1JTmB8 AOm~\,!]հ i {9I>%ے?6f#ɦGW*/ $!ɢ+^pl9"h1QsܲMFZyWcGlZ~C<ݕb ABg [vjC]{6_yM|Hu/;)"f:ig .rg;'ɭZB%AmB ̅`gZg" "*LgO*qG=THWgsGa&~S6f՜ =l,OHq]X,A?cټs> t)IK(wdTΕzP76xwHz> }bFdu)<o&k@dP%TDuv~ 5~g6t>Sw%T W8..pW ҮR1s%SH#I'kcBl4QNFscxiρe ҏiv>qI#+)22'{H`R!#U6y*HrҹHY~e<{3.ӏmt*M?Pãf8f7LRB:UN?*dW yQ%>cLd4ʬg3 M|N6M|zS'$w3ԧ:ѐ+(H {4!(?Rt(M'Aiy"O 'Cjz-'EisbԤDgViRӤ%5)LRԥ8e:OӦ;uRӧV9O~Ө!m+[=:WjHU՟S@;Xְ=,X6sld#;JֲŲvvg; Hi% )$,,jakٙcm= lnK/ 5dVu nt;]V׺nv]v׻%s;^׼Et[׽蝂x;_B5o~_jp<`Fp`7p%t=hBЇFthF7яt%=iJWҗt5iNwӟuE=jRԧFuUjVկue=kZַuuk^׿v=lbFvlf7φv=mjWvmnww=nrFwսnvw=ozwo~x>pGxp7x%>qWx5qwyE>rX'GyUr/ye>s7yus?zЅ>tGGzҕt7Ozԥ>uWWzֵuw_{~;slurm-slurm-15-08-7-1/doc/html/topology.shtml000066400000000000000000000206401265000126300207650ustar00rootroot00000000000000

Topology Guide

Slurm can be configured to support topology-aware resource allocation to optimize job performance. Slurm supports several modes of operation, one to optimize performance on systems with a three-dimensional torus interconnect and another for a hierarchical interconnect. The hierarchical mode of operation supports both fat-tree or dragonfly networks, using slightly different algorithms.

Slurm's native mode of resource selection is to consider the nodes as a one-dimensional array. Jobs are allocated resources on a best-fit basis. For larger jobs, this minimizes the number of sets of consecutive nodes allocated to the job.

Three-dimension Topology

Some larger computers rely upon a three-dimensional torus interconnect. The IBM BlueGene computers are one example of this; they have a highly constrained resource allocation scheme, essentially requiring that jobs be allocated a set of nodes logically forming a rectangular prism. Slurm has a plugin specifically written for BlueGene to select appropriate nodes for jobs, change network switch routing, boot nodes, etc as described in the BlueGene User and Administrator Guide.

The Sun Constellation and Cray systems also have three-dimensional torus interconnects, but do not require that jobs execute in adjacent nodes. On those systems, Slurm only needs to allocate resources to a job which are nearby on the network. Slurm accomplishes this using a Hilbert curve to map the nodes from a three-dimensional space into a one-dimensional space. Slurm's native best-fit algorithm is thus able to achieve a high degree of locality for jobs. For more information, see Slurm's documentation for Sun Constellation and Cray XT and XE systems.

Hierarchical Networks

Slurm can also be configured to allocate resources to jobs on a hierarchical network to minimize network contention. The basic algorithm is to identify the lowest level switch in the hierarchy that can satisfy a job's request and then allocate resources on its underlying leaf switches using a best-fit algorithm. Use of this logic requires a configuration setting of TopologyPlugin=topology/tree.

Note that Slurm uses a best-fit algorithm on the currently available resources. This may result in an allocation with more than the optimum number of switches. The user can request a maximum number of switches for the job as well as a maximum time willing to wait for that number using the --switches option with the salloc, sbatch and srun commands. The parameters can also be changed for pending jobs using the scontrol and squeue commands.

Slurm code may eventually be provided to gather network topology information directly; for now, the network topology information must be included in a topology.conf configuration file as shown in the examples below. The first example describes a three level switch in which each switch has two children. Note that the SwitchName values are arbitrary and used only for bookkeeping purposes, but a name must be specified on each line. The leaf switch descriptions contain a SwitchName field plus a Nodes field to identify the nodes connected to the switch. Higher-level switch descriptions contain a SwitchName field plus a Switches field to identify the child switches. Slurm's hostlist expression parser is used, so the node and switch names need not be consecutive (e.g. "Nodes=tux[0-3,12,18-20]" and "Switches=s[0-2,4-8,12]" will parse fine).

An optional LinkSpeed option can be used to indicate the relative performance of the link. The units used are arbitrary and this information is currently not used. It may be used in the future to optimize resource allocations.

The first example shows what a topology would look like for an eight node cluster in which all switches have only two children as shown in the diagram (not a very realistic configuration, but useful for an example).

# topology.conf
# Switch Configuration
SwitchName=s0 Nodes=tux[0-1]
SwitchName=s1 Nodes=tux[2-3]
SwitchName=s2 Nodes=tux[4-5]
SwitchName=s3 Nodes=tux[6-7]
SwitchName=s4 Switches=s[0-1]
SwitchName=s5 Switches=s[2-3]
SwitchName=s6 Switches=s[4-5]

The next example is for a network with two levels and each switch has four connections.

# topology.conf
# Switch Configuration
SwitchName=s0 Nodes=tux[0-3]   LinkSpeed=900
SwitchName=s1 Nodes=tux[4-7]   LinkSpeed=900
SwitchName=s2 Nodes=tux[8-11]  LinkSpeed=900
SwitchName=s3 Nodes=tux[12-15] LinkSpeed=1800
SwitchName=s4 Switches=s[0-3]  LinkSpeed=1800
SwitchName=s5 Switches=s[0-3]  LinkSpeed=1800
SwitchName=s6 Switches=s[0-3]  LinkSpeed=1800
SwitchName=s7 Switches=s[0-3]  LinkSpeed=1800

As a practical matter, listing every switch connection results in a slower scheduling algorithm as Slurm optimizes job placement. The application performance may achieve little benefit from such optimization. Listing the leaf switches with their nodes plus one top level switch should result in good performance for both applications and Slurm. The previous example might be configured as follows:

# topology.conf
# Switch Configuration
SwitchName=s0 Nodes=tux[0-3]
SwitchName=s1 Nodes=tux[4-7]
SwitchName=s2 Nodes=tux[8-11]
SwitchName=s3 Nodes=tux[12-15]
SwitchName=s4 Switches=s[0-3]
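Once a topology.conf like the one above is installed and the slurmctld daemon restarted, the result can be checked from the command line. A brief sketch, assuming TopologyPlugin=topology/tree is set in slurm.conf (the node name in the second command is just an example):

```shell
# Verify that slurmctld parsed topology.conf as expected
scontrol show topology

# A node or switch name may be given to limit the output to the
# switches associated with it
scontrol show topology tux8
```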

Note that compute nodes on switches that lack a common parent switch can be used, but no job will span leaf switches without a common parent. For example, it is legal to remove the line "SwitchName=s4 Switches=s[0-3]" from the above topology.conf file. In that case, no job will span more than four compute nodes on any single leaf switch. This configuration can be useful if one wants to schedule multiple physical clusters as a single logical cluster under the control of a single slurmctld daemon.

For systems with a dragonfly network, configure Slurm with TopologyPlugin=topology/tree plus TopologyParam=dragonfly. If a single job can not be entirely placed within a single network leaf switch, the job will be spread across as many leaf switches as possible in order to optimize the job's network bandwidth.

NOTE: Slurm first identifies the network switches which provide the best fit for pending jobs and then selects the nodes with the lowest "weight" within those switches. If optimizing resource selection by node weight is more important than optimizing network topology then do NOT use the topology/tree plugin.

NOTE: The topology.conf file for an Infiniband switch can be automatically generated using the ib2slurm tool found here:
https://github.com/fintler/ib2slurm.

User Options

For use with the topology/tree plugin, user can also specify the maximum number of leaf switches to be used for their job with the maximum time the job should wait for this optimized configuration. The syntax for this option is "--switches=count[@time]". The system administrator can limit the maximum time that any job can wait for this optimized configuration using the SchedulerParameters configuration parameter with the max_switch_wait option.
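For example, the sbatch request below asks that a 16 node allocation span at most two leaf switches, waiting up to 60 minutes for such a placement; the job script name and job ID are hypothetical:

```shell
# Ask for at most 2 leaf switches, waiting up to 60 minutes for them
sbatch --switches=2@60 -N16 my_job.sh

# Relax the constraint on a job that is still pending (job ID is an example)
scontrol update JobId=1234 Switches=4@30
```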

Environment Variables

If the topology/tree plugin is used, two environment variables will be set to describe the job's network topology. Note that these environment variables will contain different data for the tasks launched on each node. Use of these environment variables is at the discretion of the user.

SLURM_TOPOLOGY_ADDR: The value will be set to the names of the network switches which may be involved in the job's communications, from the system's top level switch down to the leaf switch, ending with the node name. A period is used to separate each hardware component name.

SLURM_TOPOLOGY_ADDR_PATTERN: This is set only if the system has the topology/tree plugin configured. The value will be set to the component types listed in SLURM_TOPOLOGY_ADDR. Each component will be identified as either "switch" or "node". A period is used to separate each hardware component type.
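Using the eight node example topology above, a task launched on tux0 might see values such as these (the switch names simply follow that example):

```shell
# Inside a job step on tux0, with the three level example topology above
echo $SLURM_TOPOLOGY_ADDR          # e.g. s6.s4.s0.tux0
echo $SLURM_TOPOLOGY_ADDR_PATTERN  # e.g. switch.switch.switch.node
```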

Last modified 9 October 2015

slurm-slurm-15-08-7-1/doc/html/topology_plugin.shtml

Topology Plugin Programmer Guide

Overview

This document describes Slurm topology plugin and the API that defines them. It is intended as a resource to programmers wishing to write their own Slurm topology plugin.

Slurm topology plugins are Slurm plugins that convey system topology information so that Slurm is able to optimize resource allocations and minimize communication overhead. The plugins must conform to the Slurm Plugin API with the following specifications:

const char plugin_type[]
The major type must be "topology." The minor type specifies the type of topology mechanism. We recommend, for example:

  • 3d_torus—Optimize placement for a three dimensional torus.
  • none—No topology information.
  • tree—Optimize placement based upon a hierarchy of network switches.

const char plugin_name[]
Some descriptive name for the plugin. There is no requirement with respect to its format.

const uint32_t plugin_version
If specified, identifies the version of Slurm used to build this plugin and any attempt to load the plugin from a different version of Slurm will result in an error. If not specified, then the plugin may be loaded by Slurm commands and daemons from any version, however this may result in difficult to diagnose failures due to changes in the arguments to plugin functions or changes in other Slurm functions used by the plugin.

The actions performed by these plugins vary widely. In the case of 3d_torus, the nodes in the configuration file are re-ordered so that nodes which are nearby in the one-dimensional table are also nearby in logical three-dimensional space. In the case of tree, a table is built to reflect network topology and that table is later used by the select plugin to optimize placement. Note carefully, however, the versioning discussion below.

Data Objects

The implementation must maintain (though not necessarily directly export) an enumerated errno to allow Slurm to discover as practically as possible the reason for any failed API call. Plugin-specific enumerated integer values may be used when appropriate.

These values must not be used as return values in integer-valued functions in the API. The proper error return value from integer-valued functions is SLURM_ERROR. The implementation should endeavor to provide useful and pertinent information by whatever means is practical. Successful API calls are not required to reset any errno to a known value. However, the initial value of any errno, prior to any error condition arising, should be SLURM_SUCCESS.

API Functions

The following functions must appear. Functions which are not implemented should be stubbed.

int init (void)

Description:
Called when the plugin is loaded, before any other functions are called. Put global initialization here.

Returns:
SLURM_SUCCESS on success, or
SLURM_ERROR on failure.

void fini (void)

Description:
Called when the plugin is removed. Clear any allocated storage here.

Returns: None.

Note: These init and fini functions are not the same as those described in the dlopen (3) system library. The C run-time system co-opts those symbols for its own initialization. The system _init() is called before the Slurm init(), and the Slurm fini() is called before the system's _fini().

int topo_build_config(void);

Description: Generate topology information.

Returns: SLURM_SUCCESS or SLURM_ERROR on failure.

bool topo_generate_node_ranking(void)

Description: Determine if this plugin will reorder the node records based upon each job's node rank field.

Returns: true if node reordering is supported, false otherwise.

int topo_get_node_addr(char* node_name, char** paddr, char** ppatt);

Description: Get Topology address of a given node.

Arguments:
node_name (input) name of the targeted node
paddr (output) returns the topology address of the node and connected switches. If there are multiple switches at some level in the hierarchy, they will be represented using Slurm's hostlist expression (e.g. "s0" and "s1" are reported as "s[0-1]"). Each level in the hierarchy is separated by a period. The last element will always be the node's name (i.e. "s0.s10.nodename")
ppatt (output) returns the pattern of the topology address. Each level in the hierarchy is separated by a period. The final element will always be "node" (i.e. "switch.switch.node")

Returns: SLURM_SUCCESS or SLURM_ERROR on failure.

Last modified 27 March 2015

slurm-slurm-15-08-7-1/doc/html/tres.shtml

Trackable RESources (TRES)

A TRES is a resource that can be tracked for usage or used to enforce limits against. A TRES is a combination of a Type and a Name. Types are predefined. Current TRES Types are:

  • BB (burst buffers)
  • CPU
  • Energy
  • GRES
  • License
  • Mem (Memory)
  • Node

slurm.conf settings

  • AccountingStorageTRES

    Used to define which TRES are to be tracked on the system. By default CPU, Energy, Memory and Node are tracked. This will be the case whether specified or not. The following example:

    AccountingStorageTRES=gres/craynetwork,license/iop1,bb/cray

    will track cpu, energy, memory and nodes along with a GRES called craynetwork as well as a license called iop1. It will also track usage on a Cray burst buffer. Whenever these resources are used on the cluster they are recorded. TRES are automatically set up in the database on the start of the slurmctld.

    The TRES that require associated names are BB, GRES, and License. As seen in the above example, GRES and License are typically different on each system. The BB TRES is named the same as the burst buffer plugin being used. In the above example we are using the Cray burst buffer plugin.

  • PriorityWeightTRES

    A comma separated list of TRES Types and weights that sets the degree that each TRES Type contributes to the job's priority.

    PriorityWeightTRES=CPU=1000,Mem=2000,GRES/gpu=3000

    Applicable only if PriorityType=priority/multifactor and if AccountingStorageTRES is configured with each TRES Type. The default values are 0.

  • TRESBillingWeights

    For each partition this option is used to define the billing weights of each TRES type that will be used in calculating the usage of a job.

    Billing weights are specified as a comma-separated list of TRES=Weight pairs.

    Any TRES Type is available for billing. Note that the base unit for memory and burst buffers is megabytes.

    By default the billing of TRES is calculated as the sum of all TRES types multiplied by their corresponding billing weight.

    The weighted amount of a resource can be adjusted by adding a suffix of K,M,G,T or P after the billing weight. For example, a memory weight of "mem=.25" on a job allocated 8GB will be billed 2048 (8192MB *.25) units. A memory weight of "mem=.25G" on the same job will be billed 2 (8192MB * (.25/1024)) units.

    When a job is allocated 1 CPU and 8 GB of memory on a partition configured with:

    TRESBillingWeights="CPU=1.0,Mem=0.25G,GRES/gpu=2.0"

    the billable TRES will be:

    (1*1.0) + (8*0.25) + (0*2.0) = 3.0

    If PriorityFlags=MAX_TRES is configured, the billable TRES is calculated as the MAX of individual TRES' on a node (e.g. cpus, mem, gres) plus the sum of all global TRES' (e.g. licenses). Using the same example above, the billable TRES will be:

    MAX(1*1.0, 8*0.25) + (0*2.0) = 2.0

    If TRESBillingWeights is not defined then the job is billed against the total number of allocated CPUs.

    NOTE: TRESBillingWeights is only used when calculating fairshare and doesn't affect job priority directly as it is currently not used for the size of the job. If you want TRES' to play a role in the job's priority then refer to the PriorityWeightTRES option.

    NOTE: As with PriorityWeightTRES only TRES defined in AccountingStorageTRES are available for TRESBillingWeights.

sacct

sacct can be used to view the TRES of each job by adding "tres" to the --format option.

sacctmgr

sacctmgr is used to view the various TRES available globally in the system. sacctmgr show tres will do this.

sreport

Before 15.08, sreport would only report on CPU usage. It will now work on different TRES. Simply using the comma separated input option --tres= will have sreport generate reports available for the requested TRES types. More information about these reports can be found on the sreport manpage.
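Putting the three commands together, a TRES inspection session might look like the following; the job ID is an example, and the report name follows the sreport man page:

```shell
# TRES recorded for one job in the accounting database (job ID is an example)
sacct -j 1234 --format=jobid,elapsed,tres

# All TRES defined globally in the database
sacctmgr show tres

# Cluster utilization broken down by the requested TRES types
sreport cluster utilization --tres=cpu,mem,gres/gpu
```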

Last modified 19 October 2015

slurm-slurm-15-08-7-1/doc/html/troubleshoot.shtml

Slurm Troubleshooting Guide

This guide is meant as a tool to help system administrators or operators troubleshoot Slurm failures and restore services. The Frequently Asked Questions document may also prove useful.

Slurm is not responding

  1. Execute "scontrol ping" to determine if the primary and backup controllers are responding.
  2. If it responds for you, this could be a networking or configuration problem specific to some user or node in the cluster.
  3. If not responding, directly login to the machine and try again to rule out network and configuration problems.
  4. If still not responding, check if there is an active slurmctld daemon by executing "ps -el | grep slurmctld".
  5. If slurmctld is not running, restart it (typically as user root using the command "/etc/init.d/slurm start"). You should check the log file (SlurmctldLogFile in the slurm.conf file) for an indication of why it failed. If it keeps failing, you should contact the slurm team for help at slurm-dev@schedmd.com.
  6. If slurmctld is running but not responding (a very rare situation), then kill and restart it (typically as user root using the commands "/etc/init.d/slurm stop" and then "/etc/init.d/slurm start").
  7. If it hangs again, increase the verbosity of debug messages (increase SlurmctldDebug in the slurm.conf file) and restart. Again check the log file for an indication of why it failed. At this point, you should contact the slurm team for help at slurm-dev@schedmd.com.
  8. If it continues to fail without an indication as to the failure mode, restart without preserving state (typically as user root using the commands "/etc/init.d/slurm stop" and then "/etc/init.d/slurm startclean"). Note: All running jobs and other state information will be lost.

Jobs are not getting scheduled

This is dependent upon the scheduler used by Slurm. Execute the command "scontrol show config | grep SchedulerType" to determine this. For any scheduler, you can check priorities of jobs using the command "scontrol show job".

  • If the scheduler type is builtin, then jobs will be executed in the order of submission for a given partition. Even if resources are available to initiate jobs immediately, it will be deferred until no previously submitted job is pending.
  • If the scheduler type is backfill, then jobs will generally be executed in the order of submission for a given partition with one exception: later submitted jobs will be initiated early if doing so does not delay the expected execution time of an earlier submitted job. In order for backfill scheduling to be effective, user jobs should specify reasonable time limits. If jobs do not specify time limits, then all jobs will receive the same time limit (that associated with the partition), and the ability to backfill schedule jobs will be limited. The backfill scheduler does not alter job specifications of required or excluded nodes, so jobs which specify nodes will substantially reduce the effectiveness of backfill scheduling. See the backfill documentation for more details.
  • If the scheduler type is wiki, this represents The Maui Scheduler or Moab Cluster Suite. Please refer to its documentation for help.

Jobs and nodes are stuck in COMPLETING state

This is typically due to non-killable processes associated with the job. Slurm will continue to attempt terminating the processes with SIGKILL, but some jobs may be stuck performing I/O and non-killable. This is typically due to a file system problem and may be addressed in a couple of ways.

  1. Fix the file system and/or reboot the node. -OR-
  2. Set the node to a DOWN state and then return it to service ("scontrol update NodeName=<node> State=down Reason=hung_proc" and "scontrol update NodeName=<node> State=resume"). This permits other jobs to use the node, but leaves the non-killable process in place. If the process should ever complete the I/O, the pending SIGKILL should terminate it immediately. -OR-
  3. Use the UnkillableStepProgram and UnkillableStepTimeout configuration parameters to automatically respond to processes which can not be killed, by sending email or rebooting the node. For more information, see the slurm.conf documentation.

Nodes are getting set to a DOWN state

  1. Check the reason why the node is down using the command "scontrol show node <name>". This will show the reason why the node was set down and the time when it happened. If there is insufficient disk space, memory space, etc. compared to the parameters specified in the slurm.conf file then either fix the node or change slurm.conf.
  2. If the reason is "Not responding", then check communications between the control machine and the DOWN node using the command "ping <address>" being sure to specify the NodeAddr values configured in slurm.conf. If ping fails, then fix the network or addresses in slurm.conf.
  3. Next, log in to a node that Slurm considers to be in a DOWN state and check if the slurmd daemon is running with the command "ps -el | grep slurmd". If slurmd is not running, restart it (typically as user root using the command "/etc/init.d/slurm start"). You should check the log file (SlurmdLogFile in the slurm.conf file) for an indication of why it failed. You can get the status of the running slurmd daemon by executing the command "scontrol show slurmd" on the node of interest. Check the value of "Last slurmctld msg time" to determine if the slurmctld is able to communicate with the slurmd. If it keeps failing, you should contact the slurm team for help at slurm-dev@schedmd.com.
  4. If slurmd is running but not responding (a very rare situation), then kill and restart it (typically as user root using the commands "/etc/init.d/slurm stop" and then "/etc/init.d/slurm start").
  5. If still not responding, try again to rule out network and configuration problems.
  6. If still not responding, increase the verbosity of debug messages (increase SlurmdDebug in the slurm.conf file) and restart. Again check the log file for an indication of why it failed. At this point, you should contact the slurm team for help at slurm-dev@schedmd.com.
  7. If still not responding without an indication as to the failure mode, restart without preserving state (typically as user root using the commands "/etc/init.d/slurm stop" and then "/etc/init.d/slurm startclean"). Note: All jobs and other state information on that node will be lost.

Networking and configuration problems

  1. Check the controller and/or slurmd log files (SlurmctldLog and SlurmdLog in the slurm.conf file) for an indication of why it is failing.
  2. Check for consistent slurm.conf and credential files on the node(s) experiencing problems.
  3. If this is user-specific problem, check that the user is configured on the controller computer(s) as well as the compute nodes. The user doesn't need to be able to login, but his user ID must exist.
  4. Check that compatible versions of Slurm exist on all of the nodes (execute "sinfo -V" or "rpm -qa | grep slurm"). The Slurm version numbers contain three digits, which represent the major, minor and micro release numbers in that order (e.g. 14.11.3 is major=14, minor=11, micro=3). Changes in the RPCs (remote procedure calls) and state files will only be made if the major and/or minor release number changes. Slurm daemons will support RPCs and state files from the two previous minor releases (e.g. a version 15.08.x SlurmDBD will support slurmctld daemons and commands with a version of 14.03.x or 14.11.x).
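The compatibility rule above can be sketched as a small shell helper. The ordered release list and the function names are our own illustrative assumptions (the sequence is not derivable from the numbers alone); the check simply asks whether the peer is at most two releases behind the daemon:

```shell
# Ordered list of the releases discussed above (year.month numbering);
# this list is an assumption for the sketch, not queried from Slurm.
RELEASES="14.03 14.11 15.08"

release_index() {
  # Print the zero-based position of release $1 in RELEASES, or -1 if unknown.
  idx=0
  for r in $RELEASES; do
    [ "$r" = "$1" ] && { echo "$idx"; return; }
    idx=$((idx + 1))
  done
  echo "-1"
}

rpc_compatible() {
  # Usage: rpc_compatible <daemon_release> <peer_release>
  # A daemon supports peers from its own release and the two previous ones.
  d=$(release_index "$1")
  p=$(release_index "$2")
  if [ "$d" -lt 0 ] || [ "$p" -lt 0 ]; then
    echo "unknown"
  elif [ $((d - p)) -ge 0 ] && [ $((d - p)) -le 2 ]; then
    echo "compatible"
  else
    echo "incompatible"
  fi
}
```

With this sketch, a 15.08 SlurmDBD accepts 14.03 and 14.11 daemons, while the reverse direction (an older daemon talking to a newer peer) is rejected.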

Bluegene: Why is a block in an error state

  1. Check the controller log file (SlurmctldLog in the slurm.conf file) for an indication of why it is failing. (grep for update_block:)
  2. If the block entered the error state because of a system problem, such as a failed boot or a bad nodecard, you will need to fix the problem and then manually set the block to free.

Bluegene: How to make it so no jobs will run on a block

  1. Set the block state to be in error manually.
  2. When you are ready to run jobs again on the block manually set the block to free.

Bluegene: Static blocks in bluegene.conf file not loading

  1. Run "smap -Dc"
  2. When it comes up type "load /path/to/bluegene.conf".
  3. This should report which block is having problems loading and why.
  4. Note the blocks in the bluegene.conf file must be in the same order smap created them or you may encounter some problems loading the configuration.
  5. If you need help creating a loadable bluegene.conf file, see "Bluegene: How to make a bluegene.conf file that will load in Slurm" below.

Bluegene: How to free a block(s) manually

  • Using sfree
    1. To free a specific block run "sfree -b BLOCKNAME".
    2. To free all the blocks on the system run "sfree -a".
  • Using scontrol
    1. Run "scontrol update state=FREE BlockName=BLOCKNAME".

Bluegene: How to set a block in an error state manually

  1. Run "scontrol update state=ERROR BlockName=BLOCKNAME".

Bluegene: How to set a sub base partition which doesn't have a block already created in an error state manually

  1. Run "scontrol update state=ERROR subBPName=IONODE_LIST".
  2. IONODE_LIST is a list of the ionodes you want to down in a certain base partition i.e. bg000[0-3] will down the first 4 ionodes in base partition 000.
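The bracketed list notation above can be expanded mechanically. Here is a minimal sketch handling only the single-range form used in the example (the function name is ours, and real Slurm hostlist syntax is considerably richer):

```shell
# Expand a simple single-range ionode list, e.g. "bg000[0-3]" ->
# "bg0000 bg0001 bg0002 bg0003". Names without a bracket pass through.
expand_ionode_list() {
  case "$1" in
    *\[*-*\])
      prefix=${1%%\[*}            # text before the '['
      range=${1##*\[}             # text after the '[' ...
      range=${range%\]}           # ... with the trailing ']' removed
      lo=${range%-*}
      hi=${range#*-}
      out=""
      i=$lo
      while [ "$i" -le "$hi" ]; do
        out="$out$prefix$i "
        i=$((i + 1))
      done
      echo "${out% }"
      ;;
    *) echo "$1" ;;
  esac
}
```

So "scontrol update state=ERROR subBPName=bg000[0-3]" addresses ionodes bg0000 through bg0003.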

Bluegene: How to make a bluegene.conf file that will load in Slurm

  1. See the Bluegene admin guide

Last modified 15 April 2015

slurm-slurm-15-08-7-1/doc/html/tutorial_intro_files.tar
slurm-slurm-15-08-7-1/doc/html/tutorials.shtml000066400000000000000000000056271265000126300211470ustar00rootroot00000000000000

Slurm Tutorials

Slurm Workload Manager: Architecture, Configuration and Use

Introduction to the Slurm Workload Manager for users and system administrators, plus some material for Slurm programmers: Slurm Workload Manager

Introduction to Slurm Tutorial

Introduction to the Slurm Resource Manager for users and system administrators. Tutorial covers Slurm architecture, daemons and commands. Learn how to use a basic set of commands. Learn how to build, configure, and install Slurm.

Introduction to Slurm Tools

This video gives a basic introduction to using sbatch, squeue, scancel and scontrol show job on the computers at Brigham Young University, Fulton Supercomputing Lab.

Introduction to Slurm Tools

Slurm Database Usage

Slurm Resource Manager database for users and system administrators. Tutorial covers Slurm architecture for database use, accounting commands, resource limits, fair share scheduling, and accounting configuration.

Slurm Database Usage video on YouTube (in two parts)
  1. Slurm Database Usage, Part 1
  2. Slurm Database Usage, Part 2

Last modified 10 September 2013

slurm-slurm-15-08-7-1/doc/html/usage_pies.gif
slurm-slurm-15-08-7-1/doc/html/user_permissions.shtml000066400000000000000000000030431265000126300225200ustar00rootroot00000000000000

User Permissions

Slurm supports several special user permissions as described below.

Operator

These users can add, modify, and remove any database object (user, account, etc.), and add other operators. On a SlurmDBD-served cluster, these users can

  • View information that is blocked to regular users by a PrivateData flag
  • Create/Alter/Delete Reservations

Set using an AdminLevel option in the user's database record. For configuration information, see Accounting and Resource Limits.

Admin

These users have the same level of privileges as an operator in the database. They can also alter anything on a served slurmctld as if they were the SlurmUser or root.

An AdminLevel option in the user's database record. For configuration information, see Accounting and Resource Limits.

Coordinator

A special privileged user, typically an account manager, who can add users or sub-accounts to the accounts they coordinate. This should be a trusted person, since they can change limits on account and user associations within their realm.

Set using a table in Slurm's database defining users and the accounts for which they can serve as coordinators. For configuration information, see the sacctmgr man page.

Last modified 22 September 2015

slurm-slurm-15-08-7-1/doc/html/wckey.shtml000066400000000000000000000052361265000126300202370ustar00rootroot00000000000000

Workload Characterization Key (WCKey) Management

A WCKey is an orthogonal way to do accounting against possibly unrelated accounts. This can be useful where users from different accounts are all working on the same project.

slurm(dbd).conf settings

Including "WCKey" in your AccountingStorageEnforce option in your slurm.conf file will enforce WCKeys per job. This means only jobs with valid WCKeys (WCKeys previously added through sacctmgr) will be allowed to run.

If you wish to track the value of a job's WCKey, you must set the TrackWCKey option in both the slurm.conf and slurmdbd.conf files. This ensures the WCKey is tracked on each job. If you set "WCKey" in your AccountingStorageEnforce line, TrackWCKey is set automatically in slurm.conf, but it still needs to be added to your slurmdbd.conf file.
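Putting the two paragraphs above together, the settings look something like the fragment below. This is illustrative only: slurm.conf and slurmdbd.conf are separate files, and the enforcement token spelling ("wckeys") should be checked against your version's slurm.conf man page.

```shell
# slurm.conf fragment (illustrative)
AccountingStorageEnforce=wckeys
TrackWCKey=yes

# slurmdbd.conf fragment (illustrative) -- TrackWCKey is not inherited
# from slurm.conf, so it must be set here as well:
TrackWCKey=yes
```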

sbatch/salloc/srun

Each submitting tool has the --wckey= option that can set the WCKey for a job. [SBATCH|SALLOC|SLURM]_WCKEY can also be set in the environment to set the WCKey. If no WCKey is given, the WCKey for the job will be set to the user's default WCKey for the cluster, which can be set up with sacctmgr. If no WCKey is specified, the accounting record is appended with a '*' to signify the WCKey was not specified. This is useful for a manager to determine whether users are specifying their WCKeys.
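The selection order described above (command-line flag, then environment, then the user's default, with unspecified keys marked by a '*') can be sketched as a small helper. The function is purely illustrative and not part of Slurm:

```shell
resolve_wckey() {
  # Usage: resolve_wckey <flag_value> <env_value> <user_default>
  # Empty arguments mean "not set".
  if [ -n "$1" ]; then
    echo "$1"        # explicitly requested via --wckey=
  elif [ -n "$2" ]; then
    echo "$2"        # taken from SBATCH_WCKEY / SALLOC_WCKEY / SLURM_WCKEY
  elif [ -n "$3" ]; then
    echo "$3*"       # user's default; '*' records that none was specified
  else
    echo "*"
  fi
}
```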

sacct

Sacct can be used to view the WCKey by adding "wckey" to the --format option. You can also single out jobs by using the --wckeys= option, which reports information only about jobs that ran with the specified WCKeys.

sacctmgr

Sacctmgr is used to manage WCKeys. You can add and remove WCKeys from users or list them.

You add a user to a WCKey much like you do an account, only the WCKey doesn't need to be created beforehand, e.g.

sacctmgr add user da wckey=secret_project

You can remove them from a WCKey in the same fashion.

sacctmgr del user da wckey=secret_project

To alter a user's default WCKey, run a line like

sacctmgr mod user da cluster=snowflake set defaultwckey=secret_project

This will change the default WCKey for user "da" on cluster "snowflake" to "secret_project". If you want this for all clusters, just remove the cluster= option.

sreport

Information about reports available for WCKeys can be found on the sreport manpage.

Last modified 14 November 2014

slurm-slurm-15-08-7-1/doc/jsspp/000077500000000000000000000000001265000126300162315ustar00rootroot00000000000000slurm-slurm-15-08-7-1/doc/jsspp/Makefile000066400000000000000000000024121265000126300176700ustar00rootroot00000000000000# The following comments are to remind me how the automatic variables work: # $@ - target # $% - target member # $< - First prerequisite # $? - All (newer) prerequisites # $^ - All prerequisites # $+ - $^ but with repetitions # $* - $* stem of pattern (for "foo.c" in %.c:%.o this would be "foo") # 'info "GNU make"': "Using variables": "Automatic" also lists a few more. REPORT = jsspp TEX = ../common/llnlCoverPage.tex $(REPORT).tex FIGDIR = ../figures FIGS = $(FIGDIR)/allocate-init.eps \ $(FIGDIR)/arch.eps \ $(FIGDIR)/connections.eps \ $(FIGDIR)/entities.eps \ $(FIGDIR)/interactive-job-init.eps \ $(FIGDIR)/queued-job-init.eps \ $(FIGDIR)/slurm-arch.eps PLOTS = $(FIGDIR)/times.eps BIB = ../common/project.bib references.bib %.eps: %.dia dia --nosplash -e $@ $< %.eps: %.gpl gnuplot $< %.eps: %.fig fig2dev -Lps $< $@ %.eps: %.obj tgif -print -eps $< %.ps: %.dvi dvips -K -t letter -o $(@F) $(}: Specifies the number of processors cpus) required for each task (or process) to run. This may be useful if the job is multithreaded and requires more than one cpu per task for optimal performance. The default is one cpu per process. \item {\tt nodes=[-]}: Specifies the number of nodes required by this job. The node count may be either a specific value or a minimum and maximum node count separated by a hyphen. The partition's node limits supersede those of the job. If a job's node limits are completely outside of the range permitted for it's associated partition, the job will be left in a PENDING state. The default is to allocate one cpu per process, such that nodes with one cpu will run one task, nodes with 2 cpus will run two tasks, etc. 
The distribution of processes across nodes may be controlled using this option along with the {\tt nproc} and {\tt cpus-per-task} options. \item {\tt nprocs=}: Specifies the number of processes to run. Specification of the number of processes per node may be achieved with the {\tt cpus-per-task} and {\tt nodes} options. The default is one process per node unless {\tt cpus-per-task} explicitly specifies otherwise. \end{itemize} \subsubsection{Constraint Specification} These options describe what configuration requirements of the nodes which can be used. \begin{itemize} \item {\tt constraint=list}: Specify a list of constraints. The list of constraints is a comma separated list of features that have been assigned to the nodes by the slurm administrator. If no nodes have the requested feature, then the job will be rejected. \item {\tt contiguous=[yes|no]}: demand a contiguous range of nodes. The default is "yes". \item {\tt mem=}: Specify a minimum amount of real memory per node (in megabytes). \item {\tt mincpus=}: Specify minimum number of cpus per node. \item {\tt partition=name}: Specifies the partition to be used. There will be a default partition specified in the SLURM configuration file. \item {\tt tmp=}: Specify a minimum amount of temporary disk space per node (in megabytes). \item {\tt vmem=}: Specify a minimum amount of virtual memory per node (in megabytes). \end{itemize} \subsubsection{Other Resource Specification} \begin{itemize} \item {\tt batch}: Submit in "batch mode." srun will make a copy of the executable file (a script) and submit therequest for execution when resouces are available. srun will terminate after the request has been submitted. The executable file will run on the first node allocated to the job and must contain srun commands to initiate parallel tasks. \item {\tt exclude=[filename|node\_list]}: Request that a specific list of hosts not be included in the resources allocated to this job. 
The host list will be assumed to be a filename if it contains a "/"character. If some nodes are suspect, this option may be used to avoid using them. \item {\tt immediate}: Exit if resources are not immediately available. By default, the request will block until resources become available. \item {\tt nodelist=[filename|node\_list]}: Request a specific list of hosts. The job will contain at least these hosts. The list may be specified as a comma-separated list of hosts, a range of hosts (host[1-5,7,...] for example), or a filename. The host list will be assumed to be a filename if it contains a "/" character. \item {\tt overcommit}: Overcommit resources. Normally the job will not be allocated more than one process per cpu. By specifying this option, you are explicitly allowing more than one process per cpu. \item {\tt share}: The job can share nodes with other running jobs. This may result in faster job initiation and higher system utilization, but lower application performance. \item {\tt time=}: Establish a time limit to terminate the job after the specified number of minutes. If the job's time limit exceed's the partition's time limit, the job will be left in a PENDING state. The default value is the partition's time limit. When the time limit is reached, the job's processes are sent SIGXCPU followed by SIGKILL. The interval between signals is configurable. \end{itemize} All parameters may be specified using single letter abbreviations ("-n" instead of "--nprocs=4"). Environment variable can also be used to specify many parameters. Environment variable will be set to the actual number of nodes and processors allocated In the event that the node count specification is a range, the application could inspect the environment variables to scale the problem appropriately. To request four processes with one cpu per task the command line would look like this: {\em srun --nprocs=4 --cpus-per-task=1 hostname}. 
Note that if multiple resource specifications are provided, resources will be allocated so as to satisfy the all specifications. For example a request with the specification {\tt nodelist=dev[0-1]} and {\tt nodes=4} may be satisfied with nodes {\tt dev[0-3]}. \subsection{The Maui Scheduler and SLURM} {\em The integration of the Maui Scheduler with SLURM was just beginning at the time this paper was written. Full integration is anticipated by the time of the conference. This section will be modified as needed based upon that experience.} The Maui Scheduler is integrated with SLURM through the previously described plugin mechanism. The previously described SLURM commands are used for all job submissions and interactions. When a job is submitted to SLURM, a Maui Scheduler module is called to establish its initial priority. Another Maui Scheduler module is called at the beginning of each SLURM scheduling cycle. Maui can use this opportunity to change priorities of pending jobs or take other actions. \subsection{DPCS and SLURM} DPCS is a meta-batch system designed for use within a single administrative domain (all computers have a common user ID space and exist behind a firewall). DPCS presents users with a uniform set of commands for a wide variety of computers and underlying resource managers (e.g. LoadLeveler on IBM SP systems, SLURM on Linux clusters, NQS, etc.). It was developed in 1991 and has been in production use since 1992. While Globus\cite{Globus2002} has the ability to span administrative domains, both systems could interface with SLURM in a similar fashion. Users submit jobs directly to DPCS. The job consists of a script and an assortment of constraints. Unless specified by constraints, the script can execute on a variety of different computers with various architectures and resource managers. DPCS monitors the state of these computers and performs backfill scheduling across the computers with jobs under its management. 
When DPCS decides that resources are available to immediately initiate some job of its choice, it takes the following actions: \begin{itemize} \item Transfers the job script and assorted state information to the computer upon which the job is to execute. \item Allocates resources for the job. The resource allocation is performed as user {\em root} and SLURM is configured to restrict resource allocations in the relevent partitions to user {\em root}. This prevents user resource allocations to that partition except through DPCS, which has complete control over job scheduling there. The allocation request specifies the target user ID, job ID (to match DPCS' own numbering scheme) and specific nodes to use. \item Spawns the job script as the desired user. This script may contain multiple instantiations of \srun\ to initiate multiple job steps. \item Monitor the job's state and resource consumption. This is performed using DPCS daemons on each compute node recording CPU time, real memory and virtual memory consumed. \item Cancel the job as needed when it has reached its time limit. The SLURM job is initiated with an infinite time limit. DPCS mechanisms are used exclusively to manage job time limits. \end{itemize} Much of the SLURM functionality is left unused in the DPCS controlled environment. It should be noted that DPCS is typically configured to not control all partitions. A small (debug) partition is typically configured for smaller jobs and users may directly use SLURM commands to access that partition. slurm-slurm-15-08-7-1/doc/jsspp/intro.tex000066400000000000000000000141021265000126300201040ustar00rootroot00000000000000\section{Introduction} Linux clusters, often constructed by using commodity off-the-shelf (COTS) componnets, have become increasingly populuar as a computing platform for parallel computation in recent years, mainly due to their ability to deliver a high perfomance-cost ratio. 
Researchers have built and used small to medium size clusters for various applications~\cite{BeowulfWeb,LokiWeb}. The continuous decrease in the price of COTS parts, in conjunction with the good scalability of the cluster architecture, has now made it feasible to economically build large-scale clusters with thousands of processors~\cite{MCRWeb,PCRWeb}.

An essential component that is needed to harness such a computer is a resource management system. A resource management system (or resource manager) performs such crucial tasks as scheduling user jobs, monitoring machine and job status, launching user applications, and managing machine configuration. An ideal resource manager should be simple, efficient, scalable, fault-tolerant, and portable.

Unfortunately there are no open-source resource management systems currently available which satisfy these requirements. A survey~\cite{Jette02} has revealed that many existing resource managers have poor scalability and fault-tolerance, rendering them unsuitable for large clusters having thousands of processors~\cite{LoadLevelerWeb,LoadLevelerManual}. While some proprietary cluster managers are suitable for large clusters, they are typically designed for particular computer systems and/or interconnects~\cite{RMS,LoadLevelerWeb,LoadLevelerManual}. Proprietary systems can also be expensive and unavailable in source-code form. Furthermore, proprietary cluster management functionality is usually provided as a part of a specific job scheduling system package. This mandates the use of the given scheduler just to manage a cluster, even though the scheduler does not necessarily meet the needs of the organization that hosts the cluster. A clear separation of the cluster management functionality from scheduling policy is desired. This observation led us to set out to design a simple, highly scalable, and portable resource management system.
The result of this effort is the Simple Linux Utility for Resource Management (SLURM\footnote{A tip of the hat to Matt Groening and the creators of {\em Futurama}, where Slurm is the most popular carbonated beverage in the universe.}). SLURM was developed with the following design goals:

\begin{itemize}
\item {\em Simplicity}: SLURM is simple enough to allow motivated end-users to understand its source code and add functionality. The authors will avoid the temptation to add features unless they are of general appeal.

\item {\em Open Source}: SLURM is available to everyone and will remain free. Its source code is distributed under the GNU General Public License~\cite{GPLWeb}.

\item {\em Portability}: SLURM is written in the C language, with a GNU {\em autoconf} configuration engine. While initially written for Linux, other UNIX-like operating systems should be easy porting targets. SLURM also supports a general purpose {\em plugin} mechanism, which permits a variety of different infrastructures to be easily supported. The SLURM configuration file specifies which set of plugin modules should be used.

\item {\em Interconnect independence}: SLURM supports UDP/IP based communication as well as the Quadrics Elan3 and Myrinet interconnects. Adding support for other interconnects is straightforward and utilizes the plugin mechanism described above.

\item {\em Scalability}: SLURM is designed for scalability to clusters of thousands of nodes. Jobs may specify their resource requirements in a variety of ways, including requirements options and ranges, potentially permitting faster initiation than otherwise possible.

\item {\em Robustness}: SLURM can handle a variety of failure modes without terminating workloads, including crashes of the node running the SLURM controller. User jobs may be configured to continue execution despite the failure of one or more nodes on which they are executing.
Nodes allocated to a job are available for reuse as soon as the job(s) allocated to that node terminate. If some nodes fail to complete job termination in a timely fashion due to hardware or software problems, only the scheduling of those tardy nodes will be affected.

\item {\em Secure}: SLURM employs crypto technology to authenticate users to services and services to each other, with a variety of options available through the plugin mechanism. SLURM does not assume that its networks are physically secure, but does assume that the entire cluster is within a single administrative domain with a common user base across the entire cluster.

\item {\em System administrator friendly}: SLURM is configured via a simple configuration file and minimizes distributed state. Its configuration may be changed at any time without impacting running jobs. Heterogeneous nodes within a cluster may be easily managed. SLURM interfaces are usable by scripts and its behavior is highly deterministic.
\end{itemize}

The main contribution of our work is that we have provided a readily available tool that anybody can use to efficiently manage clusters of different size and architecture. SLURM is highly scalable\footnote{It was observed that it took less than five seconds for SLURM to launch a 1900-task job over 950 nodes on a recently installed cluster at Lawrence Livermore National Laboratory.}. SLURM can be easily ported to any cluster system with minimal effort using its plugin capability, and can be used with any meta-batch scheduler or Grid resource broker~\cite{Gridbook} through its well-defined interfaces.

The rest of the paper is organized as follows. Section 2 describes the architecture of SLURM in detail. Section 3 discusses the services provided by SLURM, followed by a performance study of SLURM in Section 4. A brief survey of existing cluster management systems is presented in Section 5.
%Section 6 describes how the SLURM can be used with more sophisticated external schedulers.
Concluding remarks and future development plans for SLURM are given in Section 6.

% doc/jsspp/jsspp.tex
\documentclass[11pt]{article}
\usepackage{graphics}
\usepackage{epsfig}
\usepackage{hhline}
\setlength{\textheight}{8.5in}
\setlength{\textwidth}{6in}
\setlength{\oddsidemargin}{0.15in}
\setlength{\parindent}{1pc}
\setlength{\topmargin}{-0.0625in}

% define some macros:
\newcommand{\munged}{{\tt munged}}
\newcommand{\srun}{{\tt srun}}
\newcommand{\scancel}{{\tt scancel}}
\newcommand{\squeue}{{\tt squeue}}
\newcommand{\scontrol}{{\tt scontrol}}
\newcommand{\sinfo}{{\tt sinfo}}
\newcommand{\slurmctld}{{\tt slurmctld}}
\newcommand{\slurmd}{{\tt slurmd}}

\title{SLURM: Simple Linux Utility for Resource Management\thanks{
This document was prepared as an account of work sponsored by an agency of the United States Government. Neither the United States Government nor the University of California nor any of their employees, makes any warranty, express or implied, or assumes any legal liability or responsibility for the accuracy, completeness, or usefulness of any information, apparatus, product, or process disclosed, or represents that its use would not infringe privately owned rights. Reference herein to any specific commercial product, process, or service by trade name, trademark, manufacturer, or otherwise, does not necessarily constitute or imply its endorsement, recommendation, or favoring by the United States Government or the University of California. The views and opinions of authors expressed herein do not necessarily state or reflect those of the United States Government or the University of California, and shall not be used for advertising or product endorsement purposes. This work was performed under the auspices of the U. S. Department of Energy by the University of California, Lawrence Livermore National Laboratory under Contract No. W-7405-Eng-48.
Document UCRL-JC-147996.}}

\author{Morris A. Jette \and Andy B. Yoo \and Mark Grondona}
% We cheat here to easily get the desired alignment
%\date{\{jette1,mgrondona\}@llnl.gov}
\date{Lawrence Livermore National Laboratory\\
Livermore, CA 94551\\
\{jette1 $\mid$ yoo2 $\mid$ mgrondona\}@llnl.gov}

\begin{document}
\maketitle
\input{abstract}
\input{intro}
\input{architecture}
\input{services}
\input{survey}
%\input{interaction}
\input{perf}
\input{conclusions}

\section*{Acknowledgments}
Additional programmers responsible for the development of SLURM include: Chris Dunlap, Joey Ekstrom, Jim Garlick, Kevin Tew and Jay Windley.

\newpage
\bibliographystyle{abbrv}
\bibliography{references}

\end{document}

% doc/jsspp/perf.tex
\section{Performance Study}

\begin{figure}[htb]
\centerline{\epsfig{file=../figures/times.eps}}
\caption{Time to execute /bin/hostname with various node counts}
\label{timing}
\end{figure}

We were able to perform some SLURM tests on a 1000 node cluster at LLNL. Some development was still underway at that time and tuning had not been performed. The results for executing a simple 'hostname' program on two tasks per node and various node counts are shown in Figure~\ref{timing}. We found SLURM performance to be comparable to the Quadrics Resource Management System (RMS)~\cite{RMS} for all job sizes and about 80 times faster than IBM LoadLeveler~\cite{LoadLevelerWeb,LoadLevelerManual} at tested job sizes.

% doc/jsspp/references.bib
@string{micro = "{I}{E}{E}{E} {M}icro"}
@string{icdcs = "{I}nt'l {C}onf. on {D}istributed {C}omputing {S}ystems"}
@string{superapplication = "{I}nternational {J}ournal of {S}upercomputer {A}pplications"}

@article{Anderson95, author = "Thomas E. Anderson and David E. Culler and David A.
Patterson", title = "{A} {C}ase for {N}{O}{W} ({N}etworks of {W}orkstations)", journal = micro, volume = "15(1)", year = 1995, month = feb, pages = "54--64"}

@article{Bailey91, author = "{D. H. Bailey et al.}", title = "{T}he {N}{A}{S} {P}arallel {B}enchmarks", journal = superapplication, year = 1991, volume = 5, pages = "63--73"}

@techreport{Jette02, author = "Moe Jette and Chris Dunlap and Jim Garlick and Mark Grondona", title = "{Survey of Batch/Resource Management-Related System Software}", institution = "{Lawrence Livermore National Laboratory}", year = 2002, number = "N/A"}

@techreport{Suzuoka95, author = "T. Suzuoka and J. Subhlok and T. Gross", title = "{E}valuating {J}ob {S}cheduling {T}echniques for {H}ighly {P}arallel {C}omputers", institution = "{S}chool of {C}omputer {S}cience, {C}arnegie {M}ellon {U}niversity", year = 1995, number = "CMU-CS-95-149"}

@techreport{Bailey93, author = "{D. H. Bailey et al.}", title = "{T}he {N}{A}{S} {P}arallel {B}enchmarks", institution = "{N}{A}{S}{A} {A}mes {R}esearch {C}enter", year = 1993, number = "NASA Technical Memorandum 103863"}

@techreport{Bailey95, author = "{D. H. Bailey et al.}", title = "{T}he {N}{A}{S} {P}arallel {B}enchmarks 2.0", institution = "{N}{A}{S}{A} {A}mes {R}esearch {C}enter", month = dec, year = 1995, number = "NAS-95-020"}

@techreport{Downey96, author = "A. B. Downey", title = "{A} {P}arallel {W}orkload {M}odel and {I}ts {I}mplications for {P}rocessor {A}llocation", institution = "{C}omputer {S}cience {D}ivision, {U}niversity of {C}alifornia, {B}erkeley", month = nov, year = 1996, number = "CSD-96-922"}

@misc{Bailey99, author = "{D. H. Bailey et al.}", title = "{V}aluation of {U}ltra-{S}cale {C}omputing {S}ystems: {A} {W}hite {P}aper", institution = "{U}. {S}. {D}epartment of {E}nergy", month = dec, year = 1999}

@techreport{Saini96, author = "S. Saini and D. H.
Bailey", title = "{N}{A}{S} {P}arallel {B}enchmark ({V}ersion 1.0) {R}esults 11-96", institution = "{N}{A}{S}{A} {A}mes {R}esearch {C}enter", month = nov, year = 1996, number = "NAS-96-18"}

@article{Boden95, author = "{N. J. Boden et al.}", title = "{M}yrinet: {A} {G}igabit-per-second {L}ocal {A}rea {N}etwork", journal = micro, year = 1995, volume = "15(1)", month = feb, pages = "29--36"}

@conference{Arpaci95, author = "R. H. Arpaci and A. C. Dusseau and A. Vahdat and L. T. Liu and T. E. Anderson and D. A. Patterson", title = "{T}he {I}nteraction of {P}arallel and {S}equential {W}orkloads on a {N}etwork of {W}orkstations", booktitle = "{P}roc. {A}{C}{M} {S}{I}{G}{M}{E}{T}{R}{I}{C}{S} 1995 {C}onf. on {M}easurement and {M}odeling of {C}omputer {S}ystems", month = may, year = 1995, pages = "267--278"}

@conference{Dusseau98, author = "A. C. Arpaci-Dusseau and D. E. Culler and A. M. Mainwaring", title = "{S}cheduling with {I}mplicit {I}nformation in {D}istributed {S}ystems", booktitle = "{P}roc. {A}{C}{M} {S}{I}{G}{M}{E}{T}{R}{I}{C}{S} 1998 {C}onf. on {M}easurement and {M}odeling of {C}omputer {S}ystems", year = 1998}

@conference{Eicken92, author = "{T. {von} Eicken and D. E. Culler and S. C. Goldstein and K. E. Schauser}", title = "{A}ctive {M}essages: {A} {M}echanism for {I}ntegrated {C}ommunication and {C}omputation", booktitle = "{P}roc. 19th {A}nnual {I}nt'l {S}ymp. on {C}omputer {A}rchitecture", month = may, year = 1992}

@conference{Eicken95, author = "{T. {von} Eicken and A. Basu and V. Buch and W. Vogels}", title = "{U}-{N}et: {A} {U}ser-{L}evel {N}etwork {I}nterface for {P}arallel and {D}istributed {C}omputing", booktitle = "{P}roc. 15th {A}{C}{M} {S}ymp. on {O}perating {S}ystem {P}rinciples", month = dec, year = 1995}

@conference{Haring95, author = "G. Haring and G. Kotsis", title = "{W}orkload {M}odeling for {P}arallel {P}rocessing {S}ystems", booktitle = "Proc.
{I}nternational {S}ymposium on {M}odeling, {A}nalysis and {S}imulation of {C}omputer and {T}elecommunication {S}ystems ({M}{A}{S}{C}{O}{T}{S})", year = 1995, pages = "8--12"}

@conference{Subhlok96, author = "J. Subhlok and T. Gross and T. Suzuoka", title = "{I}mpact of {J}ob {M}ix {O}ptimizations for {S}pace {S}haring {S}cheduling", booktitle = "Proc. {S}upercomputing 96", year = 1996, month = nov}

@conference{Feitelson96, author = "D. Feitelson", title = "{P}acking {S}chemes for {G}ang {S}cheduling", booktitle = "Proc. {I}{P}{P}{S}'96 {W}orkshop on {J}ob {S}cheduling {S}trategies for {P}arallel {P}rocessing", year = 1996, month = apr, pages = "89--110"}

@conference{Windisch96, author = "K. Windisch and V. Lo and D. Feitelson and B. Nitzberg and R. Moore", title = "{A} {C}omparison of {W}orkload {T}races from {T}wo {P}roduction {P}arallel {M}achines", booktitle = "Proc. {S}ixth {S}ymposium on the {F}rontiers of {M}assively {P}arallel {C}omputing", year = 1996, month = oct, pages = "319--326"}

@conference{Feitelson97_Memory, author = "D. G. Feitelson", title = "{M}emory {U}sage in the {L}{A}{N}{L} {C}{M}-5 {W}orkload", booktitle = "Proc. {I}{P}{P}{S}'97 {W}orkshop on {J}ob {S}cheduling {S}trategies for {P}arallel {P}rocessing", year = 1997, pages = "78--94"}

@conference{Lo98, author = "V. Lo and J. Mache and K. Windisch", title = "{A} {C}omparative {S}tudy of {R}eal {W}orkload {T}races and {S}ynthetic {W}orkload {M}odels for {P}arallel {J}ob {S}cheduling", booktitle = "Proc. {I}{P}{P}{S}'98 {W}orkshop on {J}ob {S}cheduling {S}trategies for {P}arallel {P}rocessing", year = 1998, month = mar, pages = "1--16"}

@article{Downey99, author = "A. B. Downey and D. G. Feitelson", title = "{T}he {E}lusive {G}oal of {W}orkload {C}haracterization", journal = "{P}erformance {E}valuation {R}eview", year = 1999, month = mar, pages = "14--29"}

@article{Gropp96, author = "W. Gropp and E.
Lusk", title = "{A} {H}igh-{P}erformance, {P}ortable {I}mplementation of the {M}{P}{I} {M}essage {P}assing {I}nterface {S}tandard", journal = "{P}arallel {C}omputing", volume = "22(6)", year = 1996, month = sep, pages = "789--828"}

@conference{Parsons95, author = "E. W. Parsons and K. C. Sevcik", title = "{M}ultiprocessor {S}cheduling for {H}igh-{V}ariability {S}ervice {T}ime {D}istributions", booktitle = "Proc. {I}{P}{P}{S}'95 {W}orkshop on {J}ob {S}cheduling {S}trategies for {P}arallel {P}rocessing", year = 1995, month = apr, pages = "76--88"}

@conference{Sobalvarro95, author = "P. G. Sobalvarro and W. E. Weihl", title = "{D}emand-based {C}oscheduling of {P}arallel {J}obs on {M}ultiprogrammed {M}ultiprocessors", booktitle = "Proc. {I}{P}{P}{S}'95 {W}orkshop on {J}ob {S}cheduling {S}trategies for {P}arallel {P}rocessing", year = 1995, month = apr, pages = "63--75"}

@phdthesis{Sobalvarro97, author = "P. G. Sobalvarro", title = "{D}emand-based {C}oscheduling of {P}arallel {J}obs on {M}ultiprogrammed {M}ultiprocessors", school = "{D}ept.
of {E}lectrical {E}ngineering and {C}omputer {S}cience, {M}assachusetts {I}nstitute of {T}echnology", year = 1997}

@misc{GPLWeb, author = "{GNU General Public License}", note = "{\tt http://www.gnu.org/licenses/gpl.html}"}

@misc{PCRWeb, author = "{Parallel Capacity Resource}", note = "{\tt http://www.llnl.gov/linux/pcr}"}

@misc{MCRWeb, author = "{Multiprogrammatic Capability Cluster}", note = "{\tt http://www.llnl.gov/linux/mcr}"}

@misc{LokiWeb, author = "{Loki -- Commodity Parallel Processing}", note = "{\tt http://loki-www.lanl.org}"}

@misc{LAM, author = "{Local Area Multicomputer}", note = "{\tt http://www.lam-mpi.org}"}

@misc{BProc, author = "{Beowulf Distributed Process Space}", note = "{\tt http://bproc.sourceforge.net}"}

@misc{PAGG, author = "{Linux PAGG Process Aggregates}", note = "{\tt http://oss.sgi.com/projects/pagg}"}

@misc{Maui, author = "{Maui Scheduler}", note = "{\tt http://supercluster.org/maui}"}

@misc{DPCS, author = "{Distributed Production Control System}", note = "{\tt http://www.llnl.gov/icc/lc/dpcs\_overview.html}"}

@misc{ASCIprojectatLLNL, author = "{ASCI Project}", note = "{\tt http://www.llnl.gov/asci}"}

@misc{ASCIprojectatLANL, author = "{ASCI Project}", note = "{\tt http://www.lanl.gov/asci}"}

@misc{ASCIprojectatSNL, author = "{ASCI Project}", note = "{\tt http://www.lanl.gov/ASCI/TFLOP/Home\_page.html}"}

@misc{ASCI_BlueMountain, author = "{ASCI Blue Mountain}", note = "{\tt http://www.lanl.gov/asci/bluemtn/bluemtn.html}"}

@misc{ASCI_BluePacific, author = "{ASCI Blue Pacific}", note = "{\tt http://www.llnl.gov/platforms/bluepac}"}

@misc{ASCI_Red, author = "{ASCI Red}", note = "{\tt http://www.sandia.gov/ASCI/Red}"}

@misc{ClassScheduler, author = "{Class Scheduler}", note = "{\tt http://www.unix.digital.com/faqs/publications/base\_doc}"}
%note = "{\tt http://www.unix.digital.com/faqs/publications/base\_doc/DOCUMENTATION/V50\_HTML/MAN/MAN4/0102\_\_\_\_.HTM}"}
@misc{PBS, author = "{Portable Batch System}", note = "{\tt http://www.openpbs.org}"}

@conference{Jann97, author = "J. Jann and P. Pattnaik and H. Franke and F. Wang and J. Skovira and J. Riordan", title = "{M}odeling of {W}orkload in {M}{P}{P}s", booktitle = "{I}{P}{P}{S}'97 {W}orkshop on {J}ob {S}cheduling {S}trategies for {P}arallel {P}rocessing, {V}ol. 1291 of {L}ecture {N}otes in {C}omputer {S}cience", year = 1997, month = apr, pages = "95--116", publisher = "{S}pringer-{V}erlag"}

@conference{Feitelson97, author = "D. G. Feitelson and M. Jette", title = "{I}mproved {U}tilization and {R}esponsiveness with {G}ang {S}cheduling", booktitle = "{I}{P}{P}{S}'97 {W}orkshop on {J}ob {S}cheduling {S}trategies for {P}arallel {P}rocessing, {V}ol. 1291 of {L}ecture {N}otes in {C}omputer {S}cience", year = 1997, month = apr, pages = "238--261", publisher = "{S}pringer-{V}erlag"}

@conference{Franke96, author = "H. Franke and P. Pattnaik and L. Rudolph", title = "{G}ang {S}cheduling for {H}ighly {E}fficient {M}ultiprocessors", booktitle = "{P}roc. {S}ixth {S}ymp. on the {F}rontiers of {M}assively {P}arallel {P}rocessing", year = 1996, month = oct}

@conference{Hotovy96, author = "S. Hotovy", title = "{W}orkload {E}valuation on the {C}ornell {T}heory {C}enter {I}{B}{M} {S}{P}2", booktitle = "{I}{P}{P}{S}'96 {W}orkshop on {J}ob {S}cheduling {S}trategies for {P}arallel {P}rocessing, {V}ol. 1162 of {L}ecture {N}otes in {C}omputer {S}cience", year = 1996, month = apr, publisher = "{S}pringer-{V}erlag"}

@conference{Feitelson96_Job, author = "D. G. Feitelson and B. Nitzberg", title = "{J}ob {C}haracteristics of a {P}roduction {P}arallel {S}cientific {W}orkload on the {N}{A}{S}{A} {A}mes i{P}{S}{C}/860", booktitle = "{I}{P}{P}{S}'96 {W}orkshop on {J}ob {S}cheduling {S}trategies for {P}arallel {P}rocessing, {V}ol. 1162 of {L}ecture {N}otes in {C}omputer {S}cience", year = 1996, month = apr, pages = "337--360", publisher = "{S}pringer-{V}erlag"}

@conference{EasyLL, author = "J.
Skovira and W. Chan and H. Zhou and D. Lifka", title = "{T}he {E}asy-{L}oad{L}eveler {A}{P}{I} {P}roject", booktitle = "{I}{P}{P}{S}'96 {W}orkshop on {J}ob {S}cheduling {S}trategies for {P}arallel {P}rocessing, {V}ol. 1162 of {L}ecture {N}otes in {C}omputer {S}cience", year = 1996, month = apr, pages = "41--47", publisher = "{S}pringer-{V}erlag"}

@conference{Jette96, author = "M. Jette and D. Storch and E. Yim", title = "{T}imesharing the {C}ray {T}3{D}", booktitle = "{C}ray {U}ser {G}roup", year = 1996, pages = "247--252", month = mar}

@conference{Jette97, author = "M. Jette", title = "{P}erformance {C}haracteristics of {G}ang {S}cheduling in {M}ultiprogrammed {E}nvironments", booktitle = "{P}roc. {S}uper{C}omputing97", year = 1997, month = nov}

@conference{Jette98, author = "M. Jette", title = "{E}xpanding {S}ymmetric {M}ultiprocessor {C}apability {T}hrough {G}ang {S}cheduling", booktitle = "{I}{P}{P}{S}'98 {W}orkshop on {J}ob {S}cheduling {S}trategies for {P}arallel {P}rocessing", year = 1998, month = mar}

@conference{Litzkow88, author = "M. Litzkow and M. Livny and M. Mutka", title = "Condor - A Hunter of Idle Workstations", booktitle = "Proc.
International Conference on Distributed Computing Systems", year = 1988, pages = "104--111", month = jun}

@misc{BGL, author = "{Blue Gene/L}", note = "{\tt http://cmg-rr.llnl.gov/asci/platforms/bluegenel}"}

@misc{RMS, author = "{Quadrics Resource Management System}", note = "{\tt http://www.quadrics.com/website/pdf/rms.pdf}"}

@misc{Condor, author = "{Condor}", note = "{\tt http://www.cs.wisc.edu/condor}"}

@misc{LSF, author = "{Load Sharing Facility}", note = "{\tt http://www.platform.com}"}

@misc{LoadLevelerWeb, author = "{Load Leveler}", note = "{\tt http://www-1.ibm.com/servers/eservers/pseries/library/sp\_books/loadleveler.html}"}

@manual{LoadLevelerManual, title = "{L}oad{L}eveler's {U}ser {G}uide, {R}elease 2.1", organization = "{I}{B}{M} {C}orporation"}

@misc{SupercomCenters, author = "{Top 500 Supercomputer Sites}", note = "{\tt http://www.netlib.org/benchmark/top500.html}"}

@misc{MPIForumWebpage, author = "{The MPI Forum}", year = 1995, month = may, note = "{\tt http://www.mcs.anl.gov/mpi/standard.html}"}

@article{Basney97, author = "J. Basney and M. Livny and T. Tannenbaum", title = "{High Throughput Computing with Condor}", journal = "HPCU news", month = jun, year = 1997, volume = "1(2)"}

@article{Calzarossa93, author = "M. Calzarossa and G. Serazzi", title = "{W}orkload {C}haracterization: {A} {S}urvey", journal = "{P}roceedings of the {I}{E}{E}{E}", month = aug, year = 1993, volume = "81(8)"}

@article{MPIForum, author = "{Message Passing Interface Forum}", title = "{M}{P}{I}: {A} {M}essage-{P}assing {I}nterface {S}tandard", journal = superapplication, year = 1994, volume = "8(3/4)", pages = "165--414"}

@conference{Moreira99, author = "{J. E. Moreira et al.}", title = "{A} {G}ang-{S}cheduling {S}ystem for {A}{S}{C}{I} {B}lue-{P}acific", booktitle = "{P}roc. {D}istributed {C}omputing and {M}etacomputing ({D}{C}{M}) {W}orkshop, {H}igh-{P}erformance {C}omputing and {N}etworking '99", year = 1999, month = apr}

@conference{Nagar99, author = "S. Nagar and A.
Banerjee and A. Sivasubramaniam and C. R. Das", title = "{A} {C}loser {L}ook {A}t {C}oscheduling {A}pproaches for a {N}etwork of {W}orkstations", booktitle = "{P}roc. 11th {A}{C}{M} {S}ymp. of {P}arallel {A}lgorithms and {A}rchitectures", month = jun, year = 1999}

@conference{Ousterhaut82, author = "J. K. Ousterhout", title = "{S}cheduling {T}echnique for {C}oncurrent {S}ystems", booktitle = icdcs, year = 1982, pages = "22--30"}

@conference{Pakin95, author = "S. Pakin and M. Lauria and A. Chien", title = "{H}igh {P}erformance {M}essaging on {W}orkstations: {I}llinois {F}ast {M}essages ({F}{M})", booktitle = "{P}roc. {S}upercomputing '95", month = dec, year = 1995}

@book{CAbook, author = {J. Hennessy and D. Patterson}, title = {Computer Architecture: A Quantitative Approach, Second Edition}, publisher = {Morgan Kaufmann Publishers, Inc.}, pages = "669", year = 1996}

@book{Gridbook, author = {I. Foster and C. Kesselman}, title = {The GRID: Blueprint for a New Computing Infrastructure}, publisher = {Morgan Kaufmann Publishers, Inc.}, year = 1999}

@conference{STORM01, author = "Eitan Frachtenberg and Fabrizio Petrini and others", title = "STORM: Lightning-Fast Resource Management", booktitle = "Proceedings of SuperComputing", year = 2002}

@misc{Authd02, author = "Authd home page", title = "http://www.theether.org/authd/"}

@misc{Quadrics02, author = "Quadrics Resource Management System", title = "http://www.quadrics.com/"}

% doc/jsspp/services.tex
\section{SLURM Operation and Services}

\subsection{Command Line Utilities}

The command line utilities are the user interface to SLURM functionality. They offer users access to remote execution and job control. They also permit administrators to dynamically change the system configuration. These commands all use SLURM APIs, which are directly available for more sophisticated applications.
\begin{itemize}
\item {\tt scancel}: Cancel a running or a pending job or job step, subject to authentication and authorization. This command can also be used to send an arbitrary signal to all processes on all nodes associated with a job or job step.

\item {\tt scontrol}: Perform privileged administrative commands such as draining a node or partition in preparation for maintenance. Many \scontrol\ functions can only be executed by privileged users.

\item {\tt sinfo}: Display a summary of partition and node information. An assortment of filtering and output format options are available.

\item {\tt squeue}: Display the queue of running and waiting jobs and/or job steps. A wide assortment of filtering, sorting, and output format options are available.

\item {\tt srun}: Allocate resources, submit jobs to the SLURM queue, and initiate parallel tasks (job steps). Every set of executing parallel tasks has an associated \srun\ that initiated it and, if the \srun\ persists, manages it. Jobs may be submitted for batch execution, in which case \srun\ terminates after job submission. Jobs may also be submitted for interactive execution, where \srun\ keeps running to shepherd the running job. In this case, \srun\ negotiates connections with remote {\tt slurmd}'s for job initiation and to get stdout and stderr, forward stdin, and respond to signals from the user. The \srun\ may also be instructed to allocate a set of resources and spawn a shell with access to those resources. \srun\ has a total of 13 parameters to control where and when the job is initiated.
\end{itemize}

\subsection{Plugins}

In order to make the use of different infrastructures possible, SLURM uses a general purpose plugin mechanism. A SLURM plugin is a dynamically linked code object which is loaded explicitly at run time by the SLURM libraries. A plugin provides a customized implementation of a well-defined API connected to tasks such as authentication, the interconnect fabric, and task scheduling.
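The plugin dispatch described above can be illustrated schematically: every plugin of a given variety implements the same interface, and the configuration file names which implementation to load. This is a minimal Python sketch, not SLURM's C implementation; a real SLURM plugin is a shared object resolved with {\tt dlopen}, and the trivial "none" plugin here is an assumption for illustration.

```python
class AuthPlugin:
    """Common interface every authentication plugin of this variety
    must implement (operation names are illustrative)."""
    def create(self, uid):   raise NotImplementedError  # create a credential
    def verify(self, cred):  raise NotImplementedError  # approve or deny it
    def get_uid(self, cred): raise NotImplementedError  # uid bound to it

class AuthNone(AuthPlugin):
    """Trivial 'none' plugin: the credential is just the claimed uid."""
    def create(self, uid):   return {"uid": uid}
    def verify(self, cred):  return True
    def get_uid(self, cred): return cred["uid"]

# A real system would dlopen() shared objects from PluginDir;
# here the registry is a plain dictionary keyed by plugin type.
PLUGINS = {"auth/none": AuthNone}

def load_plugin(config):
    """Select the plugin named by the configuration file."""
    return PLUGINS[config["AuthType"]]()

auth = load_plugin({"AuthType": "auth/none"})
cred = auth.create(1000)
assert auth.verify(cred) and auth.get_uid(cred) == 1000
```

The calling code depends only on the common interface, so switching authentication infrastructures is a one-line configuration change.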
A common set of functions is defined for use by all of the different infrastructures of a particular variety. For example, the authentication plugin must define functions such as: {\tt slurm\_auth\_activate} to create a credential, {\tt slurm\_auth\_verify} to verify a credential to approve or deny authentication, {\tt slurm\_auth\_get\_uid} to get the user ID associated with a specific credential, etc. It also must define the data structure used, a plugin type, and a plugin version number. The available plugins are defined in the configuration file.
%When a slurm daemon is initiated, it reads the configuration
%file to determine which of the available plugins should be used.
%For example {\em AuthType=auth/authd} says to use the plugin for
%authd based authentication and {\em PluginDir=/usr/local/lib}
%identifies the directory in which to find the plugin.

\subsection{Communications Layer}

SLURM presently uses Berkeley sockets for communications. However, we anticipate using the plugin mechanism to easily permit use of other communications layers. At LLNL we are using Ethernet for SLURM communications and the Quadrics Elan switch exclusively for user applications. The SLURM configuration file permits the identification of each node's hostname as well as its name to be used for communications.
%In the case of a control machine known as {\em mcri} to be
%communicated with using the name {\em emcri} (say to indicate
%an ethernet communications path), this is represented in the
%configuration file as {\em ControlMachine=mcri ControlAddr=emcri}.
%The name used for communication is the same as the hostname unless
%otherwise specified.

While SLURM is able to manage 1000 nodes without difficulty using sockets and Ethernet, we are reviewing other communication mechanisms which may offer improved scalability. One possible alternative is STORM\cite{STORM01}.
STORM uses the cluster interconnect and Network Interface Cards to provide high-speed communications, including a broadcast capability. STORM only supports the Quadrics Elan interconnect at present, but does offer the promise of improved performance and scalability.

\subsection{Security}

SLURM has a simple security model: Any user of the cluster may submit parallel jobs to execute and cancel his own jobs. Any user may view SLURM configuration and state information. Only privileged users may modify the SLURM configuration, cancel any jobs, or perform other restricted activities. Privileged users in SLURM include the users {\em root} and {\tt SlurmUser} (as defined in the SLURM configuration file). If permission to modify SLURM configuration is required by others, set-uid programs may be used to grant specific permissions to specific users.

We presently support three authentication mechanisms via plugins: {\tt authd}\cite{Authd02}, {\tt munged} and {\tt none}. A plugin can easily be developed for Kerberos or other authentication mechanisms as desired. The \munged\ implementation is described below. A \munged\ daemon running as user {\em root} on each node confirms the identity of the user making the request using the {\tt getpeername} function and generates a credential. The credential contains a user ID, group ID, time-stamp, lifetime, some pseudo-random information, and any user supplied information. The \munged\ uses a private key to generate a Message Authentication Code (MAC) for the credential. The \munged\ then uses a public key to symmetrically encrypt the credential including the MAC. SLURM daemons and programs transmit this encrypted credential with communications. The SLURM daemon receiving the message sends the credential to \munged\ on that node. The \munged\ decrypts the credential using its private key, validates it and returns the user ID and group ID of the user originating the credential.
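The credential life cycle just described can be sketched with a MAC alone. This is an illustrative sketch under simplifying assumptions: it uses an HMAC with a shared key and omits the encryption step and the daemon communication; the field layout and function names are not munged's actual ones.

```python
import hashlib
import hmac
import json
import time

KEY = b"demo-private-key"   # stands in for the daemon's private key
_seen = set()               # replay cache (per-node in the real design)

def create_credential(uid, gid, payload=b"", lifetime=300):
    # Credential body: user ID, group ID, time-stamp, lifetime,
    # and any user supplied information (e.g. node identification).
    body = {"uid": uid, "gid": gid, "t": time.time(),
            "life": lifetime, "data": payload.hex()}
    raw = json.dumps(body, sort_keys=True).encode()
    mac = hmac.new(KEY, raw, hashlib.sha256).hexdigest()
    return raw, mac          # the real daemon also encrypts this pair

def verify_credential(raw, mac):
    # Check the MAC, reject replays, and enforce the lifetime.
    if not hmac.compare_digest(mac, hmac.new(KEY, raw, hashlib.sha256).hexdigest()):
        raise ValueError("bad MAC")
    if mac in _seen:
        raise ValueError("replayed credential")
    _seen.add(mac)
    body = json.loads(raw)
    if time.time() > body["t"] + body["life"]:
        raise ValueError("credential expired")
    return body["uid"], body["gid"]

raw, mac = create_credential(1000, 100, b"node=dev3")
assert verify_credential(raw, mac) == (1000, 100)
```

The replay cache mirrors the daemon's recording of already-authenticated credentials, and the embedded payload mirrors SLURM's use of node identification to pin a credential to its destination.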
The \munged\ prevents replay of a credential on any single node by recording credentials that have already been authenticated. In SLURM's case, the user supplied information includes node identification information to prevent a credential from being used on nodes it is not destined for.

When resources are allocated to a user by the controller, a {\em job step credential} is generated by combining the user ID, job ID, step ID, the list of resources allocated (nodes), and the credential lifetime. This job step credential is encrypted with a \slurmctld\ private key. This credential is returned to the requesting agent ({\tt srun}) along with the allocation response, and must be forwarded to the remote {\tt slurmd}'s upon job step initiation. \slurmd\ decrypts this credential with the \slurmctld 's public key to verify that the user may access resources on the local node. \slurmd\ also uses this job step credential to authenticate standard input, output, and error communication streams.

%Access to partitions may be restricted via a {\em RootOnly} flag.
%If this flag is set, job submit or allocation requests to this
%partition are only accepted if the effective user ID originating
%the request is a privileged user.
%The request from such a user may submit a job as any other user.
%This may be used, for example, to provide specific external schedulers
%with exclusive access to partitions. Individual users will not be
%permitted to directly submit jobs to such a partition, which would
%prevent the external scheduler from effectively managing it.
%Access to partitions may also be restricted to users who are
%members of specific Unix groups using a {\em AllowGroups} specification.

\subsection{Job Initiation}

There are three modes in which jobs may be run by users under SLURM.
The first and simplest is {\em interactive} mode, in which stdout and stderr are displayed on the user's terminal in real time, and stdin and signals may be forwarded from the terminal transparently to the remote tasks. The second is {\em batch} mode, in which the job is queued until the request for resources can be satisfied, at which time the job is run by SLURM as the submitting user. In {\em allocate} mode, a job is allocated to the requesting user, under which the user may manually run job steps via a script or in a sub-shell spawned by \srun . \begin{figure}[tb] \centerline{\epsfig{file=../figures/connections.eps,scale=0.5}} \caption{\small Job initiation connections overview. 1. The \srun\ connects to \slurmctld\ requesting resources. 2. \slurmctld\ issues a response, with list of nodes and job credential. 3. The \srun\ opens a listen port for every task in the job step, then sends a run job step request to \slurmd . 4. \slurmd 's initiate job step and connect back to \srun\ for stdout/err. } \label{connections} \end{figure} Figure~\ref{connections} gives a high-level depiction of the connections that occur between SLURM components during a general interactive job startup. The \srun\ requests a resource allocation and job step initiation from the {\tt slurmctld}, which responds with the job ID, list of allocated nodes, and job credential if the request is granted. The \srun\ then initializes listen ports for each task and sends a message to the {\tt slurmd}'s on the allocated nodes requesting that the remote processes be initiated. The {\tt slurmd}'s begin execution of the tasks and connect back to \srun\ for stdout and stderr. This process and the other initiation modes are described in more detail below. \subsubsection{Interactive mode initiation} \begin{figure}[tb] \centerline{\epsfig{file=../figures/interactive-job-init.eps,scale=0.5} } \caption{\small Interactive job initiation.
\srun\ simultaneously allocates nodes and a job step from \slurmctld\ then sends a run request to all \slurmd 's in the job. Dashed arrows indicate a periodic request that may or may not occur during the lifetime of the job.} \label{init-interactive} \end{figure} Interactive job initiation is illustrated in Figure~\ref{init-interactive}. The process begins with a user invoking \srun\ in interactive mode. In Figure~\ref{init-interactive}, the user has requested an interactive run of the executable ``{\tt cmd}'' in the default partition. After processing command line options, \srun\ sends a message to \slurmctld\ requesting a resource allocation and a job step initiation. This message simultaneously requests an allocation (or job) and a job step. The \srun\ waits for a reply from {\tt slurmctld}, which may not come instantly if the user has requested that \srun\ block until resources are available. When resources are available for the user's job, \slurmctld\ replies with a job step credential, list of nodes that were allocated, cpus per node, and so on. The \srun\ then sends a message to each \slurmd\ on the allocated nodes requesting that a job step be initiated. The \slurmd 's verify that the job is valid using the forwarded job step credential and then respond to \srun . Each \slurmd\ invokes a job thread to handle the request, which in turn invokes a task thread for each requested task. The task thread connects back to a port opened by \srun\ for stdout and stderr. The host and port for this connection are contained in the run request message sent to this machine by \srun . Once stdout and stderr have successfully been connected, the task thread takes the necessary steps to initiate the user's executable on the node, initializing environment, current working directory, and interconnect resources if needed. Once the user process exits, the task thread records the exit status and sends a task exit message back to \srun .
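The connect-back pattern for standard output can be illustrated with plain sockets. This is a toy, single-task sketch of the idea only: one side plays \srun\ (open a listen port on an ephemeral port), the other plays the task thread (connect to the host and port carried in the run request and stream output); the addresses and framing are invented for illustration.

```python
import socket
import threading

def srun_listener(results):
    # srun's side: open a listen port for one task's stdout.
    srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    srv.bind(("127.0.0.1", 0))       # ephemeral port, like srun's per-task port
    srv.listen(1)
    port = srv.getsockname()[1]

    def accept_and_read():
        conn, _ = srv.accept()       # the task thread connects back here
        chunks = []
        while True:
            data = conn.recv(4096)
            if not data:
                break
            chunks.append(data)
        conn.close()
        srv.close()
        results.append(b"".join(chunks))

    t = threading.Thread(target=accept_and_read)
    t.start()
    return port, t

def task_thread(port, output):
    # Task's side: connect to the host/port from the run request
    # and forward the process's output over the stream.
    s = socket.create_connection(("127.0.0.1", port))
    s.sendall(output)
    s.close()

results = []
port, t = srun_listener(results)
task_thread(port, b"hello from task 0\n")
t.join()
```

In the real system \srun\ opens one such port per task and the stream carries stdout and stderr until the task exits.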
When all local processes terminate, the job thread exits. The \srun\ process either waits for all tasks to exit, or attempts to clean up the remaining processes some time after the first task exits. Regardless, once all tasks are finished, \srun\ sends a message to the \slurmctld\ releasing the allocated nodes, then exits with an appropriate exit status. When the \slurmctld\ receives notification that \srun\ no longer needs the allocated nodes, it issues a request for the epilog to be run on each of the \slurmd 's in the allocation. As \slurmd 's report that the epilog ran successfully, the nodes are returned to the partition. \subsubsection{Batch mode initiation} \begin{figure}[tb] \centerline{\epsfig{file=../figures/queued-job-init.eps,scale=0.5} } \caption{\small Queued job initiation. \slurmctld\ initiates the user's job as a batch script on one node. The batch script contains an srun call which initiates parallel tasks after instantiating a job step with the controller. The shaded region is a compressed representation and is illustrated in more detail in the interactive diagram (Figure~\ref{init-interactive}).} \label{init-batch} \end{figure} Figure~\ref{init-batch} illustrates the initiation of a batch job in SLURM. Once a batch job is submitted, \srun\ sends a batch job request to \slurmctld\ that contains the input/output location for the job, current working directory, environment, and requested number of nodes. The \slurmctld\ queues the request in its priority ordered queue. Once the resources are available and the job has a high enough priority, \slurmctld\ allocates the resources to the job and contacts the first node of the allocation requesting that the user job be started. In this case, the job may either be another invocation of \srun\ or a {\em job script} which may have multiple invocations of \srun\ within it. The \slurmd\ on the remote node responds to the run request, initiating the job thread, task thread, and user script.
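The controller's queueing behavior described above, jobs held in a priority-ordered queue and started when resources free up, can be sketched with a heap. The resource model here is a deliberately crude free-node count (no backfill, no per-node state), and all names are hypothetical rather than slurmctld's actual data structures.

```python
import heapq
import itertools

class BatchQueue:
    """Toy priority-ordered batch queue in the spirit of slurmctld's."""

    def __init__(self, total_nodes):
        self.free_nodes = total_nodes
        self.heap = []                      # entries: (-priority, seq, job)
        self.seq = itertools.count()        # tie-break: FIFO within a priority

    def submit(self, job_id, nodes, priority):
        heapq.heappush(self.heap, (-priority, next(self.seq),
                                   {"id": job_id, "nodes": nodes}))

    def schedule(self):
        # Start the highest-priority jobs whose node request currently fits.
        started, deferred = [], []
        while self.heap:
            prio, seq, job = heapq.heappop(self.heap)
            if job["nodes"] <= self.free_nodes:
                self.free_nodes -= job["nodes"]
                started.append(job["id"])
            else:
                deferred.append((prio, seq, job))
        for item in deferred:               # requeue jobs that did not fit
            heapq.heappush(self.heap, item)
        return started

    def complete(self, nodes):
        # Epilog finished on the job's nodes: return them to the partition.
        self.free_nodes += nodes
```

For example, with four free nodes a priority-50 two-node job and a priority-20 one-node job start ahead of a priority-10 three-node job, which remains queued until nodes are returned.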
An \srun\ executed from within the script detects that it has access to an allocation and initiates a job step on some or all of the nodes within the job. Once the job step is complete, the \srun\ in the job script notifies the \slurmctld\ and terminates. The job script continues executing and may initiate further job steps. Once the job script completes, the task thread running the job script collects the exit status and sends a task exit message to the \slurmctld . The \slurmctld\ notes that the job is complete and requests that the job epilog be run on all nodes that were allocated. As the \slurmd 's respond with successful completion of the epilog, the nodes are returned to the partition. \subsubsection{Allocate mode initiation} \begin{figure}[tb] \centerline{\epsfig{file=../figures/allocate-init.eps,scale=0.5} } \caption{\small Job initiation in allocate mode. Resources are allocated and \srun\ spawns a shell with access to the resources. When the user runs an \srun\ from within the shell, a job step is initiated under the allocation.} \label{init-allocate} \end{figure} In allocate mode, the user wishes to allocate a job and interactively run job steps under that allocation. The process of initiation in this mode is illustrated in Figure~\ref{init-allocate}. The invoked \srun\ sends an allocate request to \slurmctld , which, if resources are available, responds with a list of nodes allocated, job id, etc. The \srun\ process spawns a shell on the user's terminal with access to the allocation, then waits for the shell to exit, at which time the job is considered complete. An \srun\ initiated within the allocate sub-shell recognizes that it is running under an allocation and therefore already within a job. Provided with no other arguments, \srun\ started in this manner initiates a job step on all nodes within the current job. However, the user may select a subset of these nodes implicitly.
An \srun\ executed from the sub-shell reads the environment and user options, then notifies the controller that it is starting a job step under the current job. The \slurmctld\ registers the job step and responds with a job credential. The \srun\ then initiates the job step using the same general method as described in the section on interactive job initiation. When the user exits the allocate sub-shell, the original \srun\ receives exit status, notifies \slurmctld\ that the job is complete, and exits. The controller runs the epilog on each of the allocated nodes, returning nodes to the partition as they complete the epilog. slurm-slurm-15-08-7-1/doc/jsspp/survey.tex000066400000000000000000000354111265000126300203140ustar00rootroot00000000000000\section{Related Work} \subsection*{Portable Batch System (PBS)} The Portable Batch System (PBS)~\cite{PBS} is a flexible batch queuing and workload management system originally developed by Veridian Systems for NASA. It operates on networked, multi-platform UNIX environments, including heterogeneous clusters of workstations, supercomputers, and massively parallel systems. PBS was developed as a replacement for NQS (Network Queuing System) by many of the same people. PBS supports sophisticated scheduling logic (via the Maui Scheduler). PBS spawns daemons on each machine to shepherd the job's tasks. It provides an interface for administrators to easily plug in their own scheduling modules. PBS can support long delays in file staging with retry. Host authentication is provided by checking port numbers (low port numbers are only accessible to user root). Credential service is used for user authentication. It has the job prolog and epilog feature. PBS supports a high-priority queue for smaller ``interactive'' jobs. A signal to the daemons causes the current log file to be closed, renamed with a time-stamp, and a new log file created. Although PBS is portable and has a broad user base, it has significant drawbacks.
PBS is single threaded and hence exhibits poor performance on large clusters. This is particularly problematic when a compute node in the system fails: PBS tries to contact the down node while other activities must wait. PBS also has a weak mechanism for starting and cleaning up parallel jobs. %Specific complaints about PBS from members of the OSCAR group (Jeremy Enos, %Jeff Squyres, Tim Mattson): %\begin{itemize} %\item Sensitivity to hostname configuration on the server; improper % configuration results in hard to diagnose failure modes. Once % configuration is correct, this issue disappears. %\item When a compute node in the system dies, everything slows down. % PBS is single-threaded and continues to try to contact down nodes, % while other activities like scheduling jobs, answering qsub/qstat % requests, etc., have to wait for a complete timeout cycle before being % processed. %\item Default scheduler is just FIFO, but Maui can be plugged in so this % is not a big issue. %\item Weak mechanism for starting/cleaning up parallel jobs (pbsdsh). % When a job is killed, pbsdsh kills the processes it started, but % if the process doesn't die on the first shot it may continue on. %\item PBS server continues to mark specific nodes offline, even though they % are healthy. Restarting the server fixes this. %\item Lingering jobs. Jobs assigned to nodes, and then bounced back to the % queue for any reason, maintain their assignment to those nodes, even % if another job had already started on them. This is a poor clean up % issue. %\item When the PBS server process is restarted, it puts running jobs at risk. %\item Poor diagnostic messages. This problem can be as serious as ANY other % problem. This problem makes small, simple problems turn into huge % turmoil occasionally. For example, the variety of symptoms that arise % from improper hostname configuration. All the symptoms that result are % very misleading to the real problem.
%\item Rumored to have problems when the number of jobs in the queues gets % large. %\item Scalability problems on large systems. %\item Non-portable to Windows %\item Source code is a mess and difficult for others (e.g. the open source % community) to improve/expand. %\item Licensing problems (see below). %\end{itemize} %The one strength mentioned is PBS's portability and broad user base. % %PBS is owned by Veridian and is released as three separate products with %different licenses: {\em PBS Pro} is a commercial product sold by Veridian; %{\em OpenPBS} is an pseudo open source version of PBS that requires %registration; and %{\em PBS} is a GPL-like, true open source version of PBS. % %Bug fixes go into PBS Pro. When a major revision of PBS Pro comes out, %the previous version of PBS Pro becomes OpenPBS, and the previous version %of OpenPBS becomes PBS. The delay getting bug fixes (some reported by the %open source community) into the true open source version of PBS is the source %of some frustration. \subsection*{Quadrics RMS} Quadrics RMS~\cite{Quadrics02} (Resource Management System) is for Unix systems having Quadrics Elan interconnects. RMS functionality and performance are excellent. Its major limitation is the requirement for a Quadrics interconnect. The proprietary code and cost may also pose difficulties under some circumstances. \subsection*{Maui Scheduler} Maui Scheduler~\cite{Maui} is an advanced reservation HPC batch scheduler for use with SP, O2K, and UNIX/Linux clusters. It is widely used to extend the functionality of PBS and LoadLeveler, which Maui requires to perform the parallel job initiation and management. \subsection*{Distributed Production Control System (DPCS)} The Distributed Production Control System (DPCS)~\cite{DPCS} is a scheduler developed at Lawrence Livermore National Laboratory (LLNL).
The DPCS provides basic data collection and reporting mechanisms for project-level, near real-time accounting and resource allocation to customers with established limits per customers' organization budgets. In addition, the DPCS evenly distributes workload across available computers and supports dynamic reconfiguration and graceful degradation of service to prevent overuse of a computer where not authorized. %DPCS is (or will soon be) open source, although its use is presently %confined to LLNL. The development of DPCS began in 1990 and it has %evolved into a highly scalable and fault-tolerant meta-scheduler %operating on top of LoadLeveler, RMS, and NQS. DPCS provides: %\begin{itemize} %\item Basic data collection and reporting mechanisms for project-level, % near real-time accounting. %\item Resource allocation to customers with established limits per % customers' organizational budgets. %\item Proactive delivery of services to organizations that are relatively % underserviced using a fair-share resource allocation scheme. %\item Automated, highly flexible system with feedback for proactive delivery % of resources. %\item Even distribution of the workload across available computers. %\item Flexible prioritization of production workload, including "run on demand." %\item Dynamic reconfiguration and re-tuning. %\item Graceful degradation in service to prevent overuse of a computer where % not authorized. %\end{itemize} DPCS supports only a limited number of computer systems: IBM RS/6000 and SP, Linux, Sun Solaris, and Compaq Alpha. Like the Maui Scheduler, DPCS requires an underlying infrastructure for parallel job initiation and management (LoadLeveler, NQS, RMS or SLURM). \subsection*{LoadLeveler} LoadLeveler~\cite{LoadLevelerManual,LoadLevelerWeb} is a proprietary batch system and parallel job manager by IBM. LoadLeveler supports few non-IBM systems.
Its native scheduling software is very primitive, and other software, such as the Maui Scheduler or DPCS, is required for reasonable performance. LoadLeveler has a simple and very flexible queue and job class structure that operates in ``matrix'' fashion. The biggest problem of LoadLeveler is its poor scalability. It typically requires 20 minutes to execute even a trivial 500-node, 8000-task job on the IBM SP computers at LLNL. %In addition, all jobs must be initiated through the LoadLeveler, and a special version of %MPI is requested to run a parallel job. %[So do RMS, SLURM, etc. for interconnect set-up - Moe]% % %Many configuration files exist with signals to %daemons used to update configuration (like LSF, good). All jobs must %be initiated through LoadLeveler (no real "interactive" jobs, just %high priority queue for smaller jobs). Job accounting is only available %on termination (very bad for long-running jobs). Good status %information on nodes and LoadLeveler daemons is available. LoadLeveler %allocates jobs either entire nodes or shared nodes ,depending upon configuration. % %A special version of MPI is required. LoadLeveler allocates %interconnect resources, spawns the user's processes, and manages the %job afterwards. Daemons also monitor the switch and node health using %a "heart-beat monitor." One fundamental problem is that when the %"Central Manager" restarts, it forgets about all nodes and jobs. They %appear in the database only after checking in via the heartbeat. It %needs to periodically write state to disk instead of doing %"cold-starts" after the daemon fails, which is rare. It has the job %prolog and epilog feature, which permits us to enable/disable logins %and remove stray processes. % %LoadLeveler evolved from Condor, or what was Condor a decade ago. %While I am less familiar with LSF and Condor than LoadLeveler, they %all appear very similar with LSF having the far more sophisticated %scheduler.
We should carefully review their data structures and %daemons before designing our own. % \subsection*{Load Sharing Facility (LSF)} LSF~\cite{LSF} is a proprietary batch system and parallel job manager by Platform Computing. Deployed on a wide variety of computer architectures, it has sophisticated scheduling software including fair-share, backfill, consumable resources, and job preemption, and a very flexible queue structure. It also provides good status information on nodes and LSF daemons. While LSF is quite powerful, it is not open-source and can be costly on larger clusters. %The LSF share many of its shortcomings with the LoadLeveler: job initiation only %through LSF, requirement of a spwcial MPI library, etc. %Limits are available on both a per process bs per-job %basis. Time limits include CPU time and wall-clock time. Many %configuration files with signals to daemons used to update %configuration (like LoadLeveler, good). All jobs must be initiated %through LSF to be accounted for and managed by LSF ("interactive" %jobs can be executed through a high priority queue for %smaller jobs). Job accounting only available in near real-time (important %for long-running jobs). Jobs initiated from same directory as %submitted from (not good for computer centers with diverse systems %under LSF control). Good status information on nodes and LSF daemons. %Allocates jobs either entire nodes or shared nodes depending upon %configuration. % %A special version of MPI is required. LSF allocates interconnect %resources, spawns the user's processes, and manages the job %afterwards. While I am less familiar with LSF than LoadLeveler, they %appear very similar with LSF having the far more sophisticated %scheduler. We should carefully review their data structures and %daemons before designing our own. \subsection*{Condor} Condor~\cite{Condor,Litzkow88,Basney97} is a batch system and parallel job manager developed by the University of Wisconsin.
Condor was the basis for IBM's LoadLeveler and both share very similar underlying infrastructure. Condor has a very sophisticated checkpoint/restart service that does not rely upon kernel changes, but rather upon a variety of library changes (which prevent it from being completely general). The Condor checkpoint/restart service has been integrated into LSF, Codine, and DPCS. Condor is designed to operate across a heterogeneous environment, mostly to harness the compute resources of workstations and PCs. It has an interesting ``advertising'' service. Servers advertise their available resources and consumers advertise their requirements for a broker to perform matches. The checkpoint mechanism is used to relocate work on demand (when the ``owner'' of a desktop machine wants to resume work). % %\subsection*{Linux PAGG Process Aggregates} % %PAGG~\cite{PAGG} %consists of modifications to the linux kernel that allows %developers to implement Process AGGregates as loadable kernel modules. %A process aggregate is defined as a collection of processes that are %all members of the same set. A set would be implemented as a container %for the member processes. For instance, process sessions and groups %could have been implemented as process aggregates. % \subsection*{Beowulf Distributed Process Space (BPROC)} The Beowulf Distributed Process Space (BPROC) is a set of kernel modifications, utilities and libraries which allow a user to start processes on other machines in a Beowulf-style cluster~\cite{BProc}. Remote processes started with this mechanism appear in the process table of the front end machine in a cluster. This allows remote process management using the normal UNIX process control facilities. Signals are transparently forwarded to remote processes and exit status is received using the usual wait() mechanisms. This tight coupling of a cluster's nodes is convenient, but high scalability can be difficult to achieve.
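As a point of reference, the ordinary single-machine process-control semantics that BPROC extends across a cluster look like this locally. This is only a POSIX fork/wait illustration of the wait() mechanism mentioned above, not BPROC code; the exit status 7 is an arbitrary example value.

```python
import os

# Locally, a parent collects a child's exit status with wait();
# BPROC makes these same calls work for processes running on
# remote nodes, which appear in the front-end's process table.
pid = os.fork()
if pid == 0:
    # Child: stands in for a remote task exiting with status 7.
    os._exit(7)
else:
    # Parent: reap the child and decode its exit status.
    _, status = os.waitpid(pid, 0)
    exit_code = os.WEXITSTATUS(status)
```

Signals sent with os.kill(pid, ...) would likewise be forwarded transparently to the remote process under BPROC.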
%\subsection{xcat} % %Presumably IBM's suite of cluster management software %(xcat\footnote{http://publib-b.boulder.ibm.com/Redbooks.nsf/RedbookAbstracts/sg246041.html}) %includes a batch system. Look into this. % %\subsection{CPLANT} % %CPLANT\footnote{http://www.cs.sandia.gov/cplant/} includes %Parallel Job Launcher, Compute Node Daemon Process, %Compute Node Allocator, Compute Node Status Tool. % %\subsection{NQS} % %NQS\footnote{http://umbc7.umbc.edu/nqs/nqsmain.html}, %the Network Queueing System, is a serial batch system. % %\subsection*{LAM / MPI} % %Local Area Multicomputer (LAM)~\cite{LAM} %is an MPI programming environment and development system for heterogeneous %computers on a network. %With LAM, a dedicated cluster or an existing network %computing infrastructure can act as one parallel computer solving %one problem. LAM features extensive debugging support in the %application development cycle and peak performance for production %applications. LAM features a full implementation of the MPI %communication standard. % %\subsection{MPICH} % %MPICH\footnote{http://www-unix.mcs.anl.gov/mpi/mpich/} %is a freely available, portable implementation of MPI, %the Standard for message-passing libraries. % %\subsection{Sun Grid Engine} % %SGE\footnote{http://www.sun.com/gridware/} is now proprietary. % % %\subsection{SCIDAC} % %The Scientific Discovery through Advanced Computing (SciDAC) %project\footnote{http://www.scidac.org/ScalableSystems} %has a Resource Management and Accounting working group %and a white paper\cite{Res2000}. Deployment of a system with %the required fault-tolerance and scalability is scheduled %for June 2006. % %\subsection{GNU Queue} % %GNU Queue\footnote{http://www.gnuqueue.org/home.html}. % %\subsection{Clubmask} %Clubmask\footnote{http://clubmask.sourceforge.net} is based on bproc. %Separate queueing system? 
% %\subsection{SQMX} %Part of the SCE Project\footnote{http://www.opensce.org/}, %SQMX\footnote{http://www.beowulf.org/pipermail/beowulf-announce/2001-January/000086.html} is worth taking a look at. slurm-slurm-15-08-7-1/doc/man/000077500000000000000000000000001265000126300156455ustar00rootroot00000000000000slurm-slurm-15-08-7-1/doc/man/Makefile.am000066400000000000000000000000401265000126300176730ustar00rootroot00000000000000 SUBDIRS = man1 man3 man5 man8 slurm-slurm-15-08-7-1/doc/man/Makefile.in000066400000000000000000000562421265000126300177230ustar00rootroot00000000000000# Makefile.in generated by automake 1.14.1 from Makefile.am. # @configure_input@ # Copyright (C) 1994-2013 Free Software Foundation, Inc. # This Makefile.in is free software; the Free Software Foundation # gives unlimited permission to copy and/or distribute it, # with or without modifications, as long as this notice is preserved. # This program is distributed in the hope that it will be useful, # but WITHOUT ANY WARRANTY, to the extent permitted by law; without # even the implied warranty of MERCHANTABILITY or FITNESS FOR A # PARTICULAR PURPOSE. @SET_MAKE@ VPATH = @srcdir@ am__is_gnu_make = test -n '$(MAKEFILE_LIST)' && test -n '$(MAKELEVEL)' am__make_running_with_option = \ case $${target_option-} in \ ?) 
;; \ *) echo "am__make_running_with_option: internal error: invalid" \ "target option '$${target_option-}' specified" >&2; \ exit 1;; \ esac; \ has_opt=no; \ sane_makeflags=$$MAKEFLAGS; \ if $(am__is_gnu_make); then \ sane_makeflags=$$MFLAGS; \ else \ case $$MAKEFLAGS in \ *\\[\ \ ]*) \ bs=\\; \ sane_makeflags=`printf '%s\n' "$$MAKEFLAGS" \ | sed "s/$$bs$$bs[$$bs $$bs ]*//g"`;; \ esac; \ fi; \ skip_next=no; \ strip_trailopt () \ { \ flg=`printf '%s\n' "$$flg" | sed "s/$$1.*$$//"`; \ }; \ for flg in $$sane_makeflags; do \ test $$skip_next = yes && { skip_next=no; continue; }; \ case $$flg in \ *=*|--*) continue;; \ -*I) strip_trailopt 'I'; skip_next=yes;; \ -*I?*) strip_trailopt 'I';; \ -*O) strip_trailopt 'O'; skip_next=yes;; \ -*O?*) strip_trailopt 'O';; \ -*l) strip_trailopt 'l'; skip_next=yes;; \ -*l?*) strip_trailopt 'l';; \ -[dEDm]) skip_next=yes;; \ -[JT]) skip_next=yes;; \ esac; \ case $$flg in \ *$$target_option*) has_opt=yes; break;; \ esac; \ done; \ test $$has_opt = yes am__make_dryrun = (target_option=n; $(am__make_running_with_option)) am__make_keepgoing = (target_option=k; $(am__make_running_with_option)) pkgdatadir = $(datadir)/@PACKAGE@ pkgincludedir = $(includedir)/@PACKAGE@ pkglibdir = $(libdir)/@PACKAGE@ pkglibexecdir = $(libexecdir)/@PACKAGE@ am__cd = CDPATH="$${ZSH_VERSION+.}$(PATH_SEPARATOR)" && cd install_sh_DATA = $(install_sh) -c -m 644 install_sh_PROGRAM = $(install_sh) -c install_sh_SCRIPT = $(install_sh) -c INSTALL_HEADER = $(INSTALL_DATA) transform = $(program_transform_name) NORMAL_INSTALL = : PRE_INSTALL = : POST_INSTALL = : NORMAL_UNINSTALL = : PRE_UNINSTALL = : POST_UNINSTALL = : build_triplet = @build@ host_triplet = @host@ target_triplet = @target@ subdir = doc/man DIST_COMMON = $(srcdir)/Makefile.in $(srcdir)/Makefile.am ACLOCAL_M4 = $(top_srcdir)/aclocal.m4 am__aclocal_m4_deps = $(top_srcdir)/auxdir/ax_lib_hdf5.m4 \ $(top_srcdir)/auxdir/ax_pthread.m4 \ $(top_srcdir)/auxdir/libtool.m4 \ $(top_srcdir)/auxdir/ltoptions.m4 \ 
$(top_srcdir)/auxdir/ltsugar.m4 \ $(top_srcdir)/auxdir/ltversion.m4 \ $(top_srcdir)/auxdir/lt~obsolete.m4 \ $(top_srcdir)/auxdir/slurm.m4 \ $(top_srcdir)/auxdir/x_ac__system_configuration.m4 \ $(top_srcdir)/auxdir/x_ac_affinity.m4 \ $(top_srcdir)/auxdir/x_ac_aix.m4 \ $(top_srcdir)/auxdir/x_ac_blcr.m4 \ $(top_srcdir)/auxdir/x_ac_bluegene.m4 \ $(top_srcdir)/auxdir/x_ac_cflags.m4 \ $(top_srcdir)/auxdir/x_ac_cray.m4 \ $(top_srcdir)/auxdir/x_ac_curl.m4 \ $(top_srcdir)/auxdir/x_ac_databases.m4 \ $(top_srcdir)/auxdir/x_ac_debug.m4 \ $(top_srcdir)/auxdir/x_ac_dlfcn.m4 \ $(top_srcdir)/auxdir/x_ac_env.m4 \ $(top_srcdir)/auxdir/x_ac_freeipmi.m4 \ $(top_srcdir)/auxdir/x_ac_gpl_licensed.m4 \ $(top_srcdir)/auxdir/x_ac_hwloc.m4 \ $(top_srcdir)/auxdir/x_ac_iso.m4 \ $(top_srcdir)/auxdir/x_ac_json.m4 \ $(top_srcdir)/auxdir/x_ac_lua.m4 \ $(top_srcdir)/auxdir/x_ac_man2html.m4 \ $(top_srcdir)/auxdir/x_ac_munge.m4 \ $(top_srcdir)/auxdir/x_ac_ncurses.m4 \ $(top_srcdir)/auxdir/x_ac_netloc.m4 \ $(top_srcdir)/auxdir/x_ac_nrt.m4 \ $(top_srcdir)/auxdir/x_ac_ofed.m4 \ $(top_srcdir)/auxdir/x_ac_pam.m4 \ $(top_srcdir)/auxdir/x_ac_printf_null.m4 \ $(top_srcdir)/auxdir/x_ac_ptrace.m4 \ $(top_srcdir)/auxdir/x_ac_readline.m4 \ $(top_srcdir)/auxdir/x_ac_rrdtool.m4 \ $(top_srcdir)/auxdir/x_ac_setpgrp.m4 \ $(top_srcdir)/auxdir/x_ac_setproctitle.m4 \ $(top_srcdir)/auxdir/x_ac_sgi_job.m4 \ $(top_srcdir)/auxdir/x_ac_slurm_ssl.m4 \ $(top_srcdir)/auxdir/x_ac_sun_const.m4 \ $(top_srcdir)/configure.ac am__configure_deps = $(am__aclocal_m4_deps) $(CONFIGURE_DEPENDENCIES) \ $(ACLOCAL_M4) mkinstalldirs = $(install_sh) -d CONFIG_HEADER = $(top_builddir)/config.h $(top_builddir)/slurm/slurm.h CONFIG_CLEAN_FILES = CONFIG_CLEAN_VPATH_FILES = AM_V_P = $(am__v_P_@AM_V@) am__v_P_ = $(am__v_P_@AM_DEFAULT_V@) am__v_P_0 = false am__v_P_1 = : AM_V_GEN = $(am__v_GEN_@AM_V@) am__v_GEN_ = $(am__v_GEN_@AM_DEFAULT_V@) am__v_GEN_0 = @echo " GEN " $@; am__v_GEN_1 = AM_V_at = $(am__v_at_@AM_V@) am__v_at_ = 
$(am__v_at_@AM_DEFAULT_V@) am__v_at_0 = @ am__v_at_1 = SOURCES = DIST_SOURCES = RECURSIVE_TARGETS = all-recursive check-recursive cscopelist-recursive \ ctags-recursive dvi-recursive html-recursive info-recursive \ install-data-recursive install-dvi-recursive \ install-exec-recursive install-html-recursive \ install-info-recursive install-pdf-recursive \ install-ps-recursive install-recursive installcheck-recursive \ installdirs-recursive pdf-recursive ps-recursive \ tags-recursive uninstall-recursive am__can_run_installinfo = \ case $$AM_UPDATE_INFO_DIR in \ n|no|NO) false;; \ *) (install-info --version) >/dev/null 2>&1;; \ esac RECURSIVE_CLEAN_TARGETS = mostlyclean-recursive clean-recursive \ distclean-recursive maintainer-clean-recursive am__recursive_targets = \ $(RECURSIVE_TARGETS) \ $(RECURSIVE_CLEAN_TARGETS) \ $(am__extra_recursive_targets) AM_RECURSIVE_TARGETS = $(am__recursive_targets:-recursive=) TAGS CTAGS \ distdir am__tagged_files = $(HEADERS) $(SOURCES) $(TAGS_FILES) $(LISP) # Read a list of newline-separated strings from the standard input, # and print each of them once, without duplicates. Input order is # *not* preserved. am__uniquify_input = $(AWK) '\ BEGIN { nonempty = 0; } \ { items[$$0] = 1; nonempty = 1; } \ END { if (nonempty) { for (i in items) print i; }; } \ ' # Make sure the list of sources is unique. This is necessary because, # e.g., the same source file might be shared among _SOURCES variables # for different programs/libraries. 
am__define_uniq_tagged_files = \ list='$(am__tagged_files)'; \ unique=`for i in $$list; do \ if test -f "$$i"; then echo $$i; else echo $(srcdir)/$$i; fi; \ done | $(am__uniquify_input)` ETAGS = etags CTAGS = ctags DIST_SUBDIRS = $(SUBDIRS) DISTFILES = $(DIST_COMMON) $(DIST_SOURCES) $(TEXINFOS) $(EXTRA_DIST) am__relativize = \ dir0=`pwd`; \ sed_first='s,^\([^/]*\)/.*$$,\1,'; \ sed_rest='s,^[^/]*/*,,'; \ sed_last='s,^.*/\([^/]*\)$$,\1,'; \ sed_butlast='s,/*[^/]*$$,,'; \ while test -n "$$dir1"; do \ first=`echo "$$dir1" | sed -e "$$sed_first"`; \ if test "$$first" != "."; then \ if test "$$first" = ".."; then \ dir2=`echo "$$dir0" | sed -e "$$sed_last"`/"$$dir2"; \ dir0=`echo "$$dir0" | sed -e "$$sed_butlast"`; \ else \ first2=`echo "$$dir2" | sed -e "$$sed_first"`; \ if test "$$first2" = "$$first"; then \ dir2=`echo "$$dir2" | sed -e "$$sed_rest"`; \ else \ dir2="../$$dir2"; \ fi; \ dir0="$$dir0"/"$$first"; \ fi; \ fi; \ dir1=`echo "$$dir1" | sed -e "$$sed_rest"`; \ done; \ reldir="$$dir2" ACLOCAL = @ACLOCAL@ AMTAR = @AMTAR@ AM_DEFAULT_VERBOSITY = @AM_DEFAULT_VERBOSITY@ AR = @AR@ AUTHD_CFLAGS = @AUTHD_CFLAGS@ AUTHD_LIBS = @AUTHD_LIBS@ AUTOCONF = @AUTOCONF@ AUTOHEADER = @AUTOHEADER@ AUTOMAKE = @AUTOMAKE@ AWK = @AWK@ BGL_LOADED = @BGL_LOADED@ BGQ_LOADED = @BGQ_LOADED@ BG_INCLUDES = @BG_INCLUDES@ BG_LDFLAGS = @BG_LDFLAGS@ BG_L_P_LOADED = @BG_L_P_LOADED@ BLCR_CPPFLAGS = @BLCR_CPPFLAGS@ BLCR_HOME = @BLCR_HOME@ BLCR_LDFLAGS = @BLCR_LDFLAGS@ BLCR_LIBS = @BLCR_LIBS@ BLUEGENE_LOADED = @BLUEGENE_LOADED@ CC = @CC@ CCDEPMODE = @CCDEPMODE@ CFLAGS = @CFLAGS@ CHECK_CFLAGS = @CHECK_CFLAGS@ CHECK_LIBS = @CHECK_LIBS@ CMD_LDFLAGS = @CMD_LDFLAGS@ CPP = @CPP@ CPPFLAGS = @CPPFLAGS@ CRAY_JOB_CPPFLAGS = @CRAY_JOB_CPPFLAGS@ CRAY_JOB_LDFLAGS = @CRAY_JOB_LDFLAGS@ CRAY_SELECT_CPPFLAGS = @CRAY_SELECT_CPPFLAGS@ CRAY_SELECT_LDFLAGS = @CRAY_SELECT_LDFLAGS@ CRAY_SWITCH_CPPFLAGS = @CRAY_SWITCH_CPPFLAGS@ CRAY_SWITCH_LDFLAGS = @CRAY_SWITCH_LDFLAGS@ CRAY_TASK_CPPFLAGS = @CRAY_TASK_CPPFLAGS@ 
CRAY_TASK_LDFLAGS = @CRAY_TASK_LDFLAGS@ CXX = @CXX@ CXXCPP = @CXXCPP@ CXXDEPMODE = @CXXDEPMODE@ CXXFLAGS = @CXXFLAGS@ CYGPATH_W = @CYGPATH_W@ DATAWARP_CPPFLAGS = @DATAWARP_CPPFLAGS@ DATAWARP_LDFLAGS = @DATAWARP_LDFLAGS@ DEFS = @DEFS@ DEPDIR = @DEPDIR@ DLLTOOL = @DLLTOOL@ DL_LIBS = @DL_LIBS@ DSYMUTIL = @DSYMUTIL@ DUMPBIN = @DUMPBIN@ ECHO_C = @ECHO_C@ ECHO_N = @ECHO_N@ ECHO_T = @ECHO_T@ EGREP = @EGREP@ EXEEXT = @EXEEXT@ FGREP = @FGREP@ FREEIPMI_CPPFLAGS = @FREEIPMI_CPPFLAGS@ FREEIPMI_LDFLAGS = @FREEIPMI_LDFLAGS@ FREEIPMI_LIBS = @FREEIPMI_LIBS@ GLIB_CFLAGS = @GLIB_CFLAGS@ GLIB_COMPILE_RESOURCES = @GLIB_COMPILE_RESOURCES@ GLIB_GENMARSHAL = @GLIB_GENMARSHAL@ GLIB_LIBS = @GLIB_LIBS@ GLIB_MKENUMS = @GLIB_MKENUMS@ GOBJECT_QUERY = @GOBJECT_QUERY@ GREP = @GREP@ GTK_CFLAGS = @GTK_CFLAGS@ GTK_LIBS = @GTK_LIBS@ H5CC = @H5CC@ H5FC = @H5FC@ HAVEMYSQLCONFIG = @HAVEMYSQLCONFIG@ HAVE_AIX = @HAVE_AIX@ HAVE_MAN2HTML = @HAVE_MAN2HTML@ HAVE_NRT = @HAVE_NRT@ HAVE_OPENSSL = @HAVE_OPENSSL@ HAVE_SOME_CURSES = @HAVE_SOME_CURSES@ HDF5_CC = @HDF5_CC@ HDF5_CFLAGS = @HDF5_CFLAGS@ HDF5_CPPFLAGS = @HDF5_CPPFLAGS@ HDF5_FC = @HDF5_FC@ HDF5_FFLAGS = @HDF5_FFLAGS@ HDF5_FLIBS = @HDF5_FLIBS@ HDF5_LDFLAGS = @HDF5_LDFLAGS@ HDF5_LIBS = @HDF5_LIBS@ HDF5_VERSION = @HDF5_VERSION@ HWLOC_CPPFLAGS = @HWLOC_CPPFLAGS@ HWLOC_LDFLAGS = @HWLOC_LDFLAGS@ HWLOC_LIBS = @HWLOC_LIBS@ INSTALL = @INSTALL@ INSTALL_DATA = @INSTALL_DATA@ INSTALL_PROGRAM = @INSTALL_PROGRAM@ INSTALL_SCRIPT = @INSTALL_SCRIPT@ INSTALL_STRIP_PROGRAM = @INSTALL_STRIP_PROGRAM@ JSON_CPPFLAGS = @JSON_CPPFLAGS@ JSON_LDFLAGS = @JSON_LDFLAGS@ LD = @LD@ LDFLAGS = @LDFLAGS@ LIBCURL = @LIBCURL@ LIBCURL_CPPFLAGS = @LIBCURL_CPPFLAGS@ LIBOBJS = @LIBOBJS@ LIBS = @LIBS@ LIBTOOL = @LIBTOOL@ LIB_LDFLAGS = @LIB_LDFLAGS@ LIPO = @LIPO@ LN_S = @LN_S@ LTLIBOBJS = @LTLIBOBJS@ MAINT = @MAINT@ MAKEINFO = @MAKEINFO@ MANIFEST_TOOL = @MANIFEST_TOOL@ MKDIR_P = @MKDIR_P@ MUNGE_CPPFLAGS = @MUNGE_CPPFLAGS@ MUNGE_DIR = @MUNGE_DIR@ MUNGE_LDFLAGS = @MUNGE_LDFLAGS@ MUNGE_LIBS = 
@MUNGE_LIBS@ MYSQL_CFLAGS = @MYSQL_CFLAGS@ MYSQL_LIBS = @MYSQL_LIBS@ NCURSES = @NCURSES@ NETLOC_CPPFLAGS = @NETLOC_CPPFLAGS@ NETLOC_LDFLAGS = @NETLOC_LDFLAGS@ NETLOC_LIBS = @NETLOC_LIBS@ NM = @NM@ NMEDIT = @NMEDIT@ NRT_CPPFLAGS = @NRT_CPPFLAGS@ NUMA_LIBS = @NUMA_LIBS@ OBJDUMP = @OBJDUMP@ OBJEXT = @OBJEXT@ OFED_CPPFLAGS = @OFED_CPPFLAGS@ OFED_LDFLAGS = @OFED_LDFLAGS@ OFED_LIBS = @OFED_LIBS@ OTOOL = @OTOOL@ OTOOL64 = @OTOOL64@ PACKAGE = @PACKAGE@ PACKAGE_BUGREPORT = @PACKAGE_BUGREPORT@ PACKAGE_NAME = @PACKAGE_NAME@ PACKAGE_STRING = @PACKAGE_STRING@ PACKAGE_TARNAME = @PACKAGE_TARNAME@ PACKAGE_URL = @PACKAGE_URL@ PACKAGE_VERSION = @PACKAGE_VERSION@ PAM_DIR = @PAM_DIR@ PAM_LIBS = @PAM_LIBS@ PATH_SEPARATOR = @PATH_SEPARATOR@ PKG_CONFIG = @PKG_CONFIG@ PKG_CONFIG_LIBDIR = @PKG_CONFIG_LIBDIR@ PKG_CONFIG_PATH = @PKG_CONFIG_PATH@ PROCTRACKDIR = @PROCTRACKDIR@ PROJECT = @PROJECT@ PTHREAD_CC = @PTHREAD_CC@ PTHREAD_CFLAGS = @PTHREAD_CFLAGS@ PTHREAD_LIBS = @PTHREAD_LIBS@ RANLIB = @RANLIB@ READLINE_LIBS = @READLINE_LIBS@ REAL_BGQ_LOADED = @REAL_BGQ_LOADED@ REAL_BG_L_P_LOADED = @REAL_BG_L_P_LOADED@ RELEASE = @RELEASE@ RRDTOOL_CPPFLAGS = @RRDTOOL_CPPFLAGS@ RRDTOOL_LDFLAGS = @RRDTOOL_LDFLAGS@ RRDTOOL_LIBS = @RRDTOOL_LIBS@ RUNJOB_LDFLAGS = @RUNJOB_LDFLAGS@ SED = @SED@ SEMAPHORE_LIBS = @SEMAPHORE_LIBS@ SEMAPHORE_SOURCES = @SEMAPHORE_SOURCES@ SET_MAKE = @SET_MAKE@ SHELL = @SHELL@ SLEEP_CMD = @SLEEP_CMD@ SLURMCTLD_PORT = @SLURMCTLD_PORT@ SLURMCTLD_PORT_COUNT = @SLURMCTLD_PORT_COUNT@ SLURMDBD_PORT = @SLURMDBD_PORT@ SLURMD_PORT = @SLURMD_PORT@ SLURM_API_AGE = @SLURM_API_AGE@ SLURM_API_CURRENT = @SLURM_API_CURRENT@ SLURM_API_MAJOR = @SLURM_API_MAJOR@ SLURM_API_REVISION = @SLURM_API_REVISION@ SLURM_API_VERSION = @SLURM_API_VERSION@ SLURM_MAJOR = @SLURM_MAJOR@ SLURM_MICRO = @SLURM_MICRO@ SLURM_MINOR = @SLURM_MINOR@ SLURM_PREFIX = @SLURM_PREFIX@ SLURM_VERSION_NUMBER = @SLURM_VERSION_NUMBER@ SLURM_VERSION_STRING = @SLURM_VERSION_STRING@ SO_LDFLAGS = @SO_LDFLAGS@ SSL_CPPFLAGS = @SSL_CPPFLAGS@ 
SSL_LDFLAGS = @SSL_LDFLAGS@ SSL_LIBS = @SSL_LIBS@ STRIP = @STRIP@ SUCMD = @SUCMD@ UTIL_LIBS = @UTIL_LIBS@ VERSION = @VERSION@ _libcurl_config = @_libcurl_config@ abs_builddir = @abs_builddir@ abs_srcdir = @abs_srcdir@ abs_top_builddir = @abs_top_builddir@ abs_top_srcdir = @abs_top_srcdir@ ac_ct_AR = @ac_ct_AR@ ac_ct_CC = @ac_ct_CC@ ac_ct_CXX = @ac_ct_CXX@ ac_ct_DUMPBIN = @ac_ct_DUMPBIN@ ac_have_man2html = @ac_have_man2html@ am__include = @am__include@ am__leading_dot = @am__leading_dot@ am__quote = @am__quote@ am__tar = @am__tar@ am__untar = @am__untar@ ax_pthread_config = @ax_pthread_config@ bindir = @bindir@ build = @build@ build_alias = @build_alias@ build_cpu = @build_cpu@ build_os = @build_os@ build_vendor = @build_vendor@ builddir = @builddir@ datadir = @datadir@ datarootdir = @datarootdir@ docdir = @docdir@ dvidir = @dvidir@ exec_prefix = @exec_prefix@ host = @host@ host_alias = @host_alias@ host_cpu = @host_cpu@ host_os = @host_os@ host_vendor = @host_vendor@ htmldir = @htmldir@ includedir = @includedir@ infodir = @infodir@ install_sh = @install_sh@ libdir = @libdir@ libexecdir = @libexecdir@ localedir = @localedir@ localstatedir = @localstatedir@ lua_CFLAGS = @lua_CFLAGS@ lua_LIBS = @lua_LIBS@ mandir = @mandir@ mkdir_p = @mkdir_p@ oldincludedir = @oldincludedir@ pdfdir = @pdfdir@ prefix = @prefix@ program_transform_name = @program_transform_name@ psdir = @psdir@ sbindir = @sbindir@ sharedstatedir = @sharedstatedir@ srcdir = @srcdir@ sysconfdir = @sysconfdir@ target = @target@ target_alias = @target_alias@ target_cpu = @target_cpu@ target_os = @target_os@ target_vendor = @target_vendor@ top_build_prefix = @top_build_prefix@ top_builddir = @top_builddir@ top_srcdir = @top_srcdir@ SUBDIRS = man1 man3 man5 man8 all: all-recursive .SUFFIXES: $(srcdir)/Makefile.in: @MAINTAINER_MODE_TRUE@ $(srcdir)/Makefile.am $(am__configure_deps) @for dep in $?; do \ case '$(am__configure_deps)' in \ *$$dep*) \ ( cd $(top_builddir) && $(MAKE) $(AM_MAKEFLAGS) am--refresh ) \ && 
{ if test -f $@; then exit 0; else break; fi; }; \ exit 1;; \ esac; \ done; \ echo ' cd $(top_srcdir) && $(AUTOMAKE) --gnu doc/man/Makefile'; \ $(am__cd) $(top_srcdir) && \ $(AUTOMAKE) --gnu doc/man/Makefile .PRECIOUS: Makefile Makefile: $(srcdir)/Makefile.in $(top_builddir)/config.status @case '$?' in \ *config.status*) \ cd $(top_builddir) && $(MAKE) $(AM_MAKEFLAGS) am--refresh;; \ *) \ echo ' cd $(top_builddir) && $(SHELL) ./config.status $(subdir)/$@ $(am__depfiles_maybe)'; \ cd $(top_builddir) && $(SHELL) ./config.status $(subdir)/$@ $(am__depfiles_maybe);; \ esac; $(top_builddir)/config.status: $(top_srcdir)/configure $(CONFIG_STATUS_DEPENDENCIES) cd $(top_builddir) && $(MAKE) $(AM_MAKEFLAGS) am--refresh $(top_srcdir)/configure: @MAINTAINER_MODE_TRUE@ $(am__configure_deps) cd $(top_builddir) && $(MAKE) $(AM_MAKEFLAGS) am--refresh $(ACLOCAL_M4): @MAINTAINER_MODE_TRUE@ $(am__aclocal_m4_deps) cd $(top_builddir) && $(MAKE) $(AM_MAKEFLAGS) am--refresh $(am__aclocal_m4_deps): mostlyclean-libtool: -rm -f *.lo clean-libtool: -rm -rf .libs _libs # This directory's subdirectories are mostly independent; you can cd # into them and run 'make' without going through this Makefile. # To change the values of 'make' variables: instead of editing Makefiles, # (1) if the variable is set in 'config.status', edit 'config.status' # (which will cause the Makefiles to be regenerated when you run 'make'); # (2) otherwise, pass the desired values on the 'make' command line. 
$(am__recursive_targets): @fail=; \ if $(am__make_keepgoing); then \ failcom='fail=yes'; \ else \ failcom='exit 1'; \ fi; \ dot_seen=no; \ target=`echo $@ | sed s/-recursive//`; \ case "$@" in \ distclean-* | maintainer-clean-*) list='$(DIST_SUBDIRS)' ;; \ *) list='$(SUBDIRS)' ;; \ esac; \ for subdir in $$list; do \ echo "Making $$target in $$subdir"; \ if test "$$subdir" = "."; then \ dot_seen=yes; \ local_target="$$target-am"; \ else \ local_target="$$target"; \ fi; \ ($(am__cd) $$subdir && $(MAKE) $(AM_MAKEFLAGS) $$local_target) \ || eval $$failcom; \ done; \ if test "$$dot_seen" = "no"; then \ $(MAKE) $(AM_MAKEFLAGS) "$$target-am" || exit 1; \ fi; test -z "$$fail" ID: $(am__tagged_files) $(am__define_uniq_tagged_files); mkid -fID $$unique tags: tags-recursive TAGS: tags tags-am: $(TAGS_DEPENDENCIES) $(am__tagged_files) set x; \ here=`pwd`; \ if ($(ETAGS) --etags-include --version) >/dev/null 2>&1; then \ include_option=--etags-include; \ empty_fix=.; \ else \ include_option=--include; \ empty_fix=; \ fi; \ list='$(SUBDIRS)'; for subdir in $$list; do \ if test "$$subdir" = .; then :; else \ test ! 
-f $$subdir/TAGS || \ set "$$@" "$$include_option=$$here/$$subdir/TAGS"; \ fi; \ done; \ $(am__define_uniq_tagged_files); \ shift; \ if test -z "$(ETAGS_ARGS)$$*$$unique"; then :; else \ test -n "$$unique" || unique=$$empty_fix; \ if test $$# -gt 0; then \ $(ETAGS) $(ETAGSFLAGS) $(AM_ETAGSFLAGS) $(ETAGS_ARGS) \ "$$@" $$unique; \ else \ $(ETAGS) $(ETAGSFLAGS) $(AM_ETAGSFLAGS) $(ETAGS_ARGS) \ $$unique; \ fi; \ fi ctags: ctags-recursive CTAGS: ctags ctags-am: $(TAGS_DEPENDENCIES) $(am__tagged_files) $(am__define_uniq_tagged_files); \ test -z "$(CTAGS_ARGS)$$unique" \ || $(CTAGS) $(CTAGSFLAGS) $(AM_CTAGSFLAGS) $(CTAGS_ARGS) \ $$unique GTAGS: here=`$(am__cd) $(top_builddir) && pwd` \ && $(am__cd) $(top_srcdir) \ && gtags -i $(GTAGS_ARGS) "$$here" cscopelist: cscopelist-recursive cscopelist-am: $(am__tagged_files) list='$(am__tagged_files)'; \ case "$(srcdir)" in \ [\\/]* | ?:[\\/]*) sdir="$(srcdir)" ;; \ *) sdir=$(subdir)/$(srcdir) ;; \ esac; \ for i in $$list; do \ if test -f "$$i"; then \ echo "$(subdir)/$$i"; \ else \ echo "$$sdir/$$i"; \ fi; \ done >> $(top_builddir)/cscope.files distclean-tags: -rm -f TAGS ID GTAGS GRTAGS GSYMS GPATH tags distdir: $(DISTFILES) @srcdirstrip=`echo "$(srcdir)" | sed 's/[].[^$$\\*]/\\\\&/g'`; \ topsrcdirstrip=`echo "$(top_srcdir)" | sed 's/[].[^$$\\*]/\\\\&/g'`; \ list='$(DISTFILES)'; \ dist_files=`for file in $$list; do echo $$file; done | \ sed -e "s|^$$srcdirstrip/||;t" \ -e "s|^$$topsrcdirstrip/|$(top_builddir)/|;t"`; \ case $$dist_files in \ */*) $(MKDIR_P) `echo "$$dist_files" | \ sed '/\//!d;s|^|$(distdir)/|;s,/[^/]*$$,,' | \ sort -u` ;; \ esac; \ for file in $$dist_files; do \ if test -f $$file || test -d $$file; then d=.; else d=$(srcdir); fi; \ if test -d $$d/$$file; then \ dir=`echo "/$$file" | sed -e 's,/[^/]*$$,,'`; \ if test -d "$(distdir)/$$file"; then \ find "$(distdir)/$$file" -type d ! 
-perm -700 -exec chmod u+rwx {} \;; \ fi; \ if test -d $(srcdir)/$$file && test $$d != $(srcdir); then \ cp -fpR $(srcdir)/$$file "$(distdir)$$dir" || exit 1; \ find "$(distdir)/$$file" -type d ! -perm -700 -exec chmod u+rwx {} \;; \ fi; \ cp -fpR $$d/$$file "$(distdir)$$dir" || exit 1; \ else \ test -f "$(distdir)/$$file" \ || cp -p $$d/$$file "$(distdir)/$$file" \ || exit 1; \ fi; \ done @list='$(DIST_SUBDIRS)'; for subdir in $$list; do \ if test "$$subdir" = .; then :; else \ $(am__make_dryrun) \ || test -d "$(distdir)/$$subdir" \ || $(MKDIR_P) "$(distdir)/$$subdir" \ || exit 1; \ dir1=$$subdir; dir2="$(distdir)/$$subdir"; \ $(am__relativize); \ new_distdir=$$reldir; \ dir1=$$subdir; dir2="$(top_distdir)"; \ $(am__relativize); \ new_top_distdir=$$reldir; \ echo " (cd $$subdir && $(MAKE) $(AM_MAKEFLAGS) top_distdir="$$new_top_distdir" distdir="$$new_distdir" \\"; \ echo " am__remove_distdir=: am__skip_length_check=: am__skip_mode_fix=: distdir)"; \ ($(am__cd) $$subdir && \ $(MAKE) $(AM_MAKEFLAGS) \ top_distdir="$$new_top_distdir" \ distdir="$$new_distdir" \ am__remove_distdir=: \ am__skip_length_check=: \ am__skip_mode_fix=: \ distdir) \ || exit 1; \ fi; \ done check-am: all-am check: check-recursive all-am: Makefile installdirs: installdirs-recursive installdirs-am: install: install-recursive install-exec: install-exec-recursive install-data: install-data-recursive uninstall: uninstall-recursive install-am: all-am @$(MAKE) $(AM_MAKEFLAGS) install-exec-am install-data-am installcheck: installcheck-recursive install-strip: if test -z '$(STRIP)'; then \ $(MAKE) $(AM_MAKEFLAGS) INSTALL_PROGRAM="$(INSTALL_STRIP_PROGRAM)" \ install_sh_PROGRAM="$(INSTALL_STRIP_PROGRAM)" INSTALL_STRIP_FLAG=-s \ install; \ else \ $(MAKE) $(AM_MAKEFLAGS) INSTALL_PROGRAM="$(INSTALL_STRIP_PROGRAM)" \ install_sh_PROGRAM="$(INSTALL_STRIP_PROGRAM)" INSTALL_STRIP_FLAG=-s \ "INSTALL_PROGRAM_ENV=STRIPPROG='$(STRIP)'" install; \ fi mostlyclean-generic: clean-generic: distclean-generic: -test -z 
"$(CONFIG_CLEAN_FILES)" || rm -f $(CONFIG_CLEAN_FILES) -test . = "$(srcdir)" || test -z "$(CONFIG_CLEAN_VPATH_FILES)" || rm -f $(CONFIG_CLEAN_VPATH_FILES) maintainer-clean-generic: @echo "This command is intended for maintainers to use" @echo "it deletes files that may require special tools to rebuild." clean: clean-recursive clean-am: clean-generic clean-libtool mostlyclean-am distclean: distclean-recursive -rm -f Makefile distclean-am: clean-am distclean-generic distclean-tags dvi: dvi-recursive dvi-am: html: html-recursive html-am: info: info-recursive info-am: install-data-am: install-dvi: install-dvi-recursive install-dvi-am: install-exec-am: install-html: install-html-recursive install-html-am: install-info: install-info-recursive install-info-am: install-man: install-pdf: install-pdf-recursive install-pdf-am: install-ps: install-ps-recursive install-ps-am: installcheck-am: maintainer-clean: maintainer-clean-recursive -rm -f Makefile maintainer-clean-am: distclean-am maintainer-clean-generic mostlyclean: mostlyclean-recursive mostlyclean-am: mostlyclean-generic mostlyclean-libtool pdf: pdf-recursive pdf-am: ps: ps-recursive ps-am: uninstall-am: .MAKE: $(am__recursive_targets) install-am install-strip .PHONY: $(am__recursive_targets) CTAGS GTAGS TAGS all all-am check \ check-am clean clean-generic clean-libtool cscopelist-am ctags \ ctags-am distclean distclean-generic distclean-libtool \ distclean-tags distdir dvi dvi-am html html-am info info-am \ install install-am install-data install-data-am install-dvi \ install-dvi-am install-exec install-exec-am install-html \ install-html-am install-info install-info-am install-man \ install-pdf install-pdf-am install-ps install-ps-am \ install-strip installcheck installcheck-am installdirs \ installdirs-am maintainer-clean maintainer-clean-generic \ mostlyclean mostlyclean-generic mostlyclean-libtool pdf pdf-am \ ps ps-am tags tags-am uninstall uninstall-am # Tell versions [3.59,3.63) of GNU make to not export all 
variables.
# Otherwise a system limit (for SysV at least) may be exceeded.
.NOEXPORT:
slurm-slurm-15-08-7-1/doc/man/man1/
slurm-slurm-15-08-7-1/doc/man/man1/Makefile.am
htmldir = ${datadir}/doc/${PACKAGE}-${SLURM_VERSION_STRING}/html

man1_MANS = \
	sacct.1 \
	sacctmgr.1 \
	salloc.1 \
	sattach.1 \
	sbatch.1 \
	sbcast.1 \
	scancel.1 \
	scontrol.1 \
	sdiag.1 \
	sinfo.1 \
	slurm.1 \
	smap.1 \
	sprio.1 \
	sh5util.1 \
	squeue.1 \
	sreport.1 \
	srun.1 \
	srun_cr.1 \
	sshare.1 \
	sstat.1 \
	strigger.1 \
	sview.1

EXTRA_DIST = $(man1_MANS)

if HAVE_MAN2HTML

html_DATA = \
	sacct.html \
	sacctmgr.html \
	salloc.html \
	sattach.html \
	sbatch.html \
	sbcast.html \
	scancel.html \
	scontrol.html \
	sdiag.html \
	sinfo.html \
	smap.html \
	sprio.html \
	sh5util.html \
	squeue.html \
	sreport.html \
	srun.html \
	srun_cr.html \
	sshare.html \
	sstat.html \
	strigger.html \
	sview.html

MOSTLYCLEANFILES = ${html_DATA}
EXTRA_DIST += $(html_DATA)

SUFFIXES = .html

.1.html:
	`dirname $<`/../man2html.py @SLURM_MAJOR@.@SLURM_MINOR@ $(srcdir)/../../html/header.txt $(srcdir)/../../html/footer.txt $<

endif
slurm-slurm-15-08-7-1/doc/man/man1/Makefile.in
# Makefile.in generated by automake 1.14.1 from Makefile.am.
# @configure_input@

# Copyright (C) 1994-2013 Free Software Foundation, Inc.

# This Makefile.in is free software; the Free Software Foundation
# gives unlimited permission to copy and/or distribute it,
# with or without modifications, as long as this notice is preserved.

# This program is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY, to the extent permitted by law; without
# even the implied warranty of MERCHANTABILITY or FITNESS FOR A
# PARTICULAR PURPOSE.
@SET_MAKE@ VPATH = @srcdir@ am__is_gnu_make = test -n '$(MAKEFILE_LIST)' && test -n '$(MAKELEVEL)' am__make_running_with_option = \ case $${target_option-} in \ ?) ;; \ *) echo "am__make_running_with_option: internal error: invalid" \ "target option '$${target_option-}' specified" >&2; \ exit 1;; \ esac; \ has_opt=no; \ sane_makeflags=$$MAKEFLAGS; \ if $(am__is_gnu_make); then \ sane_makeflags=$$MFLAGS; \ else \ case $$MAKEFLAGS in \ *\\[\ \ ]*) \ bs=\\; \ sane_makeflags=`printf '%s\n' "$$MAKEFLAGS" \ | sed "s/$$bs$$bs[$$bs $$bs ]*//g"`;; \ esac; \ fi; \ skip_next=no; \ strip_trailopt () \ { \ flg=`printf '%s\n' "$$flg" | sed "s/$$1.*$$//"`; \ }; \ for flg in $$sane_makeflags; do \ test $$skip_next = yes && { skip_next=no; continue; }; \ case $$flg in \ *=*|--*) continue;; \ -*I) strip_trailopt 'I'; skip_next=yes;; \ -*I?*) strip_trailopt 'I';; \ -*O) strip_trailopt 'O'; skip_next=yes;; \ -*O?*) strip_trailopt 'O';; \ -*l) strip_trailopt 'l'; skip_next=yes;; \ -*l?*) strip_trailopt 'l';; \ -[dEDm]) skip_next=yes;; \ -[JT]) skip_next=yes;; \ esac; \ case $$flg in \ *$$target_option*) has_opt=yes; break;; \ esac; \ done; \ test $$has_opt = yes am__make_dryrun = (target_option=n; $(am__make_running_with_option)) am__make_keepgoing = (target_option=k; $(am__make_running_with_option)) pkgdatadir = $(datadir)/@PACKAGE@ pkgincludedir = $(includedir)/@PACKAGE@ pkglibdir = $(libdir)/@PACKAGE@ pkglibexecdir = $(libexecdir)/@PACKAGE@ am__cd = CDPATH="$${ZSH_VERSION+.}$(PATH_SEPARATOR)" && cd install_sh_DATA = $(install_sh) -c -m 644 install_sh_PROGRAM = $(install_sh) -c install_sh_SCRIPT = $(install_sh) -c INSTALL_HEADER = $(INSTALL_DATA) transform = $(program_transform_name) NORMAL_INSTALL = : PRE_INSTALL = : POST_INSTALL = : NORMAL_UNINSTALL = : PRE_UNINSTALL = : POST_UNINSTALL = : build_triplet = @build@ host_triplet = @host@ target_triplet = @target@ @HAVE_MAN2HTML_TRUE@am__append_1 = $(html_DATA) subdir = doc/man/man1 DIST_COMMON = $(srcdir)/Makefile.in 
$(srcdir)/Makefile.am ACLOCAL_M4 = $(top_srcdir)/aclocal.m4 am__aclocal_m4_deps = $(top_srcdir)/auxdir/ax_lib_hdf5.m4 \ $(top_srcdir)/auxdir/ax_pthread.m4 \ $(top_srcdir)/auxdir/libtool.m4 \ $(top_srcdir)/auxdir/ltoptions.m4 \ $(top_srcdir)/auxdir/ltsugar.m4 \ $(top_srcdir)/auxdir/ltversion.m4 \ $(top_srcdir)/auxdir/lt~obsolete.m4 \ $(top_srcdir)/auxdir/slurm.m4 \ $(top_srcdir)/auxdir/x_ac__system_configuration.m4 \ $(top_srcdir)/auxdir/x_ac_affinity.m4 \ $(top_srcdir)/auxdir/x_ac_aix.m4 \ $(top_srcdir)/auxdir/x_ac_blcr.m4 \ $(top_srcdir)/auxdir/x_ac_bluegene.m4 \ $(top_srcdir)/auxdir/x_ac_cflags.m4 \ $(top_srcdir)/auxdir/x_ac_cray.m4 \ $(top_srcdir)/auxdir/x_ac_curl.m4 \ $(top_srcdir)/auxdir/x_ac_databases.m4 \ $(top_srcdir)/auxdir/x_ac_debug.m4 \ $(top_srcdir)/auxdir/x_ac_dlfcn.m4 \ $(top_srcdir)/auxdir/x_ac_env.m4 \ $(top_srcdir)/auxdir/x_ac_freeipmi.m4 \ $(top_srcdir)/auxdir/x_ac_gpl_licensed.m4 \ $(top_srcdir)/auxdir/x_ac_hwloc.m4 \ $(top_srcdir)/auxdir/x_ac_iso.m4 \ $(top_srcdir)/auxdir/x_ac_json.m4 \ $(top_srcdir)/auxdir/x_ac_lua.m4 \ $(top_srcdir)/auxdir/x_ac_man2html.m4 \ $(top_srcdir)/auxdir/x_ac_munge.m4 \ $(top_srcdir)/auxdir/x_ac_ncurses.m4 \ $(top_srcdir)/auxdir/x_ac_netloc.m4 \ $(top_srcdir)/auxdir/x_ac_nrt.m4 \ $(top_srcdir)/auxdir/x_ac_ofed.m4 \ $(top_srcdir)/auxdir/x_ac_pam.m4 \ $(top_srcdir)/auxdir/x_ac_printf_null.m4 \ $(top_srcdir)/auxdir/x_ac_ptrace.m4 \ $(top_srcdir)/auxdir/x_ac_readline.m4 \ $(top_srcdir)/auxdir/x_ac_rrdtool.m4 \ $(top_srcdir)/auxdir/x_ac_setpgrp.m4 \ $(top_srcdir)/auxdir/x_ac_setproctitle.m4 \ $(top_srcdir)/auxdir/x_ac_sgi_job.m4 \ $(top_srcdir)/auxdir/x_ac_slurm_ssl.m4 \ $(top_srcdir)/auxdir/x_ac_sun_const.m4 \ $(top_srcdir)/configure.ac am__configure_deps = $(am__aclocal_m4_deps) $(CONFIGURE_DEPENDENCIES) \ $(ACLOCAL_M4) mkinstalldirs = $(install_sh) -d CONFIG_HEADER = $(top_builddir)/config.h $(top_builddir)/slurm/slurm.h CONFIG_CLEAN_FILES = CONFIG_CLEAN_VPATH_FILES = AM_V_P = $(am__v_P_@AM_V@) am__v_P_ = 
$(am__v_P_@AM_DEFAULT_V@) am__v_P_0 = false am__v_P_1 = : AM_V_GEN = $(am__v_GEN_@AM_V@) am__v_GEN_ = $(am__v_GEN_@AM_DEFAULT_V@) am__v_GEN_0 = @echo " GEN " $@; am__v_GEN_1 = AM_V_at = $(am__v_at_@AM_V@) am__v_at_ = $(am__v_at_@AM_DEFAULT_V@) am__v_at_0 = @ am__v_at_1 = SOURCES = DIST_SOURCES = am__can_run_installinfo = \ case $$AM_UPDATE_INFO_DIR in \ n|no|NO) false;; \ *) (install-info --version) >/dev/null 2>&1;; \ esac am__vpath_adj_setup = srcdirstrip=`echo "$(srcdir)" | sed 's|.|.|g'`; am__vpath_adj = case $$p in \ $(srcdir)/*) f=`echo "$$p" | sed "s|^$$srcdirstrip/||"`;; \ *) f=$$p;; \ esac; am__strip_dir = f=`echo $$p | sed -e 's|^.*/||'`; am__install_max = 40 am__nobase_strip_setup = \ srcdirstrip=`echo "$(srcdir)" | sed 's/[].[^$$\\*|]/\\\\&/g'` am__nobase_strip = \ for p in $$list; do echo "$$p"; done | sed -e "s|$$srcdirstrip/||" am__nobase_list = $(am__nobase_strip_setup); \ for p in $$list; do echo "$$p $$p"; done | \ sed "s| $$srcdirstrip/| |;"' / .*\//!s/ .*/ ./; s,\( .*\)/[^/]*$$,\1,' | \ $(AWK) 'BEGIN { files["."] = "" } { files[$$2] = files[$$2] " " $$1; \ if (++n[$$2] == $(am__install_max)) \ { print $$2, files[$$2]; n[$$2] = 0; files[$$2] = "" } } \ END { for (dir in files) print dir, files[dir] }' am__base_list = \ sed '$$!N;$$!N;$$!N;$$!N;$$!N;$$!N;$$!N;s/\n/ /g' | \ sed '$$!N;$$!N;$$!N;$$!N;s/\n/ /g' am__uninstall_files_from_dir = { \ test -z "$$files" \ || { test ! -d "$$dir" && test ! -f "$$dir" && test ! 
-r "$$dir"; } \ || { echo " ( cd '$$dir' && rm -f" $$files ")"; \ $(am__cd) "$$dir" && rm -f $$files; }; \ } man1dir = $(mandir)/man1 am__installdirs = "$(DESTDIR)$(man1dir)" "$(DESTDIR)$(htmldir)" NROFF = nroff MANS = $(man1_MANS) DATA = $(html_DATA) am__tagged_files = $(HEADERS) $(SOURCES) $(TAGS_FILES) $(LISP) DISTFILES = $(DIST_COMMON) $(DIST_SOURCES) $(TEXINFOS) $(EXTRA_DIST) ACLOCAL = @ACLOCAL@ AMTAR = @AMTAR@ AM_DEFAULT_VERBOSITY = @AM_DEFAULT_VERBOSITY@ AR = @AR@ AUTHD_CFLAGS = @AUTHD_CFLAGS@ AUTHD_LIBS = @AUTHD_LIBS@ AUTOCONF = @AUTOCONF@ AUTOHEADER = @AUTOHEADER@ AUTOMAKE = @AUTOMAKE@ AWK = @AWK@ BGL_LOADED = @BGL_LOADED@ BGQ_LOADED = @BGQ_LOADED@ BG_INCLUDES = @BG_INCLUDES@ BG_LDFLAGS = @BG_LDFLAGS@ BG_L_P_LOADED = @BG_L_P_LOADED@ BLCR_CPPFLAGS = @BLCR_CPPFLAGS@ BLCR_HOME = @BLCR_HOME@ BLCR_LDFLAGS = @BLCR_LDFLAGS@ BLCR_LIBS = @BLCR_LIBS@ BLUEGENE_LOADED = @BLUEGENE_LOADED@ CC = @CC@ CCDEPMODE = @CCDEPMODE@ CFLAGS = @CFLAGS@ CHECK_CFLAGS = @CHECK_CFLAGS@ CHECK_LIBS = @CHECK_LIBS@ CMD_LDFLAGS = @CMD_LDFLAGS@ CPP = @CPP@ CPPFLAGS = @CPPFLAGS@ CRAY_JOB_CPPFLAGS = @CRAY_JOB_CPPFLAGS@ CRAY_JOB_LDFLAGS = @CRAY_JOB_LDFLAGS@ CRAY_SELECT_CPPFLAGS = @CRAY_SELECT_CPPFLAGS@ CRAY_SELECT_LDFLAGS = @CRAY_SELECT_LDFLAGS@ CRAY_SWITCH_CPPFLAGS = @CRAY_SWITCH_CPPFLAGS@ CRAY_SWITCH_LDFLAGS = @CRAY_SWITCH_LDFLAGS@ CRAY_TASK_CPPFLAGS = @CRAY_TASK_CPPFLAGS@ CRAY_TASK_LDFLAGS = @CRAY_TASK_LDFLAGS@ CXX = @CXX@ CXXCPP = @CXXCPP@ CXXDEPMODE = @CXXDEPMODE@ CXXFLAGS = @CXXFLAGS@ CYGPATH_W = @CYGPATH_W@ DATAWARP_CPPFLAGS = @DATAWARP_CPPFLAGS@ DATAWARP_LDFLAGS = @DATAWARP_LDFLAGS@ DEFS = @DEFS@ DEPDIR = @DEPDIR@ DLLTOOL = @DLLTOOL@ DL_LIBS = @DL_LIBS@ DSYMUTIL = @DSYMUTIL@ DUMPBIN = @DUMPBIN@ ECHO_C = @ECHO_C@ ECHO_N = @ECHO_N@ ECHO_T = @ECHO_T@ EGREP = @EGREP@ EXEEXT = @EXEEXT@ FGREP = @FGREP@ FREEIPMI_CPPFLAGS = @FREEIPMI_CPPFLAGS@ FREEIPMI_LDFLAGS = @FREEIPMI_LDFLAGS@ FREEIPMI_LIBS = @FREEIPMI_LIBS@ GLIB_CFLAGS = @GLIB_CFLAGS@ GLIB_COMPILE_RESOURCES = @GLIB_COMPILE_RESOURCES@ 
GLIB_GENMARSHAL = @GLIB_GENMARSHAL@ GLIB_LIBS = @GLIB_LIBS@ GLIB_MKENUMS = @GLIB_MKENUMS@ GOBJECT_QUERY = @GOBJECT_QUERY@ GREP = @GREP@ GTK_CFLAGS = @GTK_CFLAGS@ GTK_LIBS = @GTK_LIBS@ H5CC = @H5CC@ H5FC = @H5FC@ HAVEMYSQLCONFIG = @HAVEMYSQLCONFIG@ HAVE_AIX = @HAVE_AIX@ HAVE_MAN2HTML = @HAVE_MAN2HTML@ HAVE_NRT = @HAVE_NRT@ HAVE_OPENSSL = @HAVE_OPENSSL@ HAVE_SOME_CURSES = @HAVE_SOME_CURSES@ HDF5_CC = @HDF5_CC@ HDF5_CFLAGS = @HDF5_CFLAGS@ HDF5_CPPFLAGS = @HDF5_CPPFLAGS@ HDF5_FC = @HDF5_FC@ HDF5_FFLAGS = @HDF5_FFLAGS@ HDF5_FLIBS = @HDF5_FLIBS@ HDF5_LDFLAGS = @HDF5_LDFLAGS@ HDF5_LIBS = @HDF5_LIBS@ HDF5_VERSION = @HDF5_VERSION@ HWLOC_CPPFLAGS = @HWLOC_CPPFLAGS@ HWLOC_LDFLAGS = @HWLOC_LDFLAGS@ HWLOC_LIBS = @HWLOC_LIBS@ INSTALL = @INSTALL@ INSTALL_DATA = @INSTALL_DATA@ INSTALL_PROGRAM = @INSTALL_PROGRAM@ INSTALL_SCRIPT = @INSTALL_SCRIPT@ INSTALL_STRIP_PROGRAM = @INSTALL_STRIP_PROGRAM@ JSON_CPPFLAGS = @JSON_CPPFLAGS@ JSON_LDFLAGS = @JSON_LDFLAGS@ LD = @LD@ LDFLAGS = @LDFLAGS@ LIBCURL = @LIBCURL@ LIBCURL_CPPFLAGS = @LIBCURL_CPPFLAGS@ LIBOBJS = @LIBOBJS@ LIBS = @LIBS@ LIBTOOL = @LIBTOOL@ LIB_LDFLAGS = @LIB_LDFLAGS@ LIPO = @LIPO@ LN_S = @LN_S@ LTLIBOBJS = @LTLIBOBJS@ MAINT = @MAINT@ MAKEINFO = @MAKEINFO@ MANIFEST_TOOL = @MANIFEST_TOOL@ MKDIR_P = @MKDIR_P@ MUNGE_CPPFLAGS = @MUNGE_CPPFLAGS@ MUNGE_DIR = @MUNGE_DIR@ MUNGE_LDFLAGS = @MUNGE_LDFLAGS@ MUNGE_LIBS = @MUNGE_LIBS@ MYSQL_CFLAGS = @MYSQL_CFLAGS@ MYSQL_LIBS = @MYSQL_LIBS@ NCURSES = @NCURSES@ NETLOC_CPPFLAGS = @NETLOC_CPPFLAGS@ NETLOC_LDFLAGS = @NETLOC_LDFLAGS@ NETLOC_LIBS = @NETLOC_LIBS@ NM = @NM@ NMEDIT = @NMEDIT@ NRT_CPPFLAGS = @NRT_CPPFLAGS@ NUMA_LIBS = @NUMA_LIBS@ OBJDUMP = @OBJDUMP@ OBJEXT = @OBJEXT@ OFED_CPPFLAGS = @OFED_CPPFLAGS@ OFED_LDFLAGS = @OFED_LDFLAGS@ OFED_LIBS = @OFED_LIBS@ OTOOL = @OTOOL@ OTOOL64 = @OTOOL64@ PACKAGE = @PACKAGE@ PACKAGE_BUGREPORT = @PACKAGE_BUGREPORT@ PACKAGE_NAME = @PACKAGE_NAME@ PACKAGE_STRING = @PACKAGE_STRING@ PACKAGE_TARNAME = @PACKAGE_TARNAME@ PACKAGE_URL = @PACKAGE_URL@ PACKAGE_VERSION 
= @PACKAGE_VERSION@ PAM_DIR = @PAM_DIR@ PAM_LIBS = @PAM_LIBS@ PATH_SEPARATOR = @PATH_SEPARATOR@ PKG_CONFIG = @PKG_CONFIG@ PKG_CONFIG_LIBDIR = @PKG_CONFIG_LIBDIR@ PKG_CONFIG_PATH = @PKG_CONFIG_PATH@ PROCTRACKDIR = @PROCTRACKDIR@ PROJECT = @PROJECT@ PTHREAD_CC = @PTHREAD_CC@ PTHREAD_CFLAGS = @PTHREAD_CFLAGS@ PTHREAD_LIBS = @PTHREAD_LIBS@ RANLIB = @RANLIB@ READLINE_LIBS = @READLINE_LIBS@ REAL_BGQ_LOADED = @REAL_BGQ_LOADED@ REAL_BG_L_P_LOADED = @REAL_BG_L_P_LOADED@ RELEASE = @RELEASE@ RRDTOOL_CPPFLAGS = @RRDTOOL_CPPFLAGS@ RRDTOOL_LDFLAGS = @RRDTOOL_LDFLAGS@ RRDTOOL_LIBS = @RRDTOOL_LIBS@ RUNJOB_LDFLAGS = @RUNJOB_LDFLAGS@ SED = @SED@ SEMAPHORE_LIBS = @SEMAPHORE_LIBS@ SEMAPHORE_SOURCES = @SEMAPHORE_SOURCES@ SET_MAKE = @SET_MAKE@ SHELL = @SHELL@ SLEEP_CMD = @SLEEP_CMD@ SLURMCTLD_PORT = @SLURMCTLD_PORT@ SLURMCTLD_PORT_COUNT = @SLURMCTLD_PORT_COUNT@ SLURMDBD_PORT = @SLURMDBD_PORT@ SLURMD_PORT = @SLURMD_PORT@ SLURM_API_AGE = @SLURM_API_AGE@ SLURM_API_CURRENT = @SLURM_API_CURRENT@ SLURM_API_MAJOR = @SLURM_API_MAJOR@ SLURM_API_REVISION = @SLURM_API_REVISION@ SLURM_API_VERSION = @SLURM_API_VERSION@ SLURM_MAJOR = @SLURM_MAJOR@ SLURM_MICRO = @SLURM_MICRO@ SLURM_MINOR = @SLURM_MINOR@ SLURM_PREFIX = @SLURM_PREFIX@ SLURM_VERSION_NUMBER = @SLURM_VERSION_NUMBER@ SLURM_VERSION_STRING = @SLURM_VERSION_STRING@ SO_LDFLAGS = @SO_LDFLAGS@ SSL_CPPFLAGS = @SSL_CPPFLAGS@ SSL_LDFLAGS = @SSL_LDFLAGS@ SSL_LIBS = @SSL_LIBS@ STRIP = @STRIP@ SUCMD = @SUCMD@ UTIL_LIBS = @UTIL_LIBS@ VERSION = @VERSION@ _libcurl_config = @_libcurl_config@ abs_builddir = @abs_builddir@ abs_srcdir = @abs_srcdir@ abs_top_builddir = @abs_top_builddir@ abs_top_srcdir = @abs_top_srcdir@ ac_ct_AR = @ac_ct_AR@ ac_ct_CC = @ac_ct_CC@ ac_ct_CXX = @ac_ct_CXX@ ac_ct_DUMPBIN = @ac_ct_DUMPBIN@ ac_have_man2html = @ac_have_man2html@ am__include = @am__include@ am__leading_dot = @am__leading_dot@ am__quote = @am__quote@ am__tar = @am__tar@ am__untar = @am__untar@ ax_pthread_config = @ax_pthread_config@ bindir = @bindir@ build = @build@ 
build_alias = @build_alias@ build_cpu = @build_cpu@ build_os = @build_os@ build_vendor = @build_vendor@ builddir = @builddir@ datadir = @datadir@ datarootdir = @datarootdir@ docdir = @docdir@ dvidir = @dvidir@ exec_prefix = @exec_prefix@ host = @host@ host_alias = @host_alias@ host_cpu = @host_cpu@ host_os = @host_os@ host_vendor = @host_vendor@ htmldir = ${datadir}/doc/${PACKAGE}-${SLURM_VERSION_STRING}/html includedir = @includedir@ infodir = @infodir@ install_sh = @install_sh@ libdir = @libdir@ libexecdir = @libexecdir@ localedir = @localedir@ localstatedir = @localstatedir@ lua_CFLAGS = @lua_CFLAGS@ lua_LIBS = @lua_LIBS@ mandir = @mandir@ mkdir_p = @mkdir_p@ oldincludedir = @oldincludedir@ pdfdir = @pdfdir@ prefix = @prefix@ program_transform_name = @program_transform_name@ psdir = @psdir@ sbindir = @sbindir@ sharedstatedir = @sharedstatedir@ srcdir = @srcdir@ sysconfdir = @sysconfdir@ target = @target@ target_alias = @target_alias@ target_cpu = @target_cpu@ target_os = @target_os@ target_vendor = @target_vendor@ top_build_prefix = @top_build_prefix@ top_builddir = @top_builddir@ top_srcdir = @top_srcdir@ man1_MANS = \ sacct.1 \ sacctmgr.1 \ salloc.1 \ sattach.1 \ sbatch.1 \ sbcast.1 \ scancel.1 \ scontrol.1 \ sdiag.1 \ sinfo.1 \ slurm.1 \ smap.1 \ sprio.1 \ sh5util.1 \ squeue.1 \ sreport.1 \ srun.1 \ srun_cr.1 \ sshare.1 \ sstat.1 \ strigger.1 \ sview.1 EXTRA_DIST = $(man1_MANS) $(am__append_1) @HAVE_MAN2HTML_TRUE@html_DATA = \ @HAVE_MAN2HTML_TRUE@ sacct.html \ @HAVE_MAN2HTML_TRUE@ sacctmgr.html \ @HAVE_MAN2HTML_TRUE@ salloc.html \ @HAVE_MAN2HTML_TRUE@ sattach.html \ @HAVE_MAN2HTML_TRUE@ sbatch.html \ @HAVE_MAN2HTML_TRUE@ sbcast.html \ @HAVE_MAN2HTML_TRUE@ scancel.html \ @HAVE_MAN2HTML_TRUE@ scontrol.html \ @HAVE_MAN2HTML_TRUE@ sdiag.html \ @HAVE_MAN2HTML_TRUE@ sinfo.html \ @HAVE_MAN2HTML_TRUE@ smap.html \ @HAVE_MAN2HTML_TRUE@ sprio.html \ @HAVE_MAN2HTML_TRUE@ sh5util.html \ @HAVE_MAN2HTML_TRUE@ squeue.html \ @HAVE_MAN2HTML_TRUE@ sreport.html \ 
@HAVE_MAN2HTML_TRUE@ srun.html \ @HAVE_MAN2HTML_TRUE@ srun_cr.html \ @HAVE_MAN2HTML_TRUE@ sshare.html \ @HAVE_MAN2HTML_TRUE@ sstat.html \ @HAVE_MAN2HTML_TRUE@ strigger.html \ @HAVE_MAN2HTML_TRUE@ sview.html @HAVE_MAN2HTML_TRUE@MOSTLYCLEANFILES = ${html_DATA} @HAVE_MAN2HTML_TRUE@SUFFIXES = .html all: all-am .SUFFIXES: .SUFFIXES: .html .1 $(srcdir)/Makefile.in: @MAINTAINER_MODE_TRUE@ $(srcdir)/Makefile.am $(am__configure_deps) @for dep in $?; do \ case '$(am__configure_deps)' in \ *$$dep*) \ ( cd $(top_builddir) && $(MAKE) $(AM_MAKEFLAGS) am--refresh ) \ && { if test -f $@; then exit 0; else break; fi; }; \ exit 1;; \ esac; \ done; \ echo ' cd $(top_srcdir) && $(AUTOMAKE) --gnu doc/man/man1/Makefile'; \ $(am__cd) $(top_srcdir) && \ $(AUTOMAKE) --gnu doc/man/man1/Makefile .PRECIOUS: Makefile Makefile: $(srcdir)/Makefile.in $(top_builddir)/config.status @case '$?' in \ *config.status*) \ cd $(top_builddir) && $(MAKE) $(AM_MAKEFLAGS) am--refresh;; \ *) \ echo ' cd $(top_builddir) && $(SHELL) ./config.status $(subdir)/$@ $(am__depfiles_maybe)'; \ cd $(top_builddir) && $(SHELL) ./config.status $(subdir)/$@ $(am__depfiles_maybe);; \ esac; $(top_builddir)/config.status: $(top_srcdir)/configure $(CONFIG_STATUS_DEPENDENCIES) cd $(top_builddir) && $(MAKE) $(AM_MAKEFLAGS) am--refresh $(top_srcdir)/configure: @MAINTAINER_MODE_TRUE@ $(am__configure_deps) cd $(top_builddir) && $(MAKE) $(AM_MAKEFLAGS) am--refresh $(ACLOCAL_M4): @MAINTAINER_MODE_TRUE@ $(am__aclocal_m4_deps) cd $(top_builddir) && $(MAKE) $(AM_MAKEFLAGS) am--refresh $(am__aclocal_m4_deps): mostlyclean-libtool: -rm -f *.lo clean-libtool: -rm -rf .libs _libs install-man1: $(man1_MANS) @$(NORMAL_INSTALL) @list1='$(man1_MANS)'; \ list2=''; \ test -n "$(man1dir)" \ && test -n "`echo $$list1$$list2`" \ || exit 0; \ echo " $(MKDIR_P) '$(DESTDIR)$(man1dir)'"; \ $(MKDIR_P) "$(DESTDIR)$(man1dir)" || exit 1; \ { for i in $$list1; do echo "$$i"; done; \ if test -n "$$list2"; then \ for i in $$list2; do echo "$$i"; done \ | sed -n 
'/\.1[a-z]*$$/p'; \ fi; \ } | while read p; do \ if test -f $$p; then d=; else d="$(srcdir)/"; fi; \ echo "$$d$$p"; echo "$$p"; \ done | \ sed -e 'n;s,.*/,,;p;h;s,.*\.,,;s,^[^1][0-9a-z]*$$,1,;x' \ -e 's,\.[0-9a-z]*$$,,;$(transform);G;s,\n,.,' | \ sed 'N;N;s,\n, ,g' | { \ list=; while read file base inst; do \ if test "$$base" = "$$inst"; then list="$$list $$file"; else \ echo " $(INSTALL_DATA) '$$file' '$(DESTDIR)$(man1dir)/$$inst'"; \ $(INSTALL_DATA) "$$file" "$(DESTDIR)$(man1dir)/$$inst" || exit $$?; \ fi; \ done; \ for i in $$list; do echo "$$i"; done | $(am__base_list) | \ while read files; do \ test -z "$$files" || { \ echo " $(INSTALL_DATA) $$files '$(DESTDIR)$(man1dir)'"; \ $(INSTALL_DATA) $$files "$(DESTDIR)$(man1dir)" || exit $$?; }; \ done; } uninstall-man1: @$(NORMAL_UNINSTALL) @list='$(man1_MANS)'; test -n "$(man1dir)" || exit 0; \ files=`{ for i in $$list; do echo "$$i"; done; \ } | sed -e 's,.*/,,;h;s,.*\.,,;s,^[^1][0-9a-z]*$$,1,;x' \ -e 's,\.[0-9a-z]*$$,,;$(transform);G;s,\n,.,'`; \ dir='$(DESTDIR)$(man1dir)'; $(am__uninstall_files_from_dir) install-htmlDATA: $(html_DATA) @$(NORMAL_INSTALL) @list='$(html_DATA)'; test -n "$(htmldir)" || list=; \ if test -n "$$list"; then \ echo " $(MKDIR_P) '$(DESTDIR)$(htmldir)'"; \ $(MKDIR_P) "$(DESTDIR)$(htmldir)" || exit 1; \ fi; \ for p in $$list; do \ if test -f "$$p"; then d=; else d="$(srcdir)/"; fi; \ echo "$$d$$p"; \ done | $(am__base_list) | \ while read files; do \ echo " $(INSTALL_DATA) $$files '$(DESTDIR)$(htmldir)'"; \ $(INSTALL_DATA) $$files "$(DESTDIR)$(htmldir)" || exit $$?; \ done uninstall-htmlDATA: @$(NORMAL_UNINSTALL) @list='$(html_DATA)'; test -n "$(htmldir)" || list=; \ files=`for p in $$list; do echo $$p; done | sed -e 's|^.*/||'`; \ dir='$(DESTDIR)$(htmldir)'; $(am__uninstall_files_from_dir) tags TAGS: ctags CTAGS: cscope cscopelist: distdir: $(DISTFILES) @srcdirstrip=`echo "$(srcdir)" | sed 's/[].[^$$\\*]/\\\\&/g'`; \ topsrcdirstrip=`echo "$(top_srcdir)" | sed 's/[].[^$$\\*]/\\\\&/g'`; \ 
list='$(DISTFILES)'; \ dist_files=`for file in $$list; do echo $$file; done | \ sed -e "s|^$$srcdirstrip/||;t" \ -e "s|^$$topsrcdirstrip/|$(top_builddir)/|;t"`; \ case $$dist_files in \ */*) $(MKDIR_P) `echo "$$dist_files" | \ sed '/\//!d;s|^|$(distdir)/|;s,/[^/]*$$,,' | \ sort -u` ;; \ esac; \ for file in $$dist_files; do \ if test -f $$file || test -d $$file; then d=.; else d=$(srcdir); fi; \ if test -d $$d/$$file; then \ dir=`echo "/$$file" | sed -e 's,/[^/]*$$,,'`; \ if test -d "$(distdir)/$$file"; then \ find "$(distdir)/$$file" -type d ! -perm -700 -exec chmod u+rwx {} \;; \ fi; \ if test -d $(srcdir)/$$file && test $$d != $(srcdir); then \ cp -fpR $(srcdir)/$$file "$(distdir)$$dir" || exit 1; \ find "$(distdir)/$$file" -type d ! -perm -700 -exec chmod u+rwx {} \;; \ fi; \ cp -fpR $$d/$$file "$(distdir)$$dir" || exit 1; \ else \ test -f "$(distdir)/$$file" \ || cp -p $$d/$$file "$(distdir)/$$file" \ || exit 1; \ fi; \ done check-am: all-am check: check-am all-am: Makefile $(MANS) $(DATA) installdirs: for dir in "$(DESTDIR)$(man1dir)" "$(DESTDIR)$(htmldir)"; do \ test -z "$$dir" || $(MKDIR_P) "$$dir"; \ done install: install-am install-exec: install-exec-am install-data: install-data-am uninstall: uninstall-am install-am: all-am @$(MAKE) $(AM_MAKEFLAGS) install-exec-am install-data-am installcheck: installcheck-am install-strip: if test -z '$(STRIP)'; then \ $(MAKE) $(AM_MAKEFLAGS) INSTALL_PROGRAM="$(INSTALL_STRIP_PROGRAM)" \ install_sh_PROGRAM="$(INSTALL_STRIP_PROGRAM)" INSTALL_STRIP_FLAG=-s \ install; \ else \ $(MAKE) $(AM_MAKEFLAGS) INSTALL_PROGRAM="$(INSTALL_STRIP_PROGRAM)" \ install_sh_PROGRAM="$(INSTALL_STRIP_PROGRAM)" INSTALL_STRIP_FLAG=-s \ "INSTALL_PROGRAM_ENV=STRIPPROG='$(STRIP)'" install; \ fi mostlyclean-generic: -test -z "$(MOSTLYCLEANFILES)" || rm -f $(MOSTLYCLEANFILES) clean-generic: distclean-generic: -test -z "$(CONFIG_CLEAN_FILES)" || rm -f $(CONFIG_CLEAN_FILES) -test . 
= "$(srcdir)" || test -z "$(CONFIG_CLEAN_VPATH_FILES)" || rm -f $(CONFIG_CLEAN_VPATH_FILES) maintainer-clean-generic: @echo "This command is intended for maintainers to use" @echo "it deletes files that may require special tools to rebuild." clean: clean-am clean-am: clean-generic clean-libtool mostlyclean-am distclean: distclean-am -rm -f Makefile distclean-am: clean-am distclean-generic dvi: dvi-am dvi-am: html: html-am html-am: info: info-am info-am: install-data-am: install-htmlDATA install-man install-dvi: install-dvi-am install-dvi-am: install-exec-am: install-html: install-html-am install-html-am: install-info: install-info-am install-info-am: install-man: install-man1 install-pdf: install-pdf-am install-pdf-am: install-ps: install-ps-am install-ps-am: installcheck-am: maintainer-clean: maintainer-clean-am -rm -f Makefile maintainer-clean-am: distclean-am maintainer-clean-generic mostlyclean: mostlyclean-am mostlyclean-am: mostlyclean-generic mostlyclean-libtool pdf: pdf-am pdf-am: ps: ps-am ps-am: uninstall-am: uninstall-htmlDATA uninstall-man uninstall-man: uninstall-man1 .MAKE: install-am install-strip .PHONY: all all-am check check-am clean clean-generic clean-libtool \ cscopelist-am ctags-am distclean distclean-generic \ distclean-libtool distdir dvi dvi-am html html-am info info-am \ install install-am install-data install-data-am install-dvi \ install-dvi-am install-exec install-exec-am install-html \ install-html-am install-htmlDATA install-info install-info-am \ install-man install-man1 install-pdf install-pdf-am install-ps \ install-ps-am install-strip installcheck installcheck-am \ installdirs maintainer-clean maintainer-clean-generic \ mostlyclean mostlyclean-generic mostlyclean-libtool pdf pdf-am \ ps ps-am tags-am uninstall uninstall-am uninstall-htmlDATA \ uninstall-man uninstall-man1 @HAVE_MAN2HTML_TRUE@.1.html: @HAVE_MAN2HTML_TRUE@ `dirname $<`/../man2html.py @SLURM_MAJOR@.@SLURM_MINOR@ $(srcdir)/../../html/header.txt 
$(srcdir)/../../html/footer.txt $< # Tell versions [3.59,3.63) of GNU make to not export all variables. # Otherwise a system limit (for SysV at least) may be exceeded. .NOEXPORT: slurm-slurm-15-08-7-1/doc/man/man1/sacct.1000066400000000000000000000742701265000126300176720ustar00rootroot00000000000000.TH sacct "1" "Slurm Commands" "April 2015" "Slurm Commands" .SH "NAME" sacct \- displays accounting data for all jobs and job steps in the Slurm job accounting log or Slurm database .SH "SYNOPSIS" \fBsacct\fR [\fIOPTIONS\fR...] .SH "DESCRIPTION" .PP Accounting information for jobs invoked with Slurm are either logged in the job accounting log file or saved to the Slurm database. .PP The \f3sacct\fP command displays job accounting data stored in the job accounting log file or Slurm database in a variety of forms for your analysis. The \f3sacct\fP command displays information on jobs, job steps, status, and exitcodes by default. You can tailor the output with the use of the \f3\-\-format=\fP option to specify the fields to be shown. .PP For the root user, the \f3sacct\fP command displays job accounting data for all users, although there are options to filter the output to report only the jobs from a specified user or group. .PP For the non\-root user, the \f3sacct\fP command limits the display of job accounting data to jobs that were launched with their own user identifier (UID) by default. Data for other users can be displayed with the \f3\-\-allusers\fP, \f3\-\-user\fP, or \f3\-\-uid\fP options. .TP "7" \f3Note: \fP\c If designated, the slurmdbd.conf option PrivateData may further restrict the accounting data visible to users which are not SlurmUser, root, or a user with AdminLevel=Admin. See the slurmdbd.conf man page for additional details on restricting access to accounting data. 
.TP \f3Note: \fP\c If the AccountingStorageType is set to "accounting_storage/filetxt", space characters embedded within account names, job names, and step names will be replaced by underscores. If account names with embedded spaces are needed, it is recommended that a database type of accounting storage be configured. .TP \f3Note: \fP\c The contents of Slurm's database are maintained in lower case. This may result in some \f3sacct\fP output differing from that of other Slurm commands. .TP \f3Note: \fP\c Much of the data reported by \f3sacct\fP has been generated by the \f2wait3()\fP and \f2getrusage()\fP system calls. Some systems gather and report incomplete information for these calls; \f3sacct\fP reports values of 0 for this missing data. See your system's \f2getrusage (3)\fP man page for information about which data are actually available on your system. .IP Elapsed time fields are presented as [days-]hours:minutes:seconds[.microseconds]. Only 'CPU' fields will ever have microseconds. .IP The default input file is the file named in the \f3AccountingStorageLoc\fP parameter in slurm.conf. .SH "OPTIONS" .TP "10" \f3\-a\fP\f3,\fP \f3\-\-allusers\fP Displays all users' jobs when run by user root or if \fBPrivateData\fP is not configured to \fBjobs\fP. Otherwise, displays the current user's jobs. .IP .TP \f3\-A \fP\f2account_list\fP\f3,\fP \f3\-\-accounts\fP\f3=\fP\f2account_list\fP Displays jobs when a comma\-separated list of accounts is given as the argument. .IP .TP \f3\-b\fP\f3,\fP \f3\-\-brief\fP Displays a brief listing, which includes the following data: .RS .TP "3" \f3jobid\fP .TP "3" \f3status\fP .TP "3" \f3exitcode\fP .RE .IP .TP \f3\-c\fP\f3,\fP \f3\-\-completion\fP Use job completion instead of job accounting. The \f3JobCompType\fP parameter in the slurm.conf file must be set to a value other than "none". .IP .TP \f3\-\-delimiter\fP\f3=\fP\f2characters\fP ASCII characters used to separate the fields when specifying the \f3\-p\fP or \f3\-P\fP options.
The default delimiter is a '|'. This option is ignored if the \f3\-p\fP or \f3\-P\fP options are not specified. .TP \f3\-D\fP\f3,\fP \f3\-\-duplicates\fP If Slurm job ids are reset, some job numbers will probably appear more than once in the accounting log file but refer to different jobs. Such jobs can be distinguished by the "submit" time stamp in the data records. .IP When data for specific jobs are requested with the \-\-jobs option, \f3sacct\fP returns the most recent job with that number. This behavior can be overridden by specifying \-\-duplicates, in which case all records that match the selection criteria will be returned. .TP \f3\-e\fP\f3,\fP \f3\-\-helpformat\fP .IP Print a list of fields that can be specified with the \f3\-\-format\fP option. .IP .RS .PP .nf .ft 3 Fields available: AllocCPUS AllocGRES AllocNodes AllocTRES Account AssocID AveCPU AveCPUFreq AveDiskRead AveDiskWrite AvePages AveRSS AveVMSize BlockID Cluster Comment ConsumedEnergy ConsumedEnergyRaw CPUTime CPUTimeRAW DerivedExitCode Elapsed Eligible End ExitCode GID Group JobID JobIDRaw JobName Layout MaxDiskRead MaxDiskReadNode MaxDiskReadTask MaxDiskWrite MaxDiskWriteNode MaxDiskWriteTask MaxPages MaxPagesNode MaxPagesTask MaxRSS MaxRSSNode MaxRSSTask MaxVMSize MaxVMSizeNode MaxVMSizeTask MinCPU MinCPUNode MinCPUTask NCPUS NNodes NodeList NTasks Priority Partition QOS QOSRAW ReqCPUFreq ReqCPUFreqMin ReqCPUFreqMax ReqCPUFreqGov ReqCPUS ReqGRES ReqMem ReqNodes ReqTRES Reservation ReservationId Reserved ResvCPU ResvCPURAW Start State Submit Suspended SystemCPU Timelimit TotalCPU UID User UserCPU WCKey WCKeyID .ft 1 .fi .RE .IP The section titled "Job Accounting Fields" describes these fields. .TP \f3\-E \fP\f2end_time\fP\f3,\fP \f3\-\-endtime\fP\f3=\fP\f2end_time\fP .IP Select jobs in any state before the specified time. If states are given with the \-s option, return jobs in those states before this period. Valid time formats are...
.sp HH:MM[:SS] [AM|PM] .br MMDD[YY] or MM/DD[/YY] or MM.DD[.YY] .br MM/DD[/YY]\-HH:MM[:SS] .br YYYY\-MM\-DD[THH:MM[:SS]] .IP .TP \f3\-f \fP\f2file\fP\f3,\fP \f3\-\-file\fP\f3=\fP\f2file\fP Causes the \f3sacct\fP command to read job accounting data from the named \f2file\fP instead of the current Slurm job accounting log file. Only applicable when running the filetxt plugin. .TP \f3\-g \fP\f2gid_list\fP\f3, \-\-gid=\fP\f2gid_list\fP \f3\-\-group=\fP\f2group_list\fP Displays the statistics only for the jobs started with the GID or the GROUP specified by the \f2gid_list\fP or the \f2group_list\fP operand, which is a comma\-separated list. Space characters are not allowed. Default is no restrictions\&. .TP \f3\-h\fP\f3,\fP \f3\-\-help\fP Displays a general help message. .TP \f3\-i\fP\f3,\fP \f3\-\-nnodes\fP\f3=\fP\f2N\fP Return jobs which ran on this many nodes (N = min[\-max]). .TP \f3\-j \fP\f2job(.step)\fP\f3,\fP \f3\-\-jobs\fP\f3=\fP\f2job(.step)\fP Displays information about the specified job(.step) or list of job(.step)s. .IP The \f2job(.step)\fP parameter is a comma\-separated list of jobs. Space characters are not permitted in this list. NOTE: A step id of 'batch' will display the information about the batch step. The batch step information is only available after the batch job is complete, unlike regular steps, which are available when they start. .IP The default is to display information on all jobs. .TP \f3\-k\fP\f3,\fP \f3\-\-timelimit\-min\fP Only send data about jobs with this timelimit. If used with timelimit_max, this will be the minimum timelimit of the range. Default is no restriction. .TP \f3\-K\fP\f3,\fP \f3\-\-timelimit\-max\fP Ignored by itself, but if timelimit_min is set, this will be the maximum timelimit of the range. Default is no restriction.
.TP \f3\-l\fP\f3,\fP \f3\-\-long\fP Equivalent to specifying: .IP .na \-\-format=jobid,jobname,partition,maxvmsize,maxvmsizenode,maxvmsizetask, avevmsize,maxrss,maxrssnode,maxrsstask,averss,maxpages,maxpagesnode, maxpagestask,avepages,mincpu,mincpunode,mincputask,avecpu,ntasks, alloccpus,elapsed,state,exitcode,maxdiskread,maxdiskreadnode,maxdiskreadtask, avediskread,maxdiskwrite,maxdiskwritenode,maxdiskwritetask,avediskwrite, allocgres,reqgres,avecpufreq,reqcpufreqmin,reqcpufreqmax,reqcpufreqgov .ad .TP \f3\-L\fP\f3,\fP \f3\-\-allclusters\fP Display jobs that ran on all clusters. By default, only jobs that ran on the cluster from which \f3sacct\fP is called are displayed. .TP \f3\-M \fP\f2cluster_list\fP\f3, \-\-clusters=\fP\f2cluster_list\fP Displays the statistics only for the jobs started on the clusters specified by the \f2cluster_list\fP operand, which is a comma\-separated list of clusters. Space characters are not allowed in the \f2cluster_list\fP. Use \-1 for all clusters. The default is the current cluster you are executing the \f3sacct\fP command on\&. .TP \f3\-n\fP\f3,\fP \f3\-\-noheader\fP No heading will be added to the output. The default action is to display a header. .IP .TP \f3\-\-noconvert\fP Don't convert units from their original type (e.g. 2048M won't be converted to 2G). .IP .TP \f3\-N \fP\f2node_list\fP\f3, \-\-nodelist=\fP\f2node_list\fP Display jobs that ran on any of these node(s). \f2node_list\fP can be a ranged string. .IP .TP \f3\-\-name=\fP\f2jobname_list\fP Display jobs that have any of these name(s). .IP .TP \f3\-o\fP\f3,\fP \f3\-\-format\fP Comma\-separated list of fields (use "\-\-helpformat" for a list of available fields). NOTE: When using the format option for listing various fields you can put a %NUMBER afterwards to specify how many characters should be printed. e.g. format=name%30 will print 30 characters of field name right justified. A %\-30 will print 30 characters left justified.
When set, the SACCT_FORMAT environment variable will override the default format. For example: SACCT_FORMAT="jobid,user,account,cluster" .TP \f3\-p\fP\f3,\fP \f3\-\-parsable\fP Output will be '|' delimited with a '|' at the end. .TP \f3\-P\fP\f3,\fP \f3\-\-parsable2\fP Output will be '|' delimited without a '|' at the end. .TP \f3\-q\fP\f3,\fP \f3\-\-qos\fP Only send data about jobs using these QOS values. Default is all. .TP \f3\-r\fP\f3,\fP \f3\-\-partition\fP Comma\-separated list of partitions to select jobs and job steps from. The default is all partitions. .TP \f3\-s \fP\f2state_list\fP\f3,\fP \f3\-\-state\fP\f3=\fP\f2state_list\fP Selects jobs based on their state during the time period given. Unless otherwise specified, the start and end time will be the current time when the \f3\-\-state\fP option is specified and only currently running jobs can be displayed. A start and/or end time must be specified to view information about jobs not currently running. The following state designators are valid and multiple state names may be specified using comma separators. Either the short or long form of the state name may be used (e.g. \f3CA\fP or \f3CANCELLED\fP) and the name is case insensitive (e.g. \f3ca\fP and \f3CA\fP both work). .RS .TP "20" \fBBF BOOT_FAIL\fR Job terminated due to launch failure, typically due to a hardware failure (e.g. unable to boot the node or block and the job can not be requeued). .TP \f3CA CANCELLED\fP Job was explicitly cancelled by the user or system administrator. The job may or may not have been initiated. .TP \f3CD COMPLETED\fP Job has terminated all processes on all nodes with an exit code of zero. .TP \f3CF CONFIGURING\fP Job has been allocated resources, but is waiting for them to become ready for use (e.g. booting). .TP \f3CG COMPLETING\fP Job is in the process of completing. Some processes on some nodes may still be active. .TP \f3F FAILED\fP Job terminated with non\-zero exit code or other failure condition.
.TP \f3NF NODE_FAIL\fP Job terminated due to failure of one or more allocated nodes. .TP \f3PD PENDING\fP Job is awaiting resource allocation. Note: for a job to be selected in this state, it must have an "EligibleTime" in the requested time interval or different from "Unknown". The "EligibleTime" is displayed by the "scontrol show job" command. For example, jobs submitted with the "\-\-hold" option will have "EligibleTime=Unknown" as they are pending indefinitely. .TP \fBPR PREEMPTED\fR Job terminated due to preemption. .TP \f3R RUNNING\fP Job currently has an allocation. .TP \f3RS RESIZING\fP Job is about to change size. .TP \f3S SUSPENDED\fP Job has an allocation, but execution has been suspended. .TP \f3TO TIMEOUT\fP Job terminated upon reaching its time limit. .RE .IP The \f2state_list\fP operand is a comma\-separated list of these state designators. Space characters are not allowed in the \f2state_list\fP\&. NOTE: When specifying states, if no start time is given, the default start time is 'now'. .TP \f3\-S\fP\f3,\fP \f3\-\-starttime\fP Select jobs in any state after the specified time. Default is 00:00:00 of the current day, unless '\-s' is set, in which case the default is 'now'. If states are given with the '\-s' option, then only jobs in those states at this time will be returned. Valid time formats are... .sp HH:MM[:SS] [AM|PM] .br MMDD[YY] or MM/DD[/YY] or MM.DD[.YY] .br MM/DD[/YY]\-HH:MM[:SS] .br YYYY\-MM\-DD[THH:MM[:SS]] .TP \f3\-T\fP\f3,\fP \f3\-\-truncate\fP Truncate time. If a job started before \-\-starttime, the start time is truncated to \-\-starttime; likewise for the end time and \-\-endtime. .TP \f3\-u \fP\f2uid_list\fP\f3, \-\-uid=\fP\f2uid_list\fP\f3, \-\-user=\fP\f2user_list\fP Use this comma\-separated list of uids or user names to select jobs to display. By default, the running user's uid is used. .TP \f3\-\-usage\fP Display a command usage summary.
.TP \f3\-v\fP\f3,\fP \f3\-\-verbose\fP Primarily for debugging purposes, report the state of various variables during processing. .TP \f3\-V\fP\f3,\fP \f3\-\-version\fP Print version. .TP \f3\-W \fP\f2wckey_list\fP\f3, \-\-wckeys=\fP\f2wckey_list\fP Displays the statistics only for the jobs started on the wckeys specified by the \f2wckey_list\fP operand, which is a comma\-separated list of wckey names. Space characters are not allowed in the \f2wckey_list\fP. Default is all wckeys\&. .TP \f3\-x \fP\f2assoc_list\fP\f3, \-\-associations=\fP\f2assoc_list\fP Displays the statistics only for the jobs running under the association ids specified by the \f2assoc_list\fP operand, which is a comma\-separated list of association ids. Space characters are not allowed in the \f2assoc_list\fP. Default is all associations\&. .TP \f3\-X\fP\f3,\fP \f3\-\-allocations\fP Only show cumulative statistics for each job, not the intermediate steps. .SS "Job Accounting Fields" The following describes each job accounting field: .RS .TP "10" \f3ALL\fP Print all fields listed below. .TP \f3AllocCPUs\fP Count of allocated CPUs. Equivalent to \f3NCPUS\fP. .TP \f3AllocGRES\fP Names and counts of generic resources allocated. .TP \f3AllocNodes\fP Number of nodes allocated to the job/step. 0 if the job is pending. .TP \f3AllocTres\fP Trackable resources. These are the resources allocated to the job/step after the job started running. For pending jobs this should be blank. For more details see AccountingStorageTRES in slurm.conf. .TP \f3Account\fP Account the job ran under. .TP \f3AssocID\fP Reference to the association of user, account and cluster. .TP \f3AveCPU\fP Average (system + user) CPU time of all tasks in job. .TP \f3AveCPUFreq\fP Average weighted CPU frequency of all tasks in job, in kHz. .TP \f3AveDiskRead\fP Average number of bytes read by all tasks in job. .TP \f3AveDiskWrite\fP Average number of bytes written by all tasks in job.
.TP \f3AvePages\fP Average number of page faults of all tasks in job. .TP \f3AveRSS\fP Average resident set size of all tasks in job. .TP \f3AveVMSize\fP Average Virtual Memory size of all tasks in job. .TP \f3BlockID\fP Block ID, applicable to BlueGene computers only. .TP \f3Cluster\fP Cluster name. .TP \f3Comment\fP The job's comment string when the AccountingStoreJobComment parameter in the slurm.conf file is set (or defaults) to YES. The Comment string can be modified by invoking \f3sacctmgr modify job\fP or the specialized \f3sjobexitmod\fP command. .TP \f3ConsumedEnergy\fP Total energy consumed by all tasks in job, in joules. Note: Only in the case of an exclusive job allocation does this value reflect the job's real energy consumption. .TP \f3CPUTime\fP Formatted (Elapsed time * CPU) count used by a job or step. .TP \f3CPUTimeRAW\fP Unlike the above, the unformatted (Elapsed time * CPU) count for a job or step. Units are cpu\-seconds. .TP \f3DerivedExitCode\fP The highest exit code returned by the job's job steps (srun invocations). Following the colon is the signal that caused the process to terminate if it was terminated by a signal. The DerivedExitCode can be modified by invoking \f3sacctmgr modify job\fP or the specialized \f3sjobexitmod\fP command. .TP \f3Elapsed\fP The job's elapsed time. .IP The format of this field's output is as follows: .RS .PD "0" .HP \f2[DD\-[hh:]]mm:ss\fP .PD .RE .IP as defined by the following: .RS .TP "10" \f2DD\fP days .TP \f2hh\fP hours .TP \f2mm\fP minutes .TP \f2ss\fP seconds .RE .TP \f3Eligible\fP When the job became eligible to run. .TP \f3End\fP Termination time of the job. The format of the output is YYYY\-MM\-DDTHH:MM:SS, unless changed through the SLURM_TIME_FORMAT environment variable. .TP \f3ExitCode\fP The exit code returned by the job script or salloc, typically as set by the exit() function. Following the colon is the signal that caused the process to terminate if it was terminated by a signal.
.TP \f3GID\fP The group identifier of the user who ran the job. .TP \f3Group\fP The group name of the user who ran the job. .TP \f3JobID\fP The number of the job or job step. It is in the form: \f2job.jobstep\fP\c \&. .TP \f3JobIDRaw\fP In the case of a job array, print the JobId instead of the ArrayJobId. For non\-job arrays, the output is the JobId in the format \f2job.jobstep\fP\c \&. .TP \f3JobName\fP The name of the job or job step. The \f3slurm_accounting.log\fP file is a space\-delimited file. Because of this, if a space is used in the job name, an underscore is substituted for the space before the record is written to the accounting file. Consequently, when the job name is displayed by \f3sacct\fP, a job name that had a space in it will have an underscore in place of the space. .TP \f3Layout\fP What the layout of a step was when it was running. This can be used to give you an idea of which node ran which rank in your job. .TP \f3MaxDiskRead\fP Maximum number of bytes read by all tasks in job. .TP \f3MaxDiskReadNode\fP The node on which the maxdiskread occurred. .TP \f3MaxDiskReadTask\fP The task ID where the maxdiskread occurred. .TP \f3MaxDiskWrite\fP Maximum number of bytes written by all tasks in job. .TP \f3MaxDiskWriteNode\fP The node on which the maxdiskwrite occurred. .TP \f3MaxDiskWriteTask\fP The task ID where the maxdiskwrite occurred. .TP \f3MaxPages\fP Maximum number of page faults of all tasks in job. .TP \f3MaxPagesNode\fP The node on which the maxpages occurred. .TP \f3MaxPagesTask\fP The task ID where the maxpages occurred. .TP \f3MaxRSS\fP Maximum resident set size of all tasks in job. .TP \f3MaxRSSNode\fP The node on which the maxrss occurred. .TP \f3MaxRSSTask\fP The task ID where the maxrss occurred. .TP \f3MaxVMSize\fP Maximum Virtual Memory size of all tasks in job. .TP \f3MaxVMSizeNode\fP The node on which the maxvmsize occurred. .TP \f3MaxVMSizeTask\fP The task ID where the maxvmsize occurred.
.TP \f3MinCPU\fP Minimum (system + user) CPU time of all tasks in job. .TP \f3MinCPUNode\fP The node on which the mincpu occurred. .TP \f3MinCPUTask\fP The task ID where the mincpu occurred. .TP \f3NCPUS\fP Count of allocated CPUs. Equivalent to \f3AllocCPUS\fP, the total number of CPUs allocated to the job. .TP \f3NodeList\fP List of nodes in job/step. .TP \f3NNodes\fP Number of nodes in a job or step. If the job is running, or ran, this count will be the number allocated; otherwise it will be the number requested. .TP \f3NTasks\fP Total number of tasks in a job or step. .TP \f3Priority\fP Slurm priority. .TP \f3Partition\fP Identifies the partition on which the job ran. .TP \f3QOS\fP Name of Quality of Service. .TP \f3QOSRAW\fP Id of Quality of Service. .TP \f3ReqCPUFreq\fP Requested CPU frequency for the step, in kHz. Note: This value applies only to a job step. No value is reported for the job. .TP \f3ReqCPUS\fP Required CPUs. .TP \f3ReqGRES\fP Names and counts of generic resources requested. .TP \f3ReqMem\fP Minimum required memory for the job, in MB. A 'c' at the end of the number represents Memory Per CPU; an 'n' represents Memory Per Node. Note: This value is only from the job allocation, not the step. .TP \f3ReqNodes\fP Requested minimum Node count for the job/step. .TP \f3ReqTres\fP Trackable resources. These are the minimum resource counts requested by the job/step at submission time. For more details see AccountingStorageTRES in slurm.conf. .TP \f3Reservation\fP Reservation Name. .TP \f3ReservationId\fP Reservation Id. .TP \f3Reserved\fP How much wall clock time was used as reserved time for this job. This is derived from how long a job was waiting from its eligible time to when it actually started. .TP \f3ResvCPU\fP Formatted time for how long (in cpu secs) a job was reserved. .TP \f3ResvCPURAW\fP Reserved CPU time in seconds, not formatted. .TP \f3Start\fP Initiation time of the job in the same format as \f3End\fP.
.TP \f3State\fP Displays the job status, or state. Output can be RUNNING, RESIZING, SUSPENDED, COMPLETED, CANCELLED, FAILED, TIMEOUT, PREEMPTED, BOOT_FAIL or NODE_FAIL. If more information is available on the job state than will fit into the current field width (for example, the uid that CANCELLED a job) the state will be followed by a "+". You can increase the size of the displayed state using the "%NUMBER" format modifier described earlier. NOTE: The RUNNING state will return suspended jobs as well. In order to print suspended jobs you must request SUSPENDED at a different call from RUNNING. .TP \f3Submit\fP The time and date stamp (in Universal Time Coordinated, UTC) the job was submitted. The format of the output is identical to that of the \f3End\fP field. NOTE: If a job is requeued, the submit time is reset. To obtain the original submit time it is necessary to use the \-D or \-\-duplicate option to display all duplicate entries for a job. .TP \f3Suspended\fP How long the job was suspended for. .TP \f3SystemCPU\fP The amount of system CPU time used by the job or job step. The format of the output is identical to that of the \f3Elapsed\fP field. NOTE: SystemCPU provides a measure of the task's parent process and does not include CPU time of child processes. .TP \f3Timelimit\fP What the timelimit was/is for the job. .TP \f3TotalCPU\fP The sum of the SystemCPU and UserCPU time used by the job or job step. The total CPU time of the job may exceed the job's elapsed time for jobs that include multiple job steps. The format of the output is identical to that of the \f3Elapsed\fP field. NOTE: TotalCPU provides a measure of the task's parent process and does not include CPU time of child processes. .TP \f3UID\fP The user identifier of the user who ran the job. .TP \f3User\fP The user name of the user who ran the job. .TP \f3UserCPU\fP The amount of user CPU time used by the job or job step. The format of the output is identical to that of the \f3Elapsed\fP field. 
NOTE: UserCPU provides a measure of the task's parent process and does not include CPU time of child processes. .TP \f3WCKey\fP Workload Characterization Key. Arbitrary string for grouping orthogonal accounts together. .TP \f3WCKeyID\fP Reference to the wckey. .SH "ENVIRONMENT VARIABLES" .PP Some \fBsacct\fR options may be set via environment variables. These environment variables, along with their corresponding options, are listed below. (Note: Command\-line options will always override these settings.) .TP 20 \fBSLURM_CONF\fR The location of the Slurm configuration file. .TP \fBSLURM_TIME_FORMAT\fR Specify the format used to report time stamps. A value of \fIstandard\fR, the default value, generates output in the form "year\-month\-dateThour:minute:second". A value of \fIrelative\fR returns only "hour:minute:second" for the current day. For other dates in the current year it prints the "hour:minute" preceded by "Tomorr" (tomorrow), "Ystday" (yesterday), the name of the day for the coming week (e.g. "Mon", "Tue", etc.), otherwise the date (e.g. "25 Apr"). For other years it returns a date, month, and year without a time (e.g. "6 Jun 2012"). All of the time stamps use a 24 hour format. A valid strftime() format can also be specified. For example, a value of "%a %T" will report the day of the week and a time stamp (e.g. "Mon 12:34:56"). .SH "EXAMPLES" This example illustrates the default invocation of the \f3sacct\fP command: .RS .PP .nf .ft 3 # sacct Jobid Jobname Partition Account AllocCPUS State ExitCode \-\-\-\-\-\-\-\-\-\- \-\-\-\-\-\-\-\-\-\- \-\-\-\-\-\-\-\-\-\- \-\-\-\-\-\-\-\-\-\- \-\-\-\-\-\-\-\-\-\- \-\-\-\-\-\-\-\-\-\- \-\-\-\-\-\-\-\- 2 script01 srun acct1 1 RUNNING 0 3 script02 srun acct1 1 RUNNING 0 4 endscript srun acct1 1 RUNNING 0 4.0 srun acct1 1 COMPLETED 0 .ft 1 .fi .RE .PP This example shows the same job accounting information with the \f3brief\fP option.
.RS .PP .nf .ft 3 # sacct \-\-brief Jobid State ExitCode \-\-\-\-\-\-\-\-\-\- \-\-\-\-\-\-\-\-\-\- \-\-\-\-\-\-\-\- 2 RUNNING 0 3 RUNNING 0 4 RUNNING 0 4.0 COMPLETED 0 .ft 1 .fi .RE .PP .RS .PP .nf .ft 3 # sacct \-\-allocations Jobid Jobname Partition Account AllocCPUS State ExitCode \-\-\-\-\-\-\-\-\-\- \-\-\-\-\-\-\-\-\-\- \-\-\-\-\-\-\-\-\-\- \-\-\-\-\-\-\-\-\-\- \-\-\-\-\-\-\- \-\-\-\-\-\-\-\-\-\- \-\-\-\-\-\-\-\- 3 sja_init andy acct1 1 COMPLETED 0 4 sjaload andy acct1 2 COMPLETED 0 5 sja_scr1 andy acct1 1 COMPLETED 0 6 sja_scr2 andy acct1 18 COMPLETED 2 7 sja_scr3 andy acct1 18 COMPLETED 0 8 sja_scr5 andy acct1 2 COMPLETED 0 9 sja_scr7 andy acct1 90 COMPLETED 1 10 endscript andy acct1 186 COMPLETED 0 .ft 1 .fi .RE .PP This example demonstrates the ability to customize the output of the \f3sacct\fP command. The fields are displayed in the order designated on the command line. .RS .PP .nf .ft 3 # sacct \-\-format=jobid,elapsed,ncpus,ntasks,state Jobid Elapsed Ncpus Ntasks State \-\-\-\-\-\-\-\-\-\- \-\-\-\-\-\-\-\-\-\- \-\-\-\-\-\-\-\-\-\- \-\-\-\-\-\-\-\- \-\-\-\-\-\-\-\-\-\- 3 00:01:30 2 1 COMPLETED 3.0 00:01:30 2 1 COMPLETED 4 00:00:00 2 2 COMPLETED 4.0 00:00:01 2 2 COMPLETED 5 00:01:23 2 1 COMPLETED 5.0 00:01:31 2 1 COMPLETED .ft 1 .fi .RE .PP This example demonstrates the use of the \-T (\-\-truncate) option when used with \-S (\-\-starttime) and \-E (\-\-endtime). When the \-T option is used, the start time of the job will be the specified \-S value if the job was started before the specified time, otherwise the time will be the job's start time. The end time will be the specified \-E option if the job ends after the specified time, otherwise it will be the job's end time. NOTE: If no \-s (\-\-state) option is given, sacct will display jobs that ran during the specified time, otherwise it returns jobs that were in the state requested during that period of time. Without \-T (normal operation) sacct output would be like this.
.RS .PP .nf .ft 3 # sacct \-S2014\-07\-03\-11:40 \-E2014\-07\-03\-12:00 \-X \-ojobid,start,end,state JobID Start End State \-\-\-\-\-\-\-\-\- \-\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-\- \-\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-\- \-\-\-\-\-\-\-\-\-\-\-\- 2 2014\-07\-03T11:33:16 2014\-07\-03T11:59:01 COMPLETED 3 2014\-07\-03T11:35:21 Unknown RUNNING 4 2014\-07\-03T11:35:21 2014\-07\-03T11:45:21 COMPLETED 5 2014\-07\-03T11:41:01 Unknown RUNNING .ft 1 .fi .RE .PP By adding the \-T option the job's start and end times are truncated to reflect only the time requested. If a job started after the start time requested or finished before the end time requested, those times are not altered. The \-T option is useful when determining exact run times during any given period. .RS .PP .nf .ft 3 # sacct \-T \-S2014\-07\-03\-11:40 \-E2014\-07\-03\-12:00 \-X \-ojobid,start,end,state JobID Start End State \-\-\-\-\-\-\-\-\- \-\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-\- \-\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-\- \-\-\-\-\-\-\-\-\-\-\-\- 2 2014\-07\-03T11:40:00 2014\-07\-03T11:59:01 COMPLETED 3 2014\-07\-03T11:40:00 2014\-07\-03T12:00:00 RUNNING 4 2014\-07\-03T11:40:00 2014\-07\-03T11:45:21 COMPLETED 5 2014\-07\-03T11:41:01 2014\-07\-03T12:00:00 RUNNING .ft 1 .fi .RE .SH "COPYING" Copyright (C) 2005\-2007 Hewlett\-Packard Development Company L.P. .br Copyright (C) 2008\-2010 Lawrence Livermore National Security. Produced at Lawrence Livermore National Laboratory (cf, DISCLAIMER). .br Copyright (C) 2010\-2014 SchedMD LLC. .LP This file is part of Slurm, a resource management program. For details, see . .LP Slurm is free software; you can redistribute it and/or modify it under the terms of the GNU General Public License as published by the Free Software Foundation; either version 2 of the License, or (at your option) any later version.
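.PP This example illustrates machine\-readable output using the \-P (\-\-parsable2) option together with \-\-delimiter, both described in the OPTIONS section. Fields are separated by the chosen delimiter with no trailing delimiter, which is convenient for scripts. The jobs shown are illustrative. .RS .PP .nf .ft 3 # sacct \-P \-\-delimiter=';' \-\-format=jobid,state,exitcode JobID;State;ExitCode 2;COMPLETED;0:0 3;RUNNING;0:0 .ft 1 .fi .RE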
.LP Slurm is distributed in the hope that it will be useful, but WITHOUT ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU General Public License for more details. .SH "FILES" .TP "10" \f3/etc/slurm.conf\fP Entries to this file enable job accounting and designate the job accounting log file that collects system job accounting. .TP \f3/var/log/slurm_accounting.log\fP The default job accounting log file. By default, this file is set to read and write permission for root only. .SH "SEE ALSO" \fBsstat\fR(1), \fBps\fR (1), \fBsrun\fR(1), \fBsqueue\fR(1), \fBgetrusage\fR (2), \fBtime\fR (2) slurm-slurm-15-08-7-1/doc/man/man1/sacctmgr.1000066400000000000000000001657251265000126300204060ustar00rootroot00000000000000.TH sacctmgr "1" "Slurm Commands" "April 2015" "Slurm Commands" .SH "NAME" sacctmgr \- Used to view and modify Slurm account information. .SH "SYNOPSIS" \fBsacctmgr\fR [\fIOPTIONS\fR...] [\fICOMMAND\fR...] .SH "DESCRIPTION" \fBsacctmgr\fR is used to view or modify Slurm account information. The account information is maintained within a database with the interface being provided by \fBslurmdbd\fR (Slurm Database daemon). This database can serve as a central storehouse of user and computer information for multiple computers at a single site. Slurm account information is recorded based upon four parameters that form what is referred to as an \fIassociation\fR. These parameters are \fIuser\fR, \fIcluster\fR, \fIpartition\fR, and \fIaccount\fR. \fIuser\fR is the login name. \fIcluster\fR is the name of a Slurm managed cluster as specified by the \fIClusterName\fR parameter in the \fIslurm.conf\fR configuration file. \fIpartition\fR is the name of a Slurm partition on that cluster. \fIaccount\fR is the bank account for a job. The intended mode of operation is to initiate the \fBsacctmgr\fR command, add, delete, modify, and/or list \fIassociation\fR records then commit the changes and exit. 
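.PP For example, a minimal workflow might add an account and a user, then commit the changes; the account and user names shown here are illustrative. .RS .PP .nf .ft 3 # sacctmgr add account science Description="science accounts" # sacctmgr add user brian Account=science # sacctmgr list associations .ft 1 .fi .RE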
.TP "7" \f3Note: \fP\c The contents of Slurm's database are maintained in lower case. This may result in some \f3sacctmgr\fP output differing from that of other Slurm commands. .SH "OPTIONS" .TP \fB\-h\fR, \fB\-\-help\fR Print a help message describing the usage of \fBsacctmgr\fR. This is equivalent to the \fBhelp\fR command. .TP \fB\-i\fR, \fB\-\-immediate\fR Commit changes immediately. .TP \fB\-n\fR, \fB\-\-noheader\fR No header will be added to the beginning of the output. .TP \fB\-p\fR, \fB\-\-parsable\fR Output will be '|' delimited with a '|' at the end. .TP \fB\-P\fR, \fB\-\-parsable2\fR Output will be '|' delimited without a '|' at the end. .TP \fB\-Q\fR, \fB\-\-quiet\fR Print no messages other than error messages. This is equivalent to the \fBquiet\fR command. .TP \fB\-r\fR, \fB\-\-readonly\fR Makes it so the running sacctmgr cannot modify accounting information. The \fBreadonly\fR option is for use within interactive mode. .TP \fB\-s\fR, \fB\-\-associations\fR Use with show or list to display associations with the entity. This is equivalent to the \fBassociations\fR command. .TP \fB\-v\fR, \fB\-\-verbose\fR Enable detailed logging. This is equivalent to the \fBverbose\fR command. .TP \fB\-V\fR, \fB\-\-version\fR Display version number. This is equivalent to the \fBversion\fR command. .SH "COMMANDS" .TP \fBadd\fR <\fIENTITY\fR> <\fISPECS\fR> Add an entity. Identical to the \fBcreate\fR command. .TP \fBassociations\fR Use with show or list to display associations with the entity. .TP \fBcreate\fR <\fIENTITY\fR> <\fISPECS\fR> Add an entity. Identical to the \fBadd\fR command. .TP \fBdelete\fR <\fIENTITY\fR> where <\fISPECS\fR> Delete the specified entities. .TP \fBdump\fR <\fIENTITY\fR> <\fIFile=FILENAME\fR> Dump cluster data to the specified file. If the filename is not specified, clustername.cfg is used by default. .TP \fBexit\fP Terminate sacctmgr interactive mode. Identical to the \fBquit\fR command.
.TP \fBhelp\fP Display a description of sacctmgr options and commands. .TP \fBlist\fR <\fIENTITY\fR> [<\fISPECS\fR>] Display information about the specified entity. By default, all entries are displayed; you can narrow results by specifying SPECS in your query. Identical to the \fBshow\fR command. .TP \fBload\fR <\fIFILENAME\fR> Load cluster data from the specified file. This is a configuration file generated by running the sacctmgr dump command. This command does not load archive data; see the sacctmgr archive load option instead. .TP \fBmodify\fR <\fIENTITY\fR> \fBwhere\fR <\fISPECS\fR> \fBset\fR <\fISPECS\fR> Modify an entity. .TP \fBproblem\fP Use with show or list to display entity problems. .TP \fBquiet\fP Print no messages other than error messages. .TP \fBquit\fP Terminate the execution of sacctmgr interactive mode. Identical to the \fBexit\fR command. .TP \fBreconfigure\fR Reconfigures the SlurmDBD if running with one. .TP \fBshow\fR <\fIENTITY\fR> [<\fISPECS\fR>] Display information about the specified entity. By default, all entries are displayed; you can narrow results by specifying SPECS in your query. Identical to the \fBlist\fR command. .TP \fBverbose\fP Enable detailed logging. This includes time\-stamps on data structures, record counts, etc. This is an independent command with no options, meant for use in interactive mode. .TP \fBversion\fP Display the version number of sacctmgr. .TP \fB!!\fP Repeat the last command. .SH "ENTITIES" .TP \fIaccount\fP A bank account, typically specified at job submit time using the \fI\-\-account=\fR option. These may be arranged in a hierarchical fashion, for example accounts \fIchemistry\fR and \fIphysics\fR may be children of the account \fIscience\fR. The hierarchy may have an arbitrary depth. .TP \fIassociation\fP The entity used to group information consisting of four parameters: \fIaccount\fR, \fIcluster\fR, \fIpartition (optional)\fR, and \fIuser\fR. Used only with the \fIlist\fR or \fIshow\fR command.
Add, modify, and delete should be done to a user, account or cluster entity. This will in turn update the underlying associations. .TP \fIcluster\fP The \fIClusterName\fR parameter in the \fIslurm.conf\fR configuration file, used to differentiate accounts on different machines. .TP \fIconfiguration\fP Used only with the \fIlist\fR or \fIshow\fR command to report current system configuration. .TP \fIcoordinator\fR A special privileged user, usually an account manager or such, that can add users or sub accounts to the account they are coordinator over. This should be a trusted person since they can change limits on account and user associations inside their realm. .TP \fIevent\fR Events like downed or draining nodes on clusters. .TP \fIjob\fR A job. Only two specific fields of a job record can be modified: the Derived Exit Code and the Comment String. .TP \fIqos\fR Quality of Service. .TP \fIResource\fP Software resources for the system. These are software licenses shared among clusters. .TP \fItransaction\fR List of transactions that have occurred during a given time period. .TP \fIuser\fR The login name. Only lowercase usernames are supported. .TP \fIwckeys\fR Workload Characterization Key. An arbitrary string for grouping orthogonal accounts. .SH "GENERAL SPECIFICATIONS FOR ASSOCIATION BASED ENTITIES" \fBNOTE:\fR The group limits (GrpJobs, GrpTRES, etc.) are tested when a job is being considered for being allocated resources. If starting a job would cause any of its group limits to be exceeded, that job will not be considered for scheduling even if that job might preempt other jobs which would release sufficient group resources for the pending job to be initiated. .TP \fIDefaultQOS\fP= The default QOS this association and its children should have. This is overridden if set directly on a user. To clear a previously set value use the modify command with a new value of \-1. .TP \fIFairshare\fP= Number used in conjunction with other accounts to determine job priority.
Can also be the string \fIparent\fR; when used on a user this means that the parent association is used for fairshare. If Fairshare=parent is set on an account, that account's children will be effectively reparented for fairshare calculations to the first parent of their parent that is not Fairshare=parent. Limits remain the same; only its fairshare value is affected. To clear a previously set value use the modify command with a new value of \-1. .TP \fIGraceTime\fP= Specifies, in units of seconds, the preemption grace time to be extended to a job which has been selected for preemption. The default value is zero; no preemption grace time is allowed on this QOS. .P NOTE: This value is only meaningful for QOS PreemptMode=CANCEL. .TP \fIGrpTRESMins\fP= The total number of TRES minutes that can possibly be used by past, present and future jobs running from this association and its children. To clear a previously set value use the modify command with a new value of \-1. .P NOTE: This limit is not enforced if set on the root association of a cluster. So even though it may appear in sacctmgr output, it will not be enforced. .P ALSO NOTE: This limit only applies when using the Priority Multifactor plugin. The time is decayed using the value of PriorityDecayHalfLife or PriorityUsageResetPeriod as set in the slurm.conf. When this limit is reached, all associated running jobs will be killed and all future jobs submitted with associations in the group will be delayed until they are able to run inside the limit. .TP \fIGrpTRESRunMins\fP= Used to limit the combined total number of TRES minutes used by all jobs running with this association and its children. This takes into consideration the time limits of running jobs and consumes them; if the limit is reached, no new jobs are started until other jobs finish and free up time.
.TP \fIGrpTRES\fP= Maximum number of TRES running jobs are able to be allocated in aggregate for this association and all associations which are children of this association. To clear a previously set value use the modify command with a new value of \-1. .P NOTE: This limit only applies fully when using the Select Consumable Resource plugin. .TP \fIGrpJobs\fP= Maximum number of running jobs in aggregate for this association and all associations which are children of this association. To clear a previously set value use the modify command with a new value of \-1. .TP \fIGrpSubmitJobs\fP= Maximum number of jobs which can be in a pending or running state at any time in aggregate for this association and all associations which are children of this association. To clear a previously set value use the modify command with a new value of \-1. .TP \fIGrpWall\fP= Maximum wall clock time running jobs are able to be allocated in aggregate for this association and all associations which are children of this association. To clear a previously set value use the modify command with a new value of \-1. .P NOTE: This limit is not enforced if set on the root association of a cluster. So even though it may appear in sacctmgr output, it will not be enforced. .P ALSO NOTE: This limit only applies when using the Priority Multifactor plugin. The time is decayed using the value of PriorityDecayHalfLife or PriorityUsageResetPeriod as set in the slurm.conf. When this limit is reached, all associated running jobs will be killed and all future jobs submitted with associations in the group will be delayed until they are able to run inside the limit. .TP \fIMaxTRESMins\fP= Maximum number of TRES minutes each job is able to use in this association. This is overridden if set directly on a user. Default is the cluster's limit. To clear a previously set value use the modify command with a new value of \-1. .TP \fIMaxTRES\fP= Maximum number of TRES each job is able to use in this association.
This is overridden if set directly on a user. Default is the cluster's limit. To clear a previously set value use the modify command with a new value of \-1. .P NOTE: This limit only applies fully when using the Select Consumable Resource plugin. .TP \fIMaxJobs\fP= Maximum number of jobs each user is allowed to run at one time in this association. This is overridden if set directly on a user. Default is the cluster's limit. To clear a previously set value use the modify command with a new value of \-1. .TP \fIMaxSubmitJobs\fP= Maximum number of jobs which this association can have in a pending or running state at any time. Default is the cluster's limit. To clear a previously set value use the modify command with a new value of \-1. .TP \fIMaxWall\fP= Maximum wall clock time each job is able to use in this association. This is overridden if set directly on a user. Default is the cluster's limit. The format of this value is <min> or <min>:<sec> or <hr>:<min>:<sec> or <days>\-<hr>:<min>:<sec> or <days>\-<hr>
. The value is recorded in minutes with rounding as needed. To clear a previously set value use the modify command with a new value of \-1. .P NOTE: Changing this value will have no effect on any running or pending job. .TP \fIQosLevel\fP Specify the default Quality of Service levels that jobs are able to run at for this association. To get a list of valid QOS's use 'sacctmgr list qos'. This value will override its parent's value and push down to its children as the new default. Setting a QosLevel to '' (two single quotes with nothing between them) restores its default setting. You can also use the operators += and \-= to add or remove certain QOS's from a QOS list. Valid values include: .RS .TP 5 \fB=\fR Set \fIQosLevel\fP to the specified value. \fBNote:\fR the QOS that can be used at a given account in the hierarchy are inherited by the children of that account. By assigning QOS with the \fB=\fR sign only the assigned QOS can be used by the account and its children. .TP \fB+=\fR Add the specified value to the current \fIQosLevel\fP. The account will have access to this QOS and the others previously assigned to it. .TP \fB\-=\fR Remove the specified value from the current \fIQosLevel\fP. .RE .P See the \fBEXAMPLES\fR section below. .SH "SPECIFICATIONS FOR ACCOUNTS" .TP \fICluster\fP= Specific cluster to add the account to. Default is all clusters in the system. .TP \fIDescription\fP= An arbitrary string describing an account. .TP \fIName\fP= The name of a bank account. Note the name must be unique and can not represent different bank accounts at different points in the account hierarchy. .TP \fIOrganization\fP= Organization to which the account belongs. .TP \fIParent\fP= Parent account of this account. Default is the root account, a top level account. .TP \fIRawUsage\fP= This allows an administrator to reset the raw usage accrued to an account. The only value currently supported is 0 (zero). This is a settable specification only - it cannot be used as a filter to list accounts.
.TP \fIWithAssoc\fP Display all associations for this account. .TP \fIWithCoord\fP Display all coordinators for this account. .TP \fIWithDeleted\fP Display information with previously deleted data. .P NOTE: If using the WithAssoc option, you can also query against association specific information to view only certain associations this account may have. These extra options can be found in the \fISPECIFICATIONS FOR ASSOCIATIONS\fP section. You can also use the general specifications list above in the \fIGENERAL SPECIFICATIONS FOR ASSOCIATION BASED ENTITIES\fP section. .SH "LIST/SHOW ACCOUNT FORMAT OPTIONS" .TP \fIAccount\fP The name of a bank account. .TP \fIDescription\fP An arbitrary string describing an account. .TP \fIOrganization\fP Organization to which the account belongs. .TP \fICoordinators\fP List of users that are a coordinator of the account. (Only filled in when using the WithCoord option.) .P NOTE: If using the WithAssoc option, you can also view the information about the various associations the account may have on all the clusters in the system. The Association format fields are described in the \fILIST/SHOW ASSOCIATION FORMAT OPTIONS\fP section. .SH "SPECIFICATIONS FOR ASSOCIATIONS" .TP \fIClusters\fP= List the associations of the cluster(s). .TP \fIAccounts\fP= List the associations of the account(s). .TP \fIUsers\fP= List the associations of the user(s). .TP \fIPartition\fP= List the associations of the partition(s). .P NOTE: You can also use the general specifications list above in the \fIGENERAL SPECIFICATIONS FOR ASSOCIATION BASED ENTITIES\fP section. \fBOther options unique for listing associations:\fP .TP \fIOnlyDefaults\fP Display only associations that are default associations. .TP \fITree\fP Display account names in a hierarchical fashion. .TP \fIWithDeleted\fP Display information with previously deleted data. .TP \fIWithSubAccounts\fP Display information with subaccounts. Only really valuable when used with the account= option.
This will display all the subaccount associations along with the accounts listed in the option. .TP \fIWOLimits\fP Display information without limit information. This is for a smaller default format of Cluster,Account,User,Partition. .TP \fIWOPInfo\fP Display information without parent information. (i.e. parent id, and parent account name.) This option also invokes WOPLimits. .TP \fIWOPLimits\fP Display information without hierarchical parent limits. (i.e. will only display limits where they are set instead of propagating them from the parent.) .SH "LIST/SHOW ASSOCIATION FORMAT OPTIONS" .TP \fIAccount\fP The name of a bank account in the association. .TP \fICluster\fP The name of a cluster in the association. .TP \fIDefaultQOS\fP The QOS the association will use by default if it has access to it in the QOS list mentioned below. .TP \fIFairshare\fP Number used in conjunction with other accounts to determine job priority. Can also be the string \fIparent\fR; when used on a user this means that the parent association is used for fairshare. If Fairshare=parent is set on an account, that account's children will be effectively reparented for fairshare calculations to the first parent of their parent that is not Fairshare=parent. Limits remain the same; only its fairshare value is affected. .TP \fIGrpTRESMins\fP The total number of TRES minutes that can possibly be used by past, present and future jobs running from this association and its children. .TP \fIGrpTRESRunMins\fP Used to limit the combined total number of TRES minutes used by all jobs running with this association and its children. This takes into consideration the time limits of running jobs and consumes them; if the limit is reached, no new jobs are started until other jobs finish and free up time. .TP \fIGrpTRES\fP Maximum number of TRES running jobs are able to be allocated in aggregate for this association and all associations which are children of this association.
.TP \fIGrpJobs\fP Maximum number of running jobs in aggregate for this association and all associations which are children of this association. .TP \fIGrpSubmitJobs\fP Maximum number of jobs which can be in a pending or running state at any time in aggregate for this association and all associations which are children of this association. .TP \fIGrpWall\fP Maximum wall clock time running jobs are able to be allocated in aggregate for this association and all associations which are children of this association. .TP \fIID\fP The id of the association. .TP \fILFT\fP Associations are kept in a hierarchy: this is the left most spot in the hierarchy. When used with the RGT variable, all associations with a LFT inside this LFT and before the RGT are children of this association. .TP \fIMaxTRESMins\fP Maximum number of TRES minutes each job is able to use. .TP \fIMaxTRES\fP Maximum number of TRES each job is able to use. .TP \fIMaxJobs\fP Maximum number of jobs each user is allowed to run at one time. .TP \fIMaxSubmitJobs\fP Maximum number of jobs in a pending or running state at any time. .TP \fIMaxWall\fP Maximum wall clock time each job is able to use. .TP \fIQos\fP Valid QOS\' for this association. .TP \fIParentID\fP The association id of the parent of this association. .TP \fIParentName\fP The account name of the parent of this association. .TP \fIPartition\fP The name of a partition in the association. .TP \fIRawQOS\fP The numeric values of valid QOS\' for this association. .TP \fIRGT\fP Associations are kept in a hierarchy: this is the right most spot in the hierarchy. When used with the LFT variable, all associations with a LFT inside this RGT and after the LFT are children of this association. .TP \fIUser\fP The name of a user in the association. .SH "SPECIFICATIONS FOR CLUSTERS" .TP \fIClassification\fP= Type of machine; current classifications are capability and capacity. .TP \fIFlags\fP= Comma separated list of attributes for a particular cluster.
Current Flags include AIX, BGL, BGP, BGQ, Bluegene, CrayXT, FrontEnd, MultipleSlurmd, and SunConstellation. .TP \fIName\fP= The name of a cluster. This should be equal to the \fIClusterName\fR parameter in the \fIslurm.conf\fR configuration file for some Slurm\-managed cluster. .TP \fIRPC\fP= Comma separated list of numeric RPC values. .TP \fIWOLimits\fP Display information without limit information. This is for a smaller default format of Cluster,ControlHost,ControlPort,RPC. .P NOTE: You can also use the general specifications list above in the \fIGENERAL SPECIFICATIONS FOR ASSOCIATION BASED ENTITIES\fP section. .SH "LIST/SHOW CLUSTER FORMAT OPTIONS" .TP \fIClassification\fP Type of machine, i.e. capability or capacity. .TP \fICluster\fP The name of the cluster. .TP \fIControlHost\fP When a slurmctld registers with the database, the IP address of the controller is placed here. .TP \fIControlPort\fP When a slurmctld registers with the database, the port the controller is listening on is placed here. .TP \fITRES\fP Trackable RESources (BB (Burst buffer), CPU, Energy, GRES, License, Memory, and Node) this cluster is accounting for. .TP \fIFlags\fP Attributes possessed by the cluster. .TP \fINodeCount\fP The current count of nodes associated with the cluster. .TP \fINodeNames\fP The current Nodes associated with the cluster. .TP \fIPluginIDSelect\fP The numeric value of the select plugin the cluster is using. .TP \fIRPC\fP When a slurmctld registers with the database, the RPC version the controller is running is placed here. .P NOTE: You can also view the information about the root association for the cluster. The Association format fields are described in the \fILIST/SHOW ASSOCIATION FORMAT OPTIONS\fP section. .SH "SPECIFICATIONS FOR COORDINATOR" .TP \fIAccount\fP= Account name to add this user as a coordinator to. .TP \fINames\fP= Names of coordinators. .P NOTE: To list coordinators use the WithCoord option with list account or list user.
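.PP
For example, user adam could be made a coordinator of account science as follows (the account and user names are only illustrative):
.RS
.PP
.nf
.ft 3
# sacctmgr add coordinator account=science names=adam
.ft 1
.fi
.RE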
.SH "SPECIFICATIONS FOR EVENTS" .TP \fIAll_Clusters\fP Shortcut to get information on all clusters. .TP \fIAll_Time\fP Shortcut to get information over all time periods. .TP \fIClusters\fP= List the events of the cluster(s). Default is the cluster where the command was run. .TP \fIEnd\fP= Period ending of events. Default is now. Valid time formats are... .sp HH:MM[:SS] [AM|PM] .br MMDD[YY] or MM/DD[/YY] or MM.DD[.YY] .br MM/DD[/YY]\-HH:MM[:SS] .br YYYY\-MM\-DD[THH:MM[:SS]] .TP \fIEvent\fP= Specific events to look for. Valid options are Cluster or Node; default is both. .TP \fIMaxTRES\fP= Max number of TRES affected by an event. .TP \fIMinTRES\fP= Min number of TRES affected by an event. .TP \fINodes\fP= Node names affected by an event. .TP \fIReason\fP= Reason an event happened. .TP \fIStart\fP= Period start of events. Default is 00:00:00 of the previous day, unless states are given with the States= specification. In that case, the default behavior is to return events currently in the specified states. Valid time formats are... .sp HH:MM[:SS] [AM|PM] .br MMDD[YY] or MM/DD[/YY] or MM.DD[.YY] .br MM/DD[/YY]\-HH:MM[:SS] .br YYYY\-MM\-DD[THH:MM[:SS]] .TP \fIStates\fP= State of a node in a node event. If this is set, the event type is set automatically to Node. .TP \fIUser\fP= Query against users who set the event. If this is set, the event type is set automatically to Node since only user slurm can perform a cluster event. .SH "LIST/SHOW EVENT FORMAT OPTIONS" .TP \fICluster\fP The name of the cluster the event happened on. .TP \fIClusterNodes\fP The hostlist of nodes on a cluster in a cluster event. .TP \fITRES\fP Number of TRES involved with the event. .TP \fIDuration\fP Length of time the event lasted. .TP \fIEnd\fP Period when event ended. .TP \fIEvent\fP Name of the event. .TP \fIEventRaw\fP Numeric value of the name of the event. .TP \fINodeName\fP The node affected by the event. In a cluster event, this is blank. .TP \fIReason\fP The reason an event happened.
.TP \fIStart\fP Period when event started. .TP \fIState\fP On a node event this is the formatted state of the node during the event. .TP \fIStateRaw\fP On a node event this is the numeric value of the state of the node during the event. .TP \fIUser\fP On a node event this is the user who caused the event to happen. .SH "SPECIFICATIONS FOR JOB" .TP \fIDerivedExitCode\fP The derived exit code can be modified after a job completes based on the user's judgement of whether the job succeeded or failed. The user can only modify the derived exit code of their own job. .TP \f3Comment\fP The job's comment string when the AccountingStoreJobComment parameter in the slurm.conf file is set (or defaults) to YES. The user can only modify the comment string of their own job. .P The \fIDerivedExitCode\fP and \f3Comment\fP fields are the only fields of a job record in the database that can be modified after job completion. .SH "LIST/SHOW JOB FORMAT OPTIONS" The \fBsacct\fR command is the exclusive command to display job records from the Slurm database. .SH "SPECIFICATIONS FOR QOS" \fBNOTE:\fR The group limits (GrpJobs, GrpNodes, etc.) are tested when a job is being considered for being allocated resources. If starting a job would cause any of its group limits to be exceeded, that job will not be considered for scheduling even if that job might preempt other jobs which would release sufficient group resources for the pending job to be initiated. .TP \fIFlags\fP Used by the slurmctld to override or enforce certain characteristics. .br Valid options are: .RS .TP \fIDenyOnLimit\fP If set, jobs using this QOS will be rejected at submission time if they do not conform to the QOS 'Max' limits. By default, jobs that go over these limits will pend until they conform. .TP \fIEnforceUsageThreshold\fP If set, and the QOS also has a UsageThreshold, any jobs submitted with this QOS that fall below the UsageThreshold will be held until their Fairshare Usage goes above the Threshold.
.TP \fINoReserve\fP If this flag is set and backfill scheduling is used, jobs using this QOS will not reserve resources in the backfill schedule's map of resources allocated through time. This flag is intended for use with a QOS that may be preempted by jobs associated with all other QOS (e.g. use with a "standby" QOS). If this flag is used with a QOS which can not be preempted by all other QOS, it could result in starvation of larger jobs. .TP \fIPartitionMaxNodes\fP If set, jobs using this QOS will be able to override the requested partition's MaxNodes limit. .TP \fIPartitionMinNodes\fP If set, jobs using this QOS will be able to override the requested partition's MinNodes limit. .TP \fIOverPartQOS\fP If set, jobs using this QOS will be able to override any limits enforced by the requested partition's QOS. .TP \fIPartitionTimeLimit\fP If set, jobs using this QOS will be able to override the requested partition's TimeLimit. .TP \fIRequiresReservation\fP If set, jobs using this QOS must designate a reservation when submitting a job. This option can be useful in restricting usage of a QOS that may have greater preemptive capability or additional resources to be allowed only within a reservation. .RE .TP \fIGraceTime\fP Preemption grace time to be extended to a job which has been selected for preemption. .TP \fIGrpTRESMins\fP The total number of TRES minutes that can possibly be used by past, present and future jobs running from this QOS. .TP \fIGrpTRESRunMins\fP Used to limit the combined total number of TRES minutes used by all jobs running with this QOS. This takes into consideration the time limits of running jobs and consumes them; if the limit is reached, no new jobs are started until other jobs finish and free up time. .TP \fIGrpTRES\fP Maximum number of TRES running jobs are able to be allocated in aggregate for this QOS. .TP \fIGrpJobs\fP Maximum number of running jobs in aggregate for this QOS.
.TP \fIGrpSubmitJobs\fP Maximum number of jobs which can be in a pending or running state at any time in aggregate for this QOS. .TP \fIGrpWall\fP Maximum wall clock time running jobs are able to be allocated in aggregate for this QOS. If this limit is reached, submission requests will be denied and the running jobs will be killed. .TP \fIID\fP The id of the QOS. .TP \fIMaxTRESMins\fP Maximum number of TRES minutes each job is able to use. .TP \fIMaxTRESPerJob\fP Maximum number of TRES each job is able to use. .TP \fIMaxTRESPerNode\fP Maximum number of TRES each node in a job allocation can use. .TP \fIMaxTRESPerUser\fP Maximum number of TRES each user is able to use. .TP \fIMaxJobs\fP Maximum number of jobs each user is allowed to run at one time. .TP \fIMinTRESPerJob\fP Minimum number of TRES each job running under this QOS must request. Otherwise the job will pend until modified. .TP \fIMaxSubmitJobs\fP Maximum number of jobs in a pending or running state at any time per user. .TP \fIMaxWall\fP Maximum wall clock time each job is able to use. .TP \fIName\fP Name of the QOS. .TP \fIPreempt\fP Other QOS\' this QOS can preempt. .TP \fIPreemptMode\fP Mechanism used to preempt jobs of this QOS if the cluster's \fIPreemptType\fP is configured to \fIpreempt/qos\fP. The default preemption mechanism is specified by the cluster\-wide \fIPreemptMode\fP configuration parameter. Possible values are "Cluster" (meaning use cluster default), "Cancel", "Checkpoint" and "Requeue". This option is not compatible with PreemptMode=OFF or PreemptMode=SUSPEND (i.e. preempted jobs must be removed from the resources). .TP \fIPriority\fP What priority will be added to a job\'s priority when using this QOS. .TP \fIRawUsage\fP= This allows an administrator to reset the raw usage accrued to a QOS. The only value currently supported is 0 (zero). This is a settable specification only - it cannot be used as a filter to list QOS. .TP \fIUsageFactor\fP Usage factor when running with this QOS.
.TP \fIUsageThreshold\fP A float representing the lowest fairshare of an association allowable to run a job. If an association falls below this threshold and has pending jobs or submits new jobs, those jobs will be held until the usage goes back above the threshold. Use \fIsshare\fP to see current shares on the system. .TP \fIWithDeleted\fP Display information with previously deleted data. .SH "LIST/SHOW QOS FORMAT OPTIONS" .TP \fIDescription\fP An arbitrary string describing a QOS. .TP \fIGraceTime\fP Preemption grace time to be extended to a job which has been selected for preemption, in the format of hh:mm:ss. The default value is zero; no preemption grace time is allowed on this QOS. NOTE: This value is only meaningful for QOS PreemptMode=CANCEL. .TP \fIGrpTRESMins\fP The total number of TRES minutes that can possibly be used by past, present and future jobs running from this QOS. To clear a previously set value use the modify command with a new value of \-1. NOTE: This limit only applies when using the Priority Multifactor plugin. The time is decayed using the value of PriorityDecayHalfLife or PriorityUsageResetPeriod as set in the slurm.conf. When this limit is reached, all associated running jobs will be killed and all future jobs submitted with this QOS will be delayed until they are able to run inside the limit. .TP \fIGrpTRES\fP Maximum number of TRES running jobs are able to be allocated in aggregate for this QOS. To clear a previously set value use the modify command with a new value of \-1. .TP \fIGrpJobs\fP Maximum number of running jobs in aggregate for this QOS. To clear a previously set value use the modify command with a new value of \-1. .TP \fIGrpSubmitJobs\fP Maximum number of jobs which can be in a pending or running state at any time in aggregate for this QOS. To clear a previously set value use the modify command with a new value of \-1.
.TP \fIGrpWall\fP Maximum wall clock time running jobs are able to be allocated in aggregate for this QOS. To clear a previously set value use the modify command with a new value of \-1. NOTE: This limit only applies when using the Priority Multifactor plugin. The time is decayed using the value of PriorityDecayHalfLife or PriorityUsageResetPeriod as set in the slurm.conf. When this limit is reached, all associated running jobs will be killed and all future jobs submitted with this QOS will be delayed until they are able to run inside the limit. .TP \fIMaxTRESMins\fP Maximum number of TRES minutes each job is able to use. To clear a previously set value use the modify command with a new value of \-1. .TP \fIMaxTRESPerJob\fP Maximum number of TRES each job is able to use. To clear a previously set value use the modify command with a new value of \-1. .TP \fIMaxTRESPerNode\fP Maximum number of TRES each node in a job allocation can use. To clear a previously set value use the modify command with a new value of \-1. .TP \fIMaxTRESPerUser\fP Maximum number of TRES each user is able to use. To clear a previously set value use the modify command with a new value of \-1. .TP \fIMaxJobs\fP Maximum number of jobs each user is allowed to run at one time. To clear a previously set value use the modify command with a new value of \-1. .TP \fIMaxSubmitJobs\fP Maximum number of jobs in a pending or running state at any time per user. To clear a previously set value use the modify command with a new value of \-1. .TP \fIMaxWall\fP Maximum wall clock time each job is able to use. The format of this value is <min> or <min>:<sec> or <hr>:<min>:<sec> or <days>\-<hr>:<min>:<sec> or <days>\-<hr>
. The value is recorded in minutes with rounding as needed. To clear a previously set value use the modify command with a new value of \-1. .TP \fIMinTRES\fP Minimum number of TRES each job running under this QOS must request. Otherwise the job will pend until modified. To clear a previously set value use the modify command with a new value of \-1. .TP \fIName\fP Name of the QOS. Needed for creation. .TP \fIPreempt\fP Other QOS\' this QOS can preempt. Setting a Preempt to '' (two single quotes with nothing between them) restores its default setting. You can also use the operator += and \-= to add or remove certain QOS's from a QOS list. .TP \fIPreemptMode\fP Mechanism used to preempt jobs of this QOS if the clusters \fIPreemptType\fP is configured to \fIpreempt/qos\fP. The default preemption mechanism is specified by the cluster\-wide \fIPreemptMode\fP configuration parameter. Possible values are "Cluster" (meaning use cluster default), "Cancel", "Checkpoint" and "Requeue". This option is not compatible with PreemptMode=OFF or PreemptMode=SUSPEND (i.e. preempted jobs must be removed from the resources). .TP \fIPriority\fP What priority will be added to a job\'s priority when using this QOS. To clear a previously set value use the modify command with a new value of \-1. .TP \fIUsageFactor\fP Usage factor when running with this QOS. This is a float that is factored into the priority time calculations of running jobs. e.g. if the usagefactor of a QOS was 2 for every TRESBillingUnit second a job ran it would count for 2. Also if the usagefactor was .5, every second would only count for half of the time. Setting this value to 0 will make it so that running jobs will not add time to fairshare or association/qos limits. To clear a previously set value use the modify command with a new value of \-1. .SH "SPECIFICATIONS FOR RESOURCE" \fIClusters\fP= Comma separated list of cluster names on which specified resources are to be available. 
If no names are designated then the clusters already allowed to use this resource will be altered. .TP \fICount\fP= Number of software resources of a specific name configured on the system being controlled by a resource manager. .TP \fIDescription=\fP A brief description of the resource. .TP \fIFlags\fP= Flags that identify specific attributes of the system resource. At this time no flags have been defined. .TP \fIServerType\fP= The type of software resource manager providing the licenses, for example FlexNet Publisher Flexlm license server or Reprise License Manager RLM. .TP \fINames\fP= Comma separated list of the names of resources configured on the system being controlled by a resource manager. If this resource is seen by the slurmctld its name will be name@server to distinguish it from local resources defined in a slurm.conf. .TP \fIPercentAllowed\fP= Percentage of a specific resource that can be used on a specified cluster. .TP \fIServer\fP= The name of the server serving up the resource. Default is 'slurmdb' indicating the licenses are being served by the database. .TP \fIType\fP= The type of the resource represented by this record. Currently the only valid type is License. .TP \fIWithClusters\fP Display the clusters' percentage of resources. If a resource hasn't been given to a cluster the resource will not be displayed with this flag. .P NOTE: Resource is used to define each resource configured on a system available for usage by Slurm clusters. .SH "LIST/SHOW RESOURCE FORMAT OPTIONS" .TP \fICluster\fP Name of the cluster the resource is given to. .TP \fICount\fP The count of a specific resource configured on the system globally. .TP \fIAllocated\fP The percent of licenses allocated to a cluster. .TP \fIDescription\fP Description of the resource. .TP \fIServerType\fP The type of the server controlling the licenses. .TP \fIName\fP Name of this resource. .TP \fIServer\fP Server serving up the resource. .TP \fIType\fP Type of resource this record represents. 
.SH "SPECIFICATIONS FOR TRANSACTIONS" .TP \fIAccounts\fP= Only print out the transactions affecting specified accounts. .TP \fIAction\fP= .TP \fIActor\fP= Only display transactions done by a certain person. .TP \fIClusters\fP= Only print out the transactions affecting specified clusters. .TP \fIEnd\fP= Return all transactions before this Date and time. Default is now. .TP \fIStart\fP= Return all transactions after this Date and time. Default is epoch. Valid time formats for End and Start are... .sp HH:MM[:SS] [AM|PM] .br MMDD[YY] or MM/DD[/YY] or MM.DD[.YY] .br MM/DD[/YY]\-HH:MM[:SS] .br YYYY\-MM\-DD[THH:MM[:SS]] .TP \fIUsers\fP= Only print out the transactions affecting specified users. .TP \fIWithAssoc\fP Get information about which associations were affected by the transactions. .SH "LIST/SHOW TRANSACTIONS FORMAT OPTIONS" .TP \fIAction\fP .TP \fIActor\fP .TP \fIInfo\fP .TP \fITimeStamp\fP .TP \fIWhere\fP .P NOTE: If using the WithAssoc option you can also view the information about the various associations the transaction affected. The Association format fields are described in the \fILIST/SHOW ASSOCIATION FORMAT OPTIONS\fP section. .SH "SPECIFICATIONS FOR USERS" .TP \fIAccount\fP= Account name to add this user to. .TP \fIAdminLevel\fP= Admin level of user. Valid levels are None, Operator, and Admin. .TP \fICluster\fP= Specific cluster to add user to the account on. Default is all in system. .TP \fIDefaultAccount\fP= Identify the default bank account name to be used for a job if none is specified at submission time. .TP \fIDefaultWCKey\fP= Identify the default Workload Characterization Key. .TP \fIName\fP= Name of user. .TP \fIPartition\fP= Partition name. .TP \fIRawUsage\fP= This allows an administrator to reset the raw usage accrued to a user. The only value currently supported is 0 (zero). This is a settable specification only - it cannot be used as a filter to list users. .TP \fIWCKeys\fP= Workload Characterization Key values. 
.TP \fIWithAssoc\fP Display all associations for this user. .TP \fIWithCoord\fP Display all accounts a user is coordinator for. .TP \fIWithDeleted\fP Display information with previously deleted data. .P NOTE: If using the WithAssoc option you can also query against association specific information to view only certain associations this account may have. These extra options can be found in the \fISPECIFICATIONS FOR ASSOCIATIONS\fP section. You can also use the general specifications list above in the \fIGENERAL SPECIFICATIONS FOR ASSOCIATION BASED ENTITIES\fP section. .SH "LIST/SHOW USER FORMAT OPTIONS" .TP \fIAdminLevel\fP Admin level of user. .TP \fIDefaultAccount\fP The user's default account. .TP \fICoordinators\fP List of users that are a coordinator of the account. (Only filled in when using the WithCoordinator option.) .TP \fIUser\fP The name of a user. .P NOTE: If using the WithAssoc option you can also view the information about the various associations the user may have on all the clusters in the system. The Association format fields are described in the \fILIST/SHOW ASSOCIATION FORMAT OPTIONS\fP section. .SH "LIST/SHOW WCKey" .TP \fIWCKey\fP Workload Characterization Key. .TP \fICluster\fP Specific cluster for the WCKey. .TP \fIUser\fP The name of a user for the WCKey. .P NOTE: If using the WithAssoc option you can also view the information about the various associations the user may have on all the clusters in the system. The Association format fields are described in the \fILIST/SHOW ASSOCIATION FORMAT OPTIONS\fP section. .SH "LIST/SHOW TRES" .TP \fIName\fP The name of the trackable resource. This option is required for TRES types BB (Burst buffer), GRES, and License. Types CPU, Energy, Memory, and Node do not have Names. For example if GRES is the type then name is the denomination of the GRES itself e.g. GPU. .TP \fIID\fP The identification number of the trackable resource as it appears in the database. 
.TP \fIType\fP The type of the trackable resource. Current types are BB (Burst buffer), CPU, Energy, GRES, License, Memory, and Node. .SH "TRES information" Trackable RESources (TRES) are used in many QOS or Association limits. When setting the limits they are given as a comma separated list. Each TRES has a different limit, i.e. GrpTRESMins=cpu=10,mem=20 would make 2 different limits: 1 for 10 cpu minutes and 1 for 20 MB memory minutes. This is the case for each limit that deals with TRES. To remove a limit \-1 is used, i.e. GrpTRESMins=cpu=\-1 would remove only the cpu TRES limit. NOTE: On GrpTRES limits dealing with nodes as a TRES, each job's node allocation is counted separately (i.e. if a single node has resources allocated to two jobs, this is counted as two allocated nodes). NOTE: When dealing with Memory as a TRES all limits are in MB. .SH "GLOBAL FORMAT OPTION" When using the format option for listing various fields you can put a %NUMBER afterwards to specify how many characters should be printed. e.g. format=name%30 will print 30 characters of field name right justified. A \-30 will print 30 characters left justified. .SH "FLAT FILE DUMP AND LOAD" sacctmgr has the capability to load and dump Slurm association data to and from a file. This method can easily add a new cluster or copy an existing cluster's associations into a new cluster with similar accounts. Each file contains Slurm association data for a single cluster. Comments can be put into the file with the # character. Each line of information must begin with one of the four titles: \fBCluster, Parent, Account or User\fP. Following the title is a space, dash, space, entity value, then specifications. Specifications are colon separated. If any variable such as Organization has a space in it, surround the name with single or double quotes. 
To create a file of associations one can run > sacctmgr dump tux file=tux.cfg .br (file=tux.cfg is optional) To load a previously created file you can run > sacctmgr load file=tux.cfg Other options for load are \- clean \- delete what was already there and start from scratch with this information. .br Cluster= \- specify a different name for the cluster than that which is in the file. A quick explanation of how the file works: since the associations in the system follow a hierarchy, so does the file. Anything that is a parent needs to be defined before any children. The only exception is the understood 'root' account. This is always a default for any cluster and does not need to be defined. To edit/create a file start with a cluster line for the new cluster \fBCluster\ \-\ cluster_name:MaxNodesPerJob=15\fP Anything included on this line will be the defaults for all associations on this cluster. These options are as follows... .TP \fIGrpTRESMins=\fP The total number of TRES minutes that can possibly be used by past, present and future jobs running from this association and its children. .TP \fIGrpTRESRunMins=\fP Used to limit the combined total number of TRES minutes used by all jobs running with this association and its children. This takes into consideration the time limit of running jobs and consumes it; if the limit is reached no new jobs are started until other jobs finish to allow time to free up. .TP \fIGrpTRES=\fP Maximum number of TRES running jobs are able to be allocated in aggregate for this association and all associations which are children of this association. .TP \fIGrpJobs=\fP Maximum number of running jobs in aggregate for this association and all associations which are children of this association. .TP \fIGrpNodes=\fP Maximum number of nodes running jobs are able to be allocated in aggregate for this association and all associations which are children of this association. .P NOTE: Each job's node allocation is counted separately (i.e. 
if a single node has resources allocated to two jobs, this is counted as two allocated nodes). .TP \fIGrpSubmitJobs=\fP Maximum number of jobs which can be in a pending or running state at any time in aggregate for this association and all associations which are children of this association. .TP \fIGrpWall=\fP Maximum wall clock time running jobs are able to be allocated in aggregate for this association and all associations which are children of this association. .TP \fIFairShare=\fP Number used in conjunction with other associations to determine job priority. .TP \fIMaxJobs=\fP Maximum number of jobs the children of this association can run. .TP \fIMaxNodesPerJob=\fP Maximum number of nodes per job the children of this association can run. .TP \fIMaxWallDurationPerJob=\fP Maximum time (not related to job size) this account's children's jobs can run. .TP \fIQOS=\fP Comma separated list of Quality of Service names (Defined in sacctmgr). .TP Followed by Accounts you want in this fashion... .na \fBParent\ \-\ root\fP (Defined by default) .br \fBAccount\ \-\ cs\fP:MaxNodesPerJob=5:MaxJobs=4:FairShare=399:MaxWallDurationPerJob=40:Description='Computer Science':Organization='LC' .br \fBParent\ \-\ cs\fP .br \fBAccount\ \-\ test\fP:MaxNodesPerJob=1:MaxJobs=1:FairShare=1:MaxWallDurationPerJob=1:Description='Test Account':Organization='Test' .ad .TP Any of the options after a ':' can be left out and they can be in any order. If you want to add any sub accounts just list the Parent THAT HAS ALREADY BEEN CREATED before the account line in this fashion... .TP All account options are .TP \fIDescription=\fP A brief description of the account. .TP \fIGrpTRESMins=\fP Maximum number of TRES minutes running jobs are able to be allocated in aggregate for this association and all associations which are children of this association. .TP \fIGrpTRESRunMins=\fP Used to limit the combined total number of TRES minutes used by all jobs running with this association and its children. 
This takes into consideration the time limit of running jobs and consumes it; if the limit is reached no new jobs are started until other jobs finish to allow time to free up. .TP \fIGrpTRES=\fP Maximum number of TRES running jobs are able to be allocated in aggregate for this association and all associations which are children of this association. .TP \fIGrpJobs=\fP Maximum number of running jobs in aggregate for this association and all associations which are children of this association. .TP \fIGrpNodes=\fP Maximum number of nodes running jobs are able to be allocated in aggregate for this association and all associations which are children of this association. .P NOTE: Each job's node allocation is counted separately (i.e. if a single node has resources allocated to two jobs, this is counted as two allocated nodes). .TP \fIGrpSubmitJobs=\fP Maximum number of jobs which can be in a pending or running state at any time in aggregate for this association and all associations which are children of this association. .TP \fIGrpWall=\fP Maximum wall clock time running jobs are able to be allocated in aggregate for this association and all associations which are children of this association. .TP \fIFairShare=\fP Number used in conjunction with other associations to determine job priority. .TP \fIMaxJobs=\fP Maximum number of jobs the children of this association can run. .TP \fIMaxNodesPerJob=\fP Maximum number of nodes per job the children of this association can run. .TP \fIMaxWallDurationPerJob=\fP Maximum time (not related to job size) this account's children's jobs can run. .TP \fIOrganization=\fP Name of organization that owns this account. .TP \fIQOS(=,+=,\-=)\fP Comma separated list of Quality of Service names (Defined in sacctmgr). 
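Putting the cluster, parent, and account lines described above together, a complete association file for a small, purely hypothetical cluster could look like the following (the cluster, account, and organization names and all limit values are illustrative only, not taken from any real site):
.nf
# hypothetical contents of tux.cfg
Cluster \- tux:MaxNodesPerJob=15
Parent \- root
Account \- science:Description='Science':Organization='lab':FairShare=100
Parent \- science
Account \- chemistry:Description='Chemistry':Organization='lab':FairShare=30
.fi
Such a file could then be loaded with "sacctmgr load file=tux.cfg".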
.TP To add users to an account add a line like this after a Parent \- line \fBParent\ \-\ test\fP .br .na \fBUser\ \-\ adam\fP:MaxNodesPerJob=2:MaxJobs=3:FairShare=1:MaxWallDurationPerJob=1:AdminLevel=Operator:Coordinator='test' .ad .TP All user options are .TP \fIAdminLevel=\fP Type of admin this user is (Administrator, Operator) .br \fBMust be defined on the first occurrence of the user.\fP .TP \fICoordinator=\fP Comma separated list of accounts this user is coordinator over .br \fBMust be defined on the first occurrence of the user.\fP .TP \fIDefaultAccount=\fP system wide default account name .br \fBMust be defined on the first occurrence of the user.\fP .TP \fIFairShare=\fP Number used in conjunction with other associations to determine job priority. .TP \fIMaxJobs=\fP Maximum number of jobs this user can run. .TP \fIMaxNodesPerJob=\fP Maximum number of nodes per job this user can run. .TP \fIMaxWallDurationPerJob=\fP Maximum time (not related to job size) this user can run. .TP \fIQOS(=,+=,\-=)\fP Comma separated list of Quality of Service names (Defined in sacctmgr). .SH "ARCHIVE FUNCTIONALITY" Sacctmgr has the capability to archive to a flat file and/or load that data if needed later. The archiving is usually done by the slurmdbd and it is highly recommended you only do it through sacctmgr if you completely understand what you are doing. For slurmdbd options see "man slurmdbd" for more information. Loading data into the database can be done from these files to either view old data or regenerate rolled up data. These are the options for both dump and load of archive information. archive dump .TP \fIDirectory=\fP Directory to store the archive data. .TP \fIEvents\fP Archive Events. If not specified and PurgeEventAfter is set all event data removed will be lost permanently. .TP \fIJobs\fP Archive Jobs. If not specified and PurgeJobAfter is set all job data removed will be lost permanently. 
.TP \fIPurgeEventAfter=\fP Purge cluster event records older than time stated in months. If you want to purge on a shorter time period you can include hours or days after the numeric value to get those more frequent purges. (e.g. a value of '12hours' would purge everything older than 12 hours.) .TP \fIPurgeJobAfter=\fP Purge job records older than time stated in months. If you want to purge on a shorter time period you can include hours or days after the numeric value to get those more frequent purges. (e.g. a value of '12hours' would purge everything older than 12 hours.) .TP \fIPurgeStepAfter=\fP Purge step records older than time stated in months. If you want to purge on a shorter time period you can include hours or days after the numeric value to get those more frequent purges. (e.g. a value of '12hours' would purge everything older than 12 hours.) .TP \fIPurgeSuspendAfter=\fP Purge job suspend records older than time stated in months. If you want to purge on a shorter time period you can include hours or days after the numeric value to get those more frequent purges. (e.g. a value of '12hours' would purge everything older than 12 hours.) .TP \fIScript=\fP Run this script instead of the generic form of archive to flat files. .TP \fISteps\fP Archive Steps. If not specified and PurgeStepAfter is set all step data removed will be lost permanently. .TP \fISuspend\fP Archive Suspend Data. If not specified and PurgeSuspendAfter is set all suspend data removed will be lost permanently. .TP \fIArchive Load\fP Load into the database previously archived data. .TP \fIFile=\fP File to load into the database. .TP \fIInsert=\fP SQL to insert directly into the database. This should be used very cautiously since this is writing your SQL into the database. .SH "ENVIRONMENT VARIABLES" .PP Some \fBsacctmgr\fR options may be set via environment variables. These environment variables, along with their corresponding options, are listed below. 
(Note: command line options will always override these settings.) .TP 20 \fBSLURM_CONF\fR The location of the Slurm configuration file. .SH "EXAMPLES" \fBNOTE:\fR There is an order to set up accounting associations. You must define clusters before you add accounts and you must add accounts before you can add users. .eo .br -> sacctmgr create cluster tux .br -> sacctmgr create account name=science fairshare=50 .br -> sacctmgr create account name=chemistry parent=science fairshare=30 .br -> sacctmgr create account name=physics parent=science fairshare=20 .br -> sacctmgr create user name=adam cluster=tux account=physics fairshare=10 .br -> sacctmgr delete user name=adam cluster=tux account=physics .br -> sacctmgr delete account name=physics cluster=tux .br -> sacctmgr modify user where name=adam cluster=tux account=physics set maxjobs=2 maxwall=30:00 .br -> sacctmgr list associations cluster=tux format=Account,Cluster,User,Fairshare tree withd .br -> sacctmgr list transactions StartTime=11/03\-10:30:00 format=Timestamp,Action,Actor .br -> sacctmgr dump cluster=tux file=tux_data_file .br -> sacctmgr load tux_data_file .br .br A user's account can not be changed directly. A new association needs to be created for the user with the new account. Then the association with the old account can be deleted. .br When modifying an object, placing the key word 'set' and the optional key word 'where' correctly is critical. Below are examples that produce correct results. As a rule of thumb, anything you put in front of 'set' will be used as a quantifier. If you want to put a quantifier after the key word 'set' you should use the key word 'where'. .br .br wrong-> sacctmgr modify user name=adam set fairshare=10 cluster=tux .br .br This will produce an error as the above line reads modify user adam set fairshare=10 and cluster=tux. 
.br .br right-> sacctmgr modify user name=adam cluster=tux set fairshare=10 .br right-> sacctmgr modify user name=adam set fairshare=10 where cluster=tux .br .br When changing qos for something, only use the '=' operator when you want to explicitly set the qos to something. In most cases you will want to use the '+=' or '\-=' operator to either add to or remove from the existing qos already in place. .br .br If a user already has qos of normal,standby for a parent, or it was explicitly set, you should use qos+=expedite to add this to the list in this fashion. .br If you are looking to add the qos expedite to only a certain account and/or cluster you can do that by specifying them on the sacctmgr line. .br -> sacctmgr modify user name=adam set qos+=expedite .br .br -> sacctmgr modify user name=adam acct=this cluster=tux set qos+=expedite .br .br Let's give an example of how to add a QOS to user accounts. List all available QOSs in the cluster. .br .br ->sacctmgr show qos format=name Name .br --------- .br normal .br expedite .br .br List all the associations in the cluster. .br ->sacctmgr show assoc format=cluster,account,qos Cluster Account QOS .br -------- ---------- ----- .br zebra root normal .br zebra root normal .br zebra g normal .br zebra g1 normal .br .br Add the QOS expedite to account g1 and display the result. Using the operator += the QOS will be added together with the existing QOS of this account. .br .br ->sacctmgr modify account name=g1 set qos+=expedite .br .br ->sacctmgr show assoc format=cluster,account,qos .br Cluster Account QOS .br -------- -------- ------- .br zebra root normal .br zebra root normal .br zebra g normal .br zebra g1 expedite,normal .br .br Now set the QOS expedite as the only QOS for the account g and display the result. 
Using the operator =, expedite becomes the only QOS usable by account g. .br .br ->sacctmgr modify account name=g set qos=expedite .br .br ->sacctmgr show assoc format=cluster,account,qos .br Cluster Account QOS .br --------- -------- ----- .br zebra root normal .br zebra root normal .br zebra g expedite .br zebra g1 expedite,normal .br .br If a new account is added under the account g it will inherit the QOS expedite and it will not have access to QOS normal. .br .br ->sacctmgr add account banana parent=g .br .br ->sacctmgr show assoc format=cluster,account,qos .br Cluster Account QOS .br --------- -------- ----- .br zebra root normal .br zebra root normal .br zebra g expedite .br zebra banana expedite .br zebra g1 expedite,normal .br An example of listing trackable resources: .br .br ->sacctmgr show tres .br Type Name ID .br ---------- ----------------- -------- .br cpu 1 .br mem 2 .br energy 3 .br node 4 .br gres gpu:tesla 1001 .br license vcs 1002 .br bb cray 1003 .br .ec .SH "COPYING" Copyright (C) 2008\-2010 Lawrence Livermore National Security. Produced at Lawrence Livermore National Laboratory (cf, DISCLAIMER). .br Copyright (C) 2010\-2015 SchedMD LLC. .LP This file is part of Slurm, a resource management program. For details, see <http://slurm.schedmd.com/>. .LP Slurm is free software; you can redistribute it and/or modify it under the terms of the GNU General Public License as published by the Free Software Foundation; either version 2 of the License, or (at your option) any later version. .LP Slurm is distributed in the hope that it will be useful, but WITHOUT ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU General Public License for more details. 
.SH "SEE ALSO" \fBslurm.conf\fR(5), \fBslurmdbd\fR(8) .TH salloc "1" "Slurm Commands" "April 2015" "Slurm Commands" .SH "NAME" salloc \- Obtain a Slurm job allocation (a set of nodes), execute a command, and then release the allocation when the command is finished. .SH "SYNOPSIS" salloc [\fIoptions\fP] [<\fIcommand\fP> [\fIcommand args\fR]] .SH "DESCRIPTION" salloc is used to allocate a Slurm job allocation, which is a set of resources (nodes), possibly with some set of constraints (e.g. number of processors per node). When salloc successfully obtains the requested allocation, it then runs the command specified by the user. Finally, when the user specified command is complete, salloc relinquishes the job allocation. The command may be any program the user wishes. Some typical commands are xterm, a shell script containing srun commands, and srun (see the EXAMPLES section). If no command is specified, then the value of \fBSallocDefaultCommand\fR in slurm.conf is used. If \fBSallocDefaultCommand\fR is not set, then \fBsalloc\fR runs the user's default shell. The following document describes the influence of various options on the allocation of cpus to jobs and tasks. .br http://slurm.schedmd.com/cpu_management.html NOTE: The salloc logic includes support to save and restore the terminal line settings and is designed to be executed in the foreground. If you need to execute salloc in the background, set its standard input to some file, for example: "salloc \-n16 a.out </dev/null &" .SH "OPTIONS" .TP \fB\-A\fR, \fB\-\-account\fR=<\fIaccount\fR> Charge resources used by this job to specified account. The \fIaccount\fR is an arbitrary string. The account name may be changed after job submission using the \fBscontrol\fR command. .TP \fB\-\-acctg\-freq\fR Define the job accounting and profiling sampling intervals. 
This can be used to override the \fIJobAcctGatherFrequency\fR parameter in Slurm's configuration file, \fIslurm.conf\fR. The supported format is as follows: .RS .TP 12 \fB\-\-acctg\-freq=\fR\fI<datatype>\fR\fB=\fR\fI<interval>\fR where \fI<datatype>\fR=\fI<interval>\fR specifies the task sampling interval for the jobacct_gather plugin or a sampling interval for a profiling type by the acct_gather_profile plugin. Multiple, comma-separated \fI<datatype>\fR=\fI<interval>\fR intervals may be specified. Supported datatypes are as follows: .RS .TP \fBtask=\fI<interval>\fR where \fI<interval>\fR is the task sampling interval in seconds for the jobacct_gather plugins and for task profiling by the acct_gather_profile plugin. NOTE: This frequency is used to monitor memory usage. If memory limits are enforced the highest frequency a user can request is what is configured in the slurm.conf file. They can not turn it off (=0) either. .TP \fBenergy=\fI<interval>\fR where \fI<interval>\fR is the sampling interval in seconds for energy profiling using the acct_gather_energy plugin .TP \fBnetwork=\fI<interval>\fR where \fI<interval>\fR is the sampling interval in seconds for infiniband profiling using the acct_gather_infiniband plugin. .TP \fBfilesystem=\fI<interval>\fR where \fI<interval>\fR is the sampling interval in seconds for filesystem profiling using the acct_gather_filesystem plugin. .TP .RE .RE .br The default value for the task sampling interval is 30. The default value for all other intervals is 0. An interval of 0 disables sampling of the specified type. If the task sampling interval is 0, accounting information is collected only at job termination (reducing Slurm interference with the job). .br .br Smaller (non\-zero) values have a greater impact upon job performance, but a value of 30 seconds is not likely to be noticeable for applications having less than 10,000 tasks. 
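As an illustration of the interval format described above, a hypothetical invocation that samples task data every 15 seconds and energy data every 30 seconds (the node count and command shown are examples only) could be:
.nf
salloc \-\-acctg\-freq=task=15,energy=30 \-N2 a.out
.fi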
.RE .TP \fB\-B\fR, \fB\-\-extra\-node\-info\fR=<\fIsockets\fR[:\fIcores\fR[:\fIthreads\fR]]> Request a specific allocation of resources with details as to the number and type of computational resources within a cluster: number of sockets (or physical processors) per node, cores per socket, and threads per core. The total amount of resources being requested is the product of all of the terms. Each value specified is considered a minimum. An asterisk (*) can be used as a placeholder indicating that all available resources of that type are to be utilized. As with nodes, the individual levels can also be specified in separate options if desired: .nf \fB\-\-sockets\-per\-node\fR=<\fIsockets\fR> \fB\-\-cores\-per\-socket\fR=<\fIcores\fR> \fB\-\-threads\-per\-core\fR=<\fIthreads\fR> .fi If SelectType is configured to select/cons_res, it must have a parameter of CR_Core, CR_Core_Memory, CR_Socket, or CR_Socket_Memory for this option to be honored. This option is not supported on BlueGene systems (select/bluegene plugin is configured). If not specified, the scontrol show job will display 'ReqS:C:T=*:*:*'. .TP \fB\-\-bb\fR=<\fIspec\fR> Burst buffer specification. The form of the specification is system dependent. .TP \fB\-\-begin\fR=<\fItime\fR> Defer allocation of the job until the specified time. Time may be of the form \fIHH:MM:SS\fR to run a job at a specific time of day (seconds are optional). (If that time is already past, the next day is assumed.) You may also specify \fImidnight\fR, \fInoon\fR, \fIfika\fR (3 PM) or \fIteatime\fR (4 PM) and you can have a time\-of\-day suffixed with \fIAM\fR or \fIPM\fR for running in the morning or the evening. You can also say what day the job will be run, by specifying a date of the form \fIMMDDYY\fR or \fIMM/DD/YY\fR or \fIYYYY\-MM\-DD\fR. Combine date and time using the following format \fIYYYY\-MM\-DD[THH:MM[:SS]]\fR. 
You can also give times like \fInow + count time\-units\fR, where the time\-units can be \fIseconds\fR (default), \fIminutes\fR, \fIhours\fR, \fIdays\fR, or \fIweeks\fR and you can tell Slurm to run the job today with the keyword \fItoday\fR and to run the job tomorrow with the keyword \fItomorrow\fR. The value may be changed after job submission using the \fBscontrol\fR command. For example: .nf \-\-begin=16:00 \-\-begin=now+1hour \-\-begin=now+60 (seconds by default) \-\-begin=2010\-01\-20T12:34:00 .fi .RS .PP Notes on date/time specifications: \- Although the 'seconds' field of the HH:MM:SS time specification is allowed by the code, note that the poll time of the Slurm scheduler is not precise enough to guarantee dispatch of the job on the exact second. The job will be eligible to start on the next poll following the specified time. The exact poll interval depends on the Slurm scheduler (e.g., 60 seconds with the default sched/builtin). \- If no time (HH:MM:SS) is specified, the default is (00:00:00). \- If a date is specified without a year (e.g., MM/DD) then the current year is assumed, unless the combination of MM/DD and HH:MM:SS has already passed for that year, in which case the next year is used. .RE .TP \fB\-\-bell\fR Force salloc to ring the terminal bell when the job allocation is granted (and only if stdout is a tty). By default, salloc only rings the bell if the allocation is pending for more than ten seconds (and only if stdout is a tty). Also see the option \fB\-\-no\-bell\fR. .TP \fB\-\-comment\fR=<\fIstring\fR> An arbitrary comment. .TP \fB\-C\fR, \fB\-\-constraint\fR=<\fIlist\fR> Nodes can have \fBfeatures\fR assigned to them by the Slurm administrator. Users can specify which of these \fBfeatures\fR are required by their job using the constraint option. Only nodes having features matching the job constraints will be used to satisfy the request. Multiple constraints may be specified with AND, OR, matching OR, resource counts, etc. 
Supported \fBconstraint\fR options include:
.PD 1
.RS
.TP
\fBSingle Name\fR
Only nodes which have the specified feature will be used.
For example, \fB\-\-constraint="intel"\fR
.TP
\fBNode Count\fR
A request can specify the number of nodes needed with some feature
by appending an asterisk and count after the feature name.
For example "\fB\-\-nodes=16 \-\-constraint=graphics*4 ..."\fR
indicates that the job requires 16 nodes and that at least four of those nodes
must have the feature "graphics."
.TP
\fBAND\fR
Only nodes with all of the specified features will be used.
The ampersand is used for an AND operator.
For example, \fB\-\-constraint="intel&gpu"\fR
.TP
\fBOR\fR
Only nodes with at least one of the specified features will be used.
The vertical bar is used for an OR operator.
For example, \fB\-\-constraint="intel|amd"\fR
.TP
\fBMatching OR\fR
If only one of a set of possible options should be used for all allocated
nodes, then use the OR operator and enclose the options within square brackets.
For example: "\fB\-\-constraint=[rack1|rack2|rack3|rack4]"\fR might
be used to specify that all nodes must be allocated on a single rack of
the cluster, but any of those four racks can be used.
.TP
\fBMultiple Counts\fR
Specific counts of multiple resources may be specified by using the AND
operator and enclosing the options within square brackets.
For example: "\fB\-\-constraint=[rack1*2&rack2*4]"\fR might
be used to specify that two nodes must be allocated from nodes with the
feature of "rack1" and four nodes must be allocated from nodes with the
feature "rack2".
.RE
.TP
\fB\-\-contiguous\fR
If set, then the allocated nodes must form a contiguous set.
Not honored with the \fBtopology/tree\fR or \fBtopology/3d_torus\fR
plugins, both of which can modify the node ordering.
.TP
\fB\-\-cores\-per\-socket\fR=<\fIcores\fR>
Restrict node selection to nodes with at least the specified number of
cores per socket.
See additional information under \fB\-B\fR option above when task/affinity
plugin is enabled.
.TP
\fB\-\-cpu\-freq\fR=<\fIp1\fR[\-\fIp2\fR[:\fIp3\fR]]>
Request that job steps initiated by srun commands inside this allocation
be run at some requested frequency if possible, on the CPUs selected
for the step on the compute node(s).
\fBp1\fR can be [#### | low | medium | high | highm1] which will set the
frequency scaling_speed to the corresponding value, and set the frequency
scaling_governor to UserSpace. See below for definition of the values.
\fBp1\fR can be [Conservative | OnDemand | Performance | PowerSave] which
will set the scaling_governor to the corresponding value. The governor has
to be in the list set by the slurm.conf option CpuFreqGovernors.
When \fBp2\fR is present, p1 will be the minimum scaling frequency and
p2 will be the maximum scaling frequency.
\fBp2\fR can be [#### | medium | high | highm1]. \fBp2\fR must be greater
than \fBp1\fR.
\fBp3\fR can be [Conservative | OnDemand | Performance | PowerSave |
UserSpace] which will set the governor to the corresponding value.
If \fBp3\fR is UserSpace, the frequency scaling_speed will be set by a
power or energy aware scheduling strategy to a value between p1 and p2
that lets the job run within the site's power goal. The job may be
delayed if p1 is higher than a frequency that allows the job to run
within the goal.
If the current frequency is < min, it will be set to min. Likewise,
if the current frequency is > max, it will be set to max.
Acceptable values at present include:
.RS
.TP 14
\fB####\fR
frequency in kilohertz
.TP
\fBLow\fR
the lowest available frequency
.TP
\fBHigh\fR
the highest available frequency
.TP
\fBHighM1\fR
(high minus one) will select the next highest available frequency
.TP
\fBMedium\fR
attempts to set a frequency in the middle of the available range
.TP
\fBConservative\fR
attempts to use the Conservative CPU governor
.TP
\fBOnDemand\fR
attempts to use the OnDemand CPU governor (the default value)
.TP
\fBPerformance\fR
attempts to use the Performance CPU governor
.TP
\fBPowerSave\fR
attempts to use the PowerSave CPU governor
.TP
\fBUserSpace\fR
attempts to use the UserSpace CPU governor
.RE
The following informational environment variable is set in the job step
when the \fB\-\-cpu\-freq\fR option is requested.
.nf
SLURM_CPU_FREQ_REQ
.fi
This environment variable can also be used to supply the value for the
CPU frequency request if it is set when the 'srun' command is issued.
The \fB\-\-cpu\-freq\fR on the command line will override the
environment variable value. The form of the environment variable is the
same as the command line.
See the \fBENVIRONMENT VARIABLES\fR section for a description of the
SLURM_CPU_FREQ_REQ variable.
\fBNOTE\fR: This parameter is treated as a request, not a requirement.
If the job step's node does not support setting the CPU frequency, or
the requested value is outside the bounds of the legal frequencies, an
error is logged, but the job step is allowed to continue.
\fBNOTE\fR: Setting the frequency for just the CPUs of the job step
implies that the tasks are confined to those CPUs. If task
confinement (i.e., TaskPlugin=task/affinity or
TaskPlugin=task/cgroup with the "ConstrainCores" option) is not
configured, this parameter is ignored.
\fBNOTE\fR: When the step completes, the frequency and governor of each
selected CPU are reset to the configured \fBCpuFreqDef\fR value with a
default value of the OnDemand CPU governor.
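For illustration, the min/max clamping described above can be sketched as follows. This is a hedged Python sketch, not Slurm's implementation; the frequency values shown are hypothetical examples in kilohertz.

```python
# Illustrative sketch of the --cpu-freq clamping described above:
# when p1 (min) and p2 (max) are given, a CPU's current scaling
# frequency is raised to p1 if below it and lowered to p2 if above it.
# This is NOT Slurm's code; the values are hypothetical, in kilohertz.
def clamp_cpu_freq(current_khz, min_khz, max_khz):
    if current_khz < min_khz:
        return min_khz
    if current_khz > max_khz:
        return max_khz
    return current_khz

# e.g. a request like --cpu-freq=1500000-2400000
print(clamp_cpu_freq(1200000, 1500000, 2400000))  # 1500000 (raised to min)
print(clamp_cpu_freq(3000000, 1500000, 2400000))  # 2400000 (lowered to max)
print(clamp_cpu_freq(2000000, 1500000, 2400000))  # 2000000 (unchanged)
```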
\fBNOTE\fR: Submitting jobs with the \fB\-\-cpu\-freq\fR option while
linuxproc is the ProctrackType can cause jobs to run too quickly before
Accounting is able to poll for job information. As a result not all of
the accounting information will be present.
.RE
.TP
\fB\-c\fR, \fB\-\-cpus\-per\-task\fR=<\fIncpus\fR>
Advise the Slurm controller that ensuing job steps will require \fIncpus\fR
number of processors per task. Without this option, the controller will
just try to allocate one processor per task.
For instance,
consider an application that has 4 tasks, each requiring 3 processors. If
our cluster is comprised of quad\-processor nodes and we simply ask for
12 processors, the controller might give us only 3 nodes. However, by using
the \-\-cpus\-per\-task=3 option, the controller knows that each task
requires 3 processors on the same node, and the controller will grant
an allocation of 4 nodes, one for each of the 4 tasks.
.TP
\fB\-d\fR, \fB\-\-dependency\fR=<\fIdependency_list\fR>
Defer the start of this job until the specified dependencies have been
satisfied.
<\fIdependency_list\fR> is of the form
<\fItype:job_id[:job_id][,type:job_id[:job_id]]\fR> or
<\fItype:job_id[:job_id][?type:job_id[:job_id]]\fR>.
All dependencies must be satisfied if the "," separator is used.
Any dependency may be satisfied if the "?" separator is used.
Many jobs can share the same dependency and these jobs may even belong to
different users. The value may be changed after job submission using the
scontrol command.
Once a job dependency fails due to the termination state of a preceding job,
the dependent job will never be run, even if the preceding job is requeued
and has a different termination state in a subsequent execution.
.PD
.RS
.TP
\fBafter:job_id[:jobid...]\fR
This job can begin execution after the specified jobs have begun
execution.
.TP
\fBafterany:job_id[:jobid...]\fR
This job can begin execution after the specified jobs have terminated.
.TP
\fBafternotok:job_id[:jobid...]\fR
This job can begin execution after the specified jobs have terminated
in some failed state (non\-zero exit code, node failure, timed out, etc.).
.TP
\fBafterok:job_id[:jobid...]\fR
This job can begin execution after the specified jobs have successfully
executed (ran to completion with an exit code of zero).
.TP
\fBexpand:job_id\fR
Resources allocated to this job should be used to expand the specified job.
The job to expand must share the same QOS (Quality of Service) and partition.
Gang scheduling of resources in the partition is also not supported.
.TP
\fBsingleton\fR
This job can begin execution after any previously launched jobs sharing the
same job name and user have terminated.
.RE
.TP
\fB\-D\fR, \fB\-\-chdir\fR=<\fIpath\fR>
Change directory to \fIpath\fR before beginning execution. The path can be
specified as a full path or a relative path to the directory where the
command is executed.
.TP
\fB\-\-exclusive[=user]\fR
The job allocation cannot share nodes with other running jobs (or just other
users with the "=user" option).
The default shared/exclusive behavior depends on system configuration and the
partition's \fBShared\fR option takes precedence over the job's option.
.TP
\fB\-F\fR, \fB\-\-nodefile\fR=<\fInode file\fR>
Much like \-\-nodelist, but the list is contained in a file of name
\fInode file\fR. The node names of the list may also span multiple lines
in the file. Duplicate node names in the file will be ignored.
The order of the node names in the list is not important; the node names
will be sorted by Slurm.
.TP
\fB\-\-get\-user\-env\fR[=\fItimeout\fR][\fImode\fR]
This option will load login environment variables for the user specified
in the \fB\-\-uid\fR option.
The environment variables are retrieved by running something of this sort
"su \- <username> \-c /usr/bin/env" and parsing the output.
Be aware that any environment variables already set in salloc's environment
will take precedence over any environment variables in the user's login
environment.
The optional \fItimeout\fR value is in seconds. Default value is 3 seconds.
The optional \fImode\fR value controls the "su" options.
With a \fImode\fR value of "S", "su" is executed without the "\-" option.
With a \fImode\fR value of "L", "su" is executed with the "\-" option,
replicating the login environment.
If \fImode\fR is not specified, the mode established at Slurm build time
is used.
Examples of use include "\-\-get\-user\-env", "\-\-get\-user\-env=10",
"\-\-get\-user\-env=10L", and "\-\-get\-user\-env=S".
NOTE: This option only works if the caller has an effective uid of "root".
This option was originally created for use by Moab.
.TP
\fB\-\-gid\fR=<\fIgroup\fR>
Submit the job with the specified \fIgroup\fR's group access permissions.
\fIgroup\fR may be the group name or the numerical group ID.
In the default Slurm configuration, this option is only valid when used
by the user root.
.TP
\fB\-\-gres\fR=<\fIlist\fR>
Specifies a comma delimited list of generic consumable resources.
The format of each entry on the list is "name[[:type]:count]".
The name is that of the consumable resource.
The count is the number of those resources with a default value of 1.
The specified resources will be allocated to the job on each node.
The available generic consumable resources are configurable by the system
administrator.
A list of available generic consumable resources will be printed and the
command will exit if the option argument is "help".
Examples of use include "\-\-gres=gpu:2,mic:1", "\-\-gres=gpu:kepler:2",
and "\-\-gres=help".
.TP
\fB\-H, \-\-hold\fR
Specify the job is to be submitted in a held state (priority of zero).
A held job can now be released using scontrol to reset its priority
(e.g. "\fIscontrol release \fR").
.TP
\fB\-h\fR, \fB\-\-help\fR
Display help information and exit.
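The "name[[:type]:count]" gres format described above can be illustrated with a small parser. This is a minimal sketch for illustration only, not Slurm's actual parsing code (which handles additional variants).

```python
# Minimal illustrative parser for the gres list format
# "name[[:type]:count]" described above. A sketch only; NOT
# Slurm's actual parser.
def parse_gres(spec):
    entries = []
    for item in spec.split(","):
        parts = item.split(":")
        name, gres_type, count = parts[0], None, 1
        if len(parts) == 3:              # name:type:count
            gres_type, count = parts[1], int(parts[2])
        elif len(parts) == 2:
            if parts[1].isdigit():       # name:count
                count = int(parts[1])
            else:                        # name:type (count defaults to 1)
                gres_type = parts[1]
        entries.append((name, gres_type, count))
    return entries

print(parse_gres("gpu:2,mic:1"))   # [('gpu', None, 2), ('mic', None, 1)]
print(parse_gres("gpu:kepler:2"))  # [('gpu', 'kepler', 2)]
```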
.TP \fB\-\-hint\fR=<\fItype\fR> Bind tasks according to application hints. .RS .TP .B compute_bound Select settings for compute bound applications: use all cores in each socket, one thread per core. .TP .B memory_bound Select settings for memory bound applications: use only one core in each socket, one thread per core. .TP .B [no]multithread [don't] use extra threads with in-core multi-threading which can benefit communication intensive applications. Only supported with the task/affinity plugin. .TP .B help show this help message .RE .TP \fB\-I\fR, \fB\-\-immediate\fR[=<\fIseconds\fR>] exit if resources are not available within the time period specified. If no argument is given, resources must be available immediately for the request to succeed. By default, \fB\-\-immediate\fR is off, and the command will block until resources become available. Since this option's argument is optional, for proper parsing the single letter option must be followed immediately with the value and not include a space between them. For example "\-I60" and not "\-I 60". .TP \fB\-J\fR, \fB\-\-job\-name\fR=<\fIjobname\fR> Specify a name for the job allocation. The specified name will appear along with the job id number when querying running jobs on the system. The default job name is the name of the "command" specified on the command line. .TP \fB\-\-jobid\fR=<\fIjobid\fR> Allocate resources as the specified job id. NOTE: Only valid for user root. .TP \fB\-K\fR, \fB\-\-kill\-command\fR[=\fIsignal\fR] salloc always runs a user\-specified command once the allocation is granted. salloc will wait indefinitely for that command to exit. If you specify the \-\-kill\-command option salloc will send a signal to your command any time that the Slurm controller tells salloc that its job allocation has been revoked. The job allocation can be revoked for a couple of reasons: someone used \fBscancel\fR to revoke the allocation, or the allocation reached its time limit. 
If you do not specify a signal name or number and Slurm is configured to
signal the spawned command at job termination, the default signal is
SIGHUP for interactive and SIGTERM for non\-interactive sessions.
Since this option's argument is optional, for proper parsing the single
letter option must be followed immediately with the value and not include
a space between them. For example "\-K1" and not "\-K 1".
.TP
\fB\-k\fR, \fB\-\-no\-kill\fR
Do not automatically terminate a job if one of the nodes it has been
allocated fails.
The user will assume the responsibilities for fault\-tolerance should
a node fail.
When there is a node failure, any active job steps (usually MPI jobs) on
that node will almost certainly suffer a fatal error, but with
\-\-no\-kill, the job allocation will not be revoked so the user may
launch new job steps on the remaining nodes in their allocation.
By default Slurm terminates the entire job allocation if any node fails
in its range of allocated nodes.
.TP
\fB\-L\fR, \fB\-\-licenses\fR=<\fIlicense\fR>
Specification of licenses (or other resources available on all nodes of
the cluster) which must be allocated to this job.
License names can be followed by a colon and count
(the default count is one).
Multiple license names should be comma separated
(e.g. "\-\-licenses=foo:4,bar").
.TP
\fB\-m\fR, \fB\-\-distribution\fR=
\fIarbitrary\fR|<\fIblock\fR|\fIcyclic\fR|\fIplane=<options>\fR[:\fIblock\fR|\fIcyclic\fR|\fIfcyclic\fR]>
Specify alternate distribution methods for remote processes.
In salloc, this only sets environment variables that will be used by
subsequent srun requests.
This option controls the assignment of tasks to the nodes on which
resources have been allocated, and the distribution of those resources
to tasks for binding (task affinity). The first distribution
method (before the ":") controls the distribution of resources across
nodes.
The optional second distribution method (after the ":") controls the distribution of resources across sockets within a node. Note that with select/cons_res, the number of cpus allocated on each socket and node may be different. Refer to http://slurm.schedmd.com/mc_support.html for more information on resource allocation, assignment of tasks to nodes, and binding of tasks to CPUs. .RS First distribution method: .TP .B block The block distribution method will distribute tasks to a node such that consecutive tasks share a node. For example, consider an allocation of three nodes each with two cpus. A four\-task block distribution request will distribute those tasks to the nodes with tasks one and two on the first node, task three on the second node, and task four on the third node. Block distribution is the default behavior if the number of tasks exceeds the number of allocated nodes. .TP .B cyclic The cyclic distribution method will distribute tasks to a node such that consecutive tasks are distributed over consecutive nodes (in a round\-robin fashion). For example, consider an allocation of three nodes each with two cpus. A four\-task cyclic distribution request will distribute those tasks to the nodes with tasks one and four on the first node, task two on the second node, and task three on the third node. Note that when SelectType is select/cons_res, the same number of CPUs may not be allocated on each node. Task distribution will be round\-robin among all the nodes with CPUs yet to be assigned to tasks. Cyclic distribution is the default behavior if the number of tasks is no larger than the number of allocated nodes. .TP .B plane The tasks are distributed in blocks of a specified size. The options include a number representing the size of the task block. This is followed by an optional specification of the task distribution scheme within a block of tasks and between the blocks of tasks. 
The number of tasks distributed to each node is the same as for cyclic
distribution, but the taskids assigned to each node depend on the plane
size. For more details (including examples and diagrams), please see
.br
http://slurm.schedmd.com/mc_support.html
.br
and
.br
http://slurm.schedmd.com/dist_plane.html
.TP
.B arbitrary
The arbitrary method of distribution will allocate processes in\-order as
listed in the file designated by the environment variable SLURM_HOSTFILE.
If this variable is set it will override any other method specified.
If not set the method will default to block.
The hostfile must contain at minimum the number of hosts requested and
be one per line or comma separated.
If specifying a task count (\fB\-n\fR, \fB\-\-ntasks\fR=<\fInumber\fR>),
your tasks will be laid out on the nodes in the order of the file.
.br
\fBNOTE:\fR The arbitrary distribution option on a job allocation only
controls the nodes to be allocated to the job and not the allocation of
CPUs on those nodes. This option is meant primarily to control a job
step's task layout in an existing job allocation for the srun command.
.TP
Second distribution method:
.TP
.B block
The block distribution method will distribute tasks to sockets such
that consecutive tasks share a socket.
.TP
.B cyclic
The cyclic distribution method will distribute tasks to sockets such that
consecutive tasks are distributed over consecutive sockets (in a
round\-robin fashion).
Tasks requiring more than one CPU will have all of those CPUs allocated on a
single socket if possible.
.TP
.B fcyclic
The fcyclic distribution method will distribute tasks to sockets such that
consecutive tasks are distributed over consecutive sockets (in a
round\-robin fashion).
Tasks requiring more than one CPU will have each CPU allocated in a cyclic
fashion across sockets.
.RE
.TP
\fB\-\-mail\-type\fR=<\fItype\fR>
Notify user by email when certain event types occur.
Valid \fItype\fR values are NONE, BEGIN, END, FAIL, REQUEUE, ALL
(equivalent to BEGIN, END, FAIL, REQUEUE, and STAGE_OUT), STAGE_OUT
(burst buffer stage out completed), TIME_LIMIT, TIME_LIMIT_90 (reached 90
percent of time limit), TIME_LIMIT_80 (reached 80 percent of time limit),
and TIME_LIMIT_50 (reached 50 percent of time limit).
Multiple \fItype\fR values may be specified in a comma separated list.
The user to be notified is indicated with \fB\-\-mail\-user\fR.
.TP
\fB\-\-mail\-user\fR=<\fIuser\fR>
User to receive email notification of state changes as defined by
\fB\-\-mail\-type\fR.
The default value is the submitting user.
.TP
\fB\-\-mem\fR=<\fIMB\fR>
Specify the real memory required per node in MegaBytes.
Default value is \fBDefMemPerNode\fR and the maximum value is
\fBMaxMemPerNode\fR. If configured, both parameters can be seen using the
\fBscontrol show config\fR command.
This parameter would generally be used if whole nodes
are allocated to jobs (\fBSelectType=select/linear\fR).
Also see \fB\-\-mem\-per\-cpu\fR.
\fB\-\-mem\fR and \fB\-\-mem\-per\-cpu\fR are mutually exclusive.
NOTE: A memory size specification of zero is treated as a special case and
grants the job access to all of the memory on each node.
NOTE: Enforcement of memory limits currently relies upon the task/cgroup
plugin or enabling of accounting, which samples memory use on a periodic
basis (data need not be stored, just collected). In both cases memory use
is based upon the job's Resident Set Size (RSS). A task may exceed the
memory limit until the next periodic accounting sample.
.TP
\fB\-\-mem\-per\-cpu\fR=<\fIMB\fR>
Minimum memory required per allocated CPU in MegaBytes.
Default value is \fBDefMemPerCPU\fR and the maximum value is
\fBMaxMemPerCPU\fR (see exception below). If configured, both parameters
can be seen using the \fBscontrol show config\fR command.
Note that if the job's \fB\-\-mem\-per\-cpu\fR value exceeds the configured
\fBMaxMemPerCPU\fR, then the user's limit will be treated as a memory limit
per task; \fB\-\-mem\-per\-cpu\fR will be reduced to a value no larger than
\fBMaxMemPerCPU\fR; \fB\-\-cpus\-per\-task\fR will be set and the value of
\fB\-\-cpus\-per\-task\fR multiplied by the new \fB\-\-mem\-per\-cpu\fR
value will equal the original \fB\-\-mem\-per\-cpu\fR value specified by
the user.
This parameter would generally be used if individual processors
are allocated to jobs (\fBSelectType=select/cons_res\fR).
If resources are allocated by the core, socket, or whole nodes, the number
of CPUs allocated to a job may be higher than the task count and the value
of \fB\-\-mem\-per\-cpu\fR should be adjusted accordingly.
Also see \fB\-\-mem\fR.
\fB\-\-mem\fR and \fB\-\-mem\-per\-cpu\fR are mutually exclusive.
.TP
\fB\-\-mem_bind\fR=[{\fIquiet,verbose\fR},]\fItype\fR
Bind tasks to memory. Used only when the task/affinity plugin is enabled
and the NUMA memory functions are available.
\fBNote that the resolution of CPU and memory binding
may differ on some architectures.\fR
For example, CPU binding may be performed at the level of the cores within
a processor while memory binding will be performed at the level of
nodes, where the definition of "nodes" may differ from system to system.
\fBThe use of any type other than "none" or "local" is not recommended.\fR
If you want greater control, try running a simple test code with the
options "\-\-mem_bind=verbose,none" to determine the specific configuration.
NOTE: To have Slurm always report on the selected memory binding for
all commands executed in a shell, you can enable verbose mode by
setting the SLURM_MEM_BIND environment variable value to "verbose".
The following informational environment variables are set when
\fB\-\-mem_bind\fR is in use:
.nf
SLURM_MEM_BIND_VERBOSE
SLURM_MEM_BIND_TYPE
SLURM_MEM_BIND_LIST
.fi
See the \fBENVIRONMENT VARIABLES\fR section for a more detailed description
of the individual SLURM_MEM_BIND* variables.
Supported options include:
.RS
.TP
.B q[uiet]
quietly bind before task runs (default)
.TP
.B v[erbose]
verbosely report binding before task runs
.TP
.B no[ne]
don't bind tasks to memory (default)
.TP
.B rank
bind by task rank (not recommended)
.TP
.B local
Use memory local to the processor in use
.TP
.B map_mem:<list>
bind by mapping a node's memory to tasks as specified
where <list> is <cpuid1>,<cpuid2>,...<cpuidN>.
CPU IDs are interpreted as decimal values unless they are preceded
with '0x' in which case they are interpreted as hexadecimal values
(not recommended)
.TP
.B mask_mem:<list>
bind by setting memory masks on tasks as specified
where <list> is <mask1>,<mask2>,...<maskN>.
memory masks are \fBalways\fR interpreted as hexadecimal values.
Note that masks must be preceded with a '0x' if they don't begin
with [0\-9] so they are seen as numerical values by srun.
.TP
.B help
show this help message
.RE
.TP
\fB\-\-mincpus\fR=<\fIn\fR>
Specify a minimum number of logical cpus/processors per node.
.TP
\fB\-N\fR, \fB\-\-nodes\fR=<\fIminnodes\fR[\-\fImaxnodes\fR]>
Request that a minimum of \fIminnodes\fR nodes be allocated to this job.
A maximum node count may also be specified with \fImaxnodes\fR.
If only one number is specified, this is used as both the minimum and
maximum node count.
The partition's node limits supersede those of the job.
If a job's node limits are outside of the range permitted for its
associated partition, the job will be left in a PENDING state.
This permits possible execution at a later time, when the partition
limit is changed.
If a job node limit exceeds the number of nodes configured in the
partition, the job will be rejected.
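The node-limit behavior described above (PENDING when the requested range falls outside the partition's limits, rejection when the minimum exceeds the partition's node count) can be sketched as follows. This is an illustrative sketch only, not Slurm's scheduler code; all parameter names are hypothetical.

```python
# Illustrative sketch (NOT Slurm code) of the -N/--nodes limit checks
# described above: a request whose minimum exceeds the nodes configured
# in the partition is rejected outright; a request outside the
# partition's permitted range is left PENDING; otherwise it is eligible.
def check_node_request(min_nodes, max_nodes, part_min, part_max, part_node_count):
    if min_nodes > part_node_count:
        return "REJECTED"
    if min_nodes < part_min or max_nodes > part_max:
        return "PENDING"
    return "ELIGIBLE"

print(check_node_request(4, 8, 1, 16, 32))    # ELIGIBLE
print(check_node_request(4, 32, 1, 16, 32))   # PENDING  (max above partition limit)
print(check_node_request(64, 64, 1, 16, 32))  # REJECTED (more nodes than configured)
```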
Note that the environment variable \fBSLURM_NNODES\fR will be set to the
count of nodes actually allocated to the job. See the \fBENVIRONMENT
VARIABLES\fR section for more information.
If \fB\-N\fR is not specified, the default behavior is to allocate enough
nodes to satisfy the requirements of the \fB\-n\fR and \fB\-c\fR options.
The job will be allocated as many nodes as possible within the range
specified and without delaying the initiation of the job.
The node count specification may include a numeric value followed by a
suffix of "k" (multiplies numeric value by 1,024) or "m" (multiplies
numeric value by 1,048,576).
.TP
\fB\-n\fR, \fB\-\-ntasks\fR=<\fInumber\fR>
salloc does not launch tasks, it requests an allocation of resources and
executes some command. This option advises the Slurm controller that job
steps run within this allocation will launch a maximum of \fInumber\fR
tasks and sufficient resources are allocated to accomplish this.
The default is one task per node, but note that the
\fB\-\-cpus\-per\-task\fR option will change this default.
.TP
\fB\-\-network\fR=<\fItype\fR>
Specify information pertaining to the switch or network.
The interpretation of \fItype\fR is system dependent.
This option is supported when running Slurm on a Cray natively.
It is used to request using Network Performance Counters.
Only one value per request is valid.
All options are case insensitive.
In this configuration supported values include:
.RS
.TP 6
\fBsystem\fR
Use the system\-wide network performance counters.
Only nodes requested will be marked in use for the job allocation.
If the job does not fill up the entire system the rest of the nodes are
not able to be used by other jobs using NPC; if idle, their state will
appear as PerfCnts.
These nodes are still available for other jobs not using NPC.
.TP
\fBblade\fR
Use the blade network performance counters.
Only nodes requested will be marked in use for the job allocation.
If the job does not fill up the entire blade(s) allocated to the job
those blade(s) are not able to be used by other jobs using NPC; if idle,
their state will appear as PerfCnts.
These nodes are still available for other jobs not using NPC.
.RE
.br
.br
In all cases the job allocation request \fBmust specify the
\-\-exclusive option\fR. Otherwise the request will be denied.
.br
.br
Also with any of these options steps are not allowed to share blades, so
resources would remain idle inside an allocation if the step running on a
blade does not take up all the nodes on the blade.
.br
.br
The \fBnetwork\fR option is also supported on systems with IBM's Parallel
Environment (PE). See IBM's LoadLeveler job command keyword documentation
about the keyword "network" for more information.
Multiple values may be specified in a comma separated list.
All options are case insensitive.
Supported values include:
.RS
.TP 12
\fBBULK_XFER\fR[=<\fIresources\fR>]
Enable bulk transfer of data using Remote Direct\-Memory Access (RDMA).
The optional \fIresources\fR specification is a numeric value which can
have a suffix of "k", "K", "m", "M", "g" or "G" for kilobytes, megabytes
or gigabytes.
NOTE: The \fIresources\fR specification is not supported by the underlying
IBM infrastructure as of Parallel Environment version 2.2 and no value
should be specified at this time.
.TP
\fBCAU\fR=<\fIcount\fR>
Number of Collective Acceleration Units (CAU) required.
Applies only to IBM Power7\-IH processors.
Default value is zero.
Independent CAU will be allocated for each programming interface
(MPI, LAPI, etc.)
.TP
\fBDEVNAME\fR=<\fIname\fR>
Specify the device name to use for communications (e.g. "eth0" or "mlx4_0").
.TP
\fBDEVTYPE\fR=<\fItype\fR>
Specify the device type to use for communications.
The supported values of \fItype\fR are:
"IB" (InfiniBand), "HFI" (P7 Host Fabric Interface),
"IPONLY" (IP\-Only interfaces), "HPCE" (HPC Ethernet), and
"KMUX" (Kernel Emulation of HPCE).
The devices allocated to a job must all be of the same type.
The default value depends upon what hardware is available and in order
of preference is IPONLY (which is not considered in User Space mode),
HFI, IB, HPCE, and KMUX.
.TP
\fBIMMED\fR=<\fIcount\fR>
Number of immediate send slots per window required.
Applies only to IBM Power7\-IH processors.
Default value is zero.
.TP
\fBINSTANCES\fR=<\fIcount\fR>
Specify number of network connections for each task on each network
connection. The default instance count is 1.
.TP
\fBIPV4\fR
Use Internet Protocol (IP) version 4 communications (default).
.TP
\fBIPV6\fR
Use Internet Protocol (IP) version 6 communications.
.TP
\fBLAPI\fR
Use the LAPI programming interface.
.TP
\fBMPI\fR
Use the MPI programming interface. MPI is the default interface.
.TP
\fBPAMI\fR
Use the PAMI programming interface.
.TP
\fBSHMEM\fR
Use the OpenSHMEM programming interface.
.TP
\fBSN_ALL\fR
Use all available switch networks (default).
.TP
\fBSN_SINGLE\fR
Use one available switch network.
.TP
\fBUPC\fR
Use the UPC programming interface.
.TP
\fBUS\fR
Use User Space communications.
.TP
Some examples of network specifications:
.TP
\fBInstances=2,US,MPI,SN_ALL\fR
Create two user space connections for MPI communications on every switch
network for each task.
.TP
\fBUS,MPI,Instances=3,Devtype=IB\fR
Create three user space connections for MPI communications on every
InfiniBand network for each task.
.TP
\fBIPV4,LAPI,SN_Single\fR
Create an IP version 4 connection for LAPI communications on one switch
network for each task.
.TP
\fBInstances=2,US,LAPI,MPI\fR
Create two user space connections each for LAPI and MPI communications on
every switch network for each task. Note that SN_ALL is the default
option so every switch network is used. Also note that Instances=2
specifies that two connections are established for each protocol (LAPI
and MPI) and each task.
If there are two networks and four tasks on the node then a total of 32 connections are established (2 instances x 2 protocols x 2 networks x 4 tasks). .RE .TP \fB\-\-nice\fR[=\fIadjustment\fR] Run the job with an adjusted scheduling priority within Slurm. With no adjustment value the scheduling priority is decreased by 100. The adjustment range is from \-10000 (highest priority) to 10000 (lowest priority). Only privileged users can specify a negative adjustment. NOTE: This option is presently ignored if \fISchedulerType=sched/wiki\fR or \fISchedulerType=sched/wiki2\fR. .TP \fB\-\-ntasks\-per\-core\fR=<\fIntasks\fR> Request the maximum \fIntasks\fR be invoked on each core. Meant to be used with the \fB\-\-ntasks\fR option. Related to \fB\-\-ntasks\-per\-node\fR except at the core level instead of the node level. NOTE: This option is not supported unless \fISelectTypeParameters=CR_Core\fR or \fISelectTypeParameters=CR_Core_Memory\fR is configured. .TP \fB\-\-ntasks\-per\-socket\fR=<\fIntasks\fR> Request the maximum \fIntasks\fR be invoked on each socket. Meant to be used with the \fB\-\-ntasks\fR option. Related to \fB\-\-ntasks\-per\-node\fR except at the socket level instead of the node level. NOTE: This option is not supported unless \fISelectTypeParameters=CR_Socket\fR or \fISelectTypeParameters=CR_Socket_Memory\fR is configured. .TP \fB\-\-ntasks\-per\-node\fR=<\fIntasks\fR> Request that \fIntasks\fR be invoked on each node. If used with the \fB\-\-ntasks\fR option, the \fB\-\-ntasks\fR option will take precedence and the \fB\-\-ntasks\-per\-node\fR will be treated as a \fImaximum\fR count of tasks per node. Meant to be used with the \fB\-\-nodes\fR option. This is related to \fB\-\-cpus\-per\-task\fR=\fIncpus\fR, but does not require knowledge of the actual number of cpus on each node. In some cases, it is more convenient to be able to request that no more than a specific number of tasks be invoked on each node. 
Examples of this include submitting a hybrid MPI/OpenMP app where only one MPI "task/rank" should be assigned to each node while allowing the OpenMP portion to utilize all of the parallelism present in the node, or submitting a single setup/cleanup/monitoring job to each node of a pre\-existing allocation as one step in a larger job script. .TP \fB\-\-no\-bell\fR Silence salloc's use of the terminal bell. Also see the option \fB\-\-bell\fR. .TP \fB\-\-no\-shell\fR immediately exit after allocating resources, without running a command. However, the Slurm job will still be created and will remain active and will own the allocated resources as long as it is active. You will have a Slurm job id with no associated processes or tasks. You can submit \fBsrun\fR commands against this resource allocation, if you specify the \fB\-\-jobid=\fR option with the job id of this Slurm job. Or, this can be used to temporarily reserve a set of resources so that other jobs cannot use them for some period of time. (Note that the Slurm job is subject to the normal constraints on jobs, including time limits, so that eventually the job will terminate and the resources will be freed, or you can terminate the job manually using the \fBscancel\fR command.) .TP \fB\-O\fR, \fB\-\-overcommit\fR Overcommit resources. When applied to job allocation, only one CPU is allocated to the job per node and options used to specify the number of tasks per node, socket, core, etc. are ignored. When applied to job step allocations (the \fBsrun\fR command when executed within an existing job allocation), this option can be used to launch more than one task per CPU. Normally, \fBsrun\fR will not allocate more than one process per CPU. By specifying \fB\-\-overcommit\fR you are explicitly allowing more than one process per CPU. However no more than \fBMAX_TASKS_PER_NODE\fR tasks are permitted to execute per node. 
NOTE: \fBMAX_TASKS_PER_NODE\fR is defined in the file \fIslurm.h\fR and is not a variable, it is set at Slurm build time. .TP \fB\-\-power\fR=<\fIflags\fR> Comma separated list of power management plugin options. Currently available flags include: level (all nodes allocated to the job should have identical power caps, may be disabled by the Slurm configuration option PowerParameters=job_no_level). .TP \fB\-\-priority\fR=<\fIvalue\fR> Request a specific job priority. May be subject to configuration specific constraints. Only Slurm operators and administrators can set the priority of a job. .TP \fB\-\-profile\fR=<all|none|[energy[,|task[,|lustre[,|network]]]]> Enables detailed data collection by the acct_gather_profile plugin. Detailed data are typically time-series that are stored in an HDF5 file for the job. .RS .TP 10 \fBAll\fR All data types are collected. (Cannot be combined with other values.) .TP \fBNone\fR No data types are collected. This is the default. (Cannot be combined with other values.) .TP \fBEnergy\fR Energy data is collected. .TP \fBTask\fR Task (I/O, Memory, ...) data is collected. .TP \fBLustre\fR Lustre data is collected. .TP \fBNetwork\fR Network (InfiniBand) data is collected. .RE .TP \fB\-p\fR, \fB\-\-partition\fR=<\fIpartition_names\fR> Request a specific partition for the resource allocation. If not specified, the default behavior is to allow the slurm controller to select the default partition as designated by the system administrator. If the job can use more than one partition, specify their names in a comma separated list and the one offering earliest initiation will be used with no regard given to the partition name ordering (although higher priority partitions will be considered first). When the job is initiated, the name of the partition used will be placed first in the job record partition string. .TP \fB\-Q\fR, \fB\-\-quiet\fR Suppress informational messages from salloc. Errors will still be displayed. .TP \fB\-\-qos\fR=<\fIqos\fR> Request a quality of service for the job. 
QOS values can be defined for each user/cluster/account association in the Slurm database. Users will be limited to their association's defined set of qos's when the Slurm configuration parameter, AccountingStorageEnforce, includes "qos" in its definition. .TP \fB\-\-reboot\fR Force the allocated nodes to reboot before starting the job. This is only supported with some system configurations and will otherwise be silently ignored. .TP \fB\-\-reservation\fR=<\fIname\fR> Allocate resources for the job from the named reservation. .TP \fB\-s\fR, \fB\-\-share\fR The job allocation can share resources with other running jobs. The resources to be shared can be nodes, sockets, cores, or hyperthreads depending upon configuration. The default shared behavior depends on system configuration and the partition's \fBShared\fR option takes precedence over the job's option. This option may result in the allocation being granted sooner than if the \-\-share option was not set and allow higher system utilization, but application performance will likely suffer due to competition for resources. Also see the \-\-exclusive option. .TP \fB\-S\fR, \fB\-\-core\-spec\fR=<\fInum\fR> Count of specialized cores per node reserved by the job for system operations and not used by the application. The application will not use these cores, but will be charged for their allocation. Default value is dependent upon the node's configured CoreSpecCount value. If a value of zero is designated and the Slurm configuration option AllowSpecResourcesUsage is enabled, the job will be allowed to override CoreSpecCount and use the specialized resources on nodes it is allocated. This option can not be used with the \fB\-\-thread\-spec\fR option. .TP \fB\-\-sicp\fR Identify a job as one which jobs submitted to other clusters can be dependent upon. .TP \fB\-\-signal\fR=<\fIsig_num\fR>[@<\fIsig_time\fR>] When a job is within \fIsig_time\fR seconds of its end time, send it the signal \fIsig_num\fR. 
Due to the resolution of event handling by Slurm, the signal may be sent up to 60 seconds earlier than specified. \fIsig_num\fR may either be a signal number or name (e.g. "10" or "USR1"). \fIsig_time\fR must have an integer value between 0 and 65535. By default, no signal is sent before the job's end time. If a \fIsig_num\fR is specified without any \fIsig_time\fR, the default time will be 60 seconds. .TP \fB\-\-sockets\-per\-node\fR=<\fIsockets\fR> Restrict node selection to nodes with at least the specified number of sockets. See additional information under \fB\-B\fR option above when task/affinity plugin is enabled. .TP \fB\-\-switches\fR=<\fIcount\fR>[@<\fImax\-time\fR>] When a tree topology is used, this defines the maximum count of switches desired for the job allocation and optionally the maximum time to wait for that number of switches. If Slurm finds an allocation containing more switches than the count specified, the job remains pending until it either finds an allocation with desired switch count or the time limit expires. If there is no switch count limit, there is no delay in starting the job. Acceptable time formats include "minutes", "minutes:seconds", "hours:minutes:seconds", "days\-hours", "days\-hours:minutes" and "days\-hours:minutes:seconds". The job's maximum time delay may be limited by the system administrator using the \fBSchedulerParameters\fR configuration parameter with the \fBmax_switch_wait\fR parameter option. The default max\-time is the max_switch_wait SchedulerParameters value. .TP \fB\-t\fR, \fB\-\-time\fR=<\fItime\fR> Set a limit on the total run time of the job allocation. If the requested time limit exceeds the partition's time limit, the job will be left in a PENDING state (possibly indefinitely). The default time limit is the partition's default time limit. When the time limit is reached, each task in each job step is sent SIGTERM followed by SIGKILL. 
The interval between signals is specified by the Slurm configuration parameter \fBKillWait\fR. The \fBOverTimeLimit\fR configuration parameter may permit the job to run longer than scheduled. Time resolution is one minute and second values are rounded up to the next minute. A time limit of zero requests that no time limit be imposed. Acceptable time formats include "minutes", "minutes:seconds", "hours:minutes:seconds", "days\-hours", "days\-hours:minutes" and "days\-hours:minutes:seconds". .TP \fB\-\-thread\-spec\fR=<\fInum\fR> Count of specialized threads per node reserved by the job for system operations and not used by the application. The application will not use these threads, but will be charged for their allocation. This option can not be used with the \fB\-\-core\-spec\fR option. .TP \fB\-\-threads\-per\-core\fR=<\fIthreads\fR> Restrict node selection to nodes with at least the specified number of threads per core. NOTE: "Threads" refers to the number of processing units on each core rather than the number of application tasks to be launched per core. See additional information under \fB\-B\fR option above when task/affinity plugin is enabled. .TP \fB\-\-time\-min\fR=<\fItime\fR> Set a minimum time limit on the job allocation. If specified, the job may have its \fB\-\-time\fR limit lowered to a value no lower than \fB\-\-time\-min\fR if doing so permits the job to begin execution earlier than otherwise possible. The job's time limit will not be changed after the job is allocated resources. This is performed by a backfill scheduling algorithm to allocate resources otherwise reserved for higher priority jobs. Acceptable time formats include "minutes", "minutes:seconds", "hours:minutes:seconds", "days\-hours", "days\-hours:minutes" and "days\-hours:minutes:seconds". .TP \fB\-\-tmp\fR=<\fIMB\fR> Specify a minimum amount of temporary disk space. .TP \fB\-u\fR, \fB\-\-usage\fR Display brief help message and exit. 
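The time strings accepted by options such as \fB\-\-switches\fR, \fB\-\-time\fR and \fB\-\-time\-min\fR all share one grammar. As a rough sketch of how such a value maps to a number of seconds (a hypothetical helper written for illustration, not part of Slurm; note that Slurm itself rounds time limits up to whole minutes):

```python
def slurm_time_to_seconds(spec):
    """Convert a Slurm time string to seconds.

    Accepts the documented forms: "minutes", "minutes:seconds",
    "hours:minutes:seconds", "days-hours", "days-hours:minutes"
    and "days-hours:minutes:seconds".
    """
    if "-" in spec:                       # a leading days field is present
        days, rest = spec.split("-", 1)
        h, m, s = (rest.split(":") + ["0", "0"])[:3]
        return ((int(days) * 24 + int(h)) * 60 + int(m)) * 60 + int(s)
    fields = spec.split(":")
    if len(fields) == 1:                  # "minutes"
        return int(fields[0]) * 60
    if len(fields) == 2:                  # "minutes:seconds"
        return int(fields[0]) * 60 + int(fields[1])
    h, m, s = fields                      # "hours:minutes:seconds"
    return (int(h) * 60 + int(m)) * 60 + int(s)
```

For example, slurm_time_to_seconds("1-2:30") yields 95400 (one day, two hours and thirty minutes).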
.TP \fB\-\-uid\fR=<\fIuser\fR> Attempt to submit and/or run a job as \fIuser\fR instead of the invoking user id. The invoking user's credentials will be used to check access permissions for the target partition. This option is only valid for user root. User root may use this option to run jobs as a normal user in a RootOnly partition, for example. If run as root, \fBsalloc\fR will drop its permissions to the uid specified after node allocation is successful. \fIuser\fR may be the user name or numerical user ID. .TP \fB\-V\fR, \fB\-\-version\fR Display version information and exit. .TP \fB\-v\fR, \fB\-\-verbose\fR Increase the verbosity of salloc's informational messages. Multiple \fB\-v\fR's will further increase salloc's verbosity. By default only errors will be displayed. .TP \fB\-w\fR, \fB\-\-nodelist\fR=<\fInode name list\fR> Request a specific list of hosts. The job will contain \fIall\fR of these hosts and possibly additional hosts as needed to satisfy resource requirements. The list may be specified as a comma\-separated list of hosts, a range of hosts (host[1\-5,7,...] for example), or a filename. The host list will be assumed to be a filename if it contains a "/" character. If you specify a minimum node or processor count larger than can be satisfied by the supplied host list, additional resources will be allocated on other nodes as needed. Duplicate node names in the list will be ignored. The order of the node names in the list is not important; the node names will be sorted by Slurm. .TP \fB\-\-wait\-all\-nodes\fR=<\fIvalue\fR> Controls when the execution of the command begins. By default the job will begin execution as soon as the allocation is made. .RS .TP 5 0 Begin execution as soon as allocation can be made. Do not wait for all nodes to be ready for use (i.e. booted). .TP 1 Do not begin execution until all nodes are ready for use. .RE .TP \fB\-\-wckey\fR=<\fIwckey\fR> Specify wckey to be used with job. 
If TrackWCKey=no (default) in the slurm.conf this value is ignored. .TP \fB\-x\fR, \fB\-\-exclude\fR=<\fInode name list\fR> Explicitly exclude certain nodes from the resources granted to the job. .PP The following options support Blue Gene systems, but may be applicable to other systems as well. .TP \fB\-\-blrts\-image\fR=<\fIpath\fR> Path to blrts image for bluegene block. BGL only. Default from \fIbluegene.conf\fR if not set. .TP \fB\-\-cnload\-image\fR=<\fIpath\fR> Path to compute node image for bluegene block. BGP only. Default from \fIbluegene.conf\fR if not set. .TP \fB\-\-conn\-type\fR=<\fItype\fR> Require the block connection type to be of a certain type. On Blue Gene the acceptable values of \fItype\fR are MESH, TORUS and NAV. If NAV, or if not set, then Slurm will try to fit what DefaultConnType is set to in the bluegene.conf; if that isn't set, the default is TORUS. You should not normally set this option. If running on a BGP system and wanting to run in HTC mode (only for 1 midplane and below), you can use HTC_S for SMP, HTC_D for Dual, HTC_V for virtual node mode, and HTC_L for Linux mode. For systems that allow a different connection type per dimension, a comma separated list of connection types may be specified, one for each dimension (i.e. M,T,T,T will give you a torus connection in all dimensions except the first). .TP \fB\-g\fR, \fB\-\-geometry\fR=<\fIXxYxZ\fR> | <\fIAxXxYxZ\fR> Specify the geometry requirements for the job. On BlueGene/L and BlueGene/P systems there are three numbers giving dimensions in the X, Y and Z directions, while on BlueGene/Q systems there are four numbers giving dimensions in the A, X, Y and Z directions, and the option can not be used to allocate sub-blocks. For example "\-\-geometry=1x2x3x4", specifies a block of nodes having 1 x 2 x 3 x 4 = 24 nodes (actually midplanes on BlueGene). .TP \fB\-\-ioload\-image\fR=<\fIpath\fR> Path to io image for bluegene block. BGP only. Default from \fIbluegene.conf\fR if not set. 
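The node count implied by a \fB\-\-geometry\fR string is simply the product of its dimensions, as in the 1x2x3x4 = 24 example above. A small illustrative sketch (the helper name is hypothetical, not a Slurm API):

```python
def geometry_node_count(geometry):
    """Product of the dimensions in a --geometry spec, e.g. "1x2x3x4".

    On BlueGene the counted units are actually midplanes, per the man page.
    """
    count = 1
    for dim in geometry.lower().split("x"):
        count *= int(dim)
    return count
```

For example, geometry_node_count("1x2x3x4") returns 24 and geometry_node_count("2x2x2") returns 8.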
.TP \fB\-\-linux\-image\fR=<\fIpath\fR> Path to linux image for bluegene block. BGL only. Default from \fIbluegene.conf\fR if not set. .TP \fB\-\-mloader\-image\fR=<\fIpath\fR> Path to mloader image for bluegene block. Default from \fIbluegene.conf\fR if not set. .TP \fB\-R\fR, \fB\-\-no\-rotate\fR Disables rotation of the job's requested geometry in order to fit an appropriate block. By default the specified geometry can rotate in three dimensions. .TP \fB\-\-ramdisk\-image\fR=<\fIpath\fR> Path to ramdisk image for bluegene block. BGL only. Default from \fIbluegene.conf\fR if not set. .SH "INPUT ENVIRONMENT VARIABLES" .PP Upon startup, salloc will read and handle the options set in the following environment variables. Note: Command line options always override environment variables settings. .TP 22 \fBSALLOC_ACCOUNT\fR Same as \fB\-A, \-\-account\fR .TP \fBSALLOC_ACCTG_FREQ\fR Same as \fB\-\-acctg\-freq\fR .TP \fBSALLOC_BELL\fR Same as \fB\-\-bell\fR .TP \fBSALLOC_BURST_BUFFER\fR Same as \fB\-\-bb\fR .TP \fBSALLOC_CONN_TYPE\fR Same as \fB\-\-conn\-type\fR .TP \fBSALLOC_CORE_SPEC\fR Same as \fB\-\-core\-spec\fR .TP \fBSALLOC_DEBUG\fR Same as \fB\-v, \-\-verbose\fR .TP \fBSALLOC_EXCLUSIVE\fR Same as \fB\-\-exclusive\fR .TP \fBSALLOC_GEOMETRY\fR Same as \fB\-g, \-\-geometry\fR .TP \fBSALLOC_HINT\fR or \fBSLURM_HINT\fR Same as \fB\-\-hint\fR .TP \fBSALLOC_IMMEDIATE\fR Same as \fB\-I, \-\-immediate\fR .TP \fBSALLOC_JOBID\fR Same as \fB\-\-jobid\fR .TP \fBSALLOC_KILL_CMD\fR Same as \fB\-K\fR, \fB\-\-kill\-command\fR .TP \fBSALLOC_MEM_BIND\fR Same as \fB\-\-mem_bind\fR .TP \fBSALLOC_NETWORK\fR Same as \fB\-\-network\fR .TP \fBSALLOC_NO_BELL\fR Same as \fB\-\-no\-bell\fR .TP \fBSALLOC_NO_ROTATE\fR Same as \fB\-R, \-\-no\-rotate\fR .TP \fBSALLOC_OVERCOMMIT\fR Same as \fB\-O, \-\-overcommit\fR .TP \fBSALLOC_PARTITION\fR Same as \fB\-p, \-\-partition\fR .TP \fBSALLOC_POWER\fR Same as \fB\-\-power\fR .TP \fBSALLOC_PROFILE\fR Same as \fB\-\-profile\fR .TP \fBSALLOC_QOS\fR Same as 
\fB\-\-qos\fR .TP \fBSALLOC_REQ_SWITCH\fR When a tree topology is used, this defines the maximum count of switches desired for the job allocation and optionally the maximum time to wait for that number of switches. See \fB\-\-switches\fR. .TP \fBSALLOC_RESERVATION\fR Same as \fB\-\-reservation\fR .TP \fBSALLOC_SICP\fR Same as \fB\-\-sicp\fR .TP \fBSALLOC_SIGNAL\fR Same as \fB\-\-signal\fR .TP \fBSALLOC_THREAD_SPEC\fR Same as \fB\-\-thread\-spec\fR .TP \fBSALLOC_TIMELIMIT\fR Same as \fB\-t, \-\-time\fR .TP \fBSALLOC_WAIT_ALL_NODES\fR Same as \fB\-\-wait\-all\-nodes\fR .TP \fBSALLOC_WCKEY\fR Same as \fB\-\-wckey\fR .TP \fBSALLOC_WAIT4SWITCH\fR Max time waiting for requested switches. See \fB\-\-switches\fR .TP \fBSLURM_CONF\fR The location of the Slurm configuration file. .TP \fBSLURM_EXIT_ERROR\fR Specifies the exit code generated when a Slurm error occurs (e.g. invalid options). This can be used by a script to distinguish application exit codes from various Slurm error conditions. Also see \fBSLURM_EXIT_IMMEDIATE\fR. .TP \fBSLURM_EXIT_IMMEDIATE\fR Specifies the exit code generated when the \fB\-\-immediate\fR option is used and resources are not currently available. This can be used by a script to distinguish application exit codes from various Slurm error conditions. Also see \fBSLURM_EXIT_ERROR\fR. .SH "OUTPUT ENVIRONMENT VARIABLES" .PP salloc will set the following environment variables in the environment of the executed program: .TP \fBBASIL_RESERVATION_ID\fR The reservation ID on Cray systems running ALPS/BASIL only. .TP \fBSLURM_CLUSTER_NAME\fR Name of the cluster on which the job is executing. .TP \fBMPIRUN_NOALLOCATE\fR Do not allocate a block on Blue Gene L/P systems only. .TP \fBMPIRUN_NOFREE\fR Do not free a block on Blue Gene L/P systems only. .TP \fBMPIRUN_PARTITION\fR The block name on Blue Gene systems only. .TP \fBSLURM_CPUS_PER_TASK\fR Number of cpus requested per task. Only set if the \fB\-\-cpus\-per\-task\fR option is specified. 
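A program launched inside the allocation can simply read the output variables described here from its environment. A minimal sketch (the helper name and the fallback of 1 CPU per task when \fBSLURM_CPUS_PER_TASK\fR is unset are this example's assumptions, mirroring the note that the variable is only set when \fB\-\-cpus\-per\-task\fR was given):

```python
import os

def allocation_info(env=None):
    """Read a few of the salloc output environment variables.

    SLURM_CPUS_PER_TASK is only set when --cpus-per-task was requested;
    treating "unset" as one CPU per task is this sketch's assumption.
    """
    env = os.environ if env is None else env
    return {
        "cluster": env.get("SLURM_CLUSTER_NAME"),
        "cpus_per_task": int(env.get("SLURM_CPUS_PER_TASK", "1")),
    }
```

Passing an explicit mapping instead of relying on os.environ keeps the helper easy to test outside an allocation.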
.TP \fBSLURM_DISTRIBUTION\fR Same as \fB\-m, \-\-distribution\fR .TP \fBSLURM_JOB_ID\fR (and \fBSLURM_JOBID\fR for backwards compatibility) The ID of the job allocation. .TP \fBSLURM_JOB_CPUS_PER_NODE\fR Count of processors available to the job on this node. Note the select/linear plugin allocates entire nodes to jobs, so the value indicates the total count of CPUs on each node. The select/cons_res plugin allocates individual processors to jobs, so this number indicates the number of processors on each node allocated to the job allocation. .TP \fBSLURM_JOB_NODELIST\fR (and \fBSLURM_NODELIST\fR for backwards compatibility) List of nodes allocated to the job. .TP \fBSLURM_JOB_NUM_NODES\fR (and \fBSLURM_NNODES\fR for backwards compatibility) Total number of nodes in the job allocation. .TP \fBSLURM_JOB_PARTITION\fR Name of the partition in which the job is running. .TP \fBSLURM_MEM_BIND\fR Set to the value of the \fB\-\-mem_bind\fR option. .TP \fBSLURM_SUBMIT_DIR\fR The directory from which \fBsalloc\fR was invoked. .TP \fBSLURM_SUBMIT_HOST\fR The hostname of the computer from which \fBsalloc\fR was invoked. .TP \fBSLURM_NODE_ALIASES\fR Sets of node name, communication address and hostname for nodes allocated to the job from the cloud. Each element in the set is colon separated and each set is comma separated. For example: SLURM_NODE_ALIASES=ec0:1.2.3.4:foo,ec1:1.2.3.5:bar .TP \fBSLURM_NTASKS\fR Same as \fB\-n, \-\-ntasks\fR .TP \fBSLURM_NTASKS_PER_NODE\fR Set to the value of the \fB\-\-ntasks\-per\-node\fR option, if specified. .TP \fBSLURM_PROFILE\fR Same as \fB\-\-profile\fR .TP \fBSLURM_TASKS_PER_NODE\fR Number of tasks to be initiated on each node. Values are comma separated and in the same order as SLURM_NODELIST. If two or more consecutive nodes are to have the same task count, that count is followed by "(x#)" where "#" is the repetition count. 
For example, "SLURM_TASKS_PER_NODE=2(x3),1" indicates that the first three nodes will each execute two tasks and the fourth node will execute one task. .SH "SIGNALS" .LP While salloc is waiting for a PENDING job allocation, most signals will cause salloc to revoke the allocation request and exit. However if the allocation has been granted and salloc has already started the specified command, then salloc will ignore most signals. salloc will not exit or release the allocation until the command exits. One notable exception is SIGHUP. A SIGHUP signal will cause salloc to release the allocation and exit without waiting for the command to finish. Another exception is SIGTERM, which will be forwarded to the spawned process. .SH "EXAMPLES" .LP To get an allocation, and open a new xterm in which srun commands may be typed interactively: .IP $ salloc \-N16 xterm .br salloc: Granted job allocation 65537 .br (at this point the xterm appears, and salloc waits for xterm to exit) .br salloc: Relinquishing job allocation 65537 .LP To grab an allocation of nodes and launch a parallel application on one command line (See the \fBsalloc\fR man page for more examples): .IP salloc \-N5 srun \-n10 myprogram .SH "COPYING" Copyright (C) 2006\-2007 The Regents of the University of California. Produced at Lawrence Livermore National Laboratory (cf, DISCLAIMER). .br Copyright (C) 2008\-2010 Lawrence Livermore National Security. .br Copyright (C) 2010\-2015 SchedMD LLC. .LP This file is part of Slurm, a resource management program. For details, see <http://slurm.schedmd.com/>. .LP Slurm is free software; you can redistribute it and/or modify it under the terms of the GNU General Public License as published by the Free Software Foundation; either version 2 of the License, or (at your option) any later version. .LP Slurm is distributed in the hope that it will be useful, but WITHOUT ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. 
See the GNU General Public License for more details. .SH "SEE ALSO" .LP \fBsinfo\fR(1), \fBsattach\fR(1), \fBsbatch\fR(1), \fBsqueue\fR(1), \fBscancel\fR(1), \fBscontrol\fR(1), \fBslurm.conf\fR(5), \fBsched_setaffinity\fR (2), \fBnuma\fR (3) slurm-slurm-15-08-7-1/doc/man/man1/sattach.1 .TH sattach "1" "Slurm Commands" "April 2015" "Slurm Commands" .SH "NAME" .LP sattach \- Attach to a Slurm job step. .SH "SYNOPSIS" .LP sattach [\fIoptions\fP] <jobid.stepid> .SH "DESCRIPTION" .LP sattach attaches to a running Slurm job step. By attaching, it makes available the IO streams of all of the tasks of a running Slurm job step. It is also suitable for use with a parallel debugger like TotalView. .SH "OPTIONS" .LP .TP \fB\-h\fR, \fB\-\-help\fR Display help information and exit. .TP \fB\-\-input\-filter\fR[=]<\fItask number\fR> .PD 0 .TP \fB\-\-output\-filter\fR[=]<\fItask number\fR> .PD 0 .TP \fB\-\-error\-filter\fR[=]<\fItask number\fR> .PD Only transmit standard input to a single task, or print the standard output or standard error from a single task. The filtering is performed locally in sattach. .TP \fB\-l\fR, \fB\-\-label\fR Prepend each line of task standard output or standard error with the task number of its origin. .TP \fB\-\-layout\fR Contacts the slurmctld to obtain the task layout information for the job step, prints the task layout information, and then exits without attaching to the job step. .TP \fB\-\-pty\fR Execute task zero in pseudo terminal. Not compatible with the \fB\-\-input\-filter\fR, \fB\-\-output\-filter\fR, or \fB\-\-error\-filter\fR options. Notes: The terminal size and resize events are ignored by sattach. Proper operation requires that the job step be initiated by srun using the \-\-pty option. Not currently supported on AIX platforms. .TP \fB\-Q\fR, \fB\-\-quiet\fR Suppress informational messages from sattach. Errors will still be displayed. 
.TP \fB\-u\fR, \fB\-\-usage\fR Display brief usage message and exit. .TP \fB\-V\fR, \fB\-\-version\fR Display Slurm version number and exit. .TP \fB\-v\fR, \fB\-\-verbose\fR Increase the verbosity of sattach's informational messages. Multiple \fB\-v\fR's will further increase sattach's verbosity. .SH "INPUT ENVIRONMENT VARIABLES" .PP Upon startup, sattach will read and handle the options set in the following environment variables. Note: Command line options always override environment variables settings. .TP 20 \fBSLURM_CONF\fR The location of the Slurm configuration file. .TP \fBSLURM_EXIT_ERROR\fR Specifies the exit code generated when a Slurm error occurs (e.g. invalid options). This can be used by a script to distinguish application exit codes from various Slurm error conditions. .SH "EXAMPLES" .LP sattach 15.0 sattach \-\-output\-filter 5 65386.15 .SH "COPYING" Copyright (C) 2006\-2007 The Regents of the University of California. Produced at Lawrence Livermore National Laboratory (cf, DISCLAIMER). .br Copyright (C) 2008\-2009 Lawrence Livermore National Security. .br Copyright (C) 2010\-2013 SchedMD LLC. .LP This file is part of Slurm, a resource management program. For details, see <http://slurm.schedmd.com/>. .LP Slurm is free software; you can redistribute it and/or modify it under the terms of the GNU General Public License as published by the Free Software Foundation; either version 2 of the License, or (at your option) any later version. .LP Slurm is distributed in the hope that it will be useful, but WITHOUT ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU General Public License for more details. 
.SH "SEE ALSO" .LP \fBsinfo\fR(1), \fBsalloc\fR(1), \fBsbatch\fR(1), \fBsqueue\fR(1), \fBscancel\fR(1), \fBscontrol\fR(1), \fBslurm.conf\fR(5), \fBsched_setaffinity\fR (2), \fBnuma\fR (3) slurm-slurm-15-08-7-1/doc/man/man1/sbatch.1 .TH sbatch "1" "Slurm Commands" "April 2015" "Slurm Commands" .SH "NAME" sbatch \- Submit a batch script to Slurm. .SH "SYNOPSIS" sbatch [\fIoptions\fP] \fIscript\fP [\fIargs\fP...] .SH "DESCRIPTION" sbatch submits a batch script to Slurm. The batch script may be given to sbatch through a file name on the command line, or if no file name is specified, sbatch will read in a script from standard input. The batch script may contain options preceded with "#SBATCH" before any executable commands in the script. sbatch exits immediately after the script is successfully transferred to the Slurm controller and assigned a Slurm job ID. The batch script is not necessarily granted resources immediately, it may sit in the queue of pending jobs for some time before its required resources become available. By default both standard output and standard error are directed to a file of the name "slurm\-%j.out", where the "%j" is replaced with the job allocation number. The file will be generated on the first node of the job allocation. Other than the batch script itself, Slurm does no movement of user files. When the job allocation is finally granted for the batch script, Slurm runs a single copy of the batch script on the first node in the set of allocated nodes. The following document describes the influence of various options on the allocation of cpus to jobs and tasks. .br http://slurm.schedmd.com/cpu_management.html .SH "OPTIONS" .LP .TP \fB\-a\fR, \fB\-\-array\fR=<\fIindexes\fR> Submit a job array, multiple jobs to be executed with identical parameters. The \fIindexes\fR specification identifies what array index values should be used. 
Multiple values may be specified using a comma separated list and/or a range of values with a "\-" separator. For example, "\-\-array=0\-15" or "\-\-array=0,6,16\-32". A step function can also be specified with a suffix containing a colon and number. For example, "\-\-array=0\-15:4" is equivalent to "\-\-array=0,4,8,12". A maximum number of simultaneously running tasks from the job array may be specified using a "%" separator. For example "\-\-array=0\-15%4" will limit the number of simultaneously running tasks from this job array to 4. The minimum index value is 0. The maximum value is one less than the configuration parameter MaxArraySize. .TP \fB\-A\fR, \fB\-\-account\fR=<\fIaccount\fR> Charge resources used by this job to specified account. The \fIaccount\fR is an arbitrary string. The account name may be changed after job submission using the \fBscontrol\fR command. .TP \fB\-\-acctg\-freq\fR Define the job accounting and profiling sampling intervals. This can be used to override the \fIJobAcctGatherFrequency\fR parameter in Slurm's configuration file, \fIslurm.conf\fR. The supported format is as follows: .RS .TP 12 \fB\-\-acctg\-freq=\fR\fI<datatype>\fR\fB=\fR\fI<interval>\fR where \fI<datatype>\fR=\fI<interval>\fR specifies the task sampling interval for the jobacct_gather plugin or a sampling interval for a profiling type by the acct_gather_profile plugin. Multiple, comma-separated \fI<datatype>\fR=\fI<interval>\fR intervals may be specified. Supported datatypes are as follows: .RS .TP \fBtask=\fI<interval>\fR where \fI<interval>\fR is the task sampling interval in seconds for the jobacct_gather plugins and for task profiling by the acct_gather_profile plugin. NOTE: This frequency is used to monitor memory usage. If memory limits are enforced the highest frequency a user can request is what is configured in the slurm.conf file. They can not turn it off (=0) either. 
.TP \fBenergy=\fI<interval>\fR where \fI<interval>\fR is the sampling interval in seconds for energy profiling using the acct_gather_energy plugin .TP \fBnetwork=\fI<interval>\fR where \fI<interval>\fR is the sampling interval in seconds for infiniband profiling using the acct_gather_infiniband plugin. .TP \fBfilesystem=\fI<interval>\fR where \fI<interval>\fR is the sampling interval in seconds for filesystem profiling using the acct_gather_filesystem plugin. .RE .RE .br The default value for the task sampling interval is 30 seconds. The default value for all other intervals is 0. An interval of 0 disables sampling of the specified type. If the task sampling interval is 0, accounting information is collected only at job termination (reducing Slurm interference with the job). .br .br Smaller (non\-zero) values have a greater impact upon job performance, but a value of 30 seconds is not likely to be noticeable for applications having less than 10,000 tasks. .RE .TP \fB\-B\fR \fB\-\-extra\-node\-info\fR=<\fIsockets\fR[:\fIcores\fR[:\fIthreads\fR]]> Request a specific allocation of resources with details as to the number and type of computational resources within a cluster: number of sockets (or physical processors) per node, cores per socket, and threads per core. The total amount of resources being requested is the product of all of the terms. Each value specified is considered a minimum. An asterisk (*) can be used as a placeholder indicating that all available resources of that type are to be utilized. As with nodes, the individual levels can also be specified in separate options if desired: .nf \fB\-\-sockets\-per\-node\fR=<\fIsockets\fR> \fB\-\-cores\-per\-socket\fR=<\fIcores\fR> \fB\-\-threads\-per\-core\fR=<\fIthreads\fR> .fi If SelectType is configured to select/cons_res, it must have a parameter of CR_Core, CR_Core_Memory, CR_Socket, or CR_Socket_Memory for this option to be honored. This option is not supported on BlueGene systems (select/bluegene plugin is configured). 
If not specified, the scontrol show job will display 'ReqS:C:T=*:*:*'. .TP \fB\-\-bb\fR=<\fIspec\fR> Burst buffer specification. The form of the specification is system dependent. .TP \fB\-\-begin\fR=<\fItime\fR> Submit the batch script to the Slurm controller immediately, like normal, but tell the controller to defer the allocation of the job until the specified time. Time may be of the form \fIHH:MM:SS\fR to run a job at a specific time of day (seconds are optional). (If that time is already past, the next day is assumed.) You may also specify \fImidnight\fR, \fInoon\fR, \fIfika\fR (3 PM) or \fIteatime\fR (4 PM) and you can have a time\-of\-day suffixed with \fIAM\fR or \fIPM\fR for running in the morning or the evening. You can also say what day the job will be run, by specifying a date of the form \fIMMDDYY\fR or \fIMM/DD/YY\fR or \fIYYYY\-MM\-DD\fR. Combine date and time using the following format \fIYYYY\-MM\-DD[THH:MM[:SS]]\fR. You can also give times like \fInow + count time\-units\fR, where the time\-units can be \fIseconds\fR (default), \fIminutes\fR, \fIhours\fR, \fIdays\fR, or \fIweeks\fR and you can tell Slurm to run the job today with the keyword \fItoday\fR and to run the job tomorrow with the keyword \fItomorrow\fR. The value may be changed after job submission using the \fBscontrol\fR command. For example: .nf \-\-begin=16:00 \-\-begin=now+1hour \-\-begin=now+60 (seconds by default) \-\-begin=2010\-01\-20T12:34:00 .fi .RS .PP Notes on date/time specifications: \- Although the 'seconds' field of the HH:MM:SS time specification is allowed by the code, note that the poll time of the Slurm scheduler is not precise enough to guarantee dispatch of the job on the exact second. The job will be eligible to start on the next poll following the specified time. The exact poll interval depends on the Slurm scheduler (e.g., 60 seconds with the default sched/builtin). \- If no time (HH:MM:SS) is specified, the default is (00:00:00). 
\- If a date is specified without a year (e.g., MM/DD) then the current year is assumed, unless the combination of MM/DD and HH:MM:SS has already passed for that year, in which case the next year is used. .RE .TP \fB\-\-checkpoint\fR=<\fItime\fR> Specifies the interval between creating checkpoints of the job step. By default, the job step will have no checkpoints created. Acceptable time formats include "minutes", "minutes:seconds", "hours:minutes:seconds", "days\-hours", "days\-hours:minutes" and "days\-hours:minutes:seconds". .TP \fB\-\-checkpoint\-dir\fR=<\fIdirectory\fR> Specifies the directory into which the job or job step's checkpoint should be written (used by the checkpoint/blcr and checkpoint/xlch plugins only). The default value is the current working directory. Checkpoint files will be of the form "<job_id>.ckpt" for jobs and "<job_id>.<step_id>.ckpt" for job steps. .TP \fB\-\-comment\fR=<\fIstring\fR> An arbitrary comment enclosed in double quotes if using spaces or some special characters. .TP \fB\-C\fR, \fB\-\-constraint\fR=<\fIlist\fR> Nodes can have \fBfeatures\fR assigned to them by the Slurm administrator. Users can specify which of these \fBfeatures\fR are required by their job using the constraint option. Only nodes having features matching the job constraints will be used to satisfy the request. Multiple constraints may be specified with AND, OR, matching OR, resource counts, etc. Supported \fBconstraint\fR options include: .PD 1 .RS .TP \fBSingle Name\fR Only nodes which have the specified feature will be used. For example, \fB\-\-constraint="intel"\fR .TP \fBNode Count\fR A request can specify the number of nodes needed with some feature by appending an asterisk and count after the feature name. For example "\fB\-\-nodes=16 \-\-constraint=graphics*4 ..."\fR indicates that the job requires 16 nodes and that at least four of those nodes must have the feature "graphics." .TP \fBAND\fR Only nodes with all of the specified features will be used.
The ampersand is used for an AND operator. For example, \fB\-\-constraint="intel&gpu"\fR .TP \fBOR\fR Only nodes with at least one of the specified features will be used. The vertical bar is used for an OR operator. For example, \fB\-\-constraint="intel|amd"\fR .TP \fBMatching OR\fR If only one of a set of possible options should be used for all allocated nodes, then use the OR operator and enclose the options within square brackets. For example: "\fB\-\-constraint=[rack1|rack2|rack3|rack4]"\fR might be used to specify that all nodes must be allocated on a single rack of the cluster, but any of those four racks can be used. .TP \fBMultiple Counts\fR Specific counts of multiple resources may be specified by using the AND operator and enclosing the options within square brackets. For example: "\fB\-\-constraint=[rack1*2&rack2*4]"\fR might be used to specify that two nodes must be allocated from nodes with the feature of "rack1" and four nodes must be allocated from nodes with the feature "rack2". .RE .TP \fB\-\-contiguous\fR If set, then the allocated nodes must form a contiguous set. Not honored with the \fBtopology/tree\fR or \fBtopology/3d_torus\fR plugins, both of which can modify the node ordering. .TP \fB\-\-cores\-per\-socket\fR=<\fIcores\fR> Restrict node selection to nodes with at least the specified number of cores per socket. See additional information under \fB\-B\fR option above when task/affinity plugin is enabled. .TP \fB\-\-cpu\-freq\fR=<\fIp1\fR[\-\fIp2\fR[:\fIp3\fR]]> Request that job steps initiated by srun commands inside this sbatch script be run at some requested frequency if possible, on the CPUs selected for the step on the compute node(s). \fBp1\fR can be [#### | low | medium | high | highm1] which will set the frequency scaling_speed to the corresponding value, and set the frequency scaling_governor to UserSpace. See below for definition of the values.
\fBp1\fR can be [Conservative | OnDemand | Performance | PowerSave] which will set the scaling_governor to the corresponding value. The governor has to be in the list set by the slurm.conf option CpuFreqGovernors. When \fBp2\fR is present, p1 will be the minimum scaling frequency and p2 will be the maximum scaling frequency. \fBp2\fR can be [#### | medium | high | highm1]. p2 must be greater than p1. \fBp3\fR can be [Conservative | OnDemand | Performance | PowerSave | UserSpace] which will set the governor to the corresponding value. If \fBp3\fR is UserSpace, the frequency scaling_speed will be set by a power or energy aware scheduling strategy to a value between p1 and p2 that lets the job run within the site's power goal. The job may be delayed if p1 is higher than a frequency that allows the job to run within the goal. If the current frequency is < min, it will be set to min. Likewise, if the current frequency is > max, it will be set to max. Acceptable values at present include: .RS .TP 14 \fB####\fR frequency in kilohertz .TP \fBLow\fR the lowest available frequency .TP \fBHigh\fR the highest available frequency .TP \fBHighM1\fR (high minus one) will select the next highest available frequency .TP \fBMedium\fR attempts to set a frequency in the middle of the available range .TP \fBConservative\fR attempts to use the Conservative CPU governor .TP \fBOnDemand\fR attempts to use the OnDemand CPU governor (the default value) .TP \fBPerformance\fR attempts to use the Performance CPU governor .TP \fBPowerSave\fR attempts to use the PowerSave CPU governor .TP \fBUserSpace\fR attempts to use the UserSpace CPU governor .RE The following informational environment variable is set in the job step when \fB\-\-cpu\-freq\fR option is requested. .nf SLURM_CPU_FREQ_REQ .fi This environment variable can also be used to supply the value for the CPU frequency request if it is set when the 'srun' command is issued.
The \fB\-\-cpu\-freq\fR on the command line will override the environment variable value. The form of the environment variable is the same as the command line. See the \fBENVIRONMENT VARIABLES\fR section for a description of the SLURM_CPU_FREQ_REQ variable. \fBNOTE\fR: This parameter is treated as a request, not a requirement. If the job step's node does not support setting the CPU frequency, or the requested value is outside the bounds of the legal frequencies, an error is logged, but the job step is allowed to continue. \fBNOTE\fR: Setting the frequency for just the CPUs of the job step implies that the tasks are confined to those CPUs. If task confinement (i.e., TaskPlugin=task/affinity or TaskPlugin=task/cgroup with the "ConstrainCores" option) is not configured, this parameter is ignored. \fBNOTE\fR: When the step completes, the frequency and governor of each selected CPU is reset to the configured \fBCpuFreqDef\fR value with a default value of the OnDemand CPU governor. \fBNOTE\fR: Submitting jobs with the \fB\-\-cpu\-freq\fR option when linuxproc is the ProctrackType can cause jobs to run too quickly, before accounting is able to poll for job information. As a result, not all of the accounting information will be present. .RE .TP \fB\-c\fR, \fB\-\-cpus\-per\-task\fR=<\fIncpus\fR> Advise the Slurm controller that ensuing job steps will require \fIncpus\fR number of processors per task. Without this option, the controller will just try to allocate one processor per task. For instance, consider an application that has 4 tasks, each requiring 3 processors. If our cluster is comprised of quad\-processor nodes and we simply ask for 12 processors, the controller might give us only 3 nodes. However, by using the \-\-cpus\-per\-task=3 option, the controller knows that each task requires 3 processors on the same node, and the controller will grant an allocation of 4 nodes, one for each of the 4 tasks.
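The node\-count arithmetic in the example above can be sketched as a simplified model (an illustration only, not Slurm's actual node selection code; the \fInodes_needed\fR helper and the assumption that all of a task's CPUs must come from one node are for demonstration):

```python
def nodes_needed(ntasks, cpus_per_task, cpus_per_node):
    """Minimum node count when each task's CPUs must fit on a single node."""
    tasks_per_node = cpus_per_node // cpus_per_task
    if tasks_per_node == 0:
        raise ValueError("a single task does not fit on one node")
    # Ceiling division: enough nodes to place every task.
    return -(-ntasks // tasks_per_node)

# Asking for 12 CPUs as 12 single-CPU tasks: 3 quad-processor nodes suffice.
print(nodes_needed(12, 1, 4))  # 3
# With --cpus-per-task=3, only one 3-CPU task fits per quad-processor node.
print(nodes_needed(4, 3, 4))   # 4
```

This reproduces the man page's example: the same 12 CPUs require 3 or 4 nodes depending on whether the controller knows the per\-task CPU requirement.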
.TP \fB\-d\fR, \fB\-\-dependency\fR=<\fIdependency_list\fR> Defer the start of this job until the specified dependencies have been satisfied. <\fIdependency_list\fR> is of the form <\fItype:job_id[:job_id][,type:job_id[:job_id]]\fR> or <\fItype:job_id[:job_id][?type:job_id[:job_id]]\fR>. All dependencies must be satisfied if the "," separator is used. Any dependency may be satisfied if the "?" separator is used. Many jobs can share the same dependency and these jobs may even belong to different users. The value may be changed after job submission using the scontrol command. Once a job dependency fails due to the termination state of a preceding job, the dependent job will never be run, even if the preceding job is requeued and has a different termination state in a subsequent execution. .PD .RS .TP \fBafter:job_id[:jobid...]\fR This job can begin execution after the specified jobs have begun execution. .TP \fBafterany:job_id[:jobid...]\fR This job can begin execution after the specified jobs have terminated. .TP \fBafternotok:job_id[:jobid...]\fR This job can begin execution after the specified jobs have terminated in some failed state (non-zero exit code, node failure, timed out, etc.). .TP \fBafterok:job_id[:jobid...]\fR This job can begin execution after the specified jobs have successfully executed (ran to completion with an exit code of zero). .TP \fBexpand:job_id\fR Resources allocated to this job should be used to expand the specified job. The job to expand must share the same QOS (Quality of Service) and partition. Gang scheduling of resources in the partition is also not supported. .TP \fBsingleton\fR This job can begin execution after any previously launched jobs sharing the same job name and user have terminated. .RE .TP \fB\-D\fR, \fB\-\-workdir\fR=<\fIdirectory\fR> Set the working directory of the batch script to \fIdirectory\fR before it is executed.
The path can be specified as a full path or a relative path to the directory where the command is executed. .TP \fB\-e\fR, \fB\-\-error\fR=<\fIfilename pattern\fR> Instruct Slurm to connect the batch script's standard error directly to the file name specified in the "\fIfilename pattern\fR". By default both standard output and standard error are directed to the same file. For job arrays, the default file name is "slurm-%A_%a.out", where "%A" is replaced by the job ID and "%a" by the array index. For other jobs, the default file name is "slurm-%j.out", where the "%j" is replaced by the job ID. See the \fB\-\-input\fR option for filename specification options. .TP \fB\-\-exclusive[=user]\fR The job allocation can not share nodes with other running jobs (or just other users with the "=user" option). The default shared/exclusive behavior depends on system configuration and the partition's \fBShared\fR option takes precedence over the job's option. .TP \fB\-\-export\fR=<\fIenvironment variables | ALL | NONE\fR> Identify which environment variables are propagated to the batch job. Multiple environment variable names should be comma separated. Environment variable names may be specified to propagate the current value of those variables (e.g. "\-\-export=EDITOR") or specific values for the variables may be exported (e.g. "\-\-export=EDITOR=/bin/vi") in addition to the environment variables that would otherwise be set. This option is particularly important for jobs that are submitted on one cluster and execute on a different cluster (e.g. with different paths). By default all environment variables are propagated. If the argument is \fINONE\fR or specific environment variable names, then the \fB\-\-get\-user\-env\fR option will implicitly be set to load other environment variables based upon the user's configuration on the cluster which executes the job.
.TP \fB\-\-export\-file\fR=<\fIfilename\fR | \fIfd\fR> If a number between 3 and OPEN_MAX is specified as the argument to this option, a readable file descriptor will be assumed (STDIN and STDOUT are not supported as valid arguments). Otherwise a filename is assumed. Export environment variables defined in <\fIfilename\fR> or read from <\fIfd\fR> to the job's execution environment. The content is one or more environment variable definitions of the form NAME=value, each separated by a null character. This allows the use of special characters in environment definitions. .TP \fB\-F\fR, \fB\-\-nodefile\fR=<\fInode file\fR> Much like \-\-nodelist, but the list is contained in a file of name \fInode file\fR. The node names of the list may also span multiple lines in the file. Duplicate node names in the file will be ignored. The order of the node names in the list is not important; the node names will be sorted by Slurm. .TP \fB\-\-get\-user\-env\fR[=\fItimeout\fR][\fImode\fR] This option will tell sbatch to retrieve the login environment variables for the user specified in the \fB\-\-uid\fR option. The environment variables are retrieved by running something of this sort "su \- <username> \-c /usr/bin/env" and parsing the output. Be aware that any environment variables already set in sbatch's environment will take precedence over any environment variables in the user's login environment. Clear any environment variables before calling sbatch that you do not want propagated to the spawned program. The optional \fItimeout\fR value is in seconds. Default value is 8 seconds. The optional \fImode\fR value controls the "su" options. With a \fImode\fR value of "S", "su" is executed without the "\-" option. With a \fImode\fR value of "L", "su" is executed with the "\-" option, replicating the login environment. If \fImode\fR is not specified, the mode established at Slurm build time is used.
Examples of use include "\-\-get\-user\-env", "\-\-get\-user\-env=10", "\-\-get\-user\-env=10L", and "\-\-get\-user\-env=S". This option was originally created for use by Moab. .TP \fB\-\-gid\fR=<\fIgroup\fR> If \fBsbatch\fR is run as root, and the \fB\-\-gid\fR option is used, submit the job with \fIgroup\fR's group access permissions. \fIgroup\fR may be the group name or the numerical group ID. .TP \fB\-\-gres\fR=<\fIlist\fR> Specifies a comma delimited list of generic consumable resources. The format of each entry on the list is "name[[:type]:count]". The name is that of the consumable resource. The count is the number of those resources with a default value of 1. The specified resources will be allocated to the job on each node. The available generic consumable resources are configurable by the system administrator. A list of available generic consumable resources will be printed and the command will exit if the option argument is "help". Examples of use include "\-\-gres=gpu:2,mic=1", "\-\-gres=gpu:kepler:2", and "\-\-gres=help". .TP \fB\-H, \-\-hold\fR Specify the job is to be submitted in a held state (priority of zero). A held job can now be released using scontrol to reset its priority (e.g. "\fIscontrol release <job_id>\fR"). .TP \fB\-h\fR, \fB\-\-help\fR Display help information and exit. .TP \fB\-\-hint\fR=<\fItype\fR> Bind tasks according to application hints. .RS .TP .B compute_bound Select settings for compute bound applications: use all cores in each socket, one thread per core. .TP .B memory_bound Select settings for memory bound applications: use only one core in each socket, one thread per core. .TP .B [no]multithread [don't] use extra threads with in-core multi-threading which can benefit communication intensive applications. Only supported with the task/affinity plugin.
.TP .B help show this help message .RE .TP \fB\-I\fR, \fB\-\-immediate\fR The batch script will only be submitted to the controller if the resources necessary to grant its job allocation are immediately available. If the job allocation will have to wait in a queue of pending jobs, the batch script will not be submitted. NOTE: There is limited support for this option with batch jobs. .TP \fB\-\-ignore\-pbs\fR Ignore any "#PBS" options specified in the batch script. .TP \fB\-i\fR, \fB\-\-input\fR=<\fIfilename pattern\fR> Instruct Slurm to connect the batch script's standard input directly to the file name specified in the "\fIfilename pattern\fR". By default, "/dev/null" is open on the batch script's standard input and both standard output and standard error are directed to a file of the name "slurm\-%j.out", where the "%j" is replaced with the job allocation number, as described below. The filename pattern may contain one or more replacement symbols, which are a percent sign "%" followed by a letter (e.g. %j). Supported replacement symbols are: .PD .RS .TP \fB%A\fR Job array's master job allocation number. .TP \fB%a\fR Job array ID (index) number. .TP \fB%j\fR Job allocation number. .TP \fB%N\fR Node name. Only one file is created, so %N will be replaced by the name of the first node in the job, which is the one that runs the script. .TP \fB%u\fR User name. .RE .TP \fB\-J\fR, \fB\-\-job\-name\fR=<\fIjobname\fR> Specify a name for the job allocation. The specified name will appear along with the job id number when querying running jobs on the system. The default is the name of the batch script, or just "sbatch" if the script is read on sbatch's standard input. .TP \fB\-\-jobid\fR=<\fIjobid\fR> Allocate resources as the specified job id. NOTE: Only valid for user root. .TP \fB\-k\fR, \fB\-\-no\-kill\fR Do not automatically terminate a job if one of the nodes it has been allocated fails. The user will assume the responsibilities for fault\-tolerance should a node fail. 
When there is a node failure, any active job steps (usually MPI jobs) on that node will almost certainly suffer a fatal error, but with \-\-no\-kill, the job allocation will not be revoked so the user may launch new job steps on the remaining nodes in their allocation. By default Slurm terminates the entire job allocation if any node fails in its range of allocated nodes. .TP \fB\-\-kill-on-invalid-dep\fR=<\fIyes|no\fR> If a job has an invalid dependency that can never be satisfied, this parameter tells Slurm whether or not to terminate it. A terminated job's state will be JOB_CANCELLED. If this option is not specified, the system\-wide behavior applies: by default the job stays pending with reason DependencyNeverSatisfied, or, if kill_invalid_depend is specified in slurm.conf, the job is terminated. .TP \fB\-L\fR, \fB\-\-licenses\fR=<\fIlicense\fR> Specification of licenses (or other resources available on all nodes of the cluster) which must be allocated to this job. License names can be followed by a colon and count (the default count is one). Multiple license names should be comma separated (e.g. "\-\-licenses=foo:4,bar"). To submit jobs using remote licenses, those served by the slurmdbd, specify the name of the server providing the licenses. For example "\-\-license=nastran@slurmdb:12". .TP \fB\-M\fR, \fB\-\-clusters\fR=<\fIstring\fR> Clusters to issue commands to. Multiple cluster names may be comma separated. The job will be submitted to the one cluster providing the earliest expected job initiation time. The default value is the current cluster. A value of \(aq\fIall\fR' will query to run on all clusters. Note the \fB\-\-export\fR option to control environment variables exported between clusters. .TP \fB\-m\fR, \fB\-\-distribution\fR= \fIarbitrary\fR|<\fIblock\fR|\fIcyclic\fR|\fIplane=\fR[:\fIblock\fR|\fIcyclic\fR|\fIfcyclic\fR]> Specify alternate distribution methods for remote processes.
In sbatch, this only sets environment variables that will be used by subsequent srun requests. This option controls the assignment of tasks to the nodes on which resources have been allocated, and the distribution of those resources to tasks for binding (task affinity). The first distribution method (before the ":") controls the distribution of resources across nodes. The optional second distribution method (after the ":") controls the distribution of resources across sockets within a node. Note that with select/cons_res, the number of cpus allocated on each socket and node may be different. Refer to http://slurm.schedmd.com/mc_support.html for more information on resource allocation, assignment of tasks to nodes, and binding of tasks to CPUs. .RS First distribution method: .TP .B block The block distribution method will distribute tasks to a node such that consecutive tasks share a node. For example, consider an allocation of three nodes each with two cpus. A four\-task block distribution request will distribute those tasks to the nodes with tasks one and two on the first node, task three on the second node, and task four on the third node. Block distribution is the default behavior if the number of tasks exceeds the number of allocated nodes. .TP .B cyclic The cyclic distribution method will distribute tasks to a node such that consecutive tasks are distributed over consecutive nodes (in a round\-robin fashion). For example, consider an allocation of three nodes each with two cpus. A four\-task cyclic distribution request will distribute those tasks to the nodes with tasks one and four on the first node, task two on the second node, and task three on the third node. Note that when SelectType is select/cons_res, the same number of CPUs may not be allocated on each node. Task distribution will be round\-robin among all the nodes with CPUs yet to be assigned to tasks. 
Cyclic distribution is the default behavior if the number of tasks is no larger than the number of allocated nodes. .TP .B plane The tasks are distributed in blocks of a specified size. The options include a number representing the size of the task block. This is followed by an optional specification of the task distribution scheme within a block of tasks and between the blocks of tasks. The number of tasks distributed to each node is the same as for cyclic distribution, but the taskids assigned to each node depend on the plane size. For more details (including examples and diagrams), please see .br http://slurm.schedmd.com/mc_support.html .br and .br http://slurm.schedmd.com/dist_plane.html .TP .B arbitrary The arbitrary method of distribution will allocate processes in\-order as listed in the file designated by the environment variable SLURM_HOSTFILE. If this variable is set it will override any other method specified. If not set the method will default to block. The hostfile must contain at minimum the number of hosts requested, listed one per line or comma separated. If specifying a task count (\fB\-n\fR, \fB\-\-ntasks\fR=<\fInumber\fR>), your tasks will be laid out on the nodes in the order of the file. .br \fBNOTE:\fR The arbitrary distribution option on a job allocation only controls the nodes to be allocated to the job and not the allocation of CPUs on those nodes. This option is meant primarily to control a job step's task layout in an existing job allocation for the srun command. .TP Second distribution method: .TP .B block The block distribution method will distribute tasks to sockets such that consecutive tasks share a socket. .TP .B cyclic The cyclic distribution method will distribute tasks to sockets such that consecutive tasks are distributed over consecutive sockets (in a round\-robin fashion). Tasks requiring more than one CPU will have all of those CPUs allocated on a single socket if possible.
.TP .B fcyclic The fcyclic distribution method will distribute tasks to sockets such that consecutive tasks are distributed over consecutive sockets (in a round\-robin fashion). Tasks requiring more than one CPU will have those CPUs allocated in a cyclic fashion across sockets. .RE .TP \fB\-\-mail\-type\fR=<\fItype\fR> Notify user by email when certain event types occur. Valid \fItype\fR values are NONE, BEGIN, END, FAIL, REQUEUE, ALL (equivalent to BEGIN, END, FAIL, REQUEUE, and STAGE_OUT), STAGE_OUT (burst buffer stage out completed), TIME_LIMIT, TIME_LIMIT_90 (reached 90 percent of time limit), TIME_LIMIT_80 (reached 80 percent of time limit), and TIME_LIMIT_50 (reached 50 percent of time limit). Multiple \fItype\fR values may be specified in a comma separated list. The user to be notified is indicated with \fB\-\-mail\-user\fR. Mail notifications on job BEGIN, END and FAIL apply to a job array as a whole rather than generating individual email messages for each task in the job array. .TP \fB\-\-mail\-user\fR=<\fIuser\fR> User to receive email notification of state changes as defined by \fB\-\-mail\-type\fR. The default value is the submitting user. .TP \fB\-\-mem\fR=<\fIMB\fR> Specify the real memory required per node in MegaBytes. Default value is \fBDefMemPerNode\fR and the maximum value is \fBMaxMemPerNode\fR. If configured, both parameters can be seen using the \fBscontrol show config\fR command. This parameter would generally be used if whole nodes are allocated to jobs (\fBSelectType=select/linear\fR). Also see \fB\-\-mem\-per\-cpu\fR. \fB\-\-mem\fR and \fB\-\-mem\-per\-cpu\fR are mutually exclusive. NOTE: A memory size specification of zero is treated as a special case and grants the job access to all of the memory on each node. NOTE: Enforcement of memory limits currently relies upon the task/cgroup plugin or enabling of accounting, which samples memory use on a periodic basis (data need not be stored, just collected).
In both cases memory use is based upon the job's Resident Set Size (RSS). A task may exceed the memory limit until the next periodic accounting sample. .TP \fB\-\-mem\-per\-cpu\fR=<\fIMB\fR> Minimum memory required per allocated CPU in MegaBytes. Default value is \fBDefMemPerCPU\fR and the maximum value is \fBMaxMemPerCPU\fR (see exception below). If configured, both parameters can be seen using the \fBscontrol show config\fR command. Note that if the job's \fB\-\-mem\-per\-cpu\fR value exceeds the configured \fBMaxMemPerCPU\fR, then the user's limit will be treated as a memory limit per task; \fB\-\-mem\-per\-cpu\fR will be reduced to a value no larger than \fBMaxMemPerCPU\fR; \fB\-\-cpus\-per\-task\fR will be set and the value of \fB\-\-cpus\-per\-task\fR multiplied by the new \fB\-\-mem\-per\-cpu\fR value will equal the original \fB\-\-mem\-per\-cpu\fR value specified by the user. This parameter would generally be used if individual processors are allocated to jobs (\fBSelectType=select/cons_res\fR). If resources are allocated by the core, socket, or whole nodes, the number of CPUs allocated to a job may be higher than the task count and the value of \fB\-\-mem\-per\-cpu\fR should be adjusted accordingly. Also see \fB\-\-mem\fR. \fB\-\-mem\fR and \fB\-\-mem\-per\-cpu\fR are mutually exclusive. .TP \fB\-\-mem_bind\fR=[{\fIquiet,verbose\fR},]\fItype\fR Bind tasks to memory. Used only when the task/affinity plugin is enabled and the NUMA memory functions are available. \fBNote that the resolution of CPU and memory binding may differ on some architectures.\fR For example, CPU binding may be performed at the level of the cores within a processor while memory binding will be performed at the level of nodes, where the definition of "nodes" may differ from system to system.
\fBThe use of any type other than "none" or "local" is not recommended.\fR If you want greater control, try running a simple test code with the options "\-\-mem_bind=verbose,none" to determine the specific configuration. NOTE: To have Slurm always report on the selected memory binding for all commands executed in a shell, you can enable verbose mode by setting the SLURM_MEM_BIND environment variable value to "verbose". The following informational environment variables are set when \fB\-\-mem_bind\fR is in use: .nf SLURM_MEM_BIND_VERBOSE SLURM_MEM_BIND_TYPE SLURM_MEM_BIND_LIST .fi See the \fBENVIRONMENT VARIABLES\fR section for a more detailed description of the individual SLURM_MEM_BIND* variables. Supported options include: .RS .TP .B q[uiet] quietly bind before task runs (default) .TP .B v[erbose] verbosely report binding before task runs .TP .B no[ne] don't bind tasks to memory (default) .TP .B rank bind by task rank (not recommended) .TP .B local Use memory local to the processor in use .TP .B map_mem:<list> bind by mapping a node's memory to tasks as specified where <list> is <cpuid1>,<cpuid2>,...<cpuidN>. CPU IDs are interpreted as decimal values unless they are preceded with '0x' in which case they are interpreted as hexadecimal values (not recommended) .TP .B mask_mem:<list> bind by setting memory masks on tasks as specified where <list> is <mask1>,<mask2>,...<maskN>. Memory masks are \fBalways\fR interpreted as hexadecimal values. Note that masks must be preceded with a '0x' if they don't begin with [0-9] so they are seen as numerical values by srun. .TP .B help show this help message .RE .TP \fB\-\-mincpus\fR=<\fIn\fR> Specify a minimum number of logical cpus/processors per node. .TP \fB\-N\fR, \fB\-\-nodes\fR=<\fIminnodes\fR[\-\fImaxnodes\fR]> Request that a minimum of \fIminnodes\fR nodes be allocated to this job. A maximum node count may also be specified with \fImaxnodes\fR. If only one number is specified, this is used as both the minimum and maximum node count. The partition's node limits supersede those of the job.
If a job's node limits are outside of the range permitted for its associated partition, the job will be left in a PENDING state. This permits possible execution at a later time, when the partition limit is changed. If a job node limit exceeds the number of nodes configured in the partition, the job will be rejected. Note that the environment variable \fBSLURM_NNODES\fR will be set to the count of nodes actually allocated to the job. See the \fBENVIRONMENT VARIABLES \fR section for more information. If \fB\-N\fR is not specified, the default behavior is to allocate enough nodes to satisfy the requirements of the \fB\-n\fR and \fB\-c\fR options. The job will be allocated as many nodes as possible within the range specified and without delaying the initiation of the job. The node count specification may include a numeric value followed by a suffix of "k" (multiplies numeric value by 1,024) or "m" (multiplies numeric value by 1,048,576). .TP \fB\-n\fR, \fB\-\-ntasks\fR=<\fInumber\fR> sbatch does not launch tasks, it requests an allocation of resources and submits a batch script. This option advises the Slurm controller that job steps run within the allocation will launch a maximum of \fInumber\fR tasks and to provide for sufficient resources. The default is one task per node, but note that the \fB\-\-cpus\-per\-task\fR option will change this default. .TP \fB\-\-network\fR=<\fItype\fR> Specify information pertaining to the switch or network. The interpretation of \fItype\fR is system dependent. This option is supported when running Slurm on a Cray natively. It is used to request using Network Performance Counters. Only one value per request is valid. All options are case insensitive. In this configuration supported values include: .RS .TP 6 \fBsystem\fR Use the system\-wide network performance counters. Only nodes requested will be marked in use for the job allocation.
If the job does not fill up the entire system, the rest of the nodes are not able to be used by other jobs using NPC; if idle, their state will appear as PerfCnts. These nodes are still available for other jobs not using NPC. .TP \fBblade\fR Use the blade network performance counters. Only nodes requested will be marked in use for the job allocation. If the job does not fill up the entire blade(s) allocated to the job, those blade(s) are not able to be used by other jobs using NPC; if idle, their state will appear as PerfCnts. These nodes are still available for other jobs not using NPC. .RE .br .br In all cases the job allocation request \fBmust specify the \-\-exclusive option\fR. Otherwise the request will be denied. .br .br Also with any of these options steps are not allowed to share blades, so resources would remain idle inside an allocation if the step running on a blade does not take up all the nodes on the blade. .br .br The \fBnetwork\fR option is also supported on systems with IBM's Parallel Environment (PE). See IBM's LoadLeveler job command keyword documentation about the keyword "network" for more information. Multiple values may be specified in a comma separated list. All options are case insensitive. Supported values include: .RS .TP 12 \fBBULK_XFER\fR[=<\fIresources\fR>] Enable bulk transfer of data using Remote Direct-Memory Access (RDMA). The optional \fIresources\fR specification is a numeric value which can have a suffix of "k", "K", "m", "M", "g" or "G" for kilobytes, megabytes or gigabytes. NOTE: The \fIresources\fR specification is not supported by the underlying IBM infrastructure as of Parallel Environment version 2.2 and no value should be specified at this time. .TP \fBCAU\fR=<\fIcount\fR> Number of Collective Acceleration Units (CAU) required. Applies only to IBM Power7-IH processors. Default value is zero. Independent CAU will be allocated for each programming interface (MPI, LAPI, etc.)
.TP \fBDEVNAME\fR=<\fIname\fR> Specify the device name to use for communications (e.g. "eth0" or "mlx4_0"). .TP \fBDEVTYPE\fR=<\fItype\fR> Specify the device type to use for communications. The supported values of \fItype\fR are: "IB" (InfiniBand), "HFI" (P7 Host Fabric Interface), "IPONLY" (IP-Only interfaces), "HPCE" (HPC Ethernet), and "KMUX" (Kernel Emulation of HPCE). The devices allocated to a job must all be of the same type. The default value depends upon what hardware is available and, in order of preference, is IPONLY (which is not considered in User Space mode), HFI, IB, HPCE, and KMUX. .TP \fBIMMED\fR=<\fIcount\fR> Number of immediate send slots per window required. Applies only to IBM Power7-IH processors. Default value is zero. .TP \fBINSTANCES\fR=<\fIcount\fR> Specify the number of network connections for each task on each network. The default instance count is 1. .TP \fBIPV4\fR Use Internet Protocol (IP) version 4 communications (default). .TP \fBIPV6\fR Use Internet Protocol (IP) version 6 communications. .TP \fBLAPI\fR Use the LAPI programming interface. .TP \fBMPI\fR Use the MPI programming interface. MPI is the default interface. .TP \fBPAMI\fR Use the PAMI programming interface. .TP \fBSHMEM\fR Use the OpenSHMEM programming interface. .TP \fBSN_ALL\fR Use all available switch networks (default). .TP \fBSN_SINGLE\fR Use one available switch network. .TP \fBUPC\fR Use the UPC programming interface. .TP \fBUS\fR Use User Space communications. .TP Some examples of network specifications: .TP \fBInstances=2,US,MPI,SN_ALL\fR Create two user space connections for MPI communications on every switch network for each task. .TP \fBUS,MPI,Instances=3,Devtype=IB\fR Create three user space connections for MPI communications on every InfiniBand network for each task. .TP \fBIPV4,LAPI,SN_Single\fR Create an IP version 4 connection for LAPI communications on one switch network for each task.
.TP \fBInstances=2,US,LAPI,MPI\fR Create two user space connections each for LAPI and MPI communications on every switch network for each task. Note that SN_ALL is the default option so every switch network is used. Also note that Instances=2 specifies that two connections are established for each protocol (LAPI and MPI) and each task. If there are two networks and four tasks on the node then a total of 32 connections are established (2 instances x 2 protocols x 2 networks x 4 tasks). .RE .TP \fB\-\-nice\fR[=\fIadjustment\fR] Run the job with an adjusted scheduling priority within Slurm. With no adjustment value the scheduling priority is decreased by 100. The adjustment range is from \-10000 (highest priority) to 10000 (lowest priority). Only privileged users can specify a negative adjustment. NOTE: This option is presently ignored if \fISchedulerType=sched/wiki\fR or \fISchedulerType=sched/wiki2\fR. .TP \fB\-\-no\-requeue\fR Specifies that the batch job should never be requeued under any circumstances. Setting this option will prevent system administrators from being able to restart the job (for example, after a scheduled downtime), recover from a node failure, or be requeued upon preemption by a higher priority job. When a job is requeued, the batch script is initiated from its beginning. Also see the \fB\-\-requeue\fR option. The \fIJobRequeue\fR configuration parameter controls the default behavior on the cluster. .TP \fB\-\-ntasks\-per\-core\fR=<\fIntasks\fR> Request the maximum \fIntasks\fR be invoked on each core. Meant to be used with the \fB\-\-ntasks\fR option. Related to \fB\-\-ntasks\-per\-node\fR except at the core level instead of the node level. NOTE: This option is not supported unless \fISelectTypeParameters=CR_Core\fR or \fISelectTypeParameters=CR_Core_Memory\fR is configured. .TP \fB\-\-ntasks\-per\-socket\fR=<\fIntasks\fR> Request the maximum \fIntasks\fR be invoked on each socket. Meant to be used with the \fB\-\-ntasks\fR option. 
Related to \fB\-\-ntasks\-per\-node\fR except at the socket level instead of the node level. NOTE: This option is not supported unless \fISelectTypeParameters=CR_Socket\fR or \fISelectTypeParameters=CR_Socket_Memory\fR is configured. .TP \fB\-\-ntasks\-per\-node\fR=<\fIntasks\fR> Request that \fIntasks\fR be invoked on each node. If used with the \fB\-\-ntasks\fR option, the \fB\-\-ntasks\fR option will take precedence and the \fB\-\-ntasks\-per\-node\fR will be treated as a \fImaximum\fR count of tasks per node. Meant to be used with the \fB\-\-nodes\fR option. This is related to \fB\-\-cpus\-per\-task\fR=\fIncpus\fR, but does not require knowledge of the actual number of cpus on each node. In some cases, it is more convenient to be able to request that no more than a specific number of tasks be invoked on each node. Examples of this include submitting a hybrid MPI/OpenMP app where only one MPI "task/rank" should be assigned to each node while allowing the OpenMP portion to utilize all of the parallelism present in the node, or submitting a single setup/cleanup/monitoring job to each node of a pre\-existing allocation as one step in a larger job script. .TP \fB\-O\fR, \fB\-\-overcommit\fR Overcommit resources. When applied to job allocation, only one CPU is allocated to the job per node and options used to specify the number of tasks per node, socket, core, etc. are ignored. When applied to job step allocations (the \fBsrun\fR command when executed within an existing job allocation), this option can be used to launch more than one task per CPU. Normally, \fBsrun\fR will not allocate more than one process per CPU. By specifying \fB\-\-overcommit\fR you are explicitly allowing more than one process per CPU. However no more than \fBMAX_TASKS_PER_NODE\fR tasks are permitted to execute per node. NOTE: \fBMAX_TASKS_PER_NODE\fR is defined in the file \fIslurm.h\fR and is not a variable, it is set at Slurm build time. 
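As a sketch of the hybrid MPI/OpenMP pattern described under \-\-ntasks\-per\-node above, the following illustrative batch script requests one MPI task per node and lets OpenMP use the remaining parallelism. The node and CPU counts and the program name ./hybrid_app are hypothetical; this fragment only runs under a Slurm installation.

```shell
#!/bin/bash
# One MPI rank per node; OpenMP threads fill each node.
# Counts below are illustrative, not a recommendation.
#SBATCH --nodes=4
#SBATCH --ntasks-per-node=1
#SBATCH --cpus-per-task=16

# SLURM_CPUS_PER_TASK is set because --cpus-per-task was given.
export OMP_NUM_THREADS=$SLURM_CPUS_PER_TASK
srun ./hybrid_app
```

Note that \-\-ntasks is not needed here: \-\-nodes combined with \-\-ntasks\-per\-node already determines the task count.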
.TP \fB\-o\fR, \fB\-\-output\fR=<\fIfilename pattern\fR> Instruct Slurm to connect the batch script's standard output directly to the file name specified in the "\fIfilename pattern\fR". By default both standard output and standard error are directed to the same file. For job arrays, the default file name is "slurm-%A_%a.out", where "%A" is replaced by the job ID and "%a" by the array index. For other jobs, the default file name is "slurm-%j.out", where the "%j" is replaced by the job ID. See the \fB\-\-input\fR option for filename specification options. .TP \fB\-\-open\-mode\fR=append|truncate Open the output and error files using append or truncate mode as specified. The default value is specified by the system configuration parameter \fIJobFileAppend\fR. .TP \fB\-\-parsable\fR Outputs only the job id number and the cluster name if present. The values are separated by a semicolon. Errors will still be displayed. .TP \fB\-p\fR, \fB\-\-partition\fR=<\fIpartition_names\fR> Request a specific partition for the resource allocation. If not specified, the default behavior is to allow the slurm controller to select the default partition as designated by the system administrator. If the job can use more than one partition, specify their names in a comma separated list and the one offering earliest initiation will be used with no regard given to the partition name ordering (although higher priority partitions will be considered first). When the job is initiated, the name of the partition used will be placed first in the job record partition string. .TP \fB\-\-power\fR=<\fIflags\fR> Comma separated list of power management plugin options. Currently available flags include: level (all nodes allocated to the job should have identical power caps, may be disabled by the Slurm configuration option PowerParameters=job_no_level). .TP \fB\-\-priority\fR=<\fIvalue\fR> Request a specific job priority. May be subject to configuration specific constraints.
Only Slurm operators and administrators can set the priority of a job. .TP \fB\-\-profile\fR=<\fItype\fR> Enables detailed data collection by the acct_gather_profile plugin. Detailed data are typically time-series that are stored in an HDF5 file for the job. .RS .TP 10 \fBAll\fR All data types are collected. (Cannot be combined with other values.) .TP \fBNone\fR No data types are collected. This is the default. (Cannot be combined with other values.) .TP \fBEnergy\fR Energy data is collected. .TP \fBTask\fR Task (I/O, Memory, ...) data is collected. .TP \fBLustre\fR Lustre data is collected. .TP \fBNetwork\fR Network (InfiniBand) data is collected. .RE .TP \fB\-\-propagate\fR[=\fIrlimits\fR] Allows users to specify which of the modifiable (soft) resource limits to propagate to the compute nodes and apply to their jobs. If \fIrlimits\fR is not specified, then all resource limits will be propagated. The following rlimit names are supported by Slurm (although some options may not be supported on some systems): .RS .TP 10 \fBALL\fR All limits listed below .TP \fBAS\fR The maximum address space for a process .TP \fBCORE\fR The maximum size of core file .TP \fBCPU\fR The maximum amount of CPU time .TP \fBDATA\fR The maximum size of a process's data segment .TP \fBFSIZE\fR The maximum size of files created. Note that if the user sets FSIZE to less than the current size of the slurmd.log, job launches will fail with a 'File size limit exceeded' error. .TP \fBMEMLOCK\fR The maximum size that may be locked into memory .TP \fBNOFILE\fR The maximum number of open files .TP \fBNPROC\fR The maximum number of processes available .TP \fBRSS\fR The maximum resident set size .TP \fBSTACK\fR The maximum stack size .RE .TP \fB\-Q\fR, \fB\-\-quiet\fR Suppress informational messages from sbatch. Errors will still be displayed. .TP \fB\-\-qos\fR=<\fIqos\fR> Request a quality of service for the job. QOS values can be defined for each user/cluster/account association in the Slurm database.
Users will be limited to their association's defined set of qos's when the Slurm configuration parameter, AccountingStorageEnforce, includes "qos" in its definition. .TP \fB\-\-reboot\fR Force the allocated nodes to reboot before starting the job. This is only supported with some system configurations and will otherwise be silently ignored. .TP \fB\-\-requeue\fR Specifies that the batch job should be eligible for requeuing. The job may be requeued explicitly by a system administrator, after node failure, or upon preemption by a higher priority job. When a job is requeued, the batch script is initiated from its beginning. Also see the \fB\-\-no\-requeue\fR option. The \fIJobRequeue\fR configuration parameter controls the default behavior on the cluster. .TP \fB\-\-reservation\fR=<\fIname\fR> Allocate resources for the job from the named reservation. .TP \fB\-s\fR, \fB\-\-share\fR The job allocation can share resources with other running jobs. The resources to be shared can be nodes, sockets, cores, or hyperthreads depending upon configuration. The default shared behavior depends on system configuration and the partition's \fBShared\fR option takes precedence over the job's option. This option may result in the allocation being granted sooner than if the \-\-share option was not set and allow higher system utilization, but application performance will likely suffer due to competition for resources. Also see the \-\-exclusive option. .TP \fB\-S\fR, \fB\-\-core\-spec\fR=<\fInum\fR> Count of specialized cores per node reserved by the job for system operations and not used by the application. The application will not use these cores, but will be charged for their allocation. Default value is dependent upon the node's configured CoreSpecCount value. If a value of zero is designated and the Slurm configuration option AllowSpecResourcesUsage is enabled, the job will be allowed to override CoreSpecCount and use the specialized resources on nodes it is allocated.
This option can not be used with the \fB\-\-thread\-spec\fR option. .TP \fB\-\-sicp\fR Identify a job as one on which jobs submitted to other clusters can be dependent. .TP \fB\-\-signal\fR=[B:]<\fIsig_num\fR>[@<\fIsig_time\fR>] When a job is within \fIsig_time\fR seconds of its end time, send it the signal \fIsig_num\fR. Due to the resolution of event handling by Slurm, the signal may be sent up to 60 seconds earlier than specified. \fIsig_num\fR may either be a signal number or name (e.g. "10" or "USR1"). \fIsig_time\fR must have an integer value between 0 and 65535. By default, no signal is sent before the job's end time. If a \fIsig_num\fR is specified without any \fIsig_time\fR, the default time will be 60 seconds. Use the "B:" option to signal only the batch shell; none of the other processes will be signaled. By default all job steps will be signaled, but not the batch shell itself. .TP \fB\-\-sockets\-per\-node\fR=<\fIsockets\fR> Restrict node selection to nodes with at least the specified number of sockets. See additional information under \fB\-B\fR option above when task/affinity plugin is enabled. .TP \fB\-\-switches\fR=<\fIcount\fR>[@<\fImax\-time\fR>] When a tree topology is used, this defines the maximum count of switches desired for the job allocation and optionally the maximum time to wait for that number of switches. If Slurm finds an allocation containing more switches than the count specified, the job remains pending until it either finds an allocation with desired switch count or the time limit expires. If there is no switch count limit, there is no delay in starting the job. Acceptable time formats include "minutes", "minutes:seconds", "hours:minutes:seconds", "days\-hours", "days\-hours:minutes" and "days\-hours:minutes:seconds". The job's maximum time delay may be limited by the system administrator using the \fBSchedulerParameters\fR configuration parameter with the \fBmax_switch_wait\fR parameter option.
The default \fImax\-time\fR is the value of the \fBmax_switch_wait\fR \fBSchedulerParameters\fR option. .TP \fB\-t\fR, \fB\-\-time\fR=<\fItime\fR> Set a limit on the total run time of the job allocation. If the requested time limit exceeds the partition's time limit, the job will be left in a PENDING state (possibly indefinitely). The default time limit is the partition's default time limit. When the time limit is reached, each task in each job step is sent SIGTERM followed by SIGKILL. The interval between signals is specified by the Slurm configuration parameter \fBKillWait\fR. The \fBOverTimeLimit\fR configuration parameter may permit the job to run longer than scheduled. Time resolution is one minute and second values are rounded up to the next minute. A time limit of zero requests that no time limit be imposed. Acceptable time formats include "minutes", "minutes:seconds", "hours:minutes:seconds", "days\-hours", "days\-hours:minutes" and "days\-hours:minutes:seconds". .TP \fB\-\-tasks\-per\-node\fR=<\fIn\fR> Specify the number of tasks to be launched per node. Equivalent to \fB\-\-ntasks\-per\-node\fR. .TP \fB\-\-test\-only\fR Validate the batch script and return an estimate of when a job would be scheduled to run given the current job queue and all the other arguments specifying the job requirements. No job is actually submitted. .TP \fB\-\-thread\-spec\fR=<\fInum\fR> Count of specialized threads per node reserved by the job for system operations and not used by the application. The application will not use these threads, but will be charged for their allocation. This option can not be used with the \fB\-\-core\-spec\fR option. .TP \fB\-\-threads\-per\-core\fR=<\fIthreads\fR> Restrict node selection to nodes with at least the specified number of threads per core. NOTE: "Threads" refers to the number of processing units on each core rather than the number of application tasks to be launched per core. See additional information under \fB\-B\fR option above when task/affinity plugin is enabled.
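As an illustration of the \-\-signal option described above, a batch script can trap the warning signal to checkpoint before the time limit. This is a minimal sketch: the checkpoint logic is hypothetical, and since it cannot run under Slurm here, the script emulates the warning by sending USR1 to itself.

```shell
#!/bin/bash
# Ask Slurm to send USR1 to the batch shell 300 s before the time limit.
#SBATCH --signal=B:USR1@300

got_warning=0
checkpoint() {
    # Hypothetical checkpoint step; record that the warning arrived.
    got_warning=1
    echo "USR1 received: checkpointing"
}
trap checkpoint USR1

# Outside Slurm, emulate the scheduler's warning signal for demonstration:
kill -USR1 $$
```

With "B:" only the batch shell receives the signal, so the trap above fires without disturbing job steps launched via srun.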
.TP \fB\-\-time\-min\fR=<\fItime\fR> Set a minimum time limit on the job allocation. If specified, the job may have its \fB\-\-time\fR limit lowered to a value no lower than \fB\-\-time\-min\fR if doing so permits the job to begin execution earlier than otherwise possible. The job's time limit will not be changed after the job is allocated resources. This is performed by a backfill scheduling algorithm to allocate resources otherwise reserved for higher priority jobs. Acceptable time formats include "minutes", "minutes:seconds", "hours:minutes:seconds", "days\-hours", "days\-hours:minutes" and "days\-hours:minutes:seconds". .TP \fB\-\-tmp\fR=<\fIMB\fR> Specify a minimum amount of temporary disk space. .TP \fB\-u\fR, \fB\-\-usage\fR Display brief help message and exit. .TP \fB\-\-uid\fR=<\fIuser\fR> Attempt to submit and/or run a job as \fIuser\fR instead of the invoking user id. The invoking user's credentials will be used to check access permissions for the target partition. User root may use this option to run jobs as a normal user in a RootOnly partition, for example. If run as root, \fBsbatch\fR will drop its permissions to the uid specified after node allocation is successful. \fIuser\fR may be the user name or numerical user ID. .TP \fB\-V\fR, \fB\-\-version\fR Display version information and exit. .TP \fB\-v\fR, \fB\-\-verbose\fR Increase the verbosity of sbatch's informational messages. Multiple \fB\-v\fR's will further increase sbatch's verbosity. By default only errors will be displayed. .TP \fB\-w\fR, \fB\-\-nodelist\fR=<\fInode name list\fR> Request a specific list of hosts. The job will contain \fIall\fR of these hosts and possibly additional hosts as needed to satisfy resource requirements. The list may be specified as a comma\-separated list of hosts, a range of hosts (host[1\-5,7,...] for example), or a filename. The host list will be assumed to be a filename if it contains a "/" character.
If you specify a minimum node or processor count larger than can be satisfied by the supplied host list, additional resources will be allocated on other nodes as needed. Duplicate node names in the list will be ignored. The order of the node names in the list is not important; the node names will be sorted by Slurm. .TP \fB\-\-wait\-all\-nodes\fR=<\fIvalue\fR> Controls when the execution of the command begins. By default the job will begin execution as soon as the allocation is made. .RS .TP 5 0 Begin execution as soon as allocation can be made. Do not wait for all nodes to be ready for use (i.e. booted). .TP 1 Do not begin execution until all nodes are ready for use. .RE .TP \fB\-\-wckey\fR=<\fIwckey\fR> Specify wckey to be used with job. If TrackWCKey=no (default) in the slurm.conf this value is ignored. .TP \fB\-\-wrap\fR=<\fIcommand string\fR> Sbatch will wrap the specified command string in a simple "sh" shell script, and submit that script to the slurm controller. When \-\-wrap is used, a script name and arguments may not be specified on the command line; instead the sbatch-generated wrapper script is used. .TP \fB\-x\fR, \fB\-\-exclude\fR=<\fInode name list\fR> Explicitly exclude certain nodes from the resources granted to the job. .PP The following options support Blue Gene systems, but may be applicable to other systems as well. .TP \fB\-\-blrts\-image\fR=<\fIpath\fR> Path to Blue Gene\/L Run Time Supervisor, or blrts, image for bluegene block. BGL only. Default from \fIbluegene.conf\fR if not set. .TP \fB\-\-cnload\-image\fR=<\fIpath\fR> Path to compute node image for bluegene block. BGP only. Default from \fIbluegene.conf\fR if not set. .TP \fB\-\-conn\-type\fR=<\fItype\fR> Require the block connection type to be of a certain type. On Blue Gene the acceptable values of \fItype\fR are MESH, TORUS and NAV. If NAV, or if not set, then Slurm will try to fit what the DefaultConnType is set to in the bluegene.conf; if that isn't set, the default is TORUS.
You should not normally set this option. If running on a BGP system and wanting to run in HTC mode (only for 1 midplane and below), you can use HTC_S for SMP, HTC_D for Dual, HTC_V for virtual node mode, and HTC_L for Linux mode. For systems that allow a different connection type per dimension, a comma separated list of connection types may be specified, one for each dimension (i.e. M,T,T,T will give you a torus connection in all dimensions except the first). .TP \fB\-g\fR, \fB\-\-geometry\fR=<\fIXxYxZ\fR> | <\fIAxXxYxZ\fR> Specify the geometry requirements for the job. On BlueGene/L and BlueGene/P systems there are three numbers giving dimensions in the X, Y and Z directions, while on BlueGene/Q systems there are four numbers giving dimensions in the A, X, Y and Z directions and can not be used to allocate sub-blocks. For example "\-\-geometry=1x2x3x4", specifies a block of nodes having 1 x 2 x 3 x 4 = 24 nodes (actually midplanes on BlueGene). .TP \fB\-\-ioload\-image\fR=<\fIpath\fR> Path to io image for bluegene block. BGP only. Default from \fIbluegene.conf\fR if not set. .TP \fB\-\-linux\-image\fR=<\fIpath\fR> Path to linux image for bluegene block. BGL only. Default from \fIbluegene.conf\fR if not set. .TP \fB\-\-mloader\-image\fR=<\fIpath\fR> Path to mloader image for bluegene block. Default from \fIbluegene.conf\fR if not set. .TP \fB\-R\fR, \fB\-\-no\-rotate\fR Disables rotation of the job's requested geometry in order to fit an appropriate block. By default the specified geometry can rotate in three dimensions. .TP \fB\-\-ramdisk\-image\fR=<\fIpath\fR> Path to ramdisk image for bluegene block. BGL only. Default from \fIbluegene.conf\fR if not set. .SH "INPUT ENVIRONMENT VARIABLES" .PP Upon startup, sbatch will read and handle the options set in the following environment variables. Note that environment variables will override any options set in a batch script, and command line options will override any environment variables.
.TP 22 \fBSBATCH_ACCOUNT\fR Same as \fB\-A, \-\-account\fR .TP \fBSBATCH_ACCTG_FREQ\fR Same as \fB\-\-acctg\-freq\fR .TP \fBSBATCH_ARRAY_INX\fR Same as \fB\-a, \-\-array\fR .TP \fBSBATCH_BLRTS_IMAGE\fR Same as \fB\-\-blrts\-image\fR .TP \fBSBATCH_BURST_BUFFER\fR Same as \fB\-\-bb\fR .TP \fBSBATCH_CHECKPOINT\fR Same as \fB\-\-checkpoint\fR .TP \fBSBATCH_CHECKPOINT_DIR\fR Same as \fB\-\-checkpoint\-dir\fR .TP \fBSBATCH_CLUSTERS\fR or \fBSLURM_CLUSTERS\fR Same as \fB\-\-clusters\fR .TP \fBSBATCH_CNLOAD_IMAGE\fR Same as \fB\-\-cnload\-image\fR .TP \fBSBATCH_CONN_TYPE\fR Same as \fB\-\-conn\-type\fR .TP \fBSBATCH_CORE_SPEC\fR Same as \fB\-\-core\-spec\fR .TP \fBSBATCH_DEBUG\fR Same as \fB\-v, \-\-verbose\fR .TP \fBSBATCH_DISTRIBUTION\fR Same as \fB\-m, \-\-distribution\fR .TP \fBSBATCH_EXCLUSIVE\fR Same as \fB\-\-exclusive\fR .TP \fBSBATCH_EXPORT\fR Same as \fB\-\-export\fR .TP \fBSBATCH_GEOMETRY\fR Same as \fB\-g, \-\-geometry\fR .TP \fBSBATCH_GET_USER_ENV\fR Same as \fB\-\-get\-user\-env\fR .TP \fBSBATCH_HINT\fR or \fBSLURM_HINT\fR Same as \fB\-\-hint\fR .TP \fBSBATCH_IGNORE_PBS\fR Same as \fB\-\-ignore\-pbs\fR .TP \fBSBATCH_IMMEDIATE\fR Same as \fB\-I, \-\-immediate\fR .TP \fBSBATCH_IOLOAD_IMAGE\fR Same as \fB\-\-ioload\-image\fR .TP \fBSBATCH_JOBID\fR Same as \fB\-\-jobid\fR .TP \fBSBATCH_JOB_NAME\fR Same as \fB\-J, \-\-job\-name\fR .TP \fBSBATCH_LINUX_IMAGE\fR Same as \fB\-\-linux\-image\fR .TP \fBSBATCH_MEM_BIND\fR Same as \fB\-\-mem_bind\fR .TP \fBSBATCH_MLOADER_IMAGE\fR Same as \fB\-\-mloader\-image\fR .TP \fBSBATCH_NETWORK\fR Same as \fB\-\-network\fR .TP \fBSBATCH_NO_REQUEUE\fR Same as \fB\-\-no\-requeue\fR .TP \fBSBATCH_NO_ROTATE\fR Same as \fB\-R, \-\-no\-rotate\fR .TP \fBSBATCH_OPEN_MODE\fR Same as \fB\-\-open\-mode\fR .TP \fBSBATCH_OVERCOMMIT\fR Same as \fB\-O, \-\-overcommit\fR .TP \fBSBATCH_PARTITION\fR Same as \fB\-p, \-\-partition\fR .TP \fBSBATCH_POWER\fR Same as \fB\-\-power\fR .TP \fBSBATCH_PROFILE\fR Same as \fB\-\-profile\fR .TP \fBSBATCH_QOS\fR 
Same as \fB\-\-qos\fR .TP \fBSBATCH_RAMDISK_IMAGE\fR Same as \fB\-\-ramdisk\-image\fR .TP \fBSBATCH_RESERVATION\fR Same as \fB\-\-reservation\fR .TP \fBSBATCH_REQ_SWITCH\fR When a tree topology is used, this defines the maximum count of switches desired for the job allocation and optionally the maximum time to wait for that number of switches. See \fB\-\-switches\fR .TP \fBSBATCH_REQUEUE\fR Same as \fB\-\-requeue\fR .TP \fBSBATCH_SICP\fR Same as \fB\-\-sicp\fR .TP \fBSBATCH_SIGNAL\fR Same as \fB\-\-signal\fR .TP \fBSBATCH_THREAD_SPEC\fR Same as \fB\-\-thread\-spec\fR .TP \fBSBATCH_TIMELIMIT\fR Same as \fB\-t, \-\-time\fR .TP \fBSBATCH_WAIT_ALL_NODES\fR Same as \fB\-\-wait\-all\-nodes\fR .TP \fBSBATCH_WAIT4SWITCH\fR Max time waiting for requested switches. See \fB\-\-switches\fR .TP \fBSBATCH_WCKEY\fR Same as \fB\-\-wckey\fR .TP \fBSLURM_CONF\fR The location of the Slurm configuration file. .TP \fBSLURM_EXIT_ERROR\fR Specifies the exit code generated when a Slurm error occurs (e.g. invalid options). This can be used by a script to distinguish application exit codes from various Slurm error conditions. .TP \fBSLURM_STEP_KILLED_MSG_NODE_ID\fR=ID If set, only the specified node will log when the job or step is killed by a signal. .SH "OUTPUT ENVIRONMENT VARIABLES" .PP The Slurm controller will set the following variables in the environment of the batch script. .TP \fBBASIL_RESERVATION_ID\fR The reservation ID on Cray systems running ALPS/BASIL only. .TP \fBMPIRUN_NOALLOCATE\fR Do not allocate a block on Blue Gene L/P systems only. .TP \fBMPIRUN_NOFREE\fR Do not free a block on Blue Gene L/P systems only. .TP \fBMPIRUN_PARTITION\fR The block name on Blue Gene systems only. .TP \fBSBATCH_CPU_BIND\fR Set to the value of the \fB\-\-cpu_bind\fR option. .TP \fBSBATCH_CPU_BIND_VERBOSE\fR Set to "verbose" if the \fB\-\-cpu_bind\fR option includes the verbose option. Set to "quiet" otherwise.
.TP \fBSBATCH_CPU_BIND_TYPE\fR Set to the CPU binding type specified with the \fB\-\-cpu_bind\fR option. Possible values are two comma separated strings. The first string identifies the entity to be bound to: "threads", "cores", "sockets", "ldoms" and "boards". The second string identifies the manner in which tasks are bound: "none", "rank", "map_cpu", "mask_cpu", "rank_ldom", "map_ldom" or "mask_ldom". .TP \fBSBATCH_CPU_BIND_LIST\fR Set to the bit mask used for CPU binding. .TP \fBSBATCH_MEM_BIND\fR Set to the value of the \fB\-\-mem_bind\fR option. .TP \fBSBATCH_MEM_BIND_VERBOSE\fR Set to "verbose" if the \fB\-\-mem_bind\fR option includes the verbose option. Set to "quiet" otherwise. .TP \fBSBATCH_MEM_BIND_TYPE\fR Set to the memory binding type specified with the \fB\-\-mem_bind\fR option. Possible values are "none", "rank", "map_mem", "mask_mem" and "local". .TP \fBSBATCH_MEM_BIND_LIST\fR Set to the bit mask used for memory binding. .TP \fBSLURM_ARRAY_TASK_ID\fR Job array ID (index) number. .TP \fBSLURM_ARRAY_TASK_MAX\fR Job array's maximum ID (index) number. .TP \fBSLURM_ARRAY_TASK_MIN\fR Job array's minimum ID (index) number. .TP \fBSLURM_ARRAY_TASK_STEP\fR Job array's index step size. .TP \fBSLURM_ARRAY_JOB_ID\fR Job array's master job ID number. .TP \fBSLURM_CHECKPOINT_IMAGE_DIR\fR Directory into which checkpoint images should be written if specified on the execute line. .TP \fBSLURM_CLUSTER_NAME\fR Name of the cluster on which the job is executing. .TP \fBSLURM_CPUS_ON_NODE\fR Number of CPUS on the allocated node. .TP \fBSLURM_CPUS_PER_TASK\fR Number of cpus requested per task. Only set if the \fB\-\-cpus\-per\-task\fR option is specified. .TP \fBSLURM_DISTRIBUTION\fR Same as \fB\-m, \-\-distribution\fR .TP \fBSLURM_GTIDS\fR Global task IDs running on this node. Zero origin and comma separated. .TP \fBSLURM_JOB_ID\fR (and \fBSLURM_JOBID\fR for backwards compatibility) The ID of the job allocation.
.TP \fBSLURM_JOB_CPUS_PER_NODE\fR Count of processors available to the job on this node. Note the select/linear plugin allocates entire nodes to jobs, so the value indicates the total count of CPUs on the node. The select/cons_res plugin allocates individual processors to jobs, so this number indicates the number of processors on this node allocated to the job. .TP \fBSLURM_JOB_DEPENDENCY\fR Set to the value of the \fB\-\-dependency\fR option. .TP \fBSLURM_JOB_NAME\fR Name of the job. .TP \fBSLURM_JOB_NODELIST\fR (and \fBSLURM_NODELIST\fR for backwards compatibility) List of nodes allocated to the job. .TP \fBSLURM_JOB_NUM_NODES\fR (and \fBSLURM_NNODES\fR for backwards compatibility) Total number of nodes in the job's resource allocation. .TP \fBSLURM_JOB_PARTITION\fR Name of the partition in which the job is running. .TP \fBSLURM_LOCALID\fR Node local task ID for the process within a job. .TP \fBSLURM_NODE_ALIASES\fR Sets of node name, communication address and hostname for nodes allocated to the job from the cloud. Each element in the set is colon separated and each set is comma separated. For example: SLURM_NODE_ALIASES=ec0:1.2.3.4:foo,ec1:1.2.3.5:bar .TP \fBSLURM_NODEID\fR ID of the nodes allocated. .TP \fBSLURMD_NODENAME\fR Name of the node running the job script. .TP \fBSLURM_NTASKS\fR (and \fBSLURM_NPROCS\fR for backwards compatibility) Same as \fB\-n, \-\-ntasks\fR .TP \fBSLURM_NTASKS_PER_CORE\fR Number of tasks requested per core. Only set if the \fB\-\-ntasks\-per\-core\fR option is specified. .TP \fBSLURM_NTASKS_PER_NODE\fR Number of tasks requested per node. Only set if the \fB\-\-ntasks\-per\-node\fR option is specified. .TP \fBSLURM_NTASKS_PER_SOCKET\fR Number of tasks requested per socket. Only set if the \fB\-\-ntasks\-per\-socket\fR option is specified. .TP \fBSLURM_PRIO_PROCESS\fR The scheduling priority (nice value) at the time of job submission. This value is propagated to the spawned processes.
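Several of these variables are meant to be parsed by the batch script itself. For example, the compact "count(x#)" format used by SLURM_TASKS_PER_NODE (described below) can be expanded into one count per node with a small shell function. This is an illustrative sketch, not part of Slurm:

```shell
# Expand a SLURM_TASKS_PER_NODE-style string such as "2(x3),1"
# into one task count per node: "2 2 2 1".
expand_tasks_per_node() {
    spec=$1; out=""
    oldIFS=$IFS; IFS=','
    for item in $spec; do
        case $item in
            *"(x"*")")
                # "2(x3)" -> count=2, reps=3
                count=${item%%\(*}
                reps=${item#*x}; reps=${reps%\)}
                ;;
            *) count=$item; reps=1 ;;
        esac
        i=0
        while [ "$i" -lt "$reps" ]; do
            out="$out$count "
            i=$((i + 1))
        done
    done
    IFS=$oldIFS
    printf '%s\n' "${out% }"
}

expand_tasks_per_node "2(x3),1"
```

For the documented example "2(x3),1" this prints "2 2 2 1", matching the first three nodes running two tasks each and the fourth running one.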
.TP \fBSLURM_PROCID\fR The MPI rank (or relative process ID) of the current process .TP \fBSLURM_PROFILE\fR Same as \fB\-\-profile\fR .TP \fBSLURM_RESTART_COUNT\fR If the job has been restarted due to system failure or has been explicitly requeued, this will be set to the number of times the job has been restarted. .TP \fBSLURM_SUBMIT_DIR\fR The directory from which \fBsbatch\fR was invoked. .TP \fBSLURM_SUBMIT_HOST\fR The hostname of the computer from which \fBsbatch\fR was invoked. .TP \fBSLURM_TASKS_PER_NODE\fR Number of tasks to be initiated on each node. Values are comma separated and in the same order as SLURM_NODELIST. If two or more consecutive nodes are to have the same task count, that count is followed by "(x#)" where "#" is the repetition count. For example, "SLURM_TASKS_PER_NODE=2(x3),1" indicates that the first three nodes will each execute two tasks and the fourth node will execute one task. .TP \fBSLURM_TASK_PID\fR The process ID of the task being started. .TP \fBSLURM_TOPOLOGY_ADDR\fR This is set only if the system has the topology/tree plugin configured. The value will be set to the names of the network switches which may be involved in the job's communications, from the system's top level switch down to the leaf switch and ending with the node name. A period is used to separate each hardware component name. .TP \fBSLURM_TOPOLOGY_ADDR_PATTERN\fR This is set only if the system has the topology/tree plugin configured. The value will be set to the component types listed in SLURM_TOPOLOGY_ADDR. Each component will be identified as either "switch" or "node". A period is used to separate each hardware component type. .SH "EXAMPLES" .LP Specify a batch script by filename on the command line. The batch script specifies a 1 minute time limit for the job.
.IP $ cat myscript .br #!/bin/sh .br #SBATCH \-\-time=1 .br srun hostname |sort .br .br $ sbatch \-N4 myscript .br sbatch: Submitted batch job 65537 .br .br $ cat slurm\-65537.out .br host1 .br host2 .br host3 .br host4 .LP Pass a batch script to sbatch on standard input: .IP $ sbatch \-N4 <<EOF .br > #!/bin/sh .br > srun hostname |sort .br > EOF .br sbatch: Submitted batch job 65541 .br .br $ cat slurm\-65541.out .br host1 .br host2 .br host3 .br host4 .SH "COPYING" Copyright (C) 2006\-2007 The Regents of the University of California. Produced at Lawrence Livermore National Laboratory (cf, DISCLAIMER). .br Copyright (C) 2008\-2010 Lawrence Livermore National Security. .br Copyright (C) 2010\-2015 SchedMD LLC. .LP This file is part of Slurm, a resource management program. For details, see . .LP Slurm is free software; you can redistribute it and/or modify it under the terms of the GNU General Public License as published by the Free Software Foundation; either version 2 of the License, or (at your option) any later version. .LP Slurm is distributed in the hope that it will be useful, but WITHOUT ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU General Public License for more details. .SH "SEE ALSO" .LP \fBsinfo\fR(1), \fBsattach\fR(1), \fBsalloc\fR(1), \fBsqueue\fR(1), \fBscancel\fR(1), \fBscontrol\fR(1), \fBslurm.conf\fR(5), \fBsched_setaffinity\fR (2), \fBnuma\fR (3) .TH sbcast "1" "Slurm Commands" "April 2015" "Slurm Commands" .SH "NAME" sbcast \- transmit a file to the nodes allocated to a Slurm job. .SH "SYNOPSIS" \fBsbcast\fR [\-CfFjpstvV] SOURCE DEST .SH "DESCRIPTION" \fBsbcast\fR is used to transmit a file to all nodes allocated to the currently active Slurm job.
This command should only be executed from within a Slurm batch job or within the shell spawned after a Slurm job's resource allocation. \fBSOURCE\fR is the name of a file on the current node. \fBDEST\fR should be the fully qualified pathname for the file copy to be created on each node. \fBDEST\fR should be on a file system local to that node. Note that parallel file systems \fImay\fR provide better performance than \fBsbcast\fR can provide, although performance will vary by file size, degree of parallelism, and network type. .SH "OPTIONS" .TP \fB\-C\fR, \fB\-\-compress\fR Compress the file being transmitted. .TP \fB\-f\fR, \fB\-\-force\fR If the destination file already exists, replace it. .TP \fB\-F\fR \fInumber\fR, \fB\-\-fanout\fR=\fInumber\fR Specify the fanout of messages used for file transfer. Maximum value is currently eight. .TP \fB\-j\fR \fIjobID[.stepID]\fR, \fB\-\-jobid\fR=\fIjobID[.stepID]\fR Specify the job ID to use with optional step ID. If run inside an allocation this is unneeded as the job ID will be read from the environment. .TP \fB\-p\fR, \fB\-\-preserve\fR Preserves modification times, access times, and modes from the original file. .TP \fB\-s\fR \fIsize\fR, \fB\-\-size\fR=\fIsize\fR Specify the block size used for file broadcast. The size can have a suffix of \fIk\fR or \fIm\fR for kilobytes or megabytes respectively (defaults to bytes). This size is subject to rounding and range limits to maintain good performance. This value may need to be set on systems with very limited memory. .TP \fB\-t\fR \fIseconds\fR, \fB\-\-timeout\fR=\fIseconds\fR Specify the message timeout in seconds. The default value is \fIMessageTimeout\fR as reported by "scontrol show config". Setting a higher value may be necessitated by relatively slow I/O performance on the compute node disks. .TP \fB\-v\fR, \fB\-\-verbose\fR Provide detailed event logging through program execution. .TP \fB\-V\fR, \fB\-\-version\fR Print version information and exit.
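The \fIk\fR and \fIm\fR suffixes accepted by \fB\-\-size\fR boil down to simple arithmetic. As a rough illustration only (the to_bytes helper is an invented name; \fBsbcast\fR performs this conversion internally, along with the rounding and range limits noted above):

```shell
# Convert a --size style argument to bytes: a trailing "k" means kilobytes,
# a trailing "m" means megabytes, and a bare number is taken as bytes.
to_bytes() {
    case $1 in
    *k) echo $(( ${1%k} * 1024 )) ;;
    *m) echo $(( ${1%m} * 1024 * 1024 )) ;;
    *)  echo "$1" ;;
    esac
}

to_bytes 512k   # prints 524288
to_bytes 2m     # prints 2097152
```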
.SH "ENVIRONMENT VARIABLES" .PP Some \fBsbcast\fR options may be set via environment variables. These environment variables, along with their corresponding options, are listed below. (Note: Command line options will always override these settings.) .TP 20 \fBSBCAST_COMPRESS\fR \fB\-C, \-\-compress\fR .TP \fBSBCAST_FANOUT\fR \fB\-F\fR \fInumber\fR, \fB\-\-fanout\fR=\fInumber\fR .TP \fBSBCAST_FORCE\fR \fB\-f, \-\-force\fR .TP \fBSBCAST_PRESERVE\fR \fB\-p, \-\-preserve\fR .TP \fBSBCAST_SIZE\fR \fB\-s\fR \fIsize\fR, \fB\-\-size\fR=\fIsize\fR .TP \fBSBCAST_TIMEOUT\fR \fB\-t\fR \fIseconds\fR, \fB\-\-timeout\fR=\fIseconds\fR .TP \fBSLURM_CONF\fR The location of the Slurm configuration file. .SH "AUTHORIZATION" When using the Slurm db, users who have an AdminLevel defined (Operator or Admin) and users who are account coordinators are given the authority to invoke sbcast on other users' jobs. .SH "EXAMPLE" Using a batch script, transmit the local file \fBmy.prog\fR to \fB/tmp/my.prog\fR on the allocated nodes and then execute it. .nf > cat my.job #!/bin/bash sbcast my.prog /tmp/my.prog srun /tmp/my.prog > sbatch \-\-nodes=8 my.job srun: jobid 12345 submitted .fi .SH "COPYING" Copyright (C) 2006\-2010 The Regents of the University of California. Produced at Lawrence Livermore National Laboratory (cf, DISCLAIMER). .br Copyright (C) 2010\-2013 SchedMD LLC. .LP This file is part of Slurm, a resource management program. For details, see <http://slurm.schedmd.com/>. .LP Slurm is free software; you can redistribute it and/or modify it under the terms of the GNU General Public License as published by the Free Software Foundation; either version 2 of the License, or (at your option) any later version. .LP Slurm is distributed in the hope that it will be useful, but WITHOUT ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU General Public License for more details.
.SH "SEE ALSO" \fBsrun\fR(1) .TH scancel "1" "Slurm Commands" "April 2015" "Slurm Commands" .SH "NAME" scancel \- Used to signal jobs or job steps that are under the control of Slurm. .SH "SYNOPSIS" \fBscancel\fR [\fIOPTIONS\fR...] [\fIjob_id\fR[_\fIarray_id\fR][.\fIstep_id\fR]] [\fIjob_id\fR[_\fIarray_id\fR][.\fIstep_id\fR]...] .SH "DESCRIPTION" \fBscancel\fR is used to signal or cancel jobs, job arrays or job steps. An arbitrary number of jobs or job steps may be signaled using job specification filters or a space separated list of specific job and/or job step IDs. If the job ID of a job array is specified with an array ID value then only that job array element will be cancelled. If the job ID of a job array is specified without an array ID value then all job array elements will be cancelled. A job or job step can only be signaled by the owner of that job or user root. If an attempt is made by an unauthorized user to signal a job or job step, an error message will be printed and the job will not be signaled. .SH "OPTIONS" .TP \fB\-A\fR, \fB\-\-account\fR=\fIaccount\fR Restrict the scancel operation to jobs under this charge account. .TP \fB\-b\fR, \fB\-\-batch\fR Signal only the batch step (the shell script), but not any other steps nor any children of the shell script. This is useful when the shell script has to trap the signal and take some application defined action. This is not applicable if \fIstep_id\fR is specified. NOTE: The shell itself may exit upon receipt of many signals. You may avoid this by explicitly trapping signals within the shell script (e.g. "trap "). See the shell documentation for details. Also see the \fB\-f\fR, \fB\-\-full\fR option. .TP \fB\-\-ctld\fR Send the job signal request to the slurmctld daemon rather than directly to the slurmd daemons. This increases overhead, but offers better fault tolerance.
This is the default behavior on architectures using front end nodes (e.g. BlueGene and Cray computers) or when the \fB\-\-clusters\fR option is used. .TP \fB\-f\fR, \fB\-\-full\fR Signal all steps associated with the job including any batch step (the shell script plus all of its child processes). By default, signals other than SIGKILL are not sent to the batch step. Also see the \fB\-b\fR, \fB\-\-batch\fR option. .TP \fB\-\-help\fR Print a help message describing all \fBscancel\fR options. .TP \fB\-i\fR, \fB\-\-interactive\fR Interactive mode. Confirm each job_id.step_id before performing the cancel operation. .TP \fB\-M\fR, \fB\-\-clusters\fR=<\fIstring\fR> Cluster to issue commands to. .TP \fB\-n\fR, \fB\-\-jobname\fR=\fIjob_name\fR, \fB\-\-name\fR=\fIjob_name\fR Restrict the scancel operation to jobs with this job name. .TP \fB\-p\fR, \fB\-\-partition\fR=\fIpartition_name\fR Restrict the scancel operation to jobs in this partition. .TP \fB\-q\fR, \fB\-\-qos\fR=\fIqos\fR Restrict the scancel operation to jobs with this quality of service. .TP \fB\-Q\fR, \fB\-\-quiet\fR Do not report an error if the specified job is already completed. This option is incompatible with the \fB\-\-verbose\fR option. .TP \fB\-R\fR, \fB\-\-reservation\fR=\fIreservation_name\fR Restrict the scancel operation to jobs with this reservation name. .TP \fB\-s\fR, \fB\-\-signal\fR=\fIsignal_name\fR The name or number of the signal to send. If this option is not used the specified job or step will be terminated. \fBNote:\fR If this option is used, the signal is sent directly to the slurmd daemon where the job is running, bypassing slurmctld; thus the job state will not change even if the signal is delivered to it. Use the \fIscontrol\fR command if you want the job state change to be known to slurmctld. .TP \fB\-t\fR, \fB\-\-state\fR=\fIjob_state_name\fR Restrict the scancel operation to jobs in this state. \fIjob_state_name\fR may have a value of either "PENDING", "RUNNING" or "SUSPENDED".
.TP \fB\-u\fR, \fB\-\-user\fR=\fIuser_name\fR Restrict the scancel operation to jobs owned by this user. .TP \fB\-\-usage\fR Print a brief help message listing the \fBscancel\fR options. .TP \fB\-v\fR, \fB\-\-verbose\fR Print additional logging. Multiple v's increase logging detail. This option is incompatible with the \fB\-\-quiet\fR option. .TP \fB\-V\fR, \fB\-\-version\fR Print the version number of the scancel command. .TP \fB\-w\fR, \fB\-\-nodelist=\fIhost1,host2,...\fR Cancel any jobs using any of the given hosts. The list may be specified as a comma\-separated list of hosts, a range of hosts (host[1\-5,7,...] for example), or a filename. The host list will be assumed to be a filename only if it contains a "/" character. .TP \fB\-\-wckey\fR=\fIwckey\fR Restrict the scancel operation to jobs using this workload characterization key. .TP ARGUMENTS .TP \fIjob_id\fP The Slurm job ID to be signaled. .TP \fIstep_id\fP The step ID of the job step to be signaled. If not specified, the operation is performed at the level of a job. If neither \fB\-\-batch\fR nor \fB\-\-signal\fR are used, the entire job will be terminated. When \fB\-\-batch\fR is used, the batch shell processes will be signaled. The child processes of the shell will not be signaled by Slurm, but the shell may forward the signal. When \fB\-\-batch\fR is not used but \fB\-\-signal\fR is used, then all job steps will be signaled, but the batch script itself will not be signaled. .SH "ENVIRONMENT VARIABLES" .PP Some \fBscancel\fR options may be set via environment variables. These environment variables, along with their corresponding options, are listed below.
(Note: Command line options will always override these settings.) .TP 20 \fBSCANCEL_ACCOUNT\fR \fB\-A\fR, \fB\-\-account\fR=\fIaccount\fR .TP \fBSCANCEL_BATCH\fR \fB\-b, \-\-batch\fR .TP \fBSCANCEL_CTLD\fR \fB\-\-ctld\fR .TP \fBSCANCEL_FULL\fR \fB\-f, \-\-full\fR .TP \fBSCANCEL_INTERACTIVE\fR \fB\-i\fR, \fB\-\-interactive\fR .TP \fBSCANCEL_NAME\fR \fB\-n\fR, \fB\-\-name\fR=\fIjob_name\fR .TP \fBSCANCEL_PARTITION\fR \fB\-p\fR, \fB\-\-partition\fR=\fIpartition_name\fR .TP \fBSCANCEL_QOS\fR \fB\-q\fR, \fB\-\-qos\fR=\fIqos\fR .TP \fBSCANCEL_STATE\fR \fB\-t\fR, \fB\-\-state\fR=\fIjob_state_name\fR .TP \fBSCANCEL_USER\fR \fB\-u\fR, \fB\-\-user\fR=\fIuser_name\fR .TP \fBSCANCEL_VERBOSE\fR \fB\-v\fR, \fB\-\-verbose\fR .TP \fBSCANCEL_WCKEY\fR \fB\-\-wckey\fR=\fIwckey\fR .TP \fBSLURM_CONF\fR The location of the Slurm configuration file. .SH "NOTES" .LP If multiple filters are supplied (e.g. \fB\-\-partition\fR and \fB\-\-name\fR) only the jobs satisfying all of the filtering options will be signaled. .LP Cancelling a job step will not result in the job being terminated. The job must be cancelled to release a resource allocation. .LP To cancel a job, invoke \fBscancel\fR without the \-\-signal option. This will first send SIGCONT to all steps (to wake them up if suspended), followed by SIGTERM, then wait the KillWait duration defined in the slurm.conf file, and finally, if the steps have not terminated, send SIGKILL. This gives time for the running job/step(s) to clean up. .LP If a signal value of "KILL" is sent to an entire job, this will cancel the active job steps but not cancel the job itself. .LP On Cray systems, all signals \fBexcept\fR SIGCHLD, SIGCONT, SIGSTOP, SIGTSTP, SIGTTIN, SIGTTOU, SIGURG, or SIGWINCH cause the ALPS reservation to be released. The job however will not be terminated except in the case of SIGKILL and may then be used for post processing.
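The SIGCONT, SIGTERM, SIGKILL sequence described above is the reason a batch script may want to trap SIGTERM: the handler has roughly KillWait seconds to clean up before SIGKILL arrives. A minimal sketch, assuming a hypothetical cleanup action (the cleanup function name and marker file are invented for illustration):

```shell
#!/bin/sh
# A script that reacts to the SIGTERM which scancel sends before SIGKILL.
FLAG=$(mktemp)    # hypothetical marker standing in for real cleanup work

cleanup() {
    echo "terminated, cleaning up" > "$FLAG"
    # a real batch script would remove scratch files, flush output, etc.
}
trap cleanup TERM

# Simulate the SIGTERM that scancel would deliver to the batch step:
kill -TERM $$

cat "$FLAG"       # prints: terminated, cleaning up
```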
.SH "AUTHORIZATION" When using the Slurm db, users who have an AdminLevel defined (Operator or Admin) and users who are account coordinators are given the authority to invoke scancel on other users' jobs. .SH "EXAMPLES" .TP Send SIGTERM to steps 1 and 3 of job 1234: scancel \-\-signal=TERM 1234.1 1234.3 .TP Cancel job 1234 along with all of its steps: scancel 1234 .TP Send SIGKILL to all steps of job 1235, but do not cancel the job itself: scancel \-\-signal=KILL 1235 .TP Send SIGUSR1 to the batch shell processes of job 1236: scancel \-\-signal=USR1 \-\-batch 1236 .TP Cancel all pending jobs belonging to user "bob" in partition "debug": scancel \-\-state=PENDING \-\-user=bob \-\-partition=debug .TP Cancel only array ID 4 of job array 1237: scancel 1237_4 .SH "COPYING" Copyright (C) 2002\-2007 The Regents of the University of California. Produced at Lawrence Livermore National Laboratory (cf, DISCLAIMER). .br Copyright (C) 2008\-2011 Lawrence Livermore National Security. .br Copyright (C) 2010\-2015 SchedMD LLC. .LP This file is part of Slurm, a resource management program. For details, see <http://slurm.schedmd.com/>. .LP Slurm is free software; you can redistribute it and/or modify it under the terms of the GNU General Public License as published by the Free Software Foundation; either version 2 of the License, or (at your option) any later version. .LP Slurm is distributed in the hope that it will be useful, but WITHOUT ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU General Public License for more details. .SH "SEE ALSO" \fBslurm_kill_job\fR (3), \fBslurm_kill_job_step\fR (3) .TH scontrol "1" "Slurm Commands" "September 2015" "Slurm Commands" .SH "NAME" scontrol \- Used to view and modify Slurm configuration and state. .SH "SYNOPSIS" \fBscontrol\fR [\fIOPTIONS\fR...] [\fICOMMAND\fR...]
.SH "DESCRIPTION" \fBscontrol\fR is used to view or modify Slurm configuration including: job, job step, node, partition, reservation, and overall system configuration. Most of the commands can only be executed by user root. If an attempt to view or modify configuration information is made by an unauthorized user, an error message will be printed and the requested action will not occur. If no command is entered on the execute line, \fBscontrol\fR will operate in an interactive mode and prompt for input. It will continue prompting for input and executing commands until explicitly terminated. If a command is entered on the execute line, \fBscontrol\fR will execute that command and terminate. All commands and options are case\-insensitive, although node names, partition names, and reservation names are case\-sensitive (node names "LX" and "lx" are distinct). All commands and options can be abbreviated to the extent that the specification is unique. A modified Slurm configuration can be written to a file using the \fIscontrol write config\fR command. The resulting file will be named using the convention "slurm.conf.<datetime>" and located in the same directory as the original "slurm.conf" file. The directory containing the original slurm.conf must be writable for this to occur. .SH "OPTIONS" .TP \fB\-a\fR, \fB\-\-all\fR When the \fIshow\fR command is used, then display all partitions, their jobs and job steps. This causes information to be displayed about partitions that are configured as hidden and partitions that are unavailable to the user's group. .TP \fB\-d\fR, \fB\-\-details\fR Causes the \fIshow\fR command to provide additional details where available. Repeating the option more than once (e.g., "\-dd") will cause the \fIshow job\fR command to also list the batch script, if the job was a batch job. .TP \fB\-h\fR, \fB\-\-help\fR Print a help message describing the usage of scontrol.
.TP \fB\-\-hide\fR Do not display information about hidden partitions, their jobs and job steps. By default, neither partitions that are configured as hidden nor those partitions unavailable to the user's group will be displayed (i.e. this is the default behavior). .TP \fB\-M\fR, \fB\-\-clusters\fR=<\fIstring\fR> The cluster to issue commands to. Only one cluster name may be specified. .TP \fB\-o\fR, \fB\-\-oneliner\fR Print information one line per record. .TP \fB\-Q\fR, \fB\-\-quiet\fR Print no warning or informational messages, only fatal error messages. .TP \fB\-v\fR, \fB\-\-verbose\fR Print detailed event logging. Multiple \fB\-v\fR's will further increase the verbosity of logging. By default only errors will be displayed. .TP \fB\-V\fR, \fB\-\-version\fR Print version information and exit. .TP \fBCOMMANDS\fR .TP \fBall\fP Show all partitions, their jobs and job steps. This causes information to be displayed about partitions that are configured as hidden and partitions that are unavailable to the user's group. .TP \fBabort\fP Instruct the Slurm controller to terminate immediately and generate a core file. See "man slurmctld" for information about where the core file will be written. .TP \fBcheckpoint\fP \fICKPT_OP\fP \fIID\fP Perform a checkpoint activity on the job step(s) with the specified identification. \fIID\fP can be used to identify a specific job (e.g. "<job_id>", which applies to all of its existing steps) or a specific job step (e.g. "<job_id>.<step_id>").
Acceptable values for \fICKPT_OP\fP include: .RS .TP 12 \fIable\fP Test if presently not disabled, report start time if checkpoint in progress .TP \fIcreate\fP Create a checkpoint and continue the job or job step .TP \fIdisable\fP Disable future checkpoints .TP \fIenable\fP Enable future checkpoints .TP \fIerror\fP Report the result for the last checkpoint request, error code and message .TP \fIrestart\fP Restart execution of the previously checkpointed job or job step .TP \fIrequeue\fP Create a checkpoint and requeue the batch job, combines vacate and restart operations .TP \fIvacate\fP Create a checkpoint and terminate the job or job step .RE Acceptable values for \fICKPT_OP\fP options include: .RS .TP 20 \fIMaxWait=\fP Maximum time for checkpoint to be written. Default value is 10 seconds. Valid with \fIcreate\fP and \fIvacate\fP options only. .TP \fIImageDir=\fP Location of checkpoint file. Valid with \fIcreate\fP, \fIvacate\fP and \fIrestart\fP options only. This value takes precedence over any \-\-checkpoint\-dir value specified at job submission time. .TP \fIStickToNodes\fP If set, resume the job on the same nodes as previously used. Valid with the \fIrestart\fP option only. .RE .TP \fBcluster\fR \fICLUSTER_NAME\fP The cluster to issue commands to. Only one cluster name may be specified. .TP \fBcreate\fP \fISPECIFICATION\fP Create a new partition or reservation. See the full list of parameters below. Include the tag "res" to create a reservation without specifying a reservation name. .TP \fBcompleting\fP Display all jobs in a COMPLETING state along with associated nodes in either a COMPLETING or DOWN state. .TP \fBdelete\fP \fISPECIFICATION\fP Delete the entry with the specified \fISPECIFICATION\fP. The two \fISPECIFICATION\fP choices are \fIPartitionName=\fP and \fIReservation=\fP. On dynamically laid out BlueGene systems \fIBlockName=\fP also works. Reservations and partitions should have no associated jobs at the time of their deletion (modify the jobs first).
If the specified partition is in use, the request is denied. .TP \fBdetails\fP Causes the \fIshow\fP command to provide additional details where available. Job information will include CPUs and NUMA memory allocated on each node. Note that on computers with hyperthreading enabled and Slurm configured to allocate cores, each listed CPU represents one physical core. Each hyperthread on that core can be allocated a separate task, so a job's CPU count and task count may differ. See the \fB\-\-cpu_bind\fR and \fB\-\-mem_bind\fR option descriptions in srun man pages for more information. The \fBdetails\fP option is currently only supported for the \fIshow job\fP command. To also list the batch script for batch jobs, in addition to the details, use the \fBscript\fP option described below instead of this option. .TP \fBerrnumstr\fP \fIERRNO\fP Given a Slurm error number, return a descriptive string. .TP \fBexit\fP Terminate the execution of scontrol. This is an independent command with no options meant for use in interactive mode. .TP \fBhelp\fP Display a description of scontrol options and commands. .TP \fBhide\fP Do not display partition, job or job step information for partitions that are configured as hidden or partitions that are unavailable to the user's group. This is the default behavior. .TP \fBhold\fP \fIjob_list\fP Prevent a pending job from being started (sets its priority to 0). Use the \fIrelease\fP command to permit the job to be scheduled. The job_list argument is a comma separated list of job IDs OR "jobname=" with the job's name, which will attempt to hold all jobs having that name. Note that when a job is held by a system administrator using the \fBhold\fP command, only a system administrator may release the job for execution (also see the \fBuhold\fP command). When the job is held by its owner, it may also be released by the job's owner.
.TP \fBnotify\fP \fIjob_id\fP \fImessage\fP Send a message to standard error of the salloc or srun command or batch job associated with the specified \fIjob_id\fP. .TP \fBoneliner\fP Print information one line per record. .TP \fBpidinfo\fP \fIproc_id\fP Print the Slurm job id and scheduled termination time corresponding to the supplied process id, \fIproc_id\fP, on the current node. This will work only with processes on the node on which scontrol is run, and only for those processes spawned by Slurm and their descendants. .TP \fBlistpids\fP [\fIjob_id\fP[.\fIstep_id\fP]] [\fINodeName\fP] Print a listing of the process IDs in a job step (if JOBID.STEPID is provided), or all of the job steps in a job (if \fIjob_id\fP is provided), or all of the job steps in all of the jobs on the local node (if \fIjob_id\fP is not provided or \fIjob_id\fP is "*"). This will work only with processes on the node on which scontrol is run, and only for those processes spawned by Slurm and their descendants. Note that some Slurm configurations (\fIProctrackType\fP value of \fIpgid\fP or \fIaix\fP) are unable to identify all processes associated with a job or job step. Note that the NodeName option is only really useful when you have multiple slurmd daemons running on the same host machine. Multiple slurmd daemons on one host are, in general, only used by Slurm developers. .TP \fBping\fP Ping the primary and secondary slurmctld daemon and report if they are responding. .TP \fBquiet\fP Print no warning or informational messages, only fatal error messages. .TP \fBquit\fP Terminate the execution of scontrol. .TP \fBreboot_nodes\fP [\fINodeList\fP] Reboot all nodes in the system when they become idle using the \fBRebootProgram\fP as configured in Slurm's slurm.conf file. Accepts an optional list of nodes to reboot. By default all nodes are rebooted.
NOTE: This command does not prevent additional jobs from being scheduled on these nodes, so many jobs can be executed on the nodes prior to them being rebooted. You can explicitly drain the nodes in order to reboot nodes as soon as possible, but the nodes must also explicitly be returned to service after being rebooted. You can alternately create an advanced reservation to prevent additional jobs from being initiated on nodes to be rebooted. NOTE: Nodes will be placed in a state of "MAINT" until rebooted and returned to service with a normal state. Alternately the node's state "MAINT" may be cleared by using the scontrol command to set the node state to "RESUME", which clears the "MAINT" flag. .TP \fBreconfigure\fP Instruct all Slurm daemons to re\-read the configuration file. This command does not restart the daemons. This mechanism would be used to modify configuration parameters (Epilog, Prolog, SlurmctldLogFile, SlurmdLogFile, etc.). The Slurm controller (slurmctld) forwards the request to all other daemons (slurmd daemon on each compute node). Running jobs continue execution. Most configuration parameters can be changed by just running this command, however, Slurm daemons should be shutdown and restarted if any of these parameters are to be changed: AuthType, BackupAddr, BackupController, ControlAddr, ControlMach, PluginDir, StateSaveLocation, SlurmctldPort or SlurmdPort. The slurmctld daemon must be restarted if nodes are added to or removed from the cluster. .TP \fBrelease\fP \fIjob_list\fP Release a previously held job to begin execution. The job_list argument is a comma separated list of job IDs OR "jobname=" with the job's name, which will attempt to release all jobs having that name. Also see \fBhold\fR. .TP \fBrequeue\fP \fIjob_list\fP Requeue a running, suspended or finished Slurm batch job into pending state. The job_list argument is a comma separated list of job IDs.
.TP \fBrequeuehold\fP \fIjob_list\fP Requeue a running, suspended or finished Slurm batch job into pending state, moreover the job is put in a held state (priority zero). The job_list argument is a comma separated list of job IDs. A held job can be released using scontrol to reset its priority (e.g. "scontrol release <job_id>"). The command accepts the following option: .RS .TP \fIState=SpecialExit\fP The "SpecialExit" keyword specifies that the job has to be put in a special state \fBJOB_SPECIAL_EXIT\fP. The "scontrol show job" command will display the JobState as \fBSPECIAL_EXIT\fP, while the "squeue" command will display it as \fBSE\fP. .RE .TP \fBresume\fP \fIjob_list\fP Resume a previously suspended job. The job_list argument is a comma separated list of job IDs. Also see \fBsuspend\fR. \fBNOTE:\fR A suspended job releases its CPUs for allocation to other jobs. Resuming a previously suspended job may result in multiple jobs being allocated the same CPUs, which could trigger gang scheduling with some configurations or severe degradation in performance with other configurations. Use of the scancel command to send SIGSTOP and SIGCONT signals would stop a job without releasing its CPUs for allocation to other jobs and would be a preferable mechanism in many cases. Use with caution. .TP \fBschedloglevel\fP \fILEVEL\fP Enable or disable scheduler logging. \fILEVEL\fP may be "0", "1", "disable" or "enable". "0" has the same effect as "disable". "1" has the same effect as "enable". This value is temporary and will be overwritten when the slurmctld daemon reads the slurm.conf configuration file (e.g. when the daemon is restarted or \fBscontrol reconfigure\fR is executed) if the SlurmSchedLogLevel parameter is present. .TP \fBscript\fP Causes the \fIshow job\fP command to list the batch script for batch jobs in addition to the detail information described under the \fBdetails\fP option above. .TP \fBsetdebug\fP \fILEVEL\fP Change the debug level of the slurmctld daemon.
\fILEVEL\fP may be an integer value between zero and nine (using the same values as \fISlurmctldDebug\fP in the \fIslurm.conf\fP file) or the name of the most detailed message type to be printed: "quiet", "fatal", "error", "info", "verbose", "debug", "debug2", "debug3", "debug4", or "debug5". This value is temporary and will be overwritten whenever the slurmctld daemon reads the slurm.conf configuration file (e.g. when the daemon is restarted or \fBscontrol reconfigure\fR is executed). .TP \fBsetdebugflags\fP [+|\-]\fIFLAG\fP Add or remove DebugFlags of the slurmctld daemon. See "man slurm.conf" for a list of supported DebugFlags. NOTE: Changing the value of some DebugFlags will have no effect without restarting the slurmctld daemon, which would set DebugFlags based upon the contents of the slurm.conf configuration file. .TP \fBshow\fP \fIENTITY\fP \fIID\fP Display the state of the specified entity with the specified identification. \fIENTITY\fP may be \fIaliases\fP, \fIcache\fP, \fIconfig\fP, \fIdaemons\fP, \fIfrontend\fP, \fIjob\fP, \fInode\fP, \fIpartition\fP, \fIpowercap\fP, \fIreservation\fP, \fIslurmd\fP, \fIstep\fP, \fItopology\fP, \fIhostlist\fP, \fIhostlistsorted\fP or \fIhostnames\fP (also \fIblock\fP or \fIsubmp\fP on BlueGene systems). \fIID\fP can be used to identify a specific element of the identified entity: job ID, node name, partition name, reservation name, or job step ID for \fIjob\fP, \fInode\fP, \fIpartition\fP, or \fIstep\fP respectively. For an \fIENTITY\fP of \fItopology\fP, the \fIID\fP may be a node or switch name. If one node name is specified, all switches connected to that node (and their parent switches) will be shown. If more than one node name is specified, only switches that connect to all named nodes will be shown. 
\fIaliases\fP will return all \fINodeName\fP values associated with a given \fINodeHostname\fP (useful to get the list of virtual nodes associated with a real node in a configuration where multiple slurmd daemons execute on a single compute node). \fIcache\fP displays the current contents of the slurmctld's internal cache for users and associations. \fIconfig\fP displays parameter names from the configuration files in mixed case (e.g. SlurmdPort=7003) while derived parameter names are in upper case only (e.g. SLURM_VERSION). \fIhostnames\fP takes an optional hostlist expression as input and writes a list of individual host names to standard output (one per line). If no hostlist expression is supplied, the contents of the SLURM_NODELIST environment variable are used. For example "tux[1\-3]" is mapped to "tux1","tux2" and "tux3" (one hostname per line). \fIhostlist\fP takes a list of host names and prints the hostlist expression for them (the inverse of \fIhostnames\fP). \fIhostlist\fP can also take the absolute pathname of a file (beginning with the character '/') containing a list of hostnames. Multiple node names may be specified using simple node range expressions (e.g. "lx[10\-20]"). All other \fIID\fP values must identify a single element. The job step ID is of the form "job_id.step_id", (e.g. "1234.1"). \fIslurmd\fP reports the current status of the slurmd daemon executing on the same node from which the scontrol command is executed (the local host). It can be useful to diagnose problems. By default \fIhostlist\fP does not sort the node list or make it unique (e.g. tux2,tux1,tux2 = tux[2,1-2]). If you want a sorted list, use \fIhostlistsorted\fP (e.g. tux2,tux1,tux2 = tux[1-2,2]). By default, all elements of the entity type specified are printed. For an \fIENTITY\fP of \fIjob\fP, if the job does not specify socket-per-node, cores-per-socket or threads-per-core then it will display '*' in ReqS:C:T=*:*:* field.
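For the trivial single\-range case, the expansion performed by \fIhostnames\fP can be approximated in plain shell. This is only a sketch (the expand_hostlist function is an invented name; the real command also handles multiple ranges, zero padding and comma lists):

```shell
# Expand "prefix[lo-hi]" one hostname per line, roughly what
# "scontrol show hostnames" prints for this simple form of expression.
expand_hostlist() {
    case $1 in
    *\[*-*\])
        prefix=${1%%\[*}                    # text before the '['
        range=${1##*\[}; range=${range%\]}  # "lo-hi"
        lo=${range%-*}; hi=${range#*-}
        i=$lo
        while [ "$i" -le "$hi" ]; do
            echo "$prefix$i"
            i=$((i + 1))
        done
        ;;
    *)
        echo "$1"                           # not a range expression
        ;;
    esac
}

expand_hostlist "tux[1-3]"   # prints tux1, tux2, tux3 on separate lines
```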
.TP \fBshutdown\fP \fIOPTION\fP Instruct Slurm daemons to save current state and terminate. By default, the Slurm controller (slurmctld) forwards the request to all other daemons (slurmd daemon on each compute node). An \fIOPTION\fP of \fIslurmctld\fP or \fIcontroller\fP results in only the slurmctld daemon being shutdown and the slurmd daemons remaining active. .TP \fBsuspend\fP \fIjob_list\fP Suspend a running job. The job_list argument is a comma separated list of job IDs. Use the \fIresume\fP command to resume its execution. User processes must stop on receipt of SIGSTOP signal and resume upon receipt of SIGCONT for this operation to be effective. Not all architectures and configurations support job suspension. If a suspended job is requeued, it will be placed in a held state. .TP \fBtakeover\fP Instruct Slurm's backup controller (slurmctld) to take over system control. Slurm's backup controller requests control from the primary and waits for its termination. After that, it switches from backup mode to controller mode. If the primary controller cannot be contacted, it directly switches to controller mode. This can be used to speed up the Slurm controller fail\-over mechanism when the primary node is down. This can be used to minimize disruption if the computer executing the primary Slurm controller is scheduled down. (Note: Slurm's primary controller will take control back at startup.) .TP \fBuhold\fP \fIjob_list\fP Prevent a pending job from being started (sets its priority to 0). The job_list argument is a space separated list of job IDs or job names. Use the \fIrelease\fP command to permit the job to be scheduled. This command is designed for a system administrator to hold a job so that the job owner may release it rather than requiring the intervention of a system administrator (also see the \fBhold\fP command). .TP \fBupdate\fP \fISPECIFICATION\fP Update job, step, node, partition, powercapping or reservation configuration per the supplied specification.
\fISPECIFICATION\fP is in the same format as the Slurm configuration file and the output of the \fIshow\fP command described above. It may be desirable to execute the \fIshow\fP command (described above) on the specific entity you wish to update, then use cut\-and\-paste tools to enter updated configuration values into the \fIupdate\fP command. Note that while most configuration values can be changed using this command, not all can be changed using this mechanism. In particular, the hardware configuration of a node or the physical addition or removal of nodes from the cluster may only be accomplished through editing the Slurm configuration file and executing the \fIreconfigure\fP command (described above). .TP \fBverbose\fP Print detailed event logging. This includes time\-stamps on data structures, record counts, etc. .TP \fBversion\fP Display the version number of scontrol being executed. .TP \fBwait_job\fP \fIjob_id\fP Wait until a job and all of its nodes are ready for use or the job has entered some termination state. This option is particularly useful in the Slurm Prolog or in the batch script itself if nodes are powered down and restarted automatically as needed. .TP \fBwrite config\fP Write the current configuration to a file with the naming convention of "slurm.conf.<datetime>" in the same directory as the original slurm.conf file. .TP \fB!!\fP Repeat the last command executed. .TP \fBSPECIFICATIONS FOR UPDATE COMMAND, JOBS\fR .TP \fIAccount\fP= Account name to be changed for this job's resource use. Value may be cleared with blank data value, "Account=". .TP \fIArrayTaskThrottle\fP= Specify the maximum number of tasks in a job array that can execute at the same time. Set the count to zero in order to eliminate any limit. The task throttle count for a job array is reported as part of its ArrayTaskId field, preceded by a percent sign. For example "ArrayTaskId=1\-10%2" indicates the maximum number of running tasks is limited to 2. 
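Since \fISPECIFICATION\fP is a sequence of Key=Value pairs (the same form as the slurm.conf file and the \fIshow\fP output), the parsing involved can be sketched in a few lines of Python. This is a hypothetical helper for illustration only, not part of scontrol:

```python
def parse_specification(spec):
    """Parse an update SPECIFICATION such as
    "JobId=1234 TimeLimit=2:00:00 Partition=debug" into a dict.
    A bare "Key=" (empty value) is kept as an empty string,
    which scontrol treats as "clear this field"."""
    fields = {}
    for token in spec.split():
        if '=' not in token:
            raise ValueError("expected Key=Value, got %r" % token)
        key, _, value = token.partition('=')
        fields[key] = value
    return fields
```

For example, parse_specification("JobId=1234 Account=") yields an entry mapping "Account" to the empty string, the blank data value used to clear a field.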
.TP \fIBurstBuffer\fP= Burst buffer specification to be changed for this job's resource use. Value may be cleared with blank data value, "BurstBuffer=". Format is burst buffer plugin specific. .TP \fIConn\-Type\fP= Reset the node connection type. Supported only on IBM BlueGene systems. Possible values are "MESH", "TORUS" and "NAV" (mesh else torus). .TP \fIContiguous\fP= Set the job's requirement for contiguous (consecutive) nodes to be allocated. Possible values are "YES" and "NO". Only the Slurm administrator or root can change this parameter. .TP \fIDependency\fP= Defer the job's initiation until the specified job dependency specification is satisfied. Cancel a dependency with an empty dependency_list (e.g. "Dependency="). <\fIdependency_list\fR> is of the form <\fItype:job_id[:job_id][,type:job_id[:job_id]]\fR>. Many jobs can share the same dependency and these jobs may even belong to different users. .PD .RS .TP \fBafter:job_id[:jobid...]\fR This job can begin execution after the specified jobs have begun execution. .TP \fBafterany:job_id[:jobid...]\fR This job can begin execution after the specified jobs have terminated. .TP \fBafternotok:job_id[:jobid...]\fR This job can begin execution after the specified jobs have terminated in some failed state (non-zero exit code, node failure, timed out, etc). .TP \fBafterok:job_id[:jobid...]\fR This job can begin execution after the specified jobs have successfully executed (ran to completion with an exit code of zero). .TP \fBsingleton\fR This job can begin execution after any previously launched jobs sharing the same job name and user have terminated. .RE .TP \fIEligibleTime\fP= See \fIStartTime\fP. .TP \fIExcNodeList\fP= Set the job's list of excluded nodes. Multiple node names may be specified using simple node range expressions (e.g. "lx[10\-20]"). Value may be cleared with blank data value, "ExcNodeList=". .TP \fIFeatures\fP= Set the job's required node features. 
The list of features may include multiple feature names separated by ampersand (AND) and/or vertical bar (OR) operators. For example: \fBFeatures="opteron&video"\fR or \fBFeatures="fast|faster"\fR. In the first example, only nodes having both the feature "opteron" AND the feature "video" will be used. There is no mechanism to specify that you want one node with feature "opteron" and another node with feature "video" in case no node has both features. If only one of a set of possible options should be used for all allocated nodes, then use the OR operator and enclose the options within square brackets. For example: "\fBFeatures=[rack1|rack2|rack3|rack4]"\fR might be used to specify that all nodes must be allocated on a single rack of the cluster, but any of those four racks can be used. A request can also specify the number of nodes needed with some feature by appending an asterisk and count after the feature name. For example "\fBFeatures=graphics*4"\fR indicates that at least four allocated nodes must have the feature "graphics." Constraints with node counts may only be combined with AND operators. Value may be cleared with blank data value, for example "Features=". .TP \fIGeometry\fP= Reset the required job geometry. On Blue Gene the value should be three digits separated by "x" or ",". The digits represent the allocation size in X, Y and Z dimensions (e.g. "2x3x4"). .TP \fIGres\fP= Specifies a comma delimited list of generic consumable resources. The format of each entry on the list is "name[:count[*cpu]]". The name is that of the consumable resource. The count is the number of those resources with a default value of 1. The specified resources will be allocated to the job on each node allocated unless "*cpu" is appended, in which case the resources will be allocated on a per cpu basis. The available generic consumable resources are configurable by the system administrator. 
A list of available generic consumable resources will be printed and the command will exit if the option argument is "help". Examples of use include "Gres=gpus:2*cpu,disk=40G" and "Gres=help". .TP \fIJobId\fP= Identify the job(s) to be updated. The job_list may be a comma separated list of job IDs. Either \fIJobId\fP or \fIJobName\fP is required. .TP \fILicenses\fP= Specification of licenses (or other resources available on all nodes of the cluster) as described in the salloc/sbatch/srun man pages. .TP \fIMinCPUsNode\fP= Set the job's minimum number of CPUs per node to the specified value. .TP \fIMinMemoryCPU\fP= Set the job's minimum real memory required per allocated CPU to the specified value. Either \fIMinMemoryCPU\fP or \fIMinMemoryNode\fP may be set, but not both. .TP \fIMinMemoryNode\fP= Set the job's minimum real memory required per node to the specified value. Either \fIMinMemoryCPU\fP or \fIMinMemoryNode\fP may be set, but not both. .TP \fIMinTmpDiskNode\fP= Set the job's minimum temporary disk space required per node to the specified value. Only the Slurm administrator or root can change this parameter. .TP \fIJobName\fP= Identify the name of jobs to be modified or set the job's name to the specified value. When used to identify jobs to be modified, all jobs belonging to all users are modified unless the \fIUserID\fP option is used to identify a specific user. Either \fIJobId\fP or \fIJobName\fP is required. .TP \fINice\fP[=delta] Adjust the job's priority by the specified value. Default value is 100. The adjustment range is from \-10000 (highest priority) to 10000 (lowest priority). Nice value changes are not additive, but overwrite any prior nice value and are applied to the job's base priority. Only privileged users (Slurm administrator or root) can specify a negative adjustment. .TP \fINodeList\fP= Change the nodes allocated to a running job to shrink its size. The specified list of nodes must be a subset of the nodes currently allocated to the job. 
Multiple node names may be specified using simple node range expressions (e.g. "lx[10\-20]"). After a job's allocation is reduced, subsequent \fBsrun\fR commands must explicitly specify node and task counts which are valid for the new allocation. .TP \fINumCPUs\fP=<min_count>[\-<max_count>] Set the job's minimum and optionally maximum count of CPUs to be allocated. .TP \fINumNodes\fP=<min_count>[\-<max_count>] Set the job's minimum and optionally maximum count of nodes to be allocated. If the job is already running, use this to specify a node count less than currently allocated and resources previously allocated to the job will be relinquished. After a job's allocation is reduced, subsequent \fBsrun\fR commands must explicitly specify node and task counts which are valid for the new allocation. Also see the \fINodeList\fP parameter above. .TP \fINumTasks\fP= Set the job's count of required tasks to the specified value. .TP \fIPartition\fP= Set the job's partition to the specified value. .TP \fIPriority\fP= Set the job's priority to the specified value. Note that a job priority of zero prevents the job from ever being scheduled. By setting a job's priority to zero it is held. Set the priority to a non\-zero value to permit it to run. Explicitly setting a job's priority clears any previously set nice value and removes the priority/multifactor plugin's ability to manage a job's priority. In order to restore the priority/multifactor plugin's ability to manage a job's priority, hold and then release the job. Only the Slurm administrator or root can increase a job's priority. .TP \fIQOS\fP= Set the job's QOS (Quality Of Service) to the specified value. Value may be cleared with blank data value, "QOS=". .TP \fIReqNodeList\fP= Set the job's list of required nodes. Multiple node names may be specified using simple node range expressions (e.g. "lx[10\-20]"). Value may be cleared with blank data value, "ReqNodeList=". 
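The \fINice\fP semantics described above (each update overwrites any prior nice value rather than accumulating, a positive value lowers priority, and a negative value raises it) can be sketched as follows. This is an illustration written for this document under the assumption that the effective priority is the base priority minus the nice value; it is not Slurm's actual priority/multifactor code:

```python
def effective_priority(base_priority, nice_history):
    """Apply a sequence of Nice updates to a job's base priority.
    Only the most recent nice value matters because updates
    overwrite rather than accumulate.  A positive nice lowers the
    priority; a negative nice (privileged users only) raises it."""
    nice = nice_history[-1] if nice_history else 0
    if not -10000 <= nice <= 10000:
        raise ValueError("nice adjustment out of range")
    # Assumed relationship for illustration: priority = base - nice
    return base_priority - nice
```

For example, applying Nice=100 and then Nice=200 to a base priority of 5000 leaves an effective priority of 4800, not 4700, because the second update replaces the first.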
.TP \fIRequeue\fP=<0|1> Stipulates whether a job should be requeued after a node failure: 0 for no, 1 for yes. .TP \fIReservationName\fP= Set the job's reservation to the specified value. Value may be cleared with blank data value, "ReservationName=". .TP \fIRotate\fP= Permit the job's geometry to be rotated. Possible values are "YES" and "NO". .TP \fIShared\fP= Set the job's ability to share nodes with other jobs. Possible values are "YES" and "NO". This option can only be changed for pending jobs. .TP \fIStartTime\fP= Set the job's earliest initiation time. It accepts times of the form \fIHH:MM:SS\fR to run a job at a specific time of day (seconds are optional). (If that time is already past, the next day is assumed.) You may also specify \fImidnight\fR, \fInoon\fR, \fIfika\fR (3 PM) or \fIteatime\fR (4 PM) and you can have a time\-of\-day suffixed with \fIAM\fR or \fIPM\fR for running in the morning or the evening. You can also say what day the job will be run, by specifying a date of the form \fIMMDDYY\fR or \fIMM/DD/YY\fR or \fIMM.DD.YY\fR, or a date and time as \fIYYYY\-MM\-DD[THH:MM[:SS]]\fR. You can also give times like \fInow + count time\-units\fR, where the time\-units can be \fIminutes\fR, \fIhours\fR, \fIdays\fR, or \fIweeks\fR and you can tell Slurm to run the job today with the keyword \fItoday\fR and to run the job tomorrow with the keyword \fItomorrow\fR. .RS .PP Notes on date/time specifications: \- although the 'seconds' field of the HH:MM:SS time specification is allowed by the code, note that the poll time of the Slurm scheduler is not precise enough to guarantee dispatch of the job on the exact second. The job will be eligible to start on the next poll following the specified time. The exact poll interval depends on the Slurm scheduler (e.g., 60 seconds with the default sched/builtin). \- if no time (HH:MM:SS) is specified, the default is (00:00:00). 
\- if a date is specified without a year (e.g., MM/DD) then the current year is assumed, unless the combination of MM/DD and HH:MM:SS has already passed for that year, in which case the next year is used. .RE .TP \fISwitches\fP=<count>[@<max\-time\-to\-wait>] When a tree topology is used, this defines the maximum count of switches desired for the job allocation. If Slurm finds an allocation containing more switches than the count specified, the job will remain pending until it either finds an allocation with the desired switch count or the time limit expires. By default there is no switch count limit and no time limit delay. Set the count to zero in order to clear any previously set count (disabling the limit). The job's maximum time delay may be limited by the system administrator using the \fBSchedulerParameters\fR configuration parameter with the \fBmax_switch_wait\fR parameter option. Also see \fIwait\-for\-switch\fP. .TP \fITimeLimit\fP=
": return posBgn = lineIn.find("--") if posBgn == -1: # 1st form posBgn = 5 posBgn = posBgn + 2 posEnd = lineIn.find("",posBgn) if posEnd == -1: # poorly constructed return id_name = lineIn[posBgn:posEnd] id_name = id_name.replace(' ','-') if id_name in ids: ids[id_name] += 1 id_name += "_" + str(ids[id_name]) else: ids[id_name] = 0 html.write('\n') return def llnl_references(line): manStr = "Refer to mc_support.html" htmlStr = 'Refer to mc_support' lineFix = line.replace(manStr,htmlStr) if lineFix != line: return lineFix manStr = 'http://slurm.schedmd.com/mc_support.html' htmlStr = 'the mc_support document' lineFix = line.replace(manStr,htmlStr) if lineFix != line: return lineFix manStr = 'http://slurm.schedmd.com/dist_plane.html.' htmlStr = 'the dist_plane document' lineFix = line.replace(manStr,htmlStr) if lineFix != line: return lineFix manStr = '<http://slurm.schedmd.com/mpi_guide.html>' htmlStr = 'mpi_guide' lineFix = line.replace(manStr,htmlStr) if lineFix != line: return lineFix manStr = '(http://slurm.schedmd.com/power_save.html).' 
htmlStr = 'power_save' lineFix = line.replace(manStr,htmlStr) if lineFix != line: return lineFix manStr = 'http://slurm.schedmd.com/cons_res.html' htmlStr = 'cons_res' lineFix = line.replace(manStr,htmlStr) if lineFix != line: return lineFix manStr = 'http://slurm.schedmd.com/cons_res_share.html' htmlStr = 'cons_res_share' lineFix = line.replace(manStr,htmlStr) if lineFix != line: return lineFix manStr = 'http://slurm.schedmd.com/gang_scheduling.html' htmlStr = 'gang_scheduling' lineFix = line.replace(manStr,htmlStr) if lineFix != line: return lineFix manStr = 'http://slurm.schedmd.com/preempt.html' htmlStr = 'preempt' lineFix = line.replace(manStr,htmlStr) if lineFix != line: return lineFix return line def relative_reference(lineIn): fullRef = "/cgi-bin/man/man2html" lenRef = len(fullRef) refAnchor="man2html " while posHREF != -1: posRefAnchor = lineIn.find(refAnchor,cursor) lineOt = lineOt + lineIn[cursor:posRefAnchor+lenRefAnchor] cursor = posHREF + lenRef + 3 lineOt = lineOt + '"' posQuote = lineIn.find('"',cursor) lineOt = lineOt + lineIn[cursor:posQuote] + ".html" cursor = posQuote posHREF = lineIn.find(fullRef,cursor) lineOt = lineOt + lineIn[cursor:] return lineOt def include_virtual(matchobj): global dirname if dirname: filename = dirname + '/' + matchobj.group(2) else: filename = matchobj.group(2) if os.access(filename, os.F_OK): #print 'Including file', filename lines = open(filename, 'r').read() return lines else: return matchobj.group(0) def url_rewrite(matchobj): global dirname if dirname: localpath = dirname + '/' + matchobj.group(2) else: localpath = matchobj.group(2) if matchobj.group(2)[-6:] == '.shtml' and os.access(localpath, os.F_OK): location = matchobj.group(2) if matchobj.group(3) is None: newname = location[:-6] + '.html' else: newname = location[:-6] + '.html' + matchobj.group(3) #print 'Rewriting', location, 'to', newname return matchobj.group(1) + newname + matchobj.group(4) else: return matchobj.group(0) def version_rewrite(matchobj): 
global version return version files = [] version = sys.argv[1] for f in sys.argv[4:]: posLastDot = f.rfind(".") mhtmlname = f[:posLastDot] + ".mhtml" cmd = "man2html " + f + "> " + mhtmlname os.system(cmd) print(">>>>>>> " + mhtmlname) files.append(mhtmlname) for filename in files: dirname, basefilename = os.path.split(filename) newfilename = basefilename[:-6] + '.html' print('Converting', filename, '->', newfilename) shtml = codecs.open(filename, 'r', encoding='utf-8') html = codecs.open(newfilename, 'w', encoding='utf-8') lines = open(sys.argv[2], 'r').read() lines = lines.replace(".shtml",".html") lines = version_regex.sub(version_rewrite, lines) html.write(lines) # html.write() for line in shtml.readlines(): # Remove html header/footer created by man2html if line == "Content-type: text/html\n": continue if line == "Content-type: text/html; charset=UTF-8\n": continue if line[:6] == "": continue if line[:7] == "": continue if line[:7] == "": continue if line[:7] == "": continue line = include_regex.sub(include_virtual, line) # Special case some html references line = llnl_references(line) #insert tags for some options insert_tag(html, line) # Make man2html links relative ones line = relative_reference(line) line = url_regex.sub(url_rewrite, line) html.write(line) lines = open(sys.argv[3], 'r').read() lines = lines.replace(".shtml",".html") lines = version_regex.sub(version_rewrite, lines) html.write(lines) # html.write() html.close() shtml.close() os.remove(filename) slurm-slurm-15-08-7-1/doc/man/man3/000077500000000000000000000000001265000126300165035ustar00rootroot00000000000000slurm-slurm-15-08-7-1/doc/man/man3/Makefile.am000066400000000000000000000077351265000126300205530ustar00rootroot00000000000000man3_MANS = slurm_hostlist_create.3 \ slurm_hostlist_destroy.3 \ slurm_hostlist_shift.3 \ slurm_allocate_resources.3 \ slurm_allocate_resources_blocking.3 \ slurm_allocation_lookup.3 \ slurm_allocation_lookup_lite.3 \ slurm_allocation_msg_thr_create.3 \ 
slurm_allocation_msg_thr_destroy.3 \ slurm_api_version.3 \ slurm_checkpoint.3 \ slurm_checkpoint_able.3 \ slurm_checkpoint_complete.3 \ slurm_checkpoint_create.3 \ slurm_checkpoint_disable.3 \ slurm_checkpoint_enable.3 \ slurm_checkpoint_error.3 \ slurm_checkpoint_failed.3 \ slurm_checkpoint_restart.3 \ slurm_checkpoint_task_complete.3 \ slurm_checkpoint_tasks.3 \ slurm_checkpoint_vacate.3 \ slurm_clear_trigger.3 \ slurm_complete_job.3 \ slurm_confirm_allocation.3 \ slurm_create_partition.3 \ slurm_create_reservation.3 \ slurm_delete_partition.3 \ slurm_delete_reservation.3 \ slurm_free_ctl_conf.3 \ slurm_free_front_end_info_msg.3 \ slurm_free_job_info_msg.3 \ slurm_free_job_alloc_info_response_msg.3 \ slurm_free_job_array_resp.3 \ slurm_free_job_step_create_response_msg.3 \ slurm_free_job_step_info_response_msg.3 \ slurm_free_node_info.3 \ slurm_free_node_info_msg.3 \ slurm_free_partition_info.3 \ slurm_free_partition_info_msg.3 \ slurm_free_reservation_info_msg.3 \ slurm_free_resource_allocation_response_msg.3 \ slurm_free_slurmd_status.3 \ slurm_free_submit_response_response_msg.3 \ slurm_free_trigger_msg.3 \ slurm_get_end_time.3 \ slurm_get_errno.3 \ slurm_get_job_steps.3 \ slurm_get_rem_time.3 \ slurm_get_select_jobinfo.3 \ slurm_get_triggers.3 \ slurm_init_update_front_end_msg.3 \ slurm_init_job_desc_msg.3 \ slurm_init_part_desc_msg.3 \ slurm_init_resv_desc_msg.3 \ slurm_init_trigger_msg.3 \ slurm_init_update_node_msg.3 \ slurm_init_update_step_msg.3 \ slurm_job_cpus_allocated_on_node.3 \ slurm_job_cpus_allocated_on_node_id.3 \ slurm_job_step_create.3 \ slurm_job_step_launch_t_init.3 \ slurm_job_step_layout_get.3 \ slurm_job_step_layout_free.3 \ slurm_job_will_run.3 \ slurm_job_will_run2.3 \ slurm_jobinfo_ctx_get.3 \ slurm_kill_job.3 \ slurm_kill_job_step.3 \ slurm_load_ctl_conf.3 \ slurm_load_front_end.3 \ slurm_load_job.3 \ slurm_load_jobs.3 \ slurm_load_job_user.3 \ slurm_load_node.3 \ slurm_load_node_single.3 \ slurm_load_partitions.3 \ 
slurm_load_reservations.3 \ slurm_load_slurmd_status.3 \ slurm_notify_job.3 \ slurm_perror.3 \ slurm_pid2jobid.3 \ slurm_ping.3 \ slurm_print_ctl_conf.3 \ slurm_print_front_end_info_msg.3 \ slurm_print_front_end_table.3 \ slurm_print_job_info.3 \ slurm_print_job_info_msg.3 \ slurm_print_job_step_info.3 \ slurm_print_job_step_info_msg.3 \ slurm_print_node_info_msg.3 \ slurm_print_node_table.3 \ slurm_print_partition_info.3 \ slurm_print_partition_info_msg.3 \ slurm_print_reservation_info.3 \ slurm_print_reservation_info_msg.3 \ slurm_print_slurmd_status.3 \ slurm_read_hostfile.3 \ slurm_reconfigure.3 \ slurm_resume.3 \ slurm_resume2.3 \ slurm_requeue.3 \ slurm_requeue2.3 \ slurm_set_debug_level.3 \ slurm_set_trigger.3 \ slurm_shutdown.3 \ slurm_signal_job.3 \ slurm_signal_job_step.3 \ slurm_slurmd_status.3 \ slurm_sprint_front_end_table.3 \ slurm_sprint_job_info.3 \ slurm_sprint_job_step_info.3 \ slurm_sprint_node_table.3 \ slurm_sprint_partition_info.3 \ slurm_sprint_reservation_info.3 \ slurm_step_ctx_create.3 \ slurm_step_ctx_create_no_alloc.3 \ slurm_step_ctx_daemon_per_node_hack.3 \ slurm_step_ctx_destroy.3 \ slurm_step_ctx_params_t_init.3 \ slurm_step_ctx_get.3 \ slurm_step_launch.3 \ slurm_step_launch_fwd_signal.3 \ slurm_step_launch_abort.3 \ slurm_step_launch_wait_finish.3 \ slurm_step_launch_wait_start.3 \ slurm_strerror.3 \ slurm_submit_batch_job.3 \ slurm_suspend.3 \ slurm_suspend2.3 \ slurm_takeover.3 \ slurm_terminate_job.3 \ slurm_terminate_job_step.3 \ slurm_update_front_end.3 \ slurm_update_job.3 \ slurm_update_job2.3 \ slurm_update_node.3 \ slurm_update_partition.3 \ slurm_update_reservation.3 \ slurm_update_step.3 EXTRA_DIST = $(man3_MANS) slurm-slurm-15-08-7-1/doc/man/man3/Makefile.in000066400000000000000000000575431265000126300205660ustar00rootroot00000000000000# Makefile.in generated by automake 1.14.1 from Makefile.am. # @configure_input@ # Copyright (C) 1994-2013 Free Software Foundation, Inc. 
# This Makefile.in is free software; the Free Software Foundation # gives unlimited permission to copy and/or distribute it, # with or without modifications, as long as this notice is preserved. # This program is distributed in the hope that it will be useful, # but WITHOUT ANY WARRANTY, to the extent permitted by law; without # even the implied warranty of MERCHANTABILITY or FITNESS FOR A # PARTICULAR PURPOSE. @SET_MAKE@ VPATH = @srcdir@ am__is_gnu_make = test -n '$(MAKEFILE_LIST)' && test -n '$(MAKELEVEL)' am__make_running_with_option = \ case $${target_option-} in \ ?) ;; \ *) echo "am__make_running_with_option: internal error: invalid" \ "target option '$${target_option-}' specified" >&2; \ exit 1;; \ esac; \ has_opt=no; \ sane_makeflags=$$MAKEFLAGS; \ if $(am__is_gnu_make); then \ sane_makeflags=$$MFLAGS; \ else \ case $$MAKEFLAGS in \ *\\[\ \ ]*) \ bs=\\; \ sane_makeflags=`printf '%s\n' "$$MAKEFLAGS" \ | sed "s/$$bs$$bs[$$bs $$bs ]*//g"`;; \ esac; \ fi; \ skip_next=no; \ strip_trailopt () \ { \ flg=`printf '%s\n' "$$flg" | sed "s/$$1.*$$//"`; \ }; \ for flg in $$sane_makeflags; do \ test $$skip_next = yes && { skip_next=no; continue; }; \ case $$flg in \ *=*|--*) continue;; \ -*I) strip_trailopt 'I'; skip_next=yes;; \ -*I?*) strip_trailopt 'I';; \ -*O) strip_trailopt 'O'; skip_next=yes;; \ -*O?*) strip_trailopt 'O';; \ -*l) strip_trailopt 'l'; skip_next=yes;; \ -*l?*) strip_trailopt 'l';; \ -[dEDm]) skip_next=yes;; \ -[JT]) skip_next=yes;; \ esac; \ case $$flg in \ *$$target_option*) has_opt=yes; break;; \ esac; \ done; \ test $$has_opt = yes am__make_dryrun = (target_option=n; $(am__make_running_with_option)) am__make_keepgoing = (target_option=k; $(am__make_running_with_option)) pkgdatadir = $(datadir)/@PACKAGE@ pkgincludedir = $(includedir)/@PACKAGE@ pkglibdir = $(libdir)/@PACKAGE@ pkglibexecdir = $(libexecdir)/@PACKAGE@ am__cd = CDPATH="$${ZSH_VERSION+.}$(PATH_SEPARATOR)" && cd install_sh_DATA = $(install_sh) -c -m 644 install_sh_PROGRAM = $(install_sh) 
-c install_sh_SCRIPT = $(install_sh) -c INSTALL_HEADER = $(INSTALL_DATA) transform = $(program_transform_name) NORMAL_INSTALL = : PRE_INSTALL = : POST_INSTALL = : NORMAL_UNINSTALL = : PRE_UNINSTALL = : POST_UNINSTALL = : build_triplet = @build@ host_triplet = @host@ target_triplet = @target@ subdir = doc/man/man3 DIST_COMMON = $(srcdir)/Makefile.in $(srcdir)/Makefile.am ACLOCAL_M4 = $(top_srcdir)/aclocal.m4 am__aclocal_m4_deps = $(top_srcdir)/auxdir/ax_lib_hdf5.m4 \ $(top_srcdir)/auxdir/ax_pthread.m4 \ $(top_srcdir)/auxdir/libtool.m4 \ $(top_srcdir)/auxdir/ltoptions.m4 \ $(top_srcdir)/auxdir/ltsugar.m4 \ $(top_srcdir)/auxdir/ltversion.m4 \ $(top_srcdir)/auxdir/lt~obsolete.m4 \ $(top_srcdir)/auxdir/slurm.m4 \ $(top_srcdir)/auxdir/x_ac__system_configuration.m4 \ $(top_srcdir)/auxdir/x_ac_affinity.m4 \ $(top_srcdir)/auxdir/x_ac_aix.m4 \ $(top_srcdir)/auxdir/x_ac_blcr.m4 \ $(top_srcdir)/auxdir/x_ac_bluegene.m4 \ $(top_srcdir)/auxdir/x_ac_cflags.m4 \ $(top_srcdir)/auxdir/x_ac_cray.m4 \ $(top_srcdir)/auxdir/x_ac_curl.m4 \ $(top_srcdir)/auxdir/x_ac_databases.m4 \ $(top_srcdir)/auxdir/x_ac_debug.m4 \ $(top_srcdir)/auxdir/x_ac_dlfcn.m4 \ $(top_srcdir)/auxdir/x_ac_env.m4 \ $(top_srcdir)/auxdir/x_ac_freeipmi.m4 \ $(top_srcdir)/auxdir/x_ac_gpl_licensed.m4 \ $(top_srcdir)/auxdir/x_ac_hwloc.m4 \ $(top_srcdir)/auxdir/x_ac_iso.m4 \ $(top_srcdir)/auxdir/x_ac_json.m4 \ $(top_srcdir)/auxdir/x_ac_lua.m4 \ $(top_srcdir)/auxdir/x_ac_man2html.m4 \ $(top_srcdir)/auxdir/x_ac_munge.m4 \ $(top_srcdir)/auxdir/x_ac_ncurses.m4 \ $(top_srcdir)/auxdir/x_ac_netloc.m4 \ $(top_srcdir)/auxdir/x_ac_nrt.m4 \ $(top_srcdir)/auxdir/x_ac_ofed.m4 \ $(top_srcdir)/auxdir/x_ac_pam.m4 \ $(top_srcdir)/auxdir/x_ac_printf_null.m4 \ $(top_srcdir)/auxdir/x_ac_ptrace.m4 \ $(top_srcdir)/auxdir/x_ac_readline.m4 \ $(top_srcdir)/auxdir/x_ac_rrdtool.m4 \ $(top_srcdir)/auxdir/x_ac_setpgrp.m4 \ $(top_srcdir)/auxdir/x_ac_setproctitle.m4 \ $(top_srcdir)/auxdir/x_ac_sgi_job.m4 \ $(top_srcdir)/auxdir/x_ac_slurm_ssl.m4 \ 
$(top_srcdir)/auxdir/x_ac_sun_const.m4 \ $(top_srcdir)/configure.ac am__configure_deps = $(am__aclocal_m4_deps) $(CONFIGURE_DEPENDENCIES) \ $(ACLOCAL_M4) mkinstalldirs = $(install_sh) -d CONFIG_HEADER = $(top_builddir)/config.h $(top_builddir)/slurm/slurm.h CONFIG_CLEAN_FILES = CONFIG_CLEAN_VPATH_FILES = AM_V_P = $(am__v_P_@AM_V@) am__v_P_ = $(am__v_P_@AM_DEFAULT_V@) am__v_P_0 = false am__v_P_1 = : AM_V_GEN = $(am__v_GEN_@AM_V@) am__v_GEN_ = $(am__v_GEN_@AM_DEFAULT_V@) am__v_GEN_0 = @echo " GEN " $@; am__v_GEN_1 = AM_V_at = $(am__v_at_@AM_V@) am__v_at_ = $(am__v_at_@AM_DEFAULT_V@) am__v_at_0 = @ am__v_at_1 = SOURCES = DIST_SOURCES = am__can_run_installinfo = \ case $$AM_UPDATE_INFO_DIR in \ n|no|NO) false;; \ *) (install-info --version) >/dev/null 2>&1;; \ esac am__vpath_adj_setup = srcdirstrip=`echo "$(srcdir)" | sed 's|.|.|g'`; am__vpath_adj = case $$p in \ $(srcdir)/*) f=`echo "$$p" | sed "s|^$$srcdirstrip/||"`;; \ *) f=$$p;; \ esac; am__strip_dir = f=`echo $$p | sed -e 's|^.*/||'`; am__install_max = 40 am__nobase_strip_setup = \ srcdirstrip=`echo "$(srcdir)" | sed 's/[].[^$$\\*|]/\\\\&/g'` am__nobase_strip = \ for p in $$list; do echo "$$p"; done | sed -e "s|$$srcdirstrip/||" am__nobase_list = $(am__nobase_strip_setup); \ for p in $$list; do echo "$$p $$p"; done | \ sed "s| $$srcdirstrip/| |;"' / .*\//!s/ .*/ ./; s,\( .*\)/[^/]*$$,\1,' | \ $(AWK) 'BEGIN { files["."] = "" } { files[$$2] = files[$$2] " " $$1; \ if (++n[$$2] == $(am__install_max)) \ { print $$2, files[$$2]; n[$$2] = 0; files[$$2] = "" } } \ END { for (dir in files) print dir, files[dir] }' am__base_list = \ sed '$$!N;$$!N;$$!N;$$!N;$$!N;$$!N;$$!N;s/\n/ /g' | \ sed '$$!N;$$!N;$$!N;$$!N;s/\n/ /g' am__uninstall_files_from_dir = { \ test -z "$$files" \ || { test ! -d "$$dir" && test ! -f "$$dir" && test ! 
-r "$$dir"; } \ || { echo " ( cd '$$dir' && rm -f" $$files ")"; \ $(am__cd) "$$dir" && rm -f $$files; }; \ } man3dir = $(mandir)/man3 am__installdirs = "$(DESTDIR)$(man3dir)" NROFF = nroff MANS = $(man3_MANS) am__tagged_files = $(HEADERS) $(SOURCES) $(TAGS_FILES) $(LISP) DISTFILES = $(DIST_COMMON) $(DIST_SOURCES) $(TEXINFOS) $(EXTRA_DIST) ACLOCAL = @ACLOCAL@ AMTAR = @AMTAR@ AM_DEFAULT_VERBOSITY = @AM_DEFAULT_VERBOSITY@ AR = @AR@ AUTHD_CFLAGS = @AUTHD_CFLAGS@ AUTHD_LIBS = @AUTHD_LIBS@ AUTOCONF = @AUTOCONF@ AUTOHEADER = @AUTOHEADER@ AUTOMAKE = @AUTOMAKE@ AWK = @AWK@ BGL_LOADED = @BGL_LOADED@ BGQ_LOADED = @BGQ_LOADED@ BG_INCLUDES = @BG_INCLUDES@ BG_LDFLAGS = @BG_LDFLAGS@ BG_L_P_LOADED = @BG_L_P_LOADED@ BLCR_CPPFLAGS = @BLCR_CPPFLAGS@ BLCR_HOME = @BLCR_HOME@ BLCR_LDFLAGS = @BLCR_LDFLAGS@ BLCR_LIBS = @BLCR_LIBS@ BLUEGENE_LOADED = @BLUEGENE_LOADED@ CC = @CC@ CCDEPMODE = @CCDEPMODE@ CFLAGS = @CFLAGS@ CHECK_CFLAGS = @CHECK_CFLAGS@ CHECK_LIBS = @CHECK_LIBS@ CMD_LDFLAGS = @CMD_LDFLAGS@ CPP = @CPP@ CPPFLAGS = @CPPFLAGS@ CRAY_JOB_CPPFLAGS = @CRAY_JOB_CPPFLAGS@ CRAY_JOB_LDFLAGS = @CRAY_JOB_LDFLAGS@ CRAY_SELECT_CPPFLAGS = @CRAY_SELECT_CPPFLAGS@ CRAY_SELECT_LDFLAGS = @CRAY_SELECT_LDFLAGS@ CRAY_SWITCH_CPPFLAGS = @CRAY_SWITCH_CPPFLAGS@ CRAY_SWITCH_LDFLAGS = @CRAY_SWITCH_LDFLAGS@ CRAY_TASK_CPPFLAGS = @CRAY_TASK_CPPFLAGS@ CRAY_TASK_LDFLAGS = @CRAY_TASK_LDFLAGS@ CXX = @CXX@ CXXCPP = @CXXCPP@ CXXDEPMODE = @CXXDEPMODE@ CXXFLAGS = @CXXFLAGS@ CYGPATH_W = @CYGPATH_W@ DATAWARP_CPPFLAGS = @DATAWARP_CPPFLAGS@ DATAWARP_LDFLAGS = @DATAWARP_LDFLAGS@ DEFS = @DEFS@ DEPDIR = @DEPDIR@ DLLTOOL = @DLLTOOL@ DL_LIBS = @DL_LIBS@ DSYMUTIL = @DSYMUTIL@ DUMPBIN = @DUMPBIN@ ECHO_C = @ECHO_C@ ECHO_N = @ECHO_N@ ECHO_T = @ECHO_T@ EGREP = @EGREP@ EXEEXT = @EXEEXT@ FGREP = @FGREP@ FREEIPMI_CPPFLAGS = @FREEIPMI_CPPFLAGS@ FREEIPMI_LDFLAGS = @FREEIPMI_LDFLAGS@ FREEIPMI_LIBS = @FREEIPMI_LIBS@ GLIB_CFLAGS = @GLIB_CFLAGS@ GLIB_COMPILE_RESOURCES = @GLIB_COMPILE_RESOURCES@ GLIB_GENMARSHAL = @GLIB_GENMARSHAL@ GLIB_LIBS = 
@GLIB_LIBS@ GLIB_MKENUMS = @GLIB_MKENUMS@ GOBJECT_QUERY = @GOBJECT_QUERY@ GREP = @GREP@ GTK_CFLAGS = @GTK_CFLAGS@ GTK_LIBS = @GTK_LIBS@ H5CC = @H5CC@ H5FC = @H5FC@ HAVEMYSQLCONFIG = @HAVEMYSQLCONFIG@ HAVE_AIX = @HAVE_AIX@ HAVE_MAN2HTML = @HAVE_MAN2HTML@ HAVE_NRT = @HAVE_NRT@ HAVE_OPENSSL = @HAVE_OPENSSL@ HAVE_SOME_CURSES = @HAVE_SOME_CURSES@ HDF5_CC = @HDF5_CC@ HDF5_CFLAGS = @HDF5_CFLAGS@ HDF5_CPPFLAGS = @HDF5_CPPFLAGS@ HDF5_FC = @HDF5_FC@ HDF5_FFLAGS = @HDF5_FFLAGS@ HDF5_FLIBS = @HDF5_FLIBS@ HDF5_LDFLAGS = @HDF5_LDFLAGS@ HDF5_LIBS = @HDF5_LIBS@ HDF5_VERSION = @HDF5_VERSION@ HWLOC_CPPFLAGS = @HWLOC_CPPFLAGS@ HWLOC_LDFLAGS = @HWLOC_LDFLAGS@ HWLOC_LIBS = @HWLOC_LIBS@ INSTALL = @INSTALL@ INSTALL_DATA = @INSTALL_DATA@ INSTALL_PROGRAM = @INSTALL_PROGRAM@ INSTALL_SCRIPT = @INSTALL_SCRIPT@ INSTALL_STRIP_PROGRAM = @INSTALL_STRIP_PROGRAM@ JSON_CPPFLAGS = @JSON_CPPFLAGS@ JSON_LDFLAGS = @JSON_LDFLAGS@ LD = @LD@ LDFLAGS = @LDFLAGS@ LIBCURL = @LIBCURL@ LIBCURL_CPPFLAGS = @LIBCURL_CPPFLAGS@ LIBOBJS = @LIBOBJS@ LIBS = @LIBS@ LIBTOOL = @LIBTOOL@ LIB_LDFLAGS = @LIB_LDFLAGS@ LIPO = @LIPO@ LN_S = @LN_S@ LTLIBOBJS = @LTLIBOBJS@ MAINT = @MAINT@ MAKEINFO = @MAKEINFO@ MANIFEST_TOOL = @MANIFEST_TOOL@ MKDIR_P = @MKDIR_P@ MUNGE_CPPFLAGS = @MUNGE_CPPFLAGS@ MUNGE_DIR = @MUNGE_DIR@ MUNGE_LDFLAGS = @MUNGE_LDFLAGS@ MUNGE_LIBS = @MUNGE_LIBS@ MYSQL_CFLAGS = @MYSQL_CFLAGS@ MYSQL_LIBS = @MYSQL_LIBS@ NCURSES = @NCURSES@ NETLOC_CPPFLAGS = @NETLOC_CPPFLAGS@ NETLOC_LDFLAGS = @NETLOC_LDFLAGS@ NETLOC_LIBS = @NETLOC_LIBS@ NM = @NM@ NMEDIT = @NMEDIT@ NRT_CPPFLAGS = @NRT_CPPFLAGS@ NUMA_LIBS = @NUMA_LIBS@ OBJDUMP = @OBJDUMP@ OBJEXT = @OBJEXT@ OFED_CPPFLAGS = @OFED_CPPFLAGS@ OFED_LDFLAGS = @OFED_LDFLAGS@ OFED_LIBS = @OFED_LIBS@ OTOOL = @OTOOL@ OTOOL64 = @OTOOL64@ PACKAGE = @PACKAGE@ PACKAGE_BUGREPORT = @PACKAGE_BUGREPORT@ PACKAGE_NAME = @PACKAGE_NAME@ PACKAGE_STRING = @PACKAGE_STRING@ PACKAGE_TARNAME = @PACKAGE_TARNAME@ PACKAGE_URL = @PACKAGE_URL@ PACKAGE_VERSION = @PACKAGE_VERSION@ PAM_DIR = @PAM_DIR@ 
PAM_LIBS = @PAM_LIBS@ PATH_SEPARATOR = @PATH_SEPARATOR@ PKG_CONFIG = @PKG_CONFIG@ PKG_CONFIG_LIBDIR = @PKG_CONFIG_LIBDIR@ PKG_CONFIG_PATH = @PKG_CONFIG_PATH@ PROCTRACKDIR = @PROCTRACKDIR@ PROJECT = @PROJECT@ PTHREAD_CC = @PTHREAD_CC@ PTHREAD_CFLAGS = @PTHREAD_CFLAGS@ PTHREAD_LIBS = @PTHREAD_LIBS@ RANLIB = @RANLIB@ READLINE_LIBS = @READLINE_LIBS@ REAL_BGQ_LOADED = @REAL_BGQ_LOADED@ REAL_BG_L_P_LOADED = @REAL_BG_L_P_LOADED@ RELEASE = @RELEASE@ RRDTOOL_CPPFLAGS = @RRDTOOL_CPPFLAGS@ RRDTOOL_LDFLAGS = @RRDTOOL_LDFLAGS@ RRDTOOL_LIBS = @RRDTOOL_LIBS@ RUNJOB_LDFLAGS = @RUNJOB_LDFLAGS@ SED = @SED@ SEMAPHORE_LIBS = @SEMAPHORE_LIBS@ SEMAPHORE_SOURCES = @SEMAPHORE_SOURCES@ SET_MAKE = @SET_MAKE@ SHELL = @SHELL@ SLEEP_CMD = @SLEEP_CMD@ SLURMCTLD_PORT = @SLURMCTLD_PORT@ SLURMCTLD_PORT_COUNT = @SLURMCTLD_PORT_COUNT@ SLURMDBD_PORT = @SLURMDBD_PORT@ SLURMD_PORT = @SLURMD_PORT@ SLURM_API_AGE = @SLURM_API_AGE@ SLURM_API_CURRENT = @SLURM_API_CURRENT@ SLURM_API_MAJOR = @SLURM_API_MAJOR@ SLURM_API_REVISION = @SLURM_API_REVISION@ SLURM_API_VERSION = @SLURM_API_VERSION@ SLURM_MAJOR = @SLURM_MAJOR@ SLURM_MICRO = @SLURM_MICRO@ SLURM_MINOR = @SLURM_MINOR@ SLURM_PREFIX = @SLURM_PREFIX@ SLURM_VERSION_NUMBER = @SLURM_VERSION_NUMBER@ SLURM_VERSION_STRING = @SLURM_VERSION_STRING@ SO_LDFLAGS = @SO_LDFLAGS@ SSL_CPPFLAGS = @SSL_CPPFLAGS@ SSL_LDFLAGS = @SSL_LDFLAGS@ SSL_LIBS = @SSL_LIBS@ STRIP = @STRIP@ SUCMD = @SUCMD@ UTIL_LIBS = @UTIL_LIBS@ VERSION = @VERSION@ _libcurl_config = @_libcurl_config@ abs_builddir = @abs_builddir@ abs_srcdir = @abs_srcdir@ abs_top_builddir = @abs_top_builddir@ abs_top_srcdir = @abs_top_srcdir@ ac_ct_AR = @ac_ct_AR@ ac_ct_CC = @ac_ct_CC@ ac_ct_CXX = @ac_ct_CXX@ ac_ct_DUMPBIN = @ac_ct_DUMPBIN@ ac_have_man2html = @ac_have_man2html@ am__include = @am__include@ am__leading_dot = @am__leading_dot@ am__quote = @am__quote@ am__tar = @am__tar@ am__untar = @am__untar@ ax_pthread_config = @ax_pthread_config@ bindir = @bindir@ build = @build@ build_alias = @build_alias@ build_cpu = 
@build_cpu@ build_os = @build_os@ build_vendor = @build_vendor@ builddir = @builddir@ datadir = @datadir@ datarootdir = @datarootdir@ docdir = @docdir@ dvidir = @dvidir@ exec_prefix = @exec_prefix@ host = @host@ host_alias = @host_alias@ host_cpu = @host_cpu@ host_os = @host_os@ host_vendor = @host_vendor@ htmldir = @htmldir@ includedir = @includedir@ infodir = @infodir@ install_sh = @install_sh@ libdir = @libdir@ libexecdir = @libexecdir@ localedir = @localedir@ localstatedir = @localstatedir@ lua_CFLAGS = @lua_CFLAGS@ lua_LIBS = @lua_LIBS@ mandir = @mandir@ mkdir_p = @mkdir_p@ oldincludedir = @oldincludedir@ pdfdir = @pdfdir@ prefix = @prefix@ program_transform_name = @program_transform_name@ psdir = @psdir@ sbindir = @sbindir@ sharedstatedir = @sharedstatedir@ srcdir = @srcdir@ sysconfdir = @sysconfdir@ target = @target@ target_alias = @target_alias@ target_cpu = @target_cpu@ target_os = @target_os@ target_vendor = @target_vendor@ top_build_prefix = @top_build_prefix@ top_builddir = @top_builddir@ top_srcdir = @top_srcdir@ man3_MANS = slurm_hostlist_create.3 \ slurm_hostlist_destroy.3 \ slurm_hostlist_shift.3 \ slurm_allocate_resources.3 \ slurm_allocate_resources_blocking.3 \ slurm_allocation_lookup.3 \ slurm_allocation_lookup_lite.3 \ slurm_allocation_msg_thr_create.3 \ slurm_allocation_msg_thr_destroy.3 \ slurm_api_version.3 \ slurm_checkpoint.3 \ slurm_checkpoint_able.3 \ slurm_checkpoint_complete.3 \ slurm_checkpoint_create.3 \ slurm_checkpoint_disable.3 \ slurm_checkpoint_enable.3 \ slurm_checkpoint_error.3 \ slurm_checkpoint_failed.3 \ slurm_checkpoint_restart.3 \ slurm_checkpoint_task_complete.3 \ slurm_checkpoint_tasks.3 \ slurm_checkpoint_vacate.3 \ slurm_clear_trigger.3 \ slurm_complete_job.3 \ slurm_confirm_allocation.3 \ slurm_create_partition.3 \ slurm_create_reservation.3 \ slurm_delete_partition.3 \ slurm_delete_reservation.3 \ slurm_free_ctl_conf.3 \ slurm_free_front_end_info_msg.3 \ slurm_free_job_info_msg.3 \ 
slurm_free_job_alloc_info_response_msg.3 \ slurm_free_job_array_resp.3 \ slurm_free_job_step_create_response_msg.3 \ slurm_free_job_step_info_response_msg.3 \ slurm_free_node_info.3 \ slurm_free_node_info_msg.3 \ slurm_free_partition_info.3 \ slurm_free_partition_info_msg.3 \ slurm_free_reservation_info_msg.3 \ slurm_free_resource_allocation_response_msg.3 \ slurm_free_slurmd_status.3 \ slurm_free_submit_response_response_msg.3 \ slurm_free_trigger_msg.3 \ slurm_get_end_time.3 \ slurm_get_errno.3 \ slurm_get_job_steps.3 \ slurm_get_rem_time.3 \ slurm_get_select_jobinfo.3 \ slurm_get_triggers.3 \ slurm_init_update_front_end_msg.3 \ slurm_init_job_desc_msg.3 \ slurm_init_part_desc_msg.3 \ slurm_init_resv_desc_msg.3 \ slurm_init_trigger_msg.3 \ slurm_init_update_node_msg.3 \ slurm_init_update_step_msg.3 \ slurm_job_cpus_allocated_on_node.3 \ slurm_job_cpus_allocated_on_node_id.3 \ slurm_job_step_create.3 \ slurm_job_step_launch_t_init.3 \ slurm_job_step_layout_get.3 \ slurm_job_step_layout_free.3 \ slurm_job_will_run.3 \ slurm_job_will_run2.3 \ slurm_jobinfo_ctx_get.3 \ slurm_kill_job.3 \ slurm_kill_job_step.3 \ slurm_load_ctl_conf.3 \ slurm_load_front_end.3 \ slurm_load_job.3 \ slurm_load_jobs.3 \ slurm_load_job_user.3 \ slurm_load_node.3 \ slurm_load_node_single.3 \ slurm_load_partitions.3 \ slurm_load_reservations.3 \ slurm_load_slurmd_status.3 \ slurm_notify_job.3 \ slurm_perror.3 \ slurm_pid2jobid.3 \ slurm_ping.3 \ slurm_print_ctl_conf.3 \ slurm_print_front_end_info_msg.3 \ slurm_print_front_end_table.3 \ slurm_print_job_info.3 \ slurm_print_job_info_msg.3 \ slurm_print_job_step_info.3 \ slurm_print_job_step_info_msg.3 \ slurm_print_node_info_msg.3 \ slurm_print_node_table.3 \ slurm_print_partition_info.3 \ slurm_print_partition_info_msg.3 \ slurm_print_reservation_info.3 \ slurm_print_reservation_info_msg.3 \ slurm_print_slurmd_status.3 \ slurm_read_hostfile.3 \ slurm_reconfigure.3 \ slurm_resume.3 \ slurm_resume2.3 \ slurm_requeue.3 \ slurm_requeue2.3 \ 
slurm_set_debug_level.3 \ slurm_set_trigger.3 \ slurm_shutdown.3 \ slurm_signal_job.3 \ slurm_signal_job_step.3 \ slurm_slurmd_status.3 \ slurm_sprint_front_end_table.3 \ slurm_sprint_job_info.3 \ slurm_sprint_job_step_info.3 \ slurm_sprint_node_table.3 \ slurm_sprint_partition_info.3 \ slurm_sprint_reservation_info.3 \ slurm_step_ctx_create.3 \ slurm_step_ctx_create_no_alloc.3 \ slurm_step_ctx_daemon_per_node_hack.3 \ slurm_step_ctx_destroy.3 \ slurm_step_ctx_params_t_init.3 \ slurm_step_ctx_get.3 \ slurm_step_launch.3 \ slurm_step_launch_fwd_signal.3 \ slurm_step_launch_abort.3 \ slurm_step_launch_wait_finish.3 \ slurm_step_launch_wait_start.3 \ slurm_strerror.3 \ slurm_submit_batch_job.3 \ slurm_suspend.3 \ slurm_suspend2.3 \ slurm_takeover.3 \ slurm_terminate_job.3 \ slurm_terminate_job_step.3 \ slurm_update_front_end.3 \ slurm_update_job.3 \ slurm_update_job2.3 \ slurm_update_node.3 \ slurm_update_partition.3 \ slurm_update_reservation.3 \ slurm_update_step.3 EXTRA_DIST = $(man3_MANS) all: all-am .SUFFIXES: $(srcdir)/Makefile.in: @MAINTAINER_MODE_TRUE@ $(srcdir)/Makefile.am $(am__configure_deps) @for dep in $?; do \ case '$(am__configure_deps)' in \ *$$dep*) \ ( cd $(top_builddir) && $(MAKE) $(AM_MAKEFLAGS) am--refresh ) \ && { if test -f $@; then exit 0; else break; fi; }; \ exit 1;; \ esac; \ done; \ echo ' cd $(top_srcdir) && $(AUTOMAKE) --gnu doc/man/man3/Makefile'; \ $(am__cd) $(top_srcdir) && \ $(AUTOMAKE) --gnu doc/man/man3/Makefile .PRECIOUS: Makefile Makefile: $(srcdir)/Makefile.in $(top_builddir)/config.status @case '$?' 
in \ *config.status*) \ cd $(top_builddir) && $(MAKE) $(AM_MAKEFLAGS) am--refresh;; \ *) \ echo ' cd $(top_builddir) && $(SHELL) ./config.status $(subdir)/$@ $(am__depfiles_maybe)'; \ cd $(top_builddir) && $(SHELL) ./config.status $(subdir)/$@ $(am__depfiles_maybe);; \ esac; $(top_builddir)/config.status: $(top_srcdir)/configure $(CONFIG_STATUS_DEPENDENCIES) cd $(top_builddir) && $(MAKE) $(AM_MAKEFLAGS) am--refresh $(top_srcdir)/configure: @MAINTAINER_MODE_TRUE@ $(am__configure_deps) cd $(top_builddir) && $(MAKE) $(AM_MAKEFLAGS) am--refresh $(ACLOCAL_M4): @MAINTAINER_MODE_TRUE@ $(am__aclocal_m4_deps) cd $(top_builddir) && $(MAKE) $(AM_MAKEFLAGS) am--refresh $(am__aclocal_m4_deps): mostlyclean-libtool: -rm -f *.lo clean-libtool: -rm -rf .libs _libs install-man3: $(man3_MANS) @$(NORMAL_INSTALL) @list1='$(man3_MANS)'; \ list2=''; \ test -n "$(man3dir)" \ && test -n "`echo $$list1$$list2`" \ || exit 0; \ echo " $(MKDIR_P) '$(DESTDIR)$(man3dir)'"; \ $(MKDIR_P) "$(DESTDIR)$(man3dir)" || exit 1; \ { for i in $$list1; do echo "$$i"; done; \ if test -n "$$list2"; then \ for i in $$list2; do echo "$$i"; done \ | sed -n '/\.3[a-z]*$$/p'; \ fi; \ } | while read p; do \ if test -f $$p; then d=; else d="$(srcdir)/"; fi; \ echo "$$d$$p"; echo "$$p"; \ done | \ sed -e 'n;s,.*/,,;p;h;s,.*\.,,;s,^[^3][0-9a-z]*$$,3,;x' \ -e 's,\.[0-9a-z]*$$,,;$(transform);G;s,\n,.,' | \ sed 'N;N;s,\n, ,g' | { \ list=; while read file base inst; do \ if test "$$base" = "$$inst"; then list="$$list $$file"; else \ echo " $(INSTALL_DATA) '$$file' '$(DESTDIR)$(man3dir)/$$inst'"; \ $(INSTALL_DATA) "$$file" "$(DESTDIR)$(man3dir)/$$inst" || exit $$?; \ fi; \ done; \ for i in $$list; do echo "$$i"; done | $(am__base_list) | \ while read files; do \ test -z "$$files" || { \ echo " $(INSTALL_DATA) $$files '$(DESTDIR)$(man3dir)'"; \ $(INSTALL_DATA) $$files "$(DESTDIR)$(man3dir)" || exit $$?; }; \ done; } uninstall-man3: @$(NORMAL_UNINSTALL) @list='$(man3_MANS)'; test -n "$(man3dir)" || exit 0; \ files=`{ for i 
in $$list; do echo "$$i"; done; \ } | sed -e 's,.*/,,;h;s,.*\.,,;s,^[^3][0-9a-z]*$$,3,;x' \ -e 's,\.[0-9a-z]*$$,,;$(transform);G;s,\n,.,'`; \ dir='$(DESTDIR)$(man3dir)'; $(am__uninstall_files_from_dir) tags TAGS: ctags CTAGS: cscope cscopelist: distdir: $(DISTFILES) @srcdirstrip=`echo "$(srcdir)" | sed 's/[].[^$$\\*]/\\\\&/g'`; \ topsrcdirstrip=`echo "$(top_srcdir)" | sed 's/[].[^$$\\*]/\\\\&/g'`; \ list='$(DISTFILES)'; \ dist_files=`for file in $$list; do echo $$file; done | \ sed -e "s|^$$srcdirstrip/||;t" \ -e "s|^$$topsrcdirstrip/|$(top_builddir)/|;t"`; \ case $$dist_files in \ */*) $(MKDIR_P) `echo "$$dist_files" | \ sed '/\//!d;s|^|$(distdir)/|;s,/[^/]*$$,,' | \ sort -u` ;; \ esac; \ for file in $$dist_files; do \ if test -f $$file || test -d $$file; then d=.; else d=$(srcdir); fi; \ if test -d $$d/$$file; then \ dir=`echo "/$$file" | sed -e 's,/[^/]*$$,,'`; \ if test -d "$(distdir)/$$file"; then \ find "$(distdir)/$$file" -type d ! -perm -700 -exec chmod u+rwx {} \;; \ fi; \ if test -d $(srcdir)/$$file && test $$d != $(srcdir); then \ cp -fpR $(srcdir)/$$file "$(distdir)$$dir" || exit 1; \ find "$(distdir)/$$file" -type d ! 
-perm -700 -exec chmod u+rwx {} \;; \ fi; \ cp -fpR $$d/$$file "$(distdir)$$dir" || exit 1; \ else \ test -f "$(distdir)/$$file" \ || cp -p $$d/$$file "$(distdir)/$$file" \ || exit 1; \ fi; \ done check-am: all-am check: check-am all-am: Makefile $(MANS) installdirs: for dir in "$(DESTDIR)$(man3dir)"; do \ test -z "$$dir" || $(MKDIR_P) "$$dir"; \ done install: install-am install-exec: install-exec-am install-data: install-data-am uninstall: uninstall-am install-am: all-am @$(MAKE) $(AM_MAKEFLAGS) install-exec-am install-data-am installcheck: installcheck-am install-strip: if test -z '$(STRIP)'; then \ $(MAKE) $(AM_MAKEFLAGS) INSTALL_PROGRAM="$(INSTALL_STRIP_PROGRAM)" \ install_sh_PROGRAM="$(INSTALL_STRIP_PROGRAM)" INSTALL_STRIP_FLAG=-s \ install; \ else \ $(MAKE) $(AM_MAKEFLAGS) INSTALL_PROGRAM="$(INSTALL_STRIP_PROGRAM)" \ install_sh_PROGRAM="$(INSTALL_STRIP_PROGRAM)" INSTALL_STRIP_FLAG=-s \ "INSTALL_PROGRAM_ENV=STRIPPROG='$(STRIP)'" install; \ fi mostlyclean-generic: clean-generic: distclean-generic: -test -z "$(CONFIG_CLEAN_FILES)" || rm -f $(CONFIG_CLEAN_FILES) -test . = "$(srcdir)" || test -z "$(CONFIG_CLEAN_VPATH_FILES)" || rm -f $(CONFIG_CLEAN_VPATH_FILES) maintainer-clean-generic: @echo "This command is intended for maintainers to use" @echo "it deletes files that may require special tools to rebuild." 
clean: clean-am clean-am: clean-generic clean-libtool mostlyclean-am distclean: distclean-am -rm -f Makefile distclean-am: clean-am distclean-generic dvi: dvi-am dvi-am: html: html-am html-am: info: info-am info-am: install-data-am: install-man install-dvi: install-dvi-am install-dvi-am: install-exec-am: install-html: install-html-am install-html-am: install-info: install-info-am install-info-am: install-man: install-man3 install-pdf: install-pdf-am install-pdf-am: install-ps: install-ps-am install-ps-am: installcheck-am: maintainer-clean: maintainer-clean-am -rm -f Makefile maintainer-clean-am: distclean-am maintainer-clean-generic mostlyclean: mostlyclean-am mostlyclean-am: mostlyclean-generic mostlyclean-libtool pdf: pdf-am pdf-am: ps: ps-am ps-am: uninstall-am: uninstall-man uninstall-man: uninstall-man3 .MAKE: install-am install-strip .PHONY: all all-am check check-am clean clean-generic clean-libtool \ cscopelist-am ctags-am distclean distclean-generic \ distclean-libtool distdir dvi dvi-am html html-am info info-am \ install install-am install-data install-data-am install-dvi \ install-dvi-am install-exec install-exec-am install-html \ install-html-am install-info install-info-am install-man \ install-man3 install-pdf install-pdf-am install-ps \ install-ps-am install-strip installcheck installcheck-am \ installdirs maintainer-clean maintainer-clean-generic \ mostlyclean mostlyclean-generic mostlyclean-libtool pdf pdf-am \ ps ps-am tags-am uninstall uninstall-am uninstall-man \ uninstall-man3 # Tell versions [3.59,3.63) of GNU make to not export all variables. # Otherwise a system limit (for SysV at least) may be exceeded. 
.NOEXPORT: slurm-slurm-15-08-7-1/doc/man/man3/slurm_allocate_resources.3000066400000000000000000000363241265000126300236770ustar00rootroot00000000000000.TH "Slurm API" "3" "Slurm job initiation functions" "April 2015" "Slurm job initiation functions" .SH "NAME" slurm_allocate_resources, slurm_allocate_resources_blocking, slurm_allocation_msg_thr_create, slurm_allocation_msg_thr_destroy, slurm_allocation_lookup, slurm_allocation_lookup_lite, slurm_confirm_allocation, slurm_free_submit_response_response_msg, slurm_init_job_desc_msg, slurm_job_will_run, slurm_job_will_run2, slurm_read_hostfile, slurm_submit_batch_job \- Slurm job initiation functions .SH "SYNTAX" .LP #include <slurm/slurm.h> .LP int \fBslurm_allocate_resources\fR ( .br job_desc_msg_t *\fIjob_desc_msg_ptr\fP, .br resource_allocation_response_msg_t **\fIslurm_alloc_msg_pptr\fP .br ); .LP resource_allocation_response_msg_t *\fBslurm_allocate_resources_blocking\fR ( .br job_desc_msg_t *\fIjob_desc_msg_ptr\fP, .br time_t \fItimeout\fP, void \fI(*pending_callback)(uint32_t job_id)\fP .br ); .LP allocation_msg_thread_t *\fBslurm_allocation_msg_thr_create\fR ( .br uint16_t *\fIport\fP, .br slurm_allocation_callbacks_t *\fIcallbacks\fP .br ); .LP void *\fBslurm_allocation_msg_thr_destroy\fR ( .br allocation_msg_thread_t *\fIslurm_alloc_msg_thr_ptr\fP .br ); .LP int \fBslurm_allocation_lookup\fR ( .br uint32_t \fIjobid\fP, .br resource_allocation_response_msg_t **\fIslurm_alloc_msg_pptr\fP .br ); .LP int \fBslurm_allocation_lookup_lite\fR ( .br uint32_t \fIjobid\fP, .br resource_allocation_response_msg_t **\fIslurm_alloc_msg_pptr\fP .br ); .LP int \fBslurm_confirm_allocation\fR ( .br old_job_alloc_msg_t *\fIold_job_desc_msg_ptr\fP, .br resource_allocation_response_msg_t **\fIslurm_alloc_msg_pptr\fP .br ); .LP void \fBslurm_free_resource_allocation_response_msg\fR ( .br resource_allocation_response_msg_t *\fIslurm_alloc_msg_ptr\fP .br ); .LP void \fBslurm_free_submit_response_response_msg\fR ( .br submit_response_msg_t
*\fIslurm_submit_msg_ptr\fP .br ); .LP void \fBslurm_init_job_desc_msg\fR ( .br job_desc_msg_t *\fIjob_desc_msg_ptr\fP .br ); .LP int \fBslurm_job_will_run\fR ( .br job_desc_msg_t *\fIjob_desc_msg_ptr\fP .br ); .LP int \fBslurm_job_will_run2\fR ( .br job_desc_msg_t *\fIjob_desc_msg_ptr\fP, .br will_run_response_msg_t **\fIwill_run_resp\fP .br ); .LP int \fBslurm_read_hostfile\fR ( .br char *\fIfilename\fP, int \fIn\fP .br ); .LP int \fBslurm_submit_batch_job\fR ( .br job_desc_msg_t *\fIjob_desc_msg_ptr\fP, .br submit_response_msg_t **\fIslurm_submit_msg_pptr\fP .br ); .SH "ARGUMENTS" .LP .TP \fIjob_desc_msg_ptr\fP Specifies the pointer to a job request specification. See slurm.h for full details on the data structure's contents. .TP \fIcallbacks\fP Specifies the pointer to an allocation callbacks structure. See slurm.h for full details on the data structure's contents. .TP \fIold_job_desc_msg_ptr\fP Specifies the pointer to a description of an existing job. See slurm.h for full details on the data structure's contents. .TP \fIslurm_alloc_msg_pptr\fP Specifies the double pointer to the structure to be created and filled with a description of the created resource allocation (job): job ID, list of allocated nodes, processor count per allocated node, etc. See slurm.h for full details on the data structure's contents. .TP \fIslurm_alloc_msg_ptr\fP Specifies the pointer to the structure to be created and filled in by the function \fIslurm_allocate_resources\fP, \fIslurm_allocate_resources_blocking\fP, \fIslurm_allocation_lookup\fP, \fIslurm_allocation_lookup_lite\fP, \fIslurm_confirm_allocation\fP, \fIslurm_job_will_run\fP or \fIslurm_job_will_run2\fP. .TP \fIslurm_alloc_msg_thr_ptr\fP Specifies the pointer to the structure created and returned by the function \fIslurm_allocation_msg_thr_create\fP. Must be destroyed with function \fIslurm_allocation_msg_thr_destroy\fP.
.TP \fIslurm_submit_msg_pptr\fP Specifies the double pointer to the structure to be created and filled with a description of the created job: job ID, etc. See slurm.h for full details on the data structure's contents. .TP \fIslurm_submit_msg_ptr\fP Specifies the pointer to the structure to be created and filled in by the function \fIslurm_submit_batch_job\fP. .TP \fIwill_run_resp\fP Specifies when and where the specified job descriptor could be started. .SH "DESCRIPTION" .LP \fBslurm_allocate_resources\fR Request a resource allocation for a job. If successful, a job entry is created. Note that if the job's requested node count or time allocation are outside of the partition's limits then a job entry will be created, a warning indication will be placed in the \fIerror_code\fP field of the response message, and the job will be left queued until the partition's limits are changed. Always release the response message when no longer required using the function \fBslurm_free_resource_allocation_response_msg\fR. This function only makes the request once. If the allocation is not available immediately the node_cnt variable in the resp will be 0. If you want a function that will block until either an error is received or an allocation is granted you can use the \fIslurm_allocate_resources_blocking\fP function described below. .LP \fBslurm_allocate_resources_blocking\fR Request a resource allocation for a job. This call will block until the allocation is granted, an error occurs, or the specified timeout limit is reached. The \fIpending_callback\fP parameter will be called if the allocation is not available immediately and the immediate flag is not set in the request. This can be used to get the jobid of the job while waiting for the allocation to become available. On failure NULL is returned and errno is set. .LP \fBslurm_allocation_msg_thr_create\fR Startup a message handler talking with the controller dealing with messages from the controller during an allocation. 
Callback functions are declared in the \fIcallbacks\fP parameter and will be called when a corresponding message is received from the controller. This message thread is needed to receive messages from the controller about node failure in an allocation and other important messages. Although technically not required, it is very helpful for being informed of problems with the allocation. .LP \fBslurm_allocation_msg_thr_destroy\fR Shut down the message handler that receives messages from the controller during an allocation. .LP \fBslurm_confirm_allocation\fR Return detailed information on a specific existing job allocation. \fBOBSOLETE FUNCTION: Use slurm_allocation_lookup instead.\fR This function may only be successfully executed by the job's owner or user root. .LP \fBslurm_free_resource_allocation_response_msg\fR Release the storage generated in response to a call of the function \fBslurm_allocate_resources\fR, \fBslurm_allocation_lookup\fR, or \fBslurm_allocation_lookup_lite\fR. .LP \fBslurm_free_submit_response_response_msg\fR Release the storage generated in response to a call of the function \fBslurm_submit_batch_job\fR. .LP \fBslurm_init_job_desc_msg\fR Initialize the contents of a job descriptor with default values. Execute this function before issuing a request to submit or modify a job. .LP \fBslurm_job_will_run\fR Determine if the supplied job description could be executed immediately. .LP \fBslurm_job_will_run2\fR Determine when and where the supplied job description can be executed. .LP \fBslurm_read_hostfile\fR Read a Slurm hostfile specified by "filename". "filename" must contain a list of Slurm NodeNames, one per line. Reads up to "n" hostnames from the file. Returns a string representing a hostlist ranged string of the contents of the file. This is a helper function; it does not contact any Slurm daemons. .LP \fBslurm_submit_batch_job\fR Submit a job for later execution.
Note that if the job's requested node count or time allocation are outside of the partition's limits then a job entry will be created, a warning indication will be placed in the \fIerror_code\fP field of the response message, and the job will be left queued until the partition's limits are changed and resources are available. Always release the response message when no longer required using the function \fBslurm_free_submit_response_response_msg\fR. .SH "RETURN VALUE" .LP On success, zero is returned. On error, \-1 is returned, and the Slurm error code is set appropriately. .SH "ERRORS" .LP \fBSLURM_PROTOCOL_VERSION_ERROR\fR Protocol version has changed, re\-link your code. .LP \fBESLURM_CAN_NOT_START_IMMEDIATELY\fR the job can not be started immediately as requested. .LP \fBESLURM_DEFAULT_PARTITION_NOT_SET\fR the system lacks a valid default partition. .LP \fBESLURM_JOB_MISSING_PARTITION_KEY\fR use of this partition is restricted through a credential provided only to user root. This job lacks such a valid credential. .LP \fBESLURM_JOB_MISSING_REQUIRED_PARTITION_GROUP\fR use of this partition is restricted to certain groups. This user is not a member of an authorized group. .LP \fBESLURM_REQUESTED_NODES_NOT_IN_PARTITION\fR the job requested use of specific nodes which are not in the requested (or default) partition. .LP \fBESLURM_TOO_MANY_REQUESTED_CPUS\fR the job requested use of more processors than can be made available in the requested (or default) partition. .LP \fBESLURM_TOO_MANY_REQUESTED_NODES\fR the job requested use of more nodes than can be made available in the requested (or default) partition. .LP \fBESLURM_ERROR_ON_DESC_TO_RECORD_COPY\fR unable to create the job due to internal resources being exhausted. Try again later. .LP \fBESLURM_JOB_MISSING_SIZE_SPECIFICATION\fR the job failed to specify some size specification. At least one of the following must be supplied: required processor count, required node count, or required node list.
.LP \fBESLURM_JOB_SCRIPT_MISSING\fR failed to identify executable program to be queued. .LP \fBESLURM_USER_ID_MISSING\fR identification of the job's owner was not provided. .LP \fBESLURM_DUPLICATE_JOB_ID\fR the requested job id is already in use. .LP \fBESLURM_NOT_TOP_PRIORITY\fR job can not be started immediately because higher priority jobs are waiting to use this partition. .LP \fBESLURM_REQUESTED_NODE_CONFIG_UNAVAILABLE\fR the requested node configuration is not available (at least not in sufficient quantity) to satisfy the request. .LP \fBESLURM_REQUESTED_PART_CONFIG_UNAVAILABLE\fR the requested partition configuration is not available to satisfy the request. This is not a fatal error, but indicates that the job will be left queued until the partition's configuration is changed. This typically indicates that the job's requested node count is outside of the node count range its partition is configured to support (e.g. the job wants 64 nodes and the partition will only schedule jobs using between 1 and 32 nodes). Alternately, the job's time limit exceeds the partition's time limit. .LP \fBESLURM_NODES_BUSY\fR the requested nodes are already in use. .LP \fBESLURM_INVALID_FEATURE\fR the requested feature(s) does not exist. .LP \fBESLURM_INVALID_JOB_ID\fR the requested job id does not exist. .LP \fBESLURM_INVALID_NODE_COUNT\fR the requested node count is not valid. .LP \fBESLURM_INVALID_NODE_NAME\fR the requested node name(s) is/are not valid. .LP \fBESLURM_INVALID_PARTITION_NAME\fR the requested partition name is not valid. .LP \fBESLURM_TRANSITION_STATE_NO_UPDATE\fR the requested job configuration change can not take place at this time. Try again later. .LP \fBESLURM_ALREADY_DONE\fR the specified job has already completed and can not be modified. .LP \fBESLURM_ACCESS_DENIED\fR the requesting user lacks authorization for the requested action (e.g. trying to delete or modify another user's job). 
.LP \fBESLURM_INTERCONNECT_FAILURE\fR failed to configure the node interconnect. .LP \fBESLURM_BAD_DIST\fR task distribution specification is invalid. .LP \fBSLURM_PROTOCOL_SOCKET_IMPL_TIMEOUT\fR Timeout in communicating with Slurm controller. .SH "NON-BLOCKING EXAMPLE" .LP #include <stdio.h> .br #include <stdlib.h> .br #include <signal.h> .br #include <slurm/slurm.h> .br #include <slurm/slurm_errno.h> .LP int main (int argc, char *argv[]) .br { .br job_desc_msg_t job_desc_msg; .br resource_allocation_response_msg_t* slurm_alloc_msg_ptr ; .LP slurm_init_job_desc_msg( &job_desc_msg ); .br job_desc_msg. name = ("job01\0"); .br job_desc_msg. job_min_memory = 1024; .br job_desc_msg. time_limit = 200; .br job_desc_msg. min_nodes = 400; .br job_desc_msg. user_id = getuid(); .br job_desc_msg. group_id = getgid(); .br if (slurm_allocate_resources(&job_desc_msg, .br &slurm_alloc_msg_ptr)) { .br slurm_perror ("slurm_allocate_resources error"); .br exit (1); .br } .br printf ("Allocated nodes %s to job_id %u\\n", .br slurm_alloc_msg_ptr\->node_list, .br slurm_alloc_msg_ptr\->job_id ); .br if (slurm_kill_job(slurm_alloc_msg_ptr\->job_id, SIGKILL, 0)) { .br printf ("kill errno %d\\n", slurm_get_errno()); .br exit (1); .br } .br printf ("canceled job_id %u\\n", .br slurm_alloc_msg_ptr\->job_id ); .br slurm_free_resource_allocation_response_msg( .br slurm_alloc_msg_ptr); .br exit (0); .br } .SH "BLOCKING EXAMPLE" .LP #include <stdio.h> .br #include <stdlib.h> .br #include <signal.h> .br #include <slurm/slurm.h> .br #include <slurm/slurm_errno.h> .LP int main (int argc, char *argv[]) .br { .br job_desc_msg_t job_desc_msg; .br resource_allocation_response_msg_t* slurm_alloc_msg_ptr ; .LP slurm_init_job_desc_msg( &job_desc_msg ); .br job_desc_msg. name = ("job01\0"); .br job_desc_msg. job_min_memory = 1024; .br job_desc_msg. time_limit = 200; .br job_desc_msg. min_nodes = 400; .br job_desc_msg. user_id = getuid(); .br job_desc_msg.
group_id = getgid(); .br if (!(slurm_alloc_msg_ptr = .br slurm_allocate_resources_blocking(&job_desc_msg, 0, NULL))) { .br slurm_perror ("slurm_allocate_resources_blocking error"); .br exit (1); .br } .br printf ("Allocated nodes %s to job_id %u\\n", .br slurm_alloc_msg_ptr\->node_list, .br slurm_alloc_msg_ptr\->job_id ); .br if (slurm_kill_job(slurm_alloc_msg_ptr\->job_id, SIGKILL, 0)) { .br printf ("kill errno %d\\n", slurm_get_errno()); .br exit (1); .br } .br printf ("canceled job_id %u\\n", .br slurm_alloc_msg_ptr\->job_id ); .br slurm_free_resource_allocation_response_msg( .br slurm_alloc_msg_ptr); .br exit (0); .br } .SH "NOTE" These functions are included in the libslurm library, which must be linked to your process for use (e.g. "cc \-lslurm myprog.c"). .SH "COPYING" Copyright (C) 2010\-2014 SchedMD LLC. Copyright (C) 2002\-2006 The Regents of the University of California. Produced at Lawrence Livermore National Laboratory (cf, DISCLAIMER). CODE\-OCEC\-09\-009. All rights reserved. .LP This file is part of Slurm, a resource management program. For details, see . .LP Slurm is free software; you can redistribute it and/or modify it under the terms of the GNU General Public License as published by the Free Software Foundation; either version 2 of the License, or (at your option) any later version. .LP Slurm is distributed in the hope that it will be useful, but WITHOUT ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU General Public License for more details. 
.SH "SEE ALSO" .LP \fBhostlist_create\fR(3), \fBhostlist_shift\fR(3), \fBhostlist_destroy\fR(3), \fBscancel\fR(1), \fBsrun\fR(1), \fBslurm_free_job_info_msg\fR(3), \fBslurm_get_errno\fR(3), \fBslurm_load_jobs\fR(3), \fBslurm_perror\fR(3), \fBslurm_strerror\fR(3) slurm-slurm-15-08-7-1/doc/man/man3/slurm_allocate_resources_blocking.3000066400000000000000000000000441265000126300255350ustar00rootroot00000000000000.so man3/slurm_allocate_resources.3 slurm-slurm-15-08-7-1/doc/man/man3/slurm_allocation_lookup.3000066400000000000000000000000441265000126300235250ustar00rootroot00000000000000.so man3/slurm_allocate_resources.3 slurm-slurm-15-08-7-1/doc/man/man3/slurm_allocation_lookup_lite.3000066400000000000000000000000441265000126300245420ustar00rootroot00000000000000.so man3/slurm_allocate_resources.3 slurm-slurm-15-08-7-1/doc/man/man3/slurm_allocation_msg_thr_create.3000066400000000000000000000000441265000126300252020ustar00rootroot00000000000000.so man3/slurm_allocate_resources.3 slurm-slurm-15-08-7-1/doc/man/man3/slurm_allocation_msg_thr_destroy.3000066400000000000000000000000441265000126300254300ustar00rootroot00000000000000.so man3/slurm_allocate_resources.3 slurm-slurm-15-08-7-1/doc/man/man3/slurm_api_version.3000066400000000000000000000000371265000126300223270ustar00rootroot00000000000000.so man3/slurm_free_ctl_conf.3 slurm-slurm-15-08-7-1/doc/man/man3/slurm_checkpoint.3000066400000000000000000000000421265000126300221340ustar00rootroot00000000000000.so man3/slurm_checkpoint_error.3 slurm-slurm-15-08-7-1/doc/man/man3/slurm_checkpoint_able.3000066400000000000000000000000421265000126300231170ustar00rootroot00000000000000.so man3/slurm_checkpoint_error.3 slurm-slurm-15-08-7-1/doc/man/man3/slurm_checkpoint_complete.3000066400000000000000000000000421265000126300240240ustar00rootroot00000000000000.so man3/slurm_checkpoint_error.3 slurm-slurm-15-08-7-1/doc/man/man3/slurm_checkpoint_create.3000066400000000000000000000000421265000126300234570ustar00rootroot00000000000000.so 
man3/slurm_checkpoint_error.3 slurm-slurm-15-08-7-1/doc/man/man3/slurm_checkpoint_disable.3000066400000000000000000000000421265000126300236170ustar00rootroot00000000000000.so man3/slurm_checkpoint_error.3 slurm-slurm-15-08-7-1/doc/man/man3/slurm_checkpoint_enable.3000066400000000000000000000000421265000126300234420ustar00rootroot00000000000000.so man3/slurm_checkpoint_error.3 slurm-slurm-15-08-7-1/doc/man/man3/slurm_checkpoint_error.3000066400000000000000000000146051265000126300233570ustar00rootroot00000000000000.TH "Slurm API" "3" "Slurm checkpoint functions" "April 2015" "Slurm checkpoint functions" .SH "NAME" slurm_checkpoint_able, slurm_checkpoint_complete, slurm_checkpoint_create, slurm_checkpoint_disable, slurm_checkpoint_enable, slurm_checkpoint_error, slurm_checkpoint_restart, slurm_checkpoint_vacate \- Slurm checkpoint functions .SH "SYNTAX" .LP #include .LP .LP int \fBslurm_checkpoint_able\fR ( .br uint32_t \fIjob_id\fP, .br uint32_t \fIstep_id\fP, .br time_t *\fIstart_time\fP, .br ); .LP int \fBslurm_checkpoint_complete\fR ( .br uint32_t \fIjob_id\fP, .br uint32_t \fIstep_id\fP, .br time_t \fIstart_time\fP, .br uint32_t \fIerror_code\fP, .br char *\fIerror_msg\fP .br ); .LP int \fBslurm_checkpoint_create\fR ( .br uint32_t \fIjob_id\fP, .br uint32_t \fIstep_id\fP, .br uint16_t \fImax_wait\fP, .br char *\fIimage_dir\fP .br ); .LP int \fBslurm_checkpoint_disable\fR ( .br uint32_t \fIjob_id\fP, .br uint32_t \fIstep_id\fP .br ); .LP int \fBslurm_checkpoint_enable\fR ( .br uint32_t \fIjob_id\fP, .br uint32_t \fIstep_id\fP .br ); .LP int \fBslurm_checkpoint_error\fR ( .br uint32_t \fIjob_id\fP, .br uint32_t \fIstep_id\fP, .br uint32_t *\fIerror_code\fP, .br char ** \fIerror_msg\fP .br ); .LP int \fBslurm_checkpoint_restart\fR ( .br uint32_t \fIjob_id\fP, .br uint32_t \fIstep_id\fP, .br uint16_t \fIstick\fP, .br char *\fIimage_dir\fP .br ); .LP .LP int \fBslurm_checkpoint_tasks\fR ( .br uint32_t \fIjob_id\fP, .br uint32_t \fIstep_id\fP, .br time_t 
\fIbegin_time\fP, .br char *\fIimage_dir\fP, .br uint16_t \fImax_wait\fP, .br char *\fInodelist\fP .br ); .LP int \fBslurm_checkpoint_vacate\fR ( .br uint32_t \fIjob_id\fP, .br uint32_t \fIstep_id\fP, .br uint16_t \fImax_wait\fP, .br char *\fIimage_dir\fP .br ); .SH "ARGUMENTS" .LP .TP \fIbegin_time\fP When to begin the operation. .TP \fIerror_code\fP Error code for checkpoint operation. Only the highest value is preserved. .TP \fIerror_msg\fP Error message for checkpoint operation. Only the \fIerror_msg\fP value for the highest \fIerror_code\fP is preserved. .TP \fIimage_dir\fP Directory specification for where the checkpoint file should be read from or written to. The default value is specified by the \fIJobCheckpointDir\fP Slurm configuration parameter. .TP \fIjob_id\fP Slurm job ID to perform the operation upon. .TP \fImax_wait\fP Maximum time to allow for the operation to complete in seconds. .TP \fInodelist\fP Nodes to send the request. .TP \fIstart_time\fP Time at which last checkpoint operation began (if one is in progress), otherwise zero. .TP \fIstep_id\fP Slurm job step ID to perform the operation upon. May be NO_VAL if the operation is to be performed on all steps of the specified job. Specify SLURM_BATCH_SCRIPT to checkpoint a batch job. .TP \fIstick\fP If non\-zero then restart the job on the same nodes that it was checkpointed from. .SH "DESCRIPTION" .LP \fBslurm_checkpoint_able\fR Report if checkpoint operations can presently be issued for the specified job step. If yes, returns SLURM_SUCCESS and sets \fIstart_time\fP if checkpoint operation is presently active. Returns ESLURM_DISABLED if checkpoint operation is disabled. .LP \fBslurm_checkpoint_complete\fR Note that a requested checkpoint has been completed. .LP \fBslurm_checkpoint_create\fR Request a checkpoint for the identified job step. Continue its execution upon completion of the checkpoint. .LP \fBslurm_checkpoint_disable\fR Make the identified job step non\-checkpointable. 
This can be issued as needed to prevent checkpointing while a job step is in a critical section or for other reasons. .LP \fBslurm_checkpoint_enable\fR Make the identified job step checkpointable. .LP \fBslurm_checkpoint_error\fR Get error information about the last checkpoint operation for a given job step. .LP \fBslurm_checkpoint_restart\fR Request that a previously checkpointed job resume execution. It may continue execution on different nodes than were originally used. Execution may be delayed if resources are not immediately available. .LP \fBslurm_checkpoint_vacate\fR Request a checkpoint for the identified job step. Terminate its execution upon completion of the checkpoint. .SH "RETURN VALUE" .LP Zero is returned upon success. On error, \-1 is returned, and the Slurm error code is set appropriately. .SH "ERRORS" .LP \fBESLURM_INVALID_JOB_ID\fR the requested job or job step id does not exist. .LP \fBESLURM_ACCESS_DENIED\fR the requesting user lacks authorization for the requested action (e.g. trying to delete or modify another user's job). .LP \fBESLURM_JOB_PENDING\fR the requested job is still pending. .LP \fBESLURM_ALREADY_DONE\fR the requested job has already completed. .LP \fBESLURM_DISABLED\fR the requested operation has been disabled for this job step. This will occur when a request for checkpoint is issued when they have been disabled. .LP \fBESLURM_NOT_SUPPORTED\fR the requested operation is not supported on this system. 
.SH "EXAMPLE" .LP #include .br #include .br #include .br #include .LP int main (int argc, char *argv[]) .br { .br uint32_t job_id, step_id; .LP if (argc < 3) { .br printf("Usage: %s job_id step_id\\n", argv[0]); .br exit(1); .br } .LP job_id = atoi(argv[1]); .br step_id = atoi(argv[2]); .br if (slurm_checkpoint_disable(job_id, step_id)) { .br slurm_perror ("slurm_checkpoint_error:"); .br exit (1); .br } .br exit (0); .br } .SH "NOTE" These functions are included in the libslurm library, which must be linked to your process for use (e.g. "cc \-lslurm myprog.c"). .SH "COPYING" Copyright (C) 2004\-2007 The Regents of the University of California. Copyright (C) 2008\-2009 Lawrence Livermore National Security. Produced at Lawrence Livermore National Laboratory (cf, DISCLAIMER). CODE\-OCEC\-09\-009. All rights reserved. .LP This file is part of Slurm, a resource management program. For details, see . .LP Slurm is free software; you can redistribute it and/or modify it under the terms of the GNU General Public License as published by the Free Software Foundation; either version 2 of the License, or (at your option) any later version. .LP Slurm is distributed in the hope that it will be useful, but WITHOUT ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU General Public License for more details. 
.SH "SEE ALSO" .LP \fBsrun\fR(1), \fBsqueue\fR(1), \fBfree\fR(3), \fBslurm.conf\fR(5) slurm-slurm-15-08-7-1/doc/man/man3/slurm_checkpoint_failed.3000066400000000000000000000000421265000126300234400ustar00rootroot00000000000000.so man3/slurm_checkpoint_error.3 slurm-slurm-15-08-7-1/doc/man/man3/slurm_checkpoint_restart.3000066400000000000000000000000421265000126300237000ustar00rootroot00000000000000.so man3/slurm_checkpoint_error.3 slurm-slurm-15-08-7-1/doc/man/man3/slurm_checkpoint_task_complete.3000066400000000000000000000000421265000126300250460ustar00rootroot00000000000000.so man3/slurm_checkpoint_error.3 slurm-slurm-15-08-7-1/doc/man/man3/slurm_checkpoint_tasks.3000066400000000000000000000000421265000126300233410ustar00rootroot00000000000000.so man3/slurm_checkpoint_error.3 slurm-slurm-15-08-7-1/doc/man/man3/slurm_checkpoint_vacate.3000066400000000000000000000000421265000126300234570ustar00rootroot00000000000000.so man3/slurm_checkpoint_error.3 slurm-slurm-15-08-7-1/doc/man/man3/slurm_clear_trigger.3000066400000000000000000000067501265000126300226320ustar00rootroot00000000000000.TH "Slurm API" "3" "Slurm event trigger management functions" "April 2015" "Slurm event trigger management functions" .SH "NAME" slurm_init_trigger_msg, slurm_clear_trigger, slurm_free_trigger_msg, slurm_get_triggers, slurm_set_trigger \- Slurm event trigger management functions .SH "SYNTAX" .LP #include .LP .LP int \fBslurm_set_trigger\fR ( .br trigger_info_t *\fItrigger_info\fP .br ); .LP int \fBslurm_clear_trigger\fR ( .br trigger_info_t *\fItrigger_info\fP .br ); .LP int \fBslurm_get_triggers\fR ( .br trigger_info_msg_t **\fItrigger_info_msg\fP .br ); .LP int \fBslurm_free_trigger\fR ( .br trigger_info_msg_t *\fItrigger_info_msg\fP .br ); .LP int \fBslurm_init_trigger_msg\fR ( .br trigger_info_msg_t *\fItrigger_info_msg\fP .br ); .SH "ARGUMENTS" .LP .TP \fItrigger_info\fP Information about one event trigger including trigger ID, type, time offset, etc. See \fIslurm.h\fP for details. 
.TP \fItrigger_info_msg\fP A data structure including an array of \fItrigger_info\fP structures plus their count. See \fIslurm.h\fP for details. .SH "DESCRIPTION" .LP \fBslurm_set_trigger\fR Create a new event trigger. Note that any trigger ID specified in \fItrigger_info\fP is unused. .LP \fBslurm_clear_trigger\fR Clear or remove existing event triggers. If a trigger ID is specified then only that one trigger will be cleared. If a job ID or node name is specified, then all triggers associated with that resource are cleared. .LP \fBslurm_get_triggers\fR Get information about all currently configured event triggers. To avoid memory leaks, always follow this with a call to the \fBslurm_free_trigger\fR function. .LP \fBslurm_free_trigger\fR Release the memory allocated for the array returned by the \fBslurm_get_triggers\fR function. .LP \fBslurm_init_trigger_msg\fR Initialize the data structure to be used in subsequent call to \fBslurm_set_trigger\fR or \fBslurm_clear_trigger\fR. .SH "RETURN VALUE" .LP \fBSLURM_SUCCESS\fR is returned on successful completion, otherwise an error code is returned as described below. .SH "ERRORS" .LP \fBEINVAL\fR Invalid argument .LP \fBESLURM_ACCESS_DENIED\fR Attempt by non\-privileged user to set an event trigger. .LP \fBESLURM_ALREADY_DONE\fR Attempt to set an event trigger for a job which has already completed. .LP \fBESLURM_INVALID_NODE_NAME\fR Attempt to set an event trigger for a node name which is invalid. .LP \fBESLURM_INVALID_JOB_ID\fR the specified job id does not exist. .LP \fBESLURM_TRIGGER_DUP\fR there is already an identical event trigger. .SH "NOTE" These functions are included in the libslurm library, which must be linked to your process for use (e.g. "cc \-lslurm myprog.c"). .SH "COPYING" Copyright (C) 2010 Lawrence Livermore National Security. Produced at Lawrence Livermore National Laboratory (cf, DISCLAIMER). Portions Copyright (C) 2014 SchedMD LLC. .LP This file is part of Slurm, a resource management program. 
For details, see . .LP Slurm is free software; you can redistribute it and/or modify it under the terms of the GNU General Public License as published by the Free Software Foundation; either version 2 of the License, or (at your option) any later version. .LP Slurm is distributed in the hope that it will be useful, but WITHOUT ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU General Public License for more details. .SH "SEE ALSO" .LP \fBstrigger\fR(1), \fBslurm_get_errno\fR(3), \fBslurm_perror\fR(3), \fBslurm_strerror\fR(3) slurm-slurm-15-08-7-1/doc/man/man3/slurm_complete_job.3000066400000000000000000000045031265000126300224550ustar00rootroot00000000000000.TH "Slurm API" "3" "Slurm job completion functions" "April 2015" "Slurm job completion functions" .SH "NAME" slurm_complete_job \- Slurm job completion call .SH "SYNTAX" .LP #include .LP int \fBslurm_complete_job\fR ( .br uint32_t \fIjob_id\fP, .br uint32_t \fIjob_return_code\fP .br ); .SH "ARGUMENTS" .LP .TP \fIjob_id\fP Slurm job id number. .TP \fIjob_return_code\fP Exit code of the program executed. .SH "DESCRIPTION" .LP \fBslurm_complete_job\fR Note the termination of a job. This function may only be successfully executed by the job's owner or user root. .SH "RETURN VALUE" .LP On success, zero is returned. On error, \-1 is returned, and Slurm error code is set appropriately. .SH "ERRORS" .LP \fBSLURM_PROTOCOL_VERSION_ERROR\fR Protocol version has changed, re\-link your code. .LP \fBESLURM_INVALID_JOB_ID\fR the requested job id does not exist. .LP \fBESLURM_ALREADY_DONE\fR the specified job has already completed and can not be modified. .LP \fBESLURM_ACCESS_DENIED\fR the requesting user lacks authorization for the requested action (e.g. trying to delete or modify another user's job). .LP \fBESLURM_INTERCONNECT_FAILURE\fR failed to configure the node interconnect. 
.LP \fBSLURM_PROTOCOL_SOCKET_IMPL_TIMEOUT\fR Timeout in communicating with Slurm controller. .SH "NOTE" These functions are included in the libslurm library, which must be linked to your process for use (e.g. "cc \-lslurm myprog.c"). .SH "COPYING" Copyright (C) 2002\-2007 The Regents of the University of California. Copyright (C) 2008\-2009 Lawrence Livermore National Security. Produced at Lawrence Livermore National Laboratory (cf, DISCLAIMER). CODE\-OCEC\-09\-009. All rights reserved. .LP This file is part of Slurm, a resource management program. For details, see . .LP Slurm is free software; you can redistribute it and/or modify it under the terms of the GNU General Public License as published by the Free Software Foundation; either version 2 of the License, or (at your option) any later version. .LP Slurm is distributed in the hope that it will be useful, but WITHOUT ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU General Public License for more details. 
.SH "SEE ALSO" .LP \fBslurm_get_errno\fR(3), \fBslurm_perror\fR(3), \fBslurm_strerror\fR(3) slurm-slurm-15-08-7-1/doc/man/man3/slurm_confirm_allocation.3000066400000000000000000000000441265000126300236510ustar00rootroot00000000000000.so man3/slurm_allocate_resources.3 slurm-slurm-15-08-7-1/doc/man/man3/slurm_create_partition.3000066400000000000000000000000351265000126300233430ustar00rootroot00000000000000.so man3/slurm_reconfigure.3 slurm-slurm-15-08-7-1/doc/man/man3/slurm_create_reservation.3000066400000000000000000000000351265000126300236730ustar00rootroot00000000000000.so man3/slurm_reconfigure.3 slurm-slurm-15-08-7-1/doc/man/man3/slurm_delete_partition.3000066400000000000000000000000351265000126300233420ustar00rootroot00000000000000.so man3/slurm_reconfigure.3 slurm-slurm-15-08-7-1/doc/man/man3/slurm_delete_reservation.3000066400000000000000000000000351265000126300236720ustar00rootroot00000000000000.so man3/slurm_reconfigure.3 slurm-slurm-15-08-7-1/doc/man/man3/slurm_free_ctl_conf.3000066400000000000000000000106141265000126300226030ustar00rootroot00000000000000.TH "Slurm API" "3" "Slurm informational functions" "April 2015" "Slurm informational functions" .SH "NAME" slurm_free_ctl_conf, slurm_load_ctl_conf, slurm_print_ctl_conf \- Slurm information reporting functions .SH "SYNTAX" .LP #include .br #include .LP long \fBslurm_api_version\fR (); .LP void \fBslurm_free_ctl_conf\fR ( .br slurm_ctl_conf_t *\fIconf_info_msg_ptr\fP .br ); .LP int \fBslurm_load_ctl_conf\fR ( .br time_t \fIupdate_time\fP, .br slurm_ctl_conf_t **\fIconf_info_msg_pptr\fP .br ); .LP void \fBslurm_print_ctl_conf\fR ( .br FILE *\fIout_file\fp, .br slurm_ctl_conf_t *\fIconf_info_msg_ptr\fP .br ); .SH "ARGUMENTS" .LP .TP \fIconf_info_msg_pptr\fP Specifies the double pointer to the structure to be created and filled with the time of the last configuration update and detailed configuration information. Configuration information includes control machine names, file names, timer values, etc. 
See slurm.h for full details on the data structure's contents. .TP \fIconf_info_msg_ptr\fP Specifies the pointer to the structure created by \fBslurm_load_ctl_conf\fR. .TP \fIout_file\fP Specifies the file to print data to. .TP \fIupdate_time\fP For all of the following informational calls, if update_time is equal to or greater than the last time changes were made to that information, new information is not returned. Otherwise all of the configuration, job, node, or partition records are returned. .SH "DESCRIPTION" .LP \fBslurm_api_version\fR Return the Slurm API version number. .LP \fBslurm_free_ctl_conf\fR Release the storage generated by the \fBslurm_load_ctl_conf\fR function. .LP \fBslurm_load_ctl_conf\fR Returns a slurm_ctl_conf_t that contains Slurm configuration records. .LP \fBslurm_print_ctl_conf\fR Prints the contents of the data structure loaded by the \fBslurm_load_ctl_conf\fR function. .SH "RETURN VALUE" .LP For \fBslurm_api_version\fR the Slurm API version number is returned. All other functions return zero on success and \-1 on error with the Slurm error code set appropriately. .SH "ERRORS" .LP \fBSLURM_NO_CHANGE_IN_DATA\fR Data has not changed since \fBupdate_time\fR. .LP \fBSLURM_PROTOCOL_VERSION_ERROR\fR Protocol version has changed, re\-link your code. .LP \fBSLURM_PROTOCOL_SOCKET_IMPL_TIMEOUT\fR Timeout in communicating with Slurm controller. 
.SH "EXAMPLE" .LP #include .br #include .br #include .br #include .LP int main (int argc, char *argv[]) .br { .br slurm_ctl_conf_t * conf_info_msg_ptr = NULL; .br long version = slurm_api_version(); .LP /* We can use the Slurm version number to determine how .br * API should be used */ .br printf("slurm_api_version: %ld, %ld.%ld.%ld\\n", version, .br SLURM_VERSION_MAJOR(version), .br SLURM_VERSION_MINOR(version), .br SLURM_VERSION_MICRO(version)); .LP /* get and print some configuration information */ .br if ( slurm_load_ctl_conf ((time_t) NULL, .br &conf_info_msg_ptr ) ) { .br slurm_perror ("slurm_load_ctl_conf error"); .br exit (1); .br } .br /* The easy way to print */ .br slurm_print_ctl_conf (stdout, .br conf_info_msg_ptr); .LP /* The hard way */ .br printf ("control_machine = %s\\n", .br conf_info_msg_ptr\->control_machine); .br printf ("first_job_id = %u\\n", .br conf_info_msg_ptr\->first_job_id); .LP slurm_free_ctl_conf (conf_info_msg_ptr); .br exit (0); .br } .SH "NOTE" These functions are included in the libslurm library, which must be linked to your process for use (e.g. "cc \-lslurm myprog.c"). .SH "COPYING" Copyright (C) 2002\-2007 The Regents of the University of California. Produced at Lawrence Livermore National Laboratory (cf, DISCLAIMER). CODE\-OCEC\-09\-009. All rights reserved. .LP This file is part of Slurm, a resource management program. For details, see . .LP Slurm is free software; you can redistribute it and/or modify it under the terms of the GNU General Public License as published by the Free Software Foundation; either version 2 of the License, or (at your option) any later version. .LP Slurm is distributed in the hope that it will be useful, but WITHOUT ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU General Public License for more details. 
.SH "SEE ALSO" .LP \fBscontrol\fR(1), \fBslurm_get_errno\fR(3), \fBslurm_perror\fR(3), \fBslurm_strerror\fR(3) slurm-slurm-15-08-7-1/doc/man/man3/slurm_free_front_end_info_msg.3000066400000000000000000000131751265000126300246600ustar00rootroot00000000000000.TH "Slurm API" "3" "Slurm front end node informational functions" "April 2015" "Slurm front end node informational functions" .SH "NAME" slurm_free_front_end_info_msg, slurm_load_front_end, slurm_print_front_end_info_msg, slurm_print_front_end_table, slurm_sprint_front_end_table \- Slurm front end node information reporting functions .SH "SYNTAX" .LP #include .br #include .LP void \fBslurm_free_front_end_info_msg\fR ( .br front_end_info_msg_t *\fIfront_end_info_msg_ptr\fP .br ); .LP int \fBslurm_load_front_end\fR ( .br time_t \fIupdate_time\fP, .br front_end_info_msg_t **\fIfront_end_info_msg_pptr\fP, .br ); .LP void \fBslurm_print_front_end_info_msg\fR ( .br FILE *\fIout_file\fp, .br front_end_info_msg_t *\fIfront_end_info_msg_ptr\fP, .br int \fIone_liner\fP .br ); .LP void \fBslurm_print_front_end_table\fR ( .br FILE *\fIout_file\fp, .br front_end_info_t *\fIfront_end_ptr\fP, .br int \fIone_liner\fP .br ); .LP char *\fBslurm_sprint_front_end_table\fR ( .br front_end_info_t *\fIfront_end_ptr\fP, .br int \fIone_liner\fP .br ); .SH "ARGUMENTS" .LP .TP \fIfront_end_info_msg_ptr\fP Specifies the pointer to the structure created by \fBslurm_load_front_end\fR. .TP \fIfront_end_info_msg_pptr\fP Specifies the double pointer to the structure to be created and filled with the time of the last front end node update, a record count, and detailed information about each front_end node. Detailed front_end node information is written to fixed sized records and includes: name, state, etc. See slurm.h for full details on the data structure's contents. .TP \fIfront_end_ptr\fP Specifies a pointer to a single front end node record from the \fIfront_end_info_msg_ptr\fP data structure. 
.TP \fIone_liner\fP Print one record per line if non\-zero. .TP \fIout_file\fP Specifies the file to print data to. .TP \fIupdate_time\fP For all of the following informational calls, if update_time is equal to or greater than the last time changes were made to that information, new information is not returned. Otherwise all of the configuration, job, node, or partition records are returned. .SH "DESCRIPTION" .LP \fBslurm_free_front_end_info_msg\fR Release the storage generated by the \fBslurm_load_front_end\fR function. .LP \fBslurm_load_front_end\fR Returns a \fIfront_end_info_msg_t\fP that contains an update time, record count, and array of records for all front end nodes. .LP \fBslurm_print_front_end_info_msg\fR Prints the contents of the data structure describing all front end node records from the data loaded by the \fBslurm_load_front_end\fR function. .LP \fBslurm_print_front_end_table\fR Prints to a file the contents of the data structure describing a single front end node record loaded by the \fBslurm_load_front_end\fR function. .LP \fBslurm_sprint_front_end_table\fR Prints to memory the contents of the data structure describing a single front end node record loaded by the \fBslurm_load_front_end\fR function. .SH "RETURN VALUE" .LP On success, zero is returned. On error, \-1 is returned, and the Slurm error code is set appropriately. .SH "ERRORS" .LP \fBSLURM_NO_CHANGE_IN_DATA\fR Data has not changed since \fBupdate_time\fR. .LP \fBSLURM_PROTOCOL_VERSION_ERROR\fR Protocol version has changed, re\-link your code. .LP \fBSLURM_PROTOCOL_SOCKET_IMPL_TIMEOUT\fR Timeout in communicating with Slurm controller. 
.SH "EXAMPLE" .LP #include .br #include .br #include .LP int main (int argc, char *argv[]) .br { .br int i; .br front_end_info_msg_t *front_end_buffer_ptr = NULL; .br front_end_info_t *front_end_ptr; .LP /* get and dump some front end node information */ .br if ( slurm_load_front_end ((time_t) NULL, .br &front_end_buffer_ptr) ) { .br slurm_perror ("slurm_load_front_end error"); .br exit (1); .br } .LP /* The easy way to print... */ .br slurm_print_front_end_info_msg (stdout, front_end_buffer_ptr, 0); .LP /* A harder way.. */ .br for (i = 0; i < front_end_buffer_ptr\->record_count; i++) { .br front_end_ptr = &front_end_buffer_ptr\->front_end_array[i]; .br slurm_print_front_end_table(stdout, front_end_ptr, 0); .br } .LP /* The hardest way. */ .br for (i = 0; i < front_end_buffer_ptr\->record_count; i++) { .br printf ("FrontEndName=%s StateCode=%u\\n", .br front_end_buffer_ptr\->front_end_array[i].name, .br front_end_buffer_ptr\->front_end_array[i].node_state); .br } .br slurm_free_front_end_info_msg (front_end_buffer_ptr); .br exit (0); .br } .SH "NOTES" These functions are included in the libslurm library, which must be linked to your process for use (e.g. "cc \-lslurm myprog.c"). .LP Some data structures contain index values to cross\-reference each other. If the \fIshow_flags\fP argument is not set to SHOW_ALL when getting this data, these index values will be invalid. .SH "COPYING" Copyright (C) 2010 Lawrence Livermore National Security. Produced at Lawrence Livermore National Laboratory (cf, DISCLAIMER). CODE\-OCEC\-09\-009. All rights reserved. .LP This file is part of Slurm, a resource management program. For details, see . .LP Slurm is free software; you can redistribute it and/or modify it under the terms of the GNU General Public License as published by the Free Software Foundation; either version 2 of the License, or (at your option) any later version. 
.LP Slurm is distributed in the hope that it will be useful, but WITHOUT ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU General Public License for more details. .SH "SEE ALSO" .LP \fBscontrol\fR(1), \fBslurm_get_errno\fR(3), \fBslurm_load_node\fR(3), \fBslurm_perror\fR(3), \fBslurm_strerror\fR(3) slurm-slurm-15-08-7-1/doc/man/man3/slurm_free_job_alloc_info_response_msg.3000066400000000000000000000000431265000126300265320ustar00rootroot00000000000000.so man3/slurm_free_job_info_msg.3 slurm-slurm-15-08-7-1/doc/man/man3/slurm_free_job_array_resp.3000066400000000000000000000000301265000126300240040ustar00rootroot00000000000000.so man3/slurm_resume.3 slurm-slurm-15-08-7-1/doc/man/man3/slurm_free_job_info_msg.3000066400000000000000000000316511265000126300234530ustar00rootroot00000000000000.TH "Slurm API" "3" "Slurm job information reporting functions" "April 2015" "Slurm job information reporting functions" .SH "NAME" slurm_free_job_alloc_info_response_msg, slurm_free_job_info_msg, slurm_get_end_time, slurm_get_rem_time, slurm_get_select_jobinfo, slurm_job_cpus_allocated_on_node, slurm_job_cpus_allocated_on_node_id, slurm_job_cpus_allocated_str_on_node, slurm_job_cpus_allocated_str_on_node_id, slurm_load_jobs, slurm_load_job_user, slurm_pid2jobid, slurm_print_job_info, slurm_print_job_info_msg \- Slurm job information reporting functions .LP ISLURM_GET_REM_TIME, ISLURM_GET_REM_TIME2 \- Fortran callable extensions .SH "SYNTAX" .LP #include .br #include .br #include .br #include .LP void \fBslurm_free_job_alloc_info_response_msg\fR ( .br job_alloc_info_response_msg_t *\fIjob_alloc_info_msg_ptr\fP .br ); .LP void \fBslurm_free_job_info_msg\fR ( .br job_info_msg_t *\fIjob_info_msg_ptr\fP .br ); .LP int \fBslurm_load_job\fR ( .br job_info_msg_t **\fIjob_info_msg_pptr\fP, .br uint32_t \fIjob_id\fP, .br uint16_t \fIshow_flags\fP, .br ); .LP int \fBslurm_load_job_user\fR ( .br job_info_msg_t 
**\fIjob_info_msg_pptr\fP, .br uint32_t \fIuser_id\fP, .br uint16_t \fIshow_flags\fP, .br ); .LP int \fBslurm_load_jobs\fR ( .br time_t \fIupdate_time\fP, .br job_info_msg_t **\fIjob_info_msg_pptr\fP, .br uint16_t \fIshow_flags\fP .br ); .LP int \fBslurm_notify_job\fR ( .br uint32_t \fIjob_id\fP, .br char *\fImessage\fP .br ); .LP int \fBslurm_pid2jobid\fR ( .br pid_t \fIjob_pid\fP, .br uint32_t *\fIjob_id_ptr\fP .br ); .LP int \fBslurm_get_end_time\fR ( .br uint32_t \fIjobid\fP, .br time_t *\fIend_time_ptr\fP .br ); .LP long \fBslurm_get_rem_time\fR ( .br uint32_t \fIjob_id\fP .br ); .LP void \fBslurm_print_job_info\fR ( .br FILE *\fIout_file\fP, .br job_info_t *\fIjob_ptr\fP, .br int \fIone_liner\fP .br ); .LP void \fBslurm_print_job_info_msg\fR ( .br FILE *\fIout_file\fP, .br job_info_msg_t *\fIjob_info_msg_ptr\fP, .br int \fIone_liner\fP .br ); .LP int \fBslurm_get_select_jobinfo\fR ( .br select_jobinfo_t \fIjobinfo\fP, .br enum select_data_type \fIdata_type\fP, .br void *\fIdata\fP ); .LP int \fBslurm_job_cpus_allocated_on_node_id\fR ( .br job_resources_t *\fIjob_resrcs_ptr\fP, .br int \fInode_id\fP .br ); .LP int \fBslurm_job_cpus_allocated_on_node\fR ( .br job_resources_t *\fIjob_resrcs_ptr\fP, .br const char *\fInode_name\fP .br ); .LP int \fBslurm_job_cpus_allocated_str_on_node_id\fR ( .br char *\fIcpus\fP, .br size_t \fIcpus_len\fP, .br job_resources_t *\fIjob_resrcs_ptr\fP, .br int \fInode_id\fP .br ); .LP int \fBslurm_job_cpus_allocated_str_on_node\fR ( .br char *\fIcpus\fP, .br size_t \fIcpus_len\fP, .br job_resources_t *\fIjob_resrcs_ptr\fP, .br const char *\fInode_name\fP .br ); .SH "FORTRAN EXTENSION" .LP INTEGER*4 JOBID, REM_TIME .br REM_TIME = ISLURM_GET_REM_TIME(JOBID) .br REM_TIME = ISLURM_GET_REM_TIME2() .LP ISLURM_GET_REM_TIME2() is equivalent to ISLURM_GET_REM_TIME() except that the JOBID is taken from the SLURM_JOB_ID environment variable, which is set by Slurm for tasks which it launches. 
Both functions return the number of seconds remaining before the job reaches the end of its allocated time. .SH "ARGUMENTS" .TP \fIcpus\fP Specifies a pointer to allocated memory into which the string representing the list of allocated CPUs on the node is placed. .TP \fIcpus_len\fP The size in bytes of the allocated memory space pointed to by \fIcpus\fP. .TP \fIdata_type\fP Identifies the type of data to retrieve from \fIjobinfo\fP. Note that different types of data are associated with different computer types and different configurations. .TP \fIdata\fP The data value identified with \fIdata_type\fP is returned in the location specified by \fIdata\fP. If a type of data is requested that does not exist on a particular computer type or configuration, \fBslurm_get_select_jobinfo\fR returns an error. See the slurm.h header file for identification of the data types associated with each value of \fIdata_type\fP. .TP \fIend_time_ptr\fP Specifies a pointer to a storage location into which the expected termination time of a job is placed. .TP \fIjob_info_msg_pptr\fP Specifies the double pointer to the structure to be created and filled with the time of the last job update, a record count, and detailed information about each job. Detailed job information is written to fixed sized records and includes: ID number, name, user ID, state, assigned or requested node names, indexes into the node table, etc. In the case of indexes into the node table, this is an array of integers with pairs of start and end index number into the node information records and the data is terminated with a value of \-1. See slurm.h for full details on the data structure's contents. .TP \fIjob_id\fP Specifies a slurm job id. If zero, use the SLURM_JOB_ID environment variable to get the jobid. .TP \fIjob_id_ptr\fP Specifies a pointer to a storage location into which a Slurm job id may be placed. 
.TP \fIjob_info_msg_ptr\fP Specifies the pointer to the structure created by \fBslurm_load_job\fR or \fBslurm_load_jobs\fR. .TP \fIjobinfo\fP Job\-specific information as constructed by Slurm's NodeSelect plugin. This data object is returned for each job by the \fBslurm_load_job\fR or \fBslurm_load_jobs\fR function. .TP \fIjob_pid\fP Specifies a process id of some process on the current node. .TP \fIjob_ptr\fP Specifies a pointer to a single job record from the \fIjob_info_msg_ptr\fP data structure. .TP \fIjob_resrcs_ptr\fP Pointer to a job_resources_t structure previously loaded using the function \fBslurm_load_job\fR with a \fIshow_flags\fP value of \fBSHOW_DETAIL\fP. .TP \fInode_id\fP Zero origin ID of a node allocated to a job. .TP \fInode_name\fP Name of a node allocated to a job. .TP \fIone_liner\fP Print one record per line if non\-zero. .TP \fIout_file\fP Specifies the file to print data to. .TP \fIshow_flags\fP Job filtering flags, which may be ORed. Information about jobs in partitions that are configured as hidden and partitions that the user's group is unable to utilize are not reported by default. The \fBSHOW_ALL\fP flag will cause information about jobs in all partitions to be displayed. The \fBSHOW_DETAIL\fP flag will cause detailed resource allocation information to be reported (e.g. the count of CPUs allocated to a job on each node). .TP \fIupdate_time\fP For all of the following informational calls, if update_time is equal to or greater than the last time changes were made to that information, new information is not returned. Otherwise all of the configuration, job, node, or partition records are returned. .TP \fIuser_id\fP ID of the user for whom information is requested. .SH "DESCRIPTION" .LP \fBslurm_free_job_alloc_info_response_msg\fR Free the Slurm job allocation information response message. .LP \fBslurm_free_job_info_msg\fR Release the storage generated by the \fBslurm_load_jobs\fR function. 
.LP \fBslurm_get_end_time\fR Returns the expected termination time of a specified Slurm job. The time corresponds to the exhaustion of the job\'s or partition\'s time limit. NOTE: The data is cached locally and only retrieved from the Slurm controller once per minute. .LP \fBslurm_get_rem_time\fR Returns the number of seconds remaining before the expected termination time of a specified Slurm job id. The time corresponds to the exhaustion of the job\'s or partition\'s time limit. NOTE: The data is cached locally and only retrieved from the Slurm controller once per minute. .LP \fBslurm_job_cpus_allocated_on_node\fR and \fBslurm_job_cpus_allocated_on_node_id\fR return the number of CPUs allocated to a job on a specific node of the job's allocation. .LP \fBslurm_job_cpus_allocated_str_on_node\fR and \fBslurm_job_cpus_allocated_str_on_node_id\fR return a string representing the list of CPUs allocated to a job on a specific node of the job's allocation. .LP \fBslurm_load_job\fR Returns a job_info_msg_t that contains an update time, record count, and array of job_table records for some specific job ID. .LP \fBslurm_load_jobs\fR Returns a job_info_msg_t that contains an update time, record count, and array of job_table records for all jobs. .LP \fBslurm_load_job_user\fR Returns a job_info_msg_t that contains an update time, record count, and array of job_table records for all jobs associated with a specific user ID. It issues an RPC to get information about all jobs run as the specified user. .LP \fBslurm_notify_job\fR Sends the specified message to the standard output of the specified job ID. .LP \fBslurm_pid2jobid\fR Returns a Slurm job id corresponding to the supplied local process id. This only works for processes which Slurm spawns and their descendants. .LP \fBslurm_print_job_info\fR Prints the contents of the data structure describing a single job record from the data loaded by the \fBslurm_load_jobs\fR function. 
.LP \fBslurm_print_job_info_msg\fR Prints the contents of the data structure describing all job records loaded by the \fBslurm_load_jobs\fR function. .SH "RETURN VALUE" .LP For \fBslurm_get_rem_time\fR a number of seconds is returned on success. For all other functions zero is returned on success. On error, \-1 is returned, and the Slurm error code is set appropriately. .SH "ERRORS" .LP \fBSLURM_NO_CHANGE_IN_DATA\fR Data has not changed since \fBupdate_time\fR. .LP \fBSLURM_PROTOCOL_VERSION_ERROR\fR Protocol version has changed, re\-link your code. .LP \fBESLURM_INVALID_JOB_ID\fR Request for information about a non\-existent job. .LP \fBSLURM_PROTOCOL_SOCKET_IMPL_TIMEOUT\fR Timeout in communicating with Slurm controller. .LP \fBEINVAL\fR Invalid function argument. .SH "EXAMPLE" .LP #include .br #include .br #include .br #include .br #include .LP int main (int argc, char *argv[]) .br { .br int i; .br job_info_msg_t * job_buffer_ptr = NULL; .br job_info_t * job_ptr; .br uint32_t job_id; .LP /* get and dump some job information */ .br if ( slurm_load_jobs ((time_t) NULL, .br &job_buffer_ptr, SHOW_ALL) ) { .br slurm_perror ("slurm_load_jobs error"); .br exit (1); .br } .LP /* The easy way to print... */ .br slurm_print_job_info_msg (stdout, job_buffer_ptr, 0); .LP /* A harder way.. */ .br for (i = 0; i < job_buffer_ptr\->record_count; i++) { .br job_ptr = &job_buffer_ptr\->job_array[i]; .br slurm_print_job_info(stdout, job_ptr, 1); .br } .LP /* The hardest way. 
*/ .br printf ("Jobs updated at %lx, record count %d\\n", .br job_buffer_ptr\->last_update, .br job_buffer_ptr\->record_count); .br for (i = 0; i < job_buffer_ptr\->record_count; i++) { .br printf ("JobId=%u UserId=%u\\n", .br job_buffer_ptr\->job_array[i].job_id, .br job_buffer_ptr\->job_array[i].user_id); .br } .LP if (job_buffer_ptr\->record_count >= 1) { .br uint16_t nodes; .br if (slurm_get_select_jobinfo( .br job_buffer_ptr\->job_array[0].select_jobinfo, .br SELECT_JOBDATA_NODE_CNT, .br &nodes) == SLURM_SUCCESS) .br printf("JobId=%u Nodes=%u\\n", .br job_buffer_ptr\->job_array[0].job_id, .br nodes); .br } .LP slurm_free_job_info_msg (job_buffer_ptr); .LP if (slurm_pid2jobid (getpid(), &job_id)) .br slurm_perror ("slurm_pid2jobid error"); .br else .br printf ("Slurm job id = %u\\n", job_id); .LP exit (0); .br } .SH "NOTES" These functions are included in the libslurm library, which must be linked to your process for use (e.g. "cc \-lslurm myprog.c"). .LP The \fIcommand\fR field in the job record will be the name of the user program to be launched by the srun or sbatch command. The field is not set when either the salloc command is used or the sbatch command is used with the \-\-wrap option. .LP Some data structures contain index values to cross\-reference each other. If the \fIshow_flags\fP argument is not set to SHOW_ALL when getting this data, these index values will be invalid. .LP The \fBslurm_hostlist_\fR functions can be used to convert Slurm node list expressions into a collection of individual node names. .SH "COPYING" Copyright (C) 2002\-2006 The Regents of the University of California. Copyright (C) 2008\-2010 Lawrence Livermore National Security. Produced at Lawrence Livermore National Laboratory (cf, DISCLAIMER). CODE\-OCEC\-09\-009. All rights reserved. .LP This file is part of Slurm, a resource management program. For details, see . 
.LP Slurm is free software; you can redistribute it and/or modify it under the terms of the GNU General Public License as published by the Free Software Foundation; either version 2 of the License, or (at your option) any later version. .LP Slurm is distributed in the hope that it will be useful, but WITHOUT ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU General Public License for more details. .SH "SEE ALSO" .LP \fBscontrol\fR(1), \fBsqueue\fR(1), \fBslurm_hostlist_create\fR(3), \fBslurm_hostlist_shift\fR(3), \fBslurm_hostlist_destroy\fR(3), \fBslurm_allocation_lookup\fR(3), \fBslurm_get_errno\fR(3), \fBslurm_perror\fR(3), \fBslurm_strerror\fR(3) slurm-slurm-15-08-7-1/doc/man/man3/slurm_free_job_step_create_response_msg.3000066400000000000000000000000411265000126300267210ustar00rootroot00000000000000.so man3/slurm_job_step_create.3 slurm-slurm-15-08-7-1/doc/man/man3/slurm_free_job_step_info_response_msg.3000066400000000000000000000144241265000126300264230ustar00rootroot00000000000000.TH "Slurm API" "3" "Slurm job step information functions" "April 2015" "Slurm job step information functions" .SH "NAME" slurm_free_job_step_info_response_msg, slurm_get_job_steps, slurm_print_job_step_info, slurm_print_job_step_info_msg \- Slurm job step information reporting functions .SH "SYNTAX" .LP #include .br #include .LP void \fBslurm_free_job_step_info_response_msg\fR ( .br job_step_info_response_msg_t *\fIjob_step_info_msg_ptr\fP .br ); .LP int \fBslurm_get_job_steps\fR ( .br time_t \fIupdate_time\fP, .br uint32_t \fIjob_id\fP, .br uint32_t \fIstep_id\fP, .br job_step_info_response_msg_t **\fIjob_step_info_msg_pptr\fP, .br uint16_t \fIshow_flags\fP .br ); .LP void \fBslurm_print_job_step_info\fR ( .br FILE *\fIout_file\fP, .br job_step_info_t *\fIjob_step_ptr\fP, .br int \fIone_liner\fP .br ); .LP void \fBslurm_print_job_step_info_msg\fR ( .br FILE *\fIout_file\fP, .br job_step_info_response_msg_t 
*\fIjob_step_info_msg_ptr\fP, .br int \fIone_liner\fP .br ); .SH "ARGUMENTS" .LP .TP \fIjob_id\fP Specifies a Slurm job ID. A value of zero implies all jobs. .TP \fIjob_step_info_msg_pptr\fP Specifies the double pointer to the structure to be created and filled with the time of the last job step update, a record count, and detailed information about each job step specified. Detailed job step information is written to fixed sized records and includes: job_id, step_id, node names, etc. See slurm.h for full details on the data structure's contents. .TP \fIjob_step_info_msg_ptr\fP Specifies the pointer to the structure created by the function \fBslurm_get_job_steps\fP. .TP \fIjob_step_ptr\fP Specifies a pointer to a single job step record from the \fIjob_step_info_msg_pptr\fP data structure. .TP \fIone_liner\fP Print one record per line if non\-zero. .TP \fIout_file\fP Specifies the file to print data to. .TP \fIshow_flags\fP Job filtering flags, may be ORed. Information about job steps in partitions that are configured as hidden and partitions that the user's group is unable to utilize are not reported by default. The \fBSHOW_ALL\fP flag will cause information about job steps in all partitions to be displayed. .TP \fIstep_id\fP Specifies a Slurm job step ID. A value of zero implies all job steps. .TP \fIupdate_time\fP For all of the following informational calls, if update_time is equal to or greater than the last time changes were made to that information, new information is not returned. Otherwise all of the configuration, job, node, or partition records are returned. .SH "DESCRIPTION" .LP \fBslurm_free_job_step_info_response_msg\fR Release the storage generated by the \fBslurm_get_job_steps\fR function. .LP \fBslurm_get_job_steps\fR Loads details about job steps that satisfy the \fIjob_id\fP and/or \fIstep_id\fP specifications provided if the data has been updated since the \fIupdate_time\fP specified. 
.LP \fBslurm_print_job_step_info\fR Prints the contents of the data structure describing a single job step record from the data loaded by the \fBslurm_get_job_steps\fR function. .LP \fBslurm_print_job_step_info_msg\fR Prints the contents of the data structure describing all job step records loaded by the \fBslurm_get_job_steps\fR function. .SH "RETURN VALUE" .LP On success, zero is returned. On error, \-1 is returned, and Slurm error code is set appropriately. .SH "ERRORS" .LP \fBSLURM_NO_CHANGE_IN_DATA\fR Data has not changed since \fBupdate_time\fR. .LP \fBSLURM_PROTOCOL_VERSION_ERROR\fR Protocol version has changed, re\-link your code. .LP \fBSLURM_PROTOCOL_SOCKET_IMPL_TIMEOUT\fR Timeout in communicating with Slurm controller. .SH "EXAMPLE" .LP #include .br #include .br #include .br #include .LP int main (int argc, char *argv[]) .br { .br int i; .br job_step_info_response_msg_t * step_info_ptr = NULL; .br job_step_info_t * step_ptr; .LP /* get and dump some job step information */ .br if ( slurm_get_job_steps ((time_t) NULL, 0, 0, .br &step_info_ptr, SHOW_ALL) ) { .br slurm_perror ("slurm_get_job_steps error"); .br exit (1); .br } .LP /* The easy way to print... */ .br slurm_print_job_step_info_msg (stdout, .br step_info_ptr, 0); .LP /* A harder way.. */ .br for (i = 0; i < step_info_ptr\->job_step_count; i++) { .br step_ptr = &step_info_ptr\->job_steps[i]; .br slurm_print_job_step_info(stdout, step_ptr, 0); .br } .LP /* The hardest way. */ .br printf ("Steps updated at %lx, record count %d\\n", .br step_info_ptr\->last_update, .br step_info_ptr\->job_step_count); .br for (i = 0; i < step_info_ptr\->job_step_count; i++) { .br printf ("JobId=%u StepId=%u\\n", .br step_info_ptr\->job_steps[i].job_id, .br step_info_ptr\->job_steps[i].step_id); .br } .LP slurm_free_job_step_info_response_msg(step_info_ptr); .br exit (0); .br } .SH "NOTES" These functions are included in the libslurm library, which must be linked to your process for use (e.g. "cc \-lslurm myprog.c"). 
.LP Some data structures contain index values to cross\-reference each other. If the \fIshow_flags\fP argument is not set to SHOW_ALL when getting this data, these index values will be invalid. .LP The \fBslurm_hostlist_\fR functions can be used to convert Slurm node list expressions into a collection of individual node names. .SH "COPYING" Copyright (C) 2002\-2006 The Regents of the University of California. Produced at Lawrence Livermore National Laboratory (cf, DISCLAIMER). CODE\-OCEC\-09\-009. All rights reserved. .LP This file is part of Slurm, a resource management program. For details, see . .LP Slurm is free software; you can redistribute it and/or modify it under the terms of the GNU General Public License as published by the Free Software Foundation; either version 2 of the License, or (at your option) any later version. .LP Slurm is distributed in the hope that it will be useful, but WITHOUT ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU General Public License for more details. 
.SH "SEE ALSO" .LP \fBscontrol\fR(1), \fBsqueue\fR(1), \fBslurm_hostlist_create\fR(3), \fBslurm_hostlist_shift\fR(3), \fBslurm_hostlist_destroy\fR(3), \fBslurm_get_errno\fR(3), \fBslurm_load_jobs\fR(3), \fBslurm_perror\fR(3), \fBslurm_strerror\fR(3) slurm-slurm-15-08-7-1/doc/man/man3/slurm_free_node_info.3000066400000000000000000000163701265000126300227610ustar00rootroot00000000000000.TH "Slurm API" "3" "Slurm node informational functions" "April 2015" "Slurm node informational functions" .SH "NAME" slurm_free_node_info_msg, slurm_load_node, slurm_load_node_single, slurm_print_node_info_msg, slurm_print_node_table, slurm_sprint_node_table \- Slurm node information reporting functions .SH "SYNTAX" .LP #include .br #include .LP void \fBslurm_free_node_info_msg\fR ( .br node_info_msg_t *\fInode_info_msg_ptr\fP .br ); .LP int \fBslurm_load_node\fR ( .br time_t \fIupdate_time\fP, .br node_info_msg_t **\fInode_info_msg_pptr\fP, .br uint16_t \fIshow_flags\fP .br ); .LP int \fBslurm_load_node_single\fR ( .br node_info_msg_t **\fInode_info_msg_pptr\fP, .br char *\fInode_name\fP, .br uint16_t \fIshow_flags\fP .br ); .LP void \fBslurm_print_node_info_msg\fR ( .br FILE *\fIout_file\fp, .br node_info_msg_t *\fInode_info_msg_ptr\fP, .br int \fIone_liner\fP .br ); .LP void \fBslurm_print_node_table\fR ( .br FILE *\fIout_file\fp, .br node_info_t *\fInode_ptr\fP, .br int \fInode_scaling\fP .br int \fIone_liner\fP .br ); .LP char *\fBslurm_sprint_node_table\fR ( .br node_info_t *\fInode_ptr\fP, .br int \fInode_scaling\fP .br int \fIone_liner\fP .br ); .SH "ARGUMENTS" .LP .TP \fInode_info_msg_ptr\fP Specifies the pointer to the structure created by \fBslurm_load_node\fR. .TP \fInode_info_msg_pptr\fP Specifies the double pointer to the structure to be created and filled with the time of the last node update, a record count, and detailed information about each node. Detailed node information is written to fixed sized records and includes: name, state, processor count, memory size, etc. 
See slurm.h for full details on the data structure's contents. .TP \fInode_name\fP Name of the node for which information is requested. .TP \fInode_ptr\fP Specifies a pointer to a single node record from the \fInode_info_msg_ptr\fP data structure. .TP \fInode_scaling\fP Number of nodes each node record represents; the default is 1. .TP \fIone_liner\fP Print one record per line if non\-zero. .TP \fIout_file\fP Specifies the file to print data to. .TP \fIshow_flags\fP Job filtering flags, may be ORed. Information about nodes in partitions that are configured as hidden and partitions that the user's group is unable to utilize are not reported by default. The \fBSHOW_ALL\fP flag will cause information about nodes in all partitions to be displayed. .TP \fIupdate_time\fP For all of the following informational calls, if update_time is equal to or greater than the last time changes were made to that information, new information is not returned. Otherwise all of the configuration, job, node, or partition records are returned. .SH "DESCRIPTION" .LP \fBslurm_free_node_info_msg\fR Release the storage generated by the \fBslurm_load_node\fR function. .LP \fBslurm_load_node_single\fR Issues an RPC to get Slurm configuration information for a specific node. .LP \fBslurm_load_node\fR Returns a \fInode_info_msg_t\fP that contains an update time, record count, and array of node_table records for all nodes. Note that nodes which are hidden for any reason will have a NULL node name. Other fields associated with the node will be filled in appropriately. Reasons for a node being hidden include: a node state of FUTURE, a node in the CLOUD that is powered down, or a node in a hidden partition. .LP \fBslurm_print_node_info_msg\fR Prints the contents of the data structure describing all node records from the data loaded by the \fBslurm_load_node\fR function. .LP \fBslurm_print_node_table\fR Prints the contents of the data structure describing a single node record loaded by the \fBslurm_load_node\fR function. 
.SH "RETURN VALUE" .LP On success, zero is returned. On error, \-1 is returned, and Slurm error code is set appropriately. .SH "ERRORS" .LP \fBSLURM_NO_CHANGE_IN_DATA\fR Data has not changed since \fBupdate_time\fR. .LP \fBSLURM_PROTOCOL_VERSION_ERROR\fR Protocol version has changed, re\-link your code. .LP \fBSLURM_PROTOCOL_SOCKET_IMPL_TIMEOUT\fR Timeout in communicating with Slurm controller. .SH "EXAMPLE" .LP #include .br #include .br #include .LP int main (int argc, char *argv[]) .br { .br int i, j, k; .br partition_info_msg_t *part_buffer_ptr = NULL; .br partition_info_t *part_ptr; .br node_info_msg_t *node_buffer_ptr = NULL; .br node_info_t *node_ptr; .LP /* get and dump some node information */ .br if ( slurm_load_node ((time_t) NULL, .br &node_buffer_ptr, SHOW_ALL) ) { .br slurm_perror ("slurm_load_node error"); .br exit (1); .br } .LP /* The easy way to print... */ .br slurm_print_node_info_msg (stdout, node_buffer_ptr, 0); .LP /* A harder way.. */ .br for (i = 0; i < node_buffer_ptr\->record_count; i++) { .br node_ptr = &node_buffer_ptr\->node_array[i]; .br slurm_print_node_table(stdout, node_ptr, 1, 0); .br } .LP /* The hardest way. */ .br for (i = 0; i < node_buffer_ptr\->record_count; i++) { .br printf ("NodeName=%s CPUs=%u\\n", .br node_buffer_ptr\->node_array[i].name, .br node_buffer_ptr\->node_array[i].cpus); .br } .LP /* get and dump some partition information */ .br /* note that we use the node information loaded */ .br /* above and we assume the node table entries have */ .br /* not changed since */ .br if ( slurm_load_partitions ((time_t) NULL, .br &part_buffer_ptr, SHOW_ALL) ) { .br slurm_perror ("slurm_load_partitions error"); .br exit (1); .br } .br for (i = 0; i < part_buffer_ptr\->record_count; i++) { .br part_ptr = &part_buffer_ptr\->partition_array[i]; .br printf ("PartitionName=%s Nodes=", .br part_ptr\->name); .br for (j = 0; part_ptr\->node_inx; j+=2) { .br if (part_ptr\->node_inx[j] == \-1) .br break; .br for (k = part_ptr\->node_inx[j]; .br k <= part_ptr\->node_inx[j+1]; .br k++) { .br printf ("%s ", node_buffer_ptr\-> .br node_array[k].name); .br } .br } .br printf("\\n\\n"); .br } .br slurm_free_node_info_msg (node_buffer_ptr); .br slurm_free_partition_info_msg (part_buffer_ptr); .br exit (0); .br } .SH "NOTES" These functions are included in the libslurm library, which must be linked to your process for use (e.g. "cc \-lslurm myprog.c"). .LP Some data structures contain index values to cross\-reference each other. If the \fIshow_flags\fP argument is not set to SHOW_ALL when getting this data, these index values will be invalid. .SH "COPYING" Copyright (C) 2002\-2006 The Regents of the University of California. Produced at Lawrence Livermore National Laboratory (cf, DISCLAIMER). CODE\-OCEC\-09\-009. All rights reserved. .LP This file is part of Slurm, a resource management program. For details, see . .LP Slurm is free software; you can redistribute it and/or modify it under the terms of the GNU General Public License as published by the Free Software Foundation; either version 2 of the License, or (at your option) any later version. 
.LP Slurm is distributed in the hope that it will be useful, but WITHOUT ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU General Public License for more details. .SH "SEE ALSO" .LP \fBscontrol\fR(1), \fBsqueue\fR(1), \fBslurm_allocation_lookup\fR(3), \fBslurm_get_errno\fR(3), \fBslurm_load_partitions\fR(3), \fBslurm_perror\fR(3), \fBslurm_strerror\fR(3) slurm-slurm-15-08-7-1/doc/man/man3/slurm_free_node_info_msg.3000066400000000000000000000000401265000126300236120ustar00rootroot00000000000000.so man3/slurm_free_node_info.3 slurm-slurm-15-08-7-1/doc/man/man3/slurm_free_partition_info.3000066400000000000000000000141301265000126300240350ustar00rootroot00000000000000.TH "Slurm API" "3" "Slurm partition information functions" "April 2015" "Slurm partition information functions" .SH "NAME" slurm_free_partition_info_msg, slurm_load_partitions, slurm_print_partition_info, slurm_print_partition_info_msg \- Slurm partition information reporting functions .SH "SYNTAX" .LP #include .br #include .LP void \fBslurm_free_partition_info_msg\fR ( .br partition_info_msg_t *\fIpartition_info_msg_ptr\fP .br ); .LP int \fBslurm_load_partitions\fR ( .br time_t \fIupdate_time\fR, .br partition_info_msg_t **\fIpartition_info_msg_pptr\fP, .br uint16_t \fIshow_flags\fP .br ); .LP void \fBslurm_print_partition_info\fR ( .br FILE *\fIout_file\fP, .br partition_info_t *\fIpartition_ptr\fP, .br int \fIone_liner\fP .br ); .LP void \fBslurm_print_partition_info_msg\fR ( .br FILE *\fIout_file\fP, .br partition_info_msg_t *\fIpartition_info_msg_ptr\fP, .br int \fIone_liner\fP .br ); .SH "ARGUMENTS" .LP .TP \fIone_liner\fP Print one record per line if non\-zero. .TP \fIout_file\fP Specifies the file to print data to. .TP \fIpartition_info_msg_pptr\fP Specifies the double pointer to the structure to be created and filled with the time of the last partition update, a record count, and detailed information about each partition. 
Detailed partition information is written to fixed sized records and includes: name, state, job time limit, job size limit, node names, indexes into the node table, etc. In the case of indexes into the node table, this is an array of integers with pairs of start and end index numbers into the node information records, and the data is terminated with a value of \-1. See slurm.h for full details on the data structure's contents. .TP \fIpartition_info_msg_ptr\fP Specifies the pointer to the structure created by \fBslurm_load_partitions\fP. .TP \fIshow_flags\fP Job filtering flags, may be ORed. Information about partitions that are configured as hidden and partitions that the user's group is unable to utilize are not reported by default. The \fBSHOW_ALL\fP flag will cause information about all partitions to be displayed. .TP \fIupdate_time\fP For all of the following informational calls, if update_time is equal to or greater than the last time changes were made to that information, new information is not returned. Otherwise all of the configuration, job, node, or partition records are returned. .SH "DESCRIPTION" .LP \fBslurm_free_partition_info_msg\fR Release the storage generated by the \fBslurm_load_partitions\fR function. .LP \fBslurm_load_partitions\fR Returns a partition_info_msg_t that contains an update time, record count, and array of partition_table records for all partitions. .LP \fBslurm_print_partition_info\fR Prints the contents of the data structure describing a single partition record from the data loaded by the \fBslurm_load_partitions\fR function. .LP \fBslurm_print_partition_info_msg\fR Prints the contents of the data structure describing all partition records loaded by the \fBslurm_load_partitions\fR function. .SH "RETURN VALUE" .LP On success, zero is returned. On error, \-1 is returned, and Slurm error code is set appropriately. .SH "ERRORS" .LP \fBSLURM_NO_CHANGE_IN_DATA\fR Data has not changed since \fBupdate_time\fR. 
.LP \fBSLURM_PROTOCOL_VERSION_ERROR\fR Protocol version has changed, re\-link your code. .LP \fBSLURM_PROTOCOL_SOCKET_IMPL_TIMEOUT\fR Timeout in communicating with Slurm controller. .SH "EXAMPLE" .LP #include .br #include .br #include .br #include .LP int main (int argc, char *argv[]) .br { .br int i; .br partition_info_msg_t *part_info_ptr = NULL; .br partition_info_t *part_ptr; .LP /* get and dump some partition information */ .br if (slurm_load_partitions((time_t)NULL, .br &part_info_ptr, SHOW_ALL)) { .br slurm_perror ("slurm_load_partitions error"); .br exit (1); .br } .LP /* The easy way to print... */ .br slurm_print_partition_info_msg (stdout, .br part_info_ptr, 0); .LP /* A harder way.. */ .br for (i = 0; i < part_info_ptr\->record_count; i++) { .br part_ptr = &part_info_ptr\->partition_array[i]; .br slurm_print_partition_info(stdout, part_ptr, 0); .br } .LP /* The hardest way. */ .br printf("Partitions updated at %lx, records=%d\\n", .br part_info_ptr\->last_update, .br part_info_ptr\->record_count); .br for (i = 0; i < part_info_ptr\->record_count; i++) { .br printf ("PartitionName=%s Nodes=%s\\n", .br part_info_ptr\->partition_array[i].name, .br part_info_ptr\->partition_array[i].nodes ); .br } .LP slurm_free_partition_info_msg (part_info_ptr); .br exit (0); .br } .SH "NOTES" These functions are included in the libslurm library, which must be linked to your process for use (e.g. "cc \-lslurm myprog.c"). .LP Some data structures contain index values to cross\-reference each other. If the \fIshow_flags\fP argument is not set to SHOW_ALL when getting this data, these index values will be invalid. .LP The \fBslurm_hostlist_\fR functions can be used to convert Slurm node list expressions into a collection of individual node names. .SH "COPYING" Copyright (C) 2002\-2006 The Regents of the University of California. Produced at Lawrence Livermore National Laboratory (cf, DISCLAIMER). CODE\-OCEC\-09\-009. All rights reserved. 
.LP This file is part of Slurm, a resource management program. For details, see . .LP Slurm is free software; you can redistribute it and/or modify it under the terms of the GNU General Public License as published by the Free Software Foundation; either version 2 of the License, or (at your option) any later version. .LP Slurm is distributed in the hope that it will be useful, but WITHOUT ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU General Public License for more details. .SH "SEE ALSO" .LP \fBscontrol\fR(1), \fBsinfo\fR(1), \fBsqueue\fR(1), \fBslurm_hostlist_create\fR(3), \fBslurm_hostlist_shift\fR(3), \fBslurm_hostlist_destroy\fR(3), \fBslurm_get_errno\fR(3), \fBslurm_load_node\fR(3), \fBslurm_perror\fR(3), \fBslurm_strerror\fR(3) slurm-slurm-15-08-7-1/doc/man/man3/slurm_free_partition_info_msg.3000066400000000000000000000000451265000126300247030ustar00rootroot00000000000000.so man3/slurm_free_partition_info.3 slurm-slurm-15-08-7-1/doc/man/man3/slurm_free_reservation_info_msg.3000066400000000000000000000000441265000126300252320ustar00rootroot00000000000000.so man3/slurm_load_reservations.3 slurm-slurm-15-08-7-1/doc/man/man3/slurm_free_resource_allocation_response_msg.3000066400000000000000000000000441265000126300276300ustar00rootroot00000000000000.so man3/slurm_allocate_resources.3 slurm-slurm-15-08-7-1/doc/man/man3/slurm_free_slurmd_status.3000066400000000000000000000000371265000126300237230ustar00rootroot00000000000000.so man3/slurm_slurmd_status.3 slurm-slurm-15-08-7-1/doc/man/man3/slurm_free_submit_response_response_msg.3000066400000000000000000000000441265000126300270150ustar00rootroot00000000000000.so man3/slurm_allocate_resources.3 slurm-slurm-15-08-7-1/doc/man/man3/slurm_free_trigger_msg.3000066400000000000000000000000371265000126300233230ustar00rootroot00000000000000.so man3/slurm_clear_trigger.3 
slurm-slurm-15-08-7-1/doc/man/man3/slurm_get_checkpoint_file_path.3000066400000000000000000000000421265000126300250060ustar00rootroot00000000000000.so man3/slurm_checkpoint_error.3 slurm-slurm-15-08-7-1/doc/man/man3/slurm_get_end_time.3000066400000000000000000000000431265000126300224310ustar00rootroot00000000000000.so man3/slurm_free_job_info_msg.3 slurm-slurm-15-08-7-1/doc/man/man3/slurm_get_errno.3000066400000000000000000000070051265000126300217770ustar00rootroot00000000000000.TH "Slurm API" "3" "Slurm error handling functions" "April 2015" "Slurm error handling functions" .SH "NAME" slurm_get_errno, slurm_perror, slurm_strerror \- Slurm error handling functions .SH "SYNTAX" .LP #include .LP int \fBslurm_get_errno\fR ( ); .LP void \fBslurm_perror\fR ( .br char *\fIheader\fP .br ); .LP char * \fBslurm_strerror\fR ( .br int \fIerrnum\fP .br ); .SH "ARGUMENTS" .LP .TP \fIerrnum\fP A Slurm error code. .TP \fIheader\fP A pointer to a string used as a message header for printing along with an error description. .SH "DESCRIPTION" .LP \fBslurm_get_errno\fR Return the error code as set by the Slurm API function executed. .LP \fBslurm_perror\fR Print to standard error the supplied header followed by a colon followed by a text description of the last Slurm error code generated. .LP \fBslurm_strerror\fR Given a Slurm error code, return a pointer to a text description of the error's meaning. .SH "RETURN VALUE" .LP \fBslurm_get_errno\fR returns an error code or zero if no error was generated by the last Slurm function call executed. \fBslurm_strerror\fR returns a pointer to a text string, which is empty if no error was generated by the last Slurm function call executed. 
.SH "EXAMPLE" .LP #include .br #include .br #include .LP int main (int argc, char *argv[]) .br { .br /* assume Slurm API function failed here */ .br fprintf (stderr, "Slurm function errno = %d\\n", .br slurm_get_errno ()); .br fprintf (stderr, "Slurm function errno = %d %s\\n", .br slurm_get_errno (), .br slurm_strerror (slurm_get_errno ())); .br slurm_perror ("Slurm function"); .br exit (1); .br } .SH "NOTE" These functions are included in the libslurm library, which must be linked to your process for use (e.g. "cc \-lslurm myprog.c"). .SH "COPYING" Copyright (C) 2002 The Regents of the University of California. Produced at Lawrence Livermore National Laboratory (cf, DISCLAIMER). CODE\-OCEC\-09\-009. All rights reserved. .LP This file is part of Slurm, a resource management program. For details, see . .LP Slurm is free software; you can redistribute it and/or modify it under the terms of the GNU General Public License as published by the Free Software Foundation; either version 2 of the License, or (at your option) any later version. .LP Slurm is distributed in the hope that it will be useful, but WITHOUT ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU General Public License for more details. 
.SH "SEE ALSO" .LP \fBslurm_allocate_resources\fR(3), \fBslurm_complete_job\fR(3), \fBslurm_complete_job_step\fR(3), \fBslurm_allocation_lookup\fR(3), \fBslurm_free_ctl_conf\fR(3), \fBslurm_free_job_info_msg\fR(3), \fBslurm_free_job_step_create_response_msg\fR(3), \fBslurm_free_node_info\fR(3), \fBslurm_free_partition_info\fR(3), \fBslurm_free_resource_allocation_response_msg\fR(3), \fBslurm_free_submit_response_response_msg\fR(3), \fBslurm_get_job_steps\fR(3), \fBslurm_init_job_desc_msg\fR(3), \fBslurm_init_part_desc_msg\fR(3), \fBslurm_job_step_create\fR(3), \fBslurm_job_will_run\fR(3), \fBslurm_kill_job\fR(3), \fBslurm_kill_job_step\fR(3), \fBslurm_load_ctl_conf\fR(3), \fBslurm_load_jobs\fR(3), \fBslurm_load_node\fR(3), \fBslurm_load_partitions\fR(3), \fBslurm_pid2jobid\fR(3), \fBslurm_reconfigure\fR(3), \fBslurm_shutdown\fR(3), \fBslurm_submit_batch_job\fR(3), \fBslurm_update_job\fR(3), \fBslurm_update_node\fR(3), \fBslurm_update_partition\fR(3) slurm-slurm-15-08-7-1/doc/man/man3/slurm_get_job_steps.3000066400000000000000000000000611265000126300226350ustar00rootroot00000000000000.so man3/slurm_free_job_step_info_response_msg.3 slurm-slurm-15-08-7-1/doc/man/man3/slurm_get_rem_time.3000066400000000000000000000000431265000126300224460ustar00rootroot00000000000000.so man3/slurm_free_job_info_msg.3 slurm-slurm-15-08-7-1/doc/man/man3/slurm_get_select_jobinfo.3000066400000000000000000000000431265000126300236320ustar00rootroot00000000000000.so man3/slurm_free_job_info_msg.3 slurm-slurm-15-08-7-1/doc/man/man3/slurm_get_triggers.3000066400000000000000000000000371265000126300224760ustar00rootroot00000000000000.so man3/slurm_clear_trigger.3 slurm-slurm-15-08-7-1/doc/man/man3/slurm_hostlist_create.3000066400000000000000000000063051265000126300232110ustar00rootroot00000000000000.TH "Slurm API" "3" "Slurm host list functions" "April 2015" "Slurm host list functions" .SH "NAME" slurm_hostlist_create, slurm_hostlist_shift, slurm_hostlist_destroy \- Slurm host list support 
functions .SH "SYNTAX" .LP #include .LP .LP hostlist_t \fBslurm_hostlist_create\fR ( .br char *\fInode_list\fP .br ); .LP char * \fBslurm_hostlist_shift\fR ( .br hostlist_t \fIhost_list\fP .br ); .LP void \fBslurm_hostlist_destroy\fR ( .br hostlist_t \fIhost_list\fP .br ); .SH "ARGUMENTS" .LP .TP \fInode_list\fP A list of nodes as returned by the \fBslurm_job_step_create\fR functions. The returned value may include a simple range format to describe numeric ranges of values and/or multiple numeric values (e.g. "linux[1\-3,6]" represents "linux1", "linux2", "linux3", and "linux6"). .TP \fIhost_list\fP A hostlist created by the \fBslurm_hostlist_create\fR function. .SH "DESCRIPTION" .LP \fBslurm_hostlist_create\fR creates a database of node names from a range format describing node names. Use \fBslurm_hostlist_destroy\fR to release storage associated with the database when no longer required. .LP \fBslurm_hostlist_shift\fR extracts the first entry from the host list database created by the \fBslurm_hostlist_create\fR function. .LP \fBslurm_hostlist_destroy\fR releases storage associated with a database created by \fBslurm_hostlist_create\fR when no longer required. .SH "RETURN VALUE" .LP \fBslurm_hostlist_create\fR returns the host list database or NULL if memory can not be allocated for the database. .LP \fBslurm_hostlist_shift\fR returns a character string or NULL if no entries remain in the database. 
.SH "EXAMPLE" .LP #include .br #include .br #include .LP int main (int argc, char *argv[]) .br { .br hostlist_t my_hostlist; .br char *hostnames, *host; .LP /* generate a list of hostnames, possibly using a */ .br /* slurm job step creation function */ .LP my_hostlist = slurm_hostlist_create (hostnames); .br if (my_hostlist == NULL) { .br fprintf (stderr, "No memory\\n"); .br exit (1); .br } .LP while ( (host = slurm_hostlist_shift(my_hostlist)) ) .br printf ("host = %s\\n", host); .LP slurm_hostlist_destroy (my_hostlist) ; .br exit (0); .br } .SH "NOTE" These functions are included in the libslurm library, which must be linked to your process for use (e.g. "cc \-lslurm myprog.c"). .SH "COPYING" Copyright (C) 2002\-2006 The Regents of the University of California. Produced at Lawrence Livermore National Laboratory (cf, DISCLAIMER). CODE\-OCEC\-09\-009. All rights reserved. .LP This file is part of Slurm, a resource management program. For details, see . .LP Slurm is free software; you can redistribute it and/or modify it under the terms of the GNU General Public License as published by the Free Software Foundation; either version 2 of the License, or (at your option) any later version. .LP Slurm is distributed in the hope that it will be useful, but WITHOUT ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU General Public License for more details. 
.SH "SEE ALSO" .LP \fBslurm_get_job_steps\fR(3), \fBslurm_load_jobs\fR(3), \fBslurm_load_partitions\fB(3) slurm-slurm-15-08-7-1/doc/man/man3/slurm_hostlist_destroy.3000066400000000000000000000000411265000126300234260ustar00rootroot00000000000000.so man3/slurm_hostlist_create.3 slurm-slurm-15-08-7-1/doc/man/man3/slurm_hostlist_shift.3000066400000000000000000000000411265000126300230520ustar00rootroot00000000000000.so man3/slurm_hostlist_create.3 slurm-slurm-15-08-7-1/doc/man/man3/slurm_init_job_desc_msg.3000066400000000000000000000000441265000126300234500ustar00rootroot00000000000000.so man3/slurm_allocate_resources.3 slurm-slurm-15-08-7-1/doc/man/man3/slurm_init_part_desc_msg.3000066400000000000000000000000351265000126300236440ustar00rootroot00000000000000.so man3/slurm_reconfigure.3 slurm-slurm-15-08-7-1/doc/man/man3/slurm_init_resv_desc_msg.3000066400000000000000000000000351265000126300236550ustar00rootroot00000000000000.so man3/slurm_reconfigure.3 slurm-slurm-15-08-7-1/doc/man/man3/slurm_init_trigger_msg.3000066400000000000000000000000371265000126300233450ustar00rootroot00000000000000.so man3/slurm_clear_trigger.3 slurm-slurm-15-08-7-1/doc/man/man3/slurm_init_update_front_end_msg.3000066400000000000000000000000351265000126300252200ustar00rootroot00000000000000.so man3/slurm_reconfigure.3 slurm-slurm-15-08-7-1/doc/man/man3/slurm_init_update_node_msg.3000066400000000000000000000000351265000126300241670ustar00rootroot00000000000000.so man3/slurm_reconfigure.3 slurm-slurm-15-08-7-1/doc/man/man3/slurm_init_update_step_msg.3000066400000000000000000000000341265000126300242140ustar00rootroot00000000000000.so man3/slurm_update_job.3 slurm-slurm-15-08-7-1/doc/man/man3/slurm_job_cpus_allocated_on_node.3000066400000000000000000000000431265000126300253230ustar00rootroot00000000000000.so man3/slurm_free_job_info_msg.3 slurm-slurm-15-08-7-1/doc/man/man3/slurm_job_cpus_allocated_on_node_id.3000066400000000000000000000000431265000126300257770ustar00rootroot00000000000000.so 
man3/slurm_free_job_info_msg.3 slurm-slurm-15-08-7-1/doc/man/man3/slurm_job_cpus_allocated_str_on_node.3000066400000000000000000000000431265000126300262130ustar00rootroot00000000000000.so man3/slurm_free_job_info_msg.3 slurm-slurm-15-08-7-1/doc/man/man3/slurm_job_cpus_allocated_str_on_node_id.3000066400000000000000000000000431265000126300266670ustar00rootroot00000000000000.so man3/slurm_free_job_info_msg.3 slurm-slurm-15-08-7-1/doc/man/man3/slurm_job_step_create.3000066400000000000000000000071541265000126300231500ustar00rootroot00000000000000.TH "Slurm API" "3" "Slurm job step initiation functions" "April 2015" "Slurm job step initiation functions" .SH "NAME" slurm_free_job_step_create_response_msg, slurm_job_step_create \- Slurm job step initiation functions .SH "SYNTAX" .LP #include .LP .LP void \fBslurm_free_job_step_create_response_msg\fR ( .br job_step_create_response_msg_t *\fIslurm_step_alloc_resp_msg_ptr\fP .br ); .LP int \fBslurm_job_step_create\fR ( .br job_step_create_request_msg_t *\fIslurm_step_alloc_req_msg_ptr\fP, .br job_step_create_response_msg_t **\fIslurm_step_alloc_resp_msg_pptr\fP .br ); .SH "ARGUMENTS" .LP .TP \fIslurm_step_alloc_req_msg_ptr\fP Specifies the pointer to the structure with job step request specification. See slurm.h for full details on the data structure's contents. .TP \fIslurm_step_alloc_resp_msg_pptr\fP Specifies the double pointer to the structure to be created and filled with a description of the created job step: node allocation, credentials, etc. See slurm.h for full details on the data structure's contents. .SH "DESCRIPTION" .LP \fBslurm_free_job_step_create_response_msg\fR Release the storage generated in response to a call of the function \fBslurm_job_step_create\fR. .LP \fBslurm_job_step_create\fR Initialize a job step including the allocation of nodes to it from those already allocate to that job. 
Always release the response message when no longer required using the function \fBslurm_free_job_step_create_response_msg\fR. The list of host names returned may be matched to their data in the proper order by using the functions \fBhostlist_create\fR, \fBhostlist_shift\fR, and \fBhostlist_destroy\fR. .SH "RETURN VALUE" .LP On success, zero is returned. On error, \-1 is returned, and Slurm error code is set appropriately. .SH "ERRORS" .LP \fBSLURM_PROTOCOL_VERSION_ERROR\fR Protocol version has changed, re\-link your code. .LP \fBESLURM_INVALID_JOB_ID\fR the requested job id does not exist. .LP \fBESLURM_ALREADY_DONE\fR the specified job has already completed and can not be modified. .LP \fBESLURM_ACCESS_DENIED\fR the requesting user lacks authorization for the requested action (e.g. trying to delete or modify another user's job). .LP \fBESLURM_DISABLED\fR the ability to create a job step is currently disabled. This is indicative of the job being suspended. Retry the call as desired. .LP \fBESLURM_INTERCONNECT_FAILURE\fR failed to configure the node interconnect. .LP \fBESLURM_BAD_DIST\fR task distribution specification is invalid. .LP \fBSLURM_PROTOCOL_SOCKET_IMPL_TIMEOUT\fR Timeout in communicating with Slurm controller. .SH "NOTE" These functions are included in the libslurm library, which must be linked to your process for use (e.g. "cc \-lslurm myprog.c"). .SH "COPYING" Copyright (C) 2002-2007 The Regents of the University of California. Produced at Lawrence Livermore National Laboratory (cf, DISCLAIMER). CODE\-OCEC\-09\-009. All rights reserved. .LP This file is part of Slurm, a resource management program. For details, see . .LP Slurm is free software; you can redistribute it and/or modify it under the terms of the GNU General Public License as published by the Free Software Foundation; either version 2 of the License, or (at your option) any later version. 
.LP Slurm is distributed in the hope that it will be useful, but WITHOUT ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU General Public License for more details. .SH "SEE ALSO" .LP \fBhostlist_create\fR(3), \fBhostlist_shift\fR(3), \fBhostlist_destroy\fR(3), \fBsrun\fR(1), \fBslurm_get_errno\fR(3), \fBslurm_perror\fR(3), \fBslurm_strerror\fR(3) slurm-slurm-15-08-7-1/doc/man/man3/slurm_job_step_launch_t_init.3000066400000000000000000000000351265000126300245140ustar00rootroot00000000000000.so man3/slurm_step_launch.3 slurm-slurm-15-08-7-1/doc/man/man3/slurm_job_step_layout_free.3000066400000000000000000000000611265000126300242110ustar00rootroot00000000000000.so man3/slurm_free_job_step_info_response_msg.3 slurm-slurm-15-08-7-1/doc/man/man3/slurm_job_step_layout_get.3000066400000000000000000000000611265000126300240470ustar00rootroot00000000000000.so man3/slurm_free_job_step_info_response_msg.3 slurm-slurm-15-08-7-1/doc/man/man3/slurm_job_will_run.3000066400000000000000000000000441265000126300224740ustar00rootroot00000000000000.so man3/slurm_allocate_resources.3 slurm-slurm-15-08-7-1/doc/man/man3/slurm_job_will_run2.3000066400000000000000000000000441265000126300225560ustar00rootroot00000000000000.so man3/slurm_allocate_resources.3 slurm-slurm-15-08-7-1/doc/man/man3/slurm_jobinfo_ctx_get.3000066400000000000000000000000411265000126300231470ustar00rootroot00000000000000.so man3/slurm_step_ctx_create.3 slurm-slurm-15-08-7-1/doc/man/man3/slurm_kill_job.3000066400000000000000000000074271265000126300216100ustar00rootroot00000000000000.TH "Slurm API" "3" "Slurm job signal functions" "April 2015" "Slurm job signal functions" .SH "NAME" slurm_kill_job, slurm_kill_job_step, .br slurm_signal_job, slurm_signal_job_step, .br slurm_terminate_job_step \- Slurm job signal calls .SH "SYNTAX" .LP #include .LP int \fBslurm_kill_job\fR ( .br uint32_t \fIjob_id\fP, .br uint16_t \fIsignal\fP, .br uint16_t \fIbatch_flag\fP 
.br ); .LP int \fBslurm_kill_job_step\fR ( .br uint32_t \fIjob_id\fP, .br uint32_t \fIjob_step_id\fP, .br uint16_t \fIsignal\fP .br ); .LP int \fBslurm_signal_job\fR ( .br uint32_t \fIjob_id\fP, .br uint16_t \fIsignal\fP .br ); .LP int \fBslurm_signal_job_step\fR ( .br uint32_t \fIjob_id\fP, .br uint32_t \fIjob_step_id\fP, .br uint16_t \fIsignal\fP .br ); .LP int \fBslurm_terminate_job_step\fR ( .br uint32_t \fIjob_id\fP, .br uint32_t \fIjob_step_id\fP .br ); .SH "ARGUMENTS" .LP .TP \fIbatch_flag\fP If non\-zero then signal only the batch job shell. .TP \fIjob_id\fP Slurm job id number. .TP \fIjob_step_id\fP Slurm job step id number. .TP \fIsignal\fP Signal to be sent to the job or job step. .SH "DESCRIPTION" .LP \fBslurm_kill_job\fR Request that a signal be sent to either the batch job shell (if \fIbatch_flag\fP is non\-zero) or all steps of the specified job. If the job is pending and the signal is SIGKILL, the job will be terminated immediately. This function may only be successfully executed by the job's owner or user root. .LP \fBslurm_kill_job_step\fR Request that a signal be sent to a specific job step. This function may only be successfully executed by the job's owner or user root. .LP \fBslurm_signal_job\fR Request that the specified signal be sent to all steps of an existing job. .LP \fBslurm_signal_job_step\fR Request that the specified signal be sent to an existing job step. .LP \fBslurm_terminate_job_step\fR Request the termination of a job step by sending a REQUEST_TERMINATE_TASKS RPC to all slurmd daemons of the job step. .SH "RETURN VALUE" .LP On success, zero is returned. On error, \-1 is returned, and the Slurm error code is set appropriately. .SH "ERRORS" .LP \fBSLURM_PROTOCOL_VERSION_ERROR\fR Protocol version has changed, re\-link your code. .LP \fBESLURM_DEFAULT_PARTITION_NOT_SET\fR the system lacks a valid default partition. .LP \fBESLURM_INVALID_JOB_ID\fR the requested job id does not exist.
.LP \fBESLURM_JOB_SCRIPT_MISSING\fR the \fIbatch_flag\fP was set for a non\-batch job. .LP \fBESLURM_ALREADY_DONE\fR the specified job has already completed and can not be modified. .LP \fBESLURM_ACCESS_DENIED\fR the requesting user lacks authorization for the requested action (e.g. trying to delete or modify another user's job). .LP \fBESLURM_INTERCONNECT_FAILURE\fR failed to configure the node interconnect. .LP \fBSLURM_PROTOCOL_SOCKET_IMPL_TIMEOUT\fR Timeout in communicating with Slurm controller. .SH "NOTE" These functions are included in the libslurm library, which must be linked to your process for use (e.g. "cc \-lslurm myprog.c"). .SH "COPYING" Copyright (C) 2002 The Regents of the University of California. Produced at Lawrence Livermore National Laboratory (cf, DISCLAIMER). CODE\-OCEC\-09\-009. All rights reserved. .LP This file is part of Slurm, a resource management program. For details, see . .LP Slurm is free software; you can redistribute it and/or modify it under the terms of the GNU General Public License as published by the Free Software Foundation; either version 2 of the License, or (at your option) any later version. .LP Slurm is distributed in the hope that it will be useful, but WITHOUT ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU General Public License for more details. 
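.SH "EXAMPLE"
Unlike some of the other pages in this collection, this page carries no example, so a minimal sketch of the usual calling pattern follows. It is a sketch only: the file name is hypothetical, the job ID is assumed to arrive as the first program argument, and building it requires a system with Slurm installed (e.g. "cc kill_example.c \-lslurm").

```c
/* Sketch: ask all steps of a job to exit gracefully with SIGTERM,
 * then force termination of the whole job with SIGKILL.
 * Assumes the Slurm job ID is given as the first program argument. */
#include <signal.h>
#include <stdio.h>
#include <stdlib.h>
#include <slurm/slurm.h>
#include <slurm/slurm_errno.h>

int main(int argc, char *argv[])
{
	uint32_t job_id;

	if (argc < 2) {
		fprintf(stderr, "Usage: %s <job_id>\n", argv[0]);
		exit(1);
	}
	job_id = (uint32_t) atoi(argv[1]);

	/* Politely signal every step of the job. */
	if (slurm_signal_job(job_id, SIGTERM)) {
		slurm_perror("slurm_signal_job error");
		exit(1);
	}

	/* batch_flag = 0: signal all steps, not just the batch shell. */
	if (slurm_kill_job(job_id, SIGKILL, 0)) {
		slurm_perror("slurm_kill_job error");
		exit(1);
	}
	exit(0);
}
```

As noted under DESCRIPTION, both calls succeed only for the job's owner or user root.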
.SH "SEE ALSO" .LP \fBscancel\fR(1), \fBslurm_get_errno\fR(3), \fBslurm_perror\fR(3), \fBslurm_strerror\fR(3) slurm-slurm-15-08-7-1/doc/man/man3/slurm_kill_job_step.3000066400000000000000000000000321265000126300226240ustar00rootroot00000000000000.so man3/slurm_kill_job.3 slurm-slurm-15-08-7-1/doc/man/man3/slurm_load_ctl_conf.3000066400000000000000000000000371265000126300225770ustar00rootroot00000000000000.so man3/slurm_free_ctl_conf.3 slurm-slurm-15-08-7-1/doc/man/man3/slurm_load_front_end.3000066400000000000000000000000511265000126300227620ustar00rootroot00000000000000.so man3/slurm_free_front_end_info_msg.3 slurm-slurm-15-08-7-1/doc/man/man3/slurm_load_job.3000066400000000000000000000000431265000126300215570ustar00rootroot00000000000000.so man3/slurm_free_job_info_msg.3 slurm-slurm-15-08-7-1/doc/man/man3/slurm_load_job_user.3000066400000000000000000000000431265000126300226150ustar00rootroot00000000000000.so man3/slurm_free_job_info_msg.3 slurm-slurm-15-08-7-1/doc/man/man3/slurm_load_jobs.3000066400000000000000000000000431265000126300217420ustar00rootroot00000000000000.so man3/slurm_free_job_info_msg.3 slurm-slurm-15-08-7-1/doc/man/man3/slurm_load_node.3000066400000000000000000000000401265000126300217270ustar00rootroot00000000000000.so man3/slurm_free_node_info.3 slurm-slurm-15-08-7-1/doc/man/man3/slurm_load_node_single.3000066400000000000000000000000401265000126300232700ustar00rootroot00000000000000.so man3/slurm_free_node_info.3 slurm-slurm-15-08-7-1/doc/man/man3/slurm_load_partitions.3000066400000000000000000000000451265000126300232030ustar00rootroot00000000000000.so man3/slurm_free_partition_info.3 slurm-slurm-15-08-7-1/doc/man/man3/slurm_load_reservations.3000066400000000000000000000134161265000126300235410ustar00rootroot00000000000000.TH "Slurm API" "3" "Slurm reservation information functions" "April 2015" "Slurm reservation information functions" .SH "NAME" slurm_load_reservations, slurm_free_reservation_info_msg, slurm_print_reservation_info, 
slurm_sprint_reservation_info, slurm_print_reservation_info_msg \- Slurm reservation information reporting functions .SH "SYNTAX" .LP #include <stdio.h> .br #include <slurm/slurm.h> .LP int \fBslurm_load_reservations\fR ( .br time_t \fIupdate_time\fP, .br reserve_info_msg_t **\fIreservation_info_msg_pptr\fP .br ); .LP void \fBslurm_free_reservation_info_msg\fR ( .br reserve_info_msg_t *\fIreservation_info_msg_ptr\fP .br ); .LP void \fBslurm_print_reservation_info\fR ( .br FILE *\fIout_file\fP, .br reserve_info_t *\fIreservation_ptr\fP, .br int \fIone_liner\fP .br ); .LP char * \fBslurm_sprint_reservation_info\fR ( .br reserve_info_t *\fIreservation_ptr\fP, .br int \fIone_liner\fP .br ); .LP void \fBslurm_print_reservation_info_msg\fR ( .br FILE *\fIout_file\fP, .br reserve_info_msg_t *\fIreservation_info_msg_ptr\fP, .br int \fIone_liner\fP .br ); .SH "ARGUMENTS" .LP .TP \fIone_liner\fP Print one record per line if non\-zero. .TP \fIout_file\fP Specifies the file to print data to. .TP \fIreservation_info_msg_pptr\fP Specifies the double pointer to the structure to be created and filled with the time of the last reservation update, a record count, and detailed information about each reservation. Detailed reservation information is written to fixed\-size records and includes: reservation name, time limits, access restrictions, etc. See slurm.h for full details on the data structure's contents. .TP \fIreservation_info_msg_ptr\fP Specifies the pointer to the structure created by \fBslurm_load_reservations\fP. .TP \fIupdate_time\fP For all of the following informational calls, if update_time is equal to or greater than the last time changes were made to that information, new information is not returned. Otherwise all of the configuration, job, node, or reservation records are returned. .SH "DESCRIPTION" .LP \fBslurm_load_reservations\fR Returns a reserve_info_msg_t that contains an update time, record count, and array of reservation_table records for all reservations.
.LP \fBslurm_free_reservation_info_msg\fR Release the storage generated by the \fBslurm_load_reservations\fR function. .LP \fBslurm_print_reservation_info\fR Prints the contents of the data structure describing one of the reservation records from the data loaded by the \fBslurm_load_reservations\fR function. .LP \fBslurm_sprint_reservation_info\fR Prints the same information as \fBslurm_print_reservation_info\fR, but prints to a string that must be freed by the caller, rather than printing to a file. .LP \fBslurm_print_reservation_info_msg\fR Prints the contents of the data structure describing all reservation records loaded by the \fBslurm_load_reservations\fR function. .SH "RETURN VALUE" .LP On success, zero is returned. On error, \-1 is returned, and the Slurm error code is set appropriately. .SH "ERRORS" .LP \fBSLURM_NO_CHANGE_IN_DATA\fR Data has not changed since \fBupdate_time\fR. .LP \fBSLURM_PROTOCOL_VERSION_ERROR\fR Protocol version has changed, re\-link your code. .LP \fBSLURM_PROTOCOL_SOCKET_IMPL_TIMEOUT\fR Timeout in communicating with Slurm controller. .SH "EXAMPLE" .LP #include <stdio.h> .br #include <stdlib.h> .br #include <slurm/slurm.h> .br #include <slurm/slurm_errno.h> .LP int main (int argc, char *argv[]) .br { .br int i; .br reserve_info_msg_t *res_info_ptr = NULL; .br reserve_info_t *res_ptr; .LP /* get and dump all reservation information */ .br if (slurm_load_reservations((time_t)NULL, .br &res_info_ptr)) { .br slurm_perror ("slurm_load_reservations error"); .br exit (1); .br } .LP /* The easy way to print... */ .br slurm_print_reservation_info_msg(stdout, .br res_info_ptr, 0); .LP /* A harder way.. */ .br for (i = 0; i < res_info_ptr\->record_count; i++) { .br res_ptr = &res_info_ptr\->reservation_array[i]; .br slurm_print_reservation_info(stdout, res_ptr, 0); .br } .LP /* The hardest way.
*/ .br printf("reservations updated at %lx, records=%d\\n", .br res_info_ptr\->last_update, .br res_info_ptr\->record_count); .br for (i = 0; i < res_info_ptr\->record_count; i++) { .br printf ("reservationName=%s Nodes=%s\\n", .br res_info_ptr\->reservation_array[i].name, .br res_info_ptr\->reservation_array[i].node_list ); .br } .LP slurm_free_reservation_info_msg (res_info_ptr); .br return 0; .br } .SH "NOTES" These functions are included in the libslurm library, which must be linked to your process for use (e.g. "cc \-lslurm myprog.c"). .LP The \fBslurm_hostlist_\fR functions can be used to convert Slurm node list expressions into a collection of individual node names. .SH "COPYING" Copyright (C) 2002\-2006 The Regents of the University of California. Produced at Lawrence Livermore National Laboratory (cf, DISCLAIMER). CODE\-OCEC\-09\-009. All rights reserved. .LP This file is part of Slurm, a resource management program. For details, see . .LP Slurm is free software; you can redistribute it and/or modify it under the terms of the GNU General Public License as published by the Free Software Foundation; either version 2 of the License, or (at your option) any later version. .LP Slurm is distributed in the hope that it will be useful, but WITHOUT ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU General Public License for more details. 
.SH "SEE ALSO" .LP \fBscontrol\fR(1), \fBsinfo\fR(1), \fBsqueue\fR(1), \fBslurm_hostlist_create\fR(3), \fBslurm_hostlist_shift\fR(3), \fBslurm_hostlist_destroy\fR(3), \fBslurm_get_errno\fR(3), \fBslurm_load_node\fR(3), \fBslurm_perror\fR(3), \fBslurm_strerror\fR(3) slurm-slurm-15-08-7-1/doc/man/man3/slurm_load_slurmd_status.3000066400000000000000000000000371265000126300237210ustar00rootroot00000000000000.so man3/slurm_slurmd_status.3 slurm-slurm-15-08-7-1/doc/man/man3/slurm_notify_job.3000066400000000000000000000000431265000126300221500ustar00rootroot00000000000000.so man3/slurm_free_job_info_msg.3 slurm-slurm-15-08-7-1/doc/man/man3/slurm_perror.3000066400000000000000000000000331265000126300213160ustar00rootroot00000000000000.so man3/slurm_get_errno.3 slurm-slurm-15-08-7-1/doc/man/man3/slurm_pid2jobid.3000066400000000000000000000000431265000126300216540ustar00rootroot00000000000000.so man3/slurm_free_job_info_msg.3 slurm-slurm-15-08-7-1/doc/man/man3/slurm_ping.3000066400000000000000000000000351265000126300207440ustar00rootroot00000000000000.so man3/slurm_reconfigure.3 slurm-slurm-15-08-7-1/doc/man/man3/slurm_print_ctl_conf.3000066400000000000000000000000371265000126300230140ustar00rootroot00000000000000.so man3/slurm_free_ctl_conf.3 slurm-slurm-15-08-7-1/doc/man/man3/slurm_print_front_end_info_msg.3000066400000000000000000000000511265000126300250600ustar00rootroot00000000000000.so man3/slurm_free_front_end_info_msg.3 slurm-slurm-15-08-7-1/doc/man/man3/slurm_print_front_end_table.3000066400000000000000000000000511265000126300243460ustar00rootroot00000000000000.so man3/slurm_free_front_end_info_msg.3 slurm-slurm-15-08-7-1/doc/man/man3/slurm_print_job_info.3000066400000000000000000000000431265000126300230070ustar00rootroot00000000000000.so man3/slurm_free_job_info_msg.3 slurm-slurm-15-08-7-1/doc/man/man3/slurm_print_job_info_msg.3000066400000000000000000000000431265000126300236550ustar00rootroot00000000000000.so man3/slurm_free_job_info_msg.3 
slurm-slurm-15-08-7-1/doc/man/man3/slurm_print_job_step_info.3000066400000000000000000000000611265000126300240420ustar00rootroot00000000000000.so man3/slurm_free_job_step_info_response_msg.3 slurm-slurm-15-08-7-1/doc/man/man3/slurm_print_job_step_info_msg.3000066400000000000000000000000611265000126300247100ustar00rootroot00000000000000.so man3/slurm_free_job_step_info_response_msg.3 slurm-slurm-15-08-7-1/doc/man/man3/slurm_print_node_info_msg.3000066400000000000000000000000401265000126300240250ustar00rootroot00000000000000.so man3/slurm_free_node_info.3 slurm-slurm-15-08-7-1/doc/man/man3/slurm_print_node_table.3000066400000000000000000000000401265000126300233130ustar00rootroot00000000000000.so man3/slurm_free_node_info.3 slurm-slurm-15-08-7-1/doc/man/man3/slurm_print_partition_info.3000066400000000000000000000000451265000126300242500ustar00rootroot00000000000000.so man3/slurm_free_partition_info.3 slurm-slurm-15-08-7-1/doc/man/man3/slurm_print_partition_info_msg.3000066400000000000000000000000451265000126300251160ustar00rootroot00000000000000.so man3/slurm_free_partition_info.3 slurm-slurm-15-08-7-1/doc/man/man3/slurm_print_reservation_info.3000066400000000000000000000000441265000126300245770ustar00rootroot00000000000000.so man3/slurm_load_reservations.3 slurm-slurm-15-08-7-1/doc/man/man3/slurm_print_reservation_info_msg.3000066400000000000000000000000441265000126300254450ustar00rootroot00000000000000.so man3/slurm_load_reservations.3 slurm-slurm-15-08-7-1/doc/man/man3/slurm_print_slurmd_status.3000066400000000000000000000000371265000126300241360ustar00rootroot00000000000000.so man3/slurm_slurmd_status.3 slurm-slurm-15-08-7-1/doc/man/man3/slurm_read_hostfile.3000066400000000000000000000000441265000126300226170ustar00rootroot00000000000000.so man3/slurm_allocate_resources.3 slurm-slurm-15-08-7-1/doc/man/man3/slurm_reconfigure.3000066400000000000000000000323401265000126300223230ustar00rootroot00000000000000.TH "Slurm API" "3" "Slurm administrative functions" "April 
2015" "Slurm administrative functions" .SH "NAME" slurm_create_partition, slurm_create_reservation, slurm_delete_partition, slurm_delete_reservation, slurm_init_part_desc_msg, slurm_init_resv_desc_msg, slurm_reconfigure, slurm_shutdown, slurm_takeover, slurm_init_update_front_end_msg, slurm_init_update_node_msg, slurm_update_front_end, slurm_update_node, slurm_update_partition, slurm_update_reservation \- Slurm administrative functions .SH "SYNTAX" .LP #include <slurm/slurm.h> .LP int \fBslurm_create_partition\fR ( .br update_part_msg_t *\fIupdate_part_msg_ptr\fP .br ); .LP int \fBslurm_create_reservation\fR ( .br resv_desc_msg_t *\fIupdate_resv_msg_ptr\fP .br ); .LP int \fBslurm_delete_partition\fR ( .br delete_part_msg_t *\fIdelete_part_msg_ptr\fP .br ); .LP int \fBslurm_delete_reservation\fR ( .br reservation_name_msg_t *\fIdelete_resv_msg_ptr\fP .br ); .LP void \fBslurm_init_update_front_end_msg\fR ( .br update_front_end_msg_t *\fIupdate_front_end_msg_ptr\fP .br ); .LP void \fBslurm_init_part_desc_msg\fR ( .br update_part_msg_t *\fIupdate_part_msg_ptr\fP .br ); .LP void \fBslurm_init_resv_desc_msg\fR ( .br resv_desc_msg_t *\fIupdate_resv_msg_ptr\fP .br ); .LP void \fBslurm_init_update_node_msg\fR ( .br update_node_msg_t *\fIupdate_node_msg_ptr\fP .br ); .LP int \fBslurm_reconfigure\fR ( ); .LP int \fBslurm_shutdown\fR ( .br uint16_t \fIshutdown_options\fP .br ); .LP int \fBslurm_takeover\fR ( ); .LP int \fBslurm_update_front_end\fR ( .br update_front_end_msg_t *\fIupdate_front_end_msg_ptr\fP .br ); .LP int \fBslurm_update_node\fR ( .br update_node_msg_t *\fIupdate_node_msg_ptr\fP .br ); .LP int \fBslurm_update_partition\fR ( .br update_part_msg_t *\fIupdate_part_msg_ptr\fP .br ); .LP int \fBslurm_update_reservation\fR ( .br resv_desc_msg_t *\fIupdate_resv_msg_ptr\fP .br ); .SH "ARGUMENTS" .LP .TP \fIshutdown_options\fP 0: all Slurm daemons are shut down .br 1: slurmctld generates a core file .br 2: only the slurmctld is shut down (no core file) .TP \fIdelete_part_msg_ptr\fP Specifies the pointer to a partition delete request specification.
See slurm.h for full details on the data structure's contents. .TP \fIdelete_resv_msg_ptr\fP Specifies the pointer to a reservation delete request specification. See slurm.h for full details on the data structure's contents. .TP \fIupdate_front_end_msg_ptr\fP Specifies the pointer to a front end node update request specification. See slurm.h for full details on the data structure's contents. .TP \fIupdate_node_msg_ptr\fP Specifies the pointer to a node update request specification. See slurm.h for full details on the data structure's contents. .TP \fIupdate_part_msg_ptr\fP Specifies the pointer to a partition create or update request specification. See slurm.h for full details on the data structure's contents. .TP \fIupdate_resv_msg_ptr\fP Specifies the pointer to a reservation create or update request specification. See slurm.h for full details on the data structure's contents. .SH "DESCRIPTION" .LP \fBslurm_create_partition\fR Request that a new partition be created. Initialize the data structure using the \fBslurm_init_part_desc_msg\fR function prior to setting values of the parameters to be changed. Note: \fBslurm_init_part_desc_msg\fR is not equivalent to setting the data structure values to zero. A partition name must be set for the call to succeed. This function may only be successfully executed by user root. .LP \fBslurm_create_reservation\fR Request that a new reservation be created. Initialize the data structure using the \fBslurm_init_resv_desc_msg\fR function prior to setting values of the parameters to be changed. Note: \fBslurm_init_resv_desc_msg\fR is not equivalent to setting the data structure values to zero. The reservation's time limits, user or account restrictions, and node names or a node count must be specified for the call to succeed. This function may only be successfully executed by user root. .LP \fBslurm_delete_partition\fR Request that the specified partition be deleted. 
All jobs associated with the identified partition will be terminated and purged. This function may only be successfully executed by user root. .LP \fBslurm_delete_reservation\fR Request that the specified reservation be deleted. This function may only be successfully executed by user root. .LP \fBslurm_init_update_front_end_msg\fR Initialize the contents of an update front end node descriptor with default values. Note: \fBslurm_init_update_front_end_msg\fR is not equivalent to setting the data structure values to zero. Execute this function before executing \fBslurm_update_front_end\fR. .LP \fBslurm_init_part_desc_msg\fR Initialize the contents of a partition descriptor with default values. Note: \fBslurm_init_part_desc_msg\fR is not equivalent to setting the data structure values to zero. Execute this function before executing \fBslurm_create_partition\fR or \fBslurm_update_partition\fR. .LP \fBslurm_init_resv_desc_msg\fR Initialize the contents of a reservation descriptor with default values. Note: \fBslurm_init_resv_desc_msg\fR is not equivalent to setting the data structure values to zero. Execute this function before executing \fBslurm_create_reservation\fR or \fBslurm_update_reservation\fR. .LP \fBslurm_init_update_node_msg\fR Initialize the contents of an update node descriptor with default values. Note: \fBslurm_init_update_node_msg\fR is not equivalent to setting the data structure values to zero. Execute this function before executing \fBslurm_update_node\fR. .LP \fBslurm_reconfigure\fR Request that the Slurm controller re\-read its configuration file. The new configuration parameters take effect immediately. This function may only be successfully executed by user root. .LP \fBslurm_shutdown\fR Request that the Slurm controller terminate. This function may only be successfully executed by user root. .LP \fBslurm_takeover\fR Request that the Slurm primary controller shutdown immediately and the backup controller take over. 
This function may only be successfully executed by user root. .LP \fBslurm_update_front_end\fR Request that the state of one or more front end nodes be updated. This function may only be successfully executed by user root. If used by some autonomous program, the state value most likely to be used is \fBNODE_STATE_DRAIN\fR. .LP \fBslurm_update_node\fR Request that the state of one or more nodes be updated. Note that the state of a node (e.g. DRAINING, IDLE, etc.) may be changed, but its hardware configuration may not be changed by this function. If the hardware configuration of a node changes, update the Slurm configuration file and execute the \fBslurm_reconfigure\fR function. This function may only be successfully executed by user root. If used by some autonomous program, the state value most likely to be used is \fBNODE_STATE_DRAIN\fR or \fBNODE_STATE_FAILING\fR. The node state flag \fBNODE_STATE_NO_RESPOND\fR may be specified without changing the underlying node state. Note that the node's \fBNODE_STATE_NO_RESPOND\fR flag will be cleared as soon as the slurmd daemon on that node communicates with the slurmctld daemon. Likewise the state \fBNODE_STATE_DOWN\fR indicates that the slurmd daemon is not responding (and has not responded for an interval at least as long as the \fBSlurmdTimeout\fR configuration parameter). The node will leave the \fBNODE_STATE_DOWN\fR state as soon as the slurmd daemon communicates. .LP \fBslurm_update_partition\fR Request that the configuration of a partition be updated. Note that most, but not all parameters of a partition may be changed by this function. Initialize the data structure using the \fBslurm_init_part_desc_msg\fR function prior to setting values of the parameters to be changed. Note: \fBslurm_init_part_desc_msg\fR is not equivalent to setting the data structure values to zero. This function may only be successfully executed by user root. 
.LP \fBslurm_update_reservation\fR Request that the configuration of a reservation be updated. Initialize the data structure using the \fBslurm_init_resv_desc_msg\fR function prior to setting values of the parameters to be changed. Note: \fBslurm_init_resv_desc_msg\fR is not equivalent to setting the data structure values to zero. This function may only be successfully executed by user root. .SH "RETURN VALUE" .LP On success, zero is returned. On error, \-1 is returned, and the Slurm error code is set appropriately. .LP Exception: A successful slurm_create_reservation call returns a string containing the name of the reservation, in memory to be freed by the caller. A failed call returns NULL and sets the Slurm error code. .SH "ERRORS" .LP \fBSLURM_PROTOCOL_VERSION_ERROR\fR Protocol version has changed, re\-link your code. .LP \fBESLURM_INVALID_NODE_NAME\fR The requested node name(s) is/are not valid. .LP \fBESLURM_INVALID_NODE_STATE\fR The specified node state or requested node state transition is not valid. .LP \fBESLURM_INVALID_PARTITION_NAME\fR The requested partition name is not valid. .LP \fBESLURM_INVALID_AUTHTYPE_CHANGE\fR The \fBAuthType\fR parameter can not be changed using the \fBslurm_reconfigure\fR function, but all Slurm daemons and commands must be restarted. See \fBslurm.conf\fR(5) for more information. .LP \fBESLURM_INVALID_SCHEDTYPE_CHANGE\fR The \fBSchedulerType\fR parameter can not be changed using the \fBslurm_reconfigure\fR function, but the \fBslurmctld\fR daemon must be restarted. Manual changes to existing job parameters may also be required. See \fBslurm.conf\fR(5) for more information. .LP \fBESLURM_INVALID_SWITCHTYPE_CHANGE\fR The \fBSwitchType\fR parameter can not be changed using the \fBslurm_reconfigure\fR function, but all Slurm daemons and commands must be restarted. All previously running jobs will be lost. See \fBslurm.conf\fR(5) for more information.
.LP \fBESLURM_ACCESS_DENIED\fR The requesting user lacks authorization for the requested action (e.g. trying to delete or modify another user's job). .LP \fBSLURM_PROTOCOL_SOCKET_IMPL_TIMEOUT\fR Timeout in communicating with Slurm controller. .LP \fBESLURM_RESERVATION_ACCESS\fR Requestor is not authorized to access the reservation. .LP \fBESLURM_RESERVATION_INVALID\fR Invalid reservation parameter given, e.g. wrong name given. .LP \fBESLURM_INVALID_TIME_VALUE\fR Invalid time value. .LP \fBESLURM_RESERVATION_BUSY\fR Reservation is busy, e.g. trying to delete a reservation while in use. .LP \fBESLURM_RESERVATION_NOT_USABLE\fR Reservation not usable, e.g. trying to use an expired reservation. .SH "EXAMPLE" .LP #include .br #include .br #include .br #include .LP int main (int argc, char *argv[]) .br { .br update_node_msg_t update_node_msg; .br update_part_msg_t update_part_msg; .br delete_part_msg_t delete_part_msg; .br resv_desc_msg_t resv_msg; .br char *resv_name = NULL; .LP if (slurm_reconfigure ( )) { .br slurm_perror ("slurm_reconfigure error"); .br exit (1); .br } .LP slurm_init_part_desc_msg ( &update_part_msg ); .br update_part_msg.name = "test.partition"; .br update_part_msg.state_up = 0; /* partition down */ .br if (slurm_create_partition (&update_part_msg)) { .br slurm_perror ("slurm_create_partition error"); .br exit (1); .br } .LP update_part_msg.state_up = 1; /* partition up */ .br if (slurm_update_partition (&update_part_msg)) { .br slurm_perror ("slurm_update_partition error"); .br exit (1); .br } .LP delete_part_msg.name = "test.partition"; .br if (slurm_delete_partition (&delete_part_msg)) { .br slurm_perror ("slurm_delete_partition error"); .br exit (1); .br } .LP slurm_init_update_node_msg (&update_node_msg); .br update_node_msg.node_names = "lx[10\-12]"; .br update_node_msg.node_state = NODE_STATE_DRAIN ; .br if (slurm_update_node (&update_node_msg)) { .br slurm_perror ("slurm_update_node error"); .br exit (1); .br } .LP slurm_init_resv_desc_msg ( 
&resv_msg ); .br resv_msg.start_time = time(NULL) + 60*60; /* One hour from now */ .br resv_msg.duration = 720; /* 12 hours/720 minutes */ .br resv_msg.node_cnt = 10; .br resv_msg.accounts = "admin"; .br resv_name = slurm_create_reservation (&resv_msg); .br if (!resv_name) { .br slurm_perror ("slurm_create_reservation error"); .br exit (1); .br } .br free(resv_name); .br exit (0); .br } .SH "NOTE" These functions are included in the libslurm library, which must be linked to your process for use (e.g. "cc \-lslurm myprog.c"). .SH "COPYING" Copyright (C) 2002\-2007 The Regents of the University of California. Copyright (C) 2008\-2010 Lawrence Livermore National Security. Produced at Lawrence Livermore National Laboratory (cf, DISCLAIMER). CODE\-OCEC\-09\-009. All rights reserved. .LP This file is part of Slurm, a resource management program. For details, see . .LP Slurm is free software; you can redistribute it and/or modify it under the terms of the GNU General Public License as published by the Free Software Foundation; either version 2 of the License, or (at your option) any later version. .LP Slurm is distributed in the hope that it will be useful, but WITHOUT ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU General Public License for more details. 
.SH "SEE ALSO" .LP \fBscontrol\fR(1), \fBslurm_get_errno\fR(3), \fBslurm_init_job_desc_msg\fR(3), \fBslurm_perror\fR(3), \fBslurm_strerror\fR(3), \fBslurm.conf\fR(5) slurm-slurm-15-08-7-1/doc/man/man3/slurm_requeue.3000066400000000000000000000000301265000126300214550ustar00rootroot00000000000000.so man3/slurm_resume.3 slurm-slurm-15-08-7-1/doc/man/man3/slurm_requeue2.3000066400000000000000000000000301265000126300215370ustar00rootroot00000000000000.so man3/slurm_resume.3 slurm-slurm-15-08-7-1/doc/man/man3/slurm_resume.3000066400000000000000000000122451265000126300213150ustar00rootroot00000000000000.TH "Slurm API" "3" "Slurm suspend, resume and requeue functions" "April 2015" "Slurm suspend, resume and requeue functions" .SH "NAME" slurm_suspend, slurm_suspend2, slurm_resume, slurm_resume2, slurm_requeue, slurm_requeue2, slurm_free_job_array_resp \- Slurm suspend, resume and requeue functions .SH "SYNTAX" .LP #include .LP .LP int \fBslurm_suspend\fR ( .br uint32_t \fIjob_id\fP .br ); .LP int \fBslurm_suspend2\fR ( .br char * \fIjob_id_str\fP, job_array_resp_msg_t **resp .br ); .LP int \fBslurm_resume\fR ( .br uint32_t \fIjob_id\fP .br ); .LP int \fBslurm_resume2\fR ( .br char * \fIjob_id_str\fP, job_array_resp_msg_t **resp .br ); .LP int \fBslurm_requeue\fR ( .br uint32_t \fIjob_id\fP, uint32_t \fIstate\fP .br ); .LP int \fBslurm_requeue2\fR ( .br char * \fIjob_id_str\fP, uint32_t \fIstate\fP, job_array_resp_msg_t **resp .br ); .LP void \fBslurm_free_job_array_resp\fR ( .br job_array_resp_msg_t *resp .br ); .SH "ARGUMENTS" .LP .TP \fIjob_id\fP Slurm job ID to perform the operation upon in numeric form. .TP \fIjob_id_str\fP Slurm job ID to perform the operation upon in string form. This is intended to be a single job. For job arrays, the job ID may be followed by an underscore and task ID values. For example: "123", "123_4", "123_4\-6", "123_4,6,8", and "123_4\-6,18". 
The functions using this option are designed primarily for use with job arrays so that separate error codes can be returned for each task of the job array. .TP \fIresp\fP Array of error codes and job IDs. Always use the \fBslurm_free_job_array_resp\fR function to release the memory allocated to hold the error codes. .TP \fIstate\fP The state in which the job should be requeued. Valid values are: .RS .TP 20 \fI"0"\fP If the job has to be requeued in JOB_PENDING state. .TP \fI"JOB_SPECIAL_EXIT"\fP If the job has to be requeued in the special exit state and be held. .TP \fI"JOB_REQUEUE_HOLD"\fP If the job has to be requeued in "JOB_PENDING" and held state. .RE .SH "DESCRIPTION" .TP 18 \fBslurm_suspend\fR Suspend the specified job. .TP \fBslurm_suspend2\fR Suspend the specified job or job array. Call the function \fBslurm_free_job_array_resp\fR to release memory allocated for the response array. .TP \fBslurm_resume\fR Resume execution of a previously suspended job. .TP \fBslurm_resume2\fR Resume execution of a previously suspended job or job array. Call the function \fBslurm_free_job_array_resp\fR to release memory allocated for the response array. .TP \fBslurm_requeue\fR Requeue a running or pending Slurm batch job. The job script will be restarted from its beginning, ignoring any previous checkpoint. .TP \fBslurm_requeue2\fR Requeue a running or pending Slurm batch job or job array. The job script will be restarted from its beginning, ignoring any previous checkpoint. Call the function \fBslurm_free_job_array_resp\fR to release memory allocated for the response array. .TP \fBslurm_free_job_array_resp\fR Release memory allocated by the \fBslurm_suspend2\fR, \fBslurm_resume2\fR, \fBslurm_requeue2\fR, and \fBslurm_update_job2\fR functions. .SH "RETURN VALUE" .LP Zero is returned upon success. On error, \-1 is returned, and the Slurm error code is set appropriately. 
Functions \fBslurm_suspend2\fR, \fBslurm_resume2\fR, and \fBslurm_requeue2\fR return zero if the \fIresp\fP array is filled, in which case that array should be examined to determine the error codes for individual tasks of a job array. Then call the function \fBslurm_free_job_array_resp\fR to release memory allocated for the response array. .SH "ERRORS" .LP \fBESLURM_DISABLED\fR the operation is currently disabled (e.g. attempt to suspend a job that is not running, resume a job that is not currently suspended, or requeue a job on which the operation has been disabled). .LP \fBESLURM_INVALID_JOB_ID\fR the requested job id does not exist. .LP \fBESLURM_ACCESS_DENIED\fR the requesting user lacks authorization for the requested action (e.g. not user root or SlurmUser). .LP \fBESLURM_JOB_PENDING\fR the requested job is still pending. .LP \fBESLURM_ALREADY_DONE\fR the requested job has already completed. .LP \fBESLURM_NOT_SUPPORTED\fR the requested operation is not supported on this system. .SH "NOTE" These functions are included in the libslurm library, which must be linked to your process for use (e.g. "cc \-lslurm myprog.c"). .SH "COPYING" Portions copyright (C) 2014 SchedMD LLC. Portions copyright (C) 2005\-2006 The Regents of the University of California. Produced at Lawrence Livermore National Laboratory (cf, DISCLAIMER). CODE\-OCEC\-09\-009. All rights reserved. .LP This file is part of Slurm, a resource management program. For details, see . .LP Slurm is free software; you can redistribute it and/or modify it under the terms of the GNU General Public License as published by the Free Software Foundation; either version 2 of the License, or (at your option) any later version. .LP Slurm is distributed in the hope that it will be useful, but WITHOUT ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU General Public License for more details. 
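As an illustration of the \fIresp\fP handling described above, the following sketch (not part of the original page) requeues and holds every task of a hypothetical job array "1234", then reports any per\-task errors. The job ID is illustrative, and the \fIjob_array_resp_msg_t\fR field names used here (\fIjob_array_count\fP, \fIjob_array_id\fP, \fIerror_code\fP) should be checked against the slurm.h shipped with your installation.

```c
#include <stdio.h>
#include <stdlib.h>
#include <slurm/slurm.h>

int main (void)
{
	job_array_resp_msg_t *resp = NULL;
	uint32_t i;

	/* Requeue all tasks of hypothetical job array 1234 in the
	 * pending and held state */
	if (slurm_requeue2("1234", JOB_REQUEUE_HOLD, &resp) < 0) {
		slurm_perror("slurm_requeue2 error");
		exit(1);
	}

	/* Examine the error code returned for each task */
	for (i = 0; i < resp->job_array_count; i++) {
		if (resp->error_code[i])
			fprintf(stderr, "task %s: %s\n",
				resp->job_array_id[i],
				slurm_strerror(resp->error_code[i]));
	}
	slurm_free_job_array_resp(resp);
	exit(0);
}
```

Link with \-lslurm as described in the NOTE section.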
.SH "SEE ALSO" .LP \fBscontrol\fR(1) slurm-slurm-15-08-7-1/doc/man/man3/slurm_resume2.3000066400000000000000000000000301265000126300213640ustar00rootroot00000000000000.so man3/slurm_resume.3 slurm-slurm-15-08-7-1/doc/man/man3/slurm_set_debug_level.3000066400000000000000000000000351265000126300231370ustar00rootroot00000000000000.so man3/slurm_reconfigure.3 slurm-slurm-15-08-7-1/doc/man/man3/slurm_set_trigger.3000066400000000000000000000000371265000126300223270ustar00rootroot00000000000000.so man3/slurm_clear_trigger.3 slurm-slurm-15-08-7-1/doc/man/man3/slurm_shutdown.3000066400000000000000000000000351265000126300216620ustar00rootroot00000000000000.so man3/slurm_reconfigure.3 slurm-slurm-15-08-7-1/doc/man/man3/slurm_signal_job.3000066400000000000000000000000321265000126300221130ustar00rootroot00000000000000.so man3/slurm_kill_job.3 slurm-slurm-15-08-7-1/doc/man/man3/slurm_signal_job_step.3000066400000000000000000000000321265000126300231460ustar00rootroot00000000000000.so man3/slurm_kill_job.3 slurm-slurm-15-08-7-1/doc/man/man3/slurm_slurmd_status.3000066400000000000000000000035451265000126300227310ustar00rootroot00000000000000.TH "Slurm API" "3" "Slurmd status functions" "April 2015" "Slurmd status functions" .SH "NAME" slurm_free_slurmd_status, slurm_load_slurmd_status, slurm_print_slurmd_status \- Slurmd status functions .SH "SYNTAX" .LP #include .LP .LP void \fBslurm_free_slurmd_status\fR ( .br slurmd_status_t* \fIslurmd_status_ptr\fP .br ); .LP int \fBslurm_load_slurmd_status\fR ( .br slurmd_status_t** \fIslurmd_status_ptr\fP .br ); .LP void \fBslurm_print_slurmd_status\fR ( .br FILE *\fIout\fP, .br slurmd_status_t* \fIslurmd_status_pptr\fP .br ); .SH "ARGUMENTS" .LP .TP \fIslurmd_status_ptr\fP Slurmd status pointer. Created by \fBslurm_load_slurmd_status\fR, used in subsequent function calls, and destroyed by \fBslurm_free_slurmd_status\fR. .SH "DESCRIPTION" .LP \fBslurm_free_slurmd_status\fR free slurmd state information. 
.LP \fBslurm_load_slurmd_status\fR issue RPC to get the status of slurmd daemon on this machine. .LP \fBslurm_print_slurmd_status\fR output the contents of slurmd status message as loaded using slurm_load_slurmd_status. .SH "COPYING" Copyright (C) 2006-2007 The Regents of the University of California. Copyright (C) 2008 Lawrence Livermore National Security. Produced at Lawrence Livermore National Laboratory (cf, DISCLAIMER). CODE\-OCEC\-09\-009. All rights reserved. .LP This file is part of Slurm, a resource management program. For details, see . .LP Slurm is free software; you can redistribute it and/or modify it under the terms of the GNU General Public License as published by the Free Software Foundation; either version 2 of the License, or (at your option) any later version. .LP Slurm is distributed in the hope that it will be useful, but WITHOUT ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU General Public License for more details. 
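The three calls above combine into a short status dump. The following sketch (not part of the original page) loads the status of the local slurmd daemon, prints it to stdout and releases the allocated memory; it assumes it is run on a node where slurmd is active and is linked with \-lslurm.

```c
#include <stdio.h>
#include <stdlib.h>
#include <slurm/slurm.h>

int main (void)
{
	slurmd_status_t *status = NULL;

	/* Issue the RPC to the slurmd daemon on this machine */
	if (slurm_load_slurmd_status(&status) < 0) {
		slurm_perror("slurm_load_slurmd_status error");
		exit(1);
	}

	/* Print the fields of the returned status message */
	slurm_print_slurmd_status(stdout, status);

	/* Free the slurmd state information */
	slurm_free_slurmd_status(status);
	exit(0);
}
```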
slurm-slurm-15-08-7-1/doc/man/man3/slurm_sprint_front_end_table.3000066400000000000000000000000511265000126300245310ustar00rootroot00000000000000.so man3/slurm_free_front_end_info_msg.3 slurm-slurm-15-08-7-1/doc/man/man3/slurm_sprint_job_info.3000066400000000000000000000000431265000126300231720ustar00rootroot00000000000000.so man3/slurm_free_job_info_msg.3 slurm-slurm-15-08-7-1/doc/man/man3/slurm_sprint_job_step_info.3000066400000000000000000000000611265000126300242250ustar00rootroot00000000000000.so man3/slurm_free_job_step_info_response_msg.3 slurm-slurm-15-08-7-1/doc/man/man3/slurm_sprint_node_table.3000066400000000000000000000000401265000126300234760ustar00rootroot00000000000000.so man3/slurm_free_node_info.3 slurm-slurm-15-08-7-1/doc/man/man3/slurm_sprint_partition_info.3000066400000000000000000000000451265000126300244330ustar00rootroot00000000000000.so man3/slurm_free_partition_info.3 slurm-slurm-15-08-7-1/doc/man/man3/slurm_sprint_reservation_info.3000066400000000000000000000000441265000126300247620ustar00rootroot00000000000000.so man3/slurm_load_reservations.3 slurm-slurm-15-08-7-1/doc/man/man3/slurm_step_ctx_create.3000066400000000000000000000205241265000126300231700ustar00rootroot00000000000000.TH "Slurm API" "3" "Slurm job step context functions" "April 2015" "Slurm job step context functions" .SH "NAME" slurm_step_ctx_create, slurm_step_ctx_create_no_alloc, slurm_step_ctx_daemon_per_node_hack, slurm_step_ctx_get, slurm_step_ctx_params_t_init, slurm_jobinfo_ctx_get, slurm_spawn_kill, slurm_step_ctx_destroy \- Slurm task spawn functions .SH "SYNTAX" .LP #include .LP .LP slurm_step_ctx \fBslurm_step_ctx_create\fR ( .br slurm_step_ctx_params_t *\fIstep_req\fP .br ); .LP slurm_step_ctx \fBslurm_step_ctx_create_no_alloc\fR ( .br slurm_step_ctx_params_t *\fIstep_req\fP .br ); .LP int \fBslurm_step_ctx_daemon_per_node_hack\fR ( .br slurm_step_ctx_t *\fIctx\fP .br ); .LP int \fBslurm_step_ctx_get\fR ( .br slurm_step_ctx_t *\fIctx\fP, .br int \fIctx_key\fP, .br 
... .br ); .LP int \fBslurm_jobinfo_ctx_get\fR ( .br switch_jobinfo_t \fIjobinfo\fP, .br int \fIdata_type\fP, .br void *\fIdata\fP .br ); .LP void \fBslurm_step_ctx_params_t_init\fR ( .br slurm_step_ctx_params_t *\fIstep_req\fP .br ); .LP int \fBslurm_spawn\fR ( .br slurm_step_ctx \fIctx\fP, .br int *\fIfd_array\fP .br ); .LP int \fBslurm_spawn_kill\fR ( .br slurm_step_ctx \fIctx\fP, .br uint16_t \fIsignal\fP .br ); .LP int \fBslurm_step_ctx_destroy\fR ( .br slurm_step_ctx \fIctx\fP .br ); .SH "ARGUMENTS" .LP .TP \fIstep_req\fP Specifies the pointer to the structure with the job step request specification. See slurm.h for full details on the data structure's contents. .TP \fIctx\fP Job step context. Created by \fBslurm_step_ctx_create\fR or \fBslurm_step_ctx_create_no_alloc\fR, used in subsequent function calls, and destroyed by \fBslurm_step_ctx_destroy\fR. .TP \fIctx_key\fP Identifies the fields in \fIctx\fP to be collected by \fBslurm_step_ctx_get\fR. .TP \fIdata\fP Storage location for requested data. See \fIdata_type\fP below. .TP \fIdata_type\fP Switch\-specific data requested. The interpretation of this field depends upon the switch plugin in use. .TP \fIfd_array\fP Array of socket file descriptors to be connected to the initiated tasks. Tasks will be connected to these file descriptors in order of their task id. This socket will carry standard input, output and error for the task. .TP \fIjobinfo\fP Switch\-specific job information as returned by \fBslurm_step_ctx_get\fR. .TP \fIsignal\fP Signal to be sent to the spawned tasks. .SH "DESCRIPTION" .LP \fBslurm_jobinfo_ctx_get\fR Get values from a \fIjobinfo\fR field as returned by \fBslurm_step_ctx_get\fR. The operation of this function is highly dependent upon the switch plugin in use. .LP \fBslurm_step_ctx_create\fR Create a job step context. To avoid memory leaks call \fBslurm_step_ctx_destroy\fR when the use of this context is finished. NOTE: this function creates a Slurm job step. 
Call \fBslurm_spawn\fR in a timely fashion to avoid having job step credentials time out. If \fBslurm_spawn\fR is not used, explicitly cancel the job step. .LP \fBslurm_step_ctx_create_no_alloc\fR Same as above, only no allocation is made. To avoid memory leaks call \fBslurm_step_ctx_destroy\fR when the use of this context is finished. .LP \fBslurm_step_ctx_daemon_per_node_hack\fR Hack the step context to run a single process per node, regardless of the settings selected at slurm_step_ctx_create time. .LP \fBslurm_step_ctx_get\fR Get values from a job step context. \fIctx_key\fP identifies the fields to be gathered from the job step context. Subsequent arguments to this function are dependent upon the value of \fIctx_key\fP. See the \fBCONTEXT KEYS\fR section for details. .LP \fBslurm_step_ctx_params_t_init\fR Initialize the parameters in the structure that you will pass to slurm_step_ctx_create(). .LP \fBslurm_spawn\fR Spawn tasks based upon a job step context and establish communications with the tasks using the socket file descriptors specified. Note that this function can only be called once for each job step context. Establish a new job step context for each set of tasks to be spawned. .LP \fBslurm_spawn_kill\fR Signal the tasks spawned for this context by \fBslurm_spawn\fR. .LP \fBslurm_step_ctx_destroy\fR Destroy a job step context created by \fBslurm_step_ctx_create\fR. .SH "CONTEXT KEYS" .TP \fBSLURM_STEP_CTX_ARGS\fR Set the argument count and values for the executable. Accepts two additional arguments, the first of type int and the second of type char **. .TP \fBSLURM_STEP_CTX_CHDIR\fR Have the remote process change directory to the specified location before beginning execution. Accepts one argument of type char * identifying the directory's pathname. By default the remote process will execute in the same directory pathname from which it is spawned. NOTE: This assumes that the same directory pathname exists on the other nodes. 
.TP \fBSLURM_STEP_CTX_ENV\fR Sets the environment variable count and values for the executable. Accepts two additional arguments, the first of type int and the second of type char **. By default the current environment variables are copied to started task's environment. .TP \fBSLURM_STEP_CTX_RESP\fR Get the job step response message. Accepts one additional argument of type job_step_create_response_msg_t **. .TP \fBSLURM_STEP_CTX_STEPID\fR Get the step id of the created job step. Accepts one additional argument of type uint32_t *. .TP \fBSLURM_STEP_CTX_TASKS\fR Get the number of tasks per node for a given job. Accepts one additional argument of type uint32_t **. This argument will be set to point to an array with the task counts of each node in an element of the array. See \fBSLURM_STEP_CTX_TID\fR below to determine the task ID numbers associated with each of those tasks. .TP \fBSLURM_STEP_CTX_TID\fR Get the task ID numbers associated with the tasks allocated to a specific node. Accepts two additional arguments, the first of type int and the second of type uint32_t **. The first argument identifies the node number of interest (zero origin). The second argument will be set to point to an array with the task ID numbers of each task allocated to the node (also zero origin). See \fBSLURM_STEP_CTX_TASKS\fR above to determine how many tasks are associated with each node. .SH "RETURN VALUE" .LP For \fB slurm_step_ctx_create\fR a context is return upon success. On error NULL is returned and the Slurm error code is set appropriately. .LP For all other functions zero is returned upon success. On error, \-1 is returned, and the Slurm error code is set appropriately. .SH "ERRORS" .LP \fBEINVAL\fR Invalid argument .LP \fBSLURM_PROTOCOL_VERSION_ERROR\fR Protocol version has changed, re\-link your code. .LP \fBESLURM_INVALID_JOB_ID\fR the requested job id does not exist. .LP \fBESLURM_ALREADY_DONE\fR the specified job has already completed and can not be modified. 
.LP \fBESLURM_ACCESS_DENIED\fR the requesting user lacks authorization for the requested action (e.g. trying to delete or modify another user's job). .LP \fBESLURM_DISABLED\fR the ability to create a job step is currently disabled. This is indicative of the job being suspended. Retry the call as desired. .LP \fBESLURM_INTERCONNECT_FAILURE\fR failed to configure the node interconnect. .LP \fBESLURM_BAD_DIST\fR task distribution specification is invalid. .LP \fBSLURM_PROTOCOL_SOCKET_IMPL_TIMEOUT\fR Timeout in communicating with Slurm controller. .SH "EXAMPLE" .LP See the \fBslurm_step_launch\fR(3) man page for an example of slurm_step_ctx_create and slurm_step_launch in use together. .SH "NOTE" These functions are included in the libslurm library, which must be linked to your process for use (e.g. "cc \-lslurm myprog.c"). .SH "COPYING" Copyright (C) 2004-2007 The Regents of the University of California. Produced at Lawrence Livermore National Laboratory (cf, DISCLAIMER). CODE\-OCEC\-09\-009. All rights reserved. .LP This file is part of Slurm, a resource management program. For details, see . .LP Slurm is free software; you can redistribute it and/or modify it under the terms of the GNU General Public License as published by the Free Software Foundation; either version 2 of the License, or (at your option) any later version. .LP Slurm is distributed in the hope that it will be useful, but WITHOUT ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU General Public License for more details. 
.SH "SEE ALSO" .LP \fBslurm_allocate_resources\fR(3), \fBslurm_job_step_create\fR(3), \fBslurm_kill_job\fR(3), \fBslurm_get_errno\fR(3), \fBslurm_perror\fR(3), \fBslurm_strerror\fR(3), \fBsrun\fR(1) slurm-slurm-15-08-7-1/doc/man/man3/slurm_step_ctx_create_no_alloc.3000066400000000000000000000000411265000126300250260ustar00rootroot00000000000000.so man3/slurm_step_ctx_create.3 slurm-slurm-15-08-7-1/doc/man/man3/slurm_step_ctx_daemon_per_node_hack.3000066400000000000000000000000411265000126300260210ustar00rootroot00000000000000.so man3/slurm_step_ctx_create.3 slurm-slurm-15-08-7-1/doc/man/man3/slurm_step_ctx_destroy.3000066400000000000000000000000411265000126300234060ustar00rootroot00000000000000.so man3/slurm_step_ctx_create.3 slurm-slurm-15-08-7-1/doc/man/man3/slurm_step_ctx_get.3000066400000000000000000000000411265000126300224740ustar00rootroot00000000000000.so man3/slurm_step_ctx_create.3 slurm-slurm-15-08-7-1/doc/man/man3/slurm_step_ctx_params_t_init.3000066400000000000000000000000411265000126300245460ustar00rootroot00000000000000.so man3/slurm_step_ctx_create.3 slurm-slurm-15-08-7-1/doc/man/man3/slurm_step_launch.3000066400000000000000000000165701265000126300223270ustar00rootroot00000000000000.TH "Slurm API" "3" "Slurm job step launch functions" "April 2015" "Slurm job step launch functions" .SH "NAME" slurm_step_launch_params_t_init, slurm_step_launch, slurm_step_launch_fwd_signal, slurm_step_launch_wait_start, slurm_step_launch_wait_finish, slurm_step_launch_abort \- Slurm job step launch functions .SH "SYNTAX" .LP #include .LP .LP void \fBslurm_step_launch_params_t_init\fR ( .br slurm_step_launch_params_t *\fIlaunch_req\fP .br ); .LP int \fBslurm_step_launch\fR ( .br slurm_step_ctx \fIctx\fP, .br const slurm_step_launch_params_t *\fIlaunch_req\fP, .br const slurm_step_launch_callbacks_t \fIcallbacks\fP .br ); .LP void \fBslurm_step_launch_fwd_signal\fR ( .br slurm_step_ctx \fIctx\fP, .br int \fIsigno\fP .br ); .LP int \fBslurm_step_launch_wait_start\fR ( .br 
slurm_step_ctx \fIctx\fP .br ); .LP void \fBslurm_step_launch_wait_finish\fR ( .br slurm_step_ctx \fIctx\fP .br ); .LP void \fBslurm_step_launch_abort\fR ( .br slurm_step_ctx \fIctx\fP .br ); .SH "ARGUMENTS" .LP .TP \fIcallbacks\fP Identify functions to be called when various events occur. .TP \fIctx\fP Job step context. Created by \fBslurm_step_ctx_create\fR, used in subsequent function calls, and destroyed by \fBslurm_step_ctx_destroy\fR. .TP \fIlaunch_req\fP Pointer to a structure allocated by the user containing specifications of the job step to be launched. .SH "DESCRIPTION" .LP \fBslurm_step_launch_params_t_init\fR Initialize a user-allocated slurm_step_launch_params_t structure with default values. This function will NOT allocate any new memory. .LP \fBslurm_step_launch\fR Launch a parallel job step. .LP \fBslurm_step_launch_fwd_signal\fR Forward a signal to all those nodes with running tasks. .LP \fBslurm_step_launch_wait_start\fR Block until all tasks have started. .LP \fBslurm_step_launch_wait_finish\fR Block until all tasks have finished (or failed to start altogether). .LP \fBslurm_step_launch_abort\fR Abort an in-progress launch, or terminate the fully launched job step. Can be called from a signal handler. .SH "IO Redirection" .LP Use the \fIlocal_fds\fR entry in \fIslurm_step_launch_params_t\fR to specify file descriptors to be used for standard input, output and error. Any \fIlocal_fds\fR not specified will result in the launched tasks using the calling process's standard input, output and error. Threads created by \fBslurm_step_launch\fR will completely handle copying data between the remote processes and the specified local file descriptors. .LP Use the substructure in \fIslurm_step_io_fds_t\fR to restrict the redirection of I/O to a specific node or task ID. 
For example, to redirect standard output only from task 0, set .LP .nf params.local_fds.out.taskid = 0; .fi .LP Use the \fIremote_*_filename\fR fields in \fIslurm_step_launch_params_t\fR to have launched tasks read and/or write directly to local files rather than transferring data over the network to the calling process. These strings support many of the same format options as the \fBsrun\fR command. Any \fIremote_*_filename\fR fields set will supersede the corresponding \fIlocal_fds\fR entries. For example, the following code will direct each task to write standard output and standard error to local files with names containing the task ID (e.g. "/home/bob/test_output/run1.out.0" and "/home/bob/test_output/run1.err.0" for task 0). .LP .nf params.remote_output_filename = "/home/bob/test_output/run1.out.%t" params.remote_error_filename = "/home/bob/test_output/run1.err.%t" .fi .SH "RETURN VALUE" .LP \fBslurm_step_launch\fR and \fBslurm_step_launch_wait_start\fR will return SLURM_SUCCESS when all tasks have successfully started, or SLURM_ERROR if the job step is aborted during launch. .SH "ERRORS" .LP \fBEINVAL\fR Invalid argument .LP \fBSLURM_PROTOCOL_VERSION_ERROR\fR Protocol version has changed, re\-link your code. .LP \fBESLURM_INVALID_JOB_ID\fR the requested job id does not exist. .LP \fBESLURM_ALREADY_DONE\fR the specified job has already completed and can not be modified. .LP \fBESLURM_ACCESS_DENIED\fR the requesting user lacks authorization for the requested action (e.g. trying to delete or modify another user's job). .LP \fBESLURM_INTERCONNECT_FAILURE\fR failed to configure the node interconnect. .LP \fBESLURM_BAD_DIST\fR task distribution specification is invalid. .LP \fBSLURM_PROTOCOL_SOCKET_IMPL_TIMEOUT\fR Timeout in communicating with Slurm controller. 
.SH "EXAMPLE" .LP .nf /* * To compile: * gcc test.c \-o test \-g \-pthread \-lslurm * * Or if Slurm is not in your default search paths: * gcc test.c \-o test \-g \-pthread \-I{$SLURM_DIR}/include \\ * \-Wl,\-\-rpath={$SLURM_DIR}/lib \-L{$SLURM_DIR}/lib \-lslurm */ #include <stdio.h> #include <stdlib.h> #include <slurm/slurm.h> #include <slurm/slurm_errno.h> static void _task_start(launch_tasks_response_msg_t *msg) { printf("%d tasks started on node %s\\n", msg->count_of_pids, msg->node_name); } static void _task_finish(task_exit_msg_t *msg) { printf("%d tasks finished\\n", msg->num_tasks); } int main (int argc, char *argv[]) { slurm_step_ctx_params_t step_params; slurm_step_ctx step_ctx; slurm_step_launch_params_t params; slurm_step_launch_callbacks_t callbacks; uint32_t job_id, step_id; slurm_step_ctx_params_t_init(&step_params); step_params.node_count = 1; step_params.task_count = 4; step_params.overcommit = true; step_ctx = slurm_step_ctx_create(&step_params); if (step_ctx == NULL) { slurm_perror("slurm_step_ctx_create"); exit(1); } slurm_step_ctx_get(step_ctx, SLURM_STEP_CTX_JOBID, &job_id); slurm_step_ctx_get(step_ctx, SLURM_STEP_CTX_STEPID, &step_id); printf("Ready to start job %u step %u\\n", job_id, step_id); slurm_step_launch_params_t_init(&params); params.argc = argc \- 1; params.argv = argv + 1; callbacks.task_start = _task_start; callbacks.task_finish = _task_finish; if (slurm_step_launch(step_ctx, &params, &callbacks) != SLURM_SUCCESS) { slurm_perror("slurm_step_launch"); exit(1); } printf("Sent step launch RPC\\n"); if (slurm_step_launch_wait_start(step_ctx) != SLURM_SUCCESS) { fprintf(stderr, "job step was aborted during launch\\n"); } else { printf("All tasks have started\\n"); } slurm_step_launch_wait_finish(step_ctx); printf("All tasks have finished\\n"); slurm_step_ctx_destroy(step_ctx); exit(0); } .fi .SH "NOTE" These functions are included in the libslurm library, which must be linked to your process for use (e.g. "cc \-lslurm myprog.c"). 
.SH "COPYING" Copyright (C) 2006-2007 The Regents of the University of California. Copyright (C) 2008 Lawrence Livermore National Security. Produced at Lawrence Livermore National Laboratory (cf, DISCLAIMER). CODE\-OCEC\-09\-009. All rights reserved. .LP This file is part of Slurm a resource management program. For details, see . .LP Slurm is free software; you can redistribute it and/or modify it under the terms of the GNU General Public License as published by the Free Software Foundation; either version 2 of the License, or (at your option) any later version. .LP Slurm is distributed in the hope that it will be useful, but WITHOUT ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU General Public License for more details. .SH "SEE ALSO" .LP \fBslurm_step_ctx_create\fR(3), \fBslurm_step_ctx_destroy\fR(3), \fBslurm_get_errno\fR(3), \fBslurm_perror\fR(3), \fBslurm_strerror\fR(3), \fBsalloc\fR(1), \fBsrun\fR(1) slurm-slurm-15-08-7-1/doc/man/man3/slurm_step_launch_abort.3000066400000000000000000000000351265000126300235030ustar00rootroot00000000000000.so man3/slurm_step_launch.3 slurm-slurm-15-08-7-1/doc/man/man3/slurm_step_launch_fwd_signal.3000066400000000000000000000000351265000126300245110ustar00rootroot00000000000000.so man3/slurm_step_launch.3 slurm-slurm-15-08-7-1/doc/man/man3/slurm_step_launch_wait_finish.3000066400000000000000000000000351265000126300247000ustar00rootroot00000000000000.so man3/slurm_step_launch.3 slurm-slurm-15-08-7-1/doc/man/man3/slurm_step_launch_wait_start.3000066400000000000000000000000351265000126300245550ustar00rootroot00000000000000.so man3/slurm_step_launch.3 slurm-slurm-15-08-7-1/doc/man/man3/slurm_strerror.3000066400000000000000000000000331265000126300216670ustar00rootroot00000000000000.so man3/slurm_get_errno.3 slurm-slurm-15-08-7-1/doc/man/man3/slurm_submit_batch_job.3000066400000000000000000000000441265000126300233050ustar00rootroot00000000000000.so 
man3/slurm_allocate_resources.3 slurm-slurm-15-08-7-1/doc/man/man3/slurm_suspend.3000066400000000000000000000000301265000126300214630ustar00rootroot00000000000000.so man3/slurm_resume.3 slurm-slurm-15-08-7-1/doc/man/man3/slurm_suspend2.3000066400000000000000000000000301265000126300215450ustar00rootroot00000000000000.so man3/slurm_resume.3 slurm-slurm-15-08-7-1/doc/man/man3/slurm_takeover.3000066400000000000000000000000351265000126300216270ustar00rootroot00000000000000.so man3/slurm_reconfigure.3 slurm-slurm-15-08-7-1/doc/man/man3/slurm_terminate_job.3000066400000000000000000000000321265000126300226260ustar00rootroot00000000000000.so man3/slurm_kill_job.3 slurm-slurm-15-08-7-1/doc/man/man3/slurm_terminate_job_step.3000066400000000000000000000000321265000126300236610ustar00rootroot00000000000000.so man3/slurm_kill_job.3 slurm-slurm-15-08-7-1/doc/man/man3/slurm_update_front_end.3000066400000000000000000000000351265000126300233270ustar00rootroot00000000000000.so man3/slurm_reconfigure.3 slurm-slurm-15-08-7-1/doc/man/man3/slurm_update_job.3000066400000000000000000000116551265000126300221350ustar00rootroot00000000000000.TH "Slurm API" "3" "Slurm job and step update functions" "April 2015" "Slurm job and step update functions" .SH "NAME" slurm_init_job_desc_msg, slurm_init_update_step_msg, slurm_update_job, slurm_update_job2, slurm_update_step \- Slurm job and step update functions .SH "SYNTAX" .LP #include .LP void \fBslurm_init_job_desc_msg\fR ( .br job_desc_msg_t *\fIjob_desc_msg_ptr\fP .br ); .LP void \fBslurm_init_update_step_msg\fR ( .br step_update_request_msg_t * \fIstep_msg\fP .br ); .LP int \fBslurm_update_job\fR ( .br job_desc_msg_t * \fIjob_msg\fP .br ); .LP int \fBslurm_update_job2\fR ( .br job_desc_msg_t * \fIjob_msg\fP, job_array_resp_msg_t **resp .br ); .LP int \fBslurm_update_step\fR ( .br step_update_request_msg_t * \fIstep_msg\fP .br ); .LP void \fBslurm_free_job_array_resp\fR ( .br job_array_resp_msg_t *resp .br ); .SH "ARGUMENTS" .LP .TP \fIjob_msg\fP 
Specifies the pointer to a job descriptor. See slurm.h for full details on the data structure's contents. .TP \fIstep_msg\fP Specifies the pointer to a step descriptor. See slurm.h for full details on the data structure's contents. .TP \fIresp\fP Array of error codes and job IDs. Always use the \fBslurm_free_job_array_resp\fR function to release the memory allocated to hold the error codes. .SH "DESCRIPTION" .TP 18 \fBslurm_init_job_desc_msg\fR Initialize the contents of a job descriptor with default values. Execute this function before issuing a request to submit or modify a job. .TP \fBslurm_init_update_step_msg\fR Initialize the contents of a job step update descriptor with default values. Execute this function before issuing a request to modify a job step. .TP \fBslurm_update_job\fR Update a job with the changes made to the data structure passed as an argument to the function. .TP \fBslurm_update_job2\fR Update a job or job array with the changes made to the data structure passed as an argument to the function. Call the function \fBslurm_free_job_array_resp\fR to release memory allocated for the response array. .TP \fBslurm_update_step\fR Update a job step with the changes made to the data structure passed as an argument to the function. .TP \fBslurm_free_job_array_resp\fR Release memory allocated by the \fBslurm_suspend2\fR, \fBslurm_resume2\fR, \fBslurm_requeue2\fR, and \fBslurm_update_job2\fR functions. .SH "RETURN VALUE" .LP On success, zero is returned. On error, \-1 is returned, and the Slurm error code is set appropriately. The function \fBslurm_update_job2\fR returns zero if the \fIresp\fP array is filled, in which case that array should be examined to determine the error codes for individual tasks of a job array. Then call the function \fBslurm_free_job_array_resp\fR to release memory allocated for the response array. .SH "ERRORS" .LP \fBSLURM_PROTOCOL_VERSION_ERROR\fR Protocol version has changed, re\-link your code. 
.LP \fBESLURM_ACCESS_DENIED\fR The requesting user lacks authorization for the requested action (e.g. trying to modify another user's job). .LP \fBESLURM_INVALID_JOB_ID\fR Invalid job or step ID value. .LP \fBESLURM_INVALID_TIME_VALUE\fR Invalid time value. .SH "EXAMPLE" .LP #include <stdio.h> .br #include <stdlib.h> .br #include <slurm/slurm.h> .LP int main (int argc, char *argv[]) .br { .br job_desc_msg_t update_job_msg; .br step_update_request_msg_t update_step_msg; .LP slurm_init_job_desc_msg( &update_job_msg ); .br update_job_msg.job_id = 1234; .br update_job_msg.time_limit = 200; .br if (slurm_update_job (&update_job_msg)) { .br slurm_perror ("slurm_update_job error"); .br exit (1); .br } .LP slurm_init_update_step_msg( &update_step_msg ); .br update_step_msg.job_id = 1234; .br update_step_msg.step_id = 2; .br update_step_msg.time_limit = 30; .br if (slurm_update_step (&update_step_msg)) { .br slurm_perror ("slurm_update_step error"); .br exit (1); .br } .br exit (0); .br } .SH "NOTE" These functions are included in the libslurm library, which must be linked to your process for use (e.g. "cc \-lslurm myprog.c"). .SH "COPYING" Portions copyright (C) 2014 SchedMD LLC. Portions copyright (C) 2009\-2010 Lawrence Livermore National Security. Produced at Lawrence Livermore National Laboratory (cf, DISCLAIMER). CODE\-OCEC\-09\-009. All rights reserved. .LP This file is part of Slurm, a resource management program. For details, see . .LP Slurm is free software; you can redistribute it and/or modify it under the terms of the GNU General Public License as published by the Free Software Foundation; either version 2 of the License, or (at your option) any later version. .LP Slurm is distributed in the hope that it will be useful, but WITHOUT ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU General Public License for more details. 
.SH "SEE ALSO" .LP \fBscontrol\fR(1), \fBslurm_free_job_array_resp\fR(3), \fBslurm_get_errno\fR(3), \fBslurm_perror\fR(3), \fBslurm_strerror\fR(3), slurm-slurm-15-08-7-1/doc/man/man3/slurm_update_job2.3000066400000000000000000000000341265000126300222040ustar00rootroot00000000000000.so man3/slurm_update_job.3 slurm-slurm-15-08-7-1/doc/man/man3/slurm_update_node.3000066400000000000000000000000351265000126300222760ustar00rootroot00000000000000.so man3/slurm_reconfigure.3 slurm-slurm-15-08-7-1/doc/man/man3/slurm_update_partition.3000066400000000000000000000000351265000126300233620ustar00rootroot00000000000000.so man3/slurm_reconfigure.3 slurm-slurm-15-08-7-1/doc/man/man3/slurm_update_reservation.3000066400000000000000000000000351265000126300237120ustar00rootroot00000000000000.so man3/slurm_reconfigure.3 slurm-slurm-15-08-7-1/doc/man/man3/slurm_update_step.3000066400000000000000000000000341265000126300223230ustar00rootroot00000000000000.so man3/slurm_update_job.3 slurm-slurm-15-08-7-1/doc/man/man5/000077500000000000000000000000001265000126300165055ustar00rootroot00000000000000slurm-slurm-15-08-7-1/doc/man/man5/Makefile.am000066400000000000000000000015001265000126300205350ustar00rootroot00000000000000htmldir = ${datadir}/doc/${PACKAGE}-${SLURM_VERSION_STRING}/html man5_MANS = \ acct_gather.conf.5 \ bluegene.conf.5 \ burst_buffer.conf.5 \ cgroup.conf.5 \ cray.conf.5 \ ext_sensors.conf.5 \ gres.conf.5 \ nonstop.conf.5 \ slurm.conf.5 \ slurmdbd.conf.5 \ topology.conf.5 \ wiki.conf.5 EXTRA_DIST = $(man5_MANS) if HAVE_MAN2HTML html_DATA = \ acct_gather.conf.html \ bluegene.conf.html \ burst_buffer.conf.html \ cgroup.conf.html \ cray.conf.html \ ext_sensors.conf.html \ gres.conf.html \ nonstop.conf.html \ slurm.conf.html \ slurmdbd.conf.html \ topology.conf.html \ wiki.conf.html MOSTLYCLEANFILES = ${html_DATA} EXTRA_DIST += $(html_DATA) SUFFIXES = .html .5.html: `dirname $<`/../man2html.py @SLURM_MAJOR@.@SLURM_MINOR@ $(srcdir)/../../html/header.txt 
$(srcdir)/../../html/footer.txt $< endif slurm-slurm-15-08-7-1/doc/man/man5/Makefile.in000066400000000000000000000537431265000126300205660ustar00rootroot00000000000000# Makefile.in generated by automake 1.14.1 from Makefile.am. # @configure_input@ # Copyright (C) 1994-2013 Free Software Foundation, Inc. # This Makefile.in is free software; the Free Software Foundation # gives unlimited permission to copy and/or distribute it, # with or without modifications, as long as this notice is preserved. # This program is distributed in the hope that it will be useful, # but WITHOUT ANY WARRANTY, to the extent permitted by law; without # even the implied warranty of MERCHANTABILITY or FITNESS FOR A # PARTICULAR PURPOSE. @SET_MAKE@ VPATH = @srcdir@ am__is_gnu_make = test -n '$(MAKEFILE_LIST)' && test -n '$(MAKELEVEL)' am__make_running_with_option = \ case $${target_option-} in \ ?) ;; \ *) echo "am__make_running_with_option: internal error: invalid" \ "target option '$${target_option-}' specified" >&2; \ exit 1;; \ esac; \ has_opt=no; \ sane_makeflags=$$MAKEFLAGS; \ if $(am__is_gnu_make); then \ sane_makeflags=$$MFLAGS; \ else \ case $$MAKEFLAGS in \ *\\[\ \ ]*) \ bs=\\; \ sane_makeflags=`printf '%s\n' "$$MAKEFLAGS" \ | sed "s/$$bs$$bs[$$bs $$bs ]*//g"`;; \ esac; \ fi; \ skip_next=no; \ strip_trailopt () \ { \ flg=`printf '%s\n' "$$flg" | sed "s/$$1.*$$//"`; \ }; \ for flg in $$sane_makeflags; do \ test $$skip_next = yes && { skip_next=no; continue; }; \ case $$flg in \ *=*|--*) continue;; \ -*I) strip_trailopt 'I'; skip_next=yes;; \ -*I?*) strip_trailopt 'I';; \ -*O) strip_trailopt 'O'; skip_next=yes;; \ -*O?*) strip_trailopt 'O';; \ -*l) strip_trailopt 'l'; skip_next=yes;; \ -*l?*) strip_trailopt 'l';; \ -[dEDm]) skip_next=yes;; \ -[JT]) skip_next=yes;; \ esac; \ case $$flg in \ *$$target_option*) has_opt=yes; break;; \ esac; \ done; \ test $$has_opt = yes am__make_dryrun = (target_option=n; $(am__make_running_with_option)) am__make_keepgoing = (target_option=k; 
$(am__make_running_with_option)) pkgdatadir = $(datadir)/@PACKAGE@ pkgincludedir = $(includedir)/@PACKAGE@ pkglibdir = $(libdir)/@PACKAGE@ pkglibexecdir = $(libexecdir)/@PACKAGE@ am__cd = CDPATH="$${ZSH_VERSION+.}$(PATH_SEPARATOR)" && cd install_sh_DATA = $(install_sh) -c -m 644 install_sh_PROGRAM = $(install_sh) -c install_sh_SCRIPT = $(install_sh) -c INSTALL_HEADER = $(INSTALL_DATA) transform = $(program_transform_name) NORMAL_INSTALL = : PRE_INSTALL = : POST_INSTALL = : NORMAL_UNINSTALL = : PRE_UNINSTALL = : POST_UNINSTALL = : build_triplet = @build@ host_triplet = @host@ target_triplet = @target@ @HAVE_MAN2HTML_TRUE@am__append_1 = $(html_DATA) subdir = doc/man/man5 DIST_COMMON = $(srcdir)/Makefile.in $(srcdir)/Makefile.am ACLOCAL_M4 = $(top_srcdir)/aclocal.m4 am__aclocal_m4_deps = $(top_srcdir)/auxdir/ax_lib_hdf5.m4 \ $(top_srcdir)/auxdir/ax_pthread.m4 \ $(top_srcdir)/auxdir/libtool.m4 \ $(top_srcdir)/auxdir/ltoptions.m4 \ $(top_srcdir)/auxdir/ltsugar.m4 \ $(top_srcdir)/auxdir/ltversion.m4 \ $(top_srcdir)/auxdir/lt~obsolete.m4 \ $(top_srcdir)/auxdir/slurm.m4 \ $(top_srcdir)/auxdir/x_ac__system_configuration.m4 \ $(top_srcdir)/auxdir/x_ac_affinity.m4 \ $(top_srcdir)/auxdir/x_ac_aix.m4 \ $(top_srcdir)/auxdir/x_ac_blcr.m4 \ $(top_srcdir)/auxdir/x_ac_bluegene.m4 \ $(top_srcdir)/auxdir/x_ac_cflags.m4 \ $(top_srcdir)/auxdir/x_ac_cray.m4 \ $(top_srcdir)/auxdir/x_ac_curl.m4 \ $(top_srcdir)/auxdir/x_ac_databases.m4 \ $(top_srcdir)/auxdir/x_ac_debug.m4 \ $(top_srcdir)/auxdir/x_ac_dlfcn.m4 \ $(top_srcdir)/auxdir/x_ac_env.m4 \ $(top_srcdir)/auxdir/x_ac_freeipmi.m4 \ $(top_srcdir)/auxdir/x_ac_gpl_licensed.m4 \ $(top_srcdir)/auxdir/x_ac_hwloc.m4 \ $(top_srcdir)/auxdir/x_ac_iso.m4 \ $(top_srcdir)/auxdir/x_ac_json.m4 \ $(top_srcdir)/auxdir/x_ac_lua.m4 \ $(top_srcdir)/auxdir/x_ac_man2html.m4 \ $(top_srcdir)/auxdir/x_ac_munge.m4 \ $(top_srcdir)/auxdir/x_ac_ncurses.m4 \ $(top_srcdir)/auxdir/x_ac_netloc.m4 \ $(top_srcdir)/auxdir/x_ac_nrt.m4 \ $(top_srcdir)/auxdir/x_ac_ofed.m4 \ 
$(top_srcdir)/auxdir/x_ac_pam.m4 \ $(top_srcdir)/auxdir/x_ac_printf_null.m4 \ $(top_srcdir)/auxdir/x_ac_ptrace.m4 \ $(top_srcdir)/auxdir/x_ac_readline.m4 \ $(top_srcdir)/auxdir/x_ac_rrdtool.m4 \ $(top_srcdir)/auxdir/x_ac_setpgrp.m4 \ $(top_srcdir)/auxdir/x_ac_setproctitle.m4 \ $(top_srcdir)/auxdir/x_ac_sgi_job.m4 \ $(top_srcdir)/auxdir/x_ac_slurm_ssl.m4 \ $(top_srcdir)/auxdir/x_ac_sun_const.m4 \ $(top_srcdir)/configure.ac am__configure_deps = $(am__aclocal_m4_deps) $(CONFIGURE_DEPENDENCIES) \ $(ACLOCAL_M4) mkinstalldirs = $(install_sh) -d CONFIG_HEADER = $(top_builddir)/config.h $(top_builddir)/slurm/slurm.h CONFIG_CLEAN_FILES = CONFIG_CLEAN_VPATH_FILES = AM_V_P = $(am__v_P_@AM_V@) am__v_P_ = $(am__v_P_@AM_DEFAULT_V@) am__v_P_0 = false am__v_P_1 = : AM_V_GEN = $(am__v_GEN_@AM_V@) am__v_GEN_ = $(am__v_GEN_@AM_DEFAULT_V@) am__v_GEN_0 = @echo " GEN " $@; am__v_GEN_1 = AM_V_at = $(am__v_at_@AM_V@) am__v_at_ = $(am__v_at_@AM_DEFAULT_V@) am__v_at_0 = @ am__v_at_1 = SOURCES = DIST_SOURCES = am__can_run_installinfo = \ case $$AM_UPDATE_INFO_DIR in \ n|no|NO) false;; \ *) (install-info --version) >/dev/null 2>&1;; \ esac am__vpath_adj_setup = srcdirstrip=`echo "$(srcdir)" | sed 's|.|.|g'`; am__vpath_adj = case $$p in \ $(srcdir)/*) f=`echo "$$p" | sed "s|^$$srcdirstrip/||"`;; \ *) f=$$p;; \ esac; am__strip_dir = f=`echo $$p | sed -e 's|^.*/||'`; am__install_max = 40 am__nobase_strip_setup = \ srcdirstrip=`echo "$(srcdir)" | sed 's/[].[^$$\\*|]/\\\\&/g'` am__nobase_strip = \ for p in $$list; do echo "$$p"; done | sed -e "s|$$srcdirstrip/||" am__nobase_list = $(am__nobase_strip_setup); \ for p in $$list; do echo "$$p $$p"; done | \ sed "s| $$srcdirstrip/| |;"' / .*\//!s/ .*/ ./; s,\( .*\)/[^/]*$$,\1,' | \ $(AWK) 'BEGIN { files["."] = "" } { files[$$2] = files[$$2] " " $$1; \ if (++n[$$2] == $(am__install_max)) \ { print $$2, files[$$2]; n[$$2] = 0; files[$$2] = "" } } \ END { for (dir in files) print dir, files[dir] }' am__base_list = \ sed 
'$$!N;$$!N;$$!N;$$!N;$$!N;$$!N;$$!N;s/\n/ /g' | \ sed '$$!N;$$!N;$$!N;$$!N;s/\n/ /g' am__uninstall_files_from_dir = { \ test -z "$$files" \ || { test ! -d "$$dir" && test ! -f "$$dir" && test ! -r "$$dir"; } \ || { echo " ( cd '$$dir' && rm -f" $$files ")"; \ $(am__cd) "$$dir" && rm -f $$files; }; \ } man5dir = $(mandir)/man5 am__installdirs = "$(DESTDIR)$(man5dir)" "$(DESTDIR)$(htmldir)" NROFF = nroff MANS = $(man5_MANS) DATA = $(html_DATA) am__tagged_files = $(HEADERS) $(SOURCES) $(TAGS_FILES) $(LISP) DISTFILES = $(DIST_COMMON) $(DIST_SOURCES) $(TEXINFOS) $(EXTRA_DIST) ACLOCAL = @ACLOCAL@ AMTAR = @AMTAR@ AM_DEFAULT_VERBOSITY = @AM_DEFAULT_VERBOSITY@ AR = @AR@ AUTHD_CFLAGS = @AUTHD_CFLAGS@ AUTHD_LIBS = @AUTHD_LIBS@ AUTOCONF = @AUTOCONF@ AUTOHEADER = @AUTOHEADER@ AUTOMAKE = @AUTOMAKE@ AWK = @AWK@ BGL_LOADED = @BGL_LOADED@ BGQ_LOADED = @BGQ_LOADED@ BG_INCLUDES = @BG_INCLUDES@ BG_LDFLAGS = @BG_LDFLAGS@ BG_L_P_LOADED = @BG_L_P_LOADED@ BLCR_CPPFLAGS = @BLCR_CPPFLAGS@ BLCR_HOME = @BLCR_HOME@ BLCR_LDFLAGS = @BLCR_LDFLAGS@ BLCR_LIBS = @BLCR_LIBS@ BLUEGENE_LOADED = @BLUEGENE_LOADED@ CC = @CC@ CCDEPMODE = @CCDEPMODE@ CFLAGS = @CFLAGS@ CHECK_CFLAGS = @CHECK_CFLAGS@ CHECK_LIBS = @CHECK_LIBS@ CMD_LDFLAGS = @CMD_LDFLAGS@ CPP = @CPP@ CPPFLAGS = @CPPFLAGS@ CRAY_JOB_CPPFLAGS = @CRAY_JOB_CPPFLAGS@ CRAY_JOB_LDFLAGS = @CRAY_JOB_LDFLAGS@ CRAY_SELECT_CPPFLAGS = @CRAY_SELECT_CPPFLAGS@ CRAY_SELECT_LDFLAGS = @CRAY_SELECT_LDFLAGS@ CRAY_SWITCH_CPPFLAGS = @CRAY_SWITCH_CPPFLAGS@ CRAY_SWITCH_LDFLAGS = @CRAY_SWITCH_LDFLAGS@ CRAY_TASK_CPPFLAGS = @CRAY_TASK_CPPFLAGS@ CRAY_TASK_LDFLAGS = @CRAY_TASK_LDFLAGS@ CXX = @CXX@ CXXCPP = @CXXCPP@ CXXDEPMODE = @CXXDEPMODE@ CXXFLAGS = @CXXFLAGS@ CYGPATH_W = @CYGPATH_W@ DATAWARP_CPPFLAGS = @DATAWARP_CPPFLAGS@ DATAWARP_LDFLAGS = @DATAWARP_LDFLAGS@ DEFS = @DEFS@ DEPDIR = @DEPDIR@ DLLTOOL = @DLLTOOL@ DL_LIBS = @DL_LIBS@ DSYMUTIL = @DSYMUTIL@ DUMPBIN = @DUMPBIN@ ECHO_C = @ECHO_C@ ECHO_N = @ECHO_N@ ECHO_T = @ECHO_T@ EGREP = @EGREP@ EXEEXT = @EXEEXT@ FGREP = @FGREP@ 
FREEIPMI_CPPFLAGS = @FREEIPMI_CPPFLAGS@ FREEIPMI_LDFLAGS = @FREEIPMI_LDFLAGS@ FREEIPMI_LIBS = @FREEIPMI_LIBS@ GLIB_CFLAGS = @GLIB_CFLAGS@ GLIB_COMPILE_RESOURCES = @GLIB_COMPILE_RESOURCES@ GLIB_GENMARSHAL = @GLIB_GENMARSHAL@ GLIB_LIBS = @GLIB_LIBS@ GLIB_MKENUMS = @GLIB_MKENUMS@ GOBJECT_QUERY = @GOBJECT_QUERY@ GREP = @GREP@ GTK_CFLAGS = @GTK_CFLAGS@ GTK_LIBS = @GTK_LIBS@ H5CC = @H5CC@ H5FC = @H5FC@ HAVEMYSQLCONFIG = @HAVEMYSQLCONFIG@ HAVE_AIX = @HAVE_AIX@ HAVE_MAN2HTML = @HAVE_MAN2HTML@ HAVE_NRT = @HAVE_NRT@ HAVE_OPENSSL = @HAVE_OPENSSL@ HAVE_SOME_CURSES = @HAVE_SOME_CURSES@ HDF5_CC = @HDF5_CC@ HDF5_CFLAGS = @HDF5_CFLAGS@ HDF5_CPPFLAGS = @HDF5_CPPFLAGS@ HDF5_FC = @HDF5_FC@ HDF5_FFLAGS = @HDF5_FFLAGS@ HDF5_FLIBS = @HDF5_FLIBS@ HDF5_LDFLAGS = @HDF5_LDFLAGS@ HDF5_LIBS = @HDF5_LIBS@ HDF5_VERSION = @HDF5_VERSION@ HWLOC_CPPFLAGS = @HWLOC_CPPFLAGS@ HWLOC_LDFLAGS = @HWLOC_LDFLAGS@ HWLOC_LIBS = @HWLOC_LIBS@ INSTALL = @INSTALL@ INSTALL_DATA = @INSTALL_DATA@ INSTALL_PROGRAM = @INSTALL_PROGRAM@ INSTALL_SCRIPT = @INSTALL_SCRIPT@ INSTALL_STRIP_PROGRAM = @INSTALL_STRIP_PROGRAM@ JSON_CPPFLAGS = @JSON_CPPFLAGS@ JSON_LDFLAGS = @JSON_LDFLAGS@ LD = @LD@ LDFLAGS = @LDFLAGS@ LIBCURL = @LIBCURL@ LIBCURL_CPPFLAGS = @LIBCURL_CPPFLAGS@ LIBOBJS = @LIBOBJS@ LIBS = @LIBS@ LIBTOOL = @LIBTOOL@ LIB_LDFLAGS = @LIB_LDFLAGS@ LIPO = @LIPO@ LN_S = @LN_S@ LTLIBOBJS = @LTLIBOBJS@ MAINT = @MAINT@ MAKEINFO = @MAKEINFO@ MANIFEST_TOOL = @MANIFEST_TOOL@ MKDIR_P = @MKDIR_P@ MUNGE_CPPFLAGS = @MUNGE_CPPFLAGS@ MUNGE_DIR = @MUNGE_DIR@ MUNGE_LDFLAGS = @MUNGE_LDFLAGS@ MUNGE_LIBS = @MUNGE_LIBS@ MYSQL_CFLAGS = @MYSQL_CFLAGS@ MYSQL_LIBS = @MYSQL_LIBS@ NCURSES = @NCURSES@ NETLOC_CPPFLAGS = @NETLOC_CPPFLAGS@ NETLOC_LDFLAGS = @NETLOC_LDFLAGS@ NETLOC_LIBS = @NETLOC_LIBS@ NM = @NM@ NMEDIT = @NMEDIT@ NRT_CPPFLAGS = @NRT_CPPFLAGS@ NUMA_LIBS = @NUMA_LIBS@ OBJDUMP = @OBJDUMP@ OBJEXT = @OBJEXT@ OFED_CPPFLAGS = @OFED_CPPFLAGS@ OFED_LDFLAGS = @OFED_LDFLAGS@ OFED_LIBS = @OFED_LIBS@ OTOOL = @OTOOL@ OTOOL64 = @OTOOL64@ PACKAGE = 
@PACKAGE@ PACKAGE_BUGREPORT = @PACKAGE_BUGREPORT@ PACKAGE_NAME = @PACKAGE_NAME@ PACKAGE_STRING = @PACKAGE_STRING@ PACKAGE_TARNAME = @PACKAGE_TARNAME@ PACKAGE_URL = @PACKAGE_URL@ PACKAGE_VERSION = @PACKAGE_VERSION@ PAM_DIR = @PAM_DIR@ PAM_LIBS = @PAM_LIBS@ PATH_SEPARATOR = @PATH_SEPARATOR@ PKG_CONFIG = @PKG_CONFIG@ PKG_CONFIG_LIBDIR = @PKG_CONFIG_LIBDIR@ PKG_CONFIG_PATH = @PKG_CONFIG_PATH@ PROCTRACKDIR = @PROCTRACKDIR@ PROJECT = @PROJECT@ PTHREAD_CC = @PTHREAD_CC@ PTHREAD_CFLAGS = @PTHREAD_CFLAGS@ PTHREAD_LIBS = @PTHREAD_LIBS@ RANLIB = @RANLIB@ READLINE_LIBS = @READLINE_LIBS@ REAL_BGQ_LOADED = @REAL_BGQ_LOADED@ REAL_BG_L_P_LOADED = @REAL_BG_L_P_LOADED@ RELEASE = @RELEASE@ RRDTOOL_CPPFLAGS = @RRDTOOL_CPPFLAGS@ RRDTOOL_LDFLAGS = @RRDTOOL_LDFLAGS@ RRDTOOL_LIBS = @RRDTOOL_LIBS@ RUNJOB_LDFLAGS = @RUNJOB_LDFLAGS@ SED = @SED@ SEMAPHORE_LIBS = @SEMAPHORE_LIBS@ SEMAPHORE_SOURCES = @SEMAPHORE_SOURCES@ SET_MAKE = @SET_MAKE@ SHELL = @SHELL@ SLEEP_CMD = @SLEEP_CMD@ SLURMCTLD_PORT = @SLURMCTLD_PORT@ SLURMCTLD_PORT_COUNT = @SLURMCTLD_PORT_COUNT@ SLURMDBD_PORT = @SLURMDBD_PORT@ SLURMD_PORT = @SLURMD_PORT@ SLURM_API_AGE = @SLURM_API_AGE@ SLURM_API_CURRENT = @SLURM_API_CURRENT@ SLURM_API_MAJOR = @SLURM_API_MAJOR@ SLURM_API_REVISION = @SLURM_API_REVISION@ SLURM_API_VERSION = @SLURM_API_VERSION@ SLURM_MAJOR = @SLURM_MAJOR@ SLURM_MICRO = @SLURM_MICRO@ SLURM_MINOR = @SLURM_MINOR@ SLURM_PREFIX = @SLURM_PREFIX@ SLURM_VERSION_NUMBER = @SLURM_VERSION_NUMBER@ SLURM_VERSION_STRING = @SLURM_VERSION_STRING@ SO_LDFLAGS = @SO_LDFLAGS@ SSL_CPPFLAGS = @SSL_CPPFLAGS@ SSL_LDFLAGS = @SSL_LDFLAGS@ SSL_LIBS = @SSL_LIBS@ STRIP = @STRIP@ SUCMD = @SUCMD@ UTIL_LIBS = @UTIL_LIBS@ VERSION = @VERSION@ _libcurl_config = @_libcurl_config@ abs_builddir = @abs_builddir@ abs_srcdir = @abs_srcdir@ abs_top_builddir = @abs_top_builddir@ abs_top_srcdir = @abs_top_srcdir@ ac_ct_AR = @ac_ct_AR@ ac_ct_CC = @ac_ct_CC@ ac_ct_CXX = @ac_ct_CXX@ ac_ct_DUMPBIN = @ac_ct_DUMPBIN@ ac_have_man2html = @ac_have_man2html@ am__include = 
@am__include@ am__leading_dot = @am__leading_dot@ am__quote = @am__quote@ am__tar = @am__tar@ am__untar = @am__untar@ ax_pthread_config = @ax_pthread_config@ bindir = @bindir@ build = @build@ build_alias = @build_alias@ build_cpu = @build_cpu@ build_os = @build_os@ build_vendor = @build_vendor@ builddir = @builddir@ datadir = @datadir@ datarootdir = @datarootdir@ docdir = @docdir@ dvidir = @dvidir@ exec_prefix = @exec_prefix@ host = @host@ host_alias = @host_alias@ host_cpu = @host_cpu@ host_os = @host_os@ host_vendor = @host_vendor@ htmldir = ${datadir}/doc/${PACKAGE}-${SLURM_VERSION_STRING}/html includedir = @includedir@ infodir = @infodir@ install_sh = @install_sh@ libdir = @libdir@ libexecdir = @libexecdir@ localedir = @localedir@ localstatedir = @localstatedir@ lua_CFLAGS = @lua_CFLAGS@ lua_LIBS = @lua_LIBS@ mandir = @mandir@ mkdir_p = @mkdir_p@ oldincludedir = @oldincludedir@ pdfdir = @pdfdir@ prefix = @prefix@ program_transform_name = @program_transform_name@ psdir = @psdir@ sbindir = @sbindir@ sharedstatedir = @sharedstatedir@ srcdir = @srcdir@ sysconfdir = @sysconfdir@ target = @target@ target_alias = @target_alias@ target_cpu = @target_cpu@ target_os = @target_os@ target_vendor = @target_vendor@ top_build_prefix = @top_build_prefix@ top_builddir = @top_builddir@ top_srcdir = @top_srcdir@ man5_MANS = \ acct_gather.conf.5 \ bluegene.conf.5 \ burst_buffer.conf.5 \ cgroup.conf.5 \ cray.conf.5 \ ext_sensors.conf.5 \ gres.conf.5 \ nonstop.conf.5 \ slurm.conf.5 \ slurmdbd.conf.5 \ topology.conf.5 \ wiki.conf.5 EXTRA_DIST = $(man5_MANS) $(am__append_1) @HAVE_MAN2HTML_TRUE@html_DATA = \ @HAVE_MAN2HTML_TRUE@ acct_gather.conf.html \ @HAVE_MAN2HTML_TRUE@ bluegene.conf.html \ @HAVE_MAN2HTML_TRUE@ burst_buffer.conf.html \ @HAVE_MAN2HTML_TRUE@ cgroup.conf.html \ @HAVE_MAN2HTML_TRUE@ cray.conf.html \ @HAVE_MAN2HTML_TRUE@ ext_sensors.conf.html \ @HAVE_MAN2HTML_TRUE@ gres.conf.html \ @HAVE_MAN2HTML_TRUE@ nonstop.conf.html \ @HAVE_MAN2HTML_TRUE@ slurm.conf.html \ 
@HAVE_MAN2HTML_TRUE@ slurmdbd.conf.html \ @HAVE_MAN2HTML_TRUE@ topology.conf.html \ @HAVE_MAN2HTML_TRUE@ wiki.conf.html @HAVE_MAN2HTML_TRUE@MOSTLYCLEANFILES = ${html_DATA} @HAVE_MAN2HTML_TRUE@SUFFIXES = .html all: all-am .SUFFIXES: .SUFFIXES: .html .5 $(srcdir)/Makefile.in: @MAINTAINER_MODE_TRUE@ $(srcdir)/Makefile.am $(am__configure_deps) @for dep in $?; do \ case '$(am__configure_deps)' in \ *$$dep*) \ ( cd $(top_builddir) && $(MAKE) $(AM_MAKEFLAGS) am--refresh ) \ && { if test -f $@; then exit 0; else break; fi; }; \ exit 1;; \ esac; \ done; \ echo ' cd $(top_srcdir) && $(AUTOMAKE) --gnu doc/man/man5/Makefile'; \ $(am__cd) $(top_srcdir) && \ $(AUTOMAKE) --gnu doc/man/man5/Makefile .PRECIOUS: Makefile Makefile: $(srcdir)/Makefile.in $(top_builddir)/config.status @case '$?' in \ *config.status*) \ cd $(top_builddir) && $(MAKE) $(AM_MAKEFLAGS) am--refresh;; \ *) \ echo ' cd $(top_builddir) && $(SHELL) ./config.status $(subdir)/$@ $(am__depfiles_maybe)'; \ cd $(top_builddir) && $(SHELL) ./config.status $(subdir)/$@ $(am__depfiles_maybe);; \ esac; $(top_builddir)/config.status: $(top_srcdir)/configure $(CONFIG_STATUS_DEPENDENCIES) cd $(top_builddir) && $(MAKE) $(AM_MAKEFLAGS) am--refresh $(top_srcdir)/configure: @MAINTAINER_MODE_TRUE@ $(am__configure_deps) cd $(top_builddir) && $(MAKE) $(AM_MAKEFLAGS) am--refresh $(ACLOCAL_M4): @MAINTAINER_MODE_TRUE@ $(am__aclocal_m4_deps) cd $(top_builddir) && $(MAKE) $(AM_MAKEFLAGS) am--refresh $(am__aclocal_m4_deps): mostlyclean-libtool: -rm -f *.lo clean-libtool: -rm -rf .libs _libs install-man5: $(man5_MANS) @$(NORMAL_INSTALL) @list1='$(man5_MANS)'; \ list2=''; \ test -n "$(man5dir)" \ && test -n "`echo $$list1$$list2`" \ || exit 0; \ echo " $(MKDIR_P) '$(DESTDIR)$(man5dir)'"; \ $(MKDIR_P) "$(DESTDIR)$(man5dir)" || exit 1; \ { for i in $$list1; do echo "$$i"; done; \ if test -n "$$list2"; then \ for i in $$list2; do echo "$$i"; done \ | sed -n '/\.5[a-z]*$$/p'; \ fi; \ } | while read p; do \ if test -f $$p; then d=; else 
d="$(srcdir)/"; fi; \ echo "$$d$$p"; echo "$$p"; \ done | \ sed -e 'n;s,.*/,,;p;h;s,.*\.,,;s,^[^5][0-9a-z]*$$,5,;x' \ -e 's,\.[0-9a-z]*$$,,;$(transform);G;s,\n,.,' | \ sed 'N;N;s,\n, ,g' | { \ list=; while read file base inst; do \ if test "$$base" = "$$inst"; then list="$$list $$file"; else \ echo " $(INSTALL_DATA) '$$file' '$(DESTDIR)$(man5dir)/$$inst'"; \ $(INSTALL_DATA) "$$file" "$(DESTDIR)$(man5dir)/$$inst" || exit $$?; \ fi; \ done; \ for i in $$list; do echo "$$i"; done | $(am__base_list) | \ while read files; do \ test -z "$$files" || { \ echo " $(INSTALL_DATA) $$files '$(DESTDIR)$(man5dir)'"; \ $(INSTALL_DATA) $$files "$(DESTDIR)$(man5dir)" || exit $$?; }; \ done; } uninstall-man5: @$(NORMAL_UNINSTALL) @list='$(man5_MANS)'; test -n "$(man5dir)" || exit 0; \ files=`{ for i in $$list; do echo "$$i"; done; \ } | sed -e 's,.*/,,;h;s,.*\.,,;s,^[^5][0-9a-z]*$$,5,;x' \ -e 's,\.[0-9a-z]*$$,,;$(transform);G;s,\n,.,'`; \ dir='$(DESTDIR)$(man5dir)'; $(am__uninstall_files_from_dir) install-htmlDATA: $(html_DATA) @$(NORMAL_INSTALL) @list='$(html_DATA)'; test -n "$(htmldir)" || list=; \ if test -n "$$list"; then \ echo " $(MKDIR_P) '$(DESTDIR)$(htmldir)'"; \ $(MKDIR_P) "$(DESTDIR)$(htmldir)" || exit 1; \ fi; \ for p in $$list; do \ if test -f "$$p"; then d=; else d="$(srcdir)/"; fi; \ echo "$$d$$p"; \ done | $(am__base_list) | \ while read files; do \ echo " $(INSTALL_DATA) $$files '$(DESTDIR)$(htmldir)'"; \ $(INSTALL_DATA) $$files "$(DESTDIR)$(htmldir)" || exit $$?; \ done uninstall-htmlDATA: @$(NORMAL_UNINSTALL) @list='$(html_DATA)'; test -n "$(htmldir)" || list=; \ files=`for p in $$list; do echo $$p; done | sed -e 's|^.*/||'`; \ dir='$(DESTDIR)$(htmldir)'; $(am__uninstall_files_from_dir) tags TAGS: ctags CTAGS: cscope cscopelist: distdir: $(DISTFILES) @srcdirstrip=`echo "$(srcdir)" | sed 's/[].[^$$\\*]/\\\\&/g'`; \ topsrcdirstrip=`echo "$(top_srcdir)" | sed 's/[].[^$$\\*]/\\\\&/g'`; \ list='$(DISTFILES)'; \ dist_files=`for file in $$list; do echo $$file; done | \ 
sed -e "s|^$$srcdirstrip/||;t" \ -e "s|^$$topsrcdirstrip/|$(top_builddir)/|;t"`; \ case $$dist_files in \ */*) $(MKDIR_P) `echo "$$dist_files" | \ sed '/\//!d;s|^|$(distdir)/|;s,/[^/]*$$,,' | \ sort -u` ;; \ esac; \ for file in $$dist_files; do \ if test -f $$file || test -d $$file; then d=.; else d=$(srcdir); fi; \ if test -d $$d/$$file; then \ dir=`echo "/$$file" | sed -e 's,/[^/]*$$,,'`; \ if test -d "$(distdir)/$$file"; then \ find "$(distdir)/$$file" -type d ! -perm -700 -exec chmod u+rwx {} \;; \ fi; \ if test -d $(srcdir)/$$file && test $$d != $(srcdir); then \ cp -fpR $(srcdir)/$$file "$(distdir)$$dir" || exit 1; \ find "$(distdir)/$$file" -type d ! -perm -700 -exec chmod u+rwx {} \;; \ fi; \ cp -fpR $$d/$$file "$(distdir)$$dir" || exit 1; \ else \ test -f "$(distdir)/$$file" \ || cp -p $$d/$$file "$(distdir)/$$file" \ || exit 1; \ fi; \ done check-am: all-am check: check-am all-am: Makefile $(MANS) $(DATA) installdirs: for dir in "$(DESTDIR)$(man5dir)" "$(DESTDIR)$(htmldir)"; do \ test -z "$$dir" || $(MKDIR_P) "$$dir"; \ done install: install-am install-exec: install-exec-am install-data: install-data-am uninstall: uninstall-am install-am: all-am @$(MAKE) $(AM_MAKEFLAGS) install-exec-am install-data-am installcheck: installcheck-am install-strip: if test -z '$(STRIP)'; then \ $(MAKE) $(AM_MAKEFLAGS) INSTALL_PROGRAM="$(INSTALL_STRIP_PROGRAM)" \ install_sh_PROGRAM="$(INSTALL_STRIP_PROGRAM)" INSTALL_STRIP_FLAG=-s \ install; \ else \ $(MAKE) $(AM_MAKEFLAGS) INSTALL_PROGRAM="$(INSTALL_STRIP_PROGRAM)" \ install_sh_PROGRAM="$(INSTALL_STRIP_PROGRAM)" INSTALL_STRIP_FLAG=-s \ "INSTALL_PROGRAM_ENV=STRIPPROG='$(STRIP)'" install; \ fi mostlyclean-generic: -test -z "$(MOSTLYCLEANFILES)" || rm -f $(MOSTLYCLEANFILES) clean-generic: distclean-generic: -test -z "$(CONFIG_CLEAN_FILES)" || rm -f $(CONFIG_CLEAN_FILES) -test . 
= "$(srcdir)" || test -z "$(CONFIG_CLEAN_VPATH_FILES)" || rm -f $(CONFIG_CLEAN_VPATH_FILES) maintainer-clean-generic: @echo "This command is intended for maintainers to use" @echo "it deletes files that may require special tools to rebuild." clean: clean-am clean-am: clean-generic clean-libtool mostlyclean-am distclean: distclean-am -rm -f Makefile distclean-am: clean-am distclean-generic dvi: dvi-am dvi-am: html: html-am html-am: info: info-am info-am: install-data-am: install-htmlDATA install-man install-dvi: install-dvi-am install-dvi-am: install-exec-am: install-html: install-html-am install-html-am: install-info: install-info-am install-info-am: install-man: install-man5 install-pdf: install-pdf-am install-pdf-am: install-ps: install-ps-am install-ps-am: installcheck-am: maintainer-clean: maintainer-clean-am -rm -f Makefile maintainer-clean-am: distclean-am maintainer-clean-generic mostlyclean: mostlyclean-am mostlyclean-am: mostlyclean-generic mostlyclean-libtool pdf: pdf-am pdf-am: ps: ps-am ps-am: uninstall-am: uninstall-htmlDATA uninstall-man uninstall-man: uninstall-man5 .MAKE: install-am install-strip .PHONY: all all-am check check-am clean clean-generic clean-libtool \ cscopelist-am ctags-am distclean distclean-generic \ distclean-libtool distdir dvi dvi-am html html-am info info-am \ install install-am install-data install-data-am install-dvi \ install-dvi-am install-exec install-exec-am install-html \ install-html-am install-htmlDATA install-info install-info-am \ install-man install-man5 install-pdf install-pdf-am install-ps \ install-ps-am install-strip installcheck installcheck-am \ installdirs maintainer-clean maintainer-clean-generic \ mostlyclean mostlyclean-generic mostlyclean-libtool pdf pdf-am \ ps ps-am tags-am uninstall uninstall-am uninstall-htmlDATA \ uninstall-man uninstall-man5 @HAVE_MAN2HTML_TRUE@.5.html: @HAVE_MAN2HTML_TRUE@ `dirname $<`/../man2html.py @SLURM_MAJOR@.@SLURM_MINOR@ $(srcdir)/../../html/header.txt 
$(srcdir)/../../html/footer.txt $<

# Tell versions [3.59,3.63) of GNU make to not export all variables.
# Otherwise a system limit (for SysV at least) may be exceeded.
.NOEXPORT:

slurm-slurm-15-08-7-1/doc/man/man5/acct_gather.conf.5:
.TH "acct_gather.conf" "5" "Slurm Configuration File" "April 2015" "Slurm Configuration File"
.SH "NAME"
acct_gather.conf \- Slurm configuration file for the acct_gather plugins
.SH "DESCRIPTION"
\fBacct_gather.conf\fP is an ASCII file which defines parameters used by Slurm's acct_gather related plugins. The file location can be modified at system build time using the DEFAULT_SLURM_CONF parameter or at execution time by setting the SLURM_CONF environment variable. The file will always be located in the same directory as the \fBslurm.conf\fP file.
.LP
Parameter names are case insensitive. Any text following a "#" in the configuration file is treated as a comment through the end of that line. The size of each line in the file is limited to 1024 characters. Changes to the configuration file take effect upon restart of Slurm daemons, daemon receipt of the SIGHUP signal, or execution of the command "scontrol reconfigure" unless otherwise noted.
.LP
The following acct_gather.conf parameters are defined to control the general behavior of various plugins in Slurm.
.LP
The acct_gather.conf file is different from other Slurm .conf files: each plugin defines which options are available, so if you do not load the plugin that defines an option, that option will appear to be unknown to Slurm and could prevent Slurm from loading. If you decide to change plugin types, you might also have to change the related options as well.
.TP
\fBEnergyIPMI\fR
Options used for AcctGatherEnergyType/ipmi are as follows:
.RS
.TP 10
\fBEnergyIPMIFrequency\fR=
This parameter is the number of seconds between BMC access samples.
.TP
\fBEnergyIPMICalcAdjustment\fR=
If set to "yes", the consumption between the last BMC access sample and a step consumption update is approximated to get more accurate task consumption. The adjustment is made at the step start and each time the consumption is updated, including the step end. The approximations are not accumulated; only the first and last adjustments are used to calculate the consumption. The default is "no".
.TP
\fBEnergyIPMIPowerSensors\fR=
Optionally specify the ids of the sensors to be used. Multiple keys can be set with ";" separators. The key "Node" is mandatory and is used to know the consumed energy for nodes (scontrol show node) and jobs (sacct). Other keys are optional and are named by the administrator. These keys are useful only when profiling is activated for energy, to store the power (in watts) of each key. Sensor ids are integers; multiple values can be set with "," separators. The sum of the listed sensors is used for each key. EnergyIPMIPowerSensors is optional; the default value is "Node=number" where "number" is the id of the first power sensor returned by ipmi-sensors.
.br
e.g.
.br
.na
EnergyIPMIPowerSensors=Node=16,19,23,26;Socket0=16,23;Socket1=19,26;SSUP=23,26;KNC=16,19
.ad
.br
EnergyIPMIPowerSensors=Node=29,32;SSUP0=29;SSUP1=32
.br
EnergyIPMIPowerSensors=Node=1280
.LP
The following acct_gather.conf parameters are defined to control the IPMI config default values for libipmiconsole.
.TP 10
\fBEnergyIPMIUsername\fR=\fIUSERNAME\fR
Specify BMC Username.
.TP
\fBEnergyIPMIPassword\fR=\fIPASSWORD\fR
Specify BMC Password.
.RE
.TP
\fBProfileHDF5\fR
Options used for AcctGatherProfileType/hdf5 are as follows:
.RS
.TP 10
\fBProfileHDF5Dir\fR=
This parameter is the path to the shared folder into which the acct_gather_profile plugin will write detailed data (usually as an HDF5 file). The directory is assumed to be on a file system shared by the controller and all compute nodes. This is a required parameter.
.TP
\fBProfileHDF5Default\fR
A comma delimited list of data types to be collected for each job submission. Allowed values are:
.RS
.TP 8
\fBAll\fR
All data types are collected. (Cannot be combined with other values.)
.TP
\fBNone\fR
No data types are collected. This is the default. (Cannot be combined with other values.)
.TP
\fBEnergy\fR
Energy data is collected.
.TP
\fBFilesystem\fR
File system (Lustre) data is collected.
.TP
\fBNetwork\fR
Network (InfiniBand) data is collected.
.TP
\fBTask\fR
Task (I/O, Memory, ...) data is collected.
.RE
.RE
.TP
\fBInfinibandOFED\fR
Options used for AcctGatherInfinibandType/ofed are as follows:
.RS
.TP 10
\fBInfinibandOFEDPort\fR=
This parameter represents the port number of the local Infiniband card to be monitored. The default port is 1.
.RE
.RE
.SH "EXAMPLE"
.LP
.br
###
.br
# Slurm acct_gather configuration file
.br
###
.br
# Parameters for AcctGatherEnergy/ipmi plugin
.br
EnergyIPMIFrequency=10
.br
EnergyIPMICalcAdjustment=yes
.br
#
.br
# Parameters for AcctGatherProfileType/hdf5 plugin
.br
ProfileHDF5Dir=/app/slurm/profile_data
.br
# Parameters for AcctGatherInfiniband/ofed plugin
.br
InfinibandOFEDPort=1
.br
.SH "COPYING"
Copyright (C) 2012-2013 Bull. Produced at Bull (cf. DISCLAIMER).
.LP
This file is part of Slurm, a resource management program. For details, see .
.LP
Slurm is free software; you can redistribute it and/or modify it under the terms of the GNU General Public License as published by the Free Software Foundation; either version 2 of the License, or (at your option) any later version.
.LP
Slurm is distributed in the hope that it will be useful, but WITHOUT ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU General Public License for more details.
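The EnergyIPMIPowerSensors syntax described above (key definitions separated by ";", sensor ids by ",", with the sum of the listed sensors reported per key) can be illustrated with a short standalone sketch. This is not Slurm source code, and the per-sensor watt readings below are invented for the illustration:

```python
def parse_power_sensors(spec):
    """Parse an EnergyIPMIPowerSensors value, e.g. "Node=16,19;Socket0=16".

    Returns a dict mapping each key name to its list of integer sensor ids.
    """
    keys = {}
    for item in spec.split(";"):
        name, ids = item.split("=")
        keys[name] = [int(i) for i in ids.split(",")]
    return keys

def key_power(keys, readings):
    """Sum the per-sensor watt readings for each configured key."""
    return {name: sum(readings[i] for i in ids) for name, ids in keys.items()}

spec = "Node=16,19,23,26;Socket0=16,23;Socket1=19,26"
keys = parse_power_sensors(spec)
readings = {16: 40, 19: 42, 23: 38, 26: 41}   # watts per sensor id (made up)
print(key_power(keys, readings))
# {'Node': 161, 'Socket0': 78, 'Socket1': 83}
```

Note how sensor 16 contributes to both "Node" and "Socket0": a sensor id may appear under several keys, and each key independently sums the sensors listed for it.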
.SH "SEE ALSO"
.LP
\fBslurm.conf\fR(5)

slurm-slurm-15-08-7-1/doc/man/man5/bluegene.conf.5:
.TH "bluegene.conf" "5" "Slurm Configuration File" "April 2015" "Slurm Configuration File"
.SH "NAME"
bluegene.conf \- Slurm configuration file for BlueGene systems
.SH "DESCRIPTION"
\fBbluegene.conf\fP is an ASCII file which describes IBM BlueGene specific Slurm configuration information. This includes specifications for bgblock layout, configuration, logging, etc. The file location can be modified at system build time using the DEFAULT_SLURM_CONF parameter or at execution time by setting the SLURM_CONF environment variable. The file will always be located in the same directory as the \fBslurm.conf\fP file.
.LP
Parameter names are case insensitive. Any text following a "#" in the configuration file is treated as a comment through the end of that line. Changes to the configuration file take effect only upon restart of the slurmctld daemon; "scontrol reconfigure" does nothing with this file, and changes will only take place after a restart of the controller.
.LP
There are some differences between BlueGene/L, BlueGene/P and BlueGene/Q systems with respect to the contents of the bluegene.conf file.
.SH "The BlueGene/L specific options are:"
.TP
\fBAltBlrtsImage\fR
Alternative BlrtsImage. This is an optional field only used for multiple images on a system and should be followed by a Groups option indicating the user groups allowed to use this image (i.e. Groups=da,jette). If Groups is not specified then this image will be usable by all groups. You can put as many alternative images as you want in the bluegene.conf file.
.TP
\fBAltLinuxImage\fR
Alternative LinuxImage. This is an optional field only used for multiple images on a system and should be followed by a Groups option indicating the user groups allowed to use this image (i.e. Groups=da,jette).
If Groups is not specified then this image will be usable by all groups. You can put as many alternative images as you want in the bluegene.conf file.
.TP
\fBAltRamDiskImage\fR
Alternative RamDiskImage. This is an optional field only used for multiple images on a system and should be followed by a Groups option indicating the user groups allowed to use this image (i.e. Groups=da,jette). If Groups is not specified then this image will be usable by all groups. You can put as many alternative images as you want in the bluegene.conf file.
.TP
\fBBlrtsImage\fR
BlrtsImage used for creation of all bgblocks. There is no default value and this must be specified.
.TP
\fBLinuxImage\fR
LinuxImage used for creation of all bgblocks. There is no default value and this must be specified.
.TP
\fBRamDiskImage\fR
RamDiskImage used for creation of all bgblocks. There is no default value and this must be specified.
.SH "The BlueGene/P specific options are:"
.TP
\fBAltCnloadImage\fR
Alternative CnloadImage. This is an optional field only used for multiple images on a system and should be followed by a Groups option indicating the user groups allowed to use this image (i.e. Groups=da,jette). If Groups is not specified then this image will be usable by all groups. You can put as many alternative images as you want in the conf file.
.TP
\fBAltIoloadImage\fR
Alternative IoloadImage. This is an optional field only used for multiple images on a system and should be followed by a Groups option indicating the user groups allowed to use this image (i.e. Groups=da,jette). If Groups is not specified then this image will be usable by all groups. You can put as many alternative images as you want in the conf file.
.TP
\fBCnloadImage\fR
CnloadImage used for creation of all bgblocks. There is no default value and this must be specified.
.TP
\fBIoloadImage\fR
IoloadImage used for creation of all bgblocks. There is no default value and this must be specified.
.SH "The BlueGene/Q specific options are:"
.TP
\fBAllowSubBlockAllocations\fR
Can be set to Yes or No, defaults to No. This option allows multiple users to run jobs as small as 1 cnode in size on a block one midplane in size and smaller. While this option gives great flexibility to run a host of job sizes previously not available on any BlueGene system, it also may cause security concerns since IO traffic can share the same path with other jobs.

NOTE - There is a current limitation for sub-block jobs and how the system (used for I/O) and user (used for MPI) torus class routes are configured. The network device hardware has cutoff registers to prevent packets from flowing outside of the sub-block. Unfortunately, when the sub-block has a size of 3, the job can attempt to send user packets outside of its sub-block. This causes it to be terminated by signal 36. To prevent this from happening, Slurm does not allow a sub-block to be used with any dimension of 3.

NOTE - The current IBM API does not allow wrapping inside a midplane, meaning you cannot create a sub-block of size 2 with nodes in the 0 and 3 positions. Slurm will support this in the future when the underlying system allows it.
.TP
\fBRebootQOSList\fR
A comma separated list of QOS's. Jobs with these QOS's are subject to being preempted when they are the only jobs running on a block that has either compute nodes in software error or an action item set.
.SH "All options below are common on all BlueGene systems:"
.TP
\fBAltMloaderImage\fR
Alternative MloaderImage. This is an optional field only used for multiple images on a system and should be followed by a Groups option indicating the user groups allowed to use this image (i.e. Groups=da,jette). If Groups is not specified then this image will be usable by all groups. You can put as many alternative images as you want in the conf file.
.TP
\fBBridgeAPILogFile\fR
Fully qualified pathname of a file into which the Bridge API logs are to be written.
There is no default value. .TP \fBBridgeAPIVerbose\fR Specify how verbose the Bridge API logs should be. The default value is 0. .RS .TP \fB0\fR: Log only error and warning messages .TP \fB1\fR: Log level 0 plus information messages .TP \fB2\fR: Log level 1 plus basic debug messages .TP \fB3\fR: Log level 2 plus more debug messages .TP \fB4\fR: Log all messages .RE .TP \fBDefaultConnType\fR Specify the default Connection Type(s) to be used when generating new blocks in Dynamic LayoutMode. The default value is TORUS. On a BGQ system you can specify a different connection type for each dimension. (i.e. T,T,T,M would make the default be torus in all dimensions except Z, where it would be mesh.) NOTE - If a block is requested that can use all the midplanes in a dimension, torus will always be used. .TP \fBDenyPassthrough\fR Specify which dimensions you do not want to allow pass\-throughs. Valid options are A, X, Y, Z or all ("A" applies only to BlueGene/Q systems). For example, to prevent pass\-throughs in the X and Y dimensions you would specify "DenyPassthrough=X,Y". By default, pass\-throughs are enabled in every dimension. .TP \fBIONodesPerMP\fR The number of IO nodes on a midplane. This number must be the smallest number if you have a heterogeneous system. There is no default value and this must be specified. The typical settings for BlueGene/L systems are as follows: For IO rich systems, 64 is the value that should be used to create small blocks. For systems that are not IO rich, or for which small blocks are not desirable, 8 is usually the number to use. For BlueGene/P IO rich systems, 32 is the value that should be used to create small blocks since there are only 2 IO nodes per nodecard instead of 4 as on BlueGene/L. .TP \fBLayoutMode\fR Describes how Slurm should create bgblocks. .RS .TP 10 \fBSTATIC\fR: Create and use the defined non\-overlapping bgblocks. .TP \fBOVERLAP\fR: Create and use the defined bgblocks, which may overlap.
It is highly recommended that none of the bgblocks have any passthroughs in the X\-dimension on BGL and BGP systems. \fBUse this mode with extreme caution.\fR .TP \fBDYNAMIC\fR: Create and use bgblocks as needed for each job. Bgblocks will not be defined in the bluegene.conf file. Dynamic partitioning may introduce fragmentation of resources. \fBUse this mode with mild caution.\fR .RE .TP \fBMaxBlockInError\fR MaxBlockInError is used on BGQ systems to specify the percentage of a block allowed in an error state before no future jobs are allowed. Since cnodes can go into Software Failure without failing the block, this option is used when multiple jobs are allowed to run on a block; once the percentage of cnodes in an error state in that block breaches this limit, no future jobs will be allowed to run on the block. After all jobs are finished on the block, the block is freed, which will resolve any cnodes in an error state. The default is 0, which means that once any cnodes are in an error state future jobs are disallowed. .TP \fBMidplaneNodeCnt\fR The number of c\-nodes (compute nodes) per midplane. There is no default value and this must be specified (usually 512). .TP \fBMloaderImage\fR MloaderImage used for creation of all bgblocks. There is no default value and this must be specified. .TP \fBNodeCardNodeCnt\fR or \fBNodeBoardNodeCnt\fR Number of c\-nodes per nodecard / nodeboard. There is no default value and this must be specified. For most BlueGene systems this is usually 32. .TP \fBSubMidplaneSystem\fR Set to Yes if this system is not a full midplane in size. The default is No (regular system). .LP Each bgblock is defined by the midplanes used to construct it. Ordering is very important for laying out switch wires. Please use the smap tool to define blocks and do not change the order of blocks created. A bgblock is implicitly created containing all resources on the system. Bgblocks must not overlap in static mode (except for the implicitly created bgblock).
This will be the case when smap is used to create a configuration file. All Nodes defined here must also be defined in the slurm.conf file. Define only the numeric coordinates of the bgblocks here. The prefix will be based upon the NodeName defined in slurm.conf. .TP \fBMPs\fR Define the coordinates of the bgblock end points. For BlueGene/L and BlueGene/P systems there will be three coordinates (X, Y, and Z). For BlueGene/Q systems there will be four coordinates (A, X, Y, and Z). .TP \fBType\fR Define the network connection type for the bgblock. The default value is TORUS. On a BGQ system you can specify a different connection type for each dimension. (i.e. T,T,T,M would make the default be torus in all dimensions except Z, where it would be mesh.) NOTE - If a block is requested that can use all the midplanes in a dimension, torus will always be used. .RS .TP 8 \fBMESH\fR: Communications occur over a mesh. .TP \fBSMALL\fR: The midplane is divided into more than one bgblock. The administrator should define the number of single nodecards and quarter midplane blocks using the options \fB32CNBlocks\fR and \fB128CNBlocks\fR respectively for a BlueGene/L system. \fB64CNBlocks\fR and \fB256CNBlocks\fR are also available for later BlueGene systems. \fB16CNBlocks\fR is also valid on BlueGene/P systems. Keep in mind you must have at least one IO node per block. So if you only have 4 ionodes per midplane, the smallest block you will be able to make is 128 c-nodes. The total number of c\-nodes of the blocks in a small request must not exceed \fBMidplaneNodeCnt\fR. If none are specified, the midplane will be divided into four 128 c-node blocks. See example below. .TP \fBTORUS\fR: Communications occur over a torus (end\-points of the network directly connect).
.RE .SH "EXAMPLE" .LP .br ################################################################## .br # bluegene.conf for a Bluegene/L system .br # build by smap on 03/06/2006 .br ################################################################## .br BridgeAPILogFile=/var/log/slurm/bridgeapi.log .br BridgeAPIVerbose=2 .br BlrtsImage=/bgl/BlueLight/ppcfloor/bglsys/bin/rts_hw.rts .br LinuxImage=/bgl/BlueLight/ppcfloor/bglsys/bin/zImage.elf .br MloaderImage=/bgl/BlueLight/ppcfloor/bglsys/bin/mmcs\-mloader.rts .br RamDiskImage=/bgl/BlueLight/ppcfloor/bglsys/bin/ramdisk.elf .br MidplaneNodeCnt=512 .br NodeCardNodeCnt=32 .br IONodesPerMP=64 # An I/O rich environment .br LayoutMode=STATIC .br ################################################################## .br # LEAVE AS COMMENT, Full\-system bgblock, implicitly created .br # BPs=[000x333] Type=TORUS # 4x4x4 = 64 midplanes .br ################################################################## .br BPs=[000x133] Type=TORUS # 2x4x4 = 32 .br BPs=[200x233] Type=TORUS # 1x4x4 = 16 .br BPs=[300x313] Type=TORUS # 1x2x4 = 8 .br BPs=[320x323] Type=TORUS # 1x1x4 = 4 .br BPs=[330x331] Type=TORUS # 1x1x2 = 2 .br BPs=[332] Type=TORUS # 1x1x1 = 1 .br BPs=[333] Type=SMALL 32CNBlocks=4 128CNBlocks=3 # 32 * 4 + 128 * 3 = 512 .SH "COPYING" Copyright (C) 2006-2010 The Regents of the University of California. Produced at Lawrence Livermore National Laboratory (cf, DISCLAIMER). .br Copyright (C) 2010\-2013 SchedMD LLC. .LP This file is part of Slurm, a resource management program. For details, see . .LP Slurm is free software; you can redistribute it and/or modify it under the terms of the GNU General Public License as published by the Free Software Foundation; either version 2 of the License, or (at your option) any later version. .LP Slurm is distributed in the hope that it will be useful, but WITHOUT ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. 
See the GNU General Public License for more details. .SH "FILES" /etc/bluegene.conf .SH "SEE ALSO" .LP \fBsmap\fR(1), \fBslurm.conf\fR(5) .TH "burst_buffer.conf" "5" "December 2015" "burst_buffer.conf 15.08" "Slurm configuration file" .SH "NAME" burst_buffer.conf \- Slurm configuration file for burst buffer management. .SH "DESCRIPTION" \fBburst_buffer.conf\fP is an ASCII file which describes the configuration of burst buffer resource management. This file is only required on the head node(s), where the slurmctld daemon executes. The file location can be modified at system build time using the DEFAULT_SLURM_CONF parameter or at execution time by setting the SLURM_CONF environment variable. The file will always be located in the same directory as the \fBslurm.conf\fP file. In order to support multiple configuration files for multiple burst buffer plugins, the configuration file may alternatively be given a name containing the plugin name. For example, if "burst_buffer.conf" is not found, the burst_buffer/generic configuration could be read from a file named "burst_buffer_generic.conf". .LP Parameter names are case insensitive. Any text following a "#" in the configuration file is treated as a comment through the end of that line. Changes to the configuration file take effect upon restart of Slurm daemons, daemon receipt of the SIGHUP signal, or execution of the command "scontrol reconfigure". .LP The configuration parameters available include: .TP \fBAllowUsers\fR Comma separated list of user names and/or IDs permitted to use burst buffers. The options \fBAllowUsers\fR and \fBDenyUsers\fR can not both be specified. By default all users are permitted to use burst buffers. .TP \fBCreateBuffer\fR Fully qualified path name of a program which will create both persistent and per\-job burst buffers.
This option is not used by the burst_buffer/cray plugin. .TP \fBDefaultPool\fR Name of the pool used by default for resource allocations. The default value is the first pool reported by the burst buffer infrastructure. This option is only used by the burst_buffer/cray plugin. .TP \fBDenyUsers\fR Colon delimited list of user names and/or IDs prevented from using burst buffers. The options \fBAllowUsers\fR and \fBDenyUsers\fR can not both be specified. By default all users are permitted to use burst buffers. .TP \fBDestroyBuffer\fR Fully qualified path name of a program which will destroy both persistent and per\-job burst buffers. This option is not used by the burst_buffer/cray plugin. .TP \fBFlags\fR String used to control various functions. Multiple options may be comma separated. Supported options include: .RS .TP \fBDisablePersistent\fR Prevents regular users from being able to create and destroy persistent burst buffers. This is the default behaviour: only privileged users (Slurm operators and administrators) can create or destroy persistent burst buffers. .TP \fBEmulateCray\fR Emulate a Cray DataWarp system using the dw_wlm_cli script in the burst_buffer/cray plugin. .TP \fBEnablePersistent\fR Enables regular users to create and destroy persistent burst buffers. By default, only privileged users (Slurm operators and administrators) can create or destroy persistent burst buffers. .TP \fBPrivateData\fR If set, then only Slurm operators and the burst buffer owner can see burst buffer data. .TP \fBTeardownFailure\fR If set, then teardown a burst buffer after a file staging error. Otherwise preserve the burst buffer for analysis and manual teardown. .RE .TP \fBGetSysState\fR Fully qualified path name of a program which will return the current burst buffer state. See the src/plugins/burst_buffer/generic/bb_get_state.example in the Slurm distribution for an example.
For the Cray plugin, this should be the path of the \fIdw_wlm_cli\fR command and its default value is /opt/cray/dw_wlm/default/bin/dw_wlm_cli. .TP \fBGranularity\fR Granularity of job space allocations in units of bytes. The numeric value may have a suffix of "m" (megabytes), "g" (gigabytes), "t" (terabytes), "p" (petabytes), or "n" (nodes). Bytes is assumed if no suffix is supplied. This option is not used by the burst_buffer/cray plugin. .\ Possible future enhancement .\ .TP .\ \fBGres\fR .\ Generic resources associated with burst buffers. .\ This is a completely separate name space from the Gres defined in the slurm.conf .\ file. .\ The Gres value consists of a comma separated list of generic resources, .\ each of which includes a name separated by a colon and a numeric value. .\ The numeric value can include a suffix of "k", "m" or "g", which multiplies .\ the numeric value by 1,024, 1,048,576, or 1,073,741,824 respectively. .\ The numeric value is a 32-bit value. .\ See the example below. .TP \fBOtherTimeout\fR If a burst buffer operation (other than job validation, stage in, or stage out) runs for longer than this number of seconds, the job will be placed in a held state. A Slurm administrator will be required to release the job. By default there is a 300 second (5 minute) timeout for these operations. Also see the \fBStageInTimeout\fR, \fBStageOutTimeout\fR, and \fBValidateTimeout\fR options. (NOTE: This option was added after the release of Slurm version 15.08 and will not be visible to users with Slurm tools until the version 16.05 release.) .TP \fBPrivateData\fR If set to "true" then users will only be able to view burst buffers they can use. Slurm administrators will still be able to view all burst buffers. By default, users can view all burst buffers. .TP \fBStageInTimeout\fR If the stage in of files for a job takes more than this number of seconds, the burst buffer will be released and the job will be placed in a held state.
A Slurm administrator will be required to release the job. By default there is a one day timeout for the stage in process. .TP \fBStageOutTimeout\fR If the stage out of files for a job takes more than this number of seconds, the burst buffer will be released and the job will be purged. By default there is a one day timeout for the stage out process. .TP \fBStartStageIn\fR Fully qualified path name of a program which will stage files in for a job. See the src/plugins/burst_buffer/generic/bb_start_stage_in.example in the Slurm distribution for an example. This option is not used by the burst_buffer/cray plugin. .TP \fBStartStageOut\fR Fully qualified path name of a program which will stage files out for a job. See the src/plugins/burst_buffer/generic/bb_start_stage_out.example in the Slurm distribution for an example. This option is not used by the burst_buffer/cray plugin. .TP \fBStopStageIn\fR Fully qualified path name of a program which will stop staging files in for a job. See the src/plugins/burst_buffer/generic/bb_stop_stage_out.example in the Slurm distribution for an example. This option is not used by the burst_buffer/cray plugin. .TP \fBStopStageOut\fR Fully qualified path name of a program which will stop staging files out for a job. See the src/plugins/burst_buffer/generic/bb_stop_stage_out.example in the Slurm distribution for an example. This option is not used by the burst_buffer/cray plugin. .TP \fBValidateTimeout\fR If the validation of a job submission request takes more than this number of seconds, the submission will be rejected. The value of \fBValidateTimeout\fR must be less than the value of \fBMessageTimeout\fR configured in the slurm.conf file or job submission requests may fail with a response timeout error. By default there is a 5 second timeout for the validation operations. (NOTE: This option was added after the release of Slurm version 15.08 and will not be visible to users with Slurm tools until the version 16.05 release.)
.SH "EXAMPLE" .LP .br ################################################################## .br # Slurm's burst buffer configuration file (burst_buffer.conf) .br ################################################################## .br AllowUsers=alan,brenda .br PrivateData=true .\ .br .\ Gres=nodes:10,other:20 .br # .br Granularity=1G .br # .br StageInTimeout=30 # Seconds .br StageOutTimeout=30 # Seconds .br # .br CreateBuffer=/usr/local/slurm/15.08/sbin/CB .br DestroyBuffer=/usr/local/slurm/15.08/sbin/DB .br GetSysState=/usr/local/slurm/15.08/sbin/GSS .br StartStageIn=/usr/local/slurm/15.08/sbin/SSI .br StartStageOut=/usr/local/slurm/15.08/sbin/SSO .br StopStageIn=/usr/local/slurm/15.08/sbin/PSI .br StopStageOut=/usr/local/slurm/15.08/sbin/PSO .SH "COPYING" Copyright (C) 2014-2015 SchedMD LLC. .LP This file is part of Slurm, a resource management program. For details, see . .LP Slurm is free software; you can redistribute it and/or modify it under the terms of the GNU General Public License as published by the Free Software Foundation; either version 2 of the License, or (at your option) any later version. .LP Slurm is distributed in the hope that it will be useful, but WITHOUT ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU General Public License for more details. .SH "SEE ALSO" .LP \fBslurm.conf\fR(5) .TH "cgroup.conf" "5" "Slurm Configuration File" "November 2015" "Slurm Configuration File" .SH "NAME" cgroup.conf \- Slurm configuration file for the cgroup support .SH "DESCRIPTION" \fBcgroup.conf\fP is an ASCII file which defines parameters used by Slurm's Linux cgroup related plugins. The file location can be modified at system build time using the DEFAULT_SLURM_CONF parameter or at execution time by setting the SLURM_CONF environment variable.
The file will always be located in the same directory as the \fBslurm.conf\fP file. .LP Parameter names are case insensitive. Any text following a "#" in the configuration file is treated as a comment through the end of that line. Changes to the configuration file take effect upon restart of Slurm daemons, daemon receipt of the SIGHUP signal, or execution of the command "scontrol reconfigure" unless otherwise noted. .LP For general Slurm Cgroups information, see the Cgroups Guide at . .LP The following cgroup.conf parameters are defined to control the general behavior of Slurm cgroup plugins. .TP \fBCgroupAutomount\fR= Slurm cgroup plugins require a valid and functional cgroup subsystem to be mounted under /cgroup/. When launched, plugins check their subsystem availability. If it is not available, the plugin launch fails unless CgroupAutomount is set to yes. In that case, the plugin will first try to mount the required subsystems. .TP \fBCgroupMountpoint\fR=\fIPATH\fR Specify the \fIPATH\fR under which cgroups should be mounted. This should be a writable directory which will contain cgroups mounted one per subsystem. The default \fIPATH\fR is /cgroup. .TP \fBCgroupReleaseAgentDir\fR= Used to tune the cgroup system behavior. This parameter identifies the location of the directory containing Slurm cgroup release_agent files. .SH "TASK/CGROUP PLUGIN" .LP The following cgroup.conf parameters are defined to control the behavior of this particular plugin: .TP \fBAllowedDevicesFile\fR= If the ConstrainDevices field is set to "yes" then this file has to be used to declare the devices that need to be allowed by default for all the jobs. The current implementation of the cgroup devices subsystem works as a whitelist of entries, which means that in order to isolate a job's access to particular devices we need to allow access to all devices by default and then deny access to those devices the job does not have permission to use.
The default value is "/etc/slurm/cgroup_allowed_devices_file.conf". The syntax of the file accepts one device per line and it permits lines like /dev/sda* or /dev/cpu/*/*. See also an example of this file in etc/cgroup_allowed_devices_file.conf.example. .TP \fBAllowedRAMSpace\fR= Constrain the job cgroup RAM to this percentage of the allocated memory. The percentage supplied may be expressed as floating point number, e.g. 98.5. If the \fBAllowedRAMSpace\fR limit is exceeded, the job steps will be killed and a warning message will be written to standard error. Also see \fBConstrainRAMSpace\fR. The default value is 100. .TP \fBAllowedSwapSpace\fR= Constrain the job cgroup swap space to this percentage of the allocated memory. The default value is 0, which means that RAM+Swap will be limited to \fBAllowedRAMSpace\fR. The supplied percentage may be expressed as a floating point number, e.g. 50.5. If the limit is exceeded, the job steps will be killed and a warning message will be written to standard error. Also see \fBConstrainSwapSpace\fR. .TP \fBConstrainCores\fR= If configured to "yes" then constrain allowed cores to the subset of allocated resources. It uses the cpuset subsystem. The default value is "no". .TP \fBConstrainDevices\fR= If configured to "yes" then constrain the job's allowed devices based on GRES allocated resources. It uses the devices subsystem for that. The default value is "no". .TP \fBConstrainRAMSpace\fR= If configured to "yes" then constrain the job's RAM usage. The default value is "no", in which case the job's RAM limit will be set to its swap space limit. Also see \fBAllowedSwapSpace\fR, \fBAllowedRAMSpace\fR and \fBConstrainSwapSpace\fR. .TP \fBConstrainSwapSpace\fR= If configured to "yes" then constrain the job's swap space usage. The default value is "no". 
Note that when set to "yes" and ConstrainRAMSpace is set to "no", AllowedRAMSpace is automatically set to 100% in order to limit the RAM+Swap amount to 100% of the job's requirement plus the percent of allowed swap space. This amount is thus set to both the RAM and RAM+Swap limits. This means that in that particular case, ConstrainRAMSpace is automatically enabled with the same limit as the one used to constrain swap space. Also see \fBAllowedSwapSpace\fR. .TP \fBMaxRAMPercent\fR=\fIPERCENT\fR Set an upper bound in percent of total RAM on the RAM constraint for a job. This will be the memory constraint applied to jobs that are not explicitly allocated memory by Slurm (i.e. Slurm's select plugin is not configured to manage memory allocations). The \fIPERCENT\fR may be an arbitrary floating point number. The default value is 100. .TP \fBMaxSwapPercent\fR=\fIPERCENT\fR Set an upper bound (in percent of total RAM) on the amount of RAM+Swap that may be used for a job. This will be the swap limit applied to jobs on systems where memory is not being explicitly allocated to the job. The \fIPERCENT\fR may be an arbitrary floating point number between 0 and 100. The default value is 100. .TP \fBMinRAMSpace\fR= Set a lower bound (in MB) on the memory limits defined by \fBAllowedRAMSpace\fR and \fBAllowedSwapSpace\fR. This prevents accidentally creating a memory cgroup with such a low limit that slurmstepd is immediately killed due to lack of RAM. The default limit is 30M. .TP \fBTaskAffinity\fR= If configured to "yes" then set a default task affinity to bind each step task to a subset of the allocated cores using \fBsched_setaffinity\fP. The default value is "no". Note: This feature requires the Portable Hardware Locality (hwloc) library to be installed. .SH "DISTRIBUTION\-SPECIFIC NOTES" .LP Debian and derivatives (e.g. Ubuntu) usually exclude the memory and memsw (swap) cgroups by default.
To include them, add the following parameters to the kernel command line: \fBcgroup_enable=memory swapaccount=1\fR .LP This can usually be placed in /etc/default/grub inside the \fBGRUB_CMDLINE_LINUX\fR variable. A command such as update\-grub must be run after updating the file. .SH "EXAMPLE" .LP .br ### .br # Slurm cgroup support configuration file .br ### .br CgroupAutomount=yes .br CgroupReleaseAgentDir="/etc/slurm/cgroup" .br ConstrainCores=yes .br # .SH "COPYING" Copyright (C) 2010\-2012 Lawrence Livermore National Security. Produced at Lawrence Livermore National Laboratory (cf, DISCLAIMER). .br Copyright (C) 2010\-2015 SchedMD LLC. .LP This file is part of Slurm, a resource management program. For details, see . .LP Slurm is free software; you can redistribute it and/or modify it under the terms of the GNU General Public License as published by the Free Software Foundation; either version 2 of the License, or (at your option) any later version. .LP Slurm is distributed in the hope that it will be useful, but WITHOUT ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU General Public License for more details. .SH "SEE ALSO" .LP \fBslurm.conf\fR(5) .TH "cray.conf" "5" "Slurm Configuration File" "April 2015" "Slurm Configuration File" .SH "NAME" cray.conf \- Slurm configuration file for the Cray\-specific information .SH "DESCRIPTION" \fBcray.conf\fP is an ASCII file which defines parameters used by Slurm's select/cray plugin in support of Cray systems. The file location can be modified at system build time using the DEFAULT_SLURM_CONF parameter or at execution time by setting the SLURM_CONF environment variable. The file will always be located in the same directory as the \fBslurm.conf\fP file.
The default configuration parameters will work properly in a typical installation and this file will not be required. .LP Parameter names are case insensitive. Any text following a "#" in the configuration file is treated as a comment through the end of that line. Changes to the configuration file take effect upon restart of Slurm daemons, daemon receipt of the SIGHUP signal, or execution of the command "scontrol reconfigure" unless otherwise noted. .LP The configuration parameters available include: .TP \fBAlpsEngine\fR= Communication protocol version number to be used between Slurm and ALPS/BASIL. The default value is BASIL's response to the ENGINE query. Use with caution: Changes in ALPS communications which are not recognized by Slurm could result in loss of jobs. Currently supported values include 1.1, 1.2.0, 1.3.0, 3.1.0, 4.0, 4.1.0, 4.2.0, 5.0, 5.1, 5.2 or "latest". A value of "latest" will use the most current version of Slurm's logic and can be useful for validation with new versions of ALPS. .TP \fBAlpsDir\fR= Fully qualified pathname of the directory in which ALPS is installed. The default value is \fI/usr\fR. .TP \fBapbasil\fR= Fully qualified pathname to the apbasil command. The default value is \fI/usr/bin/apbasil\fR. .TP \fBapbasilTimeout\fR= How many seconds to wait for the apbasil command to complete before killing it. By default, wait indefinitely. .TP \fBapkill\fR= Fully qualified pathname to the apkill command. The default value is \fI/usr/bin/apkill\fR. .TP \fBNoAPIDSignalOnKill\fR=Yes When set to yes, slurmctld will not signal the apids in a batch job. Instead it relies on the RPC coming from the slurmctld to kill the job to end things correctly. .TP \fBSDBdb\fR= Name of the ALPS database. The default value is \fIXTAdmin\fR. .TP \fBSDBhost\fR= Hostname of the database server. The default value is \fIsdb\fR. .TP \fBSDBpass\fR= Password used to access the ALPS database.
The default value is NULL, which will load the password from the \fImy.cnf\fR file. .TP \fBSDBport\fR= Port used to access the ALPS database. The default value is 0. .TP \fBSDBuser\fR= Name of user used to access the ALPS database. The default value is NULL, which will load the user name from the \fImy.cnf\fR file. .TP \fBSubAllocate\fR=Yes Only allocate requested node resources instead of the whole node. In both cases the user will be charged for the entire node. .TP \fBSyncTimeout\fR= Slurm does not normally schedule jobs while its job or node state information is out of synchronization with that of ALPS. This parameter specifies a maximum time to defer job scheduling while waiting for consistent state. The inconsistent state might be caused by a variety of hardware or software failures and proceeding could result in more failures. The default value is 3600 (one hour). A value of zero will wait indefinitely for consistent state. .SH "EXAMPLE" .LP .br ### .br # Slurm Cray support configuration file .br ### .br apbasil=/opt/alps_simulator_40_r6768/apbasil.sh .br SDBhost=localhost .br SDBuser=alps_user .br SDBdb=XT5istanbul .SH "COPYING" Copyright (C) 2011-2013 SchedMD LLC. .LP This file is part of Slurm, a resource management program. For details, see . .LP Slurm is free software; you can redistribute it and/or modify it under the terms of the GNU General Public License as published by the Free Software Foundation; either version 2 of the License, or (at your option) any later version. .LP Slurm is distributed in the hope that it will be useful, but WITHOUT ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU General Public License for more details. 
.SH "SEE ALSO" .LP \fBslurm.conf\fR(5) .TH "ext_sensors.conf" "5" "Slurm Configuration File" "April 2015" "Slurm Configuration File" .SH "NAME" ext_sensors.conf \- Slurm configuration file for the external sensors plugin .SH "DESCRIPTION" \fBext_sensors.conf\fP is an ASCII file which defines parameters used by Slurm's external sensors plugins. The file location can be modified at system build time using the DEFAULT_SLURM_CONF parameter or at execution time by setting the SLURM_CONF environment variable. The file will always be located in the same directory as the \fBslurm.conf\fP file. .LP Parameter names are case insensitive. Any text following a "#" in the configuration file is treated as a comment through the end of that line. The size of each line in the file is limited to 1024 characters. Changes to the configuration file take effect upon restart of Slurm daemons, daemon receipt of the SIGHUP signal, or execution of the command "scontrol reconfigure" unless otherwise noted. .LP The following ext_sensors.conf parameters are defined to control data collection by the ext_sensors plugins. All of these parameters are optional. If a parameter is omitted, data collection of the omitted type is disabled. .TP \fBJobData\fR=\fBenergy\fR Specify the data types to be collected by the plugin for jobs/steps. .TP \fBNodeData\fR=\fB[energy|temp][,temp|energy]\fR Specify the data types to be collected by the plugin for nodes. .TP \fBSwitchData\fR=\fBenergy\fR Specify the data types to be collected by the plugin for switches. .TP \fBColdDoorData\fR=\fBtemp\fR Specify the data types to be collected by the plugin for cold doors. .TP \fBMinWatt\fR=\fB\fR Minimum recorded power consumption, in watts. .TP \fBMaxWatt\fR=\fB\fR Maximum recorded power consumption, in watts. .TP \fBMinTemp\fR=\fB\fR Minimum recorded temperature, in Celsius.
.TP \fBMaxTemp\fR=\fB\fR Maximum recorded temperature, in Celsius. .TP \fBEnergyRRA\fR=\fB\fR Energy RRA name. .TP \fBTempRRA\fR=\fB\fR Temperature RRA name. .TP \fBEnergyPathRRD\fR=\fB\fR Pathname of energy RRD file. .TP \fBTempPathRRD\fR=\fB\fR Pathname of temperature RRD file. .SH "EXAMPLE" .LP .br ### .br # Slurm external sensors plugin configuration file .br ### .br JobData=energy .br NodeData=energy,temp .br SwitchData=energy .br ColdDoorData=temp .br # .SH "COPYING" Copyright (C) 2013 Bull .LP This file is part of Slurm, a resource management program. For details, see . .LP Slurm is free software; you can redistribute it and/or modify it under the terms of the GNU General Public License as published by the Free Software Foundation; either version 2 of the License, or (at your option) any later version. .LP Slurm is distributed in the hope that it will be useful, but WITHOUT ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU General Public License for more details. .SH "SEE ALSO" .LP \fBslurm.conf\fR(5) .TH "gres.conf" "5" "Slurm Configuration File" "April 2015" "Slurm Configuration File" .SH "NAME" gres.conf \- Slurm configuration file for generic resource management. .SH "DESCRIPTION" \fBgres.conf\fP is an ASCII file which describes the configuration of generic resources on each compute node. Each node must contain a gres.conf file if generic resources are to be scheduled by Slurm. The file location can be modified at system build time using the DEFAULT_SLURM_CONF parameter or at execution time by setting the SLURM_CONF environment variable. The file will always be located in the same directory as the \fBslurm.conf\fP file. If generic resource counts are set by the gres plugin function node_config_load(), this file may be optional.
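.LP
As an illustrative sketch (the node names and device paths shown are hypothetical, and the parameters used here are described below), a gres.conf defining two GPUs with their device files plus a simple counted resource might contain:
.br
NodeName=tux[0-7] Name=gpu File=/dev/nvidia[0-1] CPUs=0\-7
.br
NodeName=tux[0-7] Name=bandwidth Count=20M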
.LP
Parameter names are case insensitive.
Any text following a "#" in the configuration file is treated as a comment
through the end of that line.
Changes to the configuration file take effect upon restart of Slurm
daemons, daemon receipt of the SIGHUP signal, or execution of the command
"scontrol reconfigure" unless otherwise noted.
.LP
The overall configuration parameters available include:
.TP
\fBCount\fR
Number of resources of this type available on this node.
The default value is set to the number of \fBFile\fR values specified
(if any), otherwise the default value is one.
A suffix of "K", "M", "G", "T" or "P" may be used to multiply the number
by 1024, 1048576, 1073741824, etc. respectively.
.TP
\fBCPUs\fR
Specify the CPU index numbers for the specific CPUs which can use this
resource.
For example, it may be strongly preferable to use specific CPUs with
specific devices (e.g. on a NUMA architecture).
Multiple CPUs may be specified using a comma delimited list or a range may
be specified using a "\-" separator (e.g. "0,1,2,3" or "0\-3").
If specified, then only the identified CPUs can be allocated with each
generic resource; an attempt to use other CPUs will not be honored.
If not specified, then any CPU can be used with the resources, which also
increases the speed of Slurm's scheduling algorithm.
If any CPU can be effectively used with the resources, then do not specify
the \fBCPUs\fR option for improved speed in the Slurm scheduling logic.
Since Slurm must be able to perform resource management on heterogeneous
clusters having various CPU ID numbering schemes, use the Slurm CPU index
numbers here
(CPU_ID = Board_ID x threads_per_board + Socket_ID x threads_per_socket +
Core_ID x threads_per_core + Thread_ID).
For example, on a single\-board node with two sockets, four cores per
socket and one thread per core, the core with Socket_ID=1 and Core_ID=2
has CPU_ID = 0x8 + 1x4 + 2x1 + 0 = 6.
.TP
\fBFile\fR
Fully qualified pathname of the device files associated with a resource.
The file name parsing logic includes support for simple regular
expressions as shown in the example.
This field is generally required if enforcement of generic resource
allocations is to be supported (i.e. prevents users from making use of
resources allocated to a different user).
If \fBFile\fR is specified then \fBCount\fR must be either set to the
number of file names specified or not set (the default value is the number
of files specified).
If device file names are specified, Slurm must track the utilization of
each individual device, which involves more overhead than just tracking
the device counts.
Use the \fBFile\fR parameter only if the \fBCount\fR is not sufficient for
tracking purposes.
NOTE: If you specify the \fBFile\fR parameter for a resource on some node,
the option must be specified on all nodes and Slurm will track the
assignment of each specific resource on each node.
Otherwise Slurm will only track a count of allocated resources rather than
the state of each individual device file.
.TP
\fBName\fR
Name of the generic resource. Any desired name may be used.
Each generic resource has an optional plugin which can provide
resource\-specific options.
Generic resources that currently include an optional plugin are:
.RS
.TP
\fBgpu\fR
Graphics Processing Unit
.TP
\fBnic\fR
Network Interface Card
.TP
\fBmic\fR
Intel Many Integrated Core (MIC) processor
.RE
.TP
\fBNodeName\fR
An optional NodeName specification can be used to permit one gres.conf
file to be used for all compute nodes in a cluster by specifying the
node(s) that each line should apply to.
The NodeName specification can use a Slurm hostlist specification as shown
in the example below.
.TP
\fBType\fR
An arbitrary string identifying the type of device.
For example, a particular model of GPU.
If \fBType\fR is specified, then \fBCount\fR is limited in size
(currently 1024).
.SH "EXAMPLES"
.LP
.br
##################################################################
.br
# Slurm's Generic Resource (GRES) configuration file
.br
##################################################################
.br
# Configure support for our four GPUs
.br
Name=gpu Type=gtx560 File=/dev/nvidia0 CPUs=0,1
.br
Name=gpu Type=gtx560 File=/dev/nvidia1 CPUs=0,1
.br
Name=gpu Type=tesla File=/dev/nvidia2 CPUs=2,3
.br
Name=gpu Type=tesla File=/dev/nvidia3 CPUs=2,3
.br
Name=bandwidth Count=20M
.LP
.br
##################################################################
.br
# Slurm's Generic Resource (GRES) configuration file
.br
# Use a single gres.conf file for all compute nodes
.br
##################################################################
.br
NodeName=tux[0\-15] Name=gpu File=/dev/nvidia[0\-3]
.br
NodeName=tux[16\-31] Name=gpu File=/dev/nvidia[0\-7]
.SH "COPYING"
Copyright (C) 2010 The Regents of the University of California.
Produced at Lawrence Livermore National Laboratory (cf, DISCLAIMER).
.br
Copyright (C) 2010\-2014 SchedMD LLC.
.LP
This file is part of Slurm, a resource management program.
For details, see <http://slurm.schedmd.com/>.
.LP
Slurm is free software; you can redistribute it and/or modify it under
the terms of the GNU General Public License as published by the Free
Software Foundation; either version 2 of the License, or (at your option)
any later version.
.LP
Slurm is distributed in the hope that it will be useful, but WITHOUT ANY
WARRANTY; without even the implied warranty of MERCHANTABILITY or FITNESS
FOR A PARTICULAR PURPOSE.
See the GNU General Public License for more details.
.SH "SEE ALSO"
.LP
\fBslurm.conf\fR(5)
.TH "nonstop.conf" "5" "Slurm Configuration File" "April 2015" "Slurm Configuration File"
.SH "NAME"
nonstop.conf \- Slurm configuration file for fault-tolerant computing.
.SH "DESCRIPTION"
\fBnonstop.conf\fP is an ASCII file which describes the configuration
used for fault-tolerant computing with Slurm using the optional
slurmctld/nonstop plugin.
This plugin provides a means for users to notify Slurm of nodes they
believe are suspect, replace the job's failing or failed nodes, and
extend a job's time limit in response to failures.
The file location can be modified at system build time using the
DEFAULT_SLURM_CONF parameter or at execution time by setting the
SLURM_CONF environment variable.
The file will always be located in the same directory as the
\fBslurm.conf\fP file.
.LP
Parameter names are case insensitive.
Any text following a "#" in the configuration file is treated as a comment
through the end of that line.
Changes to the configuration file take effect upon restart of Slurm
daemons, daemon receipt of the SIGHUP signal, or execution of the command
"scontrol reconfigure" unless otherwise noted.
The configuration parameters available include:
.TP
\fBBackupAddr\fR
Communications address used for the backup slurmctld daemon.
This can either be a hostname or IP address.
This value would typically be identical to the value of \fBBackupAddr\fR
in the slurm.conf file.
.TP
\fBControlAddr\fR
Communications address used for the slurmctld daemon.
This can either be a hostname or IP address.
This value would typically be identical to the value of \fBControlAddr\fR
in the slurm.conf file.
.TP
\fBDebug\fR
A number indicating the level of additional logging desired for the
plugin.
The default value is zero, which generates no additional logging.
.TP
\fBHotSpareCount\fR
This identifies how many nodes in each partition should be maintained as
spare resources.
When a job fails, this pool of resources will be depleted and then
replenished when possible using idle resources.
The value should be a comma delimited list of partition and node count
pairs, with each partition name and node count joined by a colon
(e.g. "batch:6").
.TP
\fBMaxSpareNodeCount\fR
This identifies the maximum number of nodes any single job may replace
through the job's entire lifetime.
This could prevent a single job from causing all of the nodes in a
cluster to fail.
By default, there is no maximum node count.
.TP
\fBPort\fR
Port used for communications.
The default value is 6820.
.TP
\fBTimeLimitDelay\fR
If a job requires replacement resources and none are immediately
available, then permit a job to extend its time limit by the length of
time required to secure replacement resources, up to the number of minutes
specified by \fBTimeLimitDelay\fR.
This option will only take effect if no hot spare resources are available
at the time replacement resources are requested.
This time limit extension is in addition to the value calculated using
\fBTimeLimitExtend\fR.
The default value is zero (no time limit extension).
The value may not exceed 65533.
.TP
\fBTimeLimitDrop\fR
Specifies the number of minutes that a job can extend its time limit for
each failed or failing node removed from the job's allocation.
The default value is zero (no time limit extension).
The value may not exceed 65533.
.TP
\fBTimeLimitExtend\fR
Specifies the number of minutes that a job can extend its time limit for
each replaced node.
The default value is zero (no time limit extension).
The value may not exceed 65533.
.TP
\fBUserDrainAllow\fR
This identifies a comma delimited list of user names or user IDs of users
who are authorized to drain nodes they believe are failing.
Specify a value of "ALL" to permit any user to drain nodes.
By default, no users may drain nodes using this interface.
.TP
\fBUserDrainDeny\fR
This identifies a comma delimited list of user names or user IDs of users
who are NOT authorized to drain nodes they believe are failing.
Specifying a value for \fBUserDrainDeny\fR implicitly allows all other
users to drain nodes (sets the value of UserDrainAllow to "ALL").
.SH "EXAMPLE"
.LP
#
.br
# Sample nonstop.conf file
.br
# Date: 12 Feb 2013
.br
#
.br
ControlAddr=12.34.56.78
.br
BackupAddr=12.34.56.79
.br
Port=1234
.br
#
.br
HotSpareCount=batch:6,interactive:0
.br
MaxSpareNodeCount=4
.br
TimeLimitDelay=30
.br
TimeLimitExtend=20
.br
TimeLimitDrop=10
.br
UserDrainAllow=adam,brenda
.SH "COPYING"
Copyright (C) 2013-2014 SchedMD LLC. All rights reserved.
.LP
Slurm is distributed in the hope that it will be useful, but WITHOUT ANY
WARRANTY; without even the implied warranty of MERCHANTABILITY or FITNESS
FOR A PARTICULAR PURPOSE.
See the GNU General Public License for more details.
.SH "SEE ALSO"
.LP
\fBslurm.conf\fR(5)
.TH "slurm.conf" "5" "Slurm Configuration File" "January 2016" "Slurm Configuration File"
.SH "NAME"
slurm.conf \- Slurm configuration file
.SH "DESCRIPTION"
\fBslurm.conf\fP is an ASCII file which describes general Slurm
configuration information, the nodes to be managed, information about how
those nodes are grouped into partitions, and various scheduling parameters
associated with those partitions.
This file should be consistent across all nodes in the cluster.
.LP
The file location can be modified at system build time using the
DEFAULT_SLURM_CONF parameter or at execution time by setting the
SLURM_CONF environment variable.
The Slurm daemons also allow you to override both the built\-in and
environment\-provided location using the "\-f" option on the command line.
.LP
The contents of the file are case insensitive except for the names of
nodes and partitions.
Any text following a "#" in the configuration file is treated as a comment
through the end of that line.
Changes to the configuration file take effect upon restart of Slurm
daemons, daemon receipt of the SIGHUP signal, or execution of the command
"scontrol reconfigure" unless otherwise noted.
.LP
If a line begins with the word "Include" followed by whitespace and then
a file name, that file will be included inline with the current
configuration file.
For large or complex systems, multiple configuration files may prove
easier to manage and enable reuse of some files (See INCLUDE MODIFIERS for
more details).
.LP
Note on file permissions:
.LP
The \fIslurm.conf\fR file must be readable by all users of Slurm, since it
is used by many of the Slurm commands.
Other files that are defined in the \fIslurm.conf\fR file, such as log
files and job accounting files, may need to be created/owned by the user
"SlurmUser" to be successfully accessed.
Use the "chown" and "chmod" commands to set the ownership and permissions
appropriately.
See the section \fBFILE AND DIRECTORY PERMISSIONS\fR for information about
the various files and directories used by Slurm.
.SH "PARAMETERS"
.LP
The overall configuration parameters available include:
.TP
\fBAccountingStorageBackupHost\fR
The name of the backup machine hosting the accounting storage database.
If used with the accounting_storage/slurmdbd plugin, this is where the
backup slurmdbd would be running.
Only used for database type storage plugins, ignored otherwise.
.TP
\fBAccountingStorageEnforce\fR
This controls what level of association\-based enforcement to impose on
job submissions.
Valid options are any combination of \fIassociations\fR, \fIlimits\fR,
\fInojobs\fR, \fInosteps\fR, \fIqos\fR, \fIsafe\fR, and \fIwckeys\fR, or
\fIall\fR for all things (except nojobs and nosteps, which must be
requested as well).
If limits, qos, or wckeys are set, associations will automatically be set.
If wckeys is set, TrackWCKey will automatically be set.
If safe is set, limits and associations will automatically be set.
If nojobs is set, nosteps will automatically be set.
By enforcing Associations no new job is allowed to run unless a
corresponding association exists in the system.
If limits are enforced, users can be limited by association to whatever
job size or run time limits are defined.
If nojobs is set, Slurm will not account for any jobs or steps on the
system; likewise, if nosteps is set, Slurm will not account for any steps
that have run, but limits will still be enforced.
If safe is enforced, a job will only be launched against an association or
qos that has a GrpCPUMins limit set if the job will be able to run to
completion.
Without this option set, jobs will be launched as long as their usage
hasn't reached the cpu-minutes limit, which can lead to jobs being
launched but then killed when the limit is reached.
With qos and/or wckeys enforced, jobs will not be scheduled unless a valid
qos and/or workload characterization key is specified.
When \fBAccountingStorageEnforce\fR is changed, a restart of the slurmctld
daemon is required (not just a "scontrol reconfig").
.TP
\fBAccountingStorageHost\fR
The name of the machine hosting the accounting storage database.
Only used for database type storage plugins, ignored otherwise.
Also see \fBDefaultStorageHost\fR.
.TP
\fBAccountingStorageLoc\fR
The fully qualified file name where accounting records are written when
the \fBAccountingStorageType\fR is "accounting_storage/filetxt" or else
the name of the database where accounting records are stored when the
\fBAccountingStorageType\fR is a database.
Also see \fBDefaultStorageLoc\fR.
.TP
\fBAccountingStoragePass\fR
The password used to gain access to the database to store the accounting
data.
Only used for database type storage plugins, ignored otherwise.
In the case of Slurm DBD (Database Daemon) with MUNGE authentication this
can be configured to use a MUNGE daemon specifically configured to provide
authentication between clusters while the default MUNGE daemon provides
authentication within a cluster.
In that case, \fBAccountingStoragePass\fR should specify the named port to
be used for communications with the alternate MUNGE daemon (e.g.
"/var/run/munge/global.socket.2").
The default value is NULL.
Also see \fBDefaultStoragePass\fR.
.TP
\fBAccountingStoragePort\fR
The listening port of the accounting storage database server.
Only used for database type storage plugins, ignored otherwise.
Also see \fBDefaultStoragePort\fR.
.TP
\fBAccountingStorageTRES\fR
Comma separated list of resources you wish to track on the cluster.
These are the resources requested by the sbatch/srun job when it is
submitted.
Currently this consists of any GRES, BB (burst buffer) or license along
with CPU, Memory, Node, and Energy.
By default CPU, Energy, Memory, and Node are tracked.
AccountingStorageTRES=gres/craynetwork,license/iop1 will track cpu,
energy, memory, and nodes along with a gres called craynetwork as well as
a license called iop1.
Whenever these resources are used on the cluster they are recorded.
The TRES are automatically set up in the database on the start of the
slurmctld.
.TP
\fBAccountingStorageType\fR
The accounting storage mechanism type.
Acceptable values at present include "accounting_storage/filetxt",
"accounting_storage/mysql", "accounting_storage/none" and
"accounting_storage/slurmdbd".
The "accounting_storage/filetxt" value indicates that accounting records
will be written to the file specified by the \fBAccountingStorageLoc\fR
parameter.
The "accounting_storage/mysql" value indicates that accounting records
will be written to a MySQL or MariaDB database specified by the
\fBAccountingStorageLoc\fR parameter.
The "accounting_storage/slurmdbd" value indicates that accounting records
will be written to the Slurm DBD, which manages an underlying MySQL
database. See "man slurmdbd" for more information.
The default value is "accounting_storage/none" and indicates that account
records are not maintained.
Note: The filetxt plugin records only a limited subset of accounting
information and will prevent some sacct options from proper operation.
Also see \fBDefaultStorageType\fR.
.TP
\fBAccountingStorageUser\fR
The user account for accessing the accounting storage database.
Only used for database type storage plugins, ignored otherwise.
Also see \fBDefaultStorageUser\fR.
.TP
\fBAccountingStoreJobComment\fR
If set to "YES" then include the job's comment field in the job complete
message sent to the Accounting Storage database.
The default is "YES".
.TP
\fBAcctGatherNodeFreq\fR
The AcctGather plugins sampling interval for node accounting.
For AcctGather plugin values of none, this parameter is ignored.
For all other values this parameter is the number of seconds between node
accounting samples.
For the acct_gather_energy/rapl plugin, set a value less than 300 because
the counters may overflow beyond this rate.
The default value is zero.
This value disables accounting sampling for nodes.
Note: The accounting sampling interval for jobs is determined by the value
of \fBJobAcctGatherFrequency\fR.
.TP
\fBAcctGatherEnergyType\fR
Identifies the plugin to be used for energy consumption accounting.
The jobacct_gather plugin and slurmd daemon call this plugin to collect
energy consumption data for jobs and nodes.
The collection of energy consumption data takes place at the node level,
hence the energy consumption measurements will reflect a job's real
consumption only in the case of exclusive job allocation.
In the case of node sharing between jobs, the reported consumed energy per
job (through sstat or sacct) will not reflect the real energy consumed by
the jobs.
Configurable values at present are:
.RS
.TP 20
\fBacct_gather_energy/none\fR
No energy consumption data is collected.
.TP
\fBacct_gather_energy/ipmi\fR
Energy consumption data is collected from the Baseboard Management
Controller (BMC) using the Intelligent Platform Management Interface
(IPMI).
.TP
\fBacct_gather_energy/rapl\fR
Energy consumption data is collected from hardware sensors using the
Running Average Power Limit (RAPL) mechanism.
Note that enabling RAPL may require the execution of the command
"sudo modprobe msr".
.RE
.TP
\fBAcctGatherInfinibandType\fR
Identifies the plugin to be used for infiniband network traffic
accounting.
The plugin is activated only when profiling on hdf5 files is activated and
the user asks for network data collection for jobs through
\-\-profile=Network (or =All).
The collection of network traffic data takes place at the node level,
hence the collected values will reflect a job's real traffic only in the
case of exclusive job allocation.
All network traffic data are logged in hdf5 files per job on each node.
No storage on the Slurm database takes place.
Configurable values at present are:
.RS
.TP 20
\fBacct_gather_infiniband/none\fR
No infiniband network data are collected.
.TP
\fBacct_gather_infiniband/ofed\fR
Infiniband network traffic data are collected from the hardware monitoring
counters of Infiniband devices through the OFED library.
.RE
.TP
\fBAcctGatherFilesystemType\fR
Identifies the plugin to be used for filesystem traffic accounting.
The plugin is activated only when profiling on hdf5 files is activated and
the user asks for filesystem data collection for jobs through
\-\-profile=Lustre (or =All).
The collection of filesystem traffic data takes place at the node level,
hence the collected values will reflect a job's real traffic only in the
case of exclusive job allocation.
All filesystem traffic data are logged in hdf5 files per job on each node.
No storage on the Slurm database takes place.
Configurable values at present are:
.RS
.TP 20
\fBacct_gather_filesystem/none\fR
No filesystem data are collected.
.TP
\fBacct_gather_filesystem/lustre\fR
Lustre filesystem traffic data are collected from the counters found in
/proc/fs/lustre/.
.RE
.TP
\fBAcctGatherProfileType\fR
Identifies the plugin to be used for detailed job profiling.
The jobacct_gather plugin and slurmd daemon call this plugin to collect
detailed data such as I/O counts, memory usage, or energy consumption for
jobs and nodes.
There are interfaces in this plugin to collect data at step start and
completion, at task start and completion, and at the account gather
frequency.
The data collected at the node level is related to jobs only in the case
of exclusive job allocation.
Configurable values at present are:
.RS
.TP 20
\fBacct_gather_profile/none\fR
No profile data is collected.
.TP
\fBacct_gather_profile/hdf5\fR
This enables the HDF5 plugin.
The directory where the profile files are stored and which values are
collected are configured in the acct_gather.conf file.
.RE
.TP
\fBAllowSpecResourcesUsage\fR
If set to 1, Slurm allows individual jobs to override a node's configured
CoreSpecCount value.
For a job to take advantage of this feature, a command line option of
\-\-core\-spec must be specified.
The default value for this option is 1 for Cray systems and 0 for other
system types.
.TP
\fBAuthInfo\fR
Additional information to be used for authentication of communications
between the Slurm daemons (slurmctld and slurmd) and the Slurm clients.
The interpretation of this option is specific to the configured
\fBAuthType\fR.
Multiple options may be specified in a comma delimited list.
If not specified, the default authentication information will be used.
.RS
.TP 14
\fBcred_expire\fR
Default job step credential lifetime, in seconds
(e.g. "cred_expire=1200").
It must be long enough to load the user environment, run the prolog, deal
with the slurmd getting paged out of memory, etc.
This also controls how long a requeued job must wait before starting
again.
The default value is 120 seconds.
.TP
\fBsocket\fR
Path name to a MUNGE daemon socket to use
(e.g. "socket=/var/run/munge/munge.socket.2").
The default value is "/var/run/munge/munge.socket.2".
Used by \fIauth/munge\fR and \fIcrypto/munge\fR.
.TP
\fBttl\fR
Credential lifetime, in seconds (e.g. "ttl=300").
The default value is dependent upon the Munge installation, but is
typically 300 seconds.
.RE
.TP
\fBAuthType\fR
The authentication method for communications between Slurm components.
Acceptable values at present include "auth/none", "auth/authd", and
"auth/munge".
The default value is "auth/munge".
"auth/none" includes the UID in each communication, but it is not
verified.
This may be fine for testing purposes, but
\fBdo not use "auth/none" if you desire any security\fR.
"auth/authd" indicates that Brett Chun's authd is to be used (see
"http://www.theether.org/authd/" for more information; note that authd is
no longer actively supported).
"auth/munge" indicates that LLNL's MUNGE is to be used (this is the best
supported authentication mechanism for Slurm, see
"http://munge.googlecode.com/" for more information).
All Slurm daemons and commands must be terminated prior to changing the
value of \fBAuthType\fR and later restarted (Slurm jobs can be preserved).
.TP
\fBBackupAddr\fR
The name that \fBBackupController\fR should be referred to in establishing
a communications path.
This name will be used as an argument to the gethostbyname() function for
identification.
For example, "elx0000" might be used to designate the Ethernet address for
node "lx0000".
By default the \fBBackupAddr\fR will be identical in value to
\fBBackupController\fR.
.TP
\fBBackupController\fR
The name of the machine where Slurm control functions are to be executed
in the event that \fBControlMachine\fR fails.
This node may also be used as a compute server if so desired.
It will come into service as a controller only upon the failure of
ControlMachine and will revert to a "standby" mode when the ControlMachine
becomes available once again.
This should be a node name without the full domain name.
I.e., the hostname returned by the \fIgethostname()\fR function cut at the
first dot (e.g. use "tux001" rather than "tux001.my.com").
The backup controller recovers state information from the
\fBStateSaveLocation\fR directory, which must be readable and writable
from both the primary and backup controllers.
While not essential, it is recommended that you specify a backup
controller.
See the \fBRELOCATING CONTROLLERS\fR section if you change this.
.TP
\fBBatchStartTimeout\fR
The maximum time (in seconds) that a batch job is permitted for launching
before being considered missing and releasing the allocation.
The default value is 10 (seconds).
Larger values may be required if more time is required to execute the
\fBProlog\fR, load user environment variables (for Moab spawned jobs), or
if the slurmd daemon gets paged from memory.
.br
.br
\fBNote\fR: The test for a job being successfully launched is only
performed when the Slurm daemon on the compute node registers state with
the slurmctld daemon on the head node, which happens fairly rarely.
Therefore a job will not necessarily be terminated if its start time
exceeds \fBBatchStartTimeout\fR.
This configuration parameter is also applied to launched tasks and avoids
aborting \fBsrun\fR commands due to long running \fBProlog\fR scripts.
.TP
\fBBurstBufferType\fR
The plugin used to manage burst buffers.
Acceptable values at present include "burst_buffer/none".
More information later...
.TP
\fBCacheGroups\fR
If set to 1, the slurmd daemon will cache /etc/groups entries.
This can improve performance for highly parallel jobs if NIS servers are
used and unable to respond very quickly.
The default value is 0 to disable caching group data.
.TP
\fBCheckpointType\fR
The system\-initiated checkpoint method to be used for user jobs.
The slurmctld daemon must be restarted for a change in
\fBCheckpointType\fR to take effect.
Supported values presently include:
.RS
.TP 18
\fBcheckpoint/aix\fR
for IBM AIX systems only
.TP
\fBcheckpoint/blcr\fR
Berkeley Lab Checkpoint Restart (BLCR).
NOTE: If a file is found at sbin/scch (relative to the Slurm installation
location), it will be executed upon completion of the checkpoint.
This can be a script used for managing the checkpoint files.
NOTE: Slurm's BLCR logic only supports batch jobs.
.TP
\fBcheckpoint/none\fR
no checkpoint support (default)
.TP
\fBcheckpoint/ompi\fR
OpenMPI (version 1.3 or higher)
.TP
\fBcheckpoint/poe\fR
for use with IBM POE (Parallel Operating Environment) only
.RE
.TP
\fBChosLoc\fR
If configured, then any processes invoked on the user's behalf (namely the
SPANK prolog/epilog scripts and the slurmstepd processes, which in turn
spawn the user batch script and applications) are not directly executed by
the slurmd daemon; instead the \fBChosLoc\fR program is executed.
Both are spawned with the same user ID as the configured SlurmdUser
(typically user root).
That program's arguments are the program and arguments that would
otherwise be invoked directly by the slurmd daemon.
The intent of this feature is to be able to run a user application in some
sort of container.
This option specifies the fully qualified pathname of the chos command
(see https://github.com/scanon/chos for details).
.TP
\fBClusterName\fR
The name by which this Slurm managed cluster is known in the accounting
database.
This is needed to distinguish accounting records when multiple clusters
report to the same database.
Because of limitations in some databases, any upper case letters in the
name will be silently mapped to lower case.
In order to avoid confusion, it is recommended that the name be lower
case.
.TP
\fBCompleteWait\fR
The time, in seconds, given for a job to remain in COMPLETING state before
any additional jobs are scheduled.
If set to zero, pending jobs will be started as soon as possible.
Since a COMPLETING job's resources are released for use by other jobs as
soon as the \fBEpilog\fR completes on each individual node, this can
result in very fragmented resource allocations.
To provide jobs with the minimum response time, a value of zero is
recommended (no waiting).
To minimize fragmentation of resources, a value equal to \fBKillWait\fR
plus two is recommended.
In that case, setting \fBKillWait\fR to a small value may be beneficial.
The default value of \fBCompleteWait\fR is zero seconds.
The value may not exceed 65533.
.TP
\fBControlAddr\fR
Name that \fBControlMachine\fR should be referred to in establishing a
communications path.
This name will be used as an argument to the gethostbyname() function for
identification.
For example, "elx0000" might be used to designate the Ethernet address for
node "lx0000".
By default the \fBControlAddr\fR will be identical in value to
\fBControlMachine\fR.
.TP
\fBControlMachine\fR
The short hostname of the machine where Slurm control functions are
executed (i.e. the name returned by the command "hostname \-s", use
"tux001" rather than "tux001.my.com").
This value must be specified.
In order to support some high availability architectures, multiple
hostnames may be listed with comma separators and one \fBControlAddr\fR
must be specified.
The high availability system must ensure that the slurmctld daemon is
running on only one of these hosts at a time.
See the \fBRELOCATING CONTROLLERS\fR section if you change this.
.TP
\fBCoreSpecPlugin\fR
Identifies the plugins to be used for enforcement of core specialization.
The slurmd daemon must be restarted for a change in CoreSpecPlugin to
take effect.
Acceptable values at present include:
.RS
.TP 20
\fBcore_spec/cray\fR
used only for Cray systems
.TP
\fBcore_spec/none\fR
used for all other system types
.RE
.TP
\fBCpuFreqDef\fR
Default CPU frequency to be set when no jobs are running.
The CPU frequency can also be set to this value after a catastrophic
failure when state information has been lost.
Acceptable values at present include:
.RS
.TP 14
\fBLow\fR
the lowest available frequency
.TP
\fBHigh\fR
the highest available frequency
.TP
\fBHighM1\fR
(high minus one) will select the next highest available frequency
.TP
\fBMedium\fR
attempts to set a frequency in the middle of the available range
.TP
\fBConservative\fR
attempts to use the Conservative CPU governor
.TP
\fBOnDemand\fR
attempts to use the OnDemand CPU governor (the default value)
.TP
\fBPerformance\fR
attempts to use the Performance CPU governor
.TP
\fBPowerSave\fR
attempts to use the PowerSave CPU governor
.RE
.TP
\fBCpuFreqGovernors\fR
List of CPU frequency governors allowed to be set with the salloc, sbatch,
or srun option \-\-cpu\-freq.
Acceptable values at present include:
.RS
.TP 14
\fBConservative\fR
attempts to use the Conservative CPU governor
.TP
\fBOnDemand\fR
attempts to use the OnDemand CPU governor (the default value)
.TP
\fBPerformance\fR
attempts to use the Performance CPU governor
.TP
\fBPowerSave\fR
attempts to use the PowerSave CPU governor
.TP
\fBUserSpace\fR
attempts to use the UserSpace CPU governor
.RE
The default is OnDemand.
.TP
\fBCryptoType\fR
The cryptographic signature tool to be used in the creation of job step
credentials.
The slurmctld daemon must be restarted for a change in \fBCryptoType\fR
to take effect.
Acceptable values at present include "crypto/munge" and "crypto/openssl".
The default value is "crypto/munge".
.TP
\fBDebugFlags\fR
Defines specific subsystems which should provide more detailed event
logging.
Multiple subsystems can be specified with comma separators.
Most DebugFlags will result in verbose logging for the identified
subsystems and could impact performance.
The below DB_* flags are only useful when writing directly to the
database.
If using the DBD put these debug flags in the slurmdbd.conf.
Valid subsystems available today (with more to come) include: .RS .TP 17 \fBBackfill\fR Backfill scheduler details .TP \fBBackfillMap\fR Backfill scheduler to log a very verbose map of reserved resources through time. Combine with \fBBackfill\fR for a verbose and complete view of the backfill scheduler's work. .TP \fBBGBlockAlgo\fR BlueGene block selection details .TP \fBBGBlockAlgoDeep\fR BlueGene block selection, more details .TP \fBBGBlockPick\fR BlueGene block selection for jobs .TP \fBBGBlockWires\fR BlueGene block wiring (switch state details) .TP \fBBurstBuffer\fR Burst Buffer plugin .TP \fBCPU_Bind\fR CPU binding details for jobs and steps .TP \fBCpuFrequency\fR Cpu frequency details for jobs and steps using the \-\-cpu\-freq option. .TP \fBDB_ASSOC\fR SQL statements/queries when dealing with associations in the database. .TP \fBDB_EVENT\fR SQL statements/queries when dealing with (node) events in the database. .TP \fBDB_JOB\fR SQL statements/queries when dealing with jobs in the database. .TP \fBDB_QOS\fR SQL statements/queries when dealing with QOS in the database. .TP \fBDB_QUERY\fR SQL statements/queries when dealing with transactions and such in the database. .TP \fBDB_RESERVATION\fR SQL statements/queries when dealing with reservations in the database. .TP \fBDB_RESOURCE\fR SQL statements/queries when dealing with resources like licenses in the database. .TP \fBDB_STEP\fR SQL statements/queries when dealing with steps in the database. .TP \fBDB_USAGE\fR SQL statements/queries when dealing with usage queries and inserts in the database. .TP \fBDB_WCKEY\fR SQL statements/queries when dealing with wckeys in the database. 
.TP \fBElasticsearch\fR Elasticsearch debug info .TP \fBEnergy\fR AcctGatherEnergy debug info .TP \fBExtSensors\fR External Sensors debug info .TP \fBFrontEnd\fR Front end node details .TP \fBGres\fR Generic resource details .TP \fBGang\fR Gang scheduling details .TP \fBJobContainer\fR Job container plugin details .TP \fBLicense\fR License management details .TP \fBNO_CONF_HASH\fR Do not log when the slurm.conf files differ between Slurm daemons .TP \fBPower\fR Power management plugin .TP \fBPriority\fR Job prioritization .TP \fBProtocol\fR Communication protocol details .TP \fBReservation\fR Advanced reservations .TP \fBSelectType\fR Resource selection plugin .TP \fBSICP\fR Inter\-cluster job details .TP \fBSteps\fR Slurmctld resource allocation for job steps .TP \fBSwitch\fR Switch plugin .TP \fBTraceJobs\fR Trace jobs in slurmctld. It will print detailed job information including state, job ids and allocated node counts. .TP \fBTriggers\fR Slurmctld triggers .TP \fBWiki\fR Sched/wiki and wiki2 communications .RE .TP \fBDefMemPerCPU\fR Default real memory size available per allocated CPU in MegaBytes. Used to avoid over\-subscribing memory and causing paging. \fBDefMemPerCPU\fR would generally be used if individual processors are allocated to jobs (\fBSelectType=select/cons_res\fR). The default value is 0 (unlimited). Also see \fBDefMemPerNode\fR and \fBMaxMemPerCPU\fR. \fBDefMemPerCPU\fR and \fBDefMemPerNode\fR are mutually exclusive. NOTE: Enforcement of memory limits currently requires enabling of accounting, which samples memory use on a periodic basis (data need not be stored, just collected). .TP \fBDefMemPerNode\fR Default real memory size available per allocated node in MegaBytes. Used to avoid over\-subscribing memory and causing paging. \fBDefMemPerNode\fR would generally be used if whole nodes are allocated to jobs (\fBSelectType=select/linear\fR) and resources are shared (\fBShared=yes\fR or \fBShared=force\fR). The default value is 0 (unlimited).
Also see \fBDefMemPerCPU\fR and \fBMaxMemPerNode\fR. \fBDefMemPerCPU\fR and \fBDefMemPerNode\fR are mutually exclusive. NOTE: Enforcement of memory limits currently requires enabling of accounting, which samples memory use on a periodic basis (data need not be stored, just collected). .TP \fBDefaultStorageHost\fR The default name of the machine hosting the accounting storage and job completion databases. Only used for database type storage plugins and when the \fBAccountingStorageHost\fR and \fBJobCompHost\fR have not been defined. .TP \fBDefaultStorageLoc\fR The fully qualified file name where accounting records and/or job completion records are written when the \fBDefaultStorageType\fR is "filetxt" or the name of the database where accounting records and/or job completion records are stored when the \fBDefaultStorageType\fR is a database. Also see \fBAccountingStorageLoc\fR and \fBJobCompLoc\fR. .TP \fBDefaultStoragePass\fR The password used to gain access to the database to store the accounting and job completion data. Only used for database type storage plugins, ignored otherwise. Also see \fBAccountingStoragePass\fR and \fBJobCompPass\fR. .TP \fBDefaultStoragePort\fR The listening port of the accounting storage and/or job completion database server. Only used for database type storage plugins, ignored otherwise. Also see \fBAccountingStoragePort\fR and \fBJobCompPort\fR. .TP \fBDefaultStorageType\fR The accounting and job completion storage mechanism type. Acceptable values at present include "filetxt", "mysql" and "none". The value "filetxt" indicates that records will be written to a file. The value "mysql" indicates that accounting records will be written to a MySQL or MariaDB database. The default value is "none", which means that records are not maintained. Also see \fBAccountingStorageType\fR and \fBJobCompType\fR. .TP \fBDefaultStorageUser\fR The user account for accessing the accounting storage and/or job completion database. 
Only used for database type storage plugins, ignored otherwise. Also see \fBAccountingStorageUser\fR and \fBJobCompUser\fR. .TP \fBDisableRootJobs\fR If set to "YES" then user root will be prevented from running any jobs. The default value is "NO", meaning user root will be able to execute jobs. \fBDisableRootJobs\fR may also be set by partition. .TP \fBEioTimeout\fR The number of seconds srun waits for slurmstepd to close the TCP/IP connection used to relay data between the user application and srun when the user application terminates. The default value is 60 seconds. May not exceed 65533. .TP \fBEnforcePartLimits\fR If set to "YES" then jobs which exceed a partition's size and/or time limits will be rejected at submission time. If set to "NO" then the job will be accepted and remain queued until the partition limits are altered. The default value is "NO". NOTE: If set, then a job's QOS can not be used to exceed partition limits. .TP \fBEpilog\fR Fully qualified pathname of a script to execute as user root on every node when a user's job completes (e.g. "/usr/local/slurm/epilog"). A glob pattern (See \fBglob\fR (7)) may also be used to run more than one epilog script (e.g. "/etc/slurm/epilog.d/*"). The Epilog script or scripts may be used to purge files, disable user login, etc. By default there is no epilog. See \fBProlog and Epilog Scripts\fR for more information. .TP \fBEpilogMsgTime\fR The number of microseconds that the slurmctld daemon requires to process an epilog completion message from the slurmd daemons. This parameter can be used to prevent a burst of epilog completion messages from being sent at the same time which should help prevent lost messages and improve throughput for large jobs. The default value is 2000 microseconds. For a 1000 node job, this spreads the epilog completion messages out over two seconds. .TP \fBEpilogSlurmctld\fR Fully qualified pathname of a program for the slurmctld to execute upon termination of a job allocation (e.g. 
"/usr/local/slurm/epilog_controller"). The program executes as SlurmUser, which gives it permission to drain nodes and requeue the job if a failure occurs (See scontrol(1)). Exactly what the program does and how it accomplishes this is completely at the discretion of the system administrator. Information about the job being initiated, its allocated nodes, etc. is passed to the program using environment variables. See \fBProlog and Epilog Scripts\fR for more information. .TP \fBExtSensorsFreq\fR The external sensors plugin sampling interval. If \fBExtSensorsType=ext_sensors/none\fR, this parameter is ignored. For all other values of \fBExtSensorsType\fR, this parameter is the number of seconds between external sensors samples for hardware components (nodes, switches, etc.) The default value is zero, which disables external sensors sampling. Note: This parameter does not affect external sensors data collection for jobs/steps. .TP \fBExtSensorsType\fR Identifies the plugin to be used for external sensors data collection. Slurmctld calls this plugin to collect external sensors data for jobs/steps and hardware components. In case of node sharing between jobs, the reported values per job/step (through sstat or sacct) may not be accurate. See also "man ext_sensors.conf". Configurable values at present are: .RS .TP 20 \fBext_sensors/none\fR No external sensors data is collected. .TP \fBext_sensors/rrd\fR External sensors data is collected from the RRD database. .RE .TP \fBFairShareDampeningFactor\fR Dampen the effect of exceeding a user or group's fair share of allocated resources. Higher values provide a greater ability to differentiate between exceeding the fair share at high levels (e.g. a value of 1 results in almost no difference between overconsumption by a factor of 10 and 100, while a value of 5 will result in a significant difference in priority). The default value is 1.
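The external sensors parameters described above can be combined in slurm.conf as in the following sketch (the sampling interval is chosen only for illustration):

```conf
# Collect hardware sensor data (nodes, switches, etc.) from an RRD database
ExtSensorsType=ext_sensors/rrd
# Sample hardware components every 30 seconds; 0 would disable sampling
ExtSensorsFreq=30
```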
.TP \fBFastSchedule\fR Controls how a node's configuration specifications in slurm.conf are used. If the number of node configuration entries in the configuration file is significantly lower than the number of nodes, setting FastSchedule to 1 will permit much faster scheduling decisions to be made. (The scheduler can just check the values in a few configuration records instead of possibly thousands of node records.) Note that on systems with hyper\-threading, the processor count reported by the node will be twice the actual processor count. Consider which value you want to be used for scheduling purposes. .RS .TP 5 \fB0\fR Base scheduling decisions upon the actual configuration of each individual node, except that the node's processor count in Slurm's configuration must match the actual hardware configuration if \fBPreemptMode=suspend,gang\fR or \fBSelectType=select/cons_res\fR are configured (both of those plugins maintain resource allocation information using bitmaps for the cores in the system and must remain static, while the node's memory and disk space can be established later). .TP \fB1\fR (default) Consider the configuration of each node to be that specified in the slurm.conf configuration file and any node with less than the configured resources will be set to DRAIN. .TP \fB2\fR Consider the configuration of each node to be that specified in the slurm.conf configuration file and any node with less than the configured resources will \fBnot\fR be set DRAIN. This option is generally only useful for testing purposes. .RE .TP \fBFirstJobId\fR The job id to be used for the first job submitted to Slurm without a specific requested value. Job id values generated will be incremented by 1 for each subsequent job. This may be used to provide a meta\-scheduler with a job id space which is disjoint from the interactive jobs. The default value is 1. Also see \fBMaxJobId\fR. .TP \fBGetEnvTimeout\fR Used for Moab scheduled jobs only.
Controls how long, in seconds, a job should wait for loading the user's environment before attempting to load it from a cache file. Applies when the srun or sbatch \fI\-\-get\-user\-env\fR option is used. If set to 0 then always load the user's environment from the cache file. The default value is 2 seconds. .TP \fBGresTypes\fR A comma delimited list of generic resources to be managed. These generic resources may have an associated plugin available to provide additional functionality. No generic resources are managed by default. Ensure this parameter is consistent across all nodes in the cluster for proper operation. The slurmctld daemon must be restarted for changes to this parameter to become effective. .TP \fBGroupUpdateForce\fR If set to a non\-zero value, then information about which users are members of groups allowed to use a partition will be updated periodically, even when there have been no changes to the /etc/group file. Otherwise group member information will be updated periodically only after the /etc/group file is updated. The default value is 0. Also see the \fBGroupUpdateTime\fR parameter. .TP \fBGroupUpdateTime\fR Controls how frequently information about which users are members of groups allowed to use a partition will be updated. The time interval is given in seconds with a default value of 600 seconds and a maximum value of 4095 seconds. A value of zero will prevent periodic updating of group membership information. Also see the \fBGroupUpdateForce\fR parameter. .TP \fBHealthCheckInterval\fR The interval in seconds between executions of \fBHealthCheckProgram\fR. The default value is zero, which disables execution. .TP \fBHealthCheckNodeState\fR Identify what node states should execute the \fBHealthCheckProgram\fR. Multiple state values may be specified with a comma separator. The default value is ANY to execute on nodes in any state. .RS .TP 12 \fBALLOC\fR Run on nodes in the ALLOC state (all CPUs allocated). .TP \fBANY\fR Run on nodes in any state.
.TP \fBCYCLE\fR Rather than running the health check program on all nodes at the same time, cycle through running on all compute nodes through the course of the \fBHealthCheckInterval\fR. May be combined with the various node state options. .TP \fBIDLE\fR Run on nodes in the IDLE state. .TP \fBMIXED\fR Run on nodes in the MIXED state (some CPUs idle and other CPUs allocated). .RE .TP \fBHealthCheckProgram\fR Fully qualified pathname of a script to execute as user root periodically on all compute nodes that are \fBnot\fR in the NOT_RESPONDING state. This program may be used to verify the node is fully operational and DRAIN the node or send email if a problem is detected. Any action to be taken must be explicitly performed by the program (e.g. execute "scontrol update NodeName=foo State=drain Reason=tmp_file_system_full" to drain a node). The execution interval is controlled using the \fBHealthCheckInterval\fR parameter. Note that the \fBHealthCheckProgram\fR will be executed at the same time on all nodes to minimize its impact upon parallel programs. This program will be killed if it does not terminate normally within 60 seconds. By default, no program will be executed. .TP \fBInactiveLimit\fR The interval, in seconds, after which a non\-responsive job allocation command (e.g. \fBsrun\fR or \fBsalloc\fR) will result in the job being terminated. If the node on which the command is executed fails or the command abnormally terminates, this will terminate its job allocation. This option has no effect upon batch jobs. When setting a value, take into consideration that a debugger using \fBsrun\fR to launch an application may leave the \fBsrun\fR command in a stopped state for extended periods of time. This limit is ignored for jobs running in partitions with the \fBRootOnly\fR flag set (the scheduler running as root will be responsible for the job). The default value is unlimited (zero) and may not exceed 65533 seconds.
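As an illustration of the health-check parameters described above, the following slurm.conf fragment is a sketch only; the script path is hypothetical and must be replaced with a site-provided program:

```conf
# Run a site-provided health check every 5 minutes, cycling across the
# compute nodes over the interval rather than hitting them all at once
HealthCheckProgram=/usr/sbin/slurm_node_check.sh
HealthCheckInterval=300
HealthCheckNodeState=ANY,CYCLE
```

Any corrective action (e.g. draining a node via "scontrol update") must be taken by the script itself; Slurm only schedules its execution.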
.TP \fBJobAcctGatherType\fR The job accounting mechanism type. Acceptable values at present include "jobacct_gather/aix" (for the AIX operating system), "jobacct_gather/linux" (for the Linux operating system), "jobacct_gather/cgroup" and "jobacct_gather/none" (no accounting data collected). The default value is "jobacct_gather/none". "jobacct_gather/cgroup" is a plugin for the Linux operating system that uses cgroups to collect accounting statistics. The plugin collects the following statistics: From the cgroup memory subsystem: memory.usage_in_bytes (reported as 'pages') and rss from memory.stat (reported as 'rss'). From the cgroup cpuacct subsystem: user cpu time and system cpu time. No value is provided by cgroups for virtual memory size ('vsize'). In order to use the \fBsstat\fR tool, "jobacct_gather/aix", "jobacct_gather/linux", or "jobacct_gather/cgroup" must be configured. .br \fBNOTE:\fR Changing this configuration parameter changes the contents of the messages between Slurm daemons. Any previously running job steps are managed by a slurmstepd daemon that will persist through the lifetime of that job step and not change its communication protocol. Only change this configuration parameter when there are no running job steps. .TP \fBJobAcctGatherFrequency\fR The job accounting and profiling sampling intervals. The supported format is as follows: .RS .TP 12 \fBJobAcctGatherFrequency=\fR\fIdatatype\fR\fB=\fR\fIinterval\fR where \fIdatatype\fR\fB=\fR\fIinterval\fR specifies the task sampling interval for the jobacct_gather plugin or a sampling interval for a profiling type by the acct_gather_profile plugin. Multiple, comma-separated \fIdatatype\fR\fB=\fR\fIinterval\fR intervals may be specified. Supported datatypes are as follows: .RS .TP \fBtask=\fR\fIinterval\fR where \fIinterval\fR is the task sampling interval in seconds for the jobacct_gather plugins and for task profiling by the acct_gather_profile plugin.
.TP \fBenergy=\fR\fIinterval\fR where \fIinterval\fR is the sampling interval in seconds for energy profiling using the acct_gather_energy plugin. .TP \fBnetwork=\fR\fIinterval\fR where \fIinterval\fR is the sampling interval in seconds for infiniband profiling using the acct_gather_infiniband plugin. .TP \fBfilesystem=\fR\fIinterval\fR where \fIinterval\fR is the sampling interval in seconds for filesystem profiling using the acct_gather_filesystem plugin. .RE .RE The default value for the task sampling interval is 30 seconds. The default value for all other intervals is 0. An interval of 0 disables sampling of the specified type. If the task sampling interval is 0, accounting information is collected only at job termination (reducing Slurm interference with the job). .br .br Smaller (non\-zero) values have a greater impact upon job performance, but a value of 30 seconds is not likely to be noticeable for applications having less than 10,000 tasks. .br .br Users can independently override each interval on a per job basis using the \fB\-\-acctg\-freq\fR option when submitting the job. .RE .TP \fBJobAcctGatherParams\fR Arbitrary parameters for the job account gather plugin. Acceptable values at present include: .RS .TP 20 \fBNoShared\fR Exclude shared memory from accounting. .TP \fBUsePss\fR Use PSS value instead of RSS to calculate real usage of memory. The PSS value will be saved as RSS. .TP \fBNoOverMemoryKill\fR Do not kill processes that use more than the requested memory. This parameter should be used with caution, because if a job exceeds its memory allocation it may affect other processes and/or machine health. .RE .TP \fBJobCheckpointDir\fR Specifies the default directory for storing or reading job checkpoint information. The data stored here is only a few thousand bytes per job and includes information needed to resubmit the job request, not the job's memory image. The directory must be readable and writable by \fBSlurmUser\fR, but not writable by regular users.
The job memory images may be in a different location as specified by the \fB\-\-checkpoint\-dir\fR option at job submit time or scontrol's \fBImageDir\fR option. .TP \fBJobCompHost\fR The name of the machine hosting the job completion database. Only used for database type storage plugins, ignored otherwise. Also see \fBDefaultStorageHost\fR. .TP \fBJobCompLoc\fR The fully qualified file name where job completion records are written when the \fBJobCompType\fR is "jobcomp/filetxt", the database where job completion records are stored when the \fBJobCompType\fR is a database, or a URL with format http://yourelasticserver:port where job completion records are indexed when the \fBJobCompType\fR is "jobcomp/elasticsearch". Also see \fBDefaultStorageLoc\fR. .TP \fBJobCompPass\fR The password used to gain access to the database to store the job completion data. Only used for database type storage plugins, ignored otherwise. Also see \fBDefaultStoragePass\fR. .TP \fBJobCompPort\fR The listening port of the job completion database server. Only used for database type storage plugins, ignored otherwise. Also see \fBDefaultStoragePort\fR. .TP \fBJobCompType\fR The job completion logging mechanism type. Acceptable values at present include "jobcomp/none", "jobcomp/filetxt", "jobcomp/mysql", "jobcomp/elasticsearch" and "jobcomp/script". The default value is "jobcomp/none", which means that upon job completion the record of the job is purged from the system. If using the accounting infrastructure this plugin may not be of interest since the information here is redundant. The value "jobcomp/filetxt" indicates that a record of the job should be written to a text file specified by the \fBJobCompLoc\fR parameter. The value "jobcomp/mysql" indicates that a record of the job should be written to a MySQL or MariaDB database specified by the \fBJobCompLoc\fR parameter.
The value "jobcomp/script" indicates that a script specified by the \fBJobCompLoc\fR parameter is to be executed with environment variables indicating the job information. The value "jobcomp/elasticsearch" indicates that a record of the job should be written to an Elasticsearch server specified by the \fBJobCompLoc\fR parameter. .TP \fBJobCompUser\fR The user account for accessing the job completion database. Only used for database type storage plugins, ignored otherwise. Also see \fBDefaultStorageUser\fR. .TP \fBJobContainerType\fR Identifies the plugin to be used for job tracking. The slurmd daemon must be restarted for a change in JobContainerType to take effect. NOTE: The \fBJobContainerType\fR applies to a job allocation, while \fBProctrackType\fR applies to job steps. Acceptable values at present include: .RS .TP 20 \fBjob_container/cncu\fR used only for Cray systems (CNCU = Compute Node Clean Up) .TP \fBjob_container/none\fR used for all other system types .RE .TP \fBJobCredentialPrivateKey\fR Fully qualified pathname of a file containing a private key used for authentication by Slurm daemons. This parameter is ignored if \fBCryptoType=crypto/munge\fR. .TP \fBJobCredentialPublicCertificate\fR Fully qualified pathname of a file containing a public key used for authentication by Slurm daemons. This parameter is ignored if \fBCryptoType=crypto/munge\fR. .TP \fBJobFileAppend\fR This option controls what to do if a job's output or error file exists when the job is started. If \fBJobFileAppend\fR is set to a value of 1, then append to the existing file. By default, any existing file is truncated. .TP \fBJobRequeue\fR This option controls the default ability for batch jobs to be requeued. Jobs may be requeued explicitly by a system administrator, after node failure, or upon preemption by a higher priority job. If \fBJobRequeue\fR is set to a value of 1, then batch jobs may be requeued unless explicitly disabled by the user.
If \fBJobRequeue\fR is set to a value of 0, then batch jobs will not be requeued unless explicitly enabled by the user. Use the \fBsbatch\fR \fI\-\-no\-requeue\fR or \fI\-\-requeue\fR option to change the default behavior for individual jobs. The default value is 1. .TP \fBJobSubmitPlugins\fR A comma delimited list of job submission plugins to be used. The specified plugins will be executed in the order listed. These are intended to be site\-specific plugins which can be used to set default job parameters and/or logging events. Sample plugins available in the distribution include "all_partitions", "cnode", "defaults", "logging", "lua", and "partition". For examples of use, see the Slurm code in "src/plugins/job_submit" and "contribs/lua/job_submit*.lua", then modify the code to satisfy your needs. Slurm can be configured to use multiple job_submit plugins if desired, however the lua plugin will only execute one lua script named "job_submit.lua" located in the default script directory (typically the subdirectory "etc" of the installation directory). No job submission plugins are used by default. .TP \fBKeepAliveTime\fR Specifies how long socket communications used between the srun command and its slurmstepd process are kept alive after disconnect. Longer values can be used to improve reliability of communications in the event of network failures. By default, the system default value is used. The value may not exceed 65533. .TP \fBKillOnBadExit\fR If set to 1, the job will be terminated immediately when one of its processes crashes or aborts. With the default value of 0, if one of the processes crashes or aborts, the other processes will continue to run. The user can override this configuration parameter by using srun's \fB\-K\fR, \fB\-\-kill\-on\-bad\-exit\fR option. .TP \fBKillWait\fR The interval, in seconds, given to a job's processes between the SIGTERM and SIGKILL signals upon reaching its time limit.
If the job fails to terminate gracefully in the interval specified, it will be forcibly terminated. The default value is 30 seconds. The value may not exceed 65533. .TP \fBLaunchParameters\fR Identifies options to the job launch plugin. Acceptable values include: .RS .TP 12 \fBtest_exec\fR Validate the executable command's existence prior to attempting launch on the compute nodes .RE .TP \fBLaunchType\fR Identifies the mechanism to be used to launch application tasks. Acceptable values include: .RS .TP 15 \fBlaunch/aprun\fR For use with Cray systems with ALPS and the default value for those systems .TP \fBlaunch/poe\fR For use with IBM Parallel Environment (PE) and the default value for systems with the IBM NRT library installed. .TP \fBlaunch/runjob\fR For use with IBM BlueGene/Q systems and the default value for those systems .TP \fBlaunch/slurm\fR For all other systems and the default value for those systems .RE .TP \fBLicenses\fR Specification of licenses (or other resources available on all nodes of the cluster) which can be allocated to jobs. License names can optionally be followed by a colon and count with a default count of one. Multiple license names should be comma separated (e.g. "Licenses=foo:4,bar"). Note that Slurm prevents jobs from being scheduled if their required license specification is not available. Slurm does not prevent jobs from using licenses that are not explicitly listed in the job submission specification. .TP \fBLogTimeFormat\fR Format of the timestamp in slurmctld and slurmd log files. Accepted values are "iso8601", "iso8601_ms", "rfc5424", "rfc5424_ms", "clock", "short" and "thread_id". The values ending in "_ms" differ from the ones without in that fractional seconds with millisecond precision are printed. The default value is "iso8601_ms". The "rfc5424" formats are the same as the "iso8601" formats except that the timezone value is also shown.
The "clock" format shows a timestamp in microseconds retrieved with the C standard clock() function. The "short" format is a short date and time format. The "thread_id" format shows the timestamp in the C standard ctime() function form without the year but including the microseconds, the daemon's process ID and the current thread ID. .TP \fBMailProg\fR Fully qualified pathname to the program used to send email per user request. The default value is "/bin/mail". .TP \fBMaxArraySize\fR The maximum job array size. The maximum job array task index value will be one less than MaxArraySize to allow for an index value of zero. Configure MaxArraySize to 0 in order to disable job array use. The value may not exceed 4000001. The value of \fBMaxJobCount\fR should be much larger than \fBMaxArraySize\fR. The default value is 1001. .TP \fBMaxJobCount\fR The maximum number of jobs Slurm can have in its active database at one time. Set the values of \fBMaxJobCount\fR and \fBMinJobAge\fR to ensure the slurmctld daemon does not exhaust its memory or other resources. Once this limit is reached, requests to submit additional jobs will fail. The default value is 10000 jobs. NOTE: Each task of a job array counts as one job even though they will not occupy separate job records until modified or initiated. Performance can suffer with more than a few hundred thousand jobs. Setting MaxSubmitJobs per user is generally valuable to prevent a single user from filling the system with jobs. This is accomplished using Slurm's database and configuring enforcement of resource limits. This value may not be reset via "scontrol reconfig". It only takes effect upon restart of the slurmctld daemon. .TP \fBMaxJobId\fR The maximum job id to be used for jobs submitted to Slurm without a specific requested value EXCEPT for jobs visible between clusters. Job id values generated will be incremented by 1 for each subsequent job. Once \fBMaxJobId\fR is reached, the next job will be assigned \fBFirstJobId\fR.
The default value is 2,147,418,112 (0x7fff0000). Jobs visible across clusters will always have a job ID of 2,147,483,648 or higher. Also see \fBFirstJobId\fR. .TP \fBMaxMemPerCPU\fR Maximum real memory size available per allocated CPU in MegaBytes. Used to avoid over\-subscribing memory and causing paging. \fBMaxMemPerCPU\fR would generally be used if individual processors are allocated to jobs (\fBSelectType=select/cons_res\fR). The default value is 0 (unlimited). Also see \fBDefMemPerCPU\fR and \fBMaxMemPerNode\fR. \fBMaxMemPerCPU\fR and \fBMaxMemPerNode\fR are mutually exclusive. NOTE: Enforcement of memory limits currently requires enabling of accounting, which samples memory use on a periodic basis (data need not be stored, just collected). NOTE: If a job specifies a memory per CPU limit that exceeds this system limit, that job's count of CPUs per task will automatically be increased. This may result in the job failing due to CPU count limits. .TP \fBMaxMemPerNode\fR Maximum real memory size available per allocated node in MegaBytes. Used to avoid over\-subscribing memory and causing paging. \fBMaxMemPerNode\fR would generally be used if whole nodes are allocated to jobs (\fBSelectType=select/linear\fR) and resources are shared (\fBShared=yes\fR or \fBShared=force\fR). The default value is 0 (unlimited). Also see \fBDefMemPerNode\fR and \fBMaxMemPerCPU\fR. \fBMaxMemPerCPU\fR and \fBMaxMemPerNode\fR are mutually exclusive. NOTE: Enforcement of memory limits currently requires enabling of accounting, which samples memory use on a periodic basis (data need not be stored, just collected). .TP \fBMaxStepCount\fR The maximum number of steps that any job can initiate. This parameter is intended to limit the effect of bad batch scripts. The default value is 40000 steps. .TP \fBMaxTasksPerNode\fR Maximum number of tasks Slurm will allow a job step to spawn on a single node. The default \fBMaxTasksPerNode\fR is 128. May not exceed 65533. 
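The per-CPU memory parameters described above can be combined as in the following slurm.conf sketch (the values are illustrative only, not recommendations):

```conf
# Cluster allocating individual CPUs to jobs
SelectType=select/cons_res
# Jobs that do not request memory get 2048 MB per allocated CPU
DefMemPerCPU=2048
# A request above 4096 MB per CPU raises the job's CPUs-per-task count,
# which may cause the job to fail against CPU count limits
MaxMemPerCPU=4096
```

Note that enforcing these limits still requires accounting to be enabled, as stated in the parameter descriptions.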
.TP \fBMemLimitEnforce\fR If set to "no" then Slurm will not terminate the job or the job step if they exceed the value requested using the \-\-mem\-per\-cpu option of salloc/sbatch/srun. This is useful if jobs need to specify \-\-mem\-per\-cpu for scheduling but should not be terminated if they exceed the estimated value. The default value is "yes": terminate the job/step if it exceeds the requested memory. .TP \fBMessageTimeout\fR Time permitted for a round\-trip communication to complete in seconds. Default value is 10 seconds. For systems with shared nodes, the slurmd daemon could be paged out and necessitate higher values. .TP \fBMinJobAge\fR The minimum age of a completed job before its record is purged from Slurm's active database. Set the values of \fBMaxJobCount\fR and \fBMinJobAge\fR to ensure the slurmctld daemon does not exhaust its memory or other resources. The default value is 300 seconds. A value of zero prevents any job record purging. In order to eliminate some possible race conditions, the recommended minimum non\-zero value for \fBMinJobAge\fR is 2. .TP \fBMpiDefault\fR Identifies the default type of MPI to be used. Srun may override this configuration parameter in any case. Currently supported versions include: \fBlam\fR, \fBmpich1_p4\fR, \fBmpich1_shmem\fR, \fBmpichgm\fR, \fBmpichmx\fR, \fBmvapich\fR, \fBnone\fR (default, which works for many other versions of MPI), \fBopenmpi\fR and \fBpmi2\fR. More information about MPI use is available here . .TP \fBMpiParams\fR MPI parameters. Used to identify ports used by OpenMPI only and the input format is "ports=12000\-12999" to identify a range of communication ports to be used. .TP \fBMsgAggregationParams\fR Message aggregation parameters. Message aggregation is an optional feature that may improve system performance by reducing the number of separate messages passed between nodes. The feature works by routing messages through one or more message collector nodes between their source and destination nodes.
At each collector node, messages with the same destination received during a defined message collection window are packaged into a single composite message. When the window expires, the composite message is sent to the next collector node on the route to its destination. The route between each source and destination node is provided by the Route plugin. When a composite message is received at its destination node, the original messages are extracted and processed as if they had been sent directly. .br .br Currently, the only message types supported by message aggregation are the node registration, batch script completion, step completion, and epilog complete messages. .br .br .RE The format for this parameter is as follows: .br .RS .TP 12 \fBMsgAggregationParams=\fR\fI