cons-2.2.0.orig/MANIFEST

CHANGES COPYING COPYRIGHT INSTALL README RELEASE TODO cons cons.1.gz
cons.bat cons.html

cons-2.2.0.orig/CHANGES

Cons is a software construction system.  A description appears under
"Cons 1.0", below.

$Id: CHANGES,v 1.101.2.1 2000/11/17 05:39:08 knight Exp $


Cons 2.2.0
**********

- Use the Digest::MD5 module in preference to the now-deprecated MD5
  module.  Use MD5 if Digest::MD5 isn't available.

- Fix the Objects method to return meaningful path names even if the
  object isn't underneath the Conscript directory:  a top-relative
  path name if it's underneath the Construct directory, an absolute
  path name otherwise.

- Fix documentation:  CCOM should be CCCOM.  Remove POD directives
  that were showing up in a verbatim paragraph.  The verbatim %[-%]
  example didn't work intuitively.  Change misleading use of $ENV for
  an environment name, which doesn't work because it's special to
  Perl.

- Missed some WIN32-specific -w warnings; now fixed.

- File name fixes:  Added canonicalization so dir//file and dir/file
  refer to the same file.  Simplified lookups in the root directory to
  avoid reporting /bin/cat as //bin/cat.

- Fix a bug that wouldn't allow SUFOBJ to have more than one dot (that
  is, no object files with names like .arch.o).

- Die with an appropriate message when someone tries to Export ENV or
  any other special Perl variable that can only exist in the main::
  package.  (Bug reported by Johan Holmberg and Eric Brown.)

- Rework Conscript variable namespace maintenance to allow external
  packages to be used.  Remove variable names from the script::
  namespace instead of undef'ing their values.  Restore names whenever
  we execute perl code instead of a shell command.  (Based on work by
  Johan Holmberg.)
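The Digest::MD5 preference described above can be sketched as a simple
runtime fallback.  This is an illustrative Perl pattern only, not
Cons's actual loading code; the variable names are hypothetical:

```perl
# Illustrative sketch (not Cons's actual code): prefer the Digest::MD5
# module, falling back to the deprecated MD5 module if necessary.
my $md5_class;
if (eval { require Digest::MD5; 1 }) {
    $md5_class = 'Digest::MD5';
} elsif (eval { require MD5; 1 }) {
    $md5_class = 'MD5';
} else {
    die "cons: no MD5 module found; install Digest::MD5 from CPAN\n";
}

# Both modules expose the same object interface for signatures:
my $sig = $md5_class->new->add("file contents")->hexdigest;
```

The eval/require idiom lets the script probe for an optional module at
run time instead of failing at compile time with a hard `use`.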
- When using the -t option, change how the top is located to avoid
  calling dir::init() too early.  This was causing Cons to use full
  path names for files and causing needless recompiles when jumping
  back and forth between using -t and not.  (Based on a mailing-list
  patch by Wayne Scott.)

- When using the -t option, emulate make by printing a "cons: Entering
  directory `xxx'" message when we chdir to the Construct directory,
  to allow the emacs compile feature to keep track of where files with
  build errors really are.  (Based on a mailing-list patch by Wayne
  Scott.)

- Allow the Depends method to take an array ref of targets to specify
  multiple target dependencies at once.  (Bug reported by Zachary
  Deretsky.)

- The QuickScan method's PATH was hard-coded to use ':' as the
  separator.  Fix to use ';' on Windows NT, or to use an array ref of
  directories.  (Bug reported by Zachary Deretsky.)

- Fix the SourcePath method so it can process and return an array.

Contributions from Gary Oberbrunner

- Support executing perl code instead of shell commands to build a
  target.


Cons 2.1.2
**********

- Unlink after rename of similarly named files, per Unix98.  (Reported
  by Bradley White.)

- Allow %% to insert a single % anywhere in a command line, even when
  surrounded by non-whitespace.  A lone % still gets passed through
  untouched, but a lone %% now becomes %.

- Cons (and all tests) are now completely Perl -w clean.

- Add support for changing directory to the containing directory of
  Conscript files, controlled by a Conscript_chdir() method, and
  disabled by default (preserving current behavior).  This allows file
  globbing via Perl's <*.c> syntax.

- When building all targets in a directory, explicitly skip '.' and
  '..' instead of skipping all entries beginning with '.', to allow
  specification of targets with initial '.' names.

- Use $Config{} to fetch the platform's file suffixes.  Put these in a
  platform-independent default environment.
  Put the rest of what used to be the default environment into a
  separate-but-equal UNIX defaults "overrides" hash, like the Win32
  overrides.  This will all help future portability to other
  platforms.

- We now call $env->_subst() to expand variables from the construction
  environment in source and target file names for all cons methods.
  Existing tests have been modified to verify this for all tested
  methods.  (Bug reported by Johan Holmberg.)

- Change the case-insensitive mapping for initial drive letters from
  lower-case to upper-case.  This better conforms to the canonical
  Win32 representation.

- Make futil::copy more robust:  Use chmod() to make the file writable
  before calling utime() (permissions govern the ability to modify
  file times on some operating systems), and then chmod back to the
  desired mode if needed.

- Fetch a command's exit status by looking at the proper upper-order
  bits on Win32, too.  We were reporting an exit status of 1 as 256,
  for example.  Move where this happens so the WIN32 logic is grouped.

- The internal File::Spec::splitpath method wasn't recognizing UNC
  volume names; now it does.  (Bug reported by Bruce Leverett.)

- Add a file::addsuffix call to the Program method so it can append
  SUFEXE if it's not already there, like the Library method appends
  SUFLIB.  This will increase Construct/Conscript file portability by
  allowing people to drop the ".exe" from the Program specification,
  like they do with ".lib" for libraries.  (Reported by M. C. Srivas.)

Contributions from Johan Holmberg

- Make Cons execute cleanly under "use strict" (-Mstrict).


Cons 2.1.1
**********

- First separate development release.

- Specifying a directory target on the command line wouldn't work with
  -t because it didn't add the updir entry ('..') when it re-blessed
  the target from a generic entry to a directory.  (Bug reported by
  Damien Neil.)

- Executing cons -t from the top level directory unnecessarily
  prefixed all targets with the current working directory path name.
  No longer.
- Handle WIN32 case-insensitive initial drive letters (C: is the same
  as c:).  Add an internal File::Spec::case_tolerant method (parallel
  to the real interface) to decide whether we should lower-case the
  volume name.  (Bug reported by Greg Spencer.)

- Add file name and line numbers to the error messages for ignoring a
  non-existent Conscript file and attempting to build a target two
  different ways.

Contributions from Johan Holmberg

- Change an explicit check for "file" targets in a directory being
  built to a less-restrictive check for any non-"dir" targets.  This
  preserves existing behavior if the target is an "entry" (as can
  happen via the Depends directive).

Contributions from Damien Neil

- Fix %[ %] processing to match multiple whitespace characters and
  suppress zero-length arguments.

- Remove "use File::Spec" since we ship our own internal copy.

Contributions from Gary Oberbrunner

- When using -d, add a "Rebuilding" line to the output to identify
  what's being rebuilt (and, implicitly, what's causing the rebuild).

- When computing the signature of a file's contents, add a binmode()
  call to the file handle so end-of-file character values (like ^Z on
  WIN32) won't prematurely terminate signature calculation.

Contributions from Juan Altmayer Pizzorno

- In the Win32 'CXXCOM' command, change the '-c' to '/c' for
  consistency.


Cons 2.0.1
**********

- Fix a QuickScan regression:  CODE refs that return arrays broke
  because the calling context changed from array to scalar.  Keep it
  in array context and use grep to weed out null entries.

- The map{} function SplitPath returned was using @_, not $_, so it
  would return multiple copies of the whole list, thereby only working
  for a single argument.

- Fix the cons.bat file copyright comments.  (Pointed out by Juan
  Altmayer Pizzorno.)

- Add CXX* default rules for win32.  (Pointed out by Juan Altmayer
  Pizzorno.)

- Documentation fix in "Adding new methods" section.
  (Pointed out by Juan Altmayer Pizzorno.)


Cons 2.0
********

- Release Version of 1.8b2 (see 1.8b2 for all the changes).

- Redo the copyright to be GNU General Public License.

- Integrate the tests from the cons-test suite into the cons release,
  creating separate tar files with and without the tests.


Cons 1.8b2
**********

Contributions from Wayne Scott

- Create subdirectories in the Cache directory if they don't already
  exist.

- Don't match #include delimiters in comments on the same line.

- Prevent scan::quickscan::scan from adding null entries to its
  include-file list.

Contributions from Damien Neil

- An initial '@' on a command line suppresses its printing (a la
  make).

- Make Cons realize that topdir is part of the file system, allowing
  building/installing in directories more than one level up.

- Add support for rewriting portions of the command line enclosed by
  %[-%] by passing them to a named code ref in the Cons environment.

- Add a -t flag to walk up the directory hierarchy looking for a
  Construct file, allowing invocation from child directories.
  (Contributed with Greg Spencer.)

Contributions from Brad Garcia

- Make the default environment 'LINK' => '%CXX', which in turn
  transparently maps to '%CC' for anyone not using C++.

Contributions from Greg Spencer

- Add support for %:b, %:s, %:F and %:a suffixes on pseudo-variables.

- Add Win32 default environment overrides for Microsoft VC++ 6.0.

- Add a -t flag to walk up the directory hierarchy looking for a
  Construct file, allowing invocation from child directories.
  (Contributed with Damien Neil.)  Add hooks to build an associated
  Linked subdirectory if -t is invoked from a source subdirectory.

- Fix futil::install (and the other futil methods) so they warn, not
  die, and return failure on error.

- Add a Precious method to suppress unlinking a file before building.

Contributions from Eric Shienbrood

- Fix the Objects method to return path names relative to the
  Conscript file.
Contributions from Johan Holmberg

- Bug fixes in new File::Spec logic for incorrect use of Boolean tests
  instead of string comparisons against ''.

- Performance optimizations:  fix caching values in dir::is_linked and
  futil::mkdir methods.

- CPPPATH as an array ref wasn't properly expanding %-variables,
  causing missed dependencies.

Contributions from Gary Oberbrunner

- Have -d print the dependencies in-line with the build.  Identify
  what's a Target and what's a dependency we're Checking.

Contributions from Steven Knight

- Rewrite all path name manipulation using File::Spec so we're as
  portable as possible (at least between UNIX and NT).  Add support
  for multiple file system volumes through a hash of $root nodes.

- Due to incompatibility problems with some versions of File::Spec on
  some versions of Perl, create our own internal File::Spec class with
  methods cut-and-pasted from the real module, so people don't have to
  install an external File::Spec.

- Rewrite "options" routine; now hash-driven for speed and
  readability.

- Let Default be called more than once.  Document it.

- Rewrite the dir::lookup and dir::lookupdir methods to avoid changing
  a file node into a dir node if the names match.  This was causing
  "Can't locate object method" errors.  We now enforce a distinction
  between the two node types, so add dir::lookupfile and
  dir::lookupdir methods and change calls as appropriate.  If someone
  tries to use a dir as a file or vice versa, die and report the error
  with info about what's wrong, where we detected the conflict, and
  where the node was created.  If a dir was created as a file via
  "FilePath," change it to a dir (preserving old behavior), but warn
  them that they should use "DirPath" instead.  Add an empty 'entry'
  subclass for nodes which we don't yet know are files or directories.
  (Bug reported by Eric Shienbrood.)

- Have srcsig return a '' signature, not die, if the file can't be
  read.

- Fix a bunch of perl -w warnings.  There are still a few warnings
  left.
- Replace the hard-coded global FLAG_CHARACTER and LIB_FLAG_PREFIX
  values with INCDIRPREFIX and LIBDIRPREFIX values in the Cons
  environment, so people can simultaneously use multiple
  compilers/linkers with different calling conventions.  (Bug reported
  by Johan Holmberg.)

- Documentation cleanup and updates.

- Minor changes to make sure Cons still executes under Perl 5.003.
  Expand the testing to include Cons under Perl versions 5.00[345] as
  well as 5.6.0.


Cons 1.7
********

Contributions from Rajesh Vaidheeswarran

- Add QuickScan documentation.

- Add Argument passing from cons to Construct.

- Remove modification history feature from cons due to duplication and
  bloat.

Contributions from Brad Garcia

- Add ability to compile both C and C++ files using the same
  environment.  Based off of code written by Michael Polis.


Cons 1.6b1
**********

Contributions from Rajesh Vaidheeswarran

- Fix futil::copy to return undef if unable to copy a target to the
  cache instead of aborting the build process.


Cons 1.6a3
**********

Contributions from

- Fix for Win32:  [problem: 'require'ing Win32 won't work when the
  Win32 module isn't installed, as is the case for a generic perl
  installation (not ActiveState)].

Contributions from Rajesh Vaidheeswarran

- InstallAs modified to accept either a single file or a list of files
  as source and target.


Cons 1.6a2
**********

Contributions from Rajesh Vaidheeswarran

- Bugfix for Repository to add the path to INC to enable perl packages
  in the local tree to be used in Cons scripts.

- Bugfix for SourcePath to return the correct path in a
  repository-enabled build.

- Add InstallAs method to link/copy a source file to any arbitrary
  target file name.

- Move all the documentation to cons itself, and get rid of the pod
  file from the distribution.  Users can now use perldoc on cons to
  see the docs.
Cons 1.6a1
**********

Contributions from Steven Knight

- Fixed a bug where .consign was closed improperly when cons is
  terminated with a signal, leading to rebuilds of various targets
  that had already been built.

Contributions from Rajesh Vaidheeswarran

- New flag -q (for quiet) to be less verbose about what files were
  `Install'ed (or `Removed' when used in conjunction with -r).


Cons 1.5
********

Contributions from Steven Knight

- Added Repository global function, similar to make's VPATH, and
  supporting functions:  Local, Install_Local, Repository_List,
  Repository_Sig_Times_OK.

- Added -R command-line option as a synonym for Repository.

- Bugfix for SplitPath (when given a path with N components, it
  returned N**2 components).


Cons 1.4a3
**********

Contributions from Rajesh Vaidheeswarran

- Construction local help.  Exported command:  Help ;


Cons 1.4a2
**********

Contributions from Bob Sidebotham:

- LIBPATH, CPPPATH, and program search path ENV->{PATH} may all now be
  arrays.  The old usage (colon-separated strings) is deprecated, but
  still supported.  In the old form, on Unix, ':' separates path
  components, and on Windows, ';' separates them.

- PATH_SEPERATOR was finally changed to PATH_SEPARATOR.

Contributions from Steven Knight

- Default target support has been added (see RELEASE for details).


Cons 1.4 (alpha1)
*****************

This release of Cons has a number of changes.  Briefly, they are:

- a QuickScan function that makes it trivial to set up simple
  dependency scanners, by Bob Sidebotham.

- improvements in signature calculation for better control of
  rebuilds.

- a caching mechanism for sharing derived files between builds.

- new global functions:  UseCache, Salt, SourcePath, ConsPath.

- some minor cleanup.


Cons 1.3.1
**********

This is a minor release with limited shared library support
contributed by Gary Oberbrunner.  Documentation is now maintained in
pod format, thanks to the cons.pod file from Ulrich Pfeifer.
Cons 1.3
********

This is the first combined Win32 and Unix cons, contributed by Rajesh
Vaidheeswarran.  This contains some significant fixes that enable the
same cons file to be used for both platforms.


Cons 1.2
********

This is the WIN32 port of cons, by Chriss Stephens, Rajesh
Vaidheeswarran and Jochen Schwarze.


Cons 1.1
********

This is a minor patch release to cons 1.0.  This contains a number of
minor changes, a bug fix affecting multi-target commands, and a couple
of minor new features.  A list of changes from 1.0 to 1.1 is included
in the file CHANGES.  There are no incompatible changes between 1.0
and 1.1.

The NT support is working well here, but it still hasn't been
integrated into a single version of cons.  The changes are quite
simple, and if anyone wants it let me know.


Cons 1.0
********

This is a Perl5-based make replacement, but does not provide make
compatibility.  You will need Perl 5.002 or better and the Perl MD5
Extension (MD5-1.6.tar.gz), available from CPAN.

This program is known to work on a variety of platforms:  it's in
production use on versions of SunOS, Solaris, HPUX, AIX, and IRIX.
The current program will not work correctly on Windows/NT, but we do
have an internal version that does appear to work on that platform,
but has not been well tested.  If anyone is interested, contact me.

PostScript documentation is in cons.ps.  The following is an excerpt
from the introduction in cons.ps:

Cons is a system for constructing, primarily, software, but is quite
different from previous software construction systems.  Cons was
designed from the ground up to deal easily with the construction of
software spread over multiple source directories.  Cons makes it easy
to create build scripts that are simple, understandable and
maintainable.  Cons ensures that complex software is easily and
accurately reproducible.

Cons uses a number of techniques to accomplish all of this.
Construction scripts are just Perl scripts, making them both easy to
comprehend and very flexible.  Global scoping of variables is replaced
with an import/export mechanism for sharing information between
scripts, significantly improving the readability and maintainability
of each script.  Construction environments are introduced:  these are
Perl objects that capture the information required for controlling the
build process.  Multiple environments are used when different
semantics are required for generating products in the build tree.

Cons implements automatic dependency analysis and uses this to
globally sequence the entire build.  Variant builds are easily
produced from a single source tree.  Intelligent build subsetting is
possible, when working on localized changes.  Overrides can be set up
to easily override build instructions without modifying any scripts.
MD5 cryptographic signatures are associated with derived files, and
are used to accurately determine whether a given file needs to be
rebuilt.

Complaints, suggestions, kudos, etc. to:

    Bob Sidebotham
    cons-discuss@eng.fore.com
    FORE Systems
    Pittsburgh, PA.

cons-2.2.0.orig/COPYING

                    GNU GENERAL PUBLIC LICENSE
                       Version 2, June 1991

 Copyright (C) 1989, 1991 Free Software Foundation, Inc.
 59 Temple Place, Suite 330, Boston, MA 02111-1307 USA
 Everyone is permitted to copy and distribute verbatim copies
 of this license document, but changing it is not allowed.

                            Preamble

  The licenses for most software are designed to take away your
freedom to share and change it.  By contrast, the GNU General Public
License is intended to guarantee your freedom to share and change free
software--to make sure the software is free for all its users.  This
General Public License applies to most of the Free Software
Foundation's software and to any other program whose authors commit to
using it.
(Some other Free Software Foundation software is covered by the GNU
Library General Public License instead.)  You can apply it to your
programs, too.

  When we speak of free software, we are referring to freedom, not
price.  Our General Public Licenses are designed to make sure that you
have the freedom to distribute copies of free software (and charge for
this service if you wish), that you receive source code or can get it
if you want it, that you can change the software or use pieces of it
in new free programs; and that you know you can do these things.

  To protect your rights, we need to make restrictions that forbid
anyone to deny you these rights or to ask you to surrender the rights.
These restrictions translate to certain responsibilities for you if
you distribute copies of the software, or if you modify it.

  For example, if you distribute copies of such a program, whether
gratis or for a fee, you must give the recipients all the rights that
you have.  You must make sure that they, too, receive or can get the
source code.  And you must show them these terms so they know their
rights.

  We protect your rights with two steps: (1) copyright the software,
and (2) offer you this license which gives you legal permission to
copy, distribute and/or modify the software.

  Also, for each author's protection and ours, we want to make certain
that everyone understands that there is no warranty for this free
software.  If the software is modified by someone else and passed on,
we want its recipients to know that what they have is not the
original, so that any problems introduced by others will not reflect
on the original authors' reputations.

  Finally, any free program is threatened constantly by software
patents.  We wish to avoid the danger that redistributors of a free
program will individually obtain patent licenses, in effect making the
program proprietary.  To prevent this, we have made it clear that any
patent must be licensed for everyone's free use or not licensed at
all.
  The precise terms and conditions for copying, distribution and
modification follow.

                    GNU GENERAL PUBLIC LICENSE
   TERMS AND CONDITIONS FOR COPYING, DISTRIBUTION AND MODIFICATION

  0. This License applies to any program or other work which contains
a notice placed by the copyright holder saying it may be distributed
under the terms of this General Public License.  The "Program", below,
refers to any such program or work, and a "work based on the Program"
means either the Program or any derivative work under copyright law:
that is to say, a work containing the Program or a portion of it,
either verbatim or with modifications and/or translated into another
language.  (Hereinafter, translation is included without limitation in
the term "modification".)  Each licensee is addressed as "you".

Activities other than copying, distribution and modification are not
covered by this License; they are outside its scope.  The act of
running the Program is not restricted, and the output from the Program
is covered only if its contents constitute a work based on the Program
(independent of having been made by running the Program).  Whether
that is true depends on what the Program does.

  1. You may copy and distribute verbatim copies of the Program's
source code as you receive it, in any medium, provided that you
conspicuously and appropriately publish on each copy an appropriate
copyright notice and disclaimer of warranty; keep intact all the
notices that refer to this License and to the absence of any warranty;
and give any other recipients of the Program a copy of this License
along with the Program.

You may charge a fee for the physical act of transferring a copy, and
you may at your option offer warranty protection in exchange for a
fee.

  2.
You may modify your copy or copies of the Program or any portion of
it, thus forming a work based on the Program, and copy and distribute
such modifications or work under the terms of Section 1 above,
provided that you also meet all of these conditions:

    a) You must cause the modified files to carry prominent notices
    stating that you changed the files and the date of any change.

    b) You must cause any work that you distribute or publish, that in
    whole or in part contains or is derived from the Program or any
    part thereof, to be licensed as a whole at no charge to all third
    parties under the terms of this License.

    c) If the modified program normally reads commands interactively
    when run, you must cause it, when started running for such
    interactive use in the most ordinary way, to print or display an
    announcement including an appropriate copyright notice and a
    notice that there is no warranty (or else, saying that you provide
    a warranty) and that users may redistribute the program under
    these conditions, and telling the user how to view a copy of this
    License.  (Exception: if the Program itself is interactive but
    does not normally print such an announcement, your work based on
    the Program is not required to print an announcement.)

These requirements apply to the modified work as a whole.  If
identifiable sections of that work are not derived from the Program,
and can be reasonably considered independent and separate works in
themselves, then this License, and its terms, do not apply to those
sections when you distribute them as separate works.  But when you
distribute the same sections as part of a whole which is a work based
on the Program, the distribution of the whole must be on the terms of
this License, whose permissions for other licensees extend to the
entire whole, and thus to each and every part regardless of who wrote
it.
Thus, it is not the intent of this section to claim rights or contest
your rights to work written entirely by you; rather, the intent is to
exercise the right to control the distribution of derivative or
collective works based on the Program.

In addition, mere aggregation of another work not based on the Program
with the Program (or with a work based on the Program) on a volume of
a storage or distribution medium does not bring the other work under
the scope of this License.

  3. You may copy and distribute the Program (or a work based on it,
under Section 2) in object code or executable form under the terms of
Sections 1 and 2 above provided that you also do one of the following:

    a) Accompany it with the complete corresponding machine-readable
    source code, which must be distributed under the terms of Sections
    1 and 2 above on a medium customarily used for software
    interchange; or,

    b) Accompany it with a written offer, valid for at least three
    years, to give any third party, for a charge no more than your
    cost of physically performing source distribution, a complete
    machine-readable copy of the corresponding source code, to be
    distributed under the terms of Sections 1 and 2 above on a medium
    customarily used for software interchange; or,

    c) Accompany it with the information you received as to the offer
    to distribute corresponding source code.  (This alternative is
    allowed only for noncommercial distribution and only if you
    received the program in object code or executable form with such
    an offer, in accord with Subsection b above.)

The source code for a work means the preferred form of the work for
making modifications to it.  For an executable work, complete source
code means all the source code for all modules it contains, plus any
associated interface definition files, plus the scripts used to
control compilation and installation of the executable.
However, as a special exception, the source code distributed need not
include anything that is normally distributed (in either source or
binary form) with the major components (compiler, kernel, and so on)
of the operating system on which the executable runs, unless that
component itself accompanies the executable.

If distribution of executable or object code is made by offering
access to copy from a designated place, then offering equivalent
access to copy the source code from the same place counts as
distribution of the source code, even though third parties are not
compelled to copy the source along with the object code.

  4. You may not copy, modify, sublicense, or distribute the Program
except as expressly provided under this License.  Any attempt
otherwise to copy, modify, sublicense or distribute the Program is
void, and will automatically terminate your rights under this License.
However, parties who have received copies, or rights, from you under
this License will not have their licenses terminated so long as such
parties remain in full compliance.

  5. You are not required to accept this License, since you have not
signed it.  However, nothing else grants you permission to modify or
distribute the Program or its derivative works.  These actions are
prohibited by law if you do not accept this License.  Therefore, by
modifying or distributing the Program (or any work based on the
Program), you indicate your acceptance of this License to do so, and
all its terms and conditions for copying, distributing or modifying
the Program or works based on it.

  6. Each time you redistribute the Program (or any work based on the
Program), the recipient automatically receives a license from the
original licensor to copy, distribute or modify the Program subject to
these terms and conditions.  You may not impose any further
restrictions on the recipients' exercise of the rights granted herein.
You are not responsible for enforcing compliance by third parties to
this License.

  7.
If, as a consequence of a court judgment or allegation of patent
infringement or for any other reason (not limited to patent issues),
conditions are imposed on you (whether by court order, agreement or
otherwise) that contradict the conditions of this License, they do not
excuse you from the conditions of this License.  If you cannot
distribute so as to satisfy simultaneously your obligations under this
License and any other pertinent obligations, then as a consequence you
may not distribute the Program at all.  For example, if a patent
license would not permit royalty-free redistribution of the Program by
all those who receive copies directly or indirectly through you, then
the only way you could satisfy both it and this License would be to
refrain entirely from distribution of the Program.

If any portion of this section is held invalid or unenforceable under
any particular circumstance, the balance of the section is intended to
apply and the section as a whole is intended to apply in other
circumstances.

It is not the purpose of this section to induce you to infringe any
patents or other property right claims or to contest validity of any
such claims; this section has the sole purpose of protecting the
integrity of the free software distribution system, which is
implemented by public license practices.  Many people have made
generous contributions to the wide range of software distributed
through that system in reliance on consistent application of that
system; it is up to the author/donor to decide if he or she is willing
to distribute software through any other system and a licensee cannot
impose that choice.

This section is intended to make thoroughly clear what is believed to
be a consequence of the rest of this License.

  8.
If the distribution and/or use of the Program is restricted in certain
countries either by patents or by copyrighted interfaces, the original
copyright holder who places the Program under this License may add an
explicit geographical distribution limitation excluding those
countries, so that distribution is permitted only in or among
countries not thus excluded.  In such case, this License incorporates
the limitation as if written in the body of this License.

  9. The Free Software Foundation may publish revised and/or new
versions of the General Public License from time to time.  Such new
versions will be similar in spirit to the present version, but may
differ in detail to address new problems or concerns.

Each version is given a distinguishing version number.  If the Program
specifies a version number of this License which applies to it and
"any later version", you have the option of following the terms and
conditions either of that version or of any later version published by
the Free Software Foundation.  If the Program does not specify a
version number of this License, you may choose any version ever
published by the Free Software Foundation.

  10. If you wish to incorporate parts of the Program into other free
programs whose distribution conditions are different, write to the
author to ask for permission.  For software which is copyrighted by
the Free Software Foundation, write to the Free Software Foundation;
we sometimes make exceptions for this.  Our decision will be guided by
the two goals of preserving the free status of all derivatives of our
free software and of promoting the sharing and reuse of software
generally.

                            NO WARRANTY

  11. BECAUSE THE PROGRAM IS LICENSED FREE OF CHARGE, THERE IS NO
WARRANTY FOR THE PROGRAM, TO THE EXTENT PERMITTED BY APPLICABLE LAW.
EXCEPT WHEN OTHERWISE STATED IN WRITING THE COPYRIGHT HOLDERS AND/OR OTHER PARTIES PROVIDE THE PROGRAM "AS IS" WITHOUT WARRANTY OF ANY KIND, EITHER EXPRESSED OR IMPLIED, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE. THE ENTIRE RISK AS TO THE QUALITY AND PERFORMANCE OF THE PROGRAM IS WITH YOU. SHOULD THE PROGRAM PROVE DEFECTIVE, YOU ASSUME THE COST OF ALL NECESSARY SERVICING, REPAIR OR CORRECTION. 12. IN NO EVENT UNLESS REQUIRED BY APPLICABLE LAW OR AGREED TO IN WRITING WILL ANY COPYRIGHT HOLDER, OR ANY OTHER PARTY WHO MAY MODIFY AND/OR REDISTRIBUTE THE PROGRAM AS PERMITTED ABOVE, BE LIABLE TO YOU FOR DAMAGES, INCLUDING ANY GENERAL, SPECIAL, INCIDENTAL OR CONSEQUENTIAL DAMAGES ARISING OUT OF THE USE OR INABILITY TO USE THE PROGRAM (INCLUDING BUT NOT LIMITED TO LOSS OF DATA OR DATA BEING RENDERED INACCURATE OR LOSSES SUSTAINED BY YOU OR THIRD PARTIES OR A FAILURE OF THE PROGRAM TO OPERATE WITH ANY OTHER PROGRAMS), EVEN IF SUCH HOLDER OR OTHER PARTY HAS BEEN ADVISED OF THE POSSIBILITY OF SUCH DAMAGES. END OF TERMS AND CONDITIONS How to Apply These Terms to Your New Programs If you develop a new program, and you want it to be of the greatest possible use to the public, the best way to achieve this is to make it free software which everyone can redistribute and change under these terms. To do so, attach the following notices to the program. It is safest to attach them to the start of each source file to most effectively convey the exclusion of warranty; and each file should have at least the "copyright" line and a pointer to where the full notice is found. Copyright (C) This program is free software; you can redistribute it and/or modify it under the terms of the GNU General Public License as published by the Free Software Foundation; either version 2 of the License, or (at your option) any later version. 
This program is distributed in the hope that it will be useful, but WITHOUT ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU General Public License for more details. You should have received a copy of the GNU General Public License along with this program; if not, write to the Free Software Foundation, Inc., 59 Temple Place, Suite 330, Boston, MA 02111-1307 USA Also add information on how to contact you by electronic and paper mail. If the program is interactive, make it output a short notice like this when it starts in an interactive mode: Gnomovision version 69, Copyright (C) year name of author Gnomovision comes with ABSOLUTELY NO WARRANTY; for details type `show w'. This is free software, and you are welcome to redistribute it under certain conditions; type `show c' for details. The hypothetical commands `show w' and `show c' should show the appropriate parts of the General Public License. Of course, the commands you use may be called something other than `show w' and `show c'; they could even be mouse-clicks or menu items--whatever suits your program. You should also get your employer (if you work as a programmer) or your school, if any, to sign a "copyright disclaimer" for the program, if necessary. Here is a sample; alter the names: Yoyodyne, Inc., hereby disclaims all copyright interest in the program `Gnomovision' (which makes passes at compilers) written by James Hacker. , 1 April 1989 Ty Coon, President of Vice This General Public License does not permit incorporating your program into proprietary programs. If your program is a subroutine library, you may consider it more useful to permit linking proprietary applications with the library. If this is what you want to do, use the GNU Library General Public License instead of this License. cons-2.2.0.orig/COPYRIGHT0100644000175000017500000000134407115542235015146 0ustar jgoerzenjgoerzenCopyright (C) 1996-2000 Free Software Foundation, Inc. 
Cons is free software; you can redistribute it and/or modify it under the terms of the GNU General Public License as published by the Free Software Foundation; either version 2 of the License, or (at your option) any later version. Cons is distributed in the hope that it will be useful, but WITHOUT ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU General Public License for more details. You should have received a copy of the GNU General Public License along with this program; see the file COPYING. If not, write to the Free Software Foundation, Inc., 59 Temple Place - Suite 330, Boston, MA 02111-1307, USA. cons-2.2.0.orig/INSTALL0100644000175000017500000000346707117263002014706 0ustar jgoerzenjgoerzen$Id: INSTALL,v 1.9 2000/06/06 20:58:42 rv Exp $ This cons distribution contains the following files: CHANGES - Change log for cons. COPYING - The GNU General Public License under which cons is released. INSTALL - This file. README - Explanation of software. RELEASE - Release Notes for current distribution. TODO - The list of things to do with cons. cons - The cons program. cons.1 - An nroff-formatted manual page for cons. cons.bat - cons as an MS-DOS batch file. cons.html - HTML-formatted cons documentation. test.*.log - Log file(s) from regression tests of cons with various versions of Perl. This distribution may optionally contain a test/ subdirectory containing the regression test suite for cons. See test/README for details about its contents. To use cons, simply copy the `cons' program to a location where you keep other executable binaries or scripts, like /usr/local/bin. Win-NT users can execute the cons program in 2 ways: 1. Rename it to cons.pl and associate the .pl suffix with perl so that Windows will always execute the program inside perl. 2. Run cons.bat. Make sure that perl is in your path when you do this. There are a lot of ways of getting cons documentation. 
Listed here are a few: 1. Users may generate various types of documentation with the cons program itself, using the pod2* utilities available with the perl distribution. 2. Another way is to use perldoc. perldoc /path/to/cons will generate manpage-like formatted output. 3. Alternatively, users can take the man page cons.1 supplied with this distribution, move it to a location in their manpath (e.g. /usr/local/man/man1), and use the UNIX man command (man cons) to view the cons documentation. 4. Or, users can view the html page cons.html supplied with this distribution in a browser. cons-2.2.0.orig/README0100644000175000017500000000270507162276645014550 0ustar jgoerzenjgoerzen$Id: README,v 1.10 2000/09/21 03:16:53 knight Exp $ Cons is a Perl-based make replacement. It is not compatible with make, but has a number of powerful capabilities not found in other software construction systems, including make. This package contains a development release of Cons and its documentation. Development releases are intended for advanced features and quick delivery of bug fixes. If you are looking for the latest stable release, look for an even-digit minor version number (2.0.x, 2.2.x, etc.). A related package (cons-test) contains a portable test suite for Cons, plus a wrapper script and supporting modules for executing the tests. You do not need this other package to use Cons. You will need Perl 5.002 or better and the Perl Digest::MD5 module, available from CPAN. Cons is known to work on a variety of platforms: it's in production use on versions of FreeBSD, Linux, SunOS, Solaris, HPUX, AIX, IRIX and Windows NT. Documentation is supplied in POD format (thanks to Ulrich Pfeifer). A cons discussion group, cons-discuss@gnu.org, has been created. If you wish to subscribe, send mail to cons-discuss-request@gnu.org with the word "subscribe" in the BODY of the message. Please send all comments, requests, complaints, etc. to the mailing list. 
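The README describes Cons as a Perl-based make replacement whose build descriptions are ordinary Perl scripts. As a quick orientation, a minimal Construct file might look like the following sketch (the compiler settings and file names are hypothetical):

```perl
# Construct -- minimal, hypothetical Cons build description.
# Create a construction environment; keys override the built-in
# platform defaults (CC, CFLAGS, etc.).
$env = new cons(
    CC     => 'gcc',
    CFLAGS => '-O2',
);

# Build the 'hello' program from its sources.  Cons derives the
# object files and scans the C sources for #include dependencies.
Program $env 'hello', 'hello.c', 'util.c';
```

Running `cons hello` in the directory containing such a Construct file would build the program; the supplied cons.html or `perldoc /path/to/cons` gives the authoritative method descriptions.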
Cons is maintained by members of the cons-discuss mailing list. The official cons web site is http://www.dsmit.com/cons An article about Cons appeared in The Perl Journal, issue #9, Spring 1998. cons-2.2.0.orig/RELEASE0100644000175000017500000000302607205142174014653 0ustar jgoerzenjgoerzen$Id: RELEASE,v 1.21.2.1 2000/11/17 05:39:08 knight Exp $ Cons 2.2.0 ********** This release is a stable release of Cons. It contains all the changes from the 2.1 development branch, including many changes developed on that branch after the release of 2.1.2. Visible changes include: -- Cons will now use the Digest::MD5 module in preference to the older, deprecated MD5 module. The MD5 module will be used if Digest::MD5 is not installed. -- When using the -t option, Cons now prints a message about its internal directory-change to the top-level directory. This allows emacs to be aware of the directory to which path names in build output are relative. -- The QuickScan method on Windows NT now uses ';' instead of ':' as the PATH separator. An array ref may also be used. -- The Depends and SourcePath methods can now cope with an array of targets. This release contains many other fixes and less-visible changes. See the CHANGES file for details. General ******* Thanks to Marconi Communications (formerly FORE Systems) management for having transferred the copyright to the Free Software Foundation and for previously having released `cons' under the GNU General Public License (GPL) so that everyone can benefit from it. Thanks to the Free Software Foundation for the infrastructure support for cons. And thanks to the members of the Cons community for the bug reports, patches, discussion, and continued interest. 
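The release notes above mention that Depends now copes with an array of targets and that QuickScan can take an array ref of directories instead of a PATH string. In a Conscript file those features look roughly like this sketch (the target names, scanner regexp, and directories are hypothetical):

```perl
# Conscript -- hypothetical fragment exercising two 2.2.0 changes.
# One Depends call records that both object files depend on the
# generated header (the array-ref target form is new in 2.2.0).
Depends $env ['parser.o', 'lexer.o'], 'grammar.h';

# QuickScan with an array ref of directories instead of a single
# separator-joined PATH string, sidestepping the ':' vs ';'
# separator difference between UNIX and Windows NT.
QuickScan $env sub { /^\s*include\s+(\S+)/ ? ($1) : () },
    'main.inc', ['include', 'src/include'];
```

Older releases require one Depends target per call and a scalar PATH for QuickScan.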
Steven Knight - Development Release Co-ordinator Rajesh Vaidheeswarran - Maintainer cons-2.2.0.orig/TODO0100644000175000017500000000470507166204556014356 0ustar jgoerzenjgoerzen# $Id: TODO,v 1.45 2000/10/02 22:17:18 knight Exp $ Fix signature documentation Johan Holmberg 27 January 2000 (lots of other mail with Subject: "Signature confusion") file names with spaces in them commands with spaces in them additional methods: cons::SharedLibrary method? Greg Spencer 4 November 1999 cons::Yacc method? Wayne Scott 24 January 2000 cons::Java and cons::Jar methods? Damien Neil 19 April 2000 Be able to show why a target was built ("out of date w.r.t. file X") Gary Oberbrunner 18 April 1998 Suffix rules (like make %.foo : %.bar) Separate into plug-in modules for: languages compiler/linker subsystem OS (environment variables, path transformations?) Target aliases Steven Knight 12 April 1999 Case-insensitive file names (for Windows NT, others?) Content-based file: build every time, then compute MD5 checksum on result Brad Garcia 16 December 1999 Wayne Scott 25 January 2000 AddTarget method to add targets dynamically to the list Gary Oberbrunner 4 May 2000 additional discussion: Dean Roehrich 12 May 2000 Gary Oberbrunner 12 May 2000 Suppress local -I/-L flags when the directory doesn't exist Dean Roehrich 5 May 2000 NoCache method Gary Oberbrunner 5 May 2000 Better Carp error messages Gary Oberbrunner 8 May 2000 Give all .consign entries both a time stamp and a signature Theo Petersen 10 May 2000 AfterBuild method (used to dynamically add targets) Gary Oberbrunner 11 May 2000 additional discussion: Gary Oberbrunner 12 May 2000 Add Erich Waelde's example to the web page. Erich Walde 30 May 2000 NT extensions Greg Spencer 5 Jun 2000 Collect all output prints to a common routine and provide an interface for customizing output. Don't "die" immediately on errors (e.g. 
building a file two different ways); finish parsing the file to catch as many errors as possible and then die before building anything. Calling the Objects method with a null list: $sources = qw( ); $env->Objects($sources); generates errors: cons.pl: error in file "libs\random\conscript" (don't know how to construct libs\random\.obj from libs\random\.) cons.pl: script errors encountered: construction aborted Check other methods for the same problem. Bug reported by Allan Stokes. Redirect STDOUT and/or STDERR to log file on NT. Zachary Deretsky 2 October 2000 Integrate the FreeBSD ports packaging into the normal build. Redesign web site. Archive mailing list (use hypermail). cons-2.2.0.orig/cons0100555000175000017500000055702607206610241014550 0ustar jgoerzenjgoerzen#!/usr/bin/env perl # NOTE: Cons intentionally does not use the "perl -w" option or # "use strict." Because Cons "configuration files" are actually # Perl scripts, enabling those restrictions here would force them # on every user's config files, wanted or not. Would users write # "better" Construct and Conscript files if we forced "use strict" # on them? Probably. But we want people to use Cons to get work # done, not force everyone to become a Perl guru to use it, so we # don't insist. # # That said, Cons' code is both "perl -w" and "use strict" clean. # Regression tests keep the code honest by checking for warnings # and "use strict" failures. # $Id: cons.pl,v 1.129 2000/11/16 12:22:37 knight Exp $ use vars qw( $ver_num $ver_rev $version ); $ver_num = "2.2"; $ver_rev = ".0"; $version = sprintf "This is Cons %s%s " . '($Id: cons.pl,v 1.129 2000/11/16 12:22:37 knight Exp $)'. "\n", $ver_num, $ver_rev; # Cons: A Software Construction Tool. # Copyright (c) 1996-2000 Free Software Foundation, Inc. 
# # This program is free software; you can redistribute it and/or modify # it under the terms of the GNU General Public License as published by # the Free Software Foundation; either version 2 of the License, or # (at your option) any later version. # # This program is distributed in the hope that it will be useful, # but WITHOUT ANY WARRANTY; without even the implied warranty of # MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the # GNU General Public License for more details. # # You should have received a copy of the GNU General Public License # along with this program; see the file COPYING. If not, write to # the Free Software Foundation, Inc., 59 Temple Place - Suite 330, # Boston, MA 02111-1307, USA. require 5.002; # See the NOTE above about why Cons doesn't "use strict". use integer; use Cwd; use File::Copy; use vars qw( $_WIN32 $_a $_exe $_o $_so ); #------------------------------------------------------------------ # Determine if running on win32 platform - either Windows NT or 95 #------------------------------------------------------------------ use vars qw( $PATH_SEPARATOR $iswin32 $_WIN32 $usage $indent @targets ); BEGIN { use Config; # if the version is 5.003 or later, we can check $^O if ($] < 5.003) { eval("require Win32"); $_WIN32 = (!$@); } else { $_WIN32 = ($^O eq "MSWin32") ? 1 : 0; } # Fetch the PATH separator from Config; # provide our old defaults in case it's not set. $PATH_SEPARATOR = $Config{path_sep}; $PATH_SEPARATOR = $_WIN32 ? ';' : ':' if ! defined $PATH_SEPARATOR; # Fetch file suffixes from Config, # accommodating differences in the Config variables # used by different Perl versions. $_exe = $Config{_exe}; $_exe = $Config{exe_ext} if ! defined $_exe; $_exe = $_WIN32 ? '.exe' : '' if ! defined $_exe; $_o = $Config{_o}; $_o = $Config{obj_ext} if ! defined $_o; $_o = $_WIN32 ? '.obj' : '.o' if ! defined $_o; $_a = $Config{_a}; $_a = $Config{lib_ext} if ! defined $_a; $_a = $_WIN32 ? '.lib' : '.a' if ! 
defined $_a; $_so = ".$Config{so}"; $_so = $_WIN32 ? '.dll' : '.so' if ! defined $_so; } # Flush stdout each time. $| = 1; # Seed random number generator. srand(time . $$); # this works better than time ^ $$ in perlfunc manpage. $usage = q( Usage: cons <arguments> -- <construct-args> Arguments can be any of the following, in any order: <targets> Build the specified targets. If <target> is a directory recursively build everything within that directory. +<pattern> Limit the cons scripts considered to just those that match <pattern>. Multiple + arguments are accepted. <name>=<value> Sets <name> to value <value> in the ARG hash passed to the top-level Construct file. -cc Show command that would have been executed, when retrieving from cache. No indication that the file has been retrieved is given; this is useful for generating build logs that can be compared with real build logs. -cd Disable all caching. Do not retrieve from cache nor flush to cache. -cr Build dependencies in random order. This is useful when building multiple similar trees with caching enabled. -cs Synchronize existing build targets that are found to be up-to-date with cache. This is useful if caching has been disabled with -cc or just recently enabled with UseCache. -d Enable dependency debugging. -f <file> Use the specified file instead of "Construct" (but first change to containing directory of <file>). -h Show a help message local to the current build if one such is defined, and exit. -k Keep going as far as possible after errors. -o <file> Read override file <file>. -p Show construction products in specified trees. -pa Show construction products and associated actions. -pw Show products and where they are defined. -q Be quiet about Installing and Removing targets. -r Remove construction products associated with <targets>. -R <repository> Search for files in <repository>. Multiple -R directories are searched in the order specified. -t Traverse up the directory hierarchy looking for a Construct file, if none exists in the current directory. (Targets will be modified to be relative to the Construct file.) -v Show cons version and continue processing. 
-V Show cons version and exit. -wf <file> Write all filenames considered into <file>. -x Show this message and exit. Please report any suggestions through the cons-discuss@gnu.org mailing list. To subscribe, send mail to cons-discuss-request@gnu.org with body 'subscribe'. If you find a bug, please report it through the bug-cons@gnu.org mailing list. Information about CONS can be obtained from the official cons web site http://www.dsmit.com/cons/ or its mirrors (listed there). The cons maintainers can be contacted by email at cons-maintainers@gnu.org User documentation of cons is contained in cons and can be obtained by doing 'perldoc /path/to/cons'. ); # Simplify program name, if it is a path. { my ($vol, $dir, $file) = File::Spec->splitpath(File::Spec->canonpath($0)); $0 = $file; } # Default parameters. $param::topfile = 'Construct'; # Top-level construction file. $param::install = 1; # Show installations $param::build = 1; # Build targets ### $param::show = 1; # Show building of targets. $param::sigpro = 'md5'; # Signature protocol. $param::depfile = ''; # Write all deps out to this file $param::salt = ''; # Salt derived file signatures with this. $param::rep_sig_times_ok = 1; # Repository .consign times are in sync # w/files. $param::conscript_chdir = 0; # Change dir to Conscript directory $param::quiet = 0; # should we show the command being executed. # $indent = ''; # Display a command while executing or otherwise. This # should be called by command builder action methods. sub showcom { print($indent . $_[0] . "\n"); } # Default environment. # This contains only the completely platform-independent information # we can figure out. Platform-specific information (UNIX, Win32) # gets added below. 
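The default-environment tables that follow define build commands as templates containing %-substitutions: %CC and friends pull values from the construction environment, while %< and %> stand for the sources and the target. A rough sketch of that expansion, assuming a plain hash environment (illustrative only, not Cons's actual substitution engine, which also handles %_IFLAGS, %[ %], and %% escaping):

```perl
# Illustrative sketch only: the real substitution code lives
# elsewhere in this file and covers many more cases.
sub expand_sketch {
    my ($template, $env, $sources, $target) = @_;
    $template =~ s/%</$sources/g;         # %<   -> the source files
    $template =~ s/%>/$target/g;          # %>   -> the target file
    $template =~ s/%(\w+)/$env->{$1}/g;   # %VAR -> construction variable
    return $template;
}

my %env = ('CC' => 'cc', 'CFLAGS' => '-O');
# Expands to something like: cc -O -c foo.c -o foo.o
print expand_sketch('%CC %CFLAGS -c %< -o %>', \%env, 'foo.c', 'foo.o'), "\n";
```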
@param::defaults = ( 'SUFEXE' => $_exe, # '' on UNIX systems 'SUFLIB' => $_a, # '.a' on UNIX systems 'SUFLIBS' => "$_so:$_a", # '.so:.a' on UNIX 'SUFOBJ' => $_o, # '.o' on UNIX systems 'SUFMAP' => { '.c' => 'build::command::cc', '.s' => 'build::command::cc', '.S' => 'build::command::cc', '.C' => 'build::command::cxx', '.cc' => 'build::command::cxx', '.cxx'=> 'build::command::cxx', '.cpp'=> 'build::command::cxx', '.c++'=> 'build::command::cxx', '.C++'=> 'build::command::cxx', }, ); if ($_WIN32) { # Defaults for Win32. # Defined for VC++ 6.0 by Greg Spencer . # Your mileage may vary. my @win = ( 'CC' => 'cl', 'CFLAGS' => '/nologo', 'CCCOM' => '%CC %CFLAGS %_IFLAGS /c %< /Fo%>', 'CXX' => '%CC', 'CXXFLAGS' => '%CFLAGS', 'CXXCOM' => '%CXX %CXXFLAGS %_IFLAGS /c %< /Fo%>', 'INCDIRPREFIX' => '/I', 'LINK' => 'link', 'LINKCOM' => '%LINK %LDFLAGS /out:%> %< %_LDIRS %LIBS', 'LINKMODULECOM' => '%LD /r /o %> %<', 'LIBDIRPREFIX' => '/LIBPATH:', 'AR' => 'lib', 'ARFLAGS' => '/nologo ', 'ARCOM' => "%AR %ARFLAGS /out:%> %<", 'RANLIB' => '', 'LD' => 'link', 'LDFLAGS' => '/nologo ', 'PREFLIB' => '', ); push(@param::defaults, @win); } else { # Defaults for a typical (?) UNIX platform. # Your mileage may vary. my @unix = ( 'CC' => 'cc', 'CFLAGS' => '', 'CCCOM' => '%CC %CFLAGS %_IFLAGS -c %< -o %>', 'CXX' => '%CC', 'CXXFLAGS' => '%CFLAGS', 'CXXCOM' => '%CXX %CXXFLAGS %_IFLAGS -c %< -o %>', 'INCDIRPREFIX' => '-I', 'LINK' => '%CXX', 'LINKCOM' => '%LINK %LDFLAGS -o %> %< %_LDIRS %LIBS', 'LINKMODULECOM' => '%LD -r -o %> %<', 'LIBDIRPREFIX' => '-L', 'AR' => 'ar', 'ARFLAGS' => 'r', # rs? 'ARCOM' => "%AR %ARFLAGS %> %<\n%RANLIB %>", 'RANLIB' => 'ranlib', 'AS' => 'as', 'ASFLAGS' => '', 'ASCOM' => '%AS %ASFLAGS %< -o %>', 'LD' => 'ld', 'LDFLAGS' => '', 'PREFLIB' => 'lib', 'ENV' => { 'PATH' => '/bin:/usr/bin' }, ); push(@param::defaults, @unix); } # Handle command line arguments. while (@ARGV) { $_ = shift @ARGV; last if /^--$/; # Argument passing to Construct. 
&option, next if s/^-//; push (@param::include, $_), next if s/^\+//; &equate, next if /=/; push (@targets, $_), next; } sub option { my %opt = ( 'cc' => sub { $param::cachecom = 1; }, 'cd' => sub { $param::cachedisable = 1; }, 'cr' => sub { $param::random = 1; }, 'cs' => sub { $param::cachesync = 1; }, 'd' => sub { $param::depends = 1; }, 'h' => sub { $param::localhelp = 1; }, 'k' => sub { $param::kflag = 1; }, 'p' => sub { $param::pflag = 1; $param::build = 0; }, 'pa' => sub { $param::pflag = 1; $param::aflag = 1; $indent = "... "; $param::build = 0; }, 'pw' => sub { $param::pflag = 1; $param::wflag = 1; $param::build = 0; }, 'q' => sub { $param::quiet = 1; }, 'r' => sub { $param::rflag = 1; $param::build = 0; }, 't' => sub { $param::traverse = 1; }, 'v' => sub { print($version); }, 'V' => sub { print($version), exit(0); }, 'x' => sub { print($usage), exit 0; }, ); my %opt_arg = ( 'f' => sub { $param::topfile = $_[0]; }, 'o' => sub { $param::overfile = $_[0]; }, 'R' => sub { script::Repository($_[0]); }, 'wf' => sub { $param::depfile = $_[0]; }, ); if (defined $opt{$_}) { &{$opt{$_}}(); return; } $_ =~ m/(.)(.*)/; if (defined $opt_arg{$1}) { if (! $2) { $_ = shift @ARGV; die("$0: -$1 option requires an argument.\n") if ! $_; } &{$opt_arg{$1}}($2 || $_); return; } $_ =~ m/(..)(.*)/; if (defined $opt_arg{$1}) { if (! $2) { $_ = shift @ARGV; die("$0: -$1 option requires an argument.\n") if ! $_; } &{$opt_arg{$1}}($2 || $_); return; } if ($_) { die qq($0: unrecognized option "-$_". Use -x for a usage message.\n); } } # Process an equate argument (var=val). sub equate { my($var, $val) = /([^=]*)=(.*)/; $script::ARG{$var} = $val; } # Define file signature protocol. 'sig'->select($param::sigpro); # Cleanup after an interrupt. $SIG{INT} = $SIG{QUIT} = $SIG{TERM} = sub { $SIG{PIPE} = $SIG{INT} = $SIG{QUIT} = $SIG{TERM} = 'IGNORE'; $SIG{HUP} = $SIG{INT} if ! 
$main::_WIN32; warn("\n$0: killed\n"); # Call this first, to make sure that this processing # occurs even if a child process does not die (and we # hang on the wait). sig::hash::END(); wait(); exit(1); }; $SIG{HUP} = $SIG{INT} if ! $main::_WIN32; # Cleanup after a broken pipe (someone piped our stdout?) $SIG{PIPE} = sub { $SIG{PIPE} = $SIG{HUP} = $SIG{INT} = $SIG{QUIT} = $SIG{TERM} = 'IGNORE'; warn("\n$0: broken pipe\n"); sig::hash::END(); wait(); exit(1); }; if ($param::depfile) { open (main::DEPFILE, ">".$param::depfile) || die ("$0: couldn't open $param::depfile ($!)\n"); } # If the supplied top-level Conscript file is not in the # current directory, then change to that directory. { my ($vol, $dir, $file) = File::Spec->splitpath(File::Spec->canonpath($param::topfile)); if ($vol || $dir) { my($cd) = File::Spec->catpath($vol, $dir, undef); chdir($cd) || die("$0: couldn't change to directory $cd ($!)\n"); $param::topfile = $file; } } # Walk up the directory hierarchy looking for a Conscript file (if -t set). my($target_top); my(@targetdir) = (); if ($param::traverse && ! -f $param::topfile) { my($vol, $dirs, $file) = File::Spec->splitpath(cwd()); my(@dirs) = (File::Spec->splitdir($dirs), $file); while (! -f File::Spec->catpath($vol, File::Spec->catdir(@dirs), $param::topfile)) { die("$0: unable to find $param::topfile.\n") if ! @dirs; unshift(@targetdir, pop(@dirs)); } my($cwd) = File::Spec->catpath($vol, File::Spec->catdir(@dirs), ''); print "$0: Entering directory `$cwd'\n"; chdir($cwd); @targets = map {File::Spec->catdir(@targetdir, $_)} @targets; } # Set up $dir::top and $dir::cwd, now that we are in the right directory. dir::init(); # if (@targetdir) { $target_top = $dir::top->lookupdir(File::Spec->catdir(@targetdir)); } # Now handle override file. package override; if ($param::overfile) { my($ov) = $param::overfile; die qq($0: can\'t read override file "$ov" ($!)\n) if ! 
-f $ov; #' do $ov; if ($@) { chop($@); die qq($0: errors in override file "$ov" ($@)\n); } } # Provide this to user to setup override patterns. sub Override { my($re, @env) = @_; return if $param::overrides{$re}; # if identical, first will win. $param::overrides = 1; $param::overrides{$re} = \@env; push(@param::overrides, $re); } package main; use vars qw( %priority $errors ); # Check script inclusion regexps my $re; for $re (@param::include) { if (! defined eval {"" =~ /$re/}) { my($err) = $@; $err =~ s/in regexp at .*$//; die("$0: error in regexp $err"); } } # Read the top-level construct file and its included scripts. doscripts($param::topfile); # Status priorities. This lets us aggregate status for directories # and print an appropriate message (at the top-level). %priority = ('none' => 1, 'handled' => 2, 'built' => 3, 'unknown' => 4, 'errors' => 5); # If no targets were specified, supply default targets (if any). @targets = @param::default_targets if ! @targets; $errors = 0; # Build the supplied target patterns. my $tgt; for $tgt (map($dir::top->lookup($_), @targets)) { if ($target_top && ! $tgt->is_under($target_top)) { # A -t option was used, and this target is not underneath # the directory where we were invoked via -t. # If the target is a directory and the -t directory # is underneath it, then build the -t directory. if (ref $tgt ne "dir" || ! $target_top->is_under($tgt)) { next; } $tgt = $target_top; } buildtoptarget($tgt); } exit 0 + ($errors != 0); sub buildtoptarget { my($tgt) = @_; return if ! $tgt; my($status) = buildtarget($tgt); if ($status ne 'built') { my($path) = $tgt->path; if ($status eq "errors") { print qq($0: "$path" not remade because of errors.\n); $errors++; } elsif ($status eq "handled") { print qq($0: "$path" is up-to-date.\n); } elsif ($status eq "unknown") { # cons error already reported. $errors++; } elsif ($status eq "none") { # search for targets that may be linked to the given path. 
my @linked = dir::linked_targets($tgt) if $target_top; if (@linked) { my @names = map($_->path, @linked); print "Linked targets: @names\n" unless ($param::quiet); map(buildtoptarget($_), @linked); } else { print qq($0: nothing to be built in "$path".\n) if $param::build; } } else { print qq($0: don\'t know how to construct "$path".\n); #' $errors++; } } } # Build the supplied target directory or files. Return aggregated status. sub buildtarget { my($tgt) = @_; if (ref($tgt) eq "dir") { my($result) = "none"; my($priority) = $priority{$result}; if (exists $tgt->{member}) { my($members) = $tgt->{member}; my $entry; for $entry (sort keys %$members) { next if $entry eq $dir::CURDIR || $entry eq $dir::UPDIR; my($tgt) = $members->{$entry}; next if ref($tgt) ne "dir" && !exists($tgt->{builder}); my($stat) = buildtarget($members->{$entry}); my($pri) = $priority{$stat}; if ($pri > $priority) { $priority = $pri; $result = $stat; } } } return $result; } if ($param::depends) { my($path) = $tgt->path; if ($tgt->{builder}) { my(@dep) = (@{$tgt->{dep}}, @{$tgt->{sources}}); my($dep) = join(' ',map($_->path, @dep)); print("Target $path: $dep\n"); } else { print("Target $path: not a derived file\n"); } } if ($param::build) { return build $tgt; } elsif ($param::pflag || $param::wflag || $param::aflag) { if ($tgt->{builder}) { if ($param::wflag) { print qq(${\$tgt->path}: $tgt->{script}\n); } elsif ($param::pflag) { print qq(${\$tgt->path}:\n) if $param::aflag; print qq(${\$tgt->path}\n) if !$param::aflag; } if ($param::aflag) { $tgt->{builder}->action($tgt); } } } elsif ($param::rflag && $tgt->{builder}) { my($path) = $tgt->path; if (-f $path) { if (unlink($path)) { print("Removed $path\n") unless ($param::quiet); } else { warn("$0: couldn't remove $path\n"); } } } return "none"; } package NameSpace; # Return a hash that maps the name of symbols in a namespace to an # array of refs for all types for which the name has a defined value. 
# A list of symbols may be specified; default is all symbols in the # name space. sub save { my $package = shift; my(%namerefs, $var, $type); no strict 'refs'; @_ = keys %{$package."::"} if ! @_; foreach $var (@_) { $namerefs{$var} = []; my $fqvar = $package."::".$var; # If the scalar for this variable name doesn't already # exist, *foo{SCALAR} will autovivify the reference # instead of returning undef, so unlike the other types, # we have to dereference to find out if it exists. push(@{$namerefs{$var}}, *{$fqvar}{SCALAR}) if defined ${*{$fqvar}{SCALAR}}; foreach $type (qw(ARRAY HASH CODE IO)) { push(@{$namerefs{$var}}, *{$fqvar}{$type}) if defined *{$fqvar}{$type}; } } return \%namerefs; } # Remove the specified symbols from the namespace. # Default is to remove all. sub remove { my $package = shift; my(%namerefs, $var); no strict 'refs'; @_ = keys %{$package."::"} if ! @_; foreach $var (@_) { delete ${$package."::"}{$var}; } } # Restore values to symbols specified in a hash as returned # by NameSpace::save. sub restore { my($package, $namerefs) = @_; my($var, $ref); no strict 'refs'; foreach $var (keys %$namerefs) { my $fqvar = $package."::".$var; foreach $ref (@{$namerefs->{$var}}) { *{$fqvar} = $ref; } } } # Support for "building" scripts, importing and exporting variables. # With the exception of the top-level routine here (invoked from the # main package by cons), these are all invoked by user scripts. package script; use vars qw( $ARG $caller_dir_path %special_var ); BEGIN { # We can't Export or Import the following variables because Perl always # treats them as part of the "main::" package (see perlvar(1)). %special_var = map {$_ => 1} qw(ENV INC ARGV ARGVOUT SIG STDIN STDOUT STDERR); } # This is called from main to interpret/run the top-level Construct # file, passed in as the single argument. sub main::doscripts { my($script) = @_; Build($script); # Now set up the includes/excludes (after the Construct file is read). 
$param::include = join('|', @param::include); # Save the original variable names from the script package. # These will stay intact, but any other "script::" variables # defined in a Conscript file will get saved, deleted, # and (when necessary) restored. my(%orig_script_var) = map {$_ => 1} keys %script::; $caller_dir_path = undef; my $cwd = Cwd::cwd(); my(@scripts) = pop(@priv::scripts); while ($priv::self = shift(@scripts)) { my($path) = $priv::self->{script}->rsrcpath; if (-f $path) { $dir::cwd = $priv::self->{script}->{dir}; # Handle chdir to the Conscript file directory, if necessary. my ($vol, $dir, $file); if ($param::conscript_chdir) { ($vol, $dir, $file) = File::Spec->splitpath(File::Spec->canonpath($path)); if ($vol ne '' || $dir ne '') { $caller_dir_path = File::Spec->catpath($vol, $dir, undef); chdir($caller_dir_path) || die "Could not chdir to $caller_dir_path: $!\n"; } } else { $file = $path; } # Actually process the Conscript file. do $file; # Save any variables defined by the Conscript file # so we can restore them later, if needed; # then delete them from the script:: namespace. my(@del) = grep(! $orig_script_var{$_}, keys %script::); if (@del) { $priv::self->{script}->{pkgvars} = NameSpace::save('script', @del); NameSpace::remove('script', @del); } if ($caller_dir_path) { chdir($cwd); $caller_dir_path = undef; } if ($@) { chomp($@); my $err = ($@ =~ /\n/ms) ? ":\n$@" : " ($@)"; print qq($0: error in file "$path"$err\n); $run::errors++; } else { # Only process subsidiary scripts if no errors in parent. unshift(@scripts, @priv::scripts); } undef @priv::scripts; } else { my $where = ''; my $cref = $priv::self->{script}->creator; if (defined $cref) { my($_foo, $script, $line, $sub) = @$cref; $where = " ($sub in $script, line $line)"; } warn qq(Ignoring missing script "$path"$where); } } die("$0: script errors encountered: construction aborted\n") if $run::errors; } # Return caller info about the method being invoked. 
# This is everything from the Perl "caller" builtin function, # including which Construct/Conscript file, line number, # subroutine name, etc. sub caller_info { my($lev) = 1; my(@frame); do { @frame = caller ++$lev; if (defined($frame[3]) && $frame[3] eq '(eval)') { @frame = caller --$lev; if ($caller_dir_path) { $frame[1] = File::Spec->catfile($caller_dir_path, $frame[1]); } return @frame; } } while ($frame[3]); return; } # Link a directory to another. This simply means set up the *source* # for the directory to be the other directory. sub Link { dir::link(@_); } # Add directories to the repository search path for files. # We're careful about stripping our current directory from # the list, which we do by comparing the `pwd` results from # the current directory and the specified directory. This # is cumbersome, but assures that the paths will be reported # the same regardless of symbolic links. sub Repository { my($my_dir) = Cwd::cwd(); my $dir; foreach $dir (@_) { my($d) = `$^X -e "use Cwd; chdir('$dir') && print cwd"`; next if ! $d || ! -d $d || $d eq $my_dir; # We know we can get away with passing undef to lookupdir # as the directory because $d is an absolute path. push(@param::rpath, dir::lookupdir(undef, $d)); } } # Return the list of Repository directories specified. sub Repository_List { map($_->path, @param::rpath); } # Specify whether the .consign signature times in repository files are, # in fact, consistent with the times on the files themselves. sub Repository_Sig_Times_OK { $param::rep_sig_times_ok = shift; } # Specify whether we should chdir to the containing directories # of Conscript files. sub Conscript_chdir { $param::conscript_chdir = shift; } # Specify files/targets that must be present and built locally, # even if they exist already-built in a Repository. sub Local { my(@files) = map($dir::cwd->lookupfile($_), @_); map($_->local(1), @files); } # Export variables to any scripts invoked from this one.
sub Export { my(@illegal) = grep($special_var{$_}, @_); if (@illegal) { die qq($0: cannot Export special Perl variables: @illegal\n); } @{$priv::self->{exports}} = grep(! defined $special_var{$_}, @_); } # Import variables from the export list of the caller # of the current script. sub Import { my(@illegal) = grep($special_var{$_}, @_); if (@illegal) { die qq($0: cannot Import special Perl variables: @illegal\n); } my($parent) = $priv::self->{parent}; my($imports) = $priv::self->{imports}; @{$priv::self->{exports}} = keys %$imports; my($var); foreach $var (grep(! defined $special_var{$_}, @_)) { if (!exists $imports->{$var}) { my($path) = $parent->{script}->path; die qq($0: variable "$var" not exported by file "$path"\n); } if (!defined $imports->{$var}) { my $path = $parent->{script}->path; my $err = "$0: variable \"$var\" exported but not " . "defined by file \"$path\"\n"; die $err; } ${"script::$var"} = $imports->{$var}; } } # Build an inferior script. That is, arrange to read and execute # the specified script, passing to it any exported variables from # the current script. sub Build { my(@files) = map($dir::cwd->lookupfile($_), @_); my(%imports) = map {$_ => ${"script::$_"}} @{$priv::self->{exports}}; my $file; for $file (@files) { next if $param::include && $file->path !~ /$param::include/o; my($self) = {'script' => $file, 'parent' => $priv::self, 'imports' => \%imports}; bless $self; # may want to bless into class of parent in future push(@priv::scripts, $self); } } # Set up regexps for dependencies to ignore. Should only be called once. sub Ignore { die("Ignore called more than once\n") if $param::ignore; $param::ignore = join("|", map("($_)", @_)) if @_; } # Specification of default targets. sub Default { push(@param::default_targets, map($dir::cwd->lookup($_)->path, @_)); } # Local Help. Should only be called once. sub Help { if ($param::localhelp) { print "@_\n"; exit 2; } } # Return the build name(s) of a file or file list. sub FilePath { wantarray ?
map($dir::cwd->lookupfile($_)->path, @_) : $dir::cwd->lookupfile($_[0])->path; } # Return the build name(s) of a directory or directory list. sub DirPath { wantarray ? map($dir::cwd->lookupdir($_)->path, @_) : $dir::cwd->lookupdir($_[0])->path; } # Split the search path provided into components. Look each up # relative to the current directory. # The usual path separator problems abound; for now we'll use : sub SplitPath { my($dirs) = @_; if (ref($dirs) ne "ARRAY") { $dirs = [ split(/$main::PATH_SEPARATOR/o, $dirs) ]; } map { DirPath($_) } @$dirs; } # Return true if the supplied path is available as a source file # or is buildable (by rules seen to-date in the build). sub ConsPath { my($path) = @_; my($file) = $dir::cwd->lookup($path); return $file->accessible; } # Return the source path of the supplied path. sub SourcePath { wantarray ? map($dir::cwd->lookupfile($_)->rsrcpath, @_) : $dir::cwd->lookupfile($_[0])->rsrcpath; } # Search up the tree for the specified cache directory, starting with # the current directory. Returns undef if not found, 1 otherwise. # If the directory is found, then caching is enabled. The directory # must be readable and writable. If the argument "mixtargets" is provided, # then targets may be mixed in the cache (two targets may share the same # cache file--not recommended). sub UseCache($@) { my($dir, @args) = @_; # NOTE: it's important to process arguments here regardless of whether # the cache is disabled temporarily, since the mixtargets option affects # the salt for derived signatures. for (@args) { if ($_ eq "mixtargets") { # When mixtargets is enabled, we salt the target signatures. # This is done purely to avoid a scenario whereby if # mixtargets is turned on or off after doing builds, and # if cache synchronization with -cs is used, then # cache files may be shared in the cache itself (linked # under more than one name in the cache). 
# This is not bad, # per se, but simply would mean that a cache cleaning algorithm # that looked for a link count of 1 would never find those # particular files; they would always appear to be in use. $param::salt = 'M' . $param::salt; $param::mixtargets = 1; } else { die qq($0: UseCache unrecognized option "$_"\n); } } if ($param::cachedisable) { warn("Note: caching disabled by -cd flag\n"); return 1; } my($depth) = 15; while ($depth-- && ! -d $dir) { $dir = File::Spec->catdir($dir::UPDIR, $dir); } if (-d $dir) { $param::cache = $dir; return 1; } return undef; } # Salt the signature generator. The salt (a number or string) is added # into the signature of each derived file. Changing the salt will # force recompilation of all derived files. sub Salt($) { # We append the value, so that UseCache and Salt may be used # in either order without changing the signature calculation. $param::salt .= $_[0]; } # Mark files (or directories) to not be removed before building. sub Precious { map($_->{precious} = 1, map($dir::cwd->lookup($_), @_)); } # These methods are callable from Conscript files, via a cons # object. Procs beginning with _ are intended for internal use. package cons; use vars qw( %envcache ); # This is passed the name of the base environment to instantiate. # Overrides to the base environment may also be passed in # as key/value pairs. sub new { my($package) = shift; my ($env) = {@param::defaults, @_}; @{$env->{_envcopy}} = %$env; # Note: we never change PATH $env->{_cwd} = $dir::cwd; # Save directory of environment for bless $env, $package; # any deferred name interpretation. } # Clone an environment. # Note that the working directory will be the initial directory # of the original environment. sub clone { my($env) = shift; my $clone = {@{$env->{_envcopy}}, @_}; @{$clone->{_envcopy}} = %$clone; # Note: we never change PATH $clone->{_cwd} = $env->{_cwd}; bless $clone, ref $env; } # Create a flattened hash representing the environment.
# It also contains a copy of the PATH, so that the path # may be modified if it is converted back to a hash. sub copy { my($env) = shift; (@{$env->{_envcopy}}, 'ENV' => {%{$env->{ENV}}}, @_) } # Resolve which environment to actually use for a given # target. This is just used for simple overrides. sub _resolve { return $_[0] if !$param::overrides; my($env, $tgt) = @_; my($path) = $tgt->path; my $re; for $re (@param::overrides) { next if $path !~ /$re/; # Found one. Return a combination of the original environment # and the override. my($ovr) = $param::overrides{$re}; return $envcache{$env,$re} if $envcache{$env,$re}; my($newenv) = {@{$env->{_envcopy}}, @$ovr}; @{$newenv->{_envcopy}} = %$env; $newenv->{_cwd} = $env->{_cwd}; return $envcache{$env,$re} = bless $newenv, ref $env; } return $env; } # Substitute construction environment variables into a string. # Internal function/method. sub _subst { my($env, $str) = @_; if (! defined $str) { return undef; } elsif (ref($str) eq "ARRAY") { return [ map($env->_subst($_), @$str) ]; } else { # % expansion. %% gets converted to % later, so expand any # %keyword construction that doesn't have a % in front of it, # modulo multiple %% pairs in between. # In Perl 5.005 and later, we could actually do this in one regex # using a conditional expression as follows, # while ($str =~ s/($pre)\%(\{)?([_a-zA-Z]\w*)(?(2)\})/"$1".$env->{$3}/ge) {} # The following two-step approach is backwards-compatible # to (at least) Perl5.003. 
my $pre = '^|[^\%](?:\%\%)*'; while (($str =~ s/($pre)\%([_a-zA-Z]\w*)/$1.($env->{$2}||'')/ge) || ($str =~ s/($pre)\%\{([_a-zA-Z]\w*)\}/$1.($env->{$2}||'')/ge)) {} return $str; } } sub Install { my($env) = shift; my($tgtdir) = $dir::cwd->lookupdir($env->_subst(shift)); my $file; for $file (map($dir::cwd->lookupfile($env->_subst($_)), @_)) { my($tgt) = $tgtdir->lookupfile($file->{entry}); $tgt->bind(find build::install, $file); } } sub InstallAs { my $env = shift; my $tgt = shift; my $src = shift; my @sources = (); my @targets = (); if (ref $tgt) { die "InstallAs: Source is a file and target is a list!\n" if (!ref($src)); @sources = @$src; @targets = @$tgt; } elsif (ref $src) { die "InstallAs: Target is a file and source is a list!\n"; } else { push @sources, $src; push @targets, $tgt; } if ($#sources != $#targets) { my $tn = $#targets+1; my $sn = $#sources+1; die "InstallAs: Source file list ($sn) and target file list ($tn) " . "are inconsistent in length!\n"; } else { foreach (0..$#sources) { my $tfile = $dir::cwd->lookupfile($env->_subst($targets[$_])); my $sfile = $dir::cwd->lookupfile($env->_subst($sources[$_])); $tfile->bind(find build::install, $sfile); } } } # Installation in a local build directory, # copying from the repository if it's already built there. # Functionally equivalent to: # Install $env $dir, $file; # Local "$dir/$file"; sub Install_Local { my($env) = shift; my($tgtdir) = $dir::cwd->lookupdir($env->_subst(shift)); my $file; for $file (map($dir::cwd->lookupfile($env->_subst($_)), @_)) { my($tgt) = $tgtdir->lookupfile($file->{entry}); $tgt->bind(find build::install, $file); $tgt->local(1); } } sub Objects { my($env) = shift; map($dir::cwd->relpath($_), _Objects($env, map($dir::cwd->lookupfile($env->_subst($_)), @_))) } # Called with multiple source file references (or object files). # Returns corresponding object files references. 
sub _Objects { my($env) = shift; my($suffix) = $env->{SUFOBJ}; map(_Object($env, $_, $_->{dir}->lookupfile($_->base_suf($suffix))), @_); } # Called with an object and source reference. If no object reference # is supplied, then the object file is determined implicitly from the # source file's extension. Sets up the appropriate rules for creating # the object from the source. Returns the object reference. sub _Object { my($env, $src, $obj) = @_; return $obj if $src eq $obj; # don't need to build self from self. my($objenv) = $env->_resolve($obj); my($suffix) = $src->suffix; my($builder) = $env->{SUFMAP}{$suffix}; if ($builder) { $obj->bind((find $builder($objenv)), $src); } else { die("don't know how to construct ${\$obj->path} from " . "${\$src->path}.\n"); } $obj } sub Program { my($env) = shift; my($tgt) = $dir::cwd->lookupfile(file::addsuffix($env->_subst(shift), $env->{SUFEXE})); my($progenv) = $env->_resolve($tgt); $tgt->bind(find build::command::link($progenv, $progenv->{LINKCOM}), $env->_Objects(map($dir::cwd->lookupfile($env->_subst($_)), @_))); } sub Module { my($env) = shift; my($tgt) = $dir::cwd->lookupfile($env->_subst(shift)); my($modenv) = $env->_resolve($tgt); my($com) = pop(@_); $tgt->bind(find build::command::link($modenv, $com), $env->_Objects(map($dir::cwd->lookupfile($env->_subst($_)), @_))); } sub LinkedModule { my($env) = shift; my($tgt) = $dir::cwd->lookupfile($env->_subst(shift)); my($progenv) = $env->_resolve($tgt); $tgt->bind(find build::command::linkedmodule ($progenv, $progenv->{LINKMODULECOM}), $env->_Objects(map($dir::cwd->lookupfile($env->_subst($_)), @_))); } sub Library { my($env) = shift; my($lib) = $dir::cwd->lookupfile(file::addsuffix($env->_subst(shift), $env->{SUFLIB})); my($libenv) = $env->_resolve($lib); $lib->bind(find build::command::library($libenv), $env->_Objects(map($dir::cwd->lookupfile($env->_subst($_)), @_))); } # Simple derivation: you provide target, source(s), command. # Special variables substitute into the rule. 
# Target may be a reference, in which case it is taken # to be a multiple target (all targets built at once). sub Command { my($env) = shift; my($tgt) = $env->_subst(shift); my($com) = pop(@_); my(@sources) = map($dir::cwd->lookupfile($env->_subst($_)), @_); if (ref($tgt)) { # A multi-target command. my(@tgts) = map($dir::cwd->lookupfile($_), @$tgt); die("empty target list in multi-target command\n") if !@tgts; $env = $env->_resolve($tgts[0]); my $builder = find build::command::user($env, $com, 'script'); my($multi) = build::multiple->new($builder, \@tgts); for $tgt (@tgts) { $tgt->bind($multi, @sources); } } else { $tgt = $dir::cwd->lookupfile($tgt); $env = $env->_resolve($tgt); my $builder = find build::command::user($env, $com, 'script'); $tgt->bind($builder, @sources); } } sub Depends { my($env) = shift; my($tgt) = $env->_subst(shift); my(@deps) = map($dir::cwd->lookup($env->_subst($_)), @_); if (! ref($tgt)) { $tgt = [ $tgt ]; } my($t); foreach $t (map($dir::cwd->lookupfile($_), @$tgt)) { push(@{$t->{dep}}, @deps); } } # Setup a quick scanner for the specified input file, for the # associated environment. Any use of the input file will cause the # scanner to be invoked, once only. The scanner sees just one line at # a time of the file, and is expected to return a list of # dependencies. sub QuickScan { my($env, $code, $file, $path) = @_; $dir::cwd->lookup($env->_subst($file))->{'srcscan',$env} = find scan::quickscan($code, $env, $env->_subst($path)); } # Generic builder module. Just a few default methods. Every derivable # file must have a builder object of some sort attached. Usually # builder objects are shared. package build; # Null signature for dynamic includes. sub includes { () } # Null signature for build script. sub script { () } # Not compatible with any other builder, by default. sub compatible { 0 } # Builder module for the Install command. 
package build::install; use vars qw( @ISA $installer ); BEGIN { @ISA = qw(build); bless $installer = {} # handle for this class. } sub find { $installer } # Caching not supported for Install: generally install is trivial anyway, # and we don't want to clutter the cache. sub cachin { undef } sub cachout { } # Do the installation. sub action { my($self, $tgt) = @_; my($src) = $tgt->{sources}[0]; main::showcom("Install ${\$src->rpath} as ${\$tgt->path}") if ($param::install && !$param::quiet); return unless $param::build; futil::install($src->rpath, $tgt); return 1; } # Builder module for generic UNIX commands. package build::command; use vars qw( @ISA %com ); BEGIN { @ISA = qw(build) } sub find { my ($class, $env, $com, $package) = @_; $com = $env->_subst($com); $package ||= ''; $com{$env,$com,$package} || do { # Remove unwanted bits from signature -- those bracketed by %( ... %) my $comsig = $com; $comsig =~ s/^\@\s*//mg; while ($comsig =~ s/%\(([^%]|%[^\)])*?%\)//g) { } my $self = { env => $env, com => $com, 'package' => $package, comsig => $comsig }; $com{$env,$com,$package} = bless $self, $class; } } # Default cache in function. sub cachin { my($self, $tgt, $sig) = @_; if (cache::in($tgt, $sig)) { if ($param::cachecom) { map { if (! s/^\@\s*//) { main::showcom($_) } } $self->getcoms($tgt); } else { printf("Retrieved %s from cache\n", $tgt->path) unless ($param::quiet); } return 1; } return undef; } # Default cache out function. sub cachout { my($self, $tgt, $sig) = @_; cache::out($tgt, $sig); } # Internal routine to process variable options. # f: return file part # F: return file part, but strip any suffix # d: return directory part # b: return full path, but strip any suffix (a.k.a. return basename) # s: return only the suffix (or an empty string, if no suffix is there) # a: return the absolute path to the file # no option: return full path to file sub _variant { my($opt, $file) = @_; $opt = '' if !
defined $opt; if ($opt eq 'f') { return $file->{entry}; } elsif ($opt eq 'd') { return $file->{dir}->path; } elsif ($opt eq 'F') { my $subst = $file->{entry}; $subst =~ s/\.[^\.]+$//; return $subst; } elsif ($opt eq 'b') { my $subst = $file->path; $subst =~ s/\.[^\.]+$//; return $subst; } elsif ($opt eq 's') { my $subst = $file->{entry}; $subst =~ m/(\.[^\.]+)$/; return $1; } elsif ($opt eq 'a') { my $path = $file->path; if (! File::Spec->file_name_is_absolute($path)) { $path = File::Spec->catfile(Cwd::cwd(), $path); } return $path; } else { return $file->path; } } # For the signature of a basic command, we don't bother # including the command itself. This is not strictly correct, # and if we wanted to be rigorous, we might want to insist # that the command was checked for all the basic commands # like gcc, etc. For this reason we don't have an includes # method. # Call this to get the command line script: an array of # fully substituted commands. sub getcoms { my($self, $tgt) = @_; my(@coms); my $com; for $com (split(/\n/, $self->{com})) { my(@src) = (undef, @{$tgt->{sources}}); my(@src1) = @src; next if $com =~ /^\s*$/; # NOTE: we used to have a more elegant s//.../e solution # for the items below, but this caused a bus error... # Remove %( and %) -- those are only used to bracket parts # of the command that we don't depend on. $com =~ s/%[()]//g; # Deal with %n, n=1,9 and variants. while ($com =~ /%([1-9])(:([fdbsFa]?))?/) { my($match) = $&; my($src) = $src1[$1]; my($subst) = _variant($3, $src1[$1]->rfile); undef $src[$1]; $com =~ s/$match/$subst/; } # Deal with %0 aka %> and variants. while ($com =~ /%[0>](:([fdbsFa]?))?/) { my($match) = $&; my($subst) = _variant($2, $tgt); $com =~ s/$match/$subst/; } # Deal with %< (all sources except %n's already used) while ($com =~ /%<(:([fdbsFa]?))?/) { my($match) = $&; my @list = (); foreach (@src) { push(@list, _variant($2, $_->rfile)) if $_; } my($subst) = join(' ', @list); $com =~ s/$match/$subst/; } # Deal with %[ %]. 
$com =~ s{%\[(.*?)%\]}{ my($func, @args) = grep { $_ ne '' } split(/\s+/, $1); die("$0: \"$func\" is not defined.\n") unless ($self->{env}->{$func}); &{$self->{env}->{$func}}(@args); }gex; # Convert left-over %% into %. $com =~ s/%%/%/g; # White space cleanup. XXX NO WAY FOR USER TO HAVE QUOTED SPACES $com = join(' ', split(' ', $com)); next if $com =~ /^:/ && $com !~ /^:\S/; push(@coms, $com); } @coms } # Build the target using the previously specified commands. sub action { my($self, $tgt) = @_; my($env) = $self->{env}; if ($param::build) { futil::mkdir($tgt->{dir}); unlink($tgt->path) if ! $tgt->precious; } # Set environment. map(delete $ENV{$_}, keys %ENV); %ENV = %{$env->{ENV}}; # Handle multi-line commands. my $com; for $com ($self->getcoms($tgt)) { if ($com !~ s/^\@\s*//) { main::showcom($com); } if ($param::build) { if ($com =~ /^\[perl\]\s*/) { my $perlcmd = $'; my $status; { # Restore the script package variables that were defined # in the Conscript file that defined this [perl] build, # so the code executes with the expected variables. my($package) = $self->{'package'}; my($pkgvars) = $tgt->{conscript}->{pkgvars}; NameSpace::restore($package, $pkgvars) if $pkgvars; # Actually execute the [perl] command to build the target. $status = eval "package $package; $perlcmd"; # Clean up the namespace by deleting the package variables # we just restored. NameSpace::remove($package, keys %$pkgvars) if $pkgvars; } if (!defined($status)) { warn "$0: *** Error during perl command eval: $@.\n"; return undef; } elsif ($status == 0) { warn "$0: *** Perl command returned $status (this indicates an error).\n"; return undef; } next; } #--------------------- # Can't fork on Win32 #--------------------- if ($main::_WIN32) { system($com); if ($?) { my ($b0, $b1) = ($? & 0xFF, $? 
>> 8); my $err = $b1 || $?; my $path = $tgt->path; my $warn = qq($0: *** [$path] Error $err); $warn .= " (executable not found in path?)" if $b1 == 0xFF; warn "$warn\n"; return undef; } } else { my($pid) = fork(); die("$0: unable to fork child process ($!)\n") if !defined $pid; if (!$pid) { # This is the child. We eval the command to suppress -w # warnings about not reaching the statements afterwards. eval 'exec($com)'; $com =~ s/\s.*//; die qq($0: failed to execute "$com" ($!). ) . qq(Is this an executable on path "$ENV{PATH}"?\n); } for (;;) { do {} until wait() == $pid; my ($b0, $b1) = ($? & 0xFF, $? >> 8); # Don't actually see 0177 on stopped process; is this necessary? next if $b0 == 0177; # process stopped; we can wait. if ($b0) { my($core, $sig) = ($b0 & 0200, $b0 & 0177); my($coremsg) = $core ? "; core dumped" : ""; $com =~ s/\s.*//; my $path = $tgt->path; my $err = "$0: *** \[$path\] $com terminated by signal " . "$sig$coremsg\n"; warn $err; return undef; } if ($b1) { my($path) = $tgt->path; warn qq($0: *** [$path] Error $b1\n); # trying to be like make. return undef; } last; } } } } # success. return 1; } # Return script signature. sub script { $_[0]->{comsig} } # Link a program. package build::command::link; use vars qw( @ISA ); BEGIN { @ISA = qw(build::command) } # Find an appropriate linker. sub find { my($class, $env, $command) = @_; if (!exists $env->{_LDIRS}) { my($ldirs) = ''; my($wd) = $env->{_cwd}; my($pdirs) = $env->{LIBPATH}; if (!
defined $pdirs) { $pdirs = [ ]; } elsif (ref($pdirs) ne 'ARRAY') { $pdirs = [ split(/$main::PATH_SEPARATOR/o, $pdirs) ]; } my $dir; for $dir (map($wd->lookupdir($env->_subst($_)), @$pdirs)) { my($dpath) = $dir->path; $ldirs .= " ".$env->{LIBDIRPREFIX}.$dpath; next if File::Spec->file_name_is_absolute($dpath); if (@param::rpath) { my $d; if ($dpath eq $dir::CURDIR) { foreach $d (map($_->path, @param::rpath)) { $ldirs .= " ".$env->{LIBDIRPREFIX}.$d; } } else { foreach $d (map($_->path, @param::rpath)) { $ldirs .= " ".$env->{LIBDIRPREFIX}.File::Spec->catfile($d, $dpath); } } } } $env->{_LDIRS} = "%($ldirs%)"; } # Introduce a new magic _LIBS symbol which allows to use the # Unix-style -lNAME syntax for Win32 only. -lNAME will be replaced # with %{PREFLIB}NAME%{SUFLIB}. 1998-06-18 if ($main::_WIN32 && !exists $env->{_LIBS}) { my $libs; my $name; for $name (split(' ', $env->_subst($env->{LIBS} || ''))) { if ($name =~ /^-l(.*)/) { $name = "$env->{PREFLIB}$1$env->{SUFLIB}"; } $libs .= ' ' . $name; } $env->{_LIBS} = $libs ? "%($libs%)" : ''; } bless find build::command($env, $command); } # Called from file::build. Make sure any libraries needed by the # environment are built, and return the collected signatures # of the libraries in the path. sub includes { return $_[0]->{sig} if exists $_[0]->{sig}; my($self, $tgt) = @_; my($env) = $self->{env}; my($ewd) = $env->{_cwd}; my $ldirs = $env->{LIBPATH}; if (! 
defined $ldirs) { $ldirs = [ ]; } elsif (ref($ldirs) ne 'ARRAY') { $ldirs = [ split(/$main::PATH_SEPARATOR/o, $ldirs) ]; } my @lpath = map($ewd->lookupdir($_), @$ldirs); my(@sigs); my(@names); if ($main::_WIN32) { # Pass %LIBS symbol through %-substitution # 1998-06-18 @names = split(' ', $env->_subst($env->{LIBS} || '')); } else { @names = split(' ', $env->{LIBS} || ''); } my $name; for $name (@names) { my ($lpath, @allnames); if ($name =~ /^-l(.*)/) { # -l style names are looked up on LIBPATH, using all # possible lib suffixes in the same search order the # linker uses (according to SUFLIBS). # Recognize new PREFLIB symbol, which should be 'lib' on # Unix, and empty on Win32. TODO: What about shared # library suffixes? 1998-05-13 @allnames = map("$env->{PREFLIB}$1$_", split(/:/, $env->{SUFLIBS})); $lpath = \@lpath; } else { @allnames = ($name); # On Win32, all library names are looked up in LIBPATH # 1998-05-13 if ($main::_WIN32) { $lpath = [$dir::top, @lpath]; } else { $lpath = [$dir::top]; } } my $dir; DIR: for $dir (@$lpath) { my $n; for $n (@allnames) { my($lib) = $dir->lookup_accessible($n); if ($lib) { last DIR if $lib->ignore; if ((build $lib) eq 'errors') { $tgt->{status} = 'errors'; return undef; } push(@sigs, 'sig'->signature($lib)); last DIR; } } } } $self->{sig} = 'sig'->collect(@sigs); } # Always compatible with other such builders, so the user # can define a single program or module from multiple places. sub compatible { my($self, $other) = @_; ref($other) eq "build::command::link"; } # Create a linked module. package build::command::linkedmodule; use vars qw( @ISA ); BEGIN { @ISA = qw(build::command) } # Always compatible with other such builders, so the user # can define a single linked module from multiple places.
sub compatible { my($self, $other) = @_; ref($other) eq "build::command::linkedmodule"; } # Builder for a C module package build::command::cc; use vars qw( @ISA ); BEGIN { @ISA = qw(build::command) } sub find { $_[1]->{_cc} || do { my($class, $env) = @_; my($cpppath) = $env->_subst($env->{CPPPATH}); my($cscanner) = find scan::cpp($env->{_cwd}, $cpppath); $env->{_IFLAGS} = "%(" . $cscanner->iflags($env) . "%)"; my($self) = find build::command($env, $env->{CCCOM}); $self->{scanner} = $cscanner; bless $env->{_cc} = $self; } } # Invoke the associated C scanner to get signature of included files. sub includes { my($self, $tgt) = @_; $self->{scanner}->includes($tgt, $tgt->{sources}[0]); } # Builder for a C++ module package build::command::cxx; use vars qw( @ISA ); BEGIN { @ISA = qw(build::command) } sub find { $_[1]->{_cxx} || do { my($class, $env) = @_; my($cpppath) = $env->_subst($env->{CPPPATH}); my($cscanner) = find scan::cpp($env->{_cwd}, $cpppath); $env->{_IFLAGS} = "%(" . $cscanner->iflags($env) . "%)"; my($self) = find build::command($env, $env->{CXXCOM}); $self->{scanner} = $cscanner; bless $env->{_cxx} = $self; } } # Invoke the associated C scanner to get signature of included files. sub includes { my($self, $tgt) = @_; $self->{scanner}->includes($tgt, $tgt->{sources}[0]); } # Builder for a user command (cons::Command). We assume that a user # command might be built and implement the appropriate dependencies on # the command itself (actually, just on the first word of the command # line). package build::command::user; use vars qw( @ISA ); BEGIN { @ISA = qw(build::command) } # XXX Optimize this to not use ignored paths. sub comsig { return $_[0]->{_comsig} if exists $_[0]->{_comsig}; my($self, $tgt) = @_; my($env) = $self->{env}; $self->{_comsig} = ''; my $com; com: for $com (split(/[\n;]/, $self->script)) { # Isolate command word. $com =~ s/^\s*//; $com =~ s/\s.*//; next if !$com; # blank line my($pdirs) = $env->{ENV}->{PATH}; if (! 
defined $pdirs) { $pdirs = [ ]; } elsif (ref($pdirs) ne 'ARRAY') { $pdirs = [ split(/$main::PATH_SEPARATOR/o, $pdirs) ]; } my $dir; for $dir (map($dir::top->lookupdir($_), @$pdirs)) { my($prog) = $dir->lookup_accessible($com); if ($prog) { # XXX Not checking execute permission. if ((build $prog) eq 'errors') { $tgt->{status} = 'errors'; return undef; } next com if $prog->ignore; $self->{_comsig} .= 'sig'->signature($prog); next com; } } # Not found: let shell give an error. } $self->{_comsig} } sub includes { my($self, $tgt) = @_; my($sig) = ''; # Check for any quick scanners attached to source files. my $dep; for $dep (@{$tgt->{dep}}, @{$tgt->{sources}}) { my($scanner) = $dep->{'srcscan',$self->{env}}; if ($scanner) { $sig .= $scanner->includes($tgt, $dep); } } # Add the command signature. return &comsig . $sig; } # Builder for a library module (archive). # We assume that a user command might be built and implement the # appropriate dependencies on the command itself. package build::command::library; use vars qw( @ISA ); BEGIN { @ISA = qw(build::command) } sub find { my($class, $env) = @_; bless find build::command($env, $env->{ARCOM}) } # Always compatible with other library builders, so the user # can define a single library from multiple places. sub compatible { my($self, $other) = @_; ref($other) eq "build::command::library"; } # A multi-target builder. # This allows multiple targets to be associated with a single build # script, without forcing all the code to be aware of multiple targets. 
package build::multiple; sub new { my($class, $builder, $tgts) = @_; bless { 'builder' => $builder, 'tgts' => $tgts }; } sub script { my($self, $tgt) = @_; $self->{builder}->script($tgt); } sub includes { my($self, $tgt) = @_; $self->{builder}->includes($tgt); } sub compatible { my($self, $tgt) = @_; $self->{builder}->compatible($tgt); } sub cachin { my($self, $tgt, $sig) = @_; $self->{builder}->cachin($tgt, $sig); } sub cachout { my($self, $tgt, $sig) = @_; $self->{builder}->cachout($tgt, $sig); } sub action { my($self, $invoked_tgt) = @_; return $self->{built} if exists $self->{built}; # Make sure all targets in the group are unlinked before building any. my($tgts) = $self->{tgts}; my $tgt; for $tgt (@$tgts) { futil::mkdir($tgt->{dir}); unlink($tgt->path) if ! $tgt->precious; } # Now do the action to build all the targets. For consistency # we always call the action on the first target, just so that # %> is deterministic. $self->{built} = $self->{builder}->action($tgts->[0]); # Now "build" all the other targets (except for the one # we were called with). This guarantees that the signature # of each target is updated appropriately. We force the # targets to be built even if they have been previously # considered and found to be OK; the only effect this # has is to make sure that signature files are updated # correctly. for $tgt (@$tgts) { if ($tgt ne $invoked_tgt) { delete $tgt->{status}; 'sig'->invalidate($tgt); build $tgt; } } # Status of action. $self->{built}; } # Generic scanning module. package scan; # Returns the signature of files included by the specified files on # behalf of the associated target. Any errors in handling the included # files are propagated to the target on whose behalf this processing # is being done. Signatures are cached for each unique file/scanner # pair.
sub includes { my($self, $tgt, @files) = @_; my(%files, $file); my($inc) = $self->{includes} || ($self->{includes} = {}); while ($file = pop @files) { next if exists $files{$file}; if ($inc->{$file}) { push(@files, @{$inc->{$file}}); $files{$file} = 'sig'->signature($file->rfile); } else { if ((build $file) eq 'errors') { $tgt->{status} = 'errors'; # tgt inherits build status return (); } $files{$file} = 'sig'->signature($file->rfile); my(@includes) = $self->scan($file); $inc->{$file} = \@includes; push(@files, @includes); } } 'sig'->collect(sort values %files) } # A simple scanner. This is used by the QuickScan function, to set up # one-time target and environment-independent scanning for a source # file. Only used for commands run by the Command method. package scan::quickscan; use vars qw( @ISA %scanner ); BEGIN { @ISA = qw(scan) } sub find { my($class, $code, $env, $pdirs) = @_; if (! defined $pdirs) { $pdirs = [ ] ; } elsif (ref($pdirs) ne 'ARRAY') { $pdirs = [ split(/$main::PATH_SEPARATOR/o, $pdirs) ]; } my(@path) = map { $dir::cwd->lookupdir($_) } @$pdirs; my($spath) = "@path"; $scanner{$code,$env,$spath} || do { my($self) = { code => $code, env => $env, path => \@path }; $scanner{$code,$env,$spath} = bless $self; } } # Scan the specified file for included file names. sub scan { my($self, $file) = @_; my($code) = $self->{code}; my(@includes); # File should have been built by now. If not, we'll ignore it. return () unless open(SCAN, $file->rpath); while (<SCAN>) { push(@includes, grep($_ ne '', &$code)); } close(SCAN); my($wd) = $file->{dir}; my(@files); my $name; for $name (@includes) { my $dir; for $dir ($file->{dir}, @{$self->{path}}) { my($include) = $dir->lookup_accessible($name); if ($include) { push(@files, $include) unless $include->ignore; last; } } } @files } # CPP (C preprocessor) scanning module package scan::cpp; use vars qw( @ISA %scanner ); BEGIN { @ISA = qw(scan) } # For this constructor, provide the include path argument (colon # separated).
# Each path is taken relative to the provided directory.
# Note: a particular scanning object is assumed to always return the
# same result for the same input.  This is why the search path is a
# parameter to the constructor for a CPP scanning object.  We go to
# some pains to make sure that we return the same scanner object
# for the same path: otherwise we will unnecessarily scan files.
sub find {
    my($class, $dir, $pdirs) = @_;
    if (! defined $pdirs) {
        $pdirs = [ ];
    } elsif (ref($pdirs) ne 'ARRAY') {
        $pdirs = [ split(/$main::PATH_SEPARATOR/o, $pdirs) ];
    }
    my @path = map($dir->lookupdir($_), @$pdirs);
    my($spath) = "@path";
    $scanner{$spath} || do {
        my($self) = {'path' => \@path};
        $scanner{$spath} = bless $self;
    }
}

# Scan the specified file for include lines.
sub scan {
    my($self, $file) = @_;
    my($angles, $quotes);
    if (exists $file->{angles}) {
        $angles = $file->{angles};
        $quotes = $file->{quotes};
    } else {
        my(@anglenames, @quotenames);
        return () unless open(SCAN, $file->rpath);
        while (<SCAN>) {
            next unless /^\s*#/;
            if (/^\s*#\s*include\s*([<"])(.*?)[>"]/) {
                if ($1 eq "<") {
                    push(@anglenames, $2);
                } else {
                    push(@quotenames, $2);
                }
            }
        }
        close(SCAN);
        $angles = $file->{angles} = \@anglenames;
        $quotes = $file->{quotes} = \@quotenames;
    }
    my(@shortpath) = @{$self->{path}};            # path for <> style includes
    my(@longpath) = ($file->{dir}, @shortpath);   # path for "" style includes
    my(@includes);
    my $name;
    for $name (@$angles) {
        my $dir;
        for $dir (@shortpath) {
            my($include) = $dir->lookup_accessible($name);
            if ($include) {
                push(@includes, $include) unless $include->ignore;
                last;
            }
        }
    }
    for $name (@$quotes) {
        my $dir;
        for $dir (@longpath) {
            my($include) = $dir->lookup_accessible($name);
            if ($include) {
                push(@includes, $include) unless $include->ignore;
                last;
            }
        }
    }
    return @includes
}

# Return the include flags that would be used for a C compile.
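#
# For illustration only (the values are hypothetical, not from the
# original source): with $env->{INCDIRPREFIX} set to '-I' and a scanner
# path of ('include', '/usr/include'), iflags returns a string like
#
#     " -Iinclude -I/usr/include"
#
# and, when Repository directories are in effect (@param::rpath), each
# relative entry is repeated once per Repository, e.g. " -I/repo/include"
# for a Repository rooted at /repo.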
sub iflags {
    my($self, $env) = @_;
    my($iflags) = '';
    my($dpath);
    for $dpath (map($_->path, @{$self->{path}})) {
        $iflags .= " ".$env->{INCDIRPREFIX}.$dpath;
        next if File::Spec->file_name_is_absolute($dpath);
        if (@param::rpath) {
            my $d;
            if ($dpath eq $dir::CURDIR) {
                foreach $d (map($_->path, @param::rpath)) {
                    $iflags .= " ".$env->{INCDIRPREFIX}.$d;
                }
            } else {
                foreach $d (map($_->path, @param::rpath)) {
                    $iflags .= " ".$env->{INCDIRPREFIX}
                               .File::Spec->catfile($d, $dpath);
                }
            }
        }
    }
    $iflags
}

package File::Spec;

use vars qw( $_SEP $_MATCH_SEP $_MATCH_VOL );

# Cons is migrating to using File::Spec for portable path name
# manipulation.  This is the right long-term direction, but there are
# some problems with making the transition:
#
#     For multi-volume support, we need to use newer interfaces
#     (splitpath, catpath, splitdir) that are only available in
#     File::Spec 0.8.
#
#     File::Spec 0.8 doesn't work with Perl 5.00[34] due to
#     regular expression incompatibilities (use of \z).
#
#     Forcing people to use a new version of a module is painful
#     because (in the workplace) their administrators aren't
#     always going to agree to install it everywhere.
#
# As a middle ground, we provide our own versions of all the File::Spec
# methods we use, supporting both UNIX and Win32.  Some of these methods
# are home brew, some are cut-and-pasted from the real File::Spec methods.
# This way, we're not reinventing the whole wheel, at least.
#
# We can (and should) get rid of this class whenever 5.00[34] and
# versions of File::Spec prior to 0.9 (?) have faded sufficiently.
# We also may need to revisit whenever someone first wants to use
# Cons on some platform other than UNIX or Win32.
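#
# Behavior sketch of these methods on UNIX (illustrative examples, not
# from the original source):
#
#     File::Spec->canonpath('./foo//bar/./baz/');   # 'foo/bar/baz'
#     File::Spec->catfile('a', 'b', 'c');           # 'a/b/c'
#     File::Spec->file_name_is_absolute('/tmp/x');  # true
#     File::Spec->splitpath('a/b/c.o');             # ('', 'a/b/', 'c.o')
#
# On Win32 the same calls use '\' as the separator and accept an
# optional drive-letter volume.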
BEGIN {
    if ($main::_WIN32) {
        $_SEP = '\\';
        $_MATCH_SEP = "[\Q/$_SEP\E]";
        $_MATCH_VOL = "([a-z]:)?$_MATCH_SEP";
    } else {
        $_SEP = '/';
        $_MATCH_SEP = "\Q$_SEP\E";
        $_MATCH_VOL = $_MATCH_SEP;
    }
}

sub canonpath {
    my ($self, $path) = @_;
    if ($main::_WIN32) {
        $path =~ s/^([a-z]:)/\u$1/s;
        $path =~ s|/|\\|g;
        $path =~ s|([^\\])\\+|$1\\|g;                        # xx////xx  -> xx/xx
        $path =~ s|(\\\.)+\\|\\|g;                           # xx/././xx -> xx/xx
        $path =~ s|^(\.\\)+||s unless $path eq ".\\";        # ./xx      -> xx
        $path =~ s|\\$|| unless $path =~ m#^([A-Z]:)?\\$#s;  # xx/       -> xx
    } else {
        $path =~ s|/+|/|g unless ($^O eq 'cygwin');          # xx////xx  -> xx/xx
        $path =~ s|(/\.)+/|/|g;                              # xx/././xx -> xx/xx
        $path =~ s|^(\./)+||s unless $path eq "./";          # ./xx      -> xx
        $path =~ s|^/(\.\./)+|/|s;                           # /../../xx -> xx
        $path =~ s|/$|| unless $path eq "/";                 # xx/       -> xx
    }
    return $path;
}

sub catdir {
    my $self = shift;
    my @args = @_;
    foreach (@args) {
        # append a slash to each argument unless it has one there
        $_ .= $_SEP if $_ eq '' || substr($_, -1) ne $_SEP;
    }
    return $self->canonpath(join('', @args));
}

sub catfile {
    my $self = shift;
    my $file = pop @_;
    return $file unless @_;
    my $dir = $self->catdir(@_);
    $dir .= $_SEP unless substr($dir, -1) eq $_SEP;
    $file = '' if ! defined($file);
    return $dir.$file;
}

sub catpath {
    my $path = $_[1] . $_[0]->catfile(@_[2..$#_]);
    $path =~ s/(.)$_MATCH_SEP*$/$1/;
    $path;
}

sub curdir { '.' }

sub file_name_is_absolute {
    my ($self, $file) = @_;
    return scalar($file =~ m{^$_MATCH_VOL}is);
}

sub splitdir {
    my @dirs = split(/$_MATCH_SEP/, $_[1], -1);
    push(@dirs, '') if $dirs[$#dirs];
    @dirs;
}

sub splitpath {
    my ($self, $path) = @_;
    my $vol = '';
    my $sep = $_SEP;
    if ($main::_WIN32) {
        if ($path =~ s#^([A-Za-z]:|(?:\\\\|//)[^\\/]+[\\/][^\\/]+)([\\/])#$2#) {
            $vol = $1;
            $sep = $2;
        }
    }
    my(@path) = split(/$_MATCH_SEP/, $path, -1);
    my $file = pop @path;
    my $dirs = join($sep, @path, '');
    return ($vol, $dirs, $file);
}

sub updir { '..' }

sub case_tolerant { return $main::_WIN32; }

# Directory and file handling.
# Files/dirs are represented by objects.
# Other packages are welcome to add component-specific attributes.
package dir;

use vars qw( $SEPARATOR $MATCH_SEPARATOR $CURDIR $UPDIR $cwd_vol %root $top $cwd );

BEGIN {
    # A portable way of determining our directory separator.
    $SEPARATOR = File::Spec->catdir('', '');
    # A fast-path regular expression to match a directory separator
    # anywhere in a path name.
    if ($SEPARATOR eq '/') {
        $MATCH_SEPARATOR = "\Q$SEPARATOR\E";
    } else {
        $MATCH_SEPARATOR = "[\Q/$SEPARATOR\E]";
    }
    # Cache these values so we don't have to make a method call
    # every time we need them.
    $CURDIR = File::Spec->curdir;    # '.' on UNIX
    $UPDIR = File::Spec->updir;      # '..' on UNIX
    # $cwd_vol = '';
}

# Annotate a node (file or directory) with info about the
# method that created it.
sub creator {
    my($self, @frame) = @_;
    $self->{'creator'} = \@frame if @frame;
    $self->{'creator'};
}

# Handle a file|dir type exception.  We only die if we find we were
# invoked by something in a Conscript/Construct file, because
# dependencies created directly by Cons' analysis shouldn't cause
# an error.
sub _type_exception {
    my($e) = @_;
    my($line, $sub);
    (undef, undef, $line, $sub) = script::caller_info;
    if (defined $line) {
        my $err = "\"${\$e->path}\" already in use as a " . ref($e)
                  . " before $sub on line $line";
        if ($e->{'creator'}) {
            my $script;
            (undef, $script, $line, $sub) = @{$e->{'creator'}};
            $err = "\t" . $err . ",\n\t\tdefined by $sub in $script, line $line";
        }
        $err .= "\n";
        die $err;
    }
}

# This wraps up all the common File::Spec logic that we use for parsing
# directory separators in a path and turning it into individual
# subdirectories that we must create, as well as creation of root
# nodes for any new file system volumes we find.  File::Spec doesn't have
# intuitively obvious interfaces, so this is heavily commented.
#
# Note: This is NOT an object or class method;
# it's just a utility subroutine.
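#
# Examples of what it returns (illustrative only; $dir is any directory
# node):
#
#     _parse_path($dir, 'sub/dir/file.c')  ->  ($dir, ['sub', 'dir'], 'file.c')
#     _parse_path($dir, '#src/foo.c')      ->  ($dir::top, ['src'], 'foo.c')
#     _parse_path($dir, '/usr/lib')        ->  (root node for '/', ['usr'], 'lib')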
sub _parse_path {
    my($dir, $path) = @_;

    # Convert all slashes to the native directory separator.
    # This allows Construct files to always be written with good
    # old POSIX path names, regardless of what we're running on.
    $path = File::Spec->canonpath($path);

    # File::Spec doesn't understand the Cons convention of
    # an initial '#' for top-relative files.  Strip it.
    my($toprel) = $path =~ s/^#//;

    # Let File::Spec do the heavy lifting of parsing the path name.
    my($vol, $directories, $entry) = File::Spec->splitpath($path);
    my @dirs = File::Spec->splitdir($directories);

    # If there was a file entry on the end of the path, then the
    # last @dirs element is '' and we don't need it.  If there
    # wasn't a file entry on the end (File::Spec->splitpath() knew
    # the last component was a directory), then the last @dirs
    # element becomes the entry we want to look up.
    my($e) = pop @dirs;
    $entry = $e if $entry eq '';

    if (File::Spec->file_name_is_absolute($path)) {
        # An absolute path name.  If no volume was supplied,
        # use the volume of our current directory.
        $vol = $cwd_vol if $vol eq '';
        $vol = uc($vol) if File::Spec->case_tolerant;
        if (! defined $root{$vol}) {
            # This is our first time looking up a path name
            # on this volume, so create a root node for it.
            # (On UNIX systems, $vol is always '', so '/'
            # always maps to the $root{''} node.)
            $root{$vol} = { path    => $vol.$SEPARATOR,
                            prefix  => $vol.$SEPARATOR,
                            srcpath => $vol.$SEPARATOR,
                            'exists' => 1 };
            $root{$vol}->{'srcdir'} = $root{$vol};
            bless $root{$vol};
        }
        # We're at the top, so strip the blank entry from the front of
        # the @dirs array since the initial '/' it represents will now
        # be supplied by the root node we return.
        shift @dirs;
        $dir = $root{$vol};
    } elsif ($toprel) {
        $dir = $dir::top;
    }
    ($dir, \@dirs, $entry);
}

# Common subroutine for creating directory nodes.
sub _create_dirs {
    my ($dir, @dirs) = @_;
    my $e;
    foreach $e (@dirs) {
        my $d = $dir->{member}->{$e};
        if (! defined $d) {
            bless $d = { 'entry' => $e, 'dir' => $dir, }, 'dir';
            $d->creator(script::caller_info);
            $d->{member}->{$dir::CURDIR} = $d;
            $d->{member}->{$dir::UPDIR} = $dir;
            $dir->{member}->{$e} = $d;
        } elsif (ref $d eq 'entry') {
            bless $d, 'dir';
            $d->{member}->{$dir::CURDIR} = $d;
            $d->{member}->{$dir::UPDIR} = $dir;
        } elsif (ref $d eq 'file') {
            # This clause is to supply backwards compatibility,
            # with a warning, for anyone that's used FilePath
            # to refer to a directory.  After people using
            # 1.8 have had time to adjust (sometime in version
            # 1.9 or later), we should remove this entire clause.
            my($script, $line, $sub);
            (undef, $script, $line, $sub) = @{$d->{'creator'}};
            if ($sub eq 'script::FilePath') {
                print STDERR "$0: Warning: $sub used to refer to a directory\n"
                           . "\tat line $line of $script.  Use DirPath instead.\n";
                bless $d, 'dir';
            } else {
                _type_exception($d);
            }
        } elsif (ref $d ne 'dir') {
            _type_exception($d);
        }
        $dir = $d;
    }
    $dir;
}

# Look up an entry in a directory.  This method is for when we don't
# care whether a file or directory is returned, so if the entry already
# exists, it will simply be returned.  If not, we create it as a
# generic "entry" which can be later turned into a file or directory
# by a more-specific lookup.
#
# The file entry may be specified as relative, absolute (starts with /),
# or top-relative (starts with #).
sub lookup {
    my($dir, $entry) = @_;
    if ($entry !~ m#$MATCH_SEPARATOR#o) {
        # Fast path: simple entry name in a known directory.
        if ($entry =~ s/^#//) {
            # Top-relative names begin with #.
            $dir = $dir::top;
        }
    } else {
        my $dirsref;
        ($dir, $dirsref, $entry) = _parse_path($dir, $entry);
        $dir = _create_dirs($dir, @$dirsref) if @$dirsref;
        return if ! defined $dir;
        return $dir if $entry eq '';
    }
    my $e = $dir->{member}->{$entry};
    if (! defined $e) {
        bless $e = { 'entry' => $entry, 'dir' => $dir, }, 'entry';
        $e->creator(script::caller_info);
        $dir->{member}->{$entry} = $e;
    }
    $e;
}

# Look up a file entry in a directory.
#
# The file entry may be specified as relative, absolute (starts with /),
# or top-relative (starts with #).
sub lookupfile {
    my($dir, $entry) = @_;
    if ($entry !~ m#$MATCH_SEPARATOR#o) {
        # Fast path: simple entry name in a known directory.
        if ($entry =~ s/^#//) {
            # Top-relative names begin with #.
            $dir = $dir::top;
        }
    } else {
        my $dirsref;
        ($dir, $dirsref, $entry) = _parse_path($dir, $entry);
        $dir = _create_dirs($dir, @$dirsref) if @$dirsref;
        return undef if $entry eq '';
    }
    my $f = $dir->{member}->{$entry};
    if (! defined $f) {
        bless $f = { 'entry' => $entry, 'dir' => $dir, }, 'file';
        $f->creator(script::caller_info);
        $dir->{member}->{$entry} = $f;
    } elsif (ref $f eq 'entry') {
        bless $f, 'file';
    } elsif (ref $f ne 'file') {
        _type_exception($f);
    }
    $f;
}

# Look up a (sub-)directory entry in a directory.
#
# The (sub-)directory entry may be specified as relative, absolute
# (starts with /), or top-relative (starts with #).
sub lookupdir {
    my($dir, $entry) = @_;
    my $dirsref;
    if ($entry !~ m#$MATCH_SEPARATOR#o) {
        # Fast path: simple entry name in a known directory.
        if ($entry =~ s/^#//) {
            # Top-relative names begin with #.
            $dir = $dir::top;
        }
    } else {
        ($dir, $dirsref, $entry) = _parse_path($dir, $entry);
    }
    _create_dirs($dir, @$dirsref, $entry);
}

# Look up a file entry and return it if it's accessible.
sub lookup_accessible {
    my $file = $_[0]->lookupfile($_[1]);
    return ($file && $file->accessible) ? $file : undef;
}

# Return the parent directory without doing a lookupdir,
# which would create a parent if it doesn't already exist.
# A return value of undef (! $dir->up) indicates a root directory.
sub up {
    $_[0]->{member}->{$dir::UPDIR};
}

# Return whether this is an entry somewhere underneath the
# specified directory.
sub is_under {
    my $dir = $_[0];
    while ($dir) {
        return 1 if $_[1] == $dir;
        $dir = $dir->up;
    }
    return undef;
}

# Return the relative path from the calling directory ($_[1])
# to the object.
# If the object is not under the directory, then
# we return it as a top-relative or absolute path name.
sub relpath {
    my ($dir, $obj) = @_;
    my @dirs;
    my $o = $obj;
    while ($o) {
        if ($dir == $o) {
            if (@dirs < 2) {
                return $dirs[0] || '';
            } else {
                return File::Spec->catdir(@dirs);
            }
        }
        unshift(@dirs, $o->{entry});
        $o = $o->up;
    }
    # The object was not underneath the specified directory.
    # Use the node's cached path, which is either top-relative
    # (in which case we append '#' to the beginning) or
    # absolute.
    my $p = $obj->path;
    $p = '#' . $p if ! File::Spec->file_name_is_absolute($p);
    return $p;
}

# Return the path of the directory (file paths implemented
# separately, below).
sub path {
    $_[0]->{path}
        || ($_[0]->{path} = $_[0]->{dir}->prefix . $_[0]->{entry});
}

# Return the pathname as a prefix to be concatenated with an entry.
sub prefix {
    return $_[0]->{prefix} if exists $_[0]->{prefix};
    $_[0]->{prefix} = $_[0]->path . $SEPARATOR;
}

# Return the related source path prefix.
sub srcprefix {
    return $_[0]->{srcprefix} if exists $_[0]->{srcprefix};
    my($srcdir) = $_[0]->srcdir;
    $srcdir->{srcprefix} = $srcdir eq $_[0] ? $srcdir->prefix
                                            : $srcdir->srcprefix;
}

# Return the related source directory.
sub srcdir {
    $_[0]->{'srcdir'}
        || ($_[0]->{'srcdir'} = $_[0]->{dir}->srcdir->lookupdir($_[0]->{entry}))
}

# Return if the directory is linked to a separate source directory.
sub is_linked {
    return $_[0]->{is_linked} if defined $_[0]->{is_linked};
    $_[0]->{is_linked} = $_[0]->path ne $_[0]->srcdir->path;
}

sub link {
    my(@paths) = @_;
    my($srcdir) = $dir::cwd->lookupdir(pop @paths)->srcdir;
    map($dir::cwd->lookupdir($_)->{'srcdir'} = $srcdir, @paths);
    # make a reverse lookup for the link.
    $srcdir->{links} = [] if ! $srcdir->{links};
    push @{$srcdir->{links}}, @paths;
}

use vars qw( @tail );    # TODO: Why global ????
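#
# Illustration (hypothetical directory names, not from the original
# source): after a Construct file establishes a build/source link,
# conceptually
#
#     dir::link('build/debug', 'build/release', 'src');
#
# 'src' becomes the srcdir of both build directories, and the reverse
# mapping is remembered in $srcdir->{links}, which linked_targets below
# uses to map a source-side node back to the target paths it is linked
# into.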
sub linked_targets {
    my $tgt = shift;
    my @targets = ();
    my $dir;
    if (ref $tgt eq 'dir') {
        $dir = $tgt;
    } else {
        push @tail, $tgt;
        $dir = $tgt->{dir};
    }
    while ($dir) {
        if (defined $dir->{links} && @{$dir->{links}}) {
            push(@targets,
                 map(File::Spec->catdir($_, @tail), @{$dir->{links}}));
            #print STDERR "Found Link: ${\$dir->path} -> @{\$dir->{links}}\n";
        }
        unshift @tail, $dir->{entry};
        $dir = $dir->up;
    }
    return map($dir::top->lookupdir($_), @targets);
}

sub accessible {
    my $path = $_[0]->path;
    my $err = "$0: you have attempted to use path \"$path\" both as a file "
              . "and as a directory!\n";
    die $err;
}

sub init {
    my $path = Cwd::cwd();

    # We know we can get away with passing undef to lookupdir
    # as the directory because $dir is an absolute path.
    $top = lookupdir(undef, $path);
    $top->{'path'} = $top->{srcpath} = $dir::CURDIR;
    $top->{'prefix'} = '';
    $top->{'srcdir'} = $top;
    $cwd = $top;

    ($cwd_vol, undef, undef) = File::Spec->splitpath($path);
    $cwd_vol = '' if ! defined $cwd_vol;
    $cwd_vol = uc($cwd_vol) if File::Spec->case_tolerant;
}

package file;

use vars qw( @ISA $level );

BEGIN { @ISA = qw(dir); $level = 0 }

# Return the pathname of the file.
# Define this separately from dir::path because we don't want to
# cache all file pathnames (just directory pathnames).
sub path {
    $_[0]->{dir}->prefix . $_[0]->{entry}
}

# Return the related source file path.
sub srcpath {
    $_[0]->{dir}->srcprefix . $_[0]->{entry}
}

# Return if the file is (should be) linked to a separate source file.
sub is_linked {
    $_[0]->{dir}->is_linked
}

# Repository file search.  If the local file exists, that wins.
# Otherwise, return the first existing same-named file under a
# Repository directory.  If there isn't anything with the same name
# under a Repository directory, return the local file name anyway
# so that some higher layer can try to construct it.
sub rfile {
    return $_[0]->{rfile} if exists $_[0]->{rfile};
    my($self) = @_;
    my($rfile) = $self;
    if (@param::rpath) {
        my($path) = $self->path;
        if (! File::Spec->file_name_is_absolute($path) && ! -f $path) {
            my($dir);
            foreach $dir (@param::rpath) {
                my($t) = $dir->prefix . $path;
                if (-f $t) {
                    $rfile = $_[0]->lookupfile($t);
                    $rfile->{is_on_rpath} = 1;
                    last;
                }
            }
        }
    }
    $self->{rfile} = $rfile;
}

# Returns the "precious" status of this file.
sub precious {
    return $_[0]->{precious};
}

# "Erase" reference to a Repository file,
# making this a completely local file object
# by pointing it back to itself.
sub no_rfile {
    $_[0]->{'rfile'} = $_[0];
}

# Return a path to the first existing file under a Repository directory,
# implicitly returning the current file's path if there isn't a
# same-named file under a Repository directory.
sub rpath {
    $_[0]->{rpath} || ($_[0]->{rpath} = $_[0]->rfile->path)
}

# Return a path to the first linked srcpath file under a Repository
# directory, implicitly returning the current file's srcpath if there
# isn't a same-named file under a Repository directory.
sub rsrcpath {
    return $_[0]->{rsrcpath} if exists $_[0]->{rsrcpath};
    my($self) = @_;
    my($path) = $self->{rsrcpath} = $self->srcpath;
    if (@param::rpath && ! File::Spec->file_name_is_absolute($path)
                      && ! -f $path) {
        my($dir);
        foreach $dir (@param::rpath) {
            my($t) = $dir->prefix . $path;
            if (-f $t) {
                $self->{rsrcpath} = $t;
                last;
            }
        }
    }
    $self->{rsrcpath};
}

# Return if a same-named source file exists.
# This handles the interaction of Link and Repository logic.
# As a side effect, it will link a source file from its Linked
# directory (preferably local, but maybe in a repository)
# into a build directory from its proper Linked directory.
sub source_exists {
    return $_[0]->{source_exists} if defined $_[0]->{source_exists};
    my($self) = @_;
    my($path) = $self->path;
    my($time) = (stat($path))[9];
    if ($self->is_linked) {
        # Linked directory, local logic.
        my($srcpath) = $self->srcpath;
        my($srctime) = (stat($srcpath))[9];
        if ($srctime) {
            if (! $time || $srctime != $time) {
                futil::install($srcpath, $self);
            }
            return $self->{source_exists} = 1;
        }
        # Linked directory, repository logic.
        if (@param::rpath) {
            if ($self != $self->rfile) {
                return $self->{source_exists} = 1;
            }
            my($rsrcpath) = $self->rsrcpath;
            if ($path ne $rsrcpath) {
                my($rsrctime) = (stat($rsrcpath))[9];
                if ($rsrctime) {
                    if (! $time || $rsrctime != $time) {
                        futil::install($rsrcpath, $self);
                    }
                    return $self->{source_exists} = 1;
                }
            }
        }
        # There was no source file in any Linked directory
        # under any Repository.  If there's one in the local
        # build directory, it no longer belongs there.
        if ($time) {
            unlink($path) || die("$0: couldn't unlink $path ($!)\n");
        }
        return $self->{source_exists} = '';
    } else {
        if ($time) {
            return $self->{source_exists} = 1;
        }
        if (@param::rpath && $self != $self->rfile) {
            return $self->{source_exists} = 1;
        }
        return $self->{source_exists} = '';
    }
}

# Return if a same-named derived file exists under a Repository directory.
sub derived_exists {
    $_[0]->{derived_exists}
        || ($_[0]->{derived_exists} = ($_[0] != $_[0]->rfile));
}

# Return if this file is somewhere under a Repository directory.
sub is_on_rpath {
    $_[0]->{is_on_rpath};
}

sub local {
    my($self, $arg) = @_;
    if (defined $arg) {
        $self->{'local'} = $arg;
    }
    $self->{'local'};
}

# Return the entry name of the specified file with the specified
# suffix appended.  Leave it untouched if the suffix is already there.
# Differs from the addsuffix function, below, in that this strips
# the existing suffix (if any) before appending the desired one.
sub base_suf {
    my($entry) = $_[0]->{entry};
    if ($entry !~ m/$_[1]$/) {
        $entry =~ s/\.[^\.]*$//;
        $entry .= $_[1];
    }
    $entry;
}

# Return the suffix of the file, for up to a 3 character
# suffix.  Anything longer returns nothing.
sub suffix {
    if (! $main::_WIN32) {
        $_[0]->{entry} =~ /\.[^\.\/]{0,3}$/;
        $&
    } else {
        my @pieces = split(/\./, $_[0]->{entry});
        my $suffix = pop(@pieces);
        return ".$suffix";
    }
}

# Called as a simple function file::addsuffix(name, suffix)
sub addsuffix {
    my($name, $suffix) = @_;
    if ($suffix && substr($name, -length($suffix)) ne $suffix) {
        return $name .= $suffix;
    }
    $name;
}

# Return true if the file is (or will be) accessible.
# That is, if we can build it, or if it is already present.
sub accessible {
    (exists $_[0]->{builder}) || ($_[0]->source_exists);
}

# Return true if the file should be ignored for the purpose
# of computing dependency information (should not be considered
# as a dependency and, further, should not be scanned for
# dependencies).
sub ignore {
    return 0 if !$param::ignore;
    return $_[0]->{ignore} if exists $_[0]->{ignore};
    $_[0]->{ignore} = $_[0]->path =~ /$param::ignore/o;
}

# Build the file, if necessary.
sub build {
    $_[0]->{status} || &file::_build;
}

sub _build {
    my($self) = @_;
    print main::DEPFILE $self->path, "\n" if $param::depfile;
    print((' ' x $level), "Checking ", $self->path, "\n") if $param::depends;
    if (!exists $self->{builder}) {
        # We don't know how to build the file.  This is OK, if
        # the file is present as a source file, under either the
        # local tree or a Repository.
        if ($self->source_exists) {
            return $self->{status} = 'handled';
        } else {
            my($name) = $self->path;
            print("$0: don't know how to construct \"$name\"\n");
            exit(1) unless $param::kflag;
            return $self->{status} = 'errors';    # xxx used to be 'unknown'
        }
    }

    # An associated build object exists, so we know how to build
    # the file.  We first compute the signature of the file, based
    # on its dependencies, then only rebuild the file if the
    # signature has changed.
    my($builder) = $self->{builder};
    $level += 2;
    my(@deps) = (@{$self->{dep}}, @{$self->{sources}});
    my($rdeps) = \@deps;

    if ($param::random) {
        # If requested, build in a random order, instead of the
        # order that the dependencies were listed.
        my(%rdeps);
        map { $rdeps{$_, '*' x int(rand 10)} = $_ } @deps;
        $rdeps = [values(%rdeps)];
    }

    $self->{status} = '';
    my $dep;
    for $dep (@$rdeps) {
        if ((build $dep) eq 'errors') {
            # Propagate dependent errors to target,
            # but try to build all dependents regardless of errors.
            $self->{status} = 'errors';
        }
    }

    # If any dependents had errors, then we abort.
    if ($self->{status} eq 'errors') {
        $level -= 2;
        return 'errors';
    }

    # Compute the final signature of the file, based on
    # the static dependencies (in order), dynamic dependencies,
    # output path name, and (non-substituted) build script.
    my($sig) = 'sig'->collect(map('sig'->signature($_->rfile), @deps),
                              $builder->includes($self),
                              $builder->script);

    # May have gotten errors during computation of dynamic
    # dependency signature, above.
    $level -= 2;
    return 'errors' if $self->{status} eq 'errors';

    if (@param::rpath && $self->derived_exists) {
        # There is no local file of this name, but there is one
        # under a Repository directory.
        if ('sig'->current($self->rfile, $sig)) {
            # The Repository copy is current (its signature matches
            # our calculated signature).
            if ($self->local) {
                # ...but they want a local copy, so provide it.
                main::showcom("Local copy of ${\$self->path} from "
                              . "${\$self->rpath}");
                futil::install($self->rpath, $self);
                'sig'->set($self, $sig);
            }
            return $self->{status} = 'handled';
        }
        # The signatures don't match, implicitly because something
        # on which we depend exists locally.  Get rid of the reference
        # to the Repository file; we'll build this (and anything that
        # depends on it) locally.
        $self->no_rfile;
    }

    # Then check for currency.
    if (! 'sig'->current($self, $sig)) {
        # We have to build/derive the file.
        print((' ' x $level), "Rebuilding ", $self->path, ": out of date.\n")
            if $param::depends;
        # First check to see if the built file is cached.
        if ($builder->cachin($self, $sig)) {
            'sig'->set($self, $sig);
            return $self->{status} = 'built';
        } elsif ($builder->action($self)) {
            $builder->cachout($self, $sig);
            'sig'->set($self, $sig);
            return $self->{status} = 'built';
        } else {
            die("$0: errors constructing ${\$self->path}\n")
                unless $param::kflag;
            return $self->{status} = 'errors';
        }
    } else {
        # Push this out to the cache if we've been asked to (-C option).
        # Don't normally do this because it slows us down.
        # In a fully built system, no accesses to the cache directory
        # are required to check any files.  This is a win if cache is
        # heavily shared.  Enabling this option puts the directory in the
        # loop.  Useful only when you wish to recreate a cache from a build.
        if ($param::cachesync) {
            $builder->cachout($self, $sig);
            'sig'->set($self, $sig);
        }
        return $self->{status} = 'handled';
    }
}

# Bind an action to a file, with the specified sources.  No return value.
sub bind {
    my($self, $builder, @sources) = @_;
    if ($self->{builder} && !$self->{builder}->compatible($builder)) {
        # Even if not "compatible", we can still check to see if the
        # derivation is identical.  It should be identical if the builder is
        # the same and the sources are the same.
        if ("$self->{builder} @{$self->{sources}}" ne "$builder @sources") {
            $main::errors++;
            my($_foo1, $script1, $line1, $sub1) = @{$self->creator};
            my($_foo2, $script2, $line2, $sub2) = script::caller_info;
            my $err = "\t${\$self->path}\n"
                      . "\tbuilt (at least) two different ways:\n"
                      . "\t\t$script1, line $line1: $sub1\n"
                      . "\t\t$script2, line $line2: $sub2\n";
            die $err;
        }
        return;
    }
    if ($param::wflag) {
        my($script, $line, $sub);
        (undef, $script, $line, $sub) = script::caller_info;
        $self->{script} = '' if ! defined $self->{script};
        $self->{script} .= "; " if $self->{script};
        $self->{script} .= qq($sub in "$script", line $line);
    }
    $self->{builder} = $builder;
    push(@{$self->{sources}}, @sources);
    @{$self->{dep}} = () if ! defined $self->{dep};
    $self->{conscript} = $priv::self->{script};
}

sub is_under {
    $_[0]->{dir}->is_under($_[1]);
}

sub relpath {
    my $dirpath = $_[0]->relpath($_[1]->{dir});
    if (! $dirpath) {
        return $_[1]->{entry};
    } else {
        File::Spec->catfile($dirpath, $_[1]->{entry});
    }
}

# Generic entry (file or directory) handling.
# This is an empty subclass for nodes that haven't
# quite decided whether they're files or dirs.
# Use file methods until someone blesses them one way or the other.
package entry;

use vars qw( @ISA );

BEGIN { @ISA = qw(file) }

# File utilities
package futil;

# Install one file as another.
# Links them if possible (hard link), otherwise copies.
# Don't ask why, but the source is a path, the tgt is a file obj.
sub install {
    my($sp, $tgt) = @_;
    my($tp) = $tgt->path;
    return 1 if $tp eq $sp;
    return 1 if eval { link($sp, $tp) };
    unlink($tp);
    if (! futil::mkdir($tgt->{dir})) {
        return undef;
    }
    return 1 if eval { link($sp, $tp) };
    futil::copy($sp, $tp);
}

# Copy one file to another.  Arguments are actual file names.
# Returns undef on failure.  Preserves mtime and mode.
sub copy {
    my ($sp, $tp) = @_;
    my ($mode, $length, $atime, $mtime) = (stat($sp))[2,7,8,9];

    # Use Perl standard library module for file copying, which handles
    # binary copies. 1998-06-18
    if (! File::Copy::copy($sp, $tp)) {
        warn qq($0: can\'t install "$sp" to "$tp" ($!)\n); #'
        return undef;
    }

    # The file has been created, so try both the chmod and utime,
    # first making sure the copy is writable (because permissions
    # affect the ability to modify file times on some operating
    # systems), and then changing permissions back if necessary.
    my $ret = 1;
    my $wmode = $mode | 0700;
    if (! chmod $wmode, $tp) {
        warn qq($0: can\'t set mode $wmode on file "$tp" ($!)\n); #'
        $ret = undef;
    }
    if (! utime $atime, $mtime, $tp) {
        warn qq($0: can\'t set modification time for file "$tp" ($!)\n); #'
        $ret = undef;
    }
    if ($mode != $wmode && ! chmod $mode, $tp) {
        warn qq($0: can\'t set mode $mode on file "$tp" ($!)\n); #'
        $ret = undef;
    }
    return $ret;
}

# Ensure that the specified directory exists.
# Aborts on failure.
sub mkdir {
    return 1 if $_[0]->{'exists'};
    if (! futil::mkdir($_[0]->{dir})) {
        # Recursively make parent.
        return undef;
    }
    my($path) = $_[0]->path;
    if (!-d $path && !mkdir($path, 0777)) {
        warn qq($0: can't create directory $path ($!).\n); #'
        return undef;
    }
    $_[0]->{'exists'} = 1;
}

# Signature package.
package sig::hash;

use vars qw( $called );

sub init {
    my($dir) = @_;
    my($consign) = $dir->prefix . ".consign";
    my($dhash) = $dir->{consign} = {};
    if (-f $consign) {
        open(CONSIGN, $consign) || die("$0: can't open $consign ($!)\n");
        while (<CONSIGN>) {
            chop;
            my ($file, $sig) = split(/:/, $_);
            $dhash->{$file} = $sig;
        }
        close(CONSIGN);
    }
    $dhash
}

# Read the hash entry for a particular file.
sub in {
    my($dir) = $_[0]->{dir};
    ($dir->{consign} || init($dir))->{$_[0]->{entry}}
}

# Write the hash entry for a particular file.
sub out {
    my($file, $sig) = @_;
    my($dir) = $file->{dir};
    ($dir->{consign} || init($dir))->{$file->{entry}} = $sig;
    $sig::hash::dirty{$dir} = $dir;
}

# Flush hash entries.  Called at end or via ^C interrupt.
sub END {
    return if $called++;    # May be called twice.
    close(CONSIGN);         # in case this came in via ^C.
    my $dir;
    for $dir (values %sig::hash::dirty) {
        my($consign) = $dir->prefix . ".consign";
        my($constemp) = $consign . ".$$";
        if (! open(CONSIGN, ">$constemp")) {
            die("$0: can't create $constemp ($!)\n");
        }
        my($entry, $sig);
        while (($entry, $sig) = each %{$dir->{consign}}) {
            if (! print CONSIGN "$entry:$sig\n") {
                die("$0: error writing to $constemp ($!)\n");
            }
        }
        close(CONSIGN);
        if (! rename($constemp, $consign)) {
            if (futil::copy($constemp, $consign)) {
                unlink($constemp);
            } else {
                die("$0: couldn't rename or copy $constemp to $consign "
                    . "($!)\n");
            }
        }
    }
}

# Derived file caching.
package cache;

# Find a file in the cache.  Return non-null if the file is in the cache.
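#
# Cache layout sketch (illustrative; the signature value is
# hypothetical): a file whose path-salted signature is 'd41d8cd9...'
# is stored as
#
#     $param::cache/d/d41d8cd9...
#
# i.e. cached files are bucketed into one-character subdirectories
# taken from the first character of the signature, as computed in
# in() and out() below with substr($sig, 0, 1).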
sub in {
    return undef unless $param::cache;
    my($file, $sig) = @_;
    # Add the path to the signature, to make it unique.
    $sig = 'sig'->collect($sig, $file->path) unless $param::mixtargets;
    my($dir) = substr($sig, 0, 1);
    my($cp) = File::Spec->catfile($param::cache, $dir, $sig);
    return -f $cp && futil::install($cp, $file);
}

# Try to flush a file to the cache, if not already there.
# If it doesn't make it out, due to an error, then that doesn't
# really matter.
sub out {
    return unless $param::cache;
    my($file, $sig) = @_;
    # Add the path to the signature, to make it unique.
    $sig = 'sig'->collect($sig, $file->path) unless $param::mixtargets;
    my($dir) = substr($sig, 0, 1);
    my($sp) = $file->path;
    my($cp) = File::Spec->catfile($param::cache, $dir, $sig);
    my($cdir) = File::Spec->catfile($param::cache, $dir);
    if (! -d $cdir) {
        mkdir($cdir, 0777)
            || die("$0: can't create cache directory $cdir ($!).\n");
    } elsif (-f $cp) {
        # Already cached: try to use that instead, to save space.
        # This can happen if the -cs option is used on a previously
        # uncached build, or if two builds occur simultaneously.
        my($lp) = ".$sig";
        unlink($lp);
        return if ! eval { link($cp, $lp) };
        rename($lp, $sp);
        # Unix98 says, "If the old argument and the new argument both
        # [refer] to the same existing file, the rename() function
        # returns successfully and performs no other action."  So, if
        # $lp and $sp are links (i.e., $cp and $sp are links), $lp is
        # left, and we must unlink it ourselves.  If the rename failed
        # for any reason, it is also good form to unlink the temporary
        # $lp.  Otherwise $lp no longer exists and, barring some race,
        # the unlink fails silently.
        unlink($lp);
        return;
    }
    return if eval { link($sp, $cp) };
    return if ! -f $sp;    # if nothing to cache.
    if (futil::copy($sp, "$cp.new")) {
        rename("$cp.new", $cp);
    }
}

# Generic signature handling
package sig;

use vars qw( @ISA );

sub select {
    my($package, $subclass) = @_;
    @ISA = ($package . "::" . $subclass);
}

# MD5-based signature package.
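#
# The BEGIN block below prefers Digest::MD5 and falls back to the
# deprecated MD5 module.  The same probe-and-fall-back idiom, shown
# standalone (illustrative only):
#
#     my $module;
#     for (qw(Digest::MD5 MD5)) {
#         eval "use $_";
#         if (! $@) { $module = $_; last; }
#     }
#     die "no MD5 module found" unless $module;
#     my $md5 = $module->new;
#     # e.g. the hex digest of the empty string is
#     # d41d8cd98f00b204e9800998ecf8427e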
package sig::md5;

use vars qw( $md5 );

BEGIN {
    my $module;
    my @md5_modules = qw(Digest::MD5 MD5);
    for (@md5_modules) {
        eval "use $_";
        if (! $@) {
            $module = $_;
            last;
        }
    }
    die "Cannot find any MD5 module from: @md5_modules" if $@;
    $md5 = new $module;
}

# Invalidate a cache entry.
sub invalidate {
    delete $_[1]->{sig}
}

# Determine the current signature of an already-existing or
# non-existent file.
sub signature {
    if (defined $_[1]->{sig}) {
        return $_[1]->{sig};
    }
    my ($self, $file) = @_;
    my($path) = $file->path;
    my($time) = (stat($path))[9];
    if ($time) {
        my($sigtime) = sig::hash::in($file);
        if ($file->is_on_rpath) {
            if ($sigtime) {
                my ($htime, $hsig) = split(' ',$sigtime);
                if (! $hsig) {
                    # There was no separate $htime recorded in
                    # the .consign file, which implies that this
                    # is a source file in the repository.
                    # (Source file .consign entries don't record
                    # $htime.) Just return the signature that
                    # someone else conveniently calculated for us.
                    return $htime; # actually the signature
                } else {
                    if (! $param::rep_sig_times_ok || $htime == $time) {
                        return $file->{sig} = $hsig;
                    }
                }
            }
            return $file->{sig} = $file->path . $time;
        }
        if ($sigtime) {
            my ($htime, $hsig) = split(' ',$sigtime);
            if ($htime eq $time) {
                return $file->{sig} = $hsig;
            }
        }
        if (! File::Spec->file_name_is_absolute($path)) {
            # A file in the local build directory. Assume we can write
            # a signature file for it, and compute the actual source
            # signature. We compute the file based on the build path,
            # not source path, only because there might be parallel
            # builds going on... In principle, we could use the source
            # path and only compute this once.
            my($sig) = srcsig($path);
            sig::hash::out($file, $sig);
            return $file->{sig} = $sig;
        } else {
            return $file->{sig} = $file->{entry} . $time;
        }
    }
    $file->{sig} = '';
}

# Is the provided signature equal to the signature of the current
# instantiation of the target (and does the target exist)?
sub current {
    my($self, $file, $sig) = @_;
    # Uncomment this to debug checks for signature currency. # 1998-10-29
    # my $fsig = $self->signature($file);
    # print STDOUT "\$self->signature(${\$file->path}) '$fsig' eq \$sig '$sig'\n";
    # return $fsig eq $sig;
    $self->signature($file) eq $sig;
}

# Store the signature for a file.
sub set {
    my($self, $file, $sig) = @_;
    my($time) = (stat($file->path))[9];
    sig::hash::out($file, "$time $sig");
    $file->{sig} = $sig
}

# Return an aggregate signature
sub collect {
    my($self, @sigs) = @_;
    # The following sequence is faster than calling the hex interface.
    $md5->reset();
    $md5->add(join('', $param::salt, @sigs));
    # Uncomment this to debug dependency signatures. # 1998-05-08
    # my $buf = join(', ', $param::salt, @sigs);
    # print STDOUT "sigbuf=|$buf|\n";
    # Uncomment this to print the result of dependency signature calculation. # 1998-10-13
    # $buf = unpack("H*", $md5->digest());
    # print STDOUT "\t=>|$buf|\n";
    # return $buf;
    unpack("H*", $md5->digest());
}

# Directly compute a file signature as the MD5 checksum of the
# bytes in the file.
sub srcsig {
    my($path) = @_;
    $md5->reset();
    open(FILE, $path) || return '';
    binmode(FILE);
    $md5->addfile(\*FILE);
    close(FILE);
    # Uncomment this to print the result of file signature calculation. # 1998-10-13
    # my $buf = unpack("H*", $md5->digest());
    # print STDOUT "$path=|$buf|\n";
    # return $buf;
    unpack("H*", $md5->digest());
}

__END__

=head1 NAME

Cons - A Software Construction System

=head1 DESCRIPTION

A guide and reference for version 2.2.0

Copyright (c) 1996-2000 Free Software Foundation, Inc.

This program is free software; you can redistribute it and/or modify it under the terms of the GNU General Public License as published by the Free Software Foundation; either version 2 of the License, or (at your option) any later version.
This program is distributed in the hope that it will be useful, but WITHOUT ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU General Public License for more details.

You should have received a copy of the GNU General Public License along with this program; see the file COPYING. If not, write to the Free Software Foundation, Inc., 59 Temple Place - Suite 330, Boston, MA 02111-1307, USA.

=head1 Introduction

B<Cons> is a system for constructing, primarily, software, but is quite different from previous software construction systems. Cons was designed from the ground up to deal easily with the construction of software spread over multiple source directories. Cons makes it easy to create build scripts that are simple, understandable and maintainable. Cons ensures that complex software is easily and accurately reproducible.

Cons uses a number of techniques to accomplish all of this. Construction scripts are just Perl scripts, making them both easy to comprehend and very flexible. Global scoping of variables is replaced with an import/export mechanism for sharing information between scripts, significantly improving the readability and maintainability of each script. B<Construction environments> are introduced: these are Perl objects that capture the information required for controlling the build process. Multiple environments are used when different semantics are required for generating products in the build tree. Cons implements automatic dependency analysis and uses this to globally sequence the entire build. Variant builds are easily produced from a single source tree. Intelligent build subsetting is possible, when working on localized changes. Overrides can be set up to easily override build instructions without modifying any scripts. MD5 cryptographic B<signatures> are associated with derived files, and are used to accurately determine whether a given file needs to be rebuilt.
While offering all of the above, and more, Cons remains simple and easy to use. This will, hopefully, become clear as you read the remainder of this document.

=head1 Why Cons? Why not Make?

Cons is a B<make> replacement. In the following paragraphs, we look at a few of the undesirable characteristics of make--and typical build environments based on make--that motivated the development of Cons.

=head2 Build complexity

Traditional make-based systems of any size tend to become quite complex. The original make utility and its derivatives have contributed to this tendency in a number of ways. Make is not good at dealing with systems that are spread over multiple directories. Various work-arounds are used to overcome this difficulty; the usual choice is for make to invoke itself recursively for each sub-directory of a build. This leads to complicated code, in which it is often unclear how a variable is set, or what effect the setting of a variable will have on the build as a whole. The make scripting language has gradually been extended to provide more possibilities, but these have largely served to clutter an already overextended language. Often, builds are done in multiple passes in order to provide appropriate products from one directory to another directory. This represents a further increase in build complexity.

=head2 Build reproducibility

The bane of all makes has always been the correct handling of dependencies. Most often, an attempt is made to do a reasonable job of dependencies within a single directory, but no serious attempt is made to do the job between directories. Even when dependencies are working correctly, make's reliance on a simple time stamp comparison to determine whether a file is out of date with respect to its dependents is not, in general, adequate for determining when a file should be rederived.
If an external library, for example, is rebuilt and then ``snapped'' into place, the timestamps on its newly created files may well be earlier than the last local build, since it was built before it became visible.

=head2 Variant builds

Make provides only limited facilities for handling variant builds. With the proliferation of hardware platforms and the need for debuggable vs. optimized code, the ability to easily create these variants is essential. More importantly, if variants are created, it is important to either be able to separate the variants or to be able to reproduce the original or variant at will. With make it is very difficult to separate the builds into multiple build directories, separate from the source. And if this technique isn't used, it's also virtually impossible to guarantee at any given time which variant is present in the tree, without resorting to a complete rebuild.

=head2 Repositories

Make provides only limited support for building software from code that exists in a central repository directory structure. The VPATH feature of GNU make (and some other make implementations) is intended to provide this, but doesn't work as expected: it changes the path of the target file to the VPATH name too early in its analysis, and therefore searches for all dependencies in the VPATH directory. To ensure correct development builds, it is important to be able to create a file in a local build directory and have any files in a code repository (a VPATH directory, in make terms) that depend on the local file get rebuilt properly. This isn't possible with VPATH, without coding a lot of complex repository knowledge directly into the makefiles.

=head1 Keeping it simple

A few of the difficulties with make have been cited above. In this and subsequent sections, we shall introduce Cons and show how these issues are addressed.

=head2 Perl scripts

Cons is Perl-based. That is, Cons scripts--F<Conscript> and F<Construct> files, the equivalent to F<Makefile> or F<makefile>--are all written in Perl.
This provides an immediate benefit: the language for writing scripts is a familiar one. Even if you don't happen to be a Perl programmer, it helps to know that Perl is basically just a simple declarative language, with a well-defined flow of control, and familiar semantics. It has variables that behave basically the way you would expect them to, subroutines, flow of control, and so on. There is no special syntax introduced for Cons. The use of Perl as a scripting language simplifies the task of expressing the appropriate solution to the often complex requirements of a build.

=head2 Hello, World!

To ground the following discussion, here's how you could build the B<Hello, World!> C application with Cons:

  $env = new cons();
  Program $env 'hello', 'hello.c';

If you install this script in a directory, naming the script F<Construct>, and create the F<hello.c> source file in the same directory, then you can type C<cons hello> to build the application:

  % cons hello
  cc -c hello.c -o hello.o
  cc -o hello hello.o

=head2 Construction environments

A key simplification of Cons is the idea of a B<construction environment>. A construction environment is an B<object> characterized by a set of key/value pairs and a set of B<methods>. In order to tell Cons how to build something, you invoke the appropriate method via an appropriate construction environment. Consider the following example:

  $env = new cons(
      CC   => 'gcc',
      LIBS => 'libworld.a'
  );

  Program $env 'hello', 'hello.c';

In this case, rather than using the default construction environment, as is, we have overridden the value of C<CC> so that the GNU C Compiler equivalent is used, instead. Since this version of B<Hello, World!> requires a library, F<libworld.a>, we have specified that any program linked in this environment should be linked with that library.
If the library exists already, well and good, but if not, then we'll also have to include the statement:

  Library $env 'libworld', 'world.c';

Now if you type C<cons hello>, the library will be built before the program is linked, and, of course, C<gcc> will be used to compile both modules:

  % cons hello
  gcc -c hello.c -o hello.o
  gcc -c world.c -o world.o
  ar r libworld.a world.o
  ar: creating libworld.a
  ranlib libworld.a
  gcc -o hello hello.o libworld.a

=head2 Automatic and complete dependency analysis

With Cons, dependencies are handled automatically. Continuing the previous example, note that when we modify F<world.c>, F<world.o> is recompiled, F<libworld.a> recreated, and F<hello> relinked:

  % vi world.c
    [EDIT]
  % cons hello
  gcc -c world.c -o world.o
  ar r libworld.a world.o
  ar: creating libworld.a
  ranlib libworld.a
  gcc -o hello hello.o libworld.a

This is a relatively simple example: Cons ``knows'' F<world.o> depends upon F<world.c>, because the dependency is explicitly set up by the C<Library> method. It also knows that F<libworld.a> depends upon F<world.o> and that F<hello> depends upon F<libworld.a>, all for similar reasons.

Now it turns out that F<hello.c> also includes the interface definition file, F<world.h>:

  % emacs world.h
    [EDIT]
  % cons hello
  gcc -c hello.c -o hello.o
  gcc -o hello hello.o libworld.a

How does Cons know that F<hello.c> includes F<world.h>, and that F<hello.o> must therefore be recompiled? For now, suffice it to say that when considering whether or not F<hello.o> is up-to-date, Cons invokes a scanner for its dependency, F<hello.c>. This scanner enumerates the files included by F<hello.c> to come up with a list of further dependencies, beyond those made explicit by the Cons script. This process is recursive: any files included by included files will also be scanned.

Isn't this expensive? The answer is--it depends. If you do a full build of a large system, the scanning time is insignificant. If you do a rebuild of a large system, then Cons will spend a fair amount of time thinking about it before it decides that nothing has to be done (although not necessarily more time than make!).
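The recursive scanning described above can be sketched in a few lines of Perl. This is a simplified illustration, not Cons's actual scanner: it handles only local C<#include "..."> lines, and the search path and file names are hypothetical.

```perl
# A simplified sketch (not Cons's actual scanner) of recursive
# include scanning.  Only #include "..." lines are handled.
use strict;

sub scan_includes {
    my ($file, $path, $seen) = @_;
    return if $seen->{$file}++;           # visit each file only once
    open(my $fh, '<', $file) or return;
    while (my $line = <$fh>) {
        next unless $line =~ /^\s*#\s*include\s*"([^"]+)"/;
        for my $dir (@$path) {            # resolve against the search path
            my $dep = "$dir/$1";
            if (-f $dep) {
                scan_includes($dep, $path, $seen);
                last;
            }
        }
    }
    close($fh);
}

my %seen;
scan_includes('hello.c', ['.'], \%seen);
# %seen now holds hello.c plus every file reachable through includes,
# e.g. world.h, and anything world.h itself includes.
print join("\n", sort keys %seen), "\n";
```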
The good news is that Cons makes it very easy to intelligently subset your build, when you are working on localized changes.

=head2 Automatic global build sequencing

Because Cons does full and accurate dependency analysis, and does this globally, for the entire build, Cons is able to use this information to take full control of the B<sequencing> of the build. This sequencing is evident in the above examples, and is equivalent to what you would expect for make, given a full set of dependencies. With Cons, this extends trivially to larger, multi-directory builds. As a result, all of the complexity involved in making sure that a build is organized correctly--including multi-pass hierarchical builds--is eliminated. We'll discuss this further in the next sections.

=head1 Building large trees--still just as simple

=head2 A hierarchy of build scripts

A larger build, in Cons, is organized by creating a hierarchy of B<build scripts>. At the top of the tree is a script called F<Construct>. The rest of the scripts, by convention, are each called F<Conscript>. These scripts are connected together, very simply, by the C<Build>, C<Export>, and C<Import> commands.

=head2 The Build command

The C<Build> command takes a list of F<Conscript> file names, and arranges for them to be included in the build. For example:

  Build qw(
      drivers/display/Conscript
      drivers/mouse/Conscript
      parser/Conscript
      utilities/Conscript
  );

This is a simple two-level hierarchy of build scripts: all the subsidiary F<Conscript> files are mentioned in the top-level F<Construct> file. Notice that not all directories in the tree necessarily have build scripts associated with them.

This could also be written as a multi-level script. For example, the F<Construct> file might contain this command:

  Build qw(
      parser/Conscript
      drivers/Conscript
      utilities/Conscript
  );

and the F<Conscript> file in the F<drivers> directory might contain this:

  Build qw(
      display/Conscript
      mouse/Conscript
  );

Experience has shown that the former model is a little easier to understand, since the whole construction tree is laid out in front of you, at the top-level.
Hybrid schemes are also possible. A separately maintained component that needs to be incorporated into a build tree, for example, might hook into the build tree in one place, but define its own construction hierarchy.

By default, Cons does not change its working directory to the directory containing a subsidiary F<Conscript> file it is including. This behavior can be enabled for a build by specifying, in the top-level F<Construct> file:

  Conscript_chdir 1;

When enabled, Cons will change to the subsidiary F<Conscript> file's containing directory while reading in that file, and then change back to the top-level directory once the file has been processed. It is expected that this behavior will become the default in some future version of Cons. To prepare for this transition, builds that expect Cons to remain at the top of the build while it reads in a subsidiary F<Conscript> file should explicitly disable this feature as follows:

  Conscript_chdir 0;

=head2 Relative, top-relative, and absolute file names

You may have noticed that the file names specified to the Build command are relative to the location of the script it is invoked from. This is generally true for other filename arguments to other commands, too, although we might as well mention here that if you begin a file name with a hash mark, ``#'', then that file is interpreted relative to the top-level directory (where the F<Construct> file resides). And, not surprisingly, if you begin it with ``/'', then it is considered to be an absolute pathname. This is true even on systems which use a back slash rather than a forward slash to name absolute paths.

=head2 Using modules in build scripts

You may pull modules into each F<Conscript> file using the normal Perl C<use> or C<require> statements:

  use English;
  require My::Module;

Each C<use> or C<require> only affects the one F<Conscript> file in which it appears. To use a module in multiple F<Conscript> files, you must put a C<use> or C<require> statement in each one that needs the module.

=head2 Scope of variables

The top-level F<Construct> file and all F<Conscript> files begin life in a common, separate Perl package.
B<Cons> controls the symbol table for the package so that the symbol table for each script is empty, except for the F<Construct> file, which gets some of the command line arguments. All of the variables that are set or used, therefore, are set by the script itself--not by some external script.

Variables can be explicitly B<imported> by a script from its parent script. To import a variable, it must have been B<exported> by the parent and initialized (otherwise an error will occur).

=head2 The Export command

The C<Export> command is used as in the following example:

  $env = new cons();
  $INCLUDE = "#export/include";
  $LIB = "#export/lib";
  Export qw( env INCLUDE LIB );
  Build qw( util/Conscript );

The values of the simple variables mentioned in the C<Export> list will be squirreled away by any subsequent C<Build> commands. The C<Export> command will only export Perl B<scalar> variables, that is, variables whose name begins with C<$>. Other variables, objects, etc. can be exported by reference--but all scripts will refer to the same object, and this object should be considered to be read-only by the subsidiary scripts and by the original exporting script. It's acceptable, however, to assign a new value to the exported scalar variable--that won't change the underlying variable referenced. This sequence, for example, is OK:

  $env = new cons();
  Export qw( env INCLUDE LIB );
  Build qw( util/Conscript );
  $env = new cons(CFLAGS => '-O');
  Build qw( other/Conscript );

It doesn't matter whether the variable is set before or after the C<Export> command. The important thing is the value of the variable at the time the C<Build> command is executed. This is what gets squirreled away. Any subsequent C<Export> commands, by the way, invalidate the first: you must mention all the variables you wish to export on each C<Export> command.

=head2 The Import command

Variables exported by the C<Export> command can be imported into subsidiary scripts by the C<Import> command. The subsidiary script always imports variables directly from the superior script.
Consider this example:

  Import qw( env INCLUDE );

This is only legal if the parent script exported both C<$env> and C<$INCLUDE>. It also must have given each of these variables values. It is OK for the subsidiary script to only import a subset of the exported variables (in this example, C<$LIB>, which was exported by the previous example, is not imported).

All the imported variables are automatically re-exported, so the sequence:

  Import qw ( env INCLUDE );
  Build qw ( beneath-me/Conscript );

will supply both C<$env> and C<$INCLUDE> to the subsidiary file. If only C<$env> is to be exported, then the following will suffice:

  Import qw ( env INCLUDE );
  Export qw ( env );
  Build qw ( beneath-me/Conscript );

Needless to say, the variables may be modified locally before invoking C<Build> on the subsidiary script.

=head2 Build script evaluation order

The only constraint on the ordering of build scripts is that superior scripts are evaluated before their inferior scripts. The top-level F<Construct> file, for instance, is evaluated first, followed by any inferior scripts. This is all you really need to know about the evaluation order, since order is generally irrelevant. Consider the following C<Build> command:

  Build qw(
      drivers/display/Conscript
      drivers/mouse/Conscript
      parser/Conscript
      utilities/Conscript
  );

We've chosen to put the script names in alphabetical order, simply because that's the most convenient for maintenance purposes. Changing the order will make no difference to the build.

=head1 A Model for sharing files

=head2 Some simple conventions

In any complex software system, a method for sharing build products needs to be established. We propose a simple set of conventions which are trivial to implement with Cons, but very effective. The basic rule is to require that all build products which need to be shared between directories are shared via an intermediate directory.
We have typically called this F<export>, and, in a C environment, provided conventional sub-directories of this directory, such as F<include>, F<lib>, F<bin>, etc.

These directories are defined by the top-level F<Construct> file. A simple F<Construct> file for a B<Hello, World!> application, organized using multiple directories, might look like this:

  # Construct file for Hello, World!

  # Where to put all our shared products.
  $EXPORT = '#export';

  Export qw( CONS INCLUDE LIB BIN );

  # Standard directories for sharing products.
  $INCLUDE = "$EXPORT/include";
  $LIB = "$EXPORT/lib";
  $BIN = "$EXPORT/bin";

  # A standard construction environment.
  $CONS = new cons (
      CPPPATH => $INCLUDE,    # Include path for C Compilations
      LIBPATH => $LIB,        # Library path for linking programs
      LIBS => '-lworld',      # List of standard libraries
  );

  Build qw(
      hello/Conscript
      world/Conscript
  );

The F<world> directory's F<Conscript> file looks like this:

  # Conscript file for directory world
  Import qw( CONS INCLUDE LIB );

  # Install the products of this directory
  Install $CONS $LIB, 'libworld.a';
  Install $CONS $INCLUDE, 'world.h';

  # Internal products
  Library $CONS 'libworld.a', 'world.c';

and the F<hello> directory's F<Conscript> file looks like this:

  # Conscript file for directory hello
  Import qw( CONS BIN );

  # Exported products
  Install $CONS $BIN, 'hello';

  # Internal products
  Program $CONS 'hello', 'hello.c';

To construct a B<Hello, World!> program with this directory structure, go to the top-level directory, and invoke C<cons> with the appropriate arguments. In the following example, we tell Cons to build the directory F<export>. To build a directory, Cons recursively builds all known products within that directory (only if they need rebuilding, of course). If any of those products depend upon other products in other directories, then those will be built, too.
  % cons export
  Install world/world.h as export/include/world.h
  cc -Iexport/include -c hello/hello.c -o hello/hello.o
  cc -Iexport/include -c world/world.c -o world/world.o
  ar r world/libworld.a world/world.o
  ar: creating world/libworld.a
  ranlib world/libworld.a
  Install world/libworld.a as export/lib/libworld.a
  cc -o hello/hello hello/hello.o -Lexport/lib -lworld
  Install hello/hello as export/bin/hello

=head2 Clean, understandable, location-independent scripts

You'll note that the two F<Conscript> files are very clean and to-the-point. They simply specify products of the directory and how to build those products. The build instructions are minimal: they specify which construction environment to use, the name of the product, and the name of the inputs. Note also that the scripts are location-independent: if you wish to reorganize your source tree, you are free to do so: you only have to change the F<Construct> file (in this example), to specify the new locations of the F<Conscript> files. The use of an export tree makes this goal easy.

Note, too, how Cons takes care of little details for you. All the F<export> directories, for example, were made automatically. And the installed files were really hard-linked into the respective export directories, to save space and time. This attention to detail saves considerable work, and makes it even easier to produce simple, maintainable scripts.

=head1 Separating source and build trees

It's often desirable to keep any derived files from the build completely separate from the source files. This makes it much easier to keep track of just what is a source file, and also makes it simpler to handle B<variant> builds, especially if you want the variant builds to co-exist.

=head2 Separating build and source directories using the Link command

Cons provides a simple mechanism that handles all of these requirements. The C<Link> command is invoked as in this example:

  Link 'build' => 'src';

The specified directories are ``linked'' to the specified source directory.
Let's suppose that you set up a source directory, F<src>, with the sub-directories F<world> and F<hello> below it, as in the previous example. You could then substitute for the original build lines the following:

  Build qw(
      build/world/Conscript
      build/hello/Conscript
  );

Notice that you treat the F<Conscript> file as if it existed in the build directory. Now if you type the same command as before, you will get the following results:

  % cons export
  Install build/world/world.h as export/include/world.h
  cc -Iexport/include -c build/hello/hello.c -o build/hello/hello.o
  cc -Iexport/include -c build/world/world.c -o build/world/world.o
  ar r build/world/libworld.a build/world/world.o
  ar: creating build/world/libworld.a
  ranlib build/world/libworld.a
  Install build/world/libworld.a as export/lib/libworld.a
  cc -o build/hello/hello build/hello/hello.o -Lexport/lib -lworld
  Install build/hello/hello as export/bin/hello

Again, Cons has taken care of the details for you. In particular, you will notice that all the builds are done using source files and object files from the build directory. For example, F<build/world/world.o> is compiled from F<build/world/world.c>, and F<export/include/world.h> is installed from F<build/world/world.h>. This is accomplished on most systems by the simple expedient of ``hard'' linking the required files from each source directory into the appropriate build directory.

The links are maintained correctly by Cons, no matter what you do to the source directory. If you modify a source file, your editor may do this ``in place'' or it may rename it first and create a new file. In the latter case, any hard link will be lost. Cons will detect this condition the next time the source file is needed, and will relink it appropriately.

You'll also notice, by the way, that B<no> changes were required to the underlying F<Conscript> files. And we can go further, as we shall see in the next section.

=head1 Variant builds

=head2 Hello, World! for baNaNa and peAcH OS's

Variant builds require just another simple extension.
Let's take as an example a requirement to allow builds for both the baNaNa and peAcH operating systems. In this case, we are using a distributed file system, such as NFS, to access the particular system, and only one or the other of the systems has to be compiled for any given invocation of C<cons>. Here's one way we could set up the F<Construct> file for our B<Hello, World!> application:

  # Construct file for Hello, World!

  die qq(OS must be specified) unless $OS = $ARG{OS};
  die qq(OS must be "peach" or "banana")
      if $OS ne "peach" && $OS ne "banana";

  # Where to put all our shared products.
  $EXPORT = "#export/$OS";

  Export qw( CONS INCLUDE LIB BIN );

  # Standard directories for sharing products.
  $INCLUDE = "$EXPORT/include";
  $LIB = "$EXPORT/lib";
  $BIN = "$EXPORT/bin";

  # A standard construction environment.
  $CONS = new cons (
      CPPPATH => $INCLUDE,    # Include path for C Compilations
      LIBPATH => $LIB,        # Library path for linking programs
      LIBS => '-lworld',      # List of standard libraries
  );

  # $BUILD is where we will derive everything.
  $BUILD = "#build/$OS";

  # Tell cons where the source files for $BUILD are.
  Link $BUILD => 'src';

  Build (
      "$BUILD/hello/Conscript",
      "$BUILD/world/Conscript",
  );

Now if we login to a peAcH system, we can build our B<Hello, World!> application for that platform:

  % cons export OS=peach
  Install build/peach/world/world.h as export/peach/include/world.h
  cc -Iexport/peach/include -c build/peach/hello/hello.c -o build/peach/hello/hello.o
  cc -Iexport/peach/include -c build/peach/world/world.c -o build/peach/world/world.o
  ar r build/peach/world/libworld.a build/peach/world/world.o
  ar: creating build/peach/world/libworld.a
  ranlib build/peach/world/libworld.a
  Install build/peach/world/libworld.a as export/peach/lib/libworld.a
  cc -o build/peach/hello/hello build/peach/hello/hello.o -Lexport/peach/lib -lworld
  Install build/peach/hello/hello as export/peach/bin/hello

=head2 Variations on a theme

Other variations of this model are possible.
For example, you might decide that you want to separate out your include files into platform dependent and platform independent files. In this case, you'd have to define an alternative to C<$INCLUDE> for platform-dependent files. Most F<Conscript> files, generating purely platform-independent include files, would not have to change.

You might also want to be able to compile your whole system with debugging or profiling, for example, enabled. You could do this with appropriate command line options, such as C<DEBUG=on>. This would then be translated into the appropriate platform-specific requirements to enable debugging (this might include turning off optimization, for example). You could optionally vary the name space for these different types of systems, but, as we'll see in the next section, it's not B<necessary> to do this, since Cons is pretty smart about rebuilding things when you change options.

=head1 Signatures

=head2 MD5 cryptographic signatures

Whenever Cons creates a derived file, it stores a B<signature> for that file. The signature is stored in a separate file, one per directory. After the previous example was compiled, the F<.consign> file in the F<build/peach/world> directory looked like this:

  world.o:834179303 23844c0b102ecdc0b4548d1cd1cbd8c6
  libworld.a:834179304 9bf6587fa06ec49d864811a105222c00

The first number is a timestamp--for a UNIX system, this is typically the number of seconds since January 1st, 1970. The second value is an MD5 checksum. The B<MD5> algorithm, given an input string, computes a strong cryptographic signature for that string. The MD5 checksum stored in the F<.consign> file is, in effect, a digest of all the dependency information for the specified file. So, for example, for the F<world.o> file, this includes at least the F<world.c> file, and also any header files that Cons knows about that are included, directly or indirectly, by F<world.c>. Not only that, but the actual command line that was used to generate F<world.o> is also fed into the computation of the signature.
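The way a signature folds together file contents and the build command line can be sketched with the Digest::MD5 module (the same module the implementation above prefers). This is an illustration of the idea only, not the exact signature Cons computes: the real computation also mixes in a salt and the signatures of every dependency.

```perl
# Illustrative sketch of content and dependency signatures using
# Digest::MD5.  Not Cons's exact computation (which also folds in a
# salt and all dependency signatures).
use strict;
use Digest::MD5;

# Content signature of a source file, in the spirit of srcsig() above.
sub content_sig {
    my ($path) = @_;
    my $md5 = Digest::MD5->new;
    open(my $fh, '<', $path) or return '';
    binmode($fh);
    $md5->addfile($fh);
    close($fh);
    return $md5->hexdigest;
}

# Aggregate signature for a derived file: digest the dependency
# signatures together with the command line that builds the file.
sub derived_sig {
    my ($cmd, @dep_sigs) = @_;
    my $md5 = Digest::MD5->new;
    $md5->add(join('', $cmd, @dep_sigs));
    return $md5->hexdigest;
}

# Changing either world.c's contents or the compile line changes the
# derived signature, so world.o would be rebuilt.
my $sig = derived_sig('cc -c world.c -o world.o', content_sig('world.c'));
print "$sig\n";
```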
Similarly, F<libworld.a> gets a signature which ``includes'' all the signatures of its constituents (and hence, transitively, the signatures of B<their> constituents), as well as the command line that created the file.

The signature of a non-derived file is computed, by default, by taking the current modification time of the file and the file's entry name (unless there happens to be a current F<.consign> entry for that file, in which case that signature is used).

Notice that there is no need for a derived file to depend upon any particular F<Construct> or F<Conscript> file--if changes to these files affect the file in question, then this will be automatically reflected in its signature, since relevant parts of the command line are included in the signature. Unrelated changes will have no effect.

When Cons considers whether to derive a particular file, then, it first computes the expected signature of the file. It then compares the file's last modification time with the time recorded in the F<.consign> entry, if one exists. If these times match, then the signature stored in the F<.consign> file is considered to be accurate. If the file's previous signature does not match the new, expected signature, then the file must be rederived.

Notice that a file will be rederived whenever anything about a dependent file changes. In particular, notice that B<any> change to the modification time of a dependent (forward or backwards in time) will force recompilation of the derived file.

The use of these signatures is an extremely simple, efficient, and effective method of improving--dramatically--the reproducibility of a system. We'll demonstrate this with a simple example:

  # Simple "Hello, World!" Construct file
  $CFLAGS = '-g' if $ARG{DEBUG} eq 'on';
  $CONS = new cons(CFLAGS => $CFLAGS);
  Program $CONS 'hello', 'hello.c';

Notice how Cons recompiles at the appropriate times:

  % cons hello
  cc -c hello.c -o hello.o
  cc -o hello hello.o
  % cons hello
  cons: "hello" is up-to-date.
  % cons DEBUG=on hello
  cc -g -c hello.c -o hello.o
  cc -o hello hello.o
  % cons DEBUG=on hello
  cons: "hello" is up-to-date.
  % cons hello
  cc -c hello.c -o hello.o
  cc -o hello hello.o

=head1 Code Repositories

Many software development organizations will have one or more central repository directory trees containing the current source code for one or more projects, as well as the derived object files, libraries, and executables. In order to reduce unnecessary recompilation, it is useful to use files from the repository to build development software--assuming, of course, that no newer dependency file exists in the local build tree.

=head2 Repository

Cons provides a mechanism to specify a list of code repositories that will be searched, in order, for source files and derived files not found in the local build directory tree.

The following lines in a F<Construct> file will instruct Cons to look first under the F</usr/experiment/repository> directory and then under the F</usr/product/repository> directory:

  Repository qw (
      /usr/experiment/repository
      /usr/product/repository
  );

The repository directories specified may contain source files, derived files (objects, libraries, and executables), or both. If there is no local file (source or derived) under the directory in which Cons is executed, then the first copy of a same-named file found under a repository directory will be used to build any local derived files.

Cons maintains one global list of repository directories. Cons will eliminate the current directory, and any non-existent directories, from the list.

=head2 Finding the Construct file in a Repository

Cons will also search for F<Construct> and F<Conscript> files in the repository tree or trees. This leads to a chicken-and-egg situation, though: how do you look in a repository tree for a F<Construct> file if the F<Construct> file tells you where the repository is? To get around this, repositories may be specified via C<-R> options on the command line:

  % cons -R /usr/experiment/repository -R /usr/product/repository .
Any repository directories specified in the F<Construct> or F<Conscript> files will be appended to the repository directories specified by command-line C<-R> options.

=head2 Repository source files

If the source code (including the F<Conscript> file) for the library version of the I<Hello, World!> C application is in a repository (with no derived files), Cons will use the repository source files to create the local object files and executable file:

  % cons -R /usr/src_only/repository hello
  gcc -c /usr/src_only/repository/hello.c -o hello.o
  gcc -c /usr/src_only/repository/world.c -o world.o
  ar r libworld.a world.o
  ar: creating libworld.a
  ranlib libworld.a
  gcc -o hello hello.o libworld.a

Creating a local source file will cause Cons to rebuild the appropriate derived file or files:

  % pico world.c
  [EDIT]
  % cons -R /usr/src_only/repository hello
  gcc -c world.c -o world.o
  ar r libworld.a world.o
  ar: creating libworld.a
  ranlib libworld.a
  gcc -o hello hello.o libworld.a

And removing the local source file will cause Cons to revert back to building the derived files from the repository source:

  % rm world.c
  % cons -R /usr/src_only/repository hello
  gcc -c /usr/src_only/repository/world.c -o world.o
  ar r libworld.a world.o
  ar: creating libworld.a
  ranlib libworld.a
  gcc -o hello hello.o libworld.a

=head2 Repository derived files

If a repository tree contains derived files (usually object files, libraries, or executables), Cons will perform its normal signature calculation to decide whether the repository file is up-to-date or a derived file must be built locally. This means that, in order to ensure correct signature calculation, a repository tree must also contain the F<.consign> files that were created by Cons when generating the derived files.
This would usually be accomplished by building the software in the repository (or, alternatively, in a build directory, and then copying the result to the repository):

  % cd /usr/all/repository
  % cons hello
  gcc -c hello.c -o hello.o
  gcc -c world.c -o world.o
  ar r libworld.a world.o
  ar: creating libworld.a
  ranlib libworld.a
  gcc -o hello hello.o libworld.a

(This is safe even if the F<Construct> file lists the F</usr/all/repository> directory in a C<Repository> command, because Cons will remove the current directory from the repository list.)

Now if we want to build a copy of the application with our own F<hello.c> file, we only need to create the one necessary source file, and use the C<-R> option to have Cons use other files from the repository:

  % mkdir $HOME/build1
  % cd $HOME/build1
  % ed hello.c
  [EDIT]
  % cons -R /usr/all/repository hello
  gcc -c hello.c -o hello.o
  gcc -o hello hello.o /usr/all/repository/libworld.a

Notice that Cons has not bothered to recreate a local F<libworld.a> library (or recompile the F<world.o> module), but instead uses the already-compiled version from the repository.

Because the MD5 signatures that Cons puts in the F<.consign> file contain timestamps for the derived files, the signature timestamps must match the file timestamps for a signature to be considered valid. Some software systems may alter the timestamps on repository files (by copying them, for example), in which case Cons will, by default, assume the repository signatures are invalid and rebuild files unnecessarily. This behavior may be altered by specifying:

  Repository_Sig_Times_OK 0;

This tells Cons to ignore timestamps when deciding whether a signature is valid. (Note that avoiding this sanity check means there must be proper control over the repository tree to ensure that the derived files cannot be modified without updating the F<.consign> signature.)
=head2 Local copies of files

If the repository tree contains the complete results of a build, and we try to build from the repository without any files in our local tree, something moderately surprising happens:

  % mkdir $HOME/build2
  % cd $HOME/build2
  % cons -R /usr/all/repository hello
  cons: "hello" is up-to-date.

Why does Cons say that the F<hello> program is up-to-date when there is no F<hello> program in the local build directory? Because the repository (not the local directory) contains the up-to-date F<hello> program, and Cons correctly determines that nothing needs to be done to rebuild this up-to-date copy of the file.

There are, however, many times in which it is appropriate to ensure that a local copy of a file always exists. A packaging or testing script, for example, may assume that certain generated files exist locally. Instead of making these subsidiary scripts aware of the repository directory, the C<Local> command may be added to a F<Construct> or F<Conscript> file to specify that a certain file or files must appear in the local build directory:

  Local qw(
      hello
  );

Then, if we re-run the same command, Cons will make a local copy of the program from the repository copy (telling you that it is doing so):

  % cons -R /usr/all/repository hello
  Local copy of hello from /usr/all/repository/hello
  cons: "hello" is up-to-date.

Notice that, because the act of making the local copy is not considered a "build" of the F<hello> file, Cons still reports that it is up-to-date.

Creating local copies is most useful for files that are being installed into an intermediate directory (for sharing with other directories) via the C<Install> command.
Accompanying the C<Install> command for a file with a companion C<Local> command is so common that Cons provides an C<Install_Local> command as a convenient way to do both:

  Install_Local $env, '#export', 'hello';

is exactly equivalent to:

  Install $env '#export', 'hello';
  Local '#export/hello';

Both the C<Install> and C<Install_Local> commands update the local F<.consign> file with the appropriate file signatures, so that future builds are performed correctly.

=head2 Repository dependency analysis

Due to its built-in scanning, Cons will search the specified repository trees for included F<.h> files. Unless the compiler also knows about the repository trees, though, it will be unable to find F<.h> files that only exist in a repository. If, for example, the F<hello.c> file includes the F<hello.h> file in its current directory:

  % cons -R /usr/all/repository hello
  gcc -c /usr/all/repository/hello.c -o hello.o
  /usr/all/repository/hello.c:1: hello.h: No such file or directory

Solving this problem forces some requirements onto the way construction environments are defined and onto the way the C C<#include> preprocessor directive is used to include files.

In order to inform the compiler about the repository trees, Cons will add appropriate C<-I> flags to the compilation commands. This means that the C<CPPPATH> variable in the construction environment must explicitly specify all subdirectories which are to be searched for included files, including the current directory. Consequently, we can fix the above example by changing the environment creation in the F<Construct> file as follows:

  $env = new cons(
      CC      => 'gcc',
      CPPPATH => '.',
      LIBS    => 'libworld.a',
  );

Due to the definition of the C<CPPPATH> variable, this yields, when we re-execute the command:

  % cons -R /usr/all/repository hello
  gcc -c -I. -I/usr/all/repository /usr/all/repository/hello.c -o hello.o
  gcc -o hello hello.o /usr/all/repository/libworld.a

The order of the C<-I> flags replicates, for the C preprocessor, the same repository-directory search path that Cons uses for its own dependency analysis.
If there are multiple repositories and multiple C<CPPPATH> directories, Cons will add the corresponding repository directory after each local C<CPPPATH> directory, rapidly multiplying the number of C<-I> flags. As an extreme example, a F<Construct> file containing:

  Repository qw(
      /u1
      /u2
  );

  $env = new cons(
      CPPPATH => 'a:b:c',
  );

would yield a compilation command of:

  cc -Ia -I/u1/a -I/u2/a -Ib -I/u1/b -I/u2/b -Ic -I/u1/c -I/u2/c -c hello.c -o hello.o

Because Cons relies on the compiler's C<-I> flags to communicate the order in which repository directories must be searched, Cons' handling of repository directories is fundamentally incompatible with using double-quotes on the C<#include> directives in your C source code:

  #include "file.h"    /* DON'T USE DOUBLE-QUOTES LIKE THIS */

This is because most C preprocessors, when faced with such a directive, will always first search the directory containing the source file. This undermines the elaborate C<-I> options that Cons constructs to make the preprocessor conform to its preferred search path.

Consequently, when using repository trees in Cons, B<always> use angle-brackets for included files:

  #include <file.h>    /* USE ANGLE-BRACKETS INSTEAD */

=head2 Repository_List

Cons provides a C<Repository_List> command to return a list of all repository directories in their current search order. This can be used for debugging, or to do more complex Perl stuff:

  @list = Repository_List;
  print join(' ', @list), "\n";

=head2 Repository interaction with other Cons features

Cons' handling of repository trees interacts correctly with other Cons features--which is to say, it generally does what you would expect.

Most notably, repository trees interact correctly, and rather powerfully, with the C<Link> command. A repository tree may contain one or more subdirectories for version builds established via C<Link> to a source subdirectory. Cons will search for derived files in the appropriate build subdirectories under the repository tree.
=head1 Default targets

Until now, we've demonstrated invoking Cons with an explicit target to build:

  % cons hello

Normally, Cons does not build anything unless a target is specified, but specifying '.' (the current directory) will build everything:

  % cons        # does not build anything
  % cons .      # builds everything under the top-level directory

Adding the C<Default> method to any F<Construct> or F<Conscript> file will add the specified targets to a list of default targets. Cons will build these defaults if there are no targets specified on the command line. So adding the following line to the top-level F<Construct> file will mimic Make's typical behavior of building everything by default:

  Default '.';

The following would add the F<hello> and F<goodbye> programs (in the same directory as the F<Construct> or F<Conscript> file) to the default list:

  Default qw(
      hello
      goodbye
  );

The C<Default> method may be used more than once to add targets to the default list.

=head1 Selective builds

Cons provides two methods for reducing the size of a given build. The first is by specifying targets on the command line, and the second is a method for pruning the build tree. We'll consider target specification first.

=head2 Selective targeting

Like make, Cons allows the specification of ``targets'' on the command line. Cons targets may be either files or directories. When a directory is specified, this is simply a short-hand notation for every derivable product--that Cons knows about--in the specified directory and below. For example:

  % cons build/hello/hello.o

means build F<hello.o> and everything that F<hello.o> might need. This is from a previous version of the B<Hello, World!> program in which F<hello.o> depended upon a header file exported by the F<world> library. If that header file is not up-to-date (because someone modified its source), then it will be rebuilt, even though it is in a directory remote from F<build/hello>.

In this example:

  % cons build

everything in the F<build> directory is built, if necessary. Again, this may cause more files to be built. In particular, both the exported F<world> header file and F<libworld.a> are required by the F<build/hello> directory, and so they will be built if they are out-of-date.
If we do, instead:

  % cons export

then only the files that should be installed in the export directory will be rebuilt, if necessary, and then installed there. Note that C<export> might build files that C<build> doesn't build, and vice-versa.

=head2 No ``special'' targets

With Cons, make-style ``special'' targets are not required. The simplest analog with Cons is to use special F<export> directories, instead. Let's suppose, for example, that you have a whole series of unit tests that are associated with your code. The tests live in the source directory near the code. Normally, however, you don't want to build these tests. One solution is to provide all the build instructions for creating the tests, and then to install the tests into a separate part of the tree. If we install the tests in a top-level directory called F<tests>, then:

  % cons tests

will build all the tests.

  % cons export

will build the production version of the system (but not the tests), and:

  % cons build

should probably be avoided (since it will compile tests unnecessarily).

If you want to build just a single test, then you could explicitly name the test (in either the F<tests> directory or the F<build> directory). You could also aggregate the tests into a convenient hierarchy within the tests directory. This hierarchy need not necessarily match the source hierarchy, in much the same manner that the include hierarchy probably doesn't match the source hierarchy (the include hierarchy is unlikely to be more than two levels deep, for C programs).

If you want to build absolutely everything in the tree (subject to whatever options you select), you can use:

  % cons .

This is not particularly efficient, since it will redundantly walk all the trees, including the source tree. The source tree, of course, may have buildable objects in it--nothing stops you from doing this, even if you normally build in a separate build tree.

=head1 Build Pruning

In conjunction with target selection, B<build pruning> can be used to reduce the scope of the build.
In the previous peAcH and baNaNa example, we have already seen how script-driven build pruning can be used to make only half of the potential build available for any given invocation of C<cons>. Cons also provides, as a convenience, a command-line convention that allows you to specify which F<Conscript> files actually get ``built''--that is, incorporated into the build tree. For example:

  % cons build +world

The C<+> argument introduces a Perl regular expression. This must, of course, be quoted at the shell level if there are any shell meta-characters within the expression. The expression is matched against each F<Conscript> file which has been mentioned in a C<Build> statement, and only those scripts with matching names are actually incorporated into the build tree. Multiple such arguments are allowed, in which case a match against any of them is sufficient to cause a script to be included.

In the example, above, the F<hello> program will not be built, since Cons will have no knowledge of the script F<hello/Conscript>. The F<libworld.a> archive will be built, however, if need be.

There are a couple of uses for build pruning via the command line. Perhaps the most useful is the ability to make local changes, and then, with sufficient knowledge of the consequences of those changes, restrict the size of the build tree in order to speed up the rebuild time. A second use for build pruning is to actively prevent the recompilation of certain files that you know will recompile due to, for example, a modified header file. You may know that either the changes to the header file are immaterial, or that the changes may be safely ignored for most of the tree, for testing purposes. With Cons, the view is that it is pragmatic to admit this type of behavior, with the understanding that on the next full build everything that needs to be rebuilt will be. There is no equivalent to a ``make touch'' command, to mark files as permanently up-to-date. So any risk that is incurred by build pruning is mitigated.
For release-quality work, obviously, we recommend that you do not use build pruning (it's perfectly OK to use during integration, however, for checking compilation, etc. Just be sure to do an unconstrained build before committing the integration).

=head1 Temporary overrides

Cons provides a very simple mechanism for overriding aspects of a build. The essence is that you write an override file containing one or more C<Override> commands, and you specify this on the command line, when you run C<cons>:

  % cons -o over export

will build the F<export> directory, with all derived files subject to the overrides present in the F<over> file. If you leave out the C<-o> option, then everything necessary to remove all overrides will be rebuilt.

=head2 Overriding environment variables

The override file can contain two types of overrides. The first is incoming environment variables. These are normally accessible by the F<Construct> file from the C<%ENV> hash variable. These can trivially be overridden in the override file by setting the appropriate elements of C<%ENV> (these could also be overridden in the user's environment, of course).

=head2 The Override command

The second type of override is accomplished with the C<Override> command, which looks like this:

  Override <regexp>, <var1> => <value1>, <var2> => <value2>, ...;

The regular expression I<regexp> is matched against every derived file that is a candidate for the build. If the derived file matches, then the variable/value pairs are used to override the values in the construction environment associated with the derived file.

Let's suppose that we have a construction environment like this:

  $CONS = new cons(
      COPT   => '',
      CDBG   => '-g',
      CFLAGS => '%COPT %CDBG',
  );

Then if we have an override file F<over> containing this command:

  Override '\.o$', COPT => '-O', CDBG => '';

then any C<cons> invocation with C<-o over> that creates F<.o> files via this environment will cause them to be compiled with C<-O> and no C<-g>. The override could, of course, be restricted to a single directory by the appropriate selection of a regular expression.
Here's the original version of the Hello, World! program, built with this environment. Note that Cons rebuilds the appropriate pieces when the override is applied or removed:

  % cons hello
  cc -g -c hello.c -o hello.o
  cc -o hello hello.o
  % cons -o over hello
  cc -O -c hello.c -o hello.o
  cc -o hello hello.o
  % cons -o over hello
  cons: "hello" is up-to-date.
  % cons hello
  cc -g -c hello.c -o hello.o
  cc -o hello hello.o

It's important that the C<Override> command only be used for temporary, on-the-fly overrides necessary for development, because the overrides are not platform-independent and because they rely too much on intimate knowledge of the workings of the scripts. For temporary use, however, they are exactly what you want.

Note that it is still useful to provide, say, the ability to create a fully optimized version of a system for production use--from the F<Construct> and F<Conscript> files. This way you can tailor the optimized system to the platform. Where optimizer trade-offs need to be made (particular files may not be compiled with full optimization, for example), then these can be recorded for posterity (and reproducibility) directly in the scripts.

=head1 More on construction environments

=head2 Default construction variables

We have mentioned, and used, the concept of a B<construction environment>, many times in the preceding pages. Now it's time to make this a little more concrete. With the following statement:

  $env = new cons();

a reference to a new, default construction environment is created. This contains a number of construction variables and some methods.
At the present writing, the default list of construction variables is defined as follows:

  CC            => 'cc',
  CFLAGS        => '',
  CCCOM         => '%CC %CFLAGS %_IFLAGS -c %< -o %>',
  INCDIRPREFIX  => '-I',
  CXX           => '%CC',
  CXXFLAGS      => '%CFLAGS',
  CXXCOM        => '%CXX %CXXFLAGS %_IFLAGS -c %< -o %>',
  LINK          => '%CXX',
  LINKCOM       => '%LINK %LDFLAGS -o %> %< %_LDIRS %LIBS',
  LINKMODULECOM => '%LD -r -o %> %<',
  LIBDIRPREFIX  => '-L',
  AR            => 'ar',
  ARFLAGS       => 'r',
  ARCOM         => "%AR %ARFLAGS %> %<\n%RANLIB %>",
  RANLIB        => 'ranlib',
  AS            => 'as',
  ASFLAGS       => '',
  ASCOM         => '%AS %ASFLAGS %< -o %>',
  LD            => 'ld',
  LDFLAGS       => '',
  PREFLIB       => 'lib',
  SUFLIB        => '.a',
  SUFLIBS       => '.so:.a',
  SUFOBJ        => '.o',
  ENV           => { 'PATH' => '/bin:/usr/bin' },

On Win32 systems (Windows NT), the following construction variables are overridden in the default:

  CC            => 'cl',
  CFLAGS        => '/nologo',
  CCCOM         => '%CC %CFLAGS %_IFLAGS /c %< /Fo%>',
  CXXCOM        => '%CXX %CXXFLAGS %_IFLAGS /c %< /Fo%>',
  INCDIRPREFIX  => '/I',
  LINK          => 'link',
  LINKCOM       => '%LINK %LDFLAGS /out:%> %< %_LDIRS %LIBS',
  LINKMODULECOM => '%LD /r /o %> %<',
  LIBDIRPREFIX  => '/LIBPATH:',
  AR            => 'lib',
  ARFLAGS       => '/nologo ',
  ARCOM         => "%AR %ARFLAGS /out:%> %<",
  RANLIB        => '',
  LD            => 'link',
  LDFLAGS       => '/nologo ',
  PREFLIB       => '',
  SUFEXE        => '.exe',
  SUFLIB        => '.lib',
  SUFLIBS       => '.dll:.lib',
  SUFOBJ        => '.obj',

These variables are used by the various methods associated with the environment. In particular, any method that ultimately invokes an external command will substitute these variables into the final command, as appropriate. For example, the C<Objects> method takes a number of source files and arranges to derive, if necessary, the corresponding object files:

  Objects $env 'foo.c', 'bar.c';

This will arrange to produce, if necessary, F<foo.o> and F<bar.o>. The command invoked is simply C<%CCCOM>, which expands, through substitution, to the appropriate external command required to build each object. We will explore the substitution rules further under the C<Command> method, below.
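As a sketch of how the defaults combine with overrides (the compiler and flag values below are purely illustrative, not Cons defaults), a F<Construct> fragment might replace C<CC> and C<CFLAGS> when creating an environment; every variable not named keeps its default value:

  # Illustrative override of two default construction variables
  $env = new cons(
      CC     => 'gcc',        # default is 'cc'
      CFLAGS => '-O2 -Wall',  # default is ''
  );
  Program $env 'hello', 'hello.c';

With these settings, C<%CCCOM> expands to a C<gcc -O2 -Wall ...> compilation command rather than the default C<cc> invocation.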
The construction variables are also used for other purposes. For example, C<CPPPATH> is used to specify a colon-separated path of include directories. These are intended to be passed to the C preprocessor and are also used by the C-file scanning machinery to determine the dependencies involved in a C compilation.

Variables beginning with an underscore are created by various methods and should normally be considered ``internal'' variables. For example, when a method is called which calls for the creation of an object from a C source, the variable C<_IFLAGS> is created: this corresponds to the C<-I> switches required by the C compiler to represent the directories specified by C<CPPPATH>.

Note that, for any particular environment, the value of a variable is set once, and then never reset (to change a variable, you must create a new environment; methods are provided for copying existing environments for this purpose). Some internal variables, such as C<_IFLAGS>, are created on demand, but once set, they remain fixed for the life of the environment.

The C<CFLAGS>, C<LDFLAGS>, and C<ARFLAGS> variables all supply a place for passing options to the compiler, loader, and archiver, respectively. Less obviously, the C<INCDIRPREFIX> variable specifies the option string to be prepended to each include directory so that the compiler knows where to find F<.h> files. Similarly, the C<LIBDIRPREFIX> variable specifies the option string to be prepended to each directory that the linker should search for libraries.

Another variable, C<ENV>, is used to determine the system environment during the execution of an external command. By default, the only environment variable that is set is C<PATH>, which is the execution path for a UNIX command. For the utmost reproducibility, you should really arrange to set your own execution path in your top-level F<Construct> file (or perhaps by importing an appropriate construction package with the Perl C<use> command). The default variables are intended to get you off the ground.
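Since reproducibility depends on the execution path, a top-level F<Construct> file might pin C<ENV> down explicitly. The path shown here is only an illustration, not a recommended value:

  # Illustrative: fix the command execution path for all builds
  $env = new cons(
      ENV => { PATH => '/usr/local/bin:/bin:/usr/bin' },
  );

Note that supplying C<ENV> this way replaces the entire command environment hash, not just C<PATH>.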
=head2 Interpolating construction variables

Construction environment variables may be interpolated in the source and target file names by prefixing the construction variable name with C<%>:

  $env = new cons(
      DESTDIR => 'programs',
      SRCDIR  => 'src',
  );
  Program $env '%DESTDIR/hello', '%SRCDIR/hello.c';

Expansion of construction variables is recursive--that is, the file name(s) will be re-expanded until no more substitutions can be made. If a construction variable is not defined in the environment, then the null string will be substituted.

=head1 Default construction methods

The list of default construction methods includes the following:

=head2 The C<new> constructor

The C<new> method is a Perl object constructor. That is, it is not invoked via a B<reference> to an existing construction environment, but, rather, statically, using the name of the Perl B<package> where the constructor is defined. The method is invoked like this:

  $env = new cons();

The environment you get back is blessed into the package C<cons>, which means that it will have associated with it the default methods described below. Individual construction variables can be overridden by providing name/value pairs in an override list. Note that to override any command environment variable (i.e., anything under C<ENV>), you will have to override all of them. You can get around this difficulty by using the C<copy> method on an existing construction environment.

=head2 The C<clone> method

The C<clone> method creates a clone of an existing construction environment, and can be called as in the following example:

  $env2 = $env1->clone();

You can provide overrides in the usual manner to create a different environment from the original. If you just want a new name for the same environment (which may be helpful when exporting environments to existing components), you can just use simple assignment.

=head2 The C<copy> method

The C<copy> method extracts the externally defined construction variables from an environment and returns them as a list of name/value pairs.
Overrides can also be provided, in which case the overridden values will be returned, as appropriate. The returned list can be assigned to a hash, as shown in the prototype, below, but it can also be manipulated in other ways:

  %env = $env1->copy();

The value of C<ENV>, which is itself a hash, is also copied to a new hash, so this may be changed without fear of affecting the original environment. So, for example, if you really want to override just the C<PATH> variable in the default environment, you could do the following:

  %cons = new cons()->copy();
  $cons{ENV}{PATH} = "<your path here>";
  $cons = new cons(%cons);

This will leave anything else that might be in the default execution environment undisturbed.

=head2 The C<Install> method

The C<Install> method arranges for the specified files to be installed in the specified directory. The installation is optimized: the file is not copied if it can be linked. If this is not the desired behavior, you will need to use a different method to install the file. It is called as follows:

  Install $env <directory>, <names>;

Note that, while the files to be installed may be arbitrarily named, only the last component of each name is used for the installed target name. So, for example, if you arrange to install F<foo/bar> in F<install/bin>, this will create a F<bar> file in the F<install/bin> directory (not F<foo/bar>).

=head2 The C<InstallAs> method

The C<InstallAs> method arranges for the specified source file(s) to be installed as the specified target file(s). Multiple files should be specified as a file list. The installation is optimized: the file is not copied if it can be linked. If this is not the desired behavior, you will need to use a different method to install the file.

C<InstallAs> works in two ways:

Single file install:

  InstallAs $env TgtFile, SrcFile;

Multiple file install:

  InstallAs $env ['tgt1', 'tgt2'], ['src1', 'src2'];

Or, even as:

  @srcs = qw(src1 src2 src3);
  @tgts = qw(tgt1 tgt2 tgt3);
  InstallAs $env [@tgts], [@srcs];

Both the target and the source lists should be of the same length.
=head2 The C<Precious> method

The C<Precious> method asks cons not to delete the specified file or list of files before building them again. It is invoked as:

  Precious <file-list>;

This is especially useful for allowing incremental updates to libraries or debug information files which are updated rather than rebuilt anew each time. Cons will still delete the files when the C<-r> flag is specified.

=head2 The C<Command> method

The C<Command> method is a catchall method which can be used to arrange for any external command to be called to update the target. For this command, a target file and list of inputs is provided. In addition, a construction command line, or lines, is provided as a string (this string may have multiple commands embedded within it, separated by newlines). C<Command> is called as follows:

  Command $env <target>, <inputs>, <construction command>;

The target is made dependent upon the list of input files specified, and the inputs must be built successfully or Cons will not attempt to build the target.

Within the construction command, any variable from the construction environment may be introduced by prefixing the name of the construction variable with C<%>. This is recursive: the command is expanded until no more substitutions can be made. If a construction variable is not defined in the environment, then the null string will be substituted. A doubled C<%%> will be replaced by a single C<%> in the construction command.

There are several pseudo variables which will also be expanded:

=over 10

=item %>

The target file name (in a multi-target command, this is always the first target mentioned).

=item %0

Same as C<%E<gt>>.

=item %1, %2, ..., %9

These refer to the first through ninth input file, respectively.

=item %E<lt>

The full set of inputs. If any of these have been used anywhere else in the current command line (via C<%1>, C<%2>, etc.), then those will be deleted from the list provided by C<%E<lt>>.
Consider the following command found in a F<Conscript> file in the F<test> directory:

  Command $env 'tgt', qw(foo bar baz), qq(
      echo %< -i %1 > %>
      echo %< -i %2 >> %>
      echo %< -i %3 >> %>
  );

If F<tgt> needed to be updated, then this would result in the execution of the following commands, assuming that no remapping has been established for the F<test> directory:

  echo test/bar test/baz -i test/foo > test/tgt
  echo test/foo test/baz -i test/bar >> test/tgt
  echo test/foo test/bar -i test/baz >> test/tgt

=back

Any of the above pseudo variables may be followed immediately by one of the following suffixes to select a portion of the expanded path name:

  :a    the absolute path to the file name
  :b    the directory plus the file name stripped of any suffix
  :d    the directory
  :f    the file name
  :s    the file name suffix
  :F    the file name stripped of any suffix

Continuing with the above example, C<%E<lt>:f> would expand to C<foo bar baz>, and C<%E<gt>:d> would expand to C<test>.

It is possible to programmatically rewrite part of the command by enclosing part of it between C<%[> and C<%]>. This will call the construction variable named as the first word enclosed in the brackets as a Perl code reference; the results of this call will be used to replace the contents of the brackets in the command line. For example, given an existing input file named F<tgt.in>:

  @keywords = qw(foo bar baz);
  $env = new cons(X_COMMA => sub { join(",", @_) });
  Command $env 'tgt', 'tgt.in', qq(
      echo '# Keywords: %[X_COMMA @keywords %]' > %>
      cat %< >> %>
  );

This will execute:

  echo '# Keywords: foo,bar,baz' > tgt
  cat tgt.in >> tgt

After substitution occurs, strings of white space are converted into single blanks, and leading and trailing white space is eliminated. It is therefore not possible to introduce variable-length white space in strings passed into a command, without resorting to some sort of shell quoting.

If a multi-line command string is provided, the commands are executed sequentially.
If any of the commands fails, then none of the rest are executed, and the target is not marked as updated, i.e., a new signature is not stored for the target. Normally, if all the commands succeed, and return a zero status (or whatever platform-specific indication of success is required), then a new signature is stored for the target. If a command erroneously reports success even after a failure, then Cons will assume that the target file created by that command is accurate and up-to-date.

The first word of each command string, after expansion, is assumed to be an executable command looked up on the C<PATH> environment variable (which is, in turn, specified by the C<ENV> construction variable). If this command is found on the path, then the target will depend upon it: the command will therefore be automatically built, as necessary. It's possible to write multi-part commands to some shells, separated by semi-colons. Only the first command word will be depended upon, however, so if you write your command strings this way, you must either explicitly set up a dependency (with the C<Depends> method), or be sure that the command you are using is a system command which is expected to be available. If it isn't available, you will, of course, get an error.

If any command (even one within a multi-line command) begins with C<[perl]>, the remainder of that command line will be evaluated by the running Perl instead of being forked by the shell. If an error occurs in parsing the Perl code, or if the Perl expression returns 0 or undef, the command will be considered to have failed.
For example, here is a simple command which creates a file F<foo> directly
from Perl:

  $env = new cons();
  Command $env 'foo', qq([perl] open(FOO,'>foo');print FOO "hi\\n"; close(FOO); 1);

Note that when the command is executed, you are in the same package as when
the F<Construct> or F<Conscript> file was read, so you can call Perl
functions you've defined in the same F<Construct> or F<Conscript> file in
which the C<Command> appears:

  $env = new cons();
  sub create_file {
      my $file = shift;
      open(FILE, ">$file");
      print FILE "hi\n";
      close(FILE);
      return 1;
  }
  Command $env 'foo', "[perl] &create_file('%>')";

The Perl string will be used to generate the signature for the derived file,
so if you change the string, the file will be rebuilt.  The contents of any
subroutines you call, however, are not part of the signature, so if you
modify a called subroutine such as C<create_file> above, the target will
I<not> be rebuilt.  Caveat user.

Cons normally prints a command before executing it.  This behavior is
suppressed if the first character of the command is C<@>.  Note that you may
need to separate the C<@> from the command name or escape it to prevent
C<@cmd> from looking like an array to Perl quote operators that perform
interpolation:

  # The first command line is incorrect,
  # because "@cp" looks like an array
  # to the Perl qq// function.
  # Use the second form instead.
  Command $env 'foo', 'foo.in', qq(
      @cp %< tempfile
      @ cp tempfile %>
  );

If there are shell meta characters anywhere in the expanded command line,
such as C<E<lt>>, C<E<gt>>, quotes, or semi-colon, then the command will
actually be executed by invoking a shell.  This means that a command such
as:

  cd foo

alone will typically fail, since there is no command C<cd> on the path.  But
the command string:

  cd %<:d; tar cf %>:f %<:f

when expanded will still contain the shell meta character semi-colon, and a
shell will be invoked to interpret the command.  Since C<cd> is interpreted
by this sub-shell, the command will execute as expected.
To specify a command with multiple targets, you can specify a reference to a
list of targets.  In Perl, a list reference can be created by enclosing a
list in square brackets.  Hence the following command:

  Command $env ['foo.h', 'foo.c'], 'foo.template', q(
      gen %1
  );

could be used in a case where the command C<gen> creates two files, both
F<foo.h> and F<foo.c>.

=head2 The C<Objects> method

The C<Objects> method arranges to create the object files that correspond to
the specified source files.  It is invoked as shown below:

  @files = Objects $env <source or object files>;

Under Unix, source files ending in F<.s> and F<.c> are currently supported,
and will be compiled into a name of the same file ending in F<.o>.  By
default, all files are created by invoking the external command which
results from expanding the C<CCCOM> construction variable, with C<%E<lt>>
and C<%E<gt>> set to the source and object files, respectively (see the
C<Command> method for expansion details).  The variable C<CPPPATH> is also
used when scanning source files for dependencies.  This is a colon separated
list of pathnames, and is also used to create the construction variable
C<_IFLAGS>, which will contain the appropriate list of C<-I> options for the
compilation.  Any relative pathnames in C<CPPPATH> are interpreted relative
to the directory in which the associated construction environment was
created (absolute and top-relative names may also be used).  This variable
is used by C<CCCOM>.  The behavior of this command can be modified by
changing any of the variables which are interpolated into C<CCCOM>, such as
C<CC>, C<CFLAGS>, and, indirectly, C<CPPPATH>.  It's also possible to
replace the value of C<CCCOM>, itself.  As a convenience, this method
returns the list of object filenames.

=head2 The C<Program> method

The C<Program> method arranges to link the specified program with the
specified object files.  It is invoked in the following manner:

  Program $env <program name>, <source or object files>;

The program name will have the value of the C<SUFEXE> construction variable
appended (by default, C<.exe> on Win32 systems, nothing on Unix systems) if
the suffix is not already present.
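As a brief illustrative sketch (the program and file names here are
hypothetical, not from the Cons distribution), building a program can be as
simple as:

  $env = new cons();
  Program $env 'hello', 'hello.c';

On Unix systems this produces an executable named F<hello>; on Win32
systems, F<hello.exe>.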
Source files may be specified in place of object files--the C<Objects>
method will be invoked to arrange the conversion of all the files into
object files, and hence all the observations about the C<Objects> method,
above, apply to this method also.

The actual linking of the program will be handled by an external command
which results from expanding the C<LINKCOM> construction variable, with
C<%E<lt>> set to the object files to be linked (in the order presented), and
C<%E<gt>> set to the target (see the C<Command> method for expansion
details).  The user may set additional variables in the construction
environment, including C<LINK>, to define which program to use for linking,
C<LIBPATH>, a colon-separated list of library search paths, for use with
library specifications of the form I<-llib>, and C<LIBS>, specifying the
list of libraries to link against (in either I<-llib> form or just as
pathnames).  Relative pathnames in both C<LIBPATH> and C<LIBS> are
interpreted relative to the directory in which the associated construction
environment is created (absolute and top-relative names may also be used).
Cons automatically sets up dependencies on any libraries mentioned in
C<LIBS>: those libraries will be built before the command is linked.

=head2 The C<Library> method

The C<Library> method arranges to create the specified library from the
specified object files.  It is invoked as follows:

  Library $env <library name>, <source or object files>;

The library name will have the value of the C<SUFLIB> construction variable
appended (by default, C<.lib> on Win32 systems, C<.a> on Unix systems) if
the suffix is not already present.

Source files may be specified in place of object files--the C<Objects>
method will be invoked to arrange the conversion of all the files into
object files, and hence all the observations about the C<Objects> method,
above, apply to this method also.
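As a similarly minimal sketch (library and source names are hypothetical), a
library can be built directly from a list of sources:

  $env = new cons();
  Library $env 'libfoo', 'aaa.c', 'bbb.c';

On Unix systems this produces F<libfoo.a>, built from the object files
F<aaa.o> and F<bbb.o>.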
The actual creation of the library will be handled by an external command
which results from expanding the C<ARCOM> construction variable, with
C<%E<lt>> set to the library members (in the order presented), and
C<%E<gt>> to the library to be created (see the C<Command> method for
expansion details).  The user may set variables in the construction
environment which will affect the operation of the command.  These include
C<AR>, the archive program to use, C<ARFLAGS>, which can be used to modify
the flags given to the program specified by C<AR>, and C<RANLIB>, the name
of an archive index generation program, if needed (if the particular need
does not require the latter functionality, then C<ARCOM> must be redefined
to not reference C<RANLIB>).

The C<Library> method allows the same library to be specified in multiple
method invocations.  All of the contributing objects from all the
invocations (which may be from different directories) are combined and
generated by a single archive command.  Note, however, that if you prune a
build so that only part of a library is specified, then only that part of
the library will be generated (the rest will disappear!).

=head2 The C<Module> method

The C<Module> method is a combination of the C<Program> and C<Command>
methods.  Rather than generating an executable program directly, this
command allows you to specify your own command to actually generate a
module.  The method is invoked as follows:

  Module $env <module name>, <source or object files>, <construction command>;

This command is useful in instances where you wish to create, for example,
dynamically loaded modules, or statically linked code libraries.

=head2 The C<Depends> method

The C<Depends> method allows you to specify additional dependencies for a
target.  It is invoked as follows:

  Depends $env <target>, <dependencies>;

This may be occasionally useful, especially in cases where no scanner exists
(or is writable) for particular types of files.  Normally, dependencies are
calculated automatically from a combination of the explicit dependencies set
up by the method invocation or by scanning source files.
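For instance (the file names here are hypothetical), an explicit dependency
on a generated header that no scanner would discover might look like:

  Depends $env 'bar', 'file.ph';

The target F<bar> will then be rebuilt whenever F<file.ph> changes.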
A set of identical dependencies for multiple targets may be specified using
a reference to a list of targets.  In Perl, a list reference can be created
by enclosing a list in square brackets.  Hence the following command:

  Depends $env ['foo', 'bar'], 'input_file_1', 'input_file_2';

specifies that both the F<foo> and F<bar> files depend on the listed input
files.

=head2 The C<Ignore> method

The C<Ignore> method allows you to explicitly ignore dependencies that Cons
infers on its own.  It is invoked as follows:

  Ignore <patterns>;

This can be used to avoid recompilations due to changes in system header
files or utilities that are known to not affect the generated targets.

If, for example, a program is built in an NFS-mounted directory on multiple
systems that have different copies of F<stdio.h>, the differences will
affect the signatures of all derived targets built from source files that
C<#include E<lt>stdio.hE<gt>>.  This will cause all those targets to be
rebuilt when changing systems.  If this is not desirable behavior, then the
following line will remove the dependencies on the F<stdio.h> file:

  Ignore '^/usr/include/stdio\.h$';

Note that the arguments to the C<Ignore> method are regular expressions, so
special characters must be escaped and you may wish to anchor the beginning
or end of the expression with C<^> or C<$> characters.

=head2 The C<Salt> method

The C<Salt> method adds a constant value to the signature calculation for
every derived file.  It is invoked as follows:

  Salt $string;

Changing the Salt value will force a complete rebuild of every derived file.
This can be used to force rebuilds in certain desired circumstances.  For
example,

  Salt `uname -s`;

would force a complete rebuild of every derived file whenever the operating
system on which the build is performed (as reported by C<uname -s>) changes.

=head2 The C<UseCache> method

The C<UseCache> method instructs Cons to maintain a cache of derived files,
to be shared among separate build trees of the same project.
  UseCache("cache/") || warn("cache directory not found");

=head2 The C<SourcePath> method

The C<SourcePath> method returns the real source path name of a file, as
opposed to the path name within a build directory.  It is invoked as
follows:

  $path = SourcePath <build path>;

=head2 The C<ConsPath> method

The C<ConsPath> method returns true if the supplied path is a derivable
file, and returns undef (false) otherwise.  It is invoked as follows:

  $result = ConsPath <path>;

=head2 The C<SplitPath> method

The C<SplitPath> method looks up multiple path names in a string separated
by the default path separator for the operating system (':' on UNIX systems,
';' on Windows NT), and returns the fully-qualified names.  It is invoked as
follows:

  @paths = SplitPath <path list>;

The C<SplitPath> method will convert names prefixed '#' to the appropriate
top-level build name (without the '#') and will convert relative names to
top-level names.

=head2 The C<DirPath> method

The C<DirPath> method returns the build path name(s) of a directory or list
of directories.  It is invoked as follows:

  $cwd = DirPath <paths>;

The most common use for the C<DirPath> method is:

  $cwd = DirPath '.';

to fetch the path to the current directory of a subsidiary F<Conscript>
file.

=head2 The C<FilePath> method

The C<FilePath> method returns the build path name(s) of a file or list of
files.  It is invoked as follows:

  $file = FilePath <path>;

=head2 The C<Help> method

The C<Help> method specifies help text that will be displayed when the user
invokes C<cons -h>.  This can be used to provide documentation of specific
targets, values, build options, etc. for the build tree.  It is invoked as
follows:

  Help <help text>;

The C<Help> method may only be called once, and should typically be
specified in the top-level F<Construct> file.

=head1 Extending Cons

=head2 Overriding construction variables

There are several ways of extending Cons, which vary in degree of
difficulty.  The simplest method is to define your own construction
environment, based on the default environment, but modified to reflect your
particular needs.  This will often suffice for C-based applications.
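As a minimal sketch (the specific values here are illustrative assumptions,
not defaults), such an environment might override a few of the standard
construction variables:

  $env = new cons(
      CC      => 'gcc',
      CFLAGS  => '-g -O2',
      CPPPATH => 'include:util/include',
  );

Any variable not overridden in this way retains its default value.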
You can use the C<new> constructor, and the C<clone> and C<copy> methods to
create hybrid environments.  These changes can be entirely transparent to
the underlying F<Conscript> files.

=head2 Adding new methods

For slightly more demanding changes, you may wish to add new methods to the
C<cons> package.  Here's an example of a very simple extension,
C<InstallScript>, which installs a tcl script in a requested location, but
edits the script first to reflect a platform-dependent path that needs to be
installed in the script:

  # cons::InstallScript - Create a platform dependent version of a shell
  # script by replacing string ``#!your-path-here'' with platform specific
  # path $BIN_DIR.
  sub cons::InstallScript {
      my ($env, $dst, $src) = @_;
      Command $env $dst, $src, qq(
	  sed s+your-path-here+$BIN_DIR+ %< > %>
	  chmod oug+x %>
      );
  }

Notice that this method is defined directly in the C<cons> package (by
prefixing the name with C<cons::>).  A change made in this manner will be
globally visible to all environments, and could be called as in the
following example:

  InstallScript $env "$BIN/foo", "foo.tcl";

For a small improvement in generality, the C<BINDIR> variable could be
passed in as an argument or taken from the construction environment--as
C<%BINDIR>.

=head2 Overriding methods

Instead of adding the method to the C<cons> name space, you could define a
new package which inherits existing methods from the C<cons> package and
overrides or adds others.  This can be done using Perl's inheritance
mechanisms.

The following example defines a new package C<cons::switch> which overrides
the standard C<Library> method.  The overridden method builds linked library
modules, rather than library archives.  A new constructor is provided.
Environments created with this constructor will have the new library method;
others won't.
  package cons::switch;
  BEGIN {@ISA = 'cons'}

  sub new {
      shift;
      bless new cons(@_);
  }

  sub Library {
      my($env) = shift;
      my($lib) = shift;
      my(@objs) = Objects $env @_;
      Command $env $lib, @objs, q(
	  %LD -r %LDFLAGS %< -o %>
      );
  }

This functionality could be invoked as in the following example:

  $env = new cons::switch(@overrides);
  ...
  Library $env 'lib.o', 'foo.c', 'bar.c';

=head1 Invoking Cons

The C<cons> command is usually invoked from the root of the build tree.  A
F<Construct> file must exist in that directory.  If the C<-f> argument is
used, then an alternate F<Construct> file may be used (and, possibly, an
alternate root, since C<cons> will cd to the F<Construct> file's containing
directory).

If C<cons> is invoked from a child of the root of the build tree with the
C<-t> argument, it will walk up the directory hierarchy looking for a
F<Construct> file.  (An alternate name may still be specified with C<-f>.)
The targets supplied on the command line will be modified to be relative to
the discovered F<Construct> file.  For example, from a directory containing
a top-level F<Construct> file, the following invocation:

  % cd libfoo/subdir
  % cons -t target

is exactly equivalent to:

  % cons libfoo/subdir/target

If there are any C<Default> targets specified in the directory hierarchy's
F<Construct> or F<Conscript> files, only the default targets at or below the
directory from which C<cons -t> was invoked will be built.

The command is invoked as follows:

  cons <arguments> -- <construct-args>

where I<arguments> can be any of the following, in any order:

=over 10

=item I<target>

Build the specified target.  If I<target> is a directory, then recursively
build everything within that directory.

=item I<+pattern>

Limit the F<Conscript> files considered to just those that match
I<pattern>, which is a Perl regular expression.  Multiple C<+> arguments are
accepted.

=item I<name>=<val>

Sets I<name> to value I<val> in the C<ARG> hash passed to the top-level
F<Construct> file.

=item C<-cc>

Show command that would have been executed, when retrieving from cache.  No
indication that the file has been retrieved is given; this is useful for
generating build logs that can be compared with real build logs.

=item C<-cd>

Disable all caching.
Do not retrieve from cache nor flush to cache.

=item C<-cr>

Build dependencies in random order.  This is useful when building multiple
similar trees with caching enabled.

=item C<-cs>

Synchronize existing build targets that are found to be up-to-date with
cache.  This is useful if caching has been disabled with C<-cc> or just
recently enabled with C<UseCache>.

=item C<-d>

Enable dependency debugging.

=item C<-f> <file>

Use the specified file instead of F<Construct> (but first change to the
containing directory of I<file>).

=item C<-h>

Show a help message local to the current build if one such is defined, and
exit.

=item C<-k>

Keep going as far as possible after errors.

=item C<-o> <file>

Read override file I<file>.

=item C<-p>

Show construction products in specified trees.  No build is attempted.

=item C<-pa>

Show construction products and associated actions.  No build is attempted.

=item C<-pw>

Show products and where they are defined.  No build is attempted.

=item C<-q>

Don't be verbose about Installing and Removing targets.

=item C<-r>

Remove construction products associated with <targets>.  No build is
attempted.

=item C<-R> <repos>

Search for files in I<repos>.  Multiple B<-R> I<repos> directories are
searched in the order specified.

=item C<-t>

Traverse up the directory hierarchy looking for a F<Construct> file, if none
exists in the current directory.  Targets will be modified to be relative to
the F<Construct> file.

=item C<-v>

Show C<cons> version and continue processing.

=item C<-V>

Show C<cons> version and exit.

=item C<-wf> <file>

Write all filenames considered into I<file>.

=item C<-x>

Show a help message similar to this one, and exit.

=back

And I<construct-args> can be any arguments that you wish to process in the
F<Construct> file.  Note that there should be a B<--> separating the
arguments to cons and the arguments that you wish to process in the
F<Construct> file.

Processing of I<construct-args> can be done by any standard package like
B<Getopt> or its variants, or any user defined package.  B<cons> will pass
in the I<construct-args> as B<@ARGV> and will not attempt to interpret
anything after the B<-->.
  % cons -R /usr/local/repository -d os=solaris +driver -- -c test -f DEBUG

would pass the following to cons:

  -R /usr/local/repository -d os=solaris +driver

and the following, to the top level F<Construct> file as B<@ARGV>:

  -c test -f DEBUG

Note that C<cons -r .> is equivalent to a full recursive C<make clean>, but
requires no support in the F<Construct> file or any F<Conscript> files.
This is most useful if you are compiling files into source directories (if
you separate the F<build> and F<export> directories, then you can just
remove the directories).

The options C<-p>, C<-pa>, and C<-pw> are extremely useful for use as an aid
in reading scripts or debugging them.  If you want to know what script
installs F<export/include/foo.h>, for example, just type:

  % cons -pw export/include/foo.h

=head1 Using and writing dependency scanners

QuickScan allows simple target-independent scanners to be set up for source
files.  Only one QuickScan scanner may be associated with any given source
file and environment.

QuickScan is invoked as follows:

  QuickScan CONSENV CODEREF, FILENAME [, PATH]

The subroutine referenced by CODEREF is expected to return a list of
filenames included directly by FILENAME.  These filenames will, in turn, be
scanned.  The optional PATH argument supplies a lookup path for finding
FILENAME and/or files returned by the user-supplied subroutine.  The PATH
may be a reference to an array of lookup-directory names, or a string of
names separated by the system's separator character (':' on UNIX systems,
';' on Windows NT).

The subroutine is called once for each line in the file, with $_ set to the
current line.  If the subroutine needs to look at additional lines, or, for
that matter, the entire file, then it may read them itself, from the
filehandle SCAN.  It may also terminate the loop, if it knows that no
further include information is available, by closing the filehandle.
Whether or not a lookup path is provided, QuickScan first tries to lookup
the file relative to the current directory (for the top-level file supplied
directly to QuickScan), or from the directory containing the file which
referenced the file.  This is not very general, but seems good
enough--especially if you have the luxury of writing your own utilities and
can control the use of the search path in a standard way.

Finally, the search path is, currently, colon separated.  This may not make
the NT camp happy.

Here's a real example, taken from a F<Construct> file here:

  sub cons::SMFgen {
      my($env, @tables) = @_;
      foreach $t (@tables) {
	  $env->QuickScan(sub { /\b\S*?\.smf\b/g },
			  "$t.smf", $env->{SMF_INCLUDE_PATH});
	  $env->Command(
	      ["$t.smdb.cc","$t.smdb.h","$t.snmp.cc","$t.ami.cc","$t.http.cc"],
	      "$t.smf",
	      q(
		  smfgen %( %SMF_INCLUDE_OPT %) %<
	      )
	  );
      }
  }

[NOTE that the form C<$env-E<gt>QuickScan ...> and C<$env-E<gt>Command ...>
should not be necessary, but, for some reason, is required for this
particular invocation.  This appears to be a bug in Perl or a
misunderstanding on my part; this invocation style does not always appear to
be necessary.]

This finds all names of the form I<name>.smf in the file.  It will return
the names even if they're found within comments, but that's OK (the
mechanism is forgiving of extra files; they're just ignored on the
assumption that the missing file will be noticed when the program, in this
example, smfgen, is actually invoked).

A scanner is only invoked for a given source file if it is needed by some
target in the tree.  It is only ever invoked once for a given source file.

Here is another way to build the same scanner.  This one uses an explicit
code reference, and also (unnecessarily, in this case) reads the whole file
itself:

  sub myscan {
      my(@includes);
      do {
	  push(@includes, /\b\S*?\.smf\b/g);
      } while <SCAN>;
      @includes
  }

Note that the order of the loop is reversed, with the loop test at the end.
This is because the first line is already read for you.
This scanner can be attached to a source file by:

  QuickScan $env \&myscan, "$_.smf";

=head1 SUPPORT AND SUGGESTIONS

Cons is maintained by the user community.  To subscribe, send mail to
B<cons-discuss-request@gnu.org> with body B<subscribe>.

Please report any suggestions through the B<cons-discuss@gnu.org> mailing
list.

=head1 BUGS

Sure to be some.  Please report any bugs through the B<bug-cons@gnu.org>
mailing list.

=head1 INFORMATION ABOUT CONS

Information about CONS can be obtained from the official cons web site
B<http://www.dsmit.com/cons/> or its mirrors listed there.

The cons maintainers can be contacted by email at
B<cons-maintainers@gnu.org>.

=head1 AUTHORS

Originally by Bob Sidebotham.  Then significantly enriched by the members of
the Cons community B<cons-discuss@gnu.org>.

The Cons community would like to thank Ulrich Pfeifer for the original pod
documentation derived from the F<cons.1> file.  Cons documentation is now a
part of the program itself.

=cut
óo}Ê£*æ—ø¥#<écÀ¯sï‘.ê£Å'µsê`È@ŽÙz£bÄê½µBŒùnÐCÛÑs=8Fªä¬®Ò%š:xøx!s³u¦YHÝô4"Í3s’-›;|m“9¦q#;˜cð»29=9¡Ñ㬙†Áùˆª·hÖ§ãçDíuº÷FÜ^½ŸŅ̃ÏŠ–øâ½RÀ’õíË‚‹¸{ü,ê¯Ãaü ¯î<$Š´8” í˜WmSÔö^ÓÆÍqÚîFJ—L)À황°¸u€F7å$u2‚&äýM¼@Áß»ÙAwKÞkม³ÕY ÑÀ£ ²ŸÝ›®èª ñ9r €È‡‘Ð-SæŽ)Ù4ŽÖ§òòJVl|tÉ0zÓG Äéjîv0À[6µ_îkc~í ˜`úYÒ{è&‹AǤ>úZkB¨ÙŽ«äê^A­ RN²ºä”p*«mÚ±L¨ô¶\$μD×–Ntà C»á[ýgÖtMس?IJ`ewÌ_)l§vï._vÖXHw‰ ãolyܺ õ½›¨ü¸éöº‘Ϋ›)½ÕÚM´.:>ŽæäôRںȯpR5¾ƒ@ã;“™)Y!мâá–ø¾tÕÿO:<3Ìf`|$Kx×…¹7Ï8¢XHŠ>ê9ŠÇâ†<¿¬ŸQi”³rŠÿàƒ¬ü[úiP_Öh÷€ÿþ<„_ªrJ¿ÀánûU)ùøIÕ¼¾…÷ðÚøåz¿Œñ3ü>“è[hYžc_1v„ÿ>oŽˆÞÇPÃn”ç²spgD½•”±ô0?Šlc™æ—kÛ>~¯á¼¢ÅüN§O_î9|=mÁнgp^¦H4úR)³ö•2H¤b$ŸɳõSWœ¦äµUÔ“ÿvã%,]e× ‘ Ïrk‹â$ªßdgÈ8ÏR̈́ع³yGj<Ù\Jñ…Š[•øK‡…J°šx™ œÌ=â\[‹:Jñ½f´Ý´‰$}1òIˆ‹ ovny·º{ÇO´žUç†w·K–L b§Iœ€À4è…„‰Åhhùã%ËAî¸Â–•ás ¶vŸä¢}`«ådA¦©,çê”.'’)Ö’æÚ£0×9Á‚øï™•‹ép—ËòÄ!â;xÉt2ßLrBÆÞç>Ù’J}ÁuÒ¿ÜèǽM„Ñ d†×å^ üW ÷ÉyÅcºR µ“'»Ú˜2©I½ô<² 0ªHÖ>,]m(kÓqý @V‘Ô3¢HÉÞ²ÝüGùÐ5!Y—œ¾ál½iTÞÀ±+³¶ßZƒv\7½¦~EôFò#õsj=ªn xt£¼™»u³<ê·€Ç7XÀ ™1í­§÷¾›œQÏɯK+çr…:V£+¥Í9ò€ ºï+¬«tê ·K2SáÀ#QÑuá œ®ãûv⃖’‡ð¤.1’‡>¯]¶J€í™—U­¬ÅÆ£TK= Ž‰ž¢ˆÎ°›Àÿ‘ñüÐwâ&½H ÿx<Ö¾‚šÂÛ÷å­û_íH°™ÊÕ§ËI´•1/ÌÙlºÍyÝ*QzzE»d˵¨_%©„ˆBŒž±q4a!3…wÐovC4îû]ѰîW̦÷)˜µ¨tÆwã,r‡šØcËA°Éý‡!jO¦aY6WëÉmrŽ$êÉj1:‚=JzfÖÆ™0_”/A¨!T¢T"ø¿¿ÂÜ~ùÅ:ùÒ颠Pé,¾?Šï?êyx?ºáégæ©Ø/Nç4æK2Êý.\Kd·Ì® ½”“w :Þýu®^%åŒ*EN¹èÉzÍ…cFdzãÍûr²œMÄ·P¡ÔüçKÿÆe}Äÿ„Y6¾À‡­/°™Gwù¤4Ÿü5ø„ÐÅ)öÁa§ÒN-^%w /²ã•ÔT\’³-ôu% ˜,xrJ%ÆÍ Ú'Hº+ƒÌYÈØ¼ ð;šÒQ‚äÁCcT6)¤üB"¿<Ñ—½.¿^nªðUº*ÖkŒAbÞÀ#åf­ø÷¹þÞè±jý4÷¬ãqwï¸p`ë,߸8?¿#ÎbÃÑÜ(§‚>(‚§ÍXÇš¢0æÉÝkG³G7´D4¸þY‡Y]Àq³èŸ_©Ž£lXèJåW“-\èÓeQq.¿¡„U_ãY3#üÉÇ…Cÿ³çޱ·rN ¯?ªÁEøÊífM£ñw-}%Î…Œà¡Þ%ü5½fÊZ³0½»†¬.D"[¨ÜP+Œ‚íÇË™) ‹e”7ë ó÷£LŒYå8Ëü>Oß¾O·8WÑé-;ÿºÓ3ü§ŸO^½xqŒ1 SÅã²#;ÃaüíÏ{ñ¯òY×u1àQ :ï‰Á'ñe(GñýŸ´?¾û4ïÐËð’h]~ã¥ÆZ³šJ»G˜ø&ÿ÷WìF9(öÀ£&N)Lòx^£çÞKSƒ‡"vÒN‚@‡ÙãkÜeŠtD¸¼Ò ±8M–Iþ^âr—i 5xÔÊ$£š)¶”ª—ÜIœ¯Çç •q²,Pލ=}Nö÷¤Îæ“°ÑÜZ‚¹Z® Õׄ-^C?Æ}âßÅO(v{O”¦ ¤‚È!‚¹Ñ;‡–È+I¦}›ER¶(ãú×ðæ¿˜ÃR),E.îa9ˆŒ¬Íyd}¯ â¢!ìs'?°§žèÞpE-_¯je´w ûUálî€qÝ@IEÔa¸rCMË‚â@àfÂ`7‡ ªéÃ#ÅÔÇänƒ¬/'«aÒ¬fcäéò]£V¥‹7'-AJ)êÔ•gÖ^Ðb%Dú -ù¦L¥7¯êÚ¢æf9LJ¥‰Ñ<>d–ÇÌÜFèó+«!$(XSZª±]ìW¤á¨p˜äB tW-Èä¶Y·K›¾¾îˆujæ-nè0 ´È8wÑž·zûå‰X’—áqð¤Óve…¯ÙÈFeÉœ‘Õ¡FN6rÇ+h96uáofÁL*‹NOÈ–¥ÐíÍf 
]ÑŽ¶K@VP5Œ@UºÊF”AÙÓµP7@Ü\¯™`cq±!¸ßs£l¬–T} /0z¶xjŠvôø"É».éÊÚþŸÐ MSðYÑ0š¼Î†}T©Þ‘#‚ý|ÕÎå©<•*’+‘…¡!²ûÙ»<†‘A7aC:àÒs`‘ï’ÉÙ£˜êºØó‡Q3¬g° ?­AÞù³Wg™ÁbX¾«ù™4TdÝÌ#üÑO£rÃÚ$B F ã€ûµnâ¡—ߘ‘ïî?ìíxã¹Ô& ¬ªêûàò{êsâ’Š‚Þ±ÍγpI]¢äwºqâ/&DzjJ‘S–'%H¦Th“ɸ¢¿-´´…Õ6Ýð(´HFBh¬”4ÈNßnG“Y%O%„5/d /ˆ ƒ©œSI(®ô:}>mveÄ2ñŠÕ ï“Zõ[Wl,ãò¥„áÎ2뀥›w-ò­QÝù'X[4÷¨wÝá´‡§Æ}l=Ü {€oVÓdMxõ ](º®iaºš… `ÆEÙÞDzØ(’H‘0¶ñÜÀhŠR@x•A\>EajY|IGàƒÏÜWÖH…‡òªÂ'í|;]ïX˜?¼'(Ú/ûûŽWå…·Ý$ „2L.Ò{2)E¹Ce†Á æ‹.%IEpObx¤¿·Í¦µwD8àqO•·{ôq¶Èþè}(zÉÀÿøÈÿÈÕƒIfò cU†c-B±Ã'^DyÀÕ„LT,`vDžxa‡Ø*Kϰ^ɲP!EJ÷á= W½‚y×DXçoØ€'W#™ãÇÎ õ“ŽqÜ{x4ûÙE<Ç÷Íñ—9ƒ·J.¯½q£;äÑE÷‹K)†¿¦üT# Ø<‡. »ãøŒæÞ9Ϭòa̽ÀÙ·Q/&X`XTέ.ÌÕÖOL^ÌR¡È‡ûÂ, háêFžæt‡Ëëß×hp¯Œz££4„‘_6xHÔ7Ž¿O…"Ú®‚¦ÄäŸè/ô,O1*‡þ…g£oñlÛ£ wi|ÿÀÛ©ÆÁl*Í ' |—ÒAœÐŠW4EÅ@%risá0v Ñè ~ ’-ºíp‹nT‡VZè·èÁ‡°à¨‡óÓ”q_`Ë'}+B+NP›5*MÑÍCù-÷ðMˆñPDÅà D Þ’þó6Ï> ƒž¬fR'sb+³Æ´ÀldÏârË¥±Jg£òʈà½I%ë²'i—ØÌ@î’’ˆŽbdÂýU·öÌ„õx„ ô–)þu… 2 >7¬£¹SÜç®à¬¯ ˜G^_3ùa½yw¨L*­ïª»0b?Özä:â1¢Œ«q¸Î{²O6E>B(Køýjëÿ–< vÑÒpIb›¿0iÅ„hþ±Iè q ü˜¶lvrâ]}ÁØÛÐC21kåó~Œ ¶ rÅ c¯VÄÓh¥ƒ4FÊy @6­JC´lÄÚLnŽ»å ¦Íë‰%×ãƒA9¸/´«¥à®Âˆå8¼…|©1)š’Ôæi^P­Ü4Nk‡ƒšÄç‡Ú®ÆÁÝxaáäì6ÉÐn[ÈDBëfï!v'æöñ ê–šp¹a¬eÄà Þ§‘<ØSm–]ãê´ÑšZ‚ÖÂRå™e½ZñÄ-< Éëꈈ© åÌð¾AB‰4ö³-G»T„…:ù­ 6gX­Èns÷I–új”Y^%Ìd©èp|ã¸t)’ŠE !’Ôš4%¤ çÔûRX»"šÐÚ„NË%/%ö­O}„l w7;>†m‹³‘ŒÊ‚…ÏÐJçR HÈ]bÍø†NñBÌ6ÿ¤ü9ù®´ìùa’káqnCÁ·õú•4øBìŸrOèJÄœÐiÀÆý3\¿…”¥|–ã÷ “‚z5$”›M%±¡î[õÛ2‘DÒ‹Ñ^ì€×Al=7Ùòè‹´1ŒÌpZ­m`ªhÚ^"ê†É:+/Oã7zíâ‰RîY# Ý5ls=ß.ÁÄ]LÔ+ÁdLÜ/ÀD †¬Š¡—¯Â°’Í:@‹ ®àÓ‚âpÝÓFç"Æ0úÍ «A‹ÉÒšv²J3{ºîëçLIwº¯åÝžûº»¥>ÝÒßJËjÁ¼ãݰ®ÃäKZ›¿ó%m?¸ó% {ô¯^Òxˆú/ióbb_û7öÿö[ôtã•@n¾²ÿuž]o¹°£~uÞ\«KwTñ³úg.éFK’G£|ñ®÷uÔ£ï·ïë;_ÒAÌ>åŸóÔÈ'aöÉ™VÏ©’¢«ÙåýêZï1¼Ñ‡Á ÝÍæ$9C¸µˆ —Ée%5a¡EZï5´ƒj^€ àZMS‰>†|Pç ¡riÉ)X wß®p ŽH®&¬mËÒ3 `Mé\+T=Oê‡õÑ®æè”©úŠëBÚU‹pß´öºû.®}ëÌr!±üLÍÛÊŸ|¥ÄÏwÐ.dž)³ ;üï# á<æË(ûàòË.Ö€XîI,âjÂûpèÔ{ËŒ#’<ÝDG¼è J} f»O±aªö+ÓÀÕõš¸%É‚”*ÚÆ6˜5!'Ë©`Ú„z  wS CàJP“$­Å©)ôoÛmñQ—x†¦8Šö5-ñ7âèÁVi—swF½îÎøcÜQËÝÙA6?iàÂ$)ÉÕIÙ+ó3ᘿŽÑB½Â0)\õEôF¿™ë¨4ÖÕ$îÂz™¼Ýn„ªz|§sʯöastµÓ8¥\aÙ†œ@W$ièY>Gɘbv+dËqfe”×$åÁùl&¨_Ù,,e]IujWZIqk¹º´àò£Å§F¼W·IH-ÿ›«8eä_O+}J8j«¶^^ÇDVÛ{ùìl´*6¨ë ÈÞîÀh= iÅ^Ø!D)IzZÕ³ŒÜRƒH^›* —·‹ «TÔ²“z>yÄKì‰hž:>Q ÿ¡tïeì$‚*!aŽŠXJ/ sÚ’Ö( 
97‘Lœãa-8 £Dà¡QP¶jdáù¸\)¬K‹Ò0ÉáókGÓì!½ÁSÙ™ò>}t‘Ž÷ÍPRÖl.7k®¾Ë™*»ªžV¼'wŒ_RqŸƒÍf®ü2rV @\X°÷22˜ÑÊ>3—‚† 7´Üÿö6 ÝxÏ+Ën0è,YÖwâ?øb÷éheJ³çÊPXnœ Ò‡dꉚ,ÆVuØ`>4Ä{ßÄ›}¢tZë xDqÐß”«³âù¯•Ä9£ÃŽ#’qtq1nÄ•ÁDá"-)$J ¢iVN7+‘ ›érx·!ÍuT½ã9üHá6w¬„«ÂQ#`~¤*1¸ãŠgÊ~¹™Œ ¶ƒ(CÆÝJÒmšJøFRK/FN&ÜSnÝI_o«ô$™.îvÇéË=tÖÓÉÜ(§Wla¦ wÇ¥ah6HpÀ~3«¡jÂp:¦V®ŽÆ’ "F™¦¾Ü$…’—+il›Žkw‡:ÛHß“‚±³ÿýïñuRæòÐ\ È&)“fG«ºŽ'±õ× yÝíº×ûŽjG{ -¢ql¡Þ(.õ™µfŠ™Æ„ÓbMÕܽÏ¥sð2 »ð$Gý'ùµö/*þþ¨wµpûï¼Vúr/Og[~Jà#ά,miؤ€µáý'Qæד’=âÝy²¬@» ¤K¸ ÒqtWFwOоñ“xxóÊœÁøê»“‘¾ÝGE=­I<2(*N2rdÁÈűïœ{‘¨4ŠçHïËS`”ÖÓba»ƒ£ò1[#Àù_ÄSÐ*×ÀW庭#P7–l“Ž»îÄ·ä„DJuKF;ò¿qÜaé4 srcY1†Ç >h Wå+ðQcèMÅéÁ‡{>€Oš®:lÙµÔÁò$+ïLAònŸU¡³¥`_xBD MT{âCF÷¸fA ¥;‹ ÷¦×3ØA2íŸõ{Q T6¹f¯£É[çÓkv5kõs$ÒZ.d ¡P/fºTP–;›ehìÊ%êÜ>Äò»óþéË=ØÓÖGí`'(ÝÇl›ä.¹¡Þ¿O—ë;Í_ì™wG¦2 BZÇuúAJr»ÜѬZ/“­†eâ!&ÿޖijØÒ°v#·¬=2§{ÏŠ))/®š•ËËv‘ê ð<”Í`IÆþqt쥻ó=Z£‡8aœo'§ëZ+T{´¹¤6ùrR°ès\Oñ}Ý\œEªI<ýPK´3¾Öh=CJí)‘r‡OÎ[([x…{0õ–]%”1K/a3²Æƒæ³_;«Úsuc¬è}A<“¤Jª‘¹]@(EPÎ N1W…<#/Q1Çè`Ÿ2<qwèPΜGIÐé#-ï ¦§Î†I0%åoäÁ´ÚMŒE´ØN`Ûœy-·§V'9nh'åõe’W0÷4W7rD¥–[‡êªÏÎtu<£MƬ̠˜‰%¤®wPG¬–y ÝS¡.FFÔÆ£¶í³™mE‡jý¤¶ä…&Æ¢qµLRš“ôR)ü†Z¥ëÁŒA íO¾áüX”†ëéRе³ˆNÔ”´‡¥X°‰ò¢t– о¼-€kž©¶£Px¼D¾1  ɲ…/”‹o¦Ç|BÄvtÌ%Å' ̬€F¡ÁyOÒ” {²•Ðc®œN"ï»wŸüž¢Žv„¬a0`kŽë@ù·4FÓº÷øôåÏON1ï5öï}foÇÀ¾»hÆ÷fî{U9݃+óÛŸmZnä_læøéÿðHWŸ†3ùTGù)à„°8.+xŒ%.6—Ÿ~_ØÓ„`±ÏeSg¡ó(¦ »KRÏš]ôqEQÐdÃÆké}íñ±æ8g&èmR5CîöèrYL论Ê\Ð9ZP-{a沦šu?¢[ë~„LÛ´ƒ Ž0j;Ãx‡Ò·¦KIŽ~FµÓ*tIÅpxAVH9xO½¶\'-\8h vÏ- ÎÒQGЇPYsg8E³#VÑÍ»!@íBPˆTR…1µaÇãÖuÜË6»Þ9õÐ ÉÌ…yÀ툅±Ï€((À7k!_u5Ü„»á#·r@WÊwÝBÜB—è÷µYp×ÐfJæ½Yå&šab(ûÞЭ6¨´w´)F« 5«V•Á§iÒ”L¥$žîjHGG\“Ô •§ê‰äŠ–ÌYRa³A(,²˜j#²b*w¶DUDÎémáœ5äB‚?0 ¥Ya Í=âðS[>FŒ’òUÁwa#V$]aœ_ËVÄ×E>06?†ìÒ´ËÆlìñÓïN_ÆûöôìXí_üê˜÷çŽyã<,³na0P +èðíÏŸôùãØ˜†v†ÜŸ˜ÿ^¾>ƒéö=û¶˜ü¥Úk&Þtm@cØ>käêÿ°ü¨ô•äM³¸á ú"«< 5úÉíµ“:aKtË`¦JÜÒûxö°ªaÅJFú ¾uºYQö´t1òÀŽÍÃ)CõSá¼°n…sÖm€î“CHeÝ¥D.ÉËÚâuù¯q6šàÞ³(l»¸Çv_ƒJS1*ÀÍlÏ¡õ4k] ÇXˆG¥™9-¤¹`ukÁ†®~Üu²|&Ñ:0þ/2¸‰«m¬GÉtoá8ŽwíšÓuE¢µ"J;M¹!ؘíï´æÊ›­‹6®£³^XeÜÀ’Né² ª)!Æuìx€Q$ íÑKýΡšp£r?lœtþÒÄS¼ÔG ¤w„Þôg4°Œj-é‚g—ðƒ'1²ó*Y²š×*;Å­î›v0 L_°AT¤hûb]ÿ¦q£ƒ<UIôÀaĘb™©r¯}%5ÃÁÚEaW´%|Ñ·,Q~Ì׉?-Af…—9 ïºÝ€DæÑ] 9jcË!Ý‚UŽ˜£´-Üoþ}'z¬0ø­:4Ä#m ââÑ%¦è 
ç—Ûˆ9f9‡ðZáôæë÷©Ì´‡§Ov¢çÙ*ÓXþŽÍo…I±°ÂТø¥ïi9Ì7mG6˜š;ž:>m±;ÎäO¦Ót͈2/1û~óÎÕ£ÆÌð™<ˆÎFýû8öÔÃoð jo«¿sãÀÚl ª tÊLúxªÒ3ÊéTl žÀŵP% / $²Ý‘]3A€æŸ^sÇCF`ŒŒ n¦ E¡kÓyË·)' “ÿÚÅ×ø’*‘‰Ãe’[— I!À"ÅjM>nºȳëßìZƒYïÌv¢'YÅI;/4EÒÇO4‚‡l&‹Rñ|¹‘øWšG—eo—¥žË &H¡„M(V|Æ=„¬ ­¾«kãÜ+.ý.Ž}Z™¨©8­Y×àªÞÁ(w¶Í§ P8²¿¦^iAç4>Q3^1EY ÒhÌ#›»:˜ñè5”é2 «0x‡Ì†_Ñ脎¹õn5ìôSjâQREžKÔm¶ä%FªÔ<꣹þ9Ü©°:‘A]lÖÝÉF ŠÐVX¹!ðù’a{Ã\ôMx!Ç;a'Î G¡#´N.›¾8 ˜™v%¡xÛ§¦%Ô½¿ïëýýNôÇ4]Ç—ÉAˆ \Rž¨B–,‚MvÙâ–Õ/Üê¿¡5Ö‚’©+ÖÑòºoÈkÇm•{‚wf ps‰âqC¶çC¤úLç['½}&7vJðR‡#™ŠÇáÎ_÷v|-}9¼£-oWëú®ýýÒ×Ý/ÀgÑæ€Œ¶j‚w8'ò‰%PA¹ß`Ì$™¹\`k³“^ÖZ"1PÈeÏj6*^K~õ(˜`tÓ½=ñ!F“U-ºô@ œnT‚4×)“³'ÔÈ!±mhÖ¼`}ÿzÏIâ^¶æ?[¾«9ܺo©@8R„‰¿v#WuïÅ›ò†ÑÄ!gµ¹"Þ%Mûñw)Hùª£1žäsÁC˜o9“VGÎã ŸÑ©q 5‡X §:‡EøE¿•ãõ8w•Ró†ZÂXê¹aYôfü¨Ú%ÞĺNBÇ>1¶ŒXÌ®ìꛪ²‚út†q¬%*Ì£)UˉA¼zòôñÛïXa¦m¦…öÙÑFç×+U I•Ûë\q¿§±ŠÍtMÆqË6QÆckÊ Œ7^´¡¤QÉÍç«ä=pÜeš8DÂa4Ù¸Ä^B×ô»›(Z ¬+þÊ€ªQð˜ç š“^è¶p™ýqÙ›lW¾iÀÜžÒýkR‘@a‡ñ6îAIEV´JÑ|š…ÉÂõVT딇q·¤ÖÄQ±Â”Ш±âgX”ÒMØ›˜a®Am“ÇœV“|bNéÀ!¯ÈC¬Gj£cΨo"tàÅ¥‚(´d#í‡ %ÛuËíªÇr}wµä-ÿo+È0gT¯5IraÃ#pÇOþ×&›¾?›’¹r¹$ƒe±&´kH‚~¤í\Ng*d%„IÕ»ú¶5ÿQ+h7$?¤rNÔ7-ñ•fªÙÓâù6ïdô¯Ÿ¼zyöôåðß'Oß<}6$hñ—Ç/žÆ? cľù³‹03XØ.­³ªÑ|Q=hV2påM$Þ0òò€l« ,ÖpДÆù/¨ÚAì*l` ­¡ú<×’‹rŒÇ¹RÄìŽF7a—(ЃE_F΄ËMºÇ5 ¯óÙ¾“y.¾&^r#gÍ÷ $(åf‘îq#…SѬÞˆÆ42y ŒüÈaËËÇ TÕ µžb°•‰áö`ÀAÛÈæÜ¶í¹“Ù|_(CùLc†ÎÎMÍâ|Þûù"€q¥¹”Õ©†ó»N\<® ÑM޳T¬Å5.µÇVÖšÑ4~Ì».0Ÿ’β9bg‚88ô?ø2æÐôÏNŽ_žÖ:à’KœiZë¡Ô+Gn(ò\bóM‰’ŸÃ÷°µ”Q5õ<øQrpuÕxR v‘’KV˜ä€lƒÂIþ@Kà–ƒÉ¢/"gçlª/í8ç]WÈ9…èK%óÈVhÂõËUPœ«Ô7gìEndýŽ Ñ'þúÆùRœ„Îp€d•b²çeQÌ¢4ÇR ÚbsÌù–rÑË͇ Û§”Ý»`MŸÈŠ–#ª/@ÅRO¶Ë@ò¨gšJ!rñu ã³,ç$tŠÑ°ï:\ßaãU!Ë“­Tžz/2†>8d«u´HÖë-SƒF ²-Ù]£&¨[ù¥ú$æð‘ì?{ñ ±¦}ƒÄ/ ão)¿jF®! ø{u¼ë^ù[z€ß9êØåqûéä"=ûí.ÒqµšÃ_û—ñ¯Ãxç^ï ;šmþ †ûóéË“çoŸ<ý™ë¯A© }M‚$Ú?qg³Éx:ݺ?òï|µv’UFÿ¦ñ-êšžü¹=ÈþáwÇñ­æ„í½ß·syõú<¾¿ßØúd/øÅL÷W¶ñΫsº´¼oƒ÷\v*¬Í…Ù ºpâbC””we!¡'û®è´H¹“ÔŠ¢S:ágEùl•TàÒÂdÑ\k›› fïQ–c!ÅV´RÌ%éµ²O”Ä«¬¢P`:‹‚ºb”õ×x†ªÞ¢E]‘„¸þqÄ}H¾ÖÕŸåÎ#àYªC·”äœÍð!¥â¾Û;. 
[cons.1.gz: binary gzip data omitted]
cons-2.2.0.orig/cons.bat0100555000175000017500000056135507206610267015315 0ustar jgoerzenjgoerzen@rem = '--*-PERL-*--';
@rem = '';
@rem = 'Copyright (C) 1996-2000 Free Software Foundation, Inc.';
@rem = '';
@rem = 'This program is free software; you can redistribute it and/or modify';
@rem = 'it under the terms of the GNU General Public License as published by';
@rem = 'the Free Software Foundation; either version 2 of the License, or';
@rem = '(at your option) any later version.';
@rem = '';
@rem = 'This program is distributed in the hope that it will be useful,';
@rem = 'but WITHOUT ANY WARRANTY; without even the implied warranty of';
@rem = 'MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the';
@rem = 'GNU General Public License for more details.';
@rem = '';
@rem = 'You should have received a copy of the GNU General Public License';
@rem = 'along with this program; see the file COPYING. If not, write to';
@rem = 'the Free Software Foundation, Inc., 59 Temple Place - Suite 330,';
@rem = 'Boston, MA 02111-1307, USA.';
@rem = '
@echo off
rem setlocal
set ARGS=
:loop
if .%1==.
goto endloop
set ARGS=%ARGS% %1
shift
goto loop
:endloop
rem ***** This assumes PERL is in the PATH *****
rem $Id: cons.bat.proto,v 1.4 2000/06/14 22:33:01 rv Exp $
perl.exe -S cons.bat %ARGS%
goto endofperl
@rem ';

#!/usr/bin/env perl

# NOTE: Cons intentionally does not use the "perl -w" option or
# "use strict."  Because Cons "configuration files" are actually
# Perl scripts, enabling those restrictions here would force them
# on every user's config files, wanted or not.  Would users write
# "better" Construct and Conscript files if we forced "use strict"
# on them?  Probably.  But we want people to use Cons to get work
# done, not force everyone to become a Perl guru to use it, so we
# don't insist.
#
# That said, Cons' code is both "perl -w" and "use strict" clean.
# Regression tests keep the code honest by checking for warnings
# and "use strict" failures.
#
# $Id: cons.pl,v 1.129 2000/11/16 12:22:37 knight Exp $

use vars qw( $ver_num $ver_rev $version );

$ver_num = "2.2";
$ver_rev = ".0";

$version = sprintf "This is Cons %s%s " .
    '($Id: cons.pl,v 1.129 2000/11/16 12:22:37 knight Exp $)' . "\n",
    $ver_num, $ver_rev;

# Cons: A Software Construction Tool.
# Copyright (c) 1996-2000 Free Software Foundation, Inc.
#
# This program is free software; you can redistribute it and/or modify
# it under the terms of the GNU General Public License as published by
# the Free Software Foundation; either version 2 of the License, or
# (at your option) any later version.
#
# This program is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License
# along with this program; see the file COPYING.  If not, write to
# the Free Software Foundation, Inc., 59 Temple Place - Suite 330,
# Boston, MA 02111-1307, USA.
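The NOTE above explains why Cons never forces `perl -w` or `use strict` on users: Construct and Conscript files are ordinary Perl scripts that the tool executes in-process, so any restriction applied to the tool would leak into every user's build description. That "the build description is a script" pattern can be sketched in Python; `run_script` and the `Program` helper below are invented for this illustration and are not part of Cons:

```python
# Illustrative sketch only: a build description executed as real code
# inside a namespace the tool controls, roughly what Cons's "do $file"
# does for each Conscript file.  run_script() and Program() are invented
# names for this example.

def run_script(source, exports):
    # Start the script's namespace from whatever the parent exported,
    # then add the tool's own helpers.
    namespace = dict(exports)
    targets = []
    namespace["Program"] = lambda name, *srcs: targets.append((name, list(srcs)))
    exec(source, namespace)
    return targets

conscript = """
# A build description is ordinary code: loops, conditionals and
# variables all work, which is why imposing 'use strict' on every
# user script was considered too intrusive.
for tool in ("cons", "consign"):
    Program(tool, tool + ".c")
"""

print(run_script(conscript, {}))  # [('cons', ['cons.c']), ('consign', ['consign.c'])]
```

The flip side, which the NOTE also acknowledges, is that errors in a build description surface as ordinary script errors rather than configuration diagnostics.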
require 5.002;

# See the NOTE above about why Cons doesn't "use strict".
use integer;
use Cwd;
use File::Copy;
use vars qw( $_WIN32 $_a $_exe $_o $_so );

#------------------------------------------------------------------
# Determine if running on win32 platform - either Windows NT or 95
#------------------------------------------------------------------

use vars qw( $PATH_SEPARATOR $iswin32 $_WIN32 $usage $indent @targets );

BEGIN {
    use Config;

    # if the version is 5.003, we can check $^O
    if ($] < 5.003) {
	eval("require Win32");
	$_WIN32 = (!$@);
    } else {
	$_WIN32 = ($^O eq "MSWin32") ? 1 : 0;
    }

    # Fetch the PATH separator from Config;
    # provide our old defaults in case it's not set.
    $PATH_SEPARATOR = $Config{path_sep};
    $PATH_SEPARATOR = $_WIN32 ? ';' : ':' if ! defined $PATH_SEPARATOR;

    # Fetch file suffixes from Config,
    # accommodating differences in the Config variables
    # used by different Perl versions.
    $_exe = $Config{_exe};
    $_exe = $Config{exe_ext} if ! defined $_exe;
    $_exe = $_WIN32 ? '.exe' : '' if ! defined $_exe;
    $_o = $Config{_o};
    $_o = $Config{obj_ext} if ! defined $_o;
    $_o = $_WIN32 ? '.obj' : '.o' if ! defined $_o;
    $_a = $Config{_a};
    $_a = $Config{lib_ext} if ! defined $_a;
    $_a = $_WIN32 ? '.lib' : '.a' if ! defined $_a;
    $_so = ".$Config{so}";
    $_so = $_WIN32 ? '.dll' : '.so' if ! defined $_so;
}

# Flush stdout each time.
$| = 1;

# Seed random number generator.
srand(time . $$); # this works better than time ^ $$ in perlfunc manpage.

$usage = q(
Usage: cons <arguments> -- <construct-args>

Arguments can be any of the following, in any order:

  <targets>      Build the specified targets. If <target> is a
                 directory recursively build everything within that
                 directory.

  +<pattern>     Limit the cons scripts considered to just those
                 that match <pattern>. Multiple + arguments are
                 accepted.

  <name>=<val>   Sets <name> to value <val> in the ARG hash passed
                 to the top-level Construct file.

  -cc            Show command that would have been executed, when
                 retrieving from cache.
                 No indication that the file has been retrieved is
                 given; this is useful for generating build logs
                 that can be compared with real build logs.

  -cd            Disable all caching. Do not retrieve from cache
                 nor flush to cache.

  -cr            Build dependencies in random order. This is useful
                 when building multiple similar trees with caching
                 enabled.

  -cs            Synchronize existing build targets that are found
                 to be up-to-date with cache. This is useful if
                 caching has been disabled with -cc or just recently
                 enabled with UseCache.

  -d             Enable dependency debugging.

  -f <file>      Use the specified file instead of "Construct" (but
                 first change to containing directory of <file>).

  -h             Show a help message local to the current build if
                 one such is defined, and exit.

  -k             Keep going as far as possible after errors.

  -o <file>      Read override file <file>.

  -p             Show construction products in specified trees.

  -pa            Show construction products and associated actions.

  -pw            Show products and where they are defined.

  -q             Be quiet about Installing and Removing targets.

  -r             Remove construction products associated with
                 <targets>

  -R <repos>     Search for files in <repos>. Multiple -R <repos>
                 directories are searched in the order specified.

  -t             Traverse up the directory hierarchy looking for a
                 Construct file, if none exists in the current
                 directory. (Targets will be modified to be relative
                 to the Construct file.)

  -v             Show cons version and continue processing.

  -V             Show cons version and exit.

  -wf <file>     Write all filenames considered into <file>.

  -x             Show this message and exit.

Please report any suggestions through the cons-discuss@gnu.org
mailing list. To subscribe, send mail to cons-discuss-request@gnu.org
with body 'subscribe'.

If you find a bug, please report it through the bug-cons@gnu.org
mailing list.

Information about CONS can be obtained from the official cons web
site http://www.dsmit.com/cons/ or its mirrors (listed there).

The cons maintainers can be contacted by email at
cons-maintainers@gnu.org

User documentation of cons is contained in cons and can be obtained
by doing 'perldoc /path/to/cons'.
);

# Simplify program name, if it is a path.
{
    my ($vol, $dir, $file) = File::Spec->splitpath(File::Spec->canonpath($0));
    $0 = $file;
}

# Default parameters.
$param::topfile = 'Construct';	# Top-level construction file.
$param::install = 1;		# Show installations
$param::build = 1;		# Build targets
### $param::show = 1;		# Show building of targets.
$param::sigpro = 'md5';		# Signature protocol.
$param::depfile = '';		# Write all deps out to this file
$param::salt = '';		# Salt derived file signatures with this.
$param::rep_sig_times_ok = 1;	# Repository .consign times are in sync
				# w/files.
$param::conscript_chdir = 0;	# Change dir to Conscript directory
$param::quiet = 0;		# should we show the command being executed.
#
$indent = '';

# Display a command while executing or otherwise.  This
# should be called by command builder action methods.
sub showcom {
    print($indent . $_[0] . "\n");
}

# Default environment.
# This contains only the completely platform-independent information
# we can figure out.  Platform-specific information (UNIX, Win32)
# gets added below.
@param::defaults = (
    'SUFEXE'  => $_exe,		# '' on UNIX systems
    'SUFLIB'  => $_a,		# '.a' on UNIX systems
    'SUFLIBS' => "$_so:$_a",	# '.so:.a' on UNIX
    'SUFOBJ'  => $_o,		# '.o' on UNIX systems
    'SUFMAP'  => {
	'.c'   => 'build::command::cc',
	'.s'   => 'build::command::cc',
	'.S'   => 'build::command::cc',
	'.C'   => 'build::command::cxx',
	'.cc'  => 'build::command::cxx',
	'.cxx' => 'build::command::cxx',
	'.cpp' => 'build::command::cxx',
	'.c++' => 'build::command::cxx',
	'.C++' => 'build::command::cxx',
    },
);

if ($_WIN32) {
    # Defaults for Win32.
    # Defined for VC++ 6.0 by Greg Spencer.
    # Your mileage may vary.
    my @win = (
	'CC'            => 'cl',
	'CFLAGS'        => '/nologo',
	'CCCOM'         => '%CC %CFLAGS %_IFLAGS /c %< /Fo%>',
	'CXX'           => '%CC',
	'CXXFLAGS'      => '%CFLAGS',
	'CXXCOM'        => '%CXX %CXXFLAGS %_IFLAGS /c %< /Fo%>',
	'INCDIRPREFIX'  => '/I',
	'LINK'          => 'link',
	'LINKCOM'       => '%LINK %LDFLAGS /out:%> %< %_LDIRS %LIBS',
	'LINKMODULECOM' => '%LD /r /o %> %<',
	'LIBDIRPREFIX'  => '/LIBPATH:',
	'AR'            => 'lib',
	'ARFLAGS'       => '/nologo ',
	'ARCOM'         => "%AR %ARFLAGS /out:%> %<",
	'RANLIB'        => '',
	'LD'            => 'link',
	'LDFLAGS'       => '/nologo ',
	'PREFLIB'       => '',
    );
    push(@param::defaults, @win);
} else {
    # Defaults for a typical (?) UNIX platform.
    # Your mileage may vary.
    my @unix = (
	'CC'            => 'cc',
	'CFLAGS'        => '',
	'CCCOM'         => '%CC %CFLAGS %_IFLAGS -c %< -o %>',
	'CXX'           => '%CC',
	'CXXFLAGS'      => '%CFLAGS',
	'CXXCOM'        => '%CXX %CXXFLAGS %_IFLAGS -c %< -o %>',
	'INCDIRPREFIX'  => '-I',
	'LINK'          => '%CXX',
	'LINKCOM'       => '%LINK %LDFLAGS -o %> %< %_LDIRS %LIBS',
	'LINKMODULECOM' => '%LD -r -o %> %<',
	'LIBDIRPREFIX'  => '-L',
	'AR'            => 'ar',
	'ARFLAGS'       => 'r',	# rs?
	'ARCOM'         => "%AR %ARFLAGS %> %<\n%RANLIB %>",
	'RANLIB'        => 'ranlib',
	'AS'            => 'as',
	'ASFLAGS'       => '',
	'ASCOM'         => '%AS %ASFLAGS %< -o %>',
	'LD'            => 'ld',
	'LDFLAGS'       => '',
	'PREFLIB'       => 'lib',
	'ENV'           => { 'PATH' => '/bin:/usr/bin' },
    );
    push(@param::defaults, @unix);
}

# Handle command line arguments.
while (@ARGV) {
    $_ = shift @ARGV;
    last if /^--$/;	# Argument passing to Construct.
    &option, next if s/^-//;
    push (@param::include, $_), next if s/^\+//;
    &equate, next if /=/;
    push (@targets, $_), next;
}

sub option {
    my %opt = (
	'cc' => sub { $param::cachecom = 1; },
	'cd' => sub { $param::cachedisable = 1; },
	'cr' => sub { $param::random = 1; },
	'cs' => sub { $param::cachesync = 1; },
	'd'  => sub { $param::depends = 1; },
	'h'  => sub { $param::localhelp = 1; },
	'k'  => sub { $param::kflag = 1; },
	'p'  => sub { $param::pflag = 1;
		      $param::build = 0; },
	'pa' => sub { $param::pflag = 1;
		      $param::aflag = 1;
		      $indent = "... 
"; $param::build = 0; }, 'pw' => sub { $param::pflag = 1; $param::wflag = 1; $param::build = 0; }, 'q' => sub { $param::quiet = 1; }, 'r' => sub { $param::rflag = 1; $param::build = 0; }, 't' => sub { $param::traverse = 1; }, 'v' => sub { print($version); }, 'V' => sub { print($version), exit(0); }, 'x' => sub { print($usage), exit 0; }, ); my %opt_arg = ( 'f' => sub { $param::topfile = $_[0]; }, 'o' => sub { $param::overfile = $_[0]; }, 'R' => sub { script::Repository($_[0]); }, 'wf' => sub { $param::depfile = $_[0]; }, ); if (defined $opt{$_}) { &{$opt{$_}}(); return; } $_ =~ m/(.)(.*)/; if (defined $opt_arg{$1}) { if (! $2) { $_ = shift @ARGV; die("$0: -$1 option requires an argument.\n") if ! $_; } &{$opt_arg{$1}}($2 || $_); return; } $_ =~ m/(..)(.*)/; if (defined $opt_arg{$1}) { if (! $2) { $_ = shift @ARGV; die("$0: -$1 option requires an argument.\n") if ! $_; } &{$opt_arg{$1}}($2 || $_); return; } if ($_) { die qq($0: unrecognized option "-$_". Use -x for a usage message.\n); } } # Process an equate argument (var=val). sub equate { my($var, $val) = /([^=]*)=(.*)/; $script::ARG{$var} = $val; } # Define file signature protocol. 'sig'->select($param::sigpro); # Cleanup after an interrupt. $SIG{INT} = $SIG{QUIT} = $SIG{TERM} = sub { $SIG{PIPE} = $SIG{INT} = $SIG{QUIT} = $SIG{TERM} = 'IGNORE'; $SIG{HUP} = $SIG{INT} if ! $main::_WIN32; warn("\n$0: killed\n"); # Call this first, to make sure that this processing # occurs even if a child process does not die (and we # hang on the wait). sig::hash::END(); wait(); exit(1); }; $SIG{HUP} = $SIG{INT} if ! $main::_WIN32; # Cleanup after a broken pipe (someone piped our stdout?) 
$SIG{PIPE} = sub { $SIG{PIPE} = $SIG{HUP} = $SIG{INT} = $SIG{QUIT} = $SIG{TERM} = 'IGNORE'; warn("\n$0: broken pipe\n"); sig::hash::END(); wait(); exit(1); }; if ($param::depfile) { open (main::DEPFILE, ">".$param::depfile) || die ("$0: couldn't open $param::depfile ($!)\n"); } # If the supplied top-level Conscript file is not in the # current directory, then change to that directory. { my ($vol, $dir, $file) = File::Spec->splitpath(File::Spec->canonpath($param::topfile)); if ($vol || $dir) { my($cd) = File::Spec->catpath($vol, $dir, undef); chdir($cd) || die("$0: couldn't change to directory $cd ($!)\n"); $param::topfile = $file; } } # Walk up the directory hierarchy looking for a Conscript file (if -t set). my($target_top); my(@targetdir) = (); if ($param::traverse && ! -f $param::topfile) { my($vol, $dirs, $file) = File::Spec->splitpath(cwd()); my(@dirs) = (File::Spec->splitdir($dirs), $file); while (! -f File::Spec->catpath($vol, File::Spec->catdir(@dirs), $param::topfile)) { die("$0: unable to find $param::topfile.\n") if ! @dirs; unshift(@targetdir, pop(@dirs)); } my($cwd) = File::Spec->catpath($vol, File::Spec->catdir(@dirs), ''); print "$0: Entering directory `$cwd'\n"; chdir($cwd); @targets = map {File::Spec->catdir(@targetdir, $_)} @targets; } # Set up $dir::top and $dir::cwd, now that we are in the right directory. dir::init(); # if (@targetdir) { $target_top = $dir::top->lookupdir(File::Spec->catdir(@targetdir)); } # Now handle override file. package override; if ($param::overfile) { my($ov) = $param::overfile; die qq($0: can\'t read override file "$ov" ($!)\n) if ! -f $ov; #' do $ov; if ($@) { chop($@); die qq($0: errors in override file "$ov" ($@)\n); } } # Provide this to user to setup override patterns. sub Override { my($re, @env) = @_; return if $param::overrides{$re}; # if identical, first will win. 
$param::overrides = 1; $param::overrides{$re} = \@env; push(@param::overrides, $re); } package main; use vars qw( %priority $errors ); # Check script inclusion regexps my $re; for $re (@param::include) { if (! defined eval {"" =~ /$re/}) { my($err) = $@; $err =~ s/in regexp at .*$//; die("$0: error in regexp $err"); } } # Read the top-level construct file and its included scripts. doscripts($param::topfile); # Status priorities. This lets us aggregate status for directories # and print an appropriate message (at the top-level). %priority = ('none' => 1, 'handled' => 2, 'built' => 3, 'unknown' => 4, 'errors' => 5); # If no targets were specified, supply default targets (if any). @targets = @param::default_targets if ! @targets; $errors = 0; # Build the supplied target patterns. my $tgt; for $tgt (map($dir::top->lookup($_), @targets)) { if ($target_top && ! $tgt->is_under($target_top)) { # A -t option was used, and this target is not underneath # the directory where we were invoked via -t. # If the target is a directory and the -t directory # is underneath it, then build the -t directory. if (ref $tgt ne "dir" || ! $target_top->is_under($tgt)) { next; } $tgt = $target_top; } buildtoptarget($tgt); } exit 0 + ($errors != 0); sub buildtoptarget { my($tgt) = @_; return if ! $tgt; my($status) = buildtarget($tgt); if ($status ne 'built') { my($path) = $tgt->path; if ($status eq "errors") { print qq($0: "$path" not remade because of errors.\n); $errors++; } elsif ($status eq "handled") { print qq($0: "$path" is up-to-date.\n); } elsif ($status eq "unknown") { # cons error already reported. $errors++; } elsif ($status eq "none") { # search for targets that may be linked to the given path. 
my @linked = dir::linked_targets($tgt) if $target_top; if (@linked) { my @names = map($_->path, @linked); print "Linked targets: @names\n" unless ($param::quiet); map(buildtoptarget($_), @linked); } else { print qq($0: nothing to be built in "$path".\n) if $param::build; } } else { print qq($0: don\'t know how to construct "$path".\n); #' $errors++; } } } # Build the supplied target directory or files. Return aggregated status. sub buildtarget { my($tgt) = @_; if (ref($tgt) eq "dir") { my($result) = "none"; my($priority) = $priority{$result}; if (exists $tgt->{member}) { my($members) = $tgt->{member}; my $entry; for $entry (sort keys %$members) { next if $entry eq $dir::CURDIR || $entry eq $dir::UPDIR; my($tgt) = $members->{$entry}; next if ref($tgt) ne "dir" && !exists($tgt->{builder}); my($stat) = buildtarget($members->{$entry}); my($pri) = $priority{$stat}; if ($pri > $priority) { $priority = $pri; $result = $stat; } } } return $result; } if ($param::depends) { my($path) = $tgt->path; if ($tgt->{builder}) { my(@dep) = (@{$tgt->{dep}}, @{$tgt->{sources}}); my($dep) = join(' ',map($_->path, @dep)); print("Target $path: $dep\n"); } else { print("Target $path: not a derived file\n"); } } if ($param::build) { return build $tgt; } elsif ($param::pflag || $param::wflag || $param::aflag) { if ($tgt->{builder}) { if ($param::wflag) { print qq(${\$tgt->path}: $tgt->{script}\n); } elsif ($param::pflag) { print qq(${\$tgt->path}:\n) if $param::aflag; print qq(${\$tgt->path}\n) if !$param::aflag; } if ($param::aflag) { $tgt->{builder}->action($tgt); } } } elsif ($param::rflag && $tgt->{builder}) { my($path) = $tgt->path; if (-f $path) { if (unlink($path)) { print("Removed $path\n") unless ($param::quiet); } else { warn("$0: couldn't remove $path\n"); } } } return "none"; } package NameSpace; # Return a hash that maps the name of symbols in a namespace to an # array of refs for all types for which the name has a defined value. 
# A list of symbols may be specified; default is all symbols in the # name space. sub save { my $package = shift; my(%namerefs, $var, $type); no strict 'refs'; @_ = keys %{$package."::"} if ! @_; foreach $var (@_) { $namerefs{$var} = []; my $fqvar = $package."::".$var; # If the scalar for this variable name doesn't already # exist, *foo{SCALAR} will autovivify the reference # instead of returning undef, so unlike the other types, # we have to dereference to find out if it exists. push(@{$namerefs{$var}}, *{$fqvar}{SCALAR}) if defined ${*{$fqvar}{SCALAR}}; foreach $type (qw(ARRAY HASH CODE IO)) { push(@{$namerefs{$var}}, *{$fqvar}{$type}) if defined *{$fqvar}{$type}; } } return \%namerefs; } # Remove the specified symbols from the namespace. # Default is to remove all. sub remove { my $package = shift; my(%namerefs, $var); no strict 'refs'; @_ = keys %{$package."::"} if ! @_; foreach $var (@_) { delete ${$package."::"}{$var}; } } # Restore values to symbols specified in a hash as returned # by NameSpace::save. sub restore { my($package, $namerefs) = @_; my($var, $ref); no strict 'refs'; foreach $var (keys %$namerefs) { my $fqvar = $package."::".$var; foreach $ref (@{$namerefs->{$var}}) { *{$fqvar} = $ref; } } } # Support for "building" scripts, importing and exporting variables. # With the exception of the top-level routine here (invoked from the # main package by cons), these are all invoked by user scripts. package script; use vars qw( $ARG $caller_dir_path %special_var ); BEGIN { # We can't Export or Import the following variables because Perl always # treats them as part of the "main::" package (see perlvar(1)). %special_var = map {$_ => 1} qw(ENV INC ARGV ARGVOUT SIG STDIN STDOUT STDERR); } # This is called from main to interpret/run the top-level Construct # file, passed in as the single argument. sub main::doscripts { my($script) = @_; Build($script); # Now set up the includes/excludes (after the Construct file is read). 
$param::include = join('|', @param::include); # Save the original variable names from the script package. # These will stay intact, but any other "script::" variables # defined in a Conscript file will get saved, deleted, # and (when necessary) restored. my(%orig_script_var) = map {$_ => 1} keys %script::; $caller_dir_path = undef; my $cwd = Cwd::cwd(); my(@scripts) = pop(@priv::scripts); while ($priv::self = shift(@scripts)) { my($path) = $priv::self->{script}->rsrcpath; if (-f $path) { $dir::cwd = $priv::self->{script}->{dir}; # Handle chdir to the Conscript file directory, if necessary. my ($vol, $dir, $file); if ($param::conscript_chdir) { ($vol, $dir, $file) = File::Spec->splitpath(File::Spec->canonpath($path)); if ($vol ne '' || $dir ne '') { $caller_dir_path = File::Spec->catpath($vol, $dir, undef); chdir($caller_dir_path) || die "Could not chdir to $caller_dir_path: $!\n"; } } else { $file = $path; } # Actually process the Conscript file. do $file; # Save any variables defined by the Conscript file # so we can restore them later, if needed; # then delete them from the script:: namespace. my(@del) = grep(! $orig_script_var{$_}, keys %script::); if (@del) { $priv::self->{script}->{pkgvars} = NameSpace::save('script', @del); NameSpace::remove('script', @del); } if ($caller_dir_path) { chdir($cwd); $caller_dir_path = undef; } if ($@) { chomp($@); my $err = ($@ =~ /\n/ms) ? ":\n$@" : " ($@)"; print qq($0: error in file "$path"$err\n); $run::errors++; } else { # Only process subsidiary scripts if no errors in parent. unshift(@scripts, @priv::scripts); } undef @priv::scripts; } else { my $where = ''; my $cref = $priv::self->{script}->creator; if (defined $cref) { my($_foo, $script, $line, $sub) = @$cref; $where = " ($sub in $script, line $line)"; } warn qq(Ignoring missing script "$path"$where); } } die("$0: script errors encountered: construction aborted\n") if $run::errors; } # Return caller info about the method being invoked. 
# This is everything from the Perl "caller" builtin function, # including which Construct/Conscript file, line number, # subroutine name, etc. sub caller_info { my($lev) = 1; my(@frame); do { @frame = caller ++$lev; if (defined($frame[3]) && $frame[3] eq '(eval)') { @frame = caller --$lev; if ($caller_dir_path) { $frame[1] = File::Spec->catfile($caller_dir_path, $frame[1]); } return @frame; } } while ($frame[3]); return; } # Link a directory to another. This simply means set up the *source* # for the directory to be the other directory. sub Link { dir::link(@_); } # Add directories to the repository search path for files. # We're careful about stripping our current directory from # the list, which we do by comparing the `pwd` results from # the current directory and the specified directory. This # is cumbersome, but assures that the paths will be reported # the same regardless of symbolic links. sub Repository { my($my_dir) = Cwd::cwd(); my $dir; foreach $dir (@_) { my($d) = `$^X -e "use Cwd; chdir('$dir') && print cwd"`; next if ! $d || ! -d $d || $d eq $my_dir; # We know we can get away with passing undef to lookupdir # as the directory because $dir is an absolute path. push(@param::rpath, dir::lookupdir(undef, $dir)); push @INC, $d; } } # Return the list of Repository directories specified. sub Repository_List { map($_->path, @param::rpath); } # Specify whether the .consign signature times in repository files are, # in fact, consistent with the times on the files themselves. sub Repository_Sig_Times_OK { $param::rep_sig_times_ok = shift; } # Specify whether we should chdir to the containing directories # of Conscript files. sub Conscript_chdir { $param::conscript_chdir = shift; } # Specify files/targets that must be present and built locally, # even if they exist already-built in a Repository. sub Local { my(@files) = map($dir::cwd->lookupfile($_), @_); map($_->local(1), @files); } # Export variables to any scripts invoked from this one. 
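# A hedged usage sketch (hypothetical Construct/Conscript fragments,
# not part of this script): a top-level Construct might share a
# construction environment with a subsidiary script like this:
#
#     $env = new cons();          # in the top-level Construct
#     Export qw( env );
#     Build qw( src/Conscript );
#
#     Import qw( env );           # in src/Conscript
#     Program $env 'hello', 'hello.c';
#
# Exporting ENV, INC, or any other name listed in %special_var dies,
# because Perl keeps those variables in the main:: package.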
sub Export { my(@illegal) = grep($special_var{$_}, @_); if (@illegal) { die qq($0: cannot Export special Perl variables: @illegal\n); } @{$priv::self->{exports}} = grep(! defined $special_var{$_}, @_); } # Import variables from the export list of the caller # of the current script. sub Import { my(@illegal) = grep($special_var{$_}, @_); if (@illegal) { die qq($0: cannot Import special Perl variables: @illegal\n); } my($parent) = $priv::self->{parent}; my($imports) = $priv::self->{imports}; @{$priv::self->{exports}} = keys %$imports; my($var); foreach $var (grep(! defined $special_var{$_}, @_)) { if (!exists $imports->{$var}) { my($path) = $parent->{script}->path; die qq($0: variable "$var" not exported by file "$path"\n); } if (!defined $imports->{$var}) { my $path = $parent->{script}->path; my $err = "$0: variable \"$var\" exported but not " . "defined by file \"$path\"\n"; die $err; } ${"script::$var"} = $imports->{$var}; } } # Build an inferior script. That is, arrange to read and execute # the specified script, passing to it any exported variables from # the current script. sub Build { my(@files) = map($dir::cwd->lookupfile($_), @_); my(%imports) = map {$_ => ${"script::$_"}} @{$priv::self->{exports}}; my $file; for $file (@files) { next if $param::include && $file->path !~ /$param::include/o; my($self) = {'script' => $file, 'parent' => $priv::self, 'imports' => \%imports}; bless $self; # may want to bless into class of parent in future push(@priv::scripts, $self); } } # Set up regexps for dependencies to ignore. Should only be called once. sub Ignore { die("Ignore called more than once\n") if $param::ignore; $param::ignore = join("|", map("($_)", @_)) if @_; } # Specification of default targets. sub Default { push(@param::default_targets, map($dir::cwd->lookup($_)->path, @_)); } # Local Help. Should only be called once. sub Help { if ($param::localhelp) { print "@_\n"; exit 2; } } # Return the build name(s) of a file or file list. sub FilePath { wantarray ?
map($dir::cwd->lookupfile($_)->path, @_) : $dir::cwd->lookupfile($_[0])->path; } # Return the build name(s) of a directory or directory list. sub DirPath { wantarray ? map($dir::cwd->lookupdir($_)->path, @_) : $dir::cwd->lookupdir($_[0])->path; } # Split the search path provided into components. Look each up # relative to the current directory. # The usual path separator problems abound; for now we'll use : sub SplitPath { my($dirs) = @_; if (ref($dirs) ne "ARRAY") { $dirs = [ split(/$main::PATH_SEPARATOR/o, $dirs) ]; } map { DirPath($_) } @$dirs; } # Return true if the supplied path is available as a source file # or is buildable (by rules seen to-date in the build). sub ConsPath { my($path) = @_; my($file) = $dir::cwd->lookup($path); return $file->accessible; } # Return the source path of the supplied path. sub SourcePath { wantarray ? map($dir::cwd->lookupfile($_)->rsrcpath, @_) : $dir::cwd->lookupfile($_[0])->rsrcpath; } # Search up the tree for the specified cache directory, starting with # the current directory. Returns undef if not found, 1 otherwise. # If the directory is found, then caching is enabled. The directory # must be readable and writable. If the argument "mixtargets" is provided, # then targets may be mixed in the cache (two targets may share the same # cache file--not recommended). sub UseCache($@) { my($dir, @args) = @_; # NOTE: it's important to process arguments here regardless of whether # the cache is disabled temporarily, since the mixtargets option affects # the salt for derived signatures. for (@args) { if ($_ eq "mixtargets") { # When mixtargets is enabled, we salt the target signatures. # This is done purely to avoid a scenario whereby if # mixtargets is turned on or off after doing builds, and # if cache synchronization with -cs is used, then # cache files may be shared in the cache itself (linked # under more than one name in the cache). 
This is not bad, # per se, but simply would mean that a cache cleaning algorithm # that looked for a link count of 1 would never find those # particular files; they would always appear to be in use. $param::salt = 'M' . $param::salt; $param::mixtargets = 1; } else { die qq($0: UseCache unrecognized option "$_"\n); } } if ($param::cachedisable) { warn("Note: caching disabled by -cd flag\n"); return 1; } my($depth) = 15; while ($depth-- && ! -d $dir) { $dir = File::Spec->catdir($dir::UPDIR, $dir); } if (-d $dir) { $param::cache = $dir; return 1; } return undef; } # Salt the signature generator. The salt (a number or string) is added # into the signature of each derived file. Changing the salt will # force recompilation of all derived files. sub Salt($) { # We append the value, so that UseCache and Salt may be used # in either order without changing the signature calculation. $param::salt .= $_[0]; } # Mark files (or directories) to not be removed before building. sub Precious { map($_->{precious} = 1, map($dir::cwd->lookup($_), @_)); } # These methods are callable from Conscript files, via a cons # object. Procs beginning with _ are intended for internal use. package cons; use vars qw( %envcache ); # Instantiate a new construction environment, based on the default # environment. Overrides to the base environment may be passed in # as key/value pairs. sub new { my($package) = shift; my ($env) = {@param::defaults, @_}; @{$env->{_envcopy}} = %$env; # Note: we never change PATH $env->{_cwd} = $dir::cwd; # Save directory of environment for bless $env, $package; # any deferred name interpretation. } # Clone an environment. # Note that the working directory will be the initial directory # of the original environment. sub clone { my($env) = shift; my $clone = {@{$env->{_envcopy}}, @_}; @{$clone->{_envcopy}} = %$clone; # Note: we never change PATH $clone->{_cwd} = $env->{_cwd}; bless $clone, ref $env; } # Create a flattened hash representing the environment.
# It also contains a copy of the PATH, so that the path # may be modified if it is converted back to a hash. sub copy { my($env) = shift; (@{$env->{_envcopy}}, 'ENV' => {%{$env->{ENV}}}, @_) } # Resolve which environment to actually use for a given # target. This is just used for simple overrides. sub _resolve { return $_[0] if !$param::overrides; my($env, $tgt) = @_; my($path) = $tgt->path; my $re; for $re (@param::overrides) { next if $path !~ /$re/; # Found one. Return a combination of the original environment # and the override. my($ovr) = $param::overrides{$re}; return $envcache{$env,$re} if $envcache{$env,$re}; my($newenv) = {@{$env->{_envcopy}}, @$ovr}; @{$newenv->{_envcopy}} = %$env; $newenv->{_cwd} = $env->{_cwd}; return $envcache{$env,$re} = bless $newenv, ref $env; } return $env; } # Substitute construction environment variables into a string. # Internal function/method. sub _subst { my($env, $str) = @_; if (! defined $str) { return undef; } elsif (ref($str) eq "ARRAY") { return [ map($env->_subst($_), @$str) ]; } else { # % expansion. %% gets converted to % later, so expand any # %keyword construction that doesn't have a % in front of it, # modulo multiple %% pairs in between. # In Perl 5.005 and later, we could actually do this in one regex # using a conditional expression as follows, # while ($str =~ s/($pre)\%(\{)?([_a-zA-Z]\w*)(?(2)\})/"$1".$env->{$3}/ge) {} # The following two-step approach is backwards-compatible # to (at least) Perl5.003. 
my $pre = '^|[^\%](?:\%\%)*'; while (($str =~ s/($pre)\%([_a-zA-Z]\w*)/$1.($env->{$2}||'')/ge) || ($str =~ s/($pre)\%\{([_a-zA-Z]\w*)\}/$1.($env->{$2}||'')/ge)) {} return $str; } } sub Install { my($env) = shift; my($tgtdir) = $dir::cwd->lookupdir($env->_subst(shift)); my $file; for $file (map($dir::cwd->lookupfile($env->_subst($_)), @_)) { my($tgt) = $tgtdir->lookupfile($file->{entry}); $tgt->bind(find build::install, $file); } } sub InstallAs { my $env = shift; my $tgt = shift; my $src = shift; my @sources = (); my @targets = (); if (ref $tgt) { die "InstallAs: Source is a file and target is a list!\n" if (!ref($src)); @sources = @$src; @targets = @$tgt; } elsif (ref $src) { die "InstallAs: Target is a file and source is a list!\n"; } else { push @sources, $src; push @targets, $tgt; } if ($#sources != $#targets) { my $tn = $#targets+1; my $sn = $#sources+1; die "InstallAs: Source file list ($sn) and target file list ($tn) " . "are inconsistent in length!\n"; } else { foreach (0..$#sources) { my $tfile = $dir::cwd->lookupfile($env->_subst($targets[$_])); my $sfile = $dir::cwd->lookupfile($env->_subst($sources[$_])); $tfile->bind(find build::install, $sfile); } } } # Installation in a local build directory, # copying from the repository if it's already built there. # Functionally equivalent to: # Install $env $dir, $file; # Local "$dir/$file"; sub Install_Local { my($env) = shift; my($tgtdir) = $dir::cwd->lookupdir($env->_subst(shift)); my $file; for $file (map($dir::cwd->lookupfile($env->_subst($_)), @_)) { my($tgt) = $tgtdir->lookupfile($file->{entry}); $tgt->bind(find build::install, $file); $tgt->local(1); } } sub Objects { my($env) = shift; map($dir::cwd->relpath($_), _Objects($env, map($dir::cwd->lookupfile($env->_subst($_)), @_))) } # Called with multiple source file references (or object files). # Returns corresponding object files references. 
sub _Objects { my($env) = shift; my($suffix) = $env->{SUFOBJ}; map(_Object($env, $_, $_->{dir}->lookupfile($_->base_suf($suffix))), @_); } # Called with an object and source reference. If no object reference # is supplied, then the object file is determined implicitly from the # source file's extension. Sets up the appropriate rules for creating # the object from the source. Returns the object reference. sub _Object { my($env, $src, $obj) = @_; return $obj if $src eq $obj; # don't need to build self from self. my($objenv) = $env->_resolve($obj); my($suffix) = $src->suffix; my($builder) = $env->{SUFMAP}{$suffix}; if ($builder) { $obj->bind((find $builder($objenv)), $src); } else { die("don't know how to construct ${\$obj->path} from " . "${\$src->path}.\n"); } $obj } sub Program { my($env) = shift; my($tgt) = $dir::cwd->lookupfile(file::addsuffix($env->_subst(shift), $env->{SUFEXE})); my($progenv) = $env->_resolve($tgt); $tgt->bind(find build::command::link($progenv, $progenv->{LINKCOM}), $env->_Objects(map($dir::cwd->lookupfile($env->_subst($_)), @_))); } sub Module { my($env) = shift; my($tgt) = $dir::cwd->lookupfile($env->_subst(shift)); my($modenv) = $env->_resolve($tgt); my($com) = pop(@_); $tgt->bind(find build::command::link($modenv, $com), $env->_Objects(map($dir::cwd->lookupfile($env->_subst($_)), @_))); } sub LinkedModule { my($env) = shift; my($tgt) = $dir::cwd->lookupfile($env->_subst(shift)); my($progenv) = $env->_resolve($tgt); $tgt->bind(find build::command::linkedmodule ($progenv, $progenv->{LINKMODULECOM}), $env->_Objects(map($dir::cwd->lookupfile($env->_subst($_)), @_))); } sub Library { my($env) = shift; my($lib) = $dir::cwd->lookupfile(file::addsuffix($env->_subst(shift), $env->{SUFLIB})); my($libenv) = $env->_resolve($lib); $lib->bind(find build::command::library($libenv), $env->_Objects(map($dir::cwd->lookupfile($env->_subst($_)), @_))); } # Simple derivation: you provide target, source(s), command. # Special variables substitute into the rule. 
# Target may be a reference, in which case it is taken # to be a multiple target (all targets built at once). sub Command { my($env) = shift; my($tgt) = $env->_subst(shift); my($com) = pop(@_); my(@sources) = map($dir::cwd->lookupfile($env->_subst($_)), @_); if (ref($tgt)) { # A multi-target command. my(@tgts) = map($dir::cwd->lookupfile($_), @$tgt); die("empty target list in multi-target command\n") if !@tgts; $env = $env->_resolve($tgts[0]); my $builder = find build::command::user($env, $com, 'script'); my($multi) = build::multiple->new($builder, \@tgts); for $tgt (@tgts) { $tgt->bind($multi, @sources); } } else { $tgt = $dir::cwd->lookupfile($tgt); $env = $env->_resolve($tgt); my $builder = find build::command::user($env, $com, 'script'); $tgt->bind($builder, @sources); } } sub Depends { my($env) = shift; my($tgt) = $env->_subst(shift); my(@deps) = map($dir::cwd->lookup($env->_subst($_)), @_); if (! ref($tgt)) { $tgt = [ $tgt ]; } my($t); foreach $t (map($dir::cwd->lookupfile($_), @$tgt)) { push(@{$t->{dep}}, @deps); } } # Set up a quick scanner for the specified input file, for the # associated environment. Any use of the input file will cause the # scanner to be invoked, once only. The scanner sees just one line at # a time of the file, and is expected to return a list of # dependencies. sub QuickScan { my($env, $code, $file, $path) = @_; $dir::cwd->lookup($env->_subst($file))->{'srcscan',$env} = find scan::quickscan($code, $env, $env->_subst($path)); } # Generic builder module. Just a few default methods. Every derivable # file must have a builder object of some sort attached. Usually # builder objects are shared. package build; # Null signature for dynamic includes. sub includes { () } # Null signature for build script. sub script { () } # Not compatible with any other builder, by default. sub compatible { 0 } # Builder module for the Install command.
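# (A hedged illustration, not part of the original source: the Install
# method above binds each named file to the shared $installer instance
# via a hypothetical Conscript line such as
#
#     Install $env '#export/bin', 'myprog';
#
# after which the action in this builder copies the source into the
# target directory at build time.)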
package build::install; use vars qw( @ISA $installer ); BEGIN { @ISA = qw(build); bless $installer = {} # handle for this class. } sub find { $installer } # Caching not supported for Install: generally install is trivial anyway, # and we don't want to clutter the cache. sub cachin { undef } sub cachout { } # Do the installation. sub action { my($self, $tgt) = @_; my($src) = $tgt->{sources}[0]; main::showcom("Install ${\$src->rpath} as ${\$tgt->path}") if ($param::install && !$param::quiet); return unless $param::build; futil::install($src->rpath, $tgt); return 1; } # Builder module for generic UNIX commands. package build::command; use vars qw( @ISA %com ); BEGIN { @ISA = qw(build) } sub find { my ($class, $env, $com, $package) = @_; $com = $env->_subst($com); $package ||= ''; $com{$env,$com,$package} || do { # Remove unwanted bits from signature -- those bracketed by %( ... %) my $comsig = $com; $comsig =~ s/^\@\s*//mg; while ($comsig =~ s/%\(([^%]|%[^\(])*?%\)//g) { } my $self = { env => $env, com => $com, 'package' => $package, comsig => $comsig }; $com{$env,$com,$package} = bless $self, $class; } } # Default cache in function. sub cachin { my($self, $tgt, $sig) = @_; if (cache::in($tgt, $sig)) { if ($param::cachecom) { map { if (! s/^\@\s*//) { main::showcom($_) } } $self->getcoms($tgt); } else { printf("Retrieved %s from cache\n", $tgt->path) unless ($param::quiet); } return 1; } return undef; } # Default cache out function. sub cachout { my($self, $tgt, $sig) = @_; cache::out($tgt, $sig); } # internal routine to process variable options. # f: return file part # F: return file part, but strip any suffix # d: return directory part # b: return full path, but strip any suffix (a.k.a. return basename) # s: return only the suffix (or an empty string, if no suffix is there) # a: return the absolute path to the file # no option: return full path to file sub _variant { my($opt, $file) = @_; $opt = '' if ! 
defined $opt; if ($opt eq 'f') { return $file->{entry}; } elsif ($opt eq 'd') { return $file->{dir}->path; } elsif ($opt eq 'F') { my $subst = $file->{entry}; $subst =~ s/\.[^\.]+$//; return $subst; } elsif ($opt eq 'b') { my $subst = $file->path; $subst =~ s/\.[^\.]+$//; return $subst; } elsif ($opt eq 's') { my $subst = $file->{entry}; $subst =~ m/(\.[^\.]+)$/; return $1; } elsif ($opt eq 'a') { my $path = $file->path; if (! File::Spec->file_name_is_absolute($path)) { $path = File::Spec->catfile(Cwd::cwd(), $path); } return $path; } else { return $file->path; } } # For the signature of a basic command, we don't bother # including the command itself. This is not strictly correct, # and if we wanted to be rigorous, we might want to insist # that the command was checked for all the basic commands # like gcc, etc. For this reason we don't have an includes # method. # Call this to get the command line script: an array of # fully substituted commands. sub getcoms { my($self, $tgt) = @_; my(@coms); my $com; for $com (split(/\n/, $self->{com})) { my(@src) = (undef, @{$tgt->{sources}}); my(@src1) = @src; next if $com =~ /^\s*$/; # NOTE: we used to have a more elegant s//.../e solution # for the items below, but this caused a bus error... # Remove %( and %) -- those are only used to bracket parts # of the command that we don't depend on. $com =~ s/%[()]//g; # Deal with %n, n=1,9 and variants. while ($com =~ /%([1-9])(:([fdbsFa]?))?/) { my($match) = $&; my($src) = $src1[$1]; my($subst) = _variant($3, $src1[$1]->rfile); undef $src[$1]; $com =~ s/$match/$subst/; } # Deal with %0 aka %> and variants. while ($com =~ /%[0>](:([fdbsFa]?))?/) { my($match) = $&; my($subst) = _variant($2, $tgt); $com =~ s/$match/$subst/; } # Deal with %< (all sources except %n's already used) while ($com =~ /%<(:([fdbsFa]?))?/) { my($match) = $&; my @list = (); foreach (@src) { push(@list, _variant($2, $_->rfile)) if $_; } my($subst) = join(' ', @list); $com =~ s/$match/$subst/; } # Deal with %[ %]. 
$com =~ s{%\[(.*?)%\]}{ my($func, @args) = grep { $_ ne '' } split(/\s+/, $1); die("$0: \"$func\" is not defined.\n") unless ($self->{env}->{$func}); &{$self->{env}->{$func}}(@args); }gex; # Convert left-over %% into %. $com =~ s/%%/%/g; # White space cleanup. XXX NO WAY FOR USER TO HAVE QUOTED SPACES $com = join(' ', split(' ', $com)); next if $com =~ /^:/ && $com !~ /^:\S/; push(@coms, $com); } @coms } # Build the target using the previously specified commands. sub action { my($self, $tgt) = @_; my($env) = $self->{env}; if ($param::build) { futil::mkdir($tgt->{dir}); unlink($tgt->path) if ! $tgt->precious; } # Set environment. map(delete $ENV{$_}, keys %ENV); %ENV = %{$env->{ENV}}; # Handle multi-line commands. my $com; for $com ($self->getcoms($tgt)) { if ($com !~ s/^\@\s*//) { main::showcom($com); } if ($param::build) { if ($com =~ /^\[perl\]\s*/) { my $perlcmd = $'; my $status; { # Restore the script package variables that were defined # in the Conscript file that defined this [perl] build, # so the code executes with the expected variables. my($package) = $self->{'package'}; my($pkgvars) = $tgt->{conscript}->{pkgvars}; NameSpace::restore($package, $pkgvars) if $pkgvars; # Actually execute the [perl] command to build the target. $status = eval "package $package; $perlcmd"; # Clean up the namespace by deleting the package variables # we just restored. NameSpace::remove($package, keys %$pkgvars) if $pkgvars; } if (!defined($status)) { warn "$0: *** Error during perl command eval: $@.\n"; return undef; } elsif ($status == 0) { warn "$0: *** Perl command returned $status (this indicates an error).\n"; return undef; } next; } #--------------------- # Can't fork on Win32 #--------------------- if ($main::_WIN32) { system($com); if ($?) { my ($b0, $b1) = ($? & 0xFF, $? 
>> 8); my $err = $b1 || $?; my $path = $tgt->path; my $warn = qq($0: *** [$path] Error $err); $warn .= " (executable not found in path?)" if $b1 == 0xFF; warn "$warn\n"; return undef; } } else { my($pid) = fork(); die("$0: unable to fork child process ($!)\n") if !defined $pid; if (!$pid) { # This is the child. We eval the command to suppress -w # warnings about not reaching the statements afterwards. eval 'exec($com)'; $com =~ s/\s.*//; die qq($0: failed to execute "$com" ($!). ) . qq(Is this an executable on path "$ENV{PATH}"?\n); } for (;;) { do {} until wait() == $pid; my ($b0, $b1) = ($? & 0xFF, $? >> 8); # Don't actually see 0177 on stopped process; is this necessary? next if $b0 == 0177; # process stopped; we can wait. if ($b0) { my($core, $sig) = ($b0 & 0200, $b0 & 0177); my($coremsg) = $core ? "; core dumped" : ""; $com =~ s/\s.*//; my $path = $tgt->path; my $err = "$0: *** \[$path\] $com terminated by signal " . "$sig$coremsg\n"; warn $err; return undef; } if ($b1) { my($path) = $tgt->path; warn qq($0: *** [$path] Error $b1\n); # trying to be like make. return undef; } last; } } } } # success. return 1; } # Return script signature. sub script { $_[0]->{comsig} } # Link a program. package build::command::link; use vars qw( @ISA ); BEGIN { @ISA = qw(build::command) } # Find an appropriate linker. sub find { my($class, $env, $command) = @_; if (!exists $env->{_LDIRS}) { my($ldirs) = ''; my($wd) = $env->{_cwd}; my($pdirs) = $env->{LIBPATH}; if (!
defined $pdirs) { $pdirs = [ ]; } elsif (ref($pdirs) ne 'ARRAY') { $pdirs = [ split(/$main::PATH_SEPARATOR/o, $pdirs) ]; } my $dir; for $dir (map($wd->lookupdir($env->_subst($_)), @$pdirs)) { my($dpath) = $dir->path; $ldirs .= " ".$env->{LIBDIRPREFIX}.$dpath; next if File::Spec->file_name_is_absolute($dpath); if (@param::rpath) { my $d; if ($dpath eq $dir::CURDIR) { foreach $d (map($_->path, @param::rpath)) { $ldirs .= " ".$env->{LIBDIRPREFIX}.$d; } } else { foreach $d (map($_->path, @param::rpath)) { $ldirs .= " ".$env->{LIBDIRPREFIX}.File::Spec->catfile($d, $dpath); } } } } $env->{_LDIRS} = "%($ldirs%)"; } # Introduce a new magic _LIBS symbol which allows to use the # Unix-style -lNAME syntax for Win32 only. -lNAME will be replaced # with %{PREFLIB}NAME%{SUFLIB}. 1998-06-18 if ($main::_WIN32 && !exists $env->{_LIBS}) { my $libs; my $name; for $name (split(' ', $env->_subst($env->{LIBS} || ''))) { if ($name =~ /^-l(.*)/) { $name = "$env->{PREFLIB}$1$env->{SUFLIB}"; } $libs .= ' ' . $name; } $env->{_LIBS} = $libs ? "%($libs%)" : ''; } bless find build::command($env, $command); } # Called from file::build. Make sure any libraries needed by the # environment are built, and return the collected signatures # of the libraries in the path. sub includes { return $_[0]->{sig} if exists $_[0]->{sig}; my($self, $tgt) = @_; my($env) = $self->{env}; my($ewd) = $env->{_cwd}; my $ldirs = $env->{LIBPATH}; if (! 
defined $ldirs) { $ldirs = [ ]; } elsif (ref($ldirs) ne 'ARRAY') { $ldirs = [ split(/$main::PATH_SEPARATOR/o, $ldirs) ]; } my @lpath = map($ewd->lookupdir($_), @$ldirs); my(@sigs); my(@names); if ($main::_WIN32) { # Pass %LIBS symbol through %-substitution # 1998-06-18 @names = split(' ', $env->_subst($env->{LIBS} || '')); } else { @names = split(' ', $env->{LIBS} || ''); } my $name; for $name (@names) { my ($lpath, @allnames); if ($name =~ /^-l(.*)/) { # -l style names are looked up on LIBPATH, using all # possible lib suffixes in the same search order the # linker uses (according to SUFLIBS). # Recognize new PREFLIB symbol, which should be 'lib' on # Unix, and empty on Win32. TODO: What about shared # library suffixes? 1998-05-13 @allnames = map("$env->{PREFLIB}$1$_", split(/:/, $env->{SUFLIBS})); $lpath = \@lpath; } else { @allnames = ($name); # On Win32, all library names are looked up in LIBPATH # 1998-05-13 if ($main::_WIN32) { $lpath = [$dir::top, @lpath]; } else { $lpath = [$dir::top]; } } my $dir; DIR: for $dir (@$lpath) { my $n; for $n (@allnames) { my($lib) = $dir->lookup_accessible($n); if ($lib) { last DIR if $lib->ignore; if ((build $lib) eq 'errors') { $tgt->{status} = 'errors'; return undef; } push(@sigs, 'sig'->signature($lib)); last DIR; } } } } $self->{sig} = 'sig'->collect(@sigs); } # Always compatible with other such builders, so the user # can define a single program or module from multiple places. sub compatible { my($self, $other) = @_; ref($other) eq "build::command::link"; } # Link a module. package build::command::linkedmodule; use vars qw( @ISA ); BEGIN { @ISA = qw(build::command) } # Always compatible with other such builders, so the user # can define a single linked module from multiple places.
sub compatible { my($self, $other) = @_; ref($other) eq "build::command::linkedmodule"; } # Builder for a C module package build::command::cc; use vars qw( @ISA ); BEGIN { @ISA = qw(build::command) } sub find { $_[1]->{_cc} || do { my($class, $env) = @_; my($cpppath) = $env->_subst($env->{CPPPATH}); my($cscanner) = find scan::cpp($env->{_cwd}, $cpppath); $env->{_IFLAGS} = "%(" . $cscanner->iflags($env) . "%)"; my($self) = find build::command($env, $env->{CCCOM}); $self->{scanner} = $cscanner; bless $env->{_cc} = $self; } } # Invoke the associated C scanner to get signature of included files. sub includes { my($self, $tgt) = @_; $self->{scanner}->includes($tgt, $tgt->{sources}[0]); } # Builder for a C++ module package build::command::cxx; use vars qw( @ISA ); BEGIN { @ISA = qw(build::command) } sub find { $_[1]->{_cxx} || do { my($class, $env) = @_; my($cpppath) = $env->_subst($env->{CPPPATH}); my($cscanner) = find scan::cpp($env->{_cwd}, $cpppath); $env->{_IFLAGS} = "%(" . $cscanner->iflags($env) . "%)"; my($self) = find build::command($env, $env->{CXXCOM}); $self->{scanner} = $cscanner; bless $env->{_cxx} = $self; } } # Invoke the associated C scanner to get signature of included files. sub includes { my($self, $tgt) = @_; $self->{scanner}->includes($tgt, $tgt->{sources}[0]); } # Builder for a user command (cons::Command). We assume that a user # command might be built and implement the appropriate dependencies on # the command itself (actually, just on the first word of the command # line). package build::command::user; use vars qw( @ISA ); BEGIN { @ISA = qw(build::command) } # XXX Optimize this to not use ignored paths. sub comsig { return $_[0]->{_comsig} if exists $_[0]->{_comsig}; my($self, $tgt) = @_; my($env) = $self->{env}; $self->{_comsig} = ''; my $com; com: for $com (split(/[\n;]/, $self->script)) { # Isolate command word. $com =~ s/^\s*//; $com =~ s/\s.*//; next if !$com; # blank line my($pdirs) = $env->{ENV}->{PATH}; if (! 
defined $pdirs) { $pdirs = [ ]; } elsif (ref($pdirs) ne 'ARRAY') { $pdirs = [ split(/$main::PATH_SEPARATOR/o, $pdirs) ]; } my $dir; for $dir (map($dir::top->lookupdir($_), @$pdirs)) { my($prog) = $dir->lookup_accessible($com); if ($prog) { # XXX Not checking execute permission. if ((build $prog) eq 'errors') { $tgt->{status} = 'errors'; return undef; } next com if $prog->ignore; $self->{_comsig} .= 'sig'->signature($prog); next com; } } # Not found: let shell give an error. } $self->{_comsig} } sub includes { my($self, $tgt) = @_; my($sig) = ''; # Check for any quick scanners attached to source files. my $dep; for $dep (@{$tgt->{dep}}, @{$tgt->{sources}}) { my($scanner) = $dep->{'srcscan',$self->{env}}; if ($scanner) { $sig .= $scanner->includes($tgt, $dep); } } # Add the command signature. return &comsig . $sig; } # Builder for a library module (archive). # We assume that a user command might be built and implement the # appropriate dependencies on the command itself. package build::command::library; use vars qw( @ISA ); BEGIN { @ISA = qw(build::command) } sub find { my($class, $env) = @_; bless find build::command($env, $env->{ARCOM}) } # Always compatible with other library builders, so the user # can define a single library from multiple places. sub compatible { my($self, $other) = @_; ref($other) eq "build::command::library"; } # A multi-target builder. # This allows multiple targets to be associated with a single build # script, without forcing all the code to be aware of multiple targets. 
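# A hedged sketch (hypothetical Conscript fragment, not part of this
# script) of how a build::multiple object gets created, via the
# multi-target form of the Command method:
#
#     Command $env ['y.tab.c', 'y.tab.h'], 'grammar.y', '%YACC -d %<';
#
# Both targets are bound to one shared build::multiple instance, so
# the command runs only once, while the signature of every target in
# the group is still updated.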
package build::multiple; sub new { my($class, $builder, $tgts) = @_; bless { 'builder' => $builder, 'tgts' => $tgts }; } sub script { my($self, $tgt) = @_; $self->{builder}->script($tgt); } sub includes { my($self, $tgt) = @_; $self->{builder}->includes($tgt); } sub compatible { my($self, $tgt) = @_; $self->{builder}->compatible($tgt); } sub cachin { my($self, $tgt, $sig) = @_; $self->{builder}->cachin($tgt, $sig); } sub cachout { my($self, $tgt, $sig) = @_; $self->{builder}->cachout($tgt, $sig); } sub action { my($self, $invoked_tgt) = @_; return $self->{built} if exists $self->{built}; # Make sure all targets in the group are unlinked before building any. my($tgts) = $self->{tgts}; my $tgt; for $tgt (@$tgts) { futil::mkdir($tgt->{dir}); unlink($tgt->path) if ! $tgt->precious; } # Now do the action to build all the targets. For consistency # we always call the action on the first target, just so that # the %> target substitution is deterministic. $self->{built} = $self->{builder}->action($tgts->[0]); # Now "build" all the other targets (except for the one # we were called with). This guarantees that the signature # of each target is updated appropriately. We force the # targets to be built even if they have been previously # considered and found to be OK; the only effect this # has is to make sure that signature files are updated # correctly. for $tgt (@$tgts) { if ($tgt ne $invoked_tgt) { delete $tgt->{status}; 'sig'->invalidate($tgt); build $tgt; } } # Status of action. $self->{built}; } # Generic scanning module. package scan; # Returns the signature of files included by the specified files on # behalf of the associated target. Any errors in handling the included # files are propagated to the target on whose behalf this processing # is being done. Signatures are cached for each unique file/scanner # pair.
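# For example (an illustrative walk-through, derived from the code
# below): if A.c includes A.h, and A.h in turn includes B.h, then
# includes($tgt, A.c) scans A.c, caches [A.h] under A.c, scans A.h,
# caches [B.h] under A.h, and collects the signatures of all three
# files; a later call on behalf of another target reuses the cached
# include lists without reopening any of the files.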
sub includes { my($self, $tgt, @files) = @_; my(%files, $file); my($inc) = $self->{includes} || ($self->{includes} = {}); while ($file = pop @files) { next if exists $files{$file}; if ($inc->{$file}) { push(@files, @{$inc->{$file}}); $files{$file} = 'sig'->signature($file->rfile); } else { if ((build $file) eq 'errors') { $tgt->{status} = 'errors'; # tgt inherits build status return (); } $files{$file} = 'sig'->signature($file->rfile); my(@includes) = $self->scan($file); $inc->{$file} = \@includes; push(@files, @includes); } } 'sig'->collect(sort values %files) } # A simple scanner. This is used by the QuickScan function, to set up # one-time target and environment-independent scanning for a source # file. Only used for commands run by the Command method. package scan::quickscan; use vars qw( @ISA %scanner ); BEGIN { @ISA = qw(scan) } sub find { my($class, $code, $env, $pdirs) = @_; if (! defined $pdirs) { $pdirs = [ ] ; } elsif (ref($pdirs) ne 'ARRAY') { $pdirs = [ split(/$main::PATH_SEPARATOR/o, $pdirs) ]; } my(@path) = map { $dir::cwd->lookupdir($_) } @$pdirs; my($spath) = "@path"; $scanner{$code,$env,$spath} || do { my($self) = { code => $code, env => $env, path => \@path }; $scanner{$code,$env,$spath} = bless $self; } } # Scan the specified file for included file names. sub scan { my($self, $file) = @_; my($code) = $self->{code}; my(@includes); # File should have been built by now. If not, we'll ignore it. return () unless open(SCAN, $file->rpath); while (<SCAN>) { push(@includes, grep($_ ne '', &$code)); } close(SCAN); my($wd) = $file->{dir}; my(@files); my $name; for $name (@includes) { my $dir; for $dir ($file->{dir}, @{$self->{path}}) { my($include) = $dir->lookup_accessible($name); if ($include) { push(@files, $include) unless $include->ignore; last; } } } @files } # CPP (C preprocessor) scanning module. package scan::cpp; use vars qw( @ISA %scanner ); BEGIN { @ISA = qw(scan) } # For this constructor, provide the include path argument (colon # separated).
Each path is taken relative to the provided directory. # Note: a particular scanning object is assumed to always return the # same result for the same input. This is why the search path is a # parameter to the constructor for a CPP scanning object. We go to # some pains to make sure that we return the same scanner object # for the same path: otherwise we will unnecessarily scan files. sub find { my($class, $dir, $pdirs) = @_; if (! defined $pdirs) { $pdirs = [ ]; } elsif (ref($pdirs) ne 'ARRAY') { $pdirs = [ split(/$main::PATH_SEPARATOR/o, $pdirs) ]; } my @path = map($dir->lookupdir($_), @$pdirs); my($spath) = "@path"; $scanner{$spath} || do { my($self) = {'path' => \@path}; $scanner{$spath} = bless $self; } } # Scan the specified file for include lines. sub scan { my($self, $file) = @_; my($angles, $quotes); if (exists $file->{angles}) { $angles = $file->{angles}; $quotes = $file->{quotes}; } else { my(@anglenames, @quotenames); return () unless open(SCAN, $file->rpath); while (<SCAN>) { next unless /^\s*#/; if (/^\s*#\s*include\s*([<"])(.*?)[>"]/) { if ($1 eq "<") { push(@anglenames, $2); } else { push(@quotenames, $2); } } } close(SCAN); $angles = $file->{angles} = \@anglenames; $quotes = $file->{quotes} = \@quotenames; } my(@shortpath) = @{$self->{path}}; # path for <> style includes my(@longpath) = ($file->{dir}, @shortpath); # path for "" style includes my(@includes); my $name; for $name (@$angles) { my $dir; for $dir (@shortpath) { my($include) = $dir->lookup_accessible($name); if ($include) { push(@includes, $include) unless $include->ignore; last; } } } for $name (@$quotes) { my $dir; for $dir (@longpath) { my($include) = $dir->lookup_accessible($name); if ($include) { push(@includes, $include) unless $include->ignore; last; } } } return @includes } # Return the include flags that would be used for a C Compile.
sub iflags { my($self, $env) = @_; my($iflags) = ''; my($dpath); for $dpath (map($_->path, @{$self->{path}})) { $iflags .= " ".$env->{INCDIRPREFIX}.$dpath; next if File::Spec->file_name_is_absolute($dpath); if (@param::rpath) { my $d; if ($dpath eq $dir::CURDIR) { foreach $d (map($_->path, @param::rpath)) { $iflags .= " ".$env->{INCDIRPREFIX}.$d; } } else { foreach $d (map($_->path, @param::rpath)) { $iflags .= " ".$env->{INCDIRPREFIX}.File::Spec->catfile($d, $dpath); } } } } $iflags } package File::Spec; use vars qw( $_SEP $_MATCH_SEP $_MATCH_VOL ); # Cons is migrating to using File::Spec for portable path name # manipulation. This is the right long-term direction, but there are # some problems with making the transition: # # For multi-volume support, we need to use newer interfaces # (splitpath, catpath, splitdir) that are only available in # File::Spec 0.8. # # File::Spec 0.8 doesn't work with Perl 5.00[34] due to # regular expression incompatibilities (use of \z). # # Forcing people to use a new version of a module is painful # because (in the workplace) their administrators aren't # always going to agree to install it everywhere. # # As a middle ground, we provide our own versions of all the File::Spec # methods we use, supporting both UNIX and Win32. Some of these methods # are home brew, some are cut-and-pasted from the real File::Spec methods. # This way, we're not reinventing the whole wheel, at least. # # We can (and should) get rid of this class whenever 5.00[34] and # versions of File::Spec prior to 0.9 (?) have faded sufficiently. # We also may need to revisit whenever someone first wants to use # Cons on some platform other than UNIX or Win32. 
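# As an illustrative sketch of how the home-brew methods below are
# expected to behave (not executed by Cons itself; the values shown
# are documentation assumptions, not captured output):
#
#     my($vol, $dirs, $file) = File::Spec->splitpath('C:\src\foo.c');
#     # On Win32: $vol eq 'C:', $dirs eq '\src\', $file eq 'foo.c'
#
#     File::Spec->catfile('src', 'sub', 'foo.c');
#     # 'src\sub\foo.c' on Win32, 'src/sub/foo.c' on UNIX
#
#     File::Spec->canonpath('./xx//yy/');
#     # 'xx/yy' on UNIX: collapses '//', strips leading './' and the
#     # trailing separator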
BEGIN { if ($main::_WIN32) { $_SEP = '\\'; $_MATCH_SEP = "[\Q/$_SEP\E]"; $_MATCH_VOL = "([a-z]:)?$_MATCH_SEP"; } else { $_SEP = '/'; $_MATCH_SEP = "\Q$_SEP\E"; $_MATCH_VOL = $_MATCH_SEP; } } sub canonpath { my ($self, $path) = @_; if ($main::_WIN32) { $path =~ s/^([a-z]:)/\u$1/s; $path =~ s|/|\\|g; $path =~ s|([^\\])\\+|$1\\|g; # xx////xx -> xx/xx $path =~ s|(\\\.)+\\|\\|g; # xx/././xx -> xx/xx $path =~ s|^(\.\\)+||s unless $path eq ".\\"; # ./xx -> xx $path =~ s|\\$|| unless $path =~ m#^([A-Z]:)?\\$#s; # xx/ -> xx } else { $path =~ s|/+|/|g unless($^O eq 'cygwin'); # xx////xx -> xx/xx $path =~ s|(/\.)+/|/|g; # xx/././xx -> xx/xx $path =~ s|^(\./)+||s unless $path eq "./"; # ./xx -> xx $path =~ s|^/(\.\./)+|/|s; # /../../xx -> xx $path =~ s|/$|| unless $path eq "/"; # xx/ -> xx } return $path; } sub catdir { my $self = shift; my @args = @_; foreach (@args) { # append a slash to each argument unless it has one there $_ .= $_SEP if $_ eq '' || substr($_,-1) ne $_SEP; } return $self->canonpath(join('', @args)); } sub catfile { my $self = shift; my $file = pop @_; return $file unless @_; my $dir = $self->catdir(@_); $dir .= $_SEP unless substr($dir,-1) eq $_SEP; $file = '' if ! defined($file); return $dir.$file; } sub catpath { my $path = $_[1] . $_[0]->catfile(@_[2..$#_]); $path =~ s/(.)$_MATCH_SEP*$/$1/; $path; } sub curdir { '.' } sub file_name_is_absolute { my ($self, $file) = @_; return scalar($file =~ m{^$_MATCH_VOL}is); } sub splitdir { my @dirs = split(/$_MATCH_SEP/, $_[1], -1); push(@dirs, '') if $dirs[$#dirs]; @dirs; } sub splitpath { my ($self, $path) = @_; my $vol = ''; my $sep = $_SEP; if ($main::_WIN32) { if ($path =~ s#^([A-Za-z]:|(?:\\\\|//)[^\\/]+[\\/][^\\/]+)([\\/])#$2#) { $vol = $1; $sep = $2; } } my(@path) = split(/$_MATCH_SEP/, $path, -1); my $file = pop @path; my $dirs = join($sep, @path, ''); return ($vol, $dirs, $file); } sub updir { '..' } sub case_tolerant { return $main::_WIN32; } # Directory and file handling. 
Files/dirs are represented by objects. # Other packages are welcome to add component-specific attributes. package dir; use vars qw( $SEPARATOR $MATCH_SEPARATOR $CURDIR $UPDIR $cwd_vol %root $top $cwd ); BEGIN { # A portable way of determining our directory separator. $SEPARATOR = File::Spec->catdir('', ''); # A fast-path regular expression to match a directory separator # anywhere in a path name. if ($SEPARATOR eq '/') { $MATCH_SEPARATOR = "\Q$SEPARATOR\E"; } else { $MATCH_SEPARATOR = "[\Q/$SEPARATOR\E]"; } # Cache these values so we don't have to make a method call # every time we need them. $CURDIR = File::Spec->curdir; # '.' on UNIX $UPDIR = File::Spec->updir; # '..' on UNIX # $cwd_vol = ''; } # Annotate a node (file or directory) with info about the # method that created it. sub creator { my($self, @frame) = @_; $self->{'creator'} = \@frame if @frame; $self->{'creator'}; } # Handle a file|dir type exception. We only die if we find we were # invoked by something in a Conscript/Construct file, because # dependencies created directly by Cons' analysis shouldn't cause # an error. sub _type_exception { my($e) = @_; my($line, $sub); (undef, undef, $line, $sub) = script::caller_info; if (defined $line) { my $err = "\"${\$e->path}\" already in use as a " . ref($e) . " before $sub on line $line"; if ($e->{'creator'}) { my $script; (undef, $script, $line, $sub) = @{$e->{'creator'}}; $err = "\t" . $err . ",\n\t\tdefined by $sub in $script, line $line"; } $err .= "\n"; die $err; } } # This wraps up all the common File::Spec logic that we use for parsing # directory separators in a path and turning it into individual # subdirectories that we must create, as well as creation of root # nodes for any new file system volumes we find. File::Spec doesn't have # intuitively obvious interfaces, so this is heavily commented. # # Note: This is NOT an object or class method; # it's just a utility subroutine.
sub _parse_path { my($dir, $path) = @_; # Convert all slashes to the native directory separator. # This allows Construct files to always be written with good # old POSIX path names, regardless of what we're running on. $path = File::Spec->canonpath($path); # File::Spec doesn't understand the Cons convention of # an initial '#' for top-relative files. Strip it. my($toprel) = $path =~ s/^#//; # Let File::Spec do the heavy lifting of parsing the path name. my($vol, $directories, $entry) = File::Spec->splitpath($path); my @dirs = File::Spec->splitdir($directories); # If there was a file entry on the end of the path, then the # last @dirs element is '' and we don't need it. If there # wasn't a file entry on the end (File::Spec->splitpath() knew # the last component was a directory), then the last @dirs # element becomes the entry we want to look up. my($e) = pop @dirs; $entry = $e if $entry eq ''; if (File::Spec->file_name_is_absolute($path)) { # An absolute path name. If no volume was supplied, # use the volume of our current directory. $vol = $cwd_vol if $vol eq ''; $vol = uc($vol) if File::Spec->case_tolerant; if (! defined $root{$vol}) { # This is our first time looking up a path name # on this volume, so create a root node for it. # (On UNIX systems, $vol is always '', so '/' # always maps to the $root{''} node.) $root{$vol} = {path => $vol.$SEPARATOR, prefix => $vol.$SEPARATOR, srcpath => $vol.$SEPARATOR, 'exists' => 1 }; $root{$vol}->{'srcdir'} = $root{$vol}; bless $root{$vol}; } # We're at the top, so strip the blank entry from the front of # the @dirs array since the initial '/' it represents will now # be supplied by the root node we return. shift @dirs; $dir = $root{$vol}; } elsif ($toprel) { $dir = $dir::top; } ($dir, \@dirs, $entry); } # Common subroutine for creating directory nodes. sub _create_dirs { my ($dir, @dirs) = @_; my $e; foreach $e (@dirs) { my $d = $dir->{member}->{$e}; if (! 
defined $d) { bless $d = { 'entry' => $e, 'dir' => $dir, }, 'dir'; $d->creator(script::caller_info); $d->{member}->{$dir::CURDIR} = $d; $d->{member}->{$dir::UPDIR} = $dir; $dir->{member}->{$e} = $d; } elsif (ref $d eq 'entry') { bless $d, 'dir'; $d->{member}->{$dir::CURDIR} = $d; $d->{member}->{$dir::UPDIR} = $dir; } elsif (ref $d eq 'file') { # This clause is to supply backwards compatibility, # with a warning, for anyone that's used FilePath # to refer to a directory. After people using # 1.8 have had time to adjust (sometime in version # 1.9 or later), we should remove this entire clause. my($script, $line, $sub); (undef, $script, $line, $sub) = @{$d->{'creator'}}; if ($sub eq 'script::FilePath') { print STDERR "$0: Warning: $sub used to refer to a directory\n" . "\tat line $line of $script. Use DirPath instead.\n"; bless $d, 'dir'; } else { _type_exception($d); } } elsif (ref $d ne 'dir') { _type_exception($d); } $dir = $d; } $dir; } # Look up an entry in a directory. This method is for when we don't # care whether a file or directory is returned, so if the entry already # exists, it will simply be returned. If not, we create it as a # generic "entry" which can be later turned into a file or directory # by a more-specific lookup. # # The file entry may be specified as relative, absolute (starts with /), # or top-relative (starts with #). sub lookup { my($dir, $entry) = @_; if ($entry !~ m#$MATCH_SEPARATOR#o) { # Fast path: simple entry name in a known directory. if ($entry =~ s/^#//) { # Top-relative names begin with #. $dir = $dir::top; } } else { my $dirsref; ($dir, $dirsref, $entry) = _parse_path($dir, $entry); $dir = _create_dirs($dir, @$dirsref) if @$dirsref; return if ! defined $dir; return $dir if $entry eq ''; } my $e = $dir->{member}->{$entry}; if (! defined $e) { bless $e = { 'entry' => $entry, 'dir' => $dir, }, 'entry'; $e->creator(script::caller_info); $dir->{member}->{$entry} = $e; } $e; } # Look up a file entry in a directory.
# # The file entry may be specified as relative, absolute (starts with /), # or top-relative (starts with #). sub lookupfile { my($dir, $entry) = @_; if ($entry !~ m#$MATCH_SEPARATOR#o) { # Fast path: simple entry name in a known directory. if ($entry =~ s/^#//) { # Top-relative names begin with #. $dir = $dir::top; } } else { my $dirsref; ($dir, $dirsref, $entry) = _parse_path($dir, $entry); $dir = _create_dirs($dir, @$dirsref) if @$dirsref; return undef if $entry eq ''; } my $f = $dir->{member}->{$entry}; if (! defined $f) { bless $f = { 'entry' => $entry, 'dir' => $dir, }, 'file'; $f->creator(script::caller_info); $dir->{member}->{$entry} = $f; } elsif (ref $f eq 'entry') { bless $f, 'file'; } elsif (ref $f ne 'file') { _type_exception($f); } $f; } # Look up a (sub-)directory entry in a directory. # # The (sub-)directory entry may be specified as relative, absolute # (starts with /), or top-relative (starts with #). sub lookupdir { my($dir, $entry) = @_; my $dirsref; if ($entry !~ m#$MATCH_SEPARATOR#o) { # Fast path: simple entry name in a known directory. if ($entry =~ s/^#//) { # Top-relative names begin with #. $dir = $dir::top; } } else { ($dir, $dirsref, $entry) = _parse_path($dir, $entry); } _create_dirs($dir, @$dirsref, $entry); } # Look up a file entry and return it if it's accessible. sub lookup_accessible { my $file = $_[0]->lookupfile($_[1]); return ($file && $file->accessible) ? $file : undef; } # Return the parent directory without doing a lookupdir, # which would create a parent if it doesn't already exist. # A return value of undef (! $dir->up) indicates a root directory. sub up { $_[0]->{member}->{$dir::UPDIR}; } # Return whether this is an entry somewhere underneath the # specified directory. sub is_under { my $dir = $_[0]; while ($dir) { return 1 if $_[1] == $dir; $dir = $dir->up; } return undef; } # Return the relative path from the calling directory ($_[1]) # to the object. 
If the object is not under the directory, then # we return it as a top-relative or absolute path name. sub relpath { my ($dir, $obj) = @_; my @dirs; my $o = $obj; while ($o) { if ($dir == $o) { if (@dirs < 2) { return $dirs[0] || ''; } else { return File::Spec->catdir(@dirs); } } unshift(@dirs, $o->{entry}); $o = $o->up; } # The object was not underneath the specified directory. # Use the node's cached path, which is either top-relative # (in which case we append '#' to the beginning) or # absolute. my $p = $obj->path; $p = '#' . $p if ! File::Spec->file_name_is_absolute($p); return $p; } # Return the path of the directory (file paths implemented # separately, below). sub path { $_[0]->{path} || ($_[0]->{path} = $_[0]->{dir}->prefix . $_[0]->{entry}); } # Return the pathname as a prefix to be concatenated with an entry. sub prefix { return $_[0]->{prefix} if exists $_[0]->{prefix}; $_[0]->{prefix} = $_[0]->path . $SEPARATOR; } # Return the related source path prefix. sub srcprefix { return $_[0]->{srcprefix} if exists $_[0]->{srcprefix}; my($srcdir) = $_[0]->srcdir; $srcdir->{srcprefix} = $srcdir eq $_[0] ? $srcdir->prefix : $srcdir->srcprefix; } # Return the related source directory. sub srcdir { $_[0]->{'srcdir'} || ($_[0]->{'srcdir'} = $_[0]->{dir}->srcdir->lookupdir($_[0]->{entry})) } # Return if the directory is linked to a separate source directory. sub is_linked { return $_[0]->{is_linked} if defined $_[0]->{is_linked}; $_[0]->{is_linked} = $_[0]->path ne $_[0]->srcdir->path; } sub link { my(@paths) = @_; my($srcdir) = $dir::cwd->lookupdir(pop @paths)->srcdir; map($dir::cwd->lookupdir($_)->{'srcdir'} = $srcdir, @paths); # make a reverse lookup for the link. $srcdir->{links} = [] if ! $srcdir->{links}; push @{$srcdir->{links}}, @paths; } use vars qw( @tail ); # TODO: Why global ???? 
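# For illustration (a hedged sketch, not executed here; 'bld' and 'src'
# are hypothetical directory names), the Link bookkeeping above is
# typically driven from a Construct/Conscript file like so:
#
#     Link 'bld' => 'src';   # dir::link: bld's srcdir becomes src
#     # After this, for a node looked up as bld/foo.c:
#     #   srcpath() yields 'src/foo.c' (via srcdir/srcprefix)
#     #   is_linked() is true, because path ne srcdir->path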
sub linked_targets { my $tgt = shift; my @targets = (); my $dir; if (ref $tgt eq 'dir') { $dir = $tgt; } else { push @tail, $tgt; $dir = $tgt->{dir}; } while ($dir) { if (defined $dir->{links} && @{$dir->{links}}) { push(@targets, map(File::Spec->catdir($_, @tail), @{$dir->{links}})); #print STDERR "Found Link: ${\$dir->path} -> @{\$dir->{links}}\n"; } unshift @tail, $dir->{entry}; $dir = $dir->up; } return map($dir::top->lookupdir($_), @targets); } sub accessible { my $path = $_[0]->path; my $err = "$0: you have attempted to use path \"$path\" both as a file " . "and as a directory!\n"; die $err; } sub init { my $path = Cwd::cwd(); # We know we can get away with passing undef to lookupdir # as the directory because $path is an absolute path. $top = lookupdir(undef, $path); $top->{'path'} = $top->{srcpath} = $dir::CURDIR; $top->{'prefix'} = ''; $top->{'srcdir'} = $top; $cwd = $top; ($cwd_vol, undef, undef) = File::Spec->splitpath($path); $cwd_vol = '' if ! defined $cwd_vol; $cwd_vol = uc($cwd_vol) if File::Spec->case_tolerant; } package file; use vars qw( @ISA $level ); BEGIN { @ISA = qw(dir); $level = 0 } # Return the pathname of the file. # Define this separately from dir::path because we don't want to # cache all file pathnames (just directory pathnames). sub path { $_[0]->{dir}->prefix . $_[0]->{entry} } # Return the related source file path. sub srcpath { $_[0]->{dir}->srcprefix . $_[0]->{entry} } # Return if the file is (should be) linked to a separate source file. sub is_linked { $_[0]->{dir}->is_linked } # Repository file search. If the local file exists, that wins. # Otherwise, return the first existing same-named file under a # Repository directory. If there isn't anything with the same name # under a Repository directory, return the local file name anyway # so that some higher layer can try to construct it. sub rfile { return $_[0]->{rfile} if exists $_[0]->{rfile}; my($self) = @_; my($rfile) = $self; if (@param::rpath) { my($path) = $self->path; if (!
File::Spec->file_name_is_absolute($path) && ! -f $path) { my($dir); foreach $dir (@param::rpath) { my($t) = $dir->prefix . $path; if (-f $t) { $rfile = $_[0]->lookupfile($t); $rfile->{is_on_rpath} = 1; last; } } } } $self->{rfile} = $rfile; } # returns the "precious" status of this file. sub precious { return $_[0]->{precious}; } # "Erase" reference to a Repository file, # making this a completely local file object # by pointing it back to itself. sub no_rfile { $_[0]->{'rfile'} = $_[0]; } # Return a path to the first existing file under a Repository directory, # implicitly returning the current file's path if there isn't a # same-named file under a Repository directory. sub rpath { $_[0]->{rpath} || ($_[0]->{rpath} = $_[0]->rfile->path) } # Return a path to the first linked srcpath file under a Repository # directory, implicitly returning the current file's srcpath if there # isn't a same-named file under a Repository directory. sub rsrcpath { return $_[0]->{rsrcpath} if exists $_[0]->{rsrcpath}; my($self) = @_; my($path) = $self->{rsrcpath} = $self->srcpath; if (@param::rpath && ! File::Spec->file_name_is_absolute($path) && ! -f $path) { my($dir); foreach $dir (@param::rpath) { my($t) = $dir->prefix . $path; if (-f $t) { $self->{rsrcpath} = $t; last; } } } $self->{rsrcpath}; } # Return if a same-named source file exists. # This handles the interaction of Link and Repository logic. # As a side effect, it will link a source file from its Linked # directory (preferably local, but maybe in a repository) # into a build directory from its proper Linked directory. sub source_exists { return $_[0]->{source_exists} if defined $_[0]->{source_exists}; my($self) = @_; my($path) = $self->path; my($time) = (stat($path))[9]; if ($self->is_linked) { # Linked directory, local logic. my($srcpath) = $self->srcpath; my($srctime) = (stat($srcpath))[9]; if ($srctime) { if (!
$time || $srctime != $time) { futil::install($srcpath, $self); } return $self->{source_exists} = 1; } # Linked directory, repository logic. if (@param::rpath) { if ($self != $self->rfile) { return $self->{source_exists} = 1; } my($rsrcpath) = $self->rsrcpath; if ($path ne $rsrcpath) { my($rsrctime) = (stat($rsrcpath))[9]; if ($rsrctime) { if (! $time || $rsrctime != $time) { futil::install($rsrcpath, $self); } return $self->{source_exists} = 1; } } } # There was no source file in any Linked directory # under any Repository. If there's one in the local # build directory, it no longer belongs there. if ($time) { unlink($path) || die("$0: couldn't unlink $path ($!)\n"); } return $self->{source_exists} = ''; } else { if ($time) { return $self->{source_exists} = 1; } if (@param::rpath && $self != $self->rfile) { return $self->{source_exists} = 1; } return $self->{source_exists} = ''; } } # Return if a same-named derived file exists under a Repository directory. sub derived_exists { $_[0]->{derived_exists} || ($_[0]->{derived_exists} = ($_[0] != $_[0]->rfile)); } # Return if this file is somewhere under a Repository directory. sub is_on_rpath { $_[0]->{is_on_rpath}; } sub local { my($self, $arg) = @_; if (defined $arg) { $self->{'local'} = $arg; } $self->{'local'}; } # Return the entry name of the specified file with the specified # suffix appended. Leave it untouched if the suffix is already there. # Differs from the addsuffix function, below, in that this strips # the existing suffix (if any) before appending the desired one. sub base_suf { my($entry) = $_[0]->{entry}; if ($entry !~ m/$_[1]$/) { $entry =~ s/\.[^\.]*$//; $entry .= $_[1]; } $entry; } # Return the suffix of the file, for up to a 3 character # suffix. Anything less returns nothing. sub suffix { if (! 
$main::_WIN32) { $_[0]->{entry} =~ /\.[^\.\/]{0,3}$/; $& } else { my @pieces = split(/\./, $_[0]->{entry}); my $suffix = pop(@pieces); return ".$suffix"; } } # Called as a simple function file::addsuffix(name, suffix) sub addsuffix { my($name, $suffix) = @_; if ($suffix && substr($name, -length($suffix)) ne $suffix) { return $name .= $suffix; } $name; } # Return true if the file is (or will be) accessible. # That is, if we can build it, or if it is already present. sub accessible { (exists $_[0]->{builder}) || ($_[0]->source_exists); } # Return true if the file should be ignored for the purpose # of computing dependency information (should not be considered # as a dependency and, further, should not be scanned for # dependencies). sub ignore { return 0 if !$param::ignore; return $_[0]->{ignore} if exists $_[0]->{ignore}; $_[0]->{ignore} = $_[0]->path =~ /$param::ignore/o; } # Build the file, if necessary. sub build { $_[0]->{status} || &file::_build; } sub _build { my($self) = @_; print main::DEPFILE $self->path, "\n" if $param::depfile; print((' ' x $level), "Checking ", $self->path, "\n") if $param::depends; if (!exists $self->{builder}) { # We don't know how to build the file. This is OK, if # the file is present as a source file, under either the # local tree or a Repository. if ($self->source_exists) { return $self->{status} = 'handled'; } else { my($name) = $self->path; print("$0: don't know how to construct \"$name\"\n"); exit(1) unless $param::kflag; return $self->{status} = 'errors'; # xxx used to be 'unknown' } } # An associated build object exists, so we know how to build # the file. We first compute the signature of the file, based # on its dependencies, then only rebuild the file if the # signature has changed. my($builder) = $self->{builder}; $level += 2; my(@deps) = (@{$self->{dep}}, @{$self->{sources}}); my($rdeps) = \@deps; if ($param::random) { # If requested, build in a random order, instead of the # order that the dependencies were listed.
my(%rdeps); map { $rdeps{$_,'*' x int(rand 10)} = $_ } @deps; $rdeps = [values(%rdeps)]; } $self->{status} = ''; my $dep; for $dep (@$rdeps) { if ((build $dep) eq 'errors') { # Propagate dependent errors to target. # but try to build all dependents regardless of errors. $self->{status} = 'errors'; } } # If any dependents had errors, then we abort. if ($self->{status} eq 'errors') { $level -= 2; return 'errors'; } # Compute the final signature of the file, based on # the static dependencies (in order), dynamic dependencies, # output path name, and (non-substituted) build script. my($sig) = 'sig'->collect(map('sig'->signature($_->rfile), @deps), $builder->includes($self), $builder->script); # May have gotten errors during computation of dynamic # dependency signature, above. $level -= 2; return 'errors' if $self->{status} eq 'errors'; if (@param::rpath && $self->derived_exists) { # There is no local file of this name, but there is one # under a Repository directory. if ('sig'->current($self->rfile, $sig)) { # The Repository copy is current (its signature matches # our calculated signature). if ($self->local) { # ...but they want a local copy, so provide it. main::showcom("Local copy of ${\$self->path} from " . "${\$self->rpath}"); futil::install($self->rpath, $self); 'sig'->set($self, $sig); } return $self->{status} = 'handled'; } # The signatures don't match, implicitly because something # on which we depend exists locally. Get rid of the reference # to the Repository file; we'll build this (and anything that # depends on it) locally. $self->no_rfile; } # Then check for currency. if (! 'sig'->current($self, $sig)) { # We have to build/derive the file. print((' ' x $level), "Rebuilding ", $self->path, ": out of date.\n") if $param::depends; # First check to see if the built file is cached. 
if ($builder->cachin($self, $sig)) { 'sig'->set($self, $sig); return $self->{status} = 'built'; } elsif ($builder->action($self)) { $builder->cachout($self, $sig); 'sig'->set($self, $sig); return $self->{status} = 'built'; } else { die("$0: errors constructing ${\$self->path}\n") unless $param::kflag; return $self->{status} = 'errors'; } } else { # Push this out to the cache if we've been asked to (-C option). # Don't normally do this because it slows us down. # In a fully built system, no accesses to the cache directory # are required to check any files. This is a win if cache is # heavily shared. Enabling this option puts the directory in the # loop. Useful only when you wish to recreate a cache from a build. if ($param::cachesync) { $builder->cachout($self, $sig); 'sig'->set($self, $sig); } return $self->{status} = 'handled'; } } # Bind an action to a file, with the specified sources. No return value. sub bind { my($self, $builder, @sources) = @_; if ($self->{builder} && !$self->{builder}->compatible($builder)) { # Even if not "compatible", we can still check to see if the # derivation is identical. It should be identical if the builder is # the same and the sources are the same. if ("$self->{builder} @{$self->{sources}}" ne "$builder @sources") { $main::errors++; my($_foo1, $script1, $line1, $sub1) = @{$self->creator}; my($_foo2, $script2, $line2, $sub2) = script::caller_info; my $err = "\t${\$self->path}\n" . "\tbuilt (at least) two different ways:\n" . "\t\t$script1, line $line1: $sub1\n" . "\t\t$script2, line $line2: $sub2\n"; die $err; } return; } if ($param::wflag) { my($script, $line, $sub); (undef, $script, $line, $sub) = script::caller_info; $self->{script} = '' if ! defined $self->{script}; $self->{script} .= "; " if $self->{script}; $self->{script} .= qq($sub in "$script", line $line); } $self->{builder} = $builder; push(@{$self->{sources}}, @sources); @{$self->{dep}} = () if ! 
defined $self->{dep}; $self->{conscript} = $priv::self->{script}; } sub is_under { $_[0]->{dir}->is_under($_[1]); } sub relpath { my $dirpath = $_[0]->relpath($_[1]->{dir}); if (! $dirpath) { return $_[1]->{entry}; } else { File::Spec->catfile($dirpath, $_[1]->{entry}); } } # Generic entry (file or directory) handling. # This is an empty subclass for nodes that haven't # quite decided whether they're files or dirs. # Use file methods until someone blesses them one way or the other. package entry; use vars qw( @ISA ); BEGIN { @ISA = qw(file) } # File utilities package futil; # Install one file as another. # Links them if possible (hard link), otherwise copies. # Don't ask why, but the source is a path, the tgt is a file obj. sub install { my($sp, $tgt) = @_; my($tp) = $tgt->path; return 1 if $tp eq $sp; return 1 if eval { link($sp, $tp) }; unlink($tp); if (! futil::mkdir($tgt->{dir})) { return undef; } return 1 if eval { link($sp, $tp) }; futil::copy($sp, $tp); } # Copy one file to another. Arguments are actual file names. # Returns undef on failure. Preserves mtime and mode. sub copy { my ($sp, $tp) = @_; my ($mode, $length, $atime, $mtime) = (stat($sp))[2,7,8,9]; # Use Perl standard library module for file copying, which handles # binary copies. 1998-06-18 if (! File::Copy::copy($sp, $tp)) { warn qq($0: can\'t install "$sp" to "$tp" ($!)\n); #' return undef; } # The file has been created, so try both the chmod and utime, # first making sure the copy is writable (because permissions # affect the ability to modify file times on some operating # systems), and then changing permissions back if necessary. my $ret = 1; my $wmode = $mode | 0700; if (! chmod $wmode, $tp) { warn qq($0: can\'t set mode $wmode on file "$tp" ($!)\n); #' $ret = undef; } if (! utime $atime, $mtime, $tp) { warn qq($0: can\'t set modification time for file "$tp" ($!)\n); #' $ret = undef; } if ($mode != $wmode && ! 
chmod $mode, $tp) { warn qq($0: can\'t set mode $mode on file "$tp" ($!)\n); #' $ret = undef; } return $ret; } # Ensure that the specified directory exists. # Returns undef on failure. sub mkdir { return 1 if $_[0]->{'exists'}; if (! futil::mkdir($_[0]->{dir})) { # Recursively make parent. return undef; } my($path) = $_[0]->path; if (!-d $path && !mkdir($path, 0777)) { warn qq($0: can't create directory $path ($!).\n); #' return undef; } $_[0]->{'exists'} = 1; } # Signature package. package sig::hash; use vars qw( $called ); sub init { my($dir) = @_; my($consign) = $dir->prefix . ".consign"; my($dhash) = $dir->{consign} = {}; if (-f $consign) { open(CONSIGN, $consign) || die("$0: can't open $consign ($!)\n"); while (<CONSIGN>) { chop; my ($file, $sig) = split(/:/,$_); $dhash->{$file} = $sig; } close(CONSIGN); } $dhash } # Read the hash entry for a particular file. sub in { my($dir) = $_[0]->{dir}; ($dir->{consign} || init($dir))->{$_[0]->{entry}} } # Write the hash entry for a particular file. sub out { my($file, $sig) = @_; my($dir) = $file->{dir}; ($dir->{consign} || init($dir))->{$file->{entry}} = $sig; $sig::hash::dirty{$dir} = $dir; } # Flush hash entries. Called at end or via ^C interrupt. sub END { return if $called++; # May be called twice. close(CONSIGN); # in case this came in via ^C. my $dir; for $dir (values %sig::hash::dirty) { my($consign) = $dir->prefix . ".consign"; my($constemp) = $consign . ".$$"; if (! open(CONSIGN, ">$constemp")) { die("$0: can't create $constemp ($!)\n"); } my($entry, $sig); while (($entry, $sig) = each %{$dir->{consign}}) { if (! print CONSIGN "$entry:$sig\n") { die("$0: error writing to $constemp ($!)\n"); } } close(CONSIGN); if (! rename($constemp, $consign)) { if (futil::copy($constemp, $consign)) { unlink($constemp); } else { die("$0: couldn't rename or copy $constemp to $consign " . "($!)\n"); } } } } # Derived file caching. package cache; # Find a file in the cache. Return non-null if the file is in the cache.
sub in { return undef unless $param::cache; my($file, $sig) = @_; # Add the path to the signature, to make it unique. $sig = 'sig'->collect($sig, $file->path) unless $param::mixtargets; my($dir) = substr($sig, 0, 1); my($cp) = File::Spec->catfile($param::cache, $dir, $sig); return -f $cp && futil::install($cp, $file); } # Try to flush a file to the cache, if not already there. # If it doesn't make it out, due to an error, then that doesn't # really matter. sub out { return unless $param::cache; my($file, $sig) = @_; # Add the path to the signature, to make it unique. $sig = 'sig'->collect($sig, $file->path) unless $param::mixtargets; my($dir) = substr($sig, 0, 1); my($sp) = $file->path; my($cp) = File::Spec->catfile($param::cache, $dir, $sig); my($cdir) = File::Spec->catfile($param::cache, $dir); if (! -d $cdir) { mkdir($cdir, 0777) || die("$0: can't create cache directory $cdir ($!).\n"); } elsif (-f $cp) { # Already cached: try to use that instead, to save space. # This can happen if the -cs option is used on a previously # uncached build, or if two builds occur simultaneously. my($lp) = ".$sig"; unlink($lp); return if ! eval { link($cp, $lp) }; rename($lp, $sp); # Unix98 says, "If the old argument and the new argument both # [refer] to the same existing file, the rename() function # returns successfully and performs no other action." So, if # $lp and $sp are links (i.e., $cp and $sp are links), $lp is # left, and we must unlink it ourselves. If the rename failed # for any reason, it is also good form to unlink the temporary # $lp. Otherwise $lp no longer exists and, barring some race, # the unlink fails silently. unlink($lp); return; } return if eval { link($sp, $cp) }; return if ! -f $sp; # if nothing to cache. if (futil::copy($sp, "$cp.new")) { rename("$cp.new", $cp); } } # Generic signature handling package sig; use vars qw( @ISA ); sub select { my($package, $subclass) = @_; @ISA = ($package . "::" . $subclass); }; # MD5-based signature package. 
package sig::md5;

use vars qw( $md5 );

BEGIN {
    my $module;
    my @md5_modules = qw(Digest::MD5 MD5);
    for (@md5_modules) {
	eval "use $_";
	if (! $@) {
	    $module = $_;
	    last;
	}
    }
    die "Cannot find any MD5 module from: @md5_modules" if $@;
    $md5 = new $module;
}

# Invalidate a cache entry.
sub invalidate {
    delete $_[1]->{sig}
}

# Determine the current signature of an already-existing or
# non-existent file.
sub signature {
    if (defined $_[1]->{sig}) {
	return $_[1]->{sig};
    }
    my ($self, $file) = @_;
    my($path) = $file->path;
    my($time) = (stat($path))[9];
    if ($time) {
	my($sigtime) = sig::hash::in($file);
	if ($file->is_on_rpath) {
	    if ($sigtime) {
		my ($htime, $hsig) = split(' ',$sigtime);
		if (! $hsig) {
		    # There was no separate $htime recorded in
		    # the .consign file, which implies that this
		    # is a source file in the repository.
		    # (Source file .consign entries don't record
		    # $htime.)  Just return the signature that
		    # someone else conveniently calculated for us.
		    return $htime;	# actually the signature
		} else {
		    if (! $param::rep_sig_times_ok || $htime == $time) {
			return $file->{sig} = $hsig;
		    }
		}
	    }
	    return $file->{sig} = $file->path . $time;
	}
	if ($sigtime) {
	    my ($htime, $hsig) = split(' ',$sigtime);
	    if ($htime eq $time) {
		return $file->{sig} = $hsig;
	    }
	}
	if (! File::Spec->file_name_is_absolute($path)) {
	    # A file in the local build directory.  Assume we can write
	    # a signature file for it, and compute the actual source
	    # signature.  We compute the file based on the build path,
	    # not source path, only because there might be parallel
	    # builds going on...  In principle, we could use the source
	    # path and only compute this once.
	    my($sig) = srcsig($path);
	    sig::hash::out($file, $sig);
	    return $file->{sig} = $sig;
	} else {
	    return $file->{sig} = $file->{entry} . $time;
	}
    }
    $file->{sig} = '';
}

# Is the provided signature equal to the signature of the current
# instantiation of the target (and does the target exist)?
sub current {
    my($self, $file, $sig) = @_;
    # Uncomment this to debug checks for signature currency.
    # 1998-10-29
    # my $fsig = $self->signature($file);
    # print STDOUT "\$self->signature(${\$file->path}) '$fsig' eq \$sig '$sig'\n";
    # return $fsig eq $sig;
    $self->signature($file) eq $sig;
}

# Store the signature for a file.
sub set {
    my($self, $file, $sig) = @_;
    my($time) = (stat($file->path))[9];
    sig::hash::out($file, "$time $sig");
    $file->{sig} = $sig
}

# Return an aggregate signature
sub collect {
    my($self, @sigs) = @_;
    # The following sequence is faster than calling the hex interface.
    $md5->reset();
    $md5->add(join('', $param::salt, @sigs));
    # Uncomment this to debug dependency signatures.
    # 1998-05-08
    # my $buf = join(', ', $param::salt, @sigs);
    # print STDOUT "sigbuf=|$buf|\n";
    # Uncomment this to print the result of dependency signature calculation.
    # 1998-10-13
    # $buf = unpack("H*", $md5->digest());
    # print STDOUT "\t=>|$buf|\n";
    # return $buf;
    unpack("H*", $md5->digest());
}

# Directly compute a file signature as the MD5 checksum of the
# bytes in the file.
sub srcsig {
    my($path) = @_;
    $md5->reset();
    open(FILE, $path) || return '';
    binmode(FILE);
    $md5->addfile(\*FILE);
    close(FILE);
    # Uncomment this to print the result of file signature calculation.
    # 1998-10-13
    # my $buf = unpack("H*", $md5->digest());
    # print STDOUT "$path=|$buf|\n";
    # return $buf;
    unpack("H*", $md5->digest());
}

__END__

=head1 NAME

Cons - A Software Construction System

=head1 DESCRIPTION

A guide and reference for version 2.2.0

Copyright (c) 1996-2000 Free Software Foundation, Inc.

This program is free software; you can redistribute it and/or modify
it under the terms of the GNU General Public License as published by
the Free Software Foundation; either version 2 of the License, or
(at your option) any later version.
This program is distributed in the hope that it will be useful, but
WITHOUT ANY WARRANTY; without even the implied warranty of
MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the GNU
General Public License for more details.

You should have received a copy of the GNU General Public License along
with this program; see the file COPYING.  If not, write to the Free
Software Foundation, Inc., 59 Temple Place - Suite 330, Boston, MA
02111-1307, USA.

=head1 Introduction

B<Cons> is a system for constructing, primarily, software, but is quite
different from previous software construction systems.  Cons was designed
from the ground up to deal easily with the construction of software spread
over multiple source directories.  Cons makes it easy to create build
scripts that are simple, understandable and maintainable.  Cons ensures
that complex software is easily and accurately reproducible.

Cons uses a number of techniques to accomplish all of this.  Construction
scripts are just Perl scripts, making them both easy to comprehend and
very flexible.  Global scoping of variables is replaced with an
import/export mechanism for sharing information between scripts,
significantly improving the readability and maintainability of each
script.  B<Construction environments> are introduced: these are Perl
objects that capture the information required for controlling the build
process.  Multiple environments are used when different semantics are
required for generating products in the build tree.  Cons implements
automatic dependency analysis and uses this to globally sequence the
entire build.  Variant builds are easily produced from a single source
tree.  Intelligent build subsetting is possible, when working on localized
changes.  Overrides can be set up to easily override build instructions
without modifying any scripts.  MD5 cryptographic B<signatures> are
associated with derived files, and are used to accurately determine
whether a given file needs to be rebuilt.
While offering all of the above, and more, Cons remains simple and easy to
use.  This will, hopefully, become clear as you read the remainder of this
document.

=head1 Why Cons? Why not Make?

Cons is a B<make> replacement.  In the following paragraphs, we look at a
few of the undesirable characteristics of make--and typical build
environments based on make--that motivated the development of Cons.

=head2 Build complexity

Traditional make-based systems of any size tend to become quite complex.
The original make utility and its derivatives have contributed to this
tendency in a number of ways.  Make is not good at dealing with systems
that are spread over multiple directories.  Various work-arounds are used
to overcome this difficulty; the usual choice is for make to invoke itself
recursively for each sub-directory of a build.  This leads to complicated
code, in which it is often unclear how a variable is set, or what effect
the setting of a variable will have on the build as a whole.  The make
scripting language has gradually been extended to provide more
possibilities, but these have largely served to clutter an already
overextended language.  Often, builds are done in multiple passes in order
to provide appropriate products from one directory to another directory.
This represents a further increase in build complexity.

=head2 Build reproducibility

The bane of all makes has always been the correct handling of
dependencies.  Most often, an attempt is made to do a reasonable job of
dependencies within a single directory, but no serious attempt is made to
do the job between directories.  Even when dependencies are working
correctly, make's reliance on a simple time stamp comparison to determine
whether a file is out of date with respect to its dependencies is not, in
general, adequate for determining when a file should be rederived.
If an external library, for example, is rebuilt and then ``snapped'' into
place, the timestamps on its newly created files may well be earlier than
the last local build, since it was built before it became visible.

=head2 Variant builds

Make provides only limited facilities for handling variant builds.  With
the proliferation of hardware platforms and the need for debuggable
vs. optimized code, the ability to easily create these variants is
essential.  More importantly, if variants are created, it is important to
either be able to separate the variants or to be able to reproduce the
original or variant at will.  With make it is very difficult to separate
the builds into multiple build directories, separate from the source.  And
if this technique isn't used, it's also virtually impossible to guarantee
at any given time which variant is present in the tree, without resorting
to a complete rebuild.

=head2 Repositories

Make provides only limited support for building software from code that
exists in a central repository directory structure.  The VPATH feature of
GNU make (and some other make implementations) is intended to provide
this, but doesn't work as expected: it changes the path of the target file
to the VPATH name too early in its analysis, and therefore searches for
all dependencies in the VPATH directory.  To ensure correct development
builds, it is important to be able to create a file in a local build
directory and have any files in a code repository (a VPATH directory, in
make terms) that depend on the local file get rebuilt properly.  This
isn't possible with VPATH, without coding a lot of complex repository
knowledge directly into the makefiles.

=head1 Keeping it simple

A few of the difficulties with make have been cited above.  In this and
subsequent sections, we shall introduce Cons and show how these issues are
addressed.

=head2 Perl scripts

Cons is Perl-based.  That is, Cons scripts--F<Construct> and F<Conscript>
files, the equivalent to F<Makefile> or F<makefile> files--are all written
in Perl.
This provides an immediate benefit: the language for writing scripts is a
familiar one.  Even if you don't happen to be a Perl programmer, it helps
to know that Perl is basically just a simple declarative language, with a
well-defined flow of control, and familiar semantics.  It has variables
that behave basically the way you would expect them to, subroutines, flow
of control, and so on.  There is no special syntax introduced for Cons.
The use of Perl as a scripting language simplifies the task of expressing
the appropriate solution to the often complex requirements of a build.

=head2 Hello, World!

To ground the following discussion, here's how you could build the
B<Hello, World!> C application with Cons:

  $env = new cons();
  Program $env 'hello', 'hello.c';

If you install this script in a directory, naming the script
F<Construct>, and create the F<hello.c> source file in the same
directory, then you can type C<cons hello> to build the application:

  % cons hello
  cc -c hello.c -o hello.o
  cc -o hello hello.o

=head2 Construction environments

A key simplification of Cons is the idea of a B<construction
environment>.  A construction environment is an B<object> characterized
by a set of key/value pairs and a set of B<methods>.  In order to tell
Cons how to build something, you invoke the appropriate method via an
appropriate construction environment.  Consider the following example:

  $env = new cons(
      CC   => 'gcc',
      LIBS => 'libworld.a'
  );

  Program $env 'hello', 'hello.c';

In this case, rather than using the default construction environment, as
is, we have overridden the value of C<CC> so that the GNU C Compiler
equivalent is used, instead.  Since this version of B<Hello, World!>
requires a library, F<libworld.a>, we have specified that any program
linked in this environment should be linked with that library.
If the library exists already, well and good, but if not, then we'll also
have to include the statement:

  Library $env 'libworld', 'world.c';

Now if you type C<cons hello>, the library will be built before the
program is linked, and, of course, C<gcc> will be used to compile both
modules:

  % cons hello
  gcc -c hello.c -o hello.o
  gcc -c world.c -o world.o
  ar r libworld.a world.o
  ar: creating libworld.a
  ranlib libworld.a
  gcc -o hello hello.o libworld.a

=head2 Automatic and complete dependency analysis

With Cons, dependencies are handled automatically.  Continuing the
previous example, note that when we modify F<world.c>, F<world.o> is
recompiled, F<libworld.a> recreated, and F<hello> relinked:

  % vi world.c
    [EDIT]
  % cons hello
  gcc -c world.c -o world.o
  ar r libworld.a world.o
  ar: creating libworld.a
  ranlib libworld.a
  gcc -o hello hello.o libworld.a

This is a relatively simple example: Cons ``knows'' F<libworld.a> depends
upon F<world.o>, because the dependency is explicitly set up by the
C<Library> method.  It also knows that F<world.o> depends upon F<world.c>
and that F<hello> depends upon F<libworld.a>, all for similar reasons.

Now it turns out that F<hello.c> also includes the interface definition
file, F<world.h>:

  % emacs world.h
    [EDIT]
  % cons hello
  gcc -c hello.c -o hello.o
  gcc -o hello hello.o libworld.a

How does Cons know that F<hello.c> includes F<world.h>, and that
F<hello.o> must therefore be recompiled?  For now, suffice it to say that
when considering whether or not F<hello.o> is up-to-date, Cons invokes a
scanner for its dependency, F<hello.c>.  This scanner enumerates the
files included by F<hello.c> to come up with a list of further
dependencies, beyond those made explicit by the Cons script.  This
process is recursive: any files included by included files will also be
scanned.

Isn't this expensive?  The answer is--it depends.  If you do a full build
of a large system, the scanning time is insignificant.  If you do a
rebuild of a large system, then Cons will spend a fair amount of time
thinking about it before it decides that nothing has to be done (although
not necessarily more time than make!).
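The scanner handles dependencies that are visible in the source, but a
dependency the scanner cannot discover--a header that is itself generated
by another build step, for example--can be declared explicitly with the
C<Depends> method.  The following is a minimal sketch only; the file
names are hypothetical:

  # version.h is generated elsewhere in the build, so the C scanner
  # cannot find it by reading hello.c before it exists.  Declare the
  # dependency explicitly (hypothetical file names).
  Depends $env 'hello.o', 'version.h';

The C<Depends> method also accepts an array reference of targets, so the
same extra dependency can be attached to several targets at once.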
The good news is that Cons makes it very easy to intelligently subset
your build, when you are working on localized changes.

=head2 Automatic global build sequencing

Because Cons does full and accurate dependency analysis, and does this
globally, for the entire build, Cons is able to use this information to
take full control of the B<sequencing> of the build.  This sequencing is
evident in the above examples, and is equivalent to what you would expect
for make, given a full set of dependencies.  With Cons, this extends
trivially to larger, multi-directory builds.  As a result, all of the
complexity involved in making sure that a build is organized
correctly--including multi-pass hierarchical builds--is eliminated.
We'll discuss this further in the next sections.

=head1 Building large trees--still just as simple

=head2 A hierarchy of build scripts

A larger build, in Cons, is organized by creating a hierarchy of B<build
scripts>.  At the top of the tree is a script called F<Construct>.  The
rest of the scripts, by convention, are each called F<Conscript>.  These
scripts are connected together, very simply, by the C<Build>, C<Export>,
and C<Import> commands.

=head2 The Build command

The C<Build> command takes a list of F<Conscript> file names, and
arranges for them to be included in the build.  For example:

  Build qw(
      drivers/display/Conscript
      drivers/mouse/Conscript
      parser/Conscript
      utilities/Conscript
  );

This is a simple two-level hierarchy of build scripts: all the subsidiary
F<Conscript> files are mentioned in the top-level F<Construct> file.
Notice that not all directories in the tree necessarily have build
scripts associated with them.

This could also be written as a multi-level script.  For example, the
F<Construct> file might contain this command:

  Build qw(
      parser/Conscript
      drivers/Conscript
      utilities/Conscript
  );

and the F<Conscript> file in the F<drivers> directory might contain this:

  Build qw(
      display/Conscript
      mouse/Conscript
  );

Experience has shown that the former model is a little easier to
understand, since the whole construction tree is laid out in front of
you, at the top-level.
Hybrid schemes are also possible.  A separately maintained component that
needs to be incorporated into a build tree, for example, might hook into
the build tree in one place, but define its own construction hierarchy.

By default, Cons does not change its working directory to the directory
containing a subsidiary F<Conscript> file it is including.  This behavior
can be enabled for a build by specifying, in the top-level F<Construct>
file:

  Conscript_chdir 1;

When enabled, Cons will change to the subsidiary F<Conscript> file's
containing directory while reading in that file, and then change back to
the top-level directory once the file has been processed.

It is expected that this behavior will become the default in some future
version of Cons.  To prepare for this transition, builds that expect Cons
to remain at the top of the build while it reads in a subsidiary
F<Conscript> file should explicitly disable this feature as follows:

  Conscript_chdir 0;

=head2 Relative, top-relative, and absolute file names

You may have noticed that the file names specified to the Build command
are relative to the location of the script from which it is invoked.
This is generally true for other filename arguments to other commands,
too, although we might as well mention here that if you begin a file name
with a hash mark, ``#'', then that file is interpreted relative to the
top-level directory (where the F<Construct> file resides).  And, not
surprisingly, if you begin it with ``/'', then it is considered to be an
absolute pathname.  This is true even on systems which use a backslash
rather than a forward slash to name absolute paths.

=head2 Using modules in build scripts

You may pull modules into each F<Conscript> file using the normal Perl
C<use> or C<require> statements:

  use English;
  require My::Module;

Each C<use> or C<require> only affects the one F<Conscript> file in which
it appears.  To use a module in multiple F<Conscript> files, you must put
a C<use> or C<require> statement in each one that needs the module.

=head2 Scope of variables

The top-level F<Construct> file and all F<Conscript> files begin life in
a common, separate Perl package.
B<Cons> controls the symbol table for the package so that the symbol
table for each script is empty, except for the F<Construct> file, which
gets some of the command line arguments.  All of the variables that are
set or used, therefore, are set by the script itself--not by some
external script.

Variables can be explicitly B<imported> by a script from its parent
script.  To import a variable, it must have been B<exported> by the
parent and initialized (otherwise an error will occur).

=head2 The Export command

The C<Export> command is used as in the following example:

  $env = new cons();
  $INCLUDE = "#export/include";
  $LIB = "#export/lib";
  Export qw( env INCLUDE LIB );
  Build qw( util/Conscript );

The values of the simple variables mentioned in the C<Export> list will
be squirreled away by any subsequent C<Build> commands.  The C<Export>
command will only export Perl B<scalar> variables, that is, variables
whose name begins with C<$>.  Other variables, objects, etc. can be
exported by reference--but all scripts will refer to the same object, and
this object should be considered to be read-only by the subsidiary
scripts and by the original exporting script.  It's acceptable, however,
to assign a new value to the exported scalar variable--that won't change
the underlying variable referenced.  This sequence, for example, is OK:

  $env = new cons();
  Export qw( env INCLUDE LIB );
  Build qw( util/Conscript );
  $env = new cons(CFLAGS => '-O');
  Build qw( other/Conscript );

It doesn't matter whether the variable is set before or after the
C<Build> command.  The important thing is the value of the variable at
the time the C<Build> command is executed.  This is what gets squirreled
away.  Any subsequent C<Export> commands, by the way, invalidate the
first: you must mention all the variables you wish to export on each
C<Export> command.

=head2 The Import command

Variables exported by the C<Export> command can be imported into
subsidiary scripts by the C<Import> command.  The subsidiary script
always imports variables directly from the superior script.
Consider this example:

  Import qw( env INCLUDE );

This is only legal if the parent script exported both C<$env> and
C<$INCLUDE>.  It also must have given each of these variables values.  It
is OK for the subsidiary script to only import a subset of the exported
variables (in this example, C<$LIB>, which was exported by the previous
example, is not imported).

All the imported variables are automatically re-exported, so the
sequence:

  Import qw ( env INCLUDE );
  Build qw ( beneath-me/Conscript );

will supply both C<$env> and C<$INCLUDE> to the subsidiary file.  If only
C<$env> is to be exported, then the following will suffice:

  Import qw ( env INCLUDE );
  Export qw ( env );
  Build qw ( beneath-me/Conscript );

Needless to say, the variables may be modified locally before invoking
C<Build> on the subsidiary script.

=head2 Build script evaluation order

The only constraint on the ordering of build scripts is that superior
scripts are evaluated before their inferior scripts.  The top-level
F<Construct> file, for instance, is evaluated first, followed by any
inferior scripts.  This is all you really need to know about the
evaluation order, since order is generally irrelevant.  Consider the
following C<Build> command:

  Build qw(
      drivers/display/Conscript
      drivers/mouse/Conscript
      parser/Conscript
      utilities/Conscript
  );

We've chosen to put the script names in alphabetical order, simply
because that's the most convenient for maintenance purposes.  Changing
the order will make no difference to the build.

=head1 A Model for sharing files

=head2 Some simple conventions

In any complex software system, a method for sharing build products needs
to be established.  We propose a simple set of conventions which are
trivial to implement with Cons, but very effective.

The basic rule is to require that all build products which need to be
shared between directories are shared via an intermediate directory.
We have typically called this F<export>, and, in a C environment,
provided conventional sub-directories of this directory, such as
F<include>, F<lib>, F<bin>, etc.

These directories are defined by the top-level F<Construct> file.  A
simple F<Construct> file for a B<Hello, World!> application, organized
using multiple directories, might look like this:

  # Construct file for Hello, World!

  # Where to put all our shared products.
  $EXPORT = '#export';

  Export qw( CONS INCLUDE LIB BIN );

  # Standard directories for sharing products.
  $INCLUDE = "$EXPORT/include";
  $LIB = "$EXPORT/lib";
  $BIN = "$EXPORT/bin";

  # A standard construction environment.
  $CONS = new cons (
      CPPPATH => $INCLUDE,  # Include path for C Compilations
      LIBPATH => $LIB,      # Library path for linking programs
      LIBS => '-lworld',    # List of standard libraries
  );

  Build qw(
      hello/Conscript
      world/Conscript
  );

The F<world> directory's F<Conscript> file looks like this:

  # Conscript file for directory world
  Import qw( CONS INCLUDE LIB );

  # Install the products of this directory
  Install $CONS $LIB, 'libworld.a';
  Install $CONS $INCLUDE, 'world.h';

  # Internal products
  Library $CONS 'libworld.a', 'world.c';

and the F<hello> directory's F<Conscript> file looks like this:

  # Conscript file for directory hello
  Import qw( CONS BIN );

  # Exported products
  Install $CONS $BIN, 'hello';

  # Internal products
  Program $CONS 'hello', 'hello.c';

To construct a B<Hello, World!> program with this directory structure, go
to the top-level directory, and invoke C<cons> with the appropriate
arguments.  In the following example, we tell Cons to build the directory
F<export>.  To build a directory, Cons recursively builds all known
products within that directory (only if they need rebuilding, of course).
If any of those products depend upon other products in other directories,
then those will be built, too.
  % cons export
  Install world/world.h as export/include/world.h
  cc -Iexport/include -c hello/hello.c -o hello/hello.o
  cc -Iexport/include -c world/world.c -o world/world.o
  ar r world/libworld.a world/world.o
  ar: creating world/libworld.a
  ranlib world/libworld.a
  Install world/libworld.a as export/lib/libworld.a
  cc -o hello/hello hello/hello.o -Lexport/lib -lworld
  Install hello/hello as export/bin/hello

=head2 Clean, understandable, location-independent scripts

You'll note that the two F<Conscript> files are very clean and
to-the-point.  They simply specify products of the directory and how to
build those products.  The build instructions are minimal: they specify
which construction environment to use, the name of the product, and the
name of the inputs.  Note also that the scripts are location-independent:
if you wish to reorganize your source tree, you are free to do so: you
only have to change the F<Construct> file (in this example), to specify
the new locations of the F<Conscript> files.  The use of an export tree
makes this goal easy.

Note, too, how Cons takes care of little details for you.  All the
F<export> directories, for example, were made automatically.  And the
installed files were really hard-linked into the respective export
directories, to save space and time.  This attention to detail saves
considerable work, and makes it even easier to produce simple,
maintainable scripts.

=head1 Separating source and build trees

It's often desirable to keep any derived files from the build completely
separate from the source files.  This makes it much easier to keep track
of just what is a source file, and also makes it simpler to handle
B<variant> builds, especially if you want the variant builds to co-exist.

=head2 Separating build and source directories using the Link command

Cons provides a simple mechanism that handles all of these requirements.
The C<Link> command is invoked as in this example:

  Link 'build' => 'src';

The specified directories are ``linked'' to the specified source
directory.
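Because C<Link> associates a build directory with a source directory,
variant builds that need to co-exist can each be given a build directory
of their own.  The following is a sketch only, assuming C<Link> may be
invoked once per build directory; the directory names are illustrative:

  # Two variant build trees drawing on the same sources.
  Link 'build/debug'     => 'src';
  Link 'build/optimized' => 'src';

Each build directory then receives its own hard-linked copies of the
sources, so the derived files for the two variants never collide.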
Let's suppose that you set up a source directory, F<src>, with the
sub-directories F<world> and F<hello> below it, as in the previous
example.  You could then substitute for the original build lines the
following:

  Build qw(
      build/world/Conscript
      build/hello/Conscript
  );

Notice that you treat the F<Conscript> file as if it existed in the build
directory.  Now if you type the same command as before, you will get the
following results:

  % cons export
  Install build/world/world.h as export/include/world.h
  cc -Iexport/include -c build/hello/hello.c -o build/hello/hello.o
  cc -Iexport/include -c build/world/world.c -o build/world/world.o
  ar r build/world/libworld.a build/world/world.o
  ar: creating build/world/libworld.a
  ranlib build/world/libworld.a
  Install build/world/libworld.a as export/lib/libworld.a
  cc -o build/hello/hello build/hello/hello.o -Lexport/lib -lworld
  Install build/hello/hello as export/bin/hello

Again, Cons has taken care of the details for you.  In particular, you
will notice that all the builds are done using source files and object
files from the build directory.  For example, F<build/world/world.o> is
compiled from F<build/world/world.c>, and F<export/include/world.h> is
installed from F<build/world/world.h>.  This is accomplished on most
systems by the simple expedient of ``hard'' linking the required files
from each source directory into the appropriate build directory.

The links are maintained correctly by Cons, no matter what you do to the
source directory.  If you modify a source file, your editor may do this
``in place'' or it may rename it first and create a new file.  In the
latter case, any hard link will be lost.  Cons will detect this condition
the next time the source file is needed, and will relink it
appropriately.

You'll also notice, by the way, that B<no> changes were required to the
underlying F<Conscript> files.  And we can go further, as we shall see in
the next section.

=head1 Variant builds

=head2 Hello, World! for baNaNa and peAcH OS's

Variant builds require just another simple extension.
Let's take as an example a requirement to allow builds for both the
baNaNa and peAcH operating systems.  In this case, we are using a
distributed file system, such as NFS, to access the particular system,
and only one or the other of the systems has to be compiled for any given
invocation of C<cons>.  Here's one way we could set up the F<Construct>
file for our B<Hello, World!> application:

  # Construct file for Hello, World!

  die qq(OS must be specified) unless $OS = $ARG{OS};
  die qq(OS must be "peach" or "banana")
      if $OS ne "peach" && $OS ne "banana";

  # Where to put all our shared products.
  $EXPORT = "#export/$OS";

  Export qw( CONS INCLUDE LIB BIN );

  # Standard directories for sharing products.
  $INCLUDE = "$EXPORT/include";
  $LIB = "$EXPORT/lib";
  $BIN = "$EXPORT/bin";

  # A standard construction environment.
  $CONS = new cons (
      CPPPATH => $INCLUDE,  # Include path for C Compilations
      LIBPATH => $LIB,      # Library path for linking programs
      LIBS => '-lworld',    # List of standard libraries
  );

  # $BUILD is where we will derive everything.
  $BUILD = "#build/$OS";

  # Tell cons where the source files for $BUILD are.
  Link $BUILD => 'src';

  Build (
      "$BUILD/hello/Conscript",
      "$BUILD/world/Conscript",
  );

Now if we login to a peAcH system, we can build our B<Hello, World!>
application for that platform:

  % cons export OS=peach
  Install build/peach/world/world.h as export/peach/include/world.h
  cc -Iexport/peach/include -c build/peach/hello/hello.c -o build/peach/hello/hello.o
  cc -Iexport/peach/include -c build/peach/world/world.c -o build/peach/world/world.o
  ar r build/peach/world/libworld.a build/peach/world/world.o
  ar: creating build/peach/world/libworld.a
  ranlib build/peach/world/libworld.a
  Install build/peach/world/libworld.a as export/peach/lib/libworld.a
  cc -o build/peach/hello/hello build/peach/hello/hello.o -Lexport/peach/lib -lworld
  Install build/peach/hello/hello as export/peach/bin/hello

=head2 Variations on a theme

Other variations of this model are possible.
For example, you might decide that you want to separate out your include
files into platform dependent and platform independent files.  In this
case, you'd have to define an alternative to C<$INCLUDE> for
platform-dependent files.  Most F<Conscript> files, generating purely
platform-independent include files, would not have to change.

You might also want to be able to compile your whole system with
debugging or profiling, for example, enabled.  You could do this with
appropriate command line options, such as C<DEBUG=on>.  This would then
be translated into the appropriate platform-specific requirements to
enable debugging (this might include turning off optimization, for
example).  You could optionally vary the name space for these different
types of systems, but, as we'll see in the next section, it's not
B<essential> to do this, since Cons is pretty smart about rebuilding
things when you change options.

=head1 Signatures

=head2 MD5 cryptographic signatures

Whenever Cons creates a derived file, it stores a B<signature> for that
file.  The signature is stored in a separate file, one per directory.
After the previous example was compiled, the F<.consign> file in the
F<build/peach/world> directory looked like this:

  world.o:834179303 23844c0b102ecdc0b4548d1cd1cbd8c6
  libworld.a:834179304 9bf6587fa06ec49d864811a105222c00

The first number is a timestamp--for a UNIX system, this is typically the
number of seconds since January 1st, 1970.  The second value is an MD5
checksum.  B<MD5> is an algorithm that, given an input string, computes a
strong cryptographic signature for that string.  The MD5 checksum stored
in the F<.consign> file is, in effect, a digest of all the dependency
information for the specified file.  So, for example, for the F<world.o>
file, this includes at least the F<world.c> file, and also any header
files that Cons knows about that are included, directly or indirectly, by
F<world.c>.  Not only that, but the actual command line that was used to
generate F<world.o> is also fed into the computation of the signature.
Similarly, F<libworld.a> gets a signature which ``includes'' all the
signatures of its constituents (and hence, transitively, the signatures
of B<all> constituents), as well as the command line that created the
file.

The signature of a non-derived file is computed, by default, by taking
the current modification time of the file and the file's entry name
(unless there happens to be a current F<.consign> entry for that file, in
which case that signature is used).

Notice that there is no need for a derived file to depend upon any
particular F<Construct> or F<Conscript> file--if changes to these files
affect the file in question, then this will be automatically reflected in
its signature, since relevant parts of the command line are included in
the signature.  Unrelated changes will have no effect.

When Cons considers whether to derive a particular file, then, it first
computes the expected signature of the file.  It then compares the file's
last modification time with the time recorded in the F<.consign> entry,
if one exists.  If these times match, then the signature stored in the
F<.consign> file is considered to be accurate.  If the file's previous
signature does not match the new, expected signature, then the file must
be rederived.

Notice that a file will be rederived whenever anything about a dependent
file changes.  In particular, notice that B<any> change to the
modification time of a dependent (forward or backwards in time) will
force recompilation of the derived file.

The use of these signatures is an extremely simple, efficient, and
effective method of improving--dramatically--the reproducibility of a
system.

We'll demonstrate this with a simple example:

  # Simple "Hello, World!" Construct file
  $CFLAGS = '-g' if $ARG{DEBUG} eq 'on';
  $CONS = new cons(CFLAGS => $CFLAGS);
  Program $CONS 'hello', 'hello.c';

Notice how Cons recompiles at the appropriate times:

  % cons hello
  cc -c hello.c -o hello.o
  cc -o hello hello.o
  % cons hello
  cons: "hello" is up-to-date.
  % cons DEBUG=on hello
  cc -g -c hello.c -o hello.o
  cc -o hello hello.o
  % cons DEBUG=on hello
  cons: "hello" is up-to-date.
  % cons hello
  cc -c hello.c -o hello.o
  cc -o hello hello.o

=head1 Code Repositories

Many software development organizations will have one or more central repository directory trees containing the current source code for one or more projects, as well as the derived object files, libraries, and executables. In order to reduce unnecessary recompilation, it is useful to use files from the repository to build development software--assuming, of course, that no newer dependency file exists in the local build tree.

=head2 Repository

Cons provides a mechanism to specify a list of code repositories that will be searched, in order, for source files and derived files not found in the local build directory tree.

The following lines in a F<Construct> file will instruct Cons to look first under the F</usr/experiment/repository> directory and then under the F</usr/product/repository> directory:

  Repository qw (
      /usr/experiment/repository
      /usr/product/repository
  );

The repository directories specified may contain source files, derived files (objects, libraries, and executables), or both. If there is no local file (source or derived) under the directory in which Cons is executed, then the first copy of a same-named file found under a repository directory will be used to build any local derived files.

Cons maintains one global list of repository directories. Cons will eliminate the current directory, and any non-existent directories, from the list.

=head2 Finding the Construct file in a Repository

Cons will also search for F<Construct> and F<Conscript> files in the repository tree or trees. This leads to a chicken-and-egg situation, though: how do you look in a repository tree for a F<Construct> file if the F<Construct> file tells you where the repository is? To get around this, repositories may be specified via C<-R> options on the command line:

  % cons -R /usr/experiment/repository -R /usr/product/repository .
Any repository directories specified in the F<Construct> or F<Conscript> files will be appended to the repository directories specified by command-line C<-R> options.

=head2 Repository source files

If the source code (including the F<Conscript> file) for the library version of the I<Hello, World!> C application is in a repository (with no derived files), Cons will use the repository source files to create the local object files and executable file:

  % cons -R /usr/src_only/repository hello
  gcc -c /usr/src_only/repository/hello.c -o hello.o
  gcc -c /usr/src_only/repository/world.c -o world.o
  ar r libworld.a world.o
  ar: creating libworld.a
  ranlib libworld.a
  gcc -o hello hello.o libworld.a

Creating a local source file will cause Cons to rebuild the appropriate derived file or files:

  % pico world.c
  [EDIT]
  % cons -R /usr/src_only/repository hello
  gcc -c world.c -o world.o
  ar r libworld.a world.o
  ar: creating libworld.a
  ranlib libworld.a
  gcc -o hello hello.o libworld.a

And removing the local source file will cause Cons to revert back to building the derived files from the repository source:

  % rm world.c
  % cons -R /usr/src_only/repository hello
  gcc -c /usr/src_only/repository/world.c -o world.o
  ar r libworld.a world.o
  ar: creating libworld.a
  ranlib libworld.a
  gcc -o hello hello.o libworld.a

=head2 Repository derived files

If a repository tree contains derived files (usually object files, libraries, or executables), Cons will perform its normal signature calculation to decide whether the repository file is up-to-date or a derived file must be built locally. This means that, in order to ensure correct signature calculation, a repository tree must also contain the F<.consign> files that were created by Cons when generating the derived files.
This would usually be accomplished by building the software in the repository (or, alternatively, in a build directory, and then copying the result to the repository):

  % cd /usr/all/repository
  % cons hello
  gcc -c hello.c -o hello.o
  gcc -c world.c -o world.o
  ar r libworld.a world.o
  ar: creating libworld.a
  ranlib libworld.a
  gcc -o hello hello.o libworld.a

(This is safe even if the F<Construct> file lists the F</usr/all/repository> directory in a C<Repository> command, because Cons will remove the current directory from the repository list.)

Now if we want to build a copy of the application with our own F<hello.c> file, we need only create the one necessary source file, and use the C<-R> option to have Cons use other files from the repository:

  % mkdir $HOME/build1
  % cd $HOME/build1
  % ed hello.c
  [EDIT]
  % cons -R /usr/all/repository hello
  gcc -c hello.c -o hello.o
  gcc -o hello hello.o /usr/all/repository/libworld.a

Notice that Cons has not bothered to recreate a local F<libworld.a> library (or recompile the F<world.o> module), but instead uses the already-compiled version from the repository.

Because the MD5 signatures that Cons puts in the F<.consign> file contain timestamps for the derived files, the signature timestamps must match the file timestamps for a signature to be considered valid. Some software systems may alter the timestamps on repository files (by copying them, for example), in which case Cons will, by default, assume the repository signatures are invalid and rebuild files unnecessarily. This behavior may be altered by specifying:

  Repository_Sig_Times_OK 0;

This tells Cons to ignore timestamps when deciding whether a signature is valid. (Note that avoiding this sanity check means there must be proper control over the repository tree to ensure that the derived files cannot be modified without updating the F<.consign> signatures.)
=head2 Local copies of files

If the repository tree contains the complete results of a build, and we try to build from the repository without any files in our local tree, something moderately surprising happens:

  % mkdir $HOME/build2
  % cd $HOME/build2
  % cons -R /usr/all/repository hello
  cons: "hello" is up-to-date.

Why does Cons say that the F<hello> program is up-to-date when there is no F<hello> program in the local build directory? Because the repository (not the local directory) contains the up-to-date F<hello> program, and Cons correctly determines that nothing needs to be done to rebuild this up-to-date copy of the file.

There are, however, many times when it is appropriate to ensure that a local copy of a file always exists. A packaging or testing script, for example, may assume that certain generated files exist locally. Instead of making these subsidiary scripts aware of the repository directory, the C<Local> command may be added to a F<Construct> or F<Conscript> file to specify that a certain file or files must appear in the local build directory:

  Local qw(
      hello
  );

Then, if we re-run the same command, Cons will make a local copy of the program from the repository copy (telling you that it is doing so):

  % cons -R /usr/all/repository hello
  Local copy of hello from /usr/all/repository/hello
  cons: "hello" is up-to-date.

Notice that, because the act of making the local copy is not considered a "build" of the F<hello> file, Cons still reports that it is up-to-date.

Creating local copies is most useful for files that are being installed into an intermediate directory (for sharing with other directories) via the C<Install> command.
Accompanying an C<Install> command for a file with a companion C<Local> command is so common that Cons provides an C<Install_Local> command as a convenient way to do both:

  Install_Local $env, '#export', 'hello';

is exactly equivalent to:

  Install $env '#export', 'hello';
  Local '#export/hello';

Both the C<Local> and C<Install_Local> commands update the local F<.consign> file with the appropriate file signatures, so that future builds are performed correctly.

=head2 Repository dependency analysis

Due to its built-in scanning, Cons will search the specified repository trees for included F<.h> files. Unless the compiler also knows about the repository trees, though, it will be unable to find F<.h> files that exist only in a repository. If, for example, the F<hello.c> file includes the F<hello.h> file in its current directory:

  % cons -R /usr/all/repository hello
  gcc -c /usr/all/repository/hello.c -o hello.o
  /usr/all/repository/hello.c:1: hello.h: No such file or directory

Solving this problem forces some requirements onto the way construction environments are defined and onto the way the C C<#include> preprocessor directive is used to include files.

In order to inform the compiler about the repository trees, Cons will add appropriate C<-I> flags to the compilation commands. This means that the C<CPPPATH> variable in the construction environment must explicitly specify all subdirectories which are to be searched for included files, including the current directory. Consequently, we can fix the above example by changing the environment creation in the F<Construct> file as follows:

  $env = new cons(
      CC      => 'gcc',
      CPPPATH => '.',
      LIBS    => 'libworld.a',
  );

Due to the definition of the C<CPPPATH> variable, this yields, when we re-execute the command:

  % cons -R /usr/all/repository hello
  gcc -c -I. -I/usr/all/repository /usr/all/repository/hello.c -o hello.o
  gcc -o hello hello.o /usr/all/repository/libworld.a

The order of the C<-I> flags replicates, for the C preprocessor, the same repository-directory search path that Cons uses for its own dependency analysis.
If there are multiple repositories and multiple C<CPPPATH> directories, Cons will add the corresponding repository directories after each local C<CPPPATH> directory, rapidly multiplying the number of C<-I> flags. As an extreme example, a F<Construct> file containing:

  Repository qw(
      /u1
      /u2
  );

  $env = new cons(
      CPPPATH => 'a:b:c',
  );

would yield a compilation command of:

  cc -Ia -I/u1/a -I/u2/a -Ib -I/u1/b -I/u2/b -Ic -I/u1/c -I/u2/c -c hello.c -o hello.o

Because Cons relies on the compiler's C<-I> flags to communicate the order in which repository directories must be searched, Cons' handling of repository directories is fundamentally incompatible with using double-quotes on the C<#include> directives in your C source code:

  #include "file.h"   /* DON'T USE DOUBLE-QUOTES LIKE THIS */

This is because most C preprocessors, when faced with such a directive, will always first search the directory containing the source file. This undermines the elaborate C<-I> options that Cons constructs to make the preprocessor conform to its preferred search path.

Consequently, when using repository trees in Cons, B<always> use angle-brackets for included files:

  #include <file.h>   /* USE ANGLE-BRACKETS INSTEAD */

=head2 Repository_List

Cons provides a C<Repository_List> command to return a list of all repository directories in their current search order. This can be used for debugging, or to do more complex Perl stuff:

  @list = Repository_List;
  print join(' ', @list), "\n";

=head2 Repository interaction with other Cons features

Cons' handling of repository trees interacts correctly with other Cons features--which is to say, it generally does what you would expect.

Most notably, repository trees interact correctly, and rather powerfully, with the C<Link> command. A repository tree may contain one or more subdirectories for version builds established via C<Link> to a source subdirectory. Cons will search for derived files in the appropriate build subdirectories under the repository tree.
=head1 Default targets

Until now, we've demonstrated invoking Cons with an explicit target to build:

  % cons hello

Normally, Cons does not build anything unless a target is specified, but specifying '.' (the current directory) will build everything:

  % cons        # does not build anything

  % cons .      # builds everything under the top-level directory

Adding the C<Default> method to any F<Construct> or F<Conscript> file will add the specified targets to a list of default targets. Cons will build these defaults if there are no targets specified on the command line. So adding the following line to the top-level F<Construct> file will mimic Make's typical behavior of building everything by default:

  Default '.';

The following would add the F<hello> and F<goodbye> commands (in the same directory as the F<Construct> or F<Conscript> file) to the default list:

  Default qw(
      hello
      goodbye
  );

The C<Default> method may be used more than once to add targets to the default list.

=head1 Selective builds

Cons provides two methods for reducing the size of a given build. The first is by specifying targets on the command line, and the second is a method for pruning the build tree. We'll consider target specification first.

=head2 Selective targeting

Like make, Cons allows the specification of ``targets'' on the command line. Cons targets may be either files or directories. When a directory is specified, this is simply a short-hand notation for every derivable product--that Cons knows about--in the specified directory and below. For example:

  % cons build/hello/hello.o

means build F<hello.o> and everything that F<hello.o> might need. This is from a previous version of the B<Hello, World!> program, in which F<hello.o> depended upon a header file installed under the F<export> directory. If that header is not up-to-date (because someone modified the corresponding source header), then it will be rebuilt, even though it is in a directory remote from F<build/hello>.

In this example:

  % cons build

everything in the F<build> directory is built, if necessary. Again, this may cause more files to be built. In particular, both the exported header file and the F<libworld.a> library are required by the F<build/hello> directory, and so they will be built if they are out-of-date.
If we do, instead:

  % cons export

then only the files that should be installed in the export directory will be rebuilt, if necessary, and then installed there. Note that C<cons build> might build files that C<cons export> doesn't build, and vice-versa.

=head2 No ``special'' targets

With Cons, make-style ``special'' targets are not required. The simplest analog with Cons is to use special F<export> directories, instead. Let's suppose, for example, that you have a whole series of unit tests that are associated with your code. The tests live in the source directory near the code. Normally, however, you don't want to build these tests. One solution is to provide all the build instructions for creating the tests, and then to install the tests into a separate part of the tree. If we install the tests in a top-level directory called F<tests>, then:

  % cons tests

will build all the tests.

  % cons export

will build the production version of the system (but not the tests), and:

  % cons build

should probably be avoided (since it will compile tests unnecessarily).

If you want to build just a single test, then you could explicitly name the test (in either the F<tests> directory or the source directory). You could also aggregate the tests into a convenient hierarchy within the tests directory. This hierarchy need not necessarily match the source hierarchy, in much the same manner that the include hierarchy probably doesn't match the source hierarchy (the include hierarchy is unlikely to be more than two levels deep, for C programs).

If you want to build absolutely everything in the tree (subject to whatever options you select), you can use:

  % cons .

This is not particularly efficient, since it will redundantly walk all the trees, including the source tree. The source tree, of course, may have buildable objects in it--nothing stops you from doing this, even if you normally build in a separate build tree.

=head1 Build Pruning

In conjunction with target selection, B<build pruning> can be used to reduce the scope of the build.
In the previous peAcH and baNaNa example, we have already seen how script-driven build pruning can be used to make only half of the potential build available for any given invocation of C<cons>. Cons also provides, as a convenience, a command-line convention that allows you to specify which F<Conscript> files actually get ``built''--that is, incorporated into the build tree. For example:

  % cons build +world

The C<+> argument introduces a Perl regular expression. This must, of course, be quoted at the shell level if there are any shell meta-characters within the expression. The expression is matched against each F<Conscript> file which has been mentioned in a C<Build> statement, and only those scripts with matching names are actually incorporated into the build tree. Multiple such arguments are allowed, in which case a match against any of them is sufficient to cause a script to be included.

In the example above, the F<hello> program will not be built, since the F<Conscript> file that builds it does not match the pattern, and so Cons has no knowledge of it. The F<libworld.a> archive will be built, however, if need be.

There are a couple of uses for build pruning via the command line. Perhaps the most useful is the ability to make local changes, and then, with sufficient knowledge of the consequences of those changes, restrict the size of the build tree in order to speed up the rebuild time. A second use for build pruning is to actively prevent the recompilation of certain files that you know will recompile due to, for example, a modified header file. You may know that either the changes to the header file are immaterial, or that the changes may be safely ignored for most of the tree, for testing purposes. With Cons, the view is that it is pragmatic to admit this type of behavior, with the understanding that on the next full build everything that needs to be rebuilt will be. There is no equivalent to a ``make touch'' command, to mark files as permanently up-to-date. So any risk that is incurred by build pruning is mitigated.
For release-quality work, obviously, we recommend that you do not use build pruning (it's perfectly OK to use during integration, however, for checking compilation, etc. Just be sure to do an unconstrained build before committing the integration).

=head1 Temporary overrides

Cons provides a very simple mechanism for overriding aspects of a build. The essence is that you write an override file containing one or more C<Override> commands, and you specify this on the command line when you run C<cons>:

  % cons -o over export

will build the F<export> directory, with all derived files subject to the overrides present in the F<over> file. If you leave out the C<-o> option, then everything necessary to remove all overrides will be rebuilt.

=head2 Overriding environment variables

The override file can contain two types of overrides. The first is incoming environment variables. These are normally accessible by the F<Construct> file from the C<%ENV> hash variable. These can trivially be overridden in the override file by setting the appropriate elements of C<%ENV> (these could also be overridden in the user's environment, of course).

=head2 The Override command

The second type of override is accomplished with the C<Override> command, which looks like this:

  Override <regexp>, <var1> => <value1>, <var2> => <value2>, ...;

The regular expression I<regexp> is matched against every derived file that is a candidate for the build. If the derived file matches, then the variable/value pairs are used to override the values in the construction environment associated with the derived file.

Let's suppose that we have a construction environment like this:

  $CONS = new cons(
      COPT   => '',
      CDBG   => '-g',
      CFLAGS => '%COPT %CDBG',
  );

Then if we have an override file F<over> containing this command:

  Override '\.o$', COPT => '-O', CDBG => '';

then any C<cons> invocation with C<-o over> that creates F<.o> files via this environment will cause them to be compiled with C<-O> and no C<-g>. The override could, of course, be restricted to a single directory by the appropriate selection of a regular expression.
Here's the original version of the Hello, World! program, built with this environment. Note that Cons rebuilds the appropriate pieces when the override is applied or removed:

  % cons hello
  cc -g -c hello.c -o hello.o
  cc -o hello hello.o
  % cons -o over hello
  cc -O -c hello.c -o hello.o
  cc -o hello hello.o
  % cons -o over hello
  cons: "hello" is up-to-date.
  % cons hello
  cc -g -c hello.c -o hello.o
  cc -o hello hello.o

It's important that the C<Override> command be used only for temporary, on-the-fly overrides necessary for development, because the overrides are not platform-independent and because they rely too much on intimate knowledge of the workings of the scripts. For temporary use, however, they are exactly what you want.

Note that it is still useful to provide, say, the ability to create a fully optimized version of a system for production use--from the F<Construct> and F<Conscript> files. This way you can tailor the optimized system to the platform. Where optimizer trade-offs need to be made (particular files may not be compiled with full optimization, for example), then these can be recorded for posterity (and reproducibility) directly in the scripts.

=head1 More on construction environments

=head2 Default construction variables

We have mentioned, and used, the concept of a B<construction environment> many times in the preceding pages. Now it's time to make this a little more concrete. With the following statement:

  $env = new cons();

a reference to a new, default construction environment is created. This contains a number of construction variables and some methods.
At the present writing, the default list of construction variables is defined as follows:

  CC            => 'cc',
  CFLAGS        => '',
  CCCOM         => '%CC %CFLAGS %_IFLAGS -c %< -o %>',
  INCDIRPREFIX  => '-I',
  CXX           => '%CC',
  CXXFLAGS      => '%CFLAGS',
  CXXCOM        => '%CXX %CXXFLAGS %_IFLAGS -c %< -o %>',
  LINK          => '%CXX',
  LINKCOM       => '%LINK %LDFLAGS -o %> %< %_LDIRS %LIBS',
  LINKMODULECOM => '%LD -r -o %> %<',
  LIBDIRPREFIX  => '-L',
  AR            => 'ar',
  ARFLAGS       => 'r',
  ARCOM         => "%AR %ARFLAGS %> %<\n%RANLIB %>",
  RANLIB        => 'ranlib',
  AS            => 'as',
  ASFLAGS       => '',
  ASCOM         => '%AS %ASFLAGS %< -o %>',
  LD            => 'ld',
  LDFLAGS       => '',
  PREFLIB       => 'lib',
  SUFLIB        => '.a',
  SUFLIBS       => '.so:.a',
  SUFOBJ        => '.o',
  ENV           => { 'PATH' => '/bin:/usr/bin' },

On Win32 systems (Windows NT), the following construction variables are overridden in the default:

  CC            => 'cl',
  CFLAGS        => '/nologo',
  CCCOM         => '%CC %CFLAGS %_IFLAGS /c %< /Fo%>',
  CXXCOM        => '%CXX %CXXFLAGS %_IFLAGS /c %< /Fo%>',
  INCDIRPREFIX  => '/I',
  LINK          => 'link',
  LINKCOM       => '%LINK %LDFLAGS /out:%> %< %_LDIRS %LIBS',
  LINKMODULECOM => '%LD /r /o %> %<',
  LIBDIRPREFIX  => '/LIBPATH:',
  AR            => 'lib',
  ARFLAGS       => '/nologo ',
  ARCOM         => "%AR %ARFLAGS /out:%> %<",
  RANLIB        => '',
  LD            => 'link',
  LDFLAGS       => '/nologo ',
  PREFLIB       => '',
  SUFEXE        => '.exe',
  SUFLIB        => '.lib',
  SUFLIBS       => '.dll:.lib',
  SUFOBJ        => '.obj',

These variables are used by the various methods associated with the environment; in particular, any method that ultimately invokes an external command will substitute these variables into the final command, as appropriate. For example, the C<Objects> method takes a number of source files and arranges to derive, if necessary, the corresponding object files:

  Objects $env 'foo.c', 'bar.c';

This will arrange to produce, if necessary, F<foo.o> and F<bar.o>. The command invoked is simply C<%CCCOM>, which expands, through substitution, to the appropriate external command required to build each object. We will explore the substitution rules further under the C<Command> method, below.
The construction variables are also used for other purposes. For example, C<CPPPATH> is used to specify a colon-separated path of include directories. These are intended to be passed to the C preprocessor and are also used by the C-file scanning machinery to determine the dependencies involved in a C compilation.

Variables beginning with an underscore are created by various methods, and should normally be considered ``internal'' variables. For example, when a method is called which calls for the creation of an object from a C source, the variable C<_IFLAGS> is created: this corresponds to the C<-I> switches required by the C compiler to represent the directories specified by C<CPPPATH>.

Note that, for any particular environment, the value of a variable is set once, and then never reset (to change a variable, you must create a new environment. Methods are provided for copying existing environments for this purpose). Some internal variables, such as C<_IFLAGS>, are created on demand, but once set, they remain fixed for the life of the environment.

The C<CFLAGS>, C<LDFLAGS>, and C<ARFLAGS> variables all supply a place for passing options to the compiler, loader, and archiver, respectively. Less obviously, the C<INCDIRPREFIX> variable specifies the option string to be prepended to each include directory so that the compiler knows where to find F<.h> files. Similarly, the C<LIBDIRPREFIX> variable specifies the option string to be prepended to each directory that the linker should search for libraries.

Another variable, C<ENV>, is used to determine the system environment during the execution of an external command. By default, the only environment variable that is set is C<PATH>, which is the execution path for a UNIX command. For the utmost reproducibility, you should really arrange to set your own execution path in your top-level F<Construct> file (or perhaps by importing an appropriate construction package with the Perl C<use> command). The default variables are intended to get you off the ground.
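Following the advice above, here is a minimal sketch of pinning the execution path in a top-level F<Construct> file; the path shown is only an example, and should be chosen to suit your site:

  # Set the external-command search path explicitly, rather than
  # relying on the default '/bin:/usr/bin', for reproducible builds.
  # The directories listed here are illustrative.
  $env = new cons(
      ENV => { PATH => '/usr/local/bin:/bin:/usr/bin' },
  );

Remember that overriding C<ENV> replaces the whole hash, so any other environment variables your commands need must be listed along with C<PATH>.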
=head2 Interpolating construction variables

Construction environment variables may be interpolated in the source and target file names by prefixing the construction variable name with C<%>:

  $env = new cons(
      DESTDIR => 'programs',
      SRCDIR  => 'src',
  );
  Program $env '%DESTDIR/hello', '%SRCDIR/hello.c';

Expansion of construction variables is recursive--that is, the file name(s) will be re-expanded until no more substitutions can be made. If a construction variable is not defined in the environment, then the null string will be substituted.

=head1 Default construction methods

The list of default construction methods includes the following:

=head2 The C<new> constructor

The C<new> method is a Perl object constructor. That is, it is not invoked via a reference to an existing construction environment but, rather, statically, using the name of the Perl B<package> where the constructor is defined. The method is invoked like this:

  $env = new cons();

The environment you get back is blessed into the package C<cons>, which means that it will have associated with it the default methods described below. Individual construction variables can be overridden by providing name/value pairs in an override list. Note that to override any command environment variable (i.e. anything under C<ENV>), you will have to override all of them. You can get around this difficulty by using the C<copy> method on an existing construction environment.

=head2 The C<clone> method

The C<clone> method creates a clone of an existing construction environment, and can be called as in the following example:

  $env2 = $env1->clone();

You can provide overrides in the usual manner to create a different environment from the original. If you just want a new name for the same environment (which may be helpful when exporting environments to existing components), you can just use simple assignment.

=head2 The C<copy> method

The C<copy> method extracts the externally defined construction variables from an environment and returns them as a list of name/value pairs.
Overrides can also be provided, in which case the overridden values will be returned, as appropriate. The returned list can be assigned to a hash, as shown in the prototype below, but it can also be manipulated in other ways:

  %env = $env1->copy();

The value of C<ENV>, which is itself a hash, is also copied to a new hash, so this may be changed without fear of affecting the original environment. So, for example, if you really want to override just the C<PATH> variable in the default environment, you could do the following:

  %cons = new cons()->copy();
  $cons{ENV}{PATH} = "<your path here>";
  $cons = new cons(%cons);

This will leave anything else that might be in the default execution environment undisturbed.

=head2 The C<Install> method

The C<Install> method arranges for the specified files to be installed in the specified directory. The installation is optimized: the file is not copied if it can be linked. If this is not the desired behavior, you will need to use a different method to install the file. It is called as follows:

  Install $env <directory>, <names>;

Note that, while the files to be installed may be arbitrarily named, only the last component of each name is used for the installed target name. So, for example, if you arrange to install F<foo/bar> in some directory, this will create a F<bar> file in that directory (not F<foo/bar>).

=head2 The C<InstallAs> method

The C<InstallAs> method arranges for the specified source file(s) to be installed as the specified target file(s). Multiple files should be specified as a file list. The installation is optimized: the file is not copied if it can be linked. If this is not the desired behavior, you will need to use a different method to install the file. C<InstallAs> works in two ways:

Single file install:

  InstallAs $env TgtFile, SrcFile;

Multiple file install:

  InstallAs $env ['tgt1', 'tgt2'], ['src1', 'src2'];

Or, even as:

  @srcs = qw(src1 src2 src3);
  @tgts = qw(tgt1 tgt2 tgt3);
  InstallAs $env [@tgts], [@srcs];

Both the target and the source lists should be of the same length.
=head2 The C<Precious> method

The C<Precious> method asks Cons not to delete the specified file or list of files before building them again. It is invoked as:

  Precious <files>;

This is especially useful for allowing incremental updates to libraries or debug information files which are updated rather than rebuilt anew each time. Cons will still delete the files when the C<-r> flag is specified.

=head2 The C<Command> method

The C<Command> method is a catchall method which can be used to arrange for any external command to be called to update the target. For this command, a target file and list of inputs is provided. In addition, a construction command line, or lines, is provided as a string (this string may have multiple commands embedded within it, separated by newlines). C<Command> is called as follows:

  Command $env <target>, <inputs>, <construction command>;

The target is made dependent upon the list of input files specified, and the inputs must be built successfully or Cons will not attempt to build the target.

Within the construction command, any variable from the construction environment may be introduced by prefixing the name of the construction variable with C<%>. This is recursive: the command is expanded until no more substitutions can be made. If a construction variable is not defined in the environment, then the null string will be substituted. A doubled C<%%> will be replaced by a single C<%> in the construction command.

There are several pseudo variables which will also be expanded:

=over 10

=item %>

The target file name (in a multi-target command, this is always the first target mentioned).

=item %0

Same as C<%E<gt>>.

=item %1, %2, ..., %9

These refer to the first through ninth input file, respectively.

=item %E<lt>

The full set of inputs. If any of these have been used anywhere else in the current command line (via C<%1>, C<%2>, etc.), then those will be deleted from the list provided by C<%E<lt>>.
Consider the following command found in a F<Conscript> file in the F<test> directory:

  Command $env 'tgt', qw(foo bar baz), qq(
      echo %< -i %1 > %>
      echo %< -i %2 >> %>
      echo %< -i %3 >> %>
  );

If F<tgt> needed to be updated, then this would result in the execution of the following commands, assuming that no remapping has been established for the F<test> directory:

  echo test/bar test/baz -i test/foo > test/tgt
  echo test/foo test/baz -i test/bar >> test/tgt
  echo test/foo test/bar -i test/baz >> test/tgt

=back

Any of the above pseudo variables may be followed immediately by one of the following suffixes to select a portion of the expanded path name:

  :a    the absolute path to the file name
  :b    the directory plus the file name stripped of any suffix
  :d    the directory
  :f    the file name
  :s    the file name suffix
  :F    the file name stripped of any suffix

Continuing with the above example, C<%E<lt>:f> would expand to C<foo bar baz>, and C<%E<gt>:d> would expand to C<test>.

It is possible to programmatically rewrite part of the command by enclosing part of it between C<%[> and C<%]>. This will call the construction variable named as the first word enclosed in the brackets as a Perl code reference; the results of this call will be used to replace the contents of the brackets in the command line. For example, given an existing input file named F<tgt.in>:

  @keywords = qw(foo bar baz);
  $env = new cons(X_COMMA => sub { join(",", @_) });
  Command $env 'tgt', 'tgt.in', qq(
      echo '# Keywords: %[X_COMMA @keywords %]' > %>
      cat %< >> %>
  );

This will execute:

  echo '# Keywords: foo,bar,baz' > tgt
  cat tgt.in >> tgt

After substitution occurs, strings of white space are converted into single blanks, and leading and trailing white space is eliminated. It is therefore not possible to introduce variable-length white space in strings passed into a command, without resorting to some sort of shell quoting.

If a multi-line command string is provided, the commands are executed sequentially.
If any of the commands fails, then none of the rest are executed, and the target is not marked as updated, i.e. a new signature is not stored for the target. Normally, if all the commands succeed, and return a zero status (or whatever platform-specific indication of success is required), then a new signature is stored for the target. If a command erroneously reports success even after a failure, then Cons will assume that the target file created by that command is accurate and up-to-date.

The first word of each command string, after expansion, is assumed to be an executable command looked up on the C<PATH> environment variable (which is, in turn, specified by the C<ENV> construction variable). If this command is found on the path, then the target will depend upon it: the command will therefore be automatically built, as necessary.

It's possible to write multi-part commands to some shells, separated by semi-colons. Only the first command word will be depended upon, however, so if you write your command strings this way, you must either explicitly set up a dependency (with the C<Depends> method), or be sure that the command you are using is a system command which is expected to be available. If it isn't available, you will, of course, get an error.

If any command (even one within a multi-line command) begins with C<[perl]>, the remainder of that command line will be evaluated by the running Perl instead of being forked by the shell. If an error occurs in parsing the Perl code or if the Perl expression returns 0 or undef, the command will be considered to have failed.
For example, here is a simple command which creates a file F<foo> directly from Perl:

  $env = new cons();
  Command $env 'foo', qq([perl] open(FOO,'>foo');print FOO "hi\\n"; close(FOO); 1);

Note that when the command is executed, you are in the same package as when the F<Construct> or F<Conscript> file was read, so you can call Perl functions you've defined in the same F<Construct> or F<Conscript> file in which the C<Command> appears:

  $env = new cons();
  sub create_file {
        my $file = shift;
        open(FILE, ">$file");
        print FILE "hi\n";
        close(FILE);
        return 1;
  }
  Command $env 'foo', "[perl] &create_file('%>')";

The Perl string will be used to generate the signature for the derived file, so if you change the string, the file will be rebuilt. The contents of any subroutines you call, however, are not part of the signature, so if you modify a called subroutine such as C<create_file> above, the target will I<not> be rebuilt. Caveat user.

Cons normally prints a command before executing it. This behavior is suppressed if the first character of the command is C<@>. Note that you may need to separate the C<@> from the command name or escape it to prevent C<@cmd> from looking like an array to Perl quote operators that perform interpolation:

  # The first command line is incorrect,
  # because "@cp" looks like an array
  # to the Perl qq// function.
  # Use the second form instead.
  Command $env 'foo', 'foo.in', qq(
        @cp %< tempfile
        @ cp tempfile %>
  );

If there are shell meta characters anywhere in the expanded command line, such as C<E<lt>>, C<E<gt>>, quotes, or semi-colon, then the command will actually be executed by invoking a shell. This means that a command such as:

  cd foo

alone will typically fail, since there is no command C<cd> on the path. But the command string:

  cd %<:d; tar cf %>:f %<:f

when expanded will still contain the shell meta character semi-colon, and a shell will be invoked to interpret the command. Since C<cd> is interpreted by this sub-shell, the command will execute as expected.
To specify a command with multiple targets, you can specify a reference to a list of targets. In Perl, a list reference can be created by enclosing a list in square brackets. Hence the following command:

  Command $env ['foo.h', 'foo.c'], 'foo.template', q(
        gen %1
  );

could be used in a case where the command C<gen> creates two files, both F<foo.h> and F<foo.c>.

=head2 The C<Objects> method

The C<Objects> method arranges to create the object files that correspond to the specified source files. It is invoked as shown below:

  @files = Objects $env <source or object files>;

Under Unix, source files ending in F<.s> and F<.c> are currently supported, and will be compiled into a name of the same file ending in F<.o>. By default, all files are created by invoking the external command which results from expanding the C<CCCOM> construction variable, with %< and %> set to the source and object files, respectively (see the C<Command> method for expansion details). The variable C<CPPPATH> is also used when scanning source files for dependencies. This is a colon-separated list of pathnames, and is also used to create the construction variable C<_IFLAGS>, which will contain the appropriate list of C<-I> options for the compilation. Any relative pathnames in C<CPPPATH> are interpreted relative to the directory in which the associated construction environment was created (absolute and top-relative names may also be used). This variable is used by C<CCCOM>. The behavior of this command can be modified by changing any of the variables which are interpolated into C<CCCOM>, such as C<CC>, C<CFLAGS>, and, indirectly, C<CPPPATH>. It's also possible to replace the value of C<CCCOM>, itself. As a convenience, this method returns the list of object filenames.

=head2 The C<Program> method

The C<Program> method arranges to link the specified program with the specified object files. It is invoked in the following manner:

  Program $env <program name>, <source or object files>;

The program name will have the value of the C<SUFEXE> construction variable appended (by default, C<.exe> on Win32 systems, nothing on Unix systems) if the suffix is not already present.
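For instance, here is a minimal sketch (directory and file names invented) that compiles two sources with a header search path:

```perl
# Hypothetical Construct fragment: CPPPATH is scanned for #include
# dependencies and also generates the -I options in %_IFLAGS.
$env = new cons(
      CC      => 'gcc',
      CFLAGS  => '-O2',
      CPPPATH => 'include:util/include',
);
@objs = Objects $env 'main.c', 'parse.c';
# On Unix, @objs now names the derived object files (main.o, parse.o).
```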
Source files may be specified in place of object files--the C<Objects> method will be invoked to arrange the conversion of all the files into object files, and hence all the observations about the C<Objects> method, above, apply to this method also.

The actual linking of the program will be handled by an external command which results from expanding the C<LINKCOM> construction variable, with %< set to the object files to be linked (in the order presented), and %> set to the target (see the C<Command> method for expansion details). The user may set additional variables in the construction environment, including C<LINK>, to define which program to use for linking, C<LIBPATH>, a colon-separated list of library search paths, for use with library specifications of the form I<-llib>, and C<LIBS>, specifying the list of libraries to link against (in either I<-llib> form or just as pathnames). Relative pathnames in both C<LIBPATH> and C<LIBS> are interpreted relative to the directory in which the associated construction environment is created (absolute and top-relative names may also be used). Cons automatically sets up dependencies on any libraries mentioned in C<LIBS>: those libraries will be built before the program is linked.

=head2 The C<Library> method

The C<Library> method arranges to create the specified library from the specified object files. It is invoked as follows:

  Library $env <library name>, <source or object files>;

The library name will have the value of the C<SUFLIB> construction variable appended (by default, C<.lib> on Win32 systems, C<.a> on Unix systems) if the suffix is not already present.

Source files may be specified in place of object files--the C<Objects> method will be invoked to arrange the conversion of all the files into object files, and hence all the observations about the C<Objects> method, above, apply to this method also.
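A sketch tying these together (library and file names invented); because libworld is named in C<LIBS>, Cons will arrange to build it before linking the program:

```perl
# Hypothetical Construct fragment.
$env = new cons(
      LIBPATH => 'libs',        # searched for -l style specifications
      LIBS    => '-lworld -lm', # libworld is built by the Library call below
);
Library $env 'libs/libworld', 'world.c';
Program $env 'hello', 'hello.c';
```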
The actual creation of the library will be handled by an external command which results from expanding the C<ARCOM> construction variable, with %< set to the library members (in the order presented), and %> to the library to be created (see the C<Command> method for expansion details). The user may set variables in the construction environment which will affect the operation of the command. These include C<AR>, the archive program to use, C<ARFLAGS>, which can be used to modify the flags given to the program specified by C<AR>, and C<RANLIB>, the name of an archive index generation program, if needed (if the particular need does not require the latter functionality, then C<ARCOM> must be redefined to not reference C<RANLIB>).

The C<Library> method allows the same library to be specified in multiple method invocations. All of the contributing objects from all the invocations (which may be from different directories) are combined and generated by a single archive command. Note, however, that if you prune a build so that only part of a library is specified, then only that part of the library will be generated (the rest will disappear!).

=head2 The C<Module> method

The C<Module> method is a combination of the C<Program> and C<Command> methods. Rather than generating an executable program directly, this command allows you to specify your own command to actually generate a module. The method is invoked as follows:

  Module $env <module name>, <source or object files>, <construction command>;

This command is useful in instances where you wish to create, for example, dynamically loaded modules, or statically linked code libraries.

=head2 The C<Depends> method

The C<Depends> method allows you to specify additional dependencies for a target. It is invoked as follows:

  Depends $env <target>, <dependencies>;

This may be occasionally useful, especially in cases where no scanner exists (or is writable) for particular types of files. Normally, dependencies are calculated automatically from a combination of the explicit dependencies set up by the method invocation or by scanning source files.
A set of identical dependencies for multiple targets may be specified using a reference to a list of targets. In Perl, a list reference can be created by enclosing a list in square brackets. Hence the following command:

  Depends $env ['foo', 'bar'], 'input_file_1', 'input_file_2';

specifies that both the F<foo> and F<bar> files depend on the listed input files.

=head2 The C<Ignore> method

The C<Ignore> method allows you to explicitly ignore dependencies that Cons infers on its own. It is invoked as follows:

  Ignore <patterns>;

This can be used to avoid recompilations due to changes in system header files or utilities that are known to not affect the generated targets. If, for example, a program is built in an NFS-mounted directory on multiple systems that have different copies of F<stdio.h>, the differences will affect the signatures of all derived targets built from source files that C<#include E<lt>stdio.hE<gt>>. This will cause all those targets to be rebuilt when changing systems. If this is not desirable behavior, then the following line will remove the dependencies on the F<stdio.h> file:

  Ignore '^/usr/include/stdio\.h$';

Note that the arguments to the C<Ignore> method are regular expressions, so special characters must be escaped and you may wish to anchor the beginning or end of the expression with C<^> or C<$> characters.

=head2 The C<Salt> method

The C<Salt> method adds a constant value to the signature calculation for every derived file. It is invoked as follows:

  Salt $string;

Changing the Salt value will force a complete rebuild of every derived file. This can be used to force rebuilds in certain desired circumstances. For example,

  Salt `uname -s`;

would force a complete rebuild of every derived file whenever the operating system on which the build is performed (as reported by C<uname -s>) changes.

=head2 The C<UseCache> method

The C<UseCache> method instructs Cons to maintain a cache of derived files, to be shared among separate build trees of the same project.
  UseCache("cache/") || warn("cache directory not found");

=head2 The C<SourcePath> method

The C<SourcePath> method returns the real source path name of a file, as opposed to the path name within a build directory. It is invoked as follows:

  $path = SourcePath <build path>;

=head2 The C<ConsPath> method

The C<ConsPath> method returns true if the supplied path is a derivable file, and returns undef (false) otherwise. It is invoked as follows:

  $result = ConsPath <path>;

=head2 The C<SplitPath> method

The C<SplitPath> method looks up multiple path names in a string separated by the default path separator for the operating system (':' on UNIX systems, ';' on Windows NT), and returns the fully-qualified names. It is invoked as follows:

  @paths = SplitPath <pathlist>;

The C<SplitPath> method will convert names prefixed '#' to the appropriate top-level build name (without the '#') and will convert relative names to top-level names.

=head2 The C<DirPath> method

The C<DirPath> method returns the build path name(s) of a directory or list of directories. It is invoked as follows:

  $cwd = DirPath <paths>;

The most common use for the C<DirPath> method is:

  $cwd = DirPath '.';

to fetch the path to the current directory of a subsidiary F<Conscript> file.

=head2 The C<FilePath> method

The C<FilePath> method returns the build path name(s) of a file or list of files. It is invoked as follows:

  $file = FilePath <path>;

=head2 The C<Help> method

The C<Help> method specifies help text that will be displayed when the user invokes C<cons -h>. This can be used to provide documentation of specific targets, values, build options, etc. for the build tree. It is invoked as follows:

  Help <help text>;

The C<Help> method may only be called once, and should typically be specified in the top-level F<Construct> file.

=head1 Extending Cons

=head2 Overriding construction variables

There are several ways of extending Cons, which vary in degree of difficulty. The simplest method is to define your own construction environment, based on the default environment, but modified to reflect your particular needs. This will often suffice for C-based applications.
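As a combined sketch of the path methods above (file and directory names invented), a subsidiary F<Conscript> file might do:

```perl
# Hypothetical Conscript fragment.
$cwd  = DirPath '.';              # build path of this directory
$src  = SourcePath 'parser.y';    # real source location, not the build copy
$file = FilePath 'parser.c';      # build path of a single file
@inc  = SplitPath 'include:#export/include';  # '#' names are top-relative
warn "parser.c is not derivable\n" unless ConsPath 'parser.c';
```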
You can use the C<new> constructor, and the C<clone> and C<copy> methods to create hybrid environments. These changes can be entirely transparent to the underlying F<Conscript> files.

=head2 Adding new methods

For slightly more demanding changes, you may wish to add new methods to the C<cons> package. Here's an example of a very simple extension, C<InstallScript>, which installs a tcl script in a requested location, but edits the script first to reflect a platform-dependent path that needs to be installed in the script:

  # cons::InstallScript - Create a platform dependent version of a shell
  # script by replacing string ``#!your-path-here'' with platform specific
  # path $BIN_DIR.
  sub cons::InstallScript {
      my ($env, $dst, $src) = @_;
      Command $env $dst, $src, qq(
              sed s+your-path-here+$BIN_DIR+ %< > %>
              chmod oug+x %>
      );
  }

Notice that this method is defined directly in the C<cons> package (by prefixing the name with C<cons::>). A change made in this manner will be globally visible to all environments, and could be called as in the following example:

  InstallScript $env "$BIN/foo", "foo.tcl";

For a small improvement in generality, the C<BINDIR> variable could be passed in as an argument or taken from the construction environment--as C<%BINDIR>.

=head2 Overriding methods

Instead of adding the method to the C<cons> name space, you could define a new package which inherits existing methods from the C<cons> package and overrides or adds others. This can be done using Perl's inheritance mechanisms.

The following example defines a new package C<cons::switch> which overrides the standard C<Library> method. The overridden method builds linked library modules, rather than library archives. A new constructor is provided. Environments created with this constructor will have the new library method; others won't.
  package cons::switch;
  BEGIN {@ISA = 'cons'}

  sub new {
      shift;
      bless new cons(@_);
  }

  sub Library {
      my($env) = shift;
      my($lib) = shift;
      my(@objs) = Objects $env @_;
      Command $env $lib, @objs, q(
            %LD -r %LDFLAGS %< -o %>
      );
  }

This functionality could be invoked as in the following example:

  $env = new cons::switch(@overrides);
  ...
  Library $env 'lib.o', 'foo.c', 'bar.c';

=head1 Invoking Cons

The C<cons> command is usually invoked from the root of the build tree. A F<Construct> file must exist in that directory. If the C<-f> argument is used, then an alternate F<Construct> file may be used (and, possibly, an alternate root, since C<cons> will cd to the F<Construct> file's containing directory).

If C<cons> is invoked from a child of the root of the build tree with the C<-t> argument, it will walk up the directory hierarchy looking for a F<Construct> file. (An alternate name may still be specified with C<-f>.) The targets supplied on the command line will be modified to be relative to the discovered F<Construct> file. For example, from a directory containing a top-level F<Construct> file, the following invocation:

  % cd libfoo/subdir
  % cons -t target

is exactly equivalent to:

  % cons libfoo/subdir/target

If there are any C<Default> targets specified in the directory hierarchy's F<Construct> or F<Conscript> files, only the default targets at or below the directory from which C<cons> was invoked will be built.

The command is invoked as follows:

  cons <arguments> -- <construct-args>

where I<arguments> can be any of the following, in any order:

=over 10

=item I<target>

Build the specified target. If I<target> is a directory, then recursively build everything within that directory.

=item I<+pattern>

Limit the F<Conscript> files considered to just those that match I<pattern>, which is a Perl regular expression. Multiple C<+> arguments are accepted.

=item I<name>=I<value>

Sets I<name> to value I<value> in the C<ARG> hash passed to the top-level F<Construct> file.

=item C<-cc>

Show command that would have been executed, when retrieving from cache. No indication that the file has been retrieved is given; this is useful for generating build logs that can be compared with real build logs.

=item C<-cd>

Disable all caching.
Do not retrieve from cache nor flush to cache.

=item C<-cr>

Build dependencies in random order. This is useful when building multiple similar trees with caching enabled.

=item C<-cs>

Synchronize existing build targets that are found to be up-to-date with cache. This is useful if caching has been disabled with C<-cc> or just recently enabled with C<UseCache>.

=item C<-d>

Enable dependency debugging.

=item C<-f> I<file>

Use the specified file instead of F<Construct> (but first change to containing directory of I<file>).

=item C<-h>

Show a help message local to the current build if one such is defined, and exit.

=item C<-k>

Keep going as far as possible after errors.

=item C<-o> I<file>

Read override file I<file>.

=item C<-p>

Show construction products in specified trees. No build is attempted.

=item C<-pa>

Show construction products and associated actions. No build is attempted.

=item C<-pw>

Show products and where they are defined. No build is attempted.

=item C<-q>

Don't be verbose about Installing and Removing targets.

=item C<-r>

Remove construction products associated with the specified targets. No build is attempted.

=item C<-R> I<repository>

Search for files in I<repository>. Multiple B<-R> I<repository> directories are searched in the order specified.

=item C<-t>

Traverse up the directory hierarchy looking for a F<Construct> file, if none exists in the current directory. Targets will be modified to be relative to the F<Construct> file.

=item C<-v>

Show C<cons> version and continue processing.

=item C<-V>

Show C<cons> version and exit.

=item C<-wf> I<file>

Write all filenames considered into I<file>.

=item C<-x>

Show a help message similar to this one, and exit.

=back

And I<construct-args> can be any arguments that you wish to process in the F<Construct> file. Note that there should be a B<--> separating the arguments to cons and the arguments that you wish to process in the F<Construct> file.

Processing of I<construct-args> can be done by any standard package like B<Getopt> or its variants, or any user-defined package. B<cons> will pass in the I<construct-args> as B<@ARGV> and will not attempt to interpret anything after the B<-->.
  % cons -R /usr/local/repository -d os=solaris +driver -- -c test -f DEBUG

would pass the following to cons

  -R /usr/local/repository -d os=solaris +driver

and the following, to the top level F<Construct> file as B<@ARGV>

  -c test -f DEBUG

Note that C<cons -r .> is equivalent to a full recursive C<make clean>, but requires no support in the F<Construct> file or any F<Conscript> files. This is most useful if you are compiling files into source directories (if you separate the F<build> and F<export> directories, then you can just remove the directories).

The options C<-p>, C<-pa>, and C<-pw> are extremely useful for use as an aid in reading scripts or debugging them. If you want to know what script installs F<export/include/foo.h>, for example, just type:

  % cons -pw export/include/foo.h

=head1 Using and writing dependency scanners

QuickScan allows simple target-independent scanners to be set up for source files. Only one QuickScan scanner may be associated with any given source file and environment.

QuickScan is invoked as follows:

  QuickScan CONSENV CODEREF, FILENAME [, PATH]

The subroutine referenced by CODEREF is expected to return a list of filenames included directly by FILENAME. These filenames will, in turn, be scanned. The optional PATH argument supplies a lookup path for finding FILENAME and/or files returned by the user-supplied subroutine. The PATH may be a reference to an array of lookup-directory names, or a string of names separated by the system's separator character (':' on UNIX systems, ';' on Windows NT).

The subroutine is called once for each line in the file, with $_ set to the current line. If the subroutine needs to look at additional lines, or, for that matter, the entire file, then it may read them itself, from the filehandle SCAN. It may also terminate the loop, if it knows that no further include information is available, by closing the filehandle.
Whether or not a lookup path is provided, QuickScan first tries to lookup the file relative to the current directory (for the top-level file supplied directly to QuickScan), or from the directory containing the file which referenced the file. This is not very general, but seems good enough--especially if you have the luxury of writing your own utilities and can control the use of the search path in a standard way.

Finally, the search path is, currently, colon separated. This may not make the NT camp happy.

Here's a real example, taken from a F<Conscript> file here:

  sub cons::SMFgen {
      my($env, @tables) = @_;
      foreach $t (@tables) {
          $env->QuickScan(sub { /\b\S*?\.smf\b/g },
                          "$t.smf", $env->{SMF_INCLUDE_PATH});
          $env->Command(
              ["$t.smdb.cc","$t.smdb.h","$t.snmp.cc","$t.ami.cc","$t.http.cc"],
              "$t.smf",
              q(
                  smfgen %( %SMF_INCLUDE_OPT %) %<
              )
          );
      }
  }

[NOTE that the form C<$env-E<gt>QuickScan ...> and C<$env-E<gt>Command ...> should not be necessary, but, for some reason, is required for this particular invocation. This appears to be a bug in Perl or a misunderstanding on my part; this invocation style does not always appear to be necessary.]

This finds all names of the form I<name>.smf in the file. It will return the names even if they're found within comments, but that's OK (the mechanism is forgiving of extra files; they're just ignored on the assumption that the missing file will be noticed when the program, in this example, smfgen, is actually invoked).

A scanner is only invoked for a given source file if it is needed by some target in the tree. It is only ever invoked once for a given source file.

Here is another way to build the same scanner. This one uses an explicit code reference, and also (unnecessarily, in this case) reads the whole file itself:

  sub myscan {
      my(@includes);
      do {
          push(@includes, /\b\S*?\.smf\b/g);
      } while <SCAN>;
      @includes
  }

Note that the order of the loop is reversed, with the loop test at the end. This is because the first line is already read for you.
This scanner can be attached to a source file by:

  QuickScan $env \&myscan, "$_.smf";

=head1 SUPPORT AND SUGGESTIONS

Cons is maintained by the user community. To subscribe, send mail to B<cons-discuss-request@gnu.org> with body B<subscribe>.

Please report any suggestions through the B<cons-discuss@gnu.org> mailing list.

=head1 BUGS

Sure to be some. Please report any bugs through the B<bug-cons@gnu.org> mailing list.

=head1 INFORMATION ABOUT CONS

Information about CONS can be obtained from the official cons web site B<http://www.dsmit.com/cons/> or its mirrors listed there.

The cons maintainers can be contacted by email at B<cons-maintainers@gnu.org>

=head1 AUTHORS

Originally by Bob Sidebotham. Then significantly enriched by the members of the Cons community B<cons-discuss@gnu.org>.

The Cons community would like to thank Ulrich Pfeifer for the original pod documentation derived from the F file. Cons documentation is now a part of the program itself.

=cut

NAME

Cons - A Software Construction System


DESCRIPTION

A guide and reference for version 2.2.0

Copyright (c) 1996-2000 Free Software Foundation, Inc.

This program is free software; you can redistribute it and/or modify it under the terms of the GNU General Public License as published by the Free Software Foundation; either version 2 of the License, or (at your option) any later version.

This program is distributed in the hope that it will be useful, but WITHOUT ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU General Public License for more details.

You should have received a copy of the GNU General Public License along with this program; see the file COPYING. If not, write to the Free Software Foundation, Inc., 59 Temple Place - Suite 330, Boston, MA 02111-1307, USA.


Introduction

Cons is a system for constructing, primarily, software, but is quite different from previous software construction systems. Cons was designed from the ground up to deal easily with the construction of software spread over multiple source directories. Cons makes it easy to create build scripts that are simple, understandable and maintainable. Cons ensures that complex software is easily and accurately reproducible.

Cons uses a number of techniques to accomplish all of this. Construction scripts are just Perl scripts, making them both easy to comprehend and very flexible. Global scoping of variables is replaced with an import/export mechanism for sharing information between scripts, significantly improving the readability and maintainability of each script. Construction environments are introduced: these are Perl objects that capture the information required for controlling the build process. Multiple environments are used when different semantics are required for generating products in the build tree. Cons implements automatic dependency analysis and uses this to globally sequence the entire build. Variant builds are easily produced from a single source tree. Intelligent build subsetting is possible, when working on localized changes. Overrides can be setup to easily override build instructions without modifying any scripts. MD5 cryptographic signatures are associated with derived files, and are used to accurately determine whether a given file needs to be rebuilt.

While offering all of the above, and more, Cons remains simple and easy to use. This will, hopefully, become clear as you read the remainder of this document.


Why Cons? Why not Make?

Cons is a make replacement. In the following paragraphs, we look at a few of the undesirable characteristics of make--and typical build environments based on make--that motivated the development of Cons.

Build complexity

Traditional make-based systems of any size tend to become quite complex. The original make utility and its derivatives have contributed to this tendency in a number of ways. Make is not good at dealing with systems that are spread over multiple directories. Various work-arounds are used to overcome this difficulty; the usual choice is for make to invoke itself recursively for each sub-directory of a build. This leads to complicated code, in which it is often unclear how a variable is set, or what effect the setting of a variable will have on the build as a whole. The make scripting language has gradually been extended to provide more possibilities, but these have largely served to clutter an already overextended language. Often, builds are done in multiple passes in order to provide appropriate products from one directory to another directory. This represents a further increase in build complexity.

Build reproducibility

The bane of all makes has always been the correct handling of dependencies. Most often, an attempt is made to do a reasonable job of dependencies within a single directory, but no serious attempt is made to do the job between directories. Even when dependencies are working correctly, make's reliance on a simple time stamp comparison to determine whether a file is out of date with respect to its dependents is not, in general, adequate for determining when a file should be rederived. If an external library, for example, is rebuilt and then ``snapped'' into place, the timestamps on its newly created files may well be earlier than the last local build, since it was built before it became visible.

Variant builds

Make provides only limited facilities for handling variant builds. With the proliferation of hardware platforms and the need for debuggable vs. optimized code, the ability to easily create these variants is essential. More importantly, if variants are created, it is important to either be able to separate the variants or to be able to reproduce the original or variant at will. With make it is very difficult to separate the builds into multiple build directories, separate from the source. And if this technique isn't used, it's also virtually impossible to guarantee at any given time which variant is present in the tree, without resorting to a complete rebuild.

Repositories

Make provides only limited support for building software from code that exists in a central repository directory structure. The VPATH feature of GNU make (and some other make implementations) is intended to provide this, but doesn't work as expected: it changes the path of the target file to the VPATH name too early in its analysis, and therefore searches for all dependencies in the VPATH directory. To ensure correct development builds, it is important to be able to create a file in a local build directory and have any files in a code repository (a VPATH directory, in make terms) that depend on the local file get rebuilt properly. This isn't possible with VPATH, without coding a lot of complex repository knowledge directly into the makefiles.


Keeping it simple

A few of the difficulties with make have been cited above. In this and subsequent sections, we shall introduce Cons and show how these issues are addressed.

Perl scripts

Cons is Perl-based. That is, Cons scripts--Conscript and Construct files, the equivalent to Makefile or makefile--are all written in Perl. This provides an immediate benefit: the language for writing scripts is a familiar one. Even if you don't happen to be a Perl programmer, it helps to know that Perl is basically just a simple declarative language, with a well-defined flow of control, and familiar semantics. It has variables that behave basically the way you would expect them to, subroutines, flow of control, and so on. There is no special syntax introduced for Cons. The use of Perl as a scripting language simplifies the task of expressing the appropriate solution to the often complex requirements of a build.

Hello, World!

To ground the following discussion, here's how you could build the Hello, World! C application with Cons:

  $env = new cons();
  Program $env 'hello', 'hello.c';

If you install this script in a directory, naming the script Construct, and create the hello.c source file in the same directory, then you can type cons hello to build the application:

  % cons hello
  cc -c hello.c -o hello.o
  cc -o hello hello.o

Construction environments

A key simplification of Cons is the idea of a construction environment. A construction environment is an object characterized by a set of key/value pairs and a set of methods. In order to tell Cons how to build something, you invoke the appropriate method via an appropriate construction environment. Consider the following example:

  $env = new cons(
        CC      =>      'gcc',
        LIBS    =>      'libworld.a'
  );
  Program $env 'hello', 'hello.c';

In this case, rather than using the default construction environment, as is, we have overridden the value of CC so that the GNU C Compiler equivalent is used, instead. Since this version of Hello, World! requires a library, libworld.a, we have specified that any program linked in this environment should be linked with that library. If the library exists already, well and good, but if not, then we'll also have to include the statement:

  Library $env 'libworld', 'world.c';

Now if you type cons hello, the library will be built before the program is linked, and, of course, gcc will be used to compile both modules:

  % cons hello
  gcc -c hello.c -o hello.o
  gcc -c world.c -o world.o
  ar r libworld.a world.o
  ar: creating libworld.a
  ranlib libworld.a
  gcc -o hello hello.o libworld.a

Automatic and complete dependency analysis

With Cons, dependencies are handled automatically. Continuing the previous example, note that when we modify world.c, world.o is recompiled, libworld.a recreated, and hello relinked:

  % vi world.c
    [EDIT]
  % cons hello
  gcc -c world.c -o world.o
  ar r libworld.a world.o
  ar: creating libworld.a
  ranlib libworld.a
  gcc -o hello hello.o libworld.a

This is a relatively simple example: Cons ``knows'' world.o depends upon world.c, because the dependency is explicitly set up by the Library method. It also knows that libworld.a depends upon world.o and that hello depends upon libworld.a, all for similar reasons.

Now it turns out that hello.c also includes the interface definition file, world.h:

  % emacs world.h
    [EDIT]
  % cons hello
  gcc -c hello.c -o hello.o
  gcc -o hello hello.o libworld.a

How does Cons know that hello.c includes world.h, and that hello.o must therefore be recompiled? For now, suffice it to say that when considering whether or not hello.o is up-to-date, Cons invokes a scanner for its dependency, hello.c. This scanner enumerates the files included by hello.c to come up with a list of further dependencies, beyond those made explicit by the Cons script. This process is recursive: any files included by included files will also be scanned.

Isn't this expensive? The answer is--it depends. If you do a full build of a large system, the scanning time is insignificant. If you do a rebuild of a large system, then Cons will spend a fair amount of time thinking about it before it decides that nothing has to be done (although not necessarily more time than make!). The good news is that Cons makes it very easy to intelligently subset your build, when you are working on localized changes.

Automatic global build sequencing

Because Cons does full and accurate dependency analysis, and does this globally, for the entire build, Cons is able to use this information to take full control of the sequencing of the build. This sequencing is evident in the above examples, and is equivalent to what you would expect for make, given a full set of dependencies. With Cons, this extends trivially to larger, multi-directory builds. As a result, all of the complexity involved in making sure that a build is organized correctly--including multi-pass hierarchical builds--is eliminated. We'll discuss this further in the next sections.


Building large trees--still just as simple

A hierarchy of build scripts

A larger build, in Cons, is organized by creating a hierarchy of build scripts. At the top of the tree is a script called Construct. The rest of the scripts, by convention, are each called Conscript. These scripts are connected together, very simply, by the Build, Export, and Import commands.

The Build command

The Build command takes a list of Conscript file names, and arranges for them to be included in the build. For example:

  Build qw(
        drivers/display/Conscript
        drivers/mouse/Conscript
        parser/Conscript
        utilities/Conscript
  );

This is a simple two-level hierarchy of build scripts: all the subsidiary Conscript files are mentioned in the top-level Construct file. Notice that not all directories in the tree necessarily have build scripts associated with them.

This could also be written as a multi-level script. For example, the Construct file might contain this command:

  Build qw(
        parser/Conscript
        drivers/Conscript
        utilities/Conscript
  );

and the Conscript file in the drivers directory might contain this:

  Build qw(
        display/Conscript
        mouse/Conscript
  );

Experience has shown that the former model is a little easier to understand, since the whole construction tree is laid out in front of you, at the top-level. Hybrid schemes are also possible. A separately maintained component that needs to be incorporated into a build tree, for example, might hook into the build tree in one place, but define its own construction hierarchy.

By default, Cons does not change its working directory to the directory containing a subsidiary Conscript file it is including. This behavior can be enabled for a build by specifying, in the top-level Construct file:

  Conscript_chdir 1;

When enabled, Cons will change to the subsidiary Conscript file's containing directory while reading in that file, and then change back to the top-level directory once the file has been processed.
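One benefit of enabling this behavior is that a Conscript file can then use Perl's file-globbing syntax, such as <*.c>, to pick up source files in its own directory. A hedged sketch of a hypothetical Conscript file (the environment and library names are illustrative only):

  # Hypothetical Conscript fragment; assumes Conscript_chdir 1 is in
  # effect, so the <*.c> glob is evaluated relative to this directory.
  @sources = <*.c>;
  Library $CONS 'libdrivers.a', @sources;

Without Conscript_chdir enabled, the glob would be evaluated relative to the top-level directory instead, and would likely match nothing.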

It is expected that this behavior will become the default in some future version of Cons. To prepare for this transition, builds that expect Cons to remain at the top of the build while it reads in a subsidiary Conscript file should explicitly disable this feature as follows:

  Conscript_chdir 0;

Relative, top-relative, and absolute file names

You may have noticed that the file names specified to the Build command are relative to the location of the script it is invoked from. This is generally true for other filename arguments to other commands, too, although we might as well mention here that if you begin a file name with a hash mark, ``#'', then that file is interpreted relative to the top-level directory (where the Construct file resides). And, not surprisingly, if you begin it with ``/'', then it is considered to be an absolute pathname. This is true even on systems which use a backslash rather than a forward slash to name absolute paths.
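To illustrate, here is a sketch showing all three forms in one script (the file and directory names are hypothetical):

  Program $CONS 'hello', 'hello.c';         # relative: hello.c next to this script
  Install $CONS '#export/bin', 'hello';     # top-relative: export/bin under the Construct directory
  Install $CONS '/usr/local/bin', 'hello';  # absolute pathname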

Using modules in build scripts

You may pull modules into each Conscript file using the normal Perl use or require statements:

  use English;
  require My::Module;

Each use or require only affects the one Conscript file in which it appears. To use a module in multiple Conscript files, you must put a use or require statement in each one that needs the module.

Scope of variables

The top-level Construct file and all Conscript files begin life in a common, separate Perl package. Cons controls the symbol table for the package so that the symbol table for each script starts out empty, except for the Construct file, which gets some of the command-line arguments. All of the variables that are set or used, therefore, are set by the script itself--not by some external script.

Variables can be explicitly imported by a script from its parent script. To import a variable, it must have been exported by the parent and initialized (otherwise an error will occur).

The Export command

The Export command is used as in the following example:

  $env = new cons();
  $INCLUDE = "#export/include";
  $LIB = "#export/lib";
  Export qw( env INCLUDE LIB );
  Build qw( util/Conscript );

The values of the simple variables mentioned in the Export list will be squirreled away by any subsequent Build commands. The Export command will only export Perl scalar variables, that is, variables whose name begins with $. Other variables, objects, etc. can be exported by reference--but all scripts will refer to the same object, and this object should be considered to be read-only by the subsidiary scripts and by the original exporting script. It's acceptable, however, to assign a new value to the exported scalar variable--that won't change the underlying variable referenced. This sequence, for example, is OK:

  $env = new cons();
  Export qw( env INCLUDE LIB );
  Build qw( util/Conscript );
  $env = new cons(CFLAGS => '-O');
  Build qw( other/Conscript );

It doesn't matter whether the variable is set before or after the Export command. The important thing is the value of the variable at the time the Build command is executed. This is what gets squirreled away. Any subsequent Export command, by the way, invalidates the previous one: you must mention all the variables you wish to export on each Export command.
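Because each Export replaces the previous export list, a script that wants to add a variable must re-list the earlier ones as well. A sketch (variable names follow the earlier examples):

  $env = new cons();
  Export qw( env );
  $INCLUDE = '#export/include';
  Export qw( INCLUDE );        # WRONG: env is no longer exported
  Export qw( env INCLUDE );    # Right: list every variable each time
  Build qw( util/Conscript );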

The Import command

Variables exported by the Export command can be imported into subsidiary scripts by the Import command. The subsidiary script always imports variables directly from the superior script. Consider this example:

  Import qw( env INCLUDE );

This is only legal if the parent script exported both $env and $INCLUDE. It also must have given each of these variables values. It is OK for the subsidiary script to only import a subset of the exported variables (in this example, $LIB, which was exported by the previous example, is not imported).

All the imported variables are automatically re-exported, so the sequence:

  Import qw ( env INCLUDE );
  Build qw ( beneath-me/Conscript );

will supply both $env and $INCLUDE to the subsidiary file. If only $env is to be exported, then the following will suffice:

  Import qw ( env INCLUDE );
  Export qw ( env );
  Build qw ( beneath-me/Conscript );

Needless to say, the variables may be modified locally before invoking Build on the subsidiary script.
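For example, a subsidiary script might import a shared environment and then override one variable for the scripts beneath it (the debug subdirectory here is hypothetical):

  Import qw( env INCLUDE LIB );
  $LIB = '#export/lib/debug';   # hypothetical override for scripts below
  Export qw( env INCLUDE LIB );
  Build qw( beneath-me/Conscript );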

Build script evaluation order

The only constraint on the ordering of build scripts is that superior scripts are evaluated before their inferior scripts. The top-level Construct file, for instance, is evaluated first, followed by any inferior scripts. This is all you really need to know about the evaluation order, since order is generally irrelevant. Consider the following Build command:

  Build qw(
        drivers/display/Conscript
        drivers/mouse/Conscript
        parser/Conscript
        utilities/Conscript
  );

We've chosen to put the script names in alphabetical order, simply because that's the most convenient for maintenance purposes. Changing the order will make no difference to the build.


A Model for sharing files

Some simple conventions

In any complex software system, a method for sharing build products needs to be established. We propose a simple set of conventions which are trivial to implement with Cons, but very effective.

The basic rule is to require that all build products which need to be shared between directories are shared via an intermediate directory. We have typically called this export, and, in a C environment, provided conventional sub-directories of this directory, such as include, lib, bin, etc.

These directories are defined by the top-level Construct file. A simple Construct file for a Hello, World! application, organized using multiple directories, might look like this:

  # Construct file for Hello, World!
  # Where to put all our shared products.
  $EXPORT = '#export';
  Export qw( CONS INCLUDE LIB BIN );
  # Standard directories for sharing products.
  $INCLUDE = "$EXPORT/include";
  $LIB = "$EXPORT/lib";
  $BIN = "$EXPORT/bin";
  # A standard construction environment.
  $CONS = new cons (
        CPPPATH => $INCLUDE,    # Include path for C Compilations
        LIBPATH => $LIB,        # Library path for linking programs
        LIBS => '-lworld',      # List of standard libraries
  );
  Build qw(
        hello/Conscript
        world/Conscript
  );

The world directory's Conscript file looks like this:

  # Conscript file for directory world
  Import qw( CONS INCLUDE LIB );
  # Install the products of this directory
  Install $CONS $LIB, 'libworld.a';
  Install $CONS $INCLUDE, 'world.h';
  # Internal products
  Library $CONS 'libworld.a', 'world.c';

and the hello directory's Conscript file looks like this:

  # Conscript file for directory hello
  Import qw( CONS BIN );
  # Exported products
  Install $CONS $BIN, 'hello';
  # Internal products
  Program $CONS 'hello', 'hello.c';

To construct a Hello, World! program with this directory structure, go to the top-level directory, and invoke cons with the appropriate arguments. In the following example, we tell Cons to build the directory export. To build a directory, Cons recursively builds all known products within that directory (only if they need rebuilding, of course). If any of those products depend upon other products in other directories, then those will be built, too.

  % cons export
  Install world/world.h as export/include/world.h
  cc -Iexport/include -c hello/hello.c -o hello/hello.o
  cc -Iexport/include -c world/world.c -o world/world.o
  ar r world/libworld.a world/world.o
  ar: creating world/libworld.a
  ranlib world/libworld.a
  Install world/libworld.a as export/lib/libworld.a
  cc -o hello/hello hello/hello.o -Lexport/lib -lworld
  Install hello/hello as export/bin/hello

Clean, understandable, location-independent scripts

You'll note that the two Conscript files are very clean and to-the-point. They simply specify products of the directory and how to build those products. The build instructions are minimal: they specify which construction environment to use, the name of the product, and the name of the inputs. Note also that the scripts are location-independent: if you wish to reorganize your source tree, you are free to do so: you only have to change the Construct file (in this example), to specify the new locations of the Conscript files. The use of an export tree makes this goal easy.

Note, too, how Cons takes care of little details for you. All the export directories, for example, were made automatically. And the installed files were really hard-linked into the respective export directories, to save space and time. This attention to detail saves considerable work, and makes it even easier to produce simple, maintainable scripts.


Separating source and build trees

It's often desirable to keep any derived files from the build completely separate from the source files. This makes it much easier to keep track of just what is a source file, and also makes it simpler to handle variant builds, especially if you want the variant builds to co-exist.

Separating build and source directories using the Link command

Cons provides a simple mechanism that handles all of these requirements. The Link command is invoked as in this example:

  Link 'build' => 'src';

The specified build directory is ``linked'' to the specified source directory. Let's suppose that you set up a source directory, src, with the sub-directories world and hello below it, as in the previous example. You could then substitute for the original build lines the following:

  Build qw(
        build/world/Conscript
        build/hello/Conscript
  );

Notice that you treat the Conscript file as if it existed in the build directory. Now if you type the same command as before, you will get the following results:

  % cons export
  Install build/world/world.h as export/include/world.h
  cc -Iexport/include -c build/hello/hello.c -o build/hello/hello.o
  cc -Iexport/include -c build/world/world.c -o build/world/world.o
  ar r build/world/libworld.a build/world/world.o
  ar: creating build/world/libworld.a
  ranlib build/world/libworld.a
  Install build/world/libworld.a as export/lib/libworld.a
  cc -o build/hello/hello build/hello/hello.o -Lexport/lib -lworld
  Install build/hello/hello as export/bin/hello

Again, Cons has taken care of the details for you. In particular, you will notice that all the builds are done using source files and object files from the build directory. For example, build/world/world.o is compiled from build/world/world.c, and export/include/world.h is installed from build/world/world.h. This is accomplished on most systems by the simple expedient of ``hard'' linking the required files from each source directory into the appropriate build directory.

The links are maintained correctly by Cons, no matter what you do to the source directory. If you modify a source file, your editor may do this ``in place'' or it may rename it first and create a new file. In the latter case, any hard link will be lost. Cons will detect this condition the next time the source file is needed, and will relink it appropriately.

You'll also notice, by the way, that no changes were required to the underlying Conscript files. And we can go further, as we shall see in the next section.


Variant builds

Hello, World! for baNaNa and peAcH OS's

Variant builds require just another simple extension. Let's take as an example a requirement to allow builds for both the baNaNa and peAcH operating systems. In this case, we are using a distributed file system, such as NFS, to access the particular system, and only one or the other of the systems has to be compiled for any given invocation of cons. Here's one way we could set up the Construct file for our Hello, World! application:

  # Construct file for Hello, World!
  die qq(OS must be specified) unless $OS = $ARG{OS};
  die qq(OS must be "peach" or "banana")
        if $OS ne "peach" && $OS ne "banana";
  # Where to put all our shared products.
  $EXPORT = "#export/$OS";
  Export qw( CONS INCLUDE LIB BIN );
  # Standard directories for sharing products.
  $INCLUDE = "$EXPORT/include";
  $LIB = "$EXPORT/lib";
  $BIN = "$EXPORT/bin";
  # A standard construction environment.
  $CONS = new cons (
        CPPPATH => $INCLUDE,    # Include path for C Compilations
        LIBPATH => $LIB,        # Library path for linking programs
        LIBS => '-lworld',      # List of standard libraries
  );
  # $BUILD is where we will derive everything.
  $BUILD = "#build/$OS";
  # Tell cons where the source files for $BUILD are.
  Link $BUILD => 'src';
  Build (
        "$BUILD/hello/Conscript",
        "$BUILD/world/Conscript",
  );

Now if we login to a peAcH system, we can build our Hello, World! application for that platform:

  % cons export OS=peach
  Install build/peach/world/world.h as export/peach/include/world.h
  cc -Iexport/peach/include -c build/peach/hello/hello.c -o build/peach/hello/hello.o
  cc -Iexport/peach/include -c build/peach/world/world.c -o build/peach/world/world.o
  ar r build/peach/world/libworld.a build/peach/world/world.o
  ar: creating build/peach/world/libworld.a
  ranlib build/peach/world/libworld.a
  Install build/peach/world/libworld.a as export/peach/lib/libworld.a
  cc -o build/peach/hello/hello build/peach/hello/hello.o -Lexport/peach/lib -lworld
  Install build/peach/hello/hello as export/peach/bin/hello

Variations on a theme

Other variations of this model are possible. For example, you might decide that you want to separate out your include files into platform-dependent and platform-independent files. In this case, you'd have to define an alternative to $INCLUDE for platform-dependent files. Most Conscript files, generating purely platform-independent include files, would not have to change.

You might also want to be able to compile your whole system with debugging or profiling, for example, enabled. You could do this with appropriate command line options, such as DEBUG=on. This would then be translated into the appropriate platform-specific requirements to enable debugging (this might include turning off optimization, for example). You could optionally vary the name space for these different types of systems, but, as we'll see in the next section, it's not essential to do this, since Cons is pretty smart about rebuilding things when you change options.
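As a sketch, the Construct file might translate a DEBUG=on command-line option into compiler flags like this (the exact flags are illustrative only, not a prescribed mapping):

  # Hypothetical: map DEBUG=on to platform-specific compiler flags.
  $DEBUG = $ARG{DEBUG} || 'off';
  $CFLAGS = $DEBUG eq 'on' ? '-g' : '-O';   # debugging turns off optimization
  $CONS = new cons(
        CPPPATH => $INCLUDE,
        LIBPATH => $LIB,
        CFLAGS  => $CFLAGS,
  );

As the next section explains, changing the flags changes the command lines, and hence the signatures, so Cons rebuilds exactly the affected files.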


Signatures

MD5 cryptographic signatures

Whenever Cons creates a derived file, it stores a signature for that file. The signature is stored in a separate file, one per directory. After the previous example was compiled, the .consign file in the build/peach/world directory looked like this:

  world.o:834179303 23844c0b102ecdc0b4548d1cd1cbd8c6
  libworld.a:834179304 9bf6587fa06ec49d864811a105222c00

The first number is a timestamp--on a UNIX system, this is typically the number of seconds since January 1st, 1970. The second value is an MD5 checksum. MD5 (the Message Digest Algorithm) computes a strong cryptographic signature for a given input string. The MD5 checksum stored in the .consign file is, in effect, a digest of all the dependency information for the specified file. So, for example, for the world.o file, this includes at least the world.c file, and also any header files that Cons knows about that are included, directly or indirectly, by world.c. Not only that, but the actual command line that was used to generate world.o is also fed into the computation of the signature. Similarly, libworld.a gets a signature which ``includes'' all the signatures of its constituents (and hence, transitively, the signatures of their constituents), as well as the command line that created the file.

The signature of a non-derived file is computed, by default, by taking the current modification time of the file and the file's entry name (unless there happens to be a current .consign entry for that file, in which case that signature is used).

Notice that there is no need for a derived file to depend upon any particular Construct or Conscript file--if changes to these files affect the file in question, then this will be automatically reflected in its signature, since relevant parts of the command line are included in the signature. Unrelated changes will have no effect.

When Cons considers whether to derive a particular file, then, it first computes the expected signature of the file. It then compares the file's last modification time with the time recorded in the .consign entry, if one exists. If these times match, then the signature stored in the .consign file is considered to be accurate. If the file's previous signature does not match the new, expected signature, then the file must be rederived.

Notice that a file will be rederived whenever anything about a dependent file changes. In particular, notice that any change to the modification time of a dependent (forward or backwards in time) will force recompilation of the derived file.

The use of these signatures is an extremely simple, efficient, and effective method of improving--dramatically--the reproducibility of a system.

We'll demonstrate this with a simple example:

  # Simple "Hello, World!" Construct file
  $CFLAGS = '-g' if $ARG{DEBUG} eq 'on';
  $CONS = new cons(CFLAGS => $CFLAGS);
  Program $CONS 'hello', 'hello.c';

Notice how Cons recompiles at the appropriate times:

  % cons hello
  cc -c hello.c -o hello.o
  cc -o hello hello.o
  % cons hello
  cons: "hello" is up-to-date.
  % cons DEBUG=on hello
  cc -g -c hello.c -o hello.o
  cc -o hello hello.o
  % cons DEBUG=on hello
  cons: "hello" is up-to-date.
  % cons hello
  cc -c hello.c -o hello.o
  cc -o hello hello.o


Code Repositories

Many software development organizations will have one or more central repository directory trees containing the current source code for one or more projects, as well as the derived object files, libraries, and executables. In order to reduce unnecessary recompilation, it is useful to use files from the repository to build development software--assuming, of course, that no newer dependency file exists in the local build tree.

Repository

Cons provides a mechanism to specify a list of code repositories that will be searched, in-order, for source files and derived files not found in the local build directory tree.

The following lines in a Construct file will instruct Cons to look first under the /usr/experiment/repository directory and then under the /usr/product/repository directory:

  Repository qw (
        /usr/experiment/repository
        /usr/product/repository
  );

The repository directories specified may contain source files, derived files (objects, libraries and executables), or both. If there is no local file (source or derived) under the directory in which Cons is executed, then the first copy of a same-named file found under a repository directory will be used to build any local derived files.

Cons maintains one global list of repository directories. Cons will eliminate the current directory, and any non-existent directories, from the list.

Finding the Construct file in a Repository

Cons will also search for Construct and Conscript files in the repository tree or trees. This leads to a chicken-and-egg situation, though: how do you look in a repository tree for a Construct file if the Construct file tells you where the repository is? To get around this, repositories may be specified via -R options on the command line:

  % cons -R /usr/experiment/repository -R /usr/product/repository .

Any repository directories specified in the Construct or Conscript files will be appended to the repository directories specified by command-line -R options.
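Putting the two together: with the command line above and a Construct file that also calls Repository, the -R directories are searched first, then the Construct-file directories (directory names hypothetical):

  # In the Construct file:
  Repository qw( /usr/product/repository );
  # Invoked as: cons -R /usr/experiment/repository .
  # Resulting search order:
  #   /usr/experiment/repository, then /usr/product/repository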

Repository source files

If the source code (including the Conscript file) for the library version of the Hello, World! C application is in a repository (with no derived files), Cons will use the repository source files to create the local object files and executable file:

  % cons -R /usr/src_only/repository hello
  gcc -c /usr/src_only/repository/hello.c -o hello.o
  gcc -c /usr/src_only/repository/world.c -o world.o
  ar r libworld.a world.o
  ar: creating libworld.a
  ranlib libworld.a
  gcc -o hello hello.o libworld.a

Creating a local source file will cause Cons to rebuild the appropriate derived file or files:

  % pico world.c
    [EDIT]
  % cons -R /usr/src_only/repository hello
  gcc -c world.c -o world.o
  ar r libworld.a world.o
  ar: creating libworld.a
  ranlib libworld.a
  gcc -o hello hello.o libworld.a

And removing the local source file will cause Cons to revert back to building the derived files from the repository source:

  % rm world.c
  % cons -R /usr/src_only/repository hello
  gcc -c /usr/src_only/repository/world.c -o world.o
  ar r libworld.a world.o
  ar: creating libworld.a
  ranlib libworld.a
  gcc -o hello hello.o libworld.a

Repository derived files

If a repository tree contains derived files (usually object files, libraries, or executables), Cons will perform its normal signature calculation to decide whether the repository file is up-to-date or a derived file must be built locally. This means that, in order to ensure correct signature calculation, a repository tree must also contain the .consign files that were created by Cons when generating the derived files.

This would usually be accomplished by building the software in the repository (or, alternatively, in a build directory, and then copying the result to the repository):

  % cd /usr/all/repository
  % cons hello
  gcc -c hello.c -o hello.o
  gcc -c world.c -o world.o
  ar r libworld.a world.o
  ar: creating libworld.a
  ranlib libworld.a
  gcc -o hello hello.o libworld.a

(This is safe even if the Construct file lists the /usr/all/repository directory in a Repository command because Cons will remove the current directory from the repository list.)

Now if we want to build a copy of the application with our own hello.c file, we only need to create the one necessary source file, and use the -R option to have Cons use other files from the repository:

  % mkdir $HOME/build1
  % cd $HOME/build1
  % ed hello.c
    [EDIT]
  % cons -R /usr/all/repository hello
  gcc -c hello.c -o hello.o
  gcc -o hello hello.o /usr/all/repository/libworld.a

Notice that Cons has not bothered to recreate a local libworld.a library (or recompile the world.o module), but instead uses the already-compiled version from the repository.

Because the MD5 signatures that Cons puts in the .consign file contain timestamps for the derived files, the signature timestamps must match the file timestamps for a signature to be considered valid.

Some software systems may alter the timestamps on repository files (by copying them, for example), in which case Cons will, by default, assume the repository signatures are invalid and rebuild files unnecessarily. This behavior may be altered by specifying:

  Repository_Sig_Times_OK 0;

This tells Cons to ignore timestamps when deciding whether a signature is valid. (Note that avoiding this sanity check means there must be proper control over the repository tree to ensure that the derived files cannot be modified without updating the .consign signature.)

Local copies of files

If the repository tree contains the complete results of a build, and we try to build from the repository without any files in our local tree, something moderately surprising happens:

  % mkdir $HOME/build2
  % cd $HOME/build2
  % cons -R /usr/all/repository hello
  cons: "hello" is up-to-date.

Why does Cons say that the hello program is up-to-date when there is no hello program in the local build directory? Because the repository (not the local directory) contains the up-to-date hello program, and Cons correctly determines that nothing needs to be done to rebuild this up-to-date copy of the file.

There are, however, many times in which it is appropriate to ensure that a local copy of a file always exists. A packaging or testing script, for example, may assume that certain generated files exist locally. Instead of making these subsidiary scripts aware of the repository directory, the Local command may be added to a Construct or Conscript file to specify that a certain file or files must appear in the local build directory:

  Local qw(
        hello
  );

Then, if we re-run the same command, Cons will make a local copy of the program from the repository copy (telling you that it is doing so):

  % cons -R /usr/all/repository hello
  Local copy of hello from /usr/all/repository/hello
  cons: "hello" is up-to-date.

Notice that, because the act of making the local copy is not considered a ``build'' of the hello file, Cons still reports that it is up-to-date.

Creating local copies is most useful for files that are being installed into an intermediate directory (for sharing with other directories) via the Install command. Accompanying the Install command for a file with a companion Local command is so common that Cons provides an Install_Local command as a convenient way to do both:

  Install_Local $env, '#export', 'hello';

is exactly equivalent to:

  Install $env '#export', 'hello';
  Local '#export/hello';

Both the Local and Install_Local commands update the local .consign file with the appropriate file signatures, so that future builds are performed correctly.

Repository dependency analysis

Due to its built-in scanning, Cons will search the specified repository trees for included .h files. Unless the compiler also knows about the repository trees, though, it will be unable to find .h files that only exist in a repository. If, for example, the hello.c file includes the hello.h file in its current directory:

  % cons -R /usr/all/repository hello
  gcc -c /usr/all/repository/hello.c -o hello.o
  /usr/all/repository/hello.c:1: hello.h: No such file or directory

Solving this problem forces some requirements onto the way construction environments are defined and onto the way the C #include preprocessor directive is used to include files.

In order to inform the compiler about the repository trees, Cons will add appropriate -I flags to the compilation commands. This means that the CPPPATH variable in the construction environment must explicitly specify all subdirectories which are to be searched for included files, including the current directory. Consequently, we can fix the above example by changing the environment creation in the Construct file as follows:

  $env = new cons(
        CC      => 'gcc',
        CPPPATH => '.',
        LIBS    => 'libworld.a',
  );

Due to the definition of the CPPPATH variable, this yields, when we re-execute the command:

  % cons -R /usr/all/repository hello
  gcc -c -I. -I/usr/all/repository /usr/all/repository/hello.c -o hello.o
  gcc -o hello hello.o /usr/all/repository/libworld.a

The order of the -I flags replicates, for the C preprocessor, the same repository-directory search path that Cons uses for its own dependency analysis. If there are multiple repositories and multiple CPPPATH directories, Cons will append the repository directories to the beginning of each CPPPATH directory, rapidly multiplying the number of -I flags. As an extreme example, a Construct file containing:

  Repository qw(
        /u1
        /u2
  );
  $env = new cons(
        CPPPATH => 'a:b:c',
  );

would yield a compilation command of:

  cc -Ia -I/u1/a -I/u2/a -Ib -I/u1/b -I/u2/b -Ic -I/u1/c -I/u2/c -c hello.c -o hello.o

Because Cons relies on the compiler's -I flags to communicate the order in which repository directories must be searched, Cons' handling of repository directories is fundamentally incompatible with using double-quotes on the #include directives in your C source code:

  #include "file.h"     /* DON'T USE DOUBLE-QUOTES LIKE THIS */

This is because most C preprocessors, when faced with such a directive, will always first search the directory containing the source file. This undermines the elaborate -I options that Cons constructs to make the preprocessor conform to its preferred search path.

Consequently, when using repository trees in Cons, always use angle-brackets for included files:

  #include <file.h>     /* USE ANGLE-BRACKETS INSTEAD */

Repository_List

Cons provides a Repository_List command to return a list of all repository directories in their current search order. This can be used for debugging, or to do more complex Perl stuff:

  @list = Repository_List;
  print join(' ', @list), "\n";

Repository interaction with other Cons features

Cons' handling of repository trees interacts correctly with other Cons features--which is to say, it generally does what you would expect.

Most notably, repository trees interact correctly, and rather powerfully, with the 'Link' command. A repository tree may contain one or more subdirectories for version builds established via Link to a source subdirectory. Cons will search for derived files in the appropriate build subdirectories under the repository tree.


Default targets

Until now, we've demonstrated invoking Cons with an explicit target to build:

  % cons hello

Normally, Cons does not build anything unless a target is specified, but specifying '.' (the current directory) will build everything:

  % cons                # does not build anything
  % cons .              # builds everything under the top-level directory

Adding the Default method to any Construct or Conscript file will add the specified targets to a list of default targets. Cons will build these defaults if there are no targets specified on the command line. So adding the following line to the top-level Construct file will mimic Make's typical behavior of building everything by default:

  Default '.';

The following would add the hello and goodbye commands (in the same directory as the Construct or Conscript file) to the default list:

  Default qw(
        hello
        goodbye
  );

The Default method may be used more than once to add targets to the default list.


Selective builds

Cons provides two methods for reducing the size of a given build. The first is by specifying targets on the command line, and the second is a method for pruning the build tree. We'll consider target specification first.

Selective targeting

Like make, Cons allows the specification of ``targets'' on the command line. Cons targets may be either files or directories. When a directory is specified, this is simply a short-hand notation for every derivable product--that Cons knows about--in the specified directory and below. For example:

  % cons build/hello/hello.o

means build hello.o and everything that hello.o might need. This example refers to a previous version of the Hello, World! program, in which hello.o depended upon export/include/world.h. If that file is not up-to-date (because someone modified src/world/world.h), then it will be rebuilt, even though it is in a directory remote from build/hello.

In this example:

  % cons build

Everything in the build directory is built, if necessary. Again, this may cause more files to be built. In particular, both export/include/world.h and export/lib/libworld.a are required by the build/hello directory, and so they will be built if they are out-of-date.

If we do, instead:

  % cons export

then only the files that should be installed in the export directory will be rebuilt, if necessary, and then installed there. Note that cons build might build files that cons export doesn't build, and vice-versa.

No ``special'' targets

With Cons, make-style ``special'' targets are not required. The simplest analog with Cons is to use special export directories, instead. Let's suppose, for example, that you have a whole series of unit tests that are associated with your code. The tests live in the source directory near the code. Normally, however, you don't want to build these tests. One solution is to provide all the build instructions for creating the tests, and then to install the tests into a separate part of the tree. If we install the tests in a top-level directory called tests, then:

  % cons tests

will build all the tests.

  % cons export

will build the production version of the system (but not the tests), and:

  % cons build

should probably be avoided (since it will compile the tests unnecessarily).

If you want to build just a single test, then you could explicitly name the test (in either the tests directory or the build directory). You could also aggregate the tests into a convenient hierarchy within the tests directory. This hierarchy need not necessarily match the source hierarchy, in much the same manner that the include hierarchy probably doesn't match the source hierarchy (the include hierarchy is unlikely to be more than two levels deep, for C programs).

If you want to build absolutely everything in the tree (subject to whatever options you select), you can use:

  % cons .

This is not particularly efficient, since it will redundantly walk all the trees, including the source tree. The source tree, of course, may have buildable objects in it--nothing stops you from doing this, even if you normally build in a separate build tree.


Build Pruning

In conjunction with target selection, build pruning can be used to reduce the scope of the build. In the previous peAcH and baNaNa example, we have already seen how script-driven build pruning can be used to make only half of the potential build available for any given invocation of cons. Cons also provides, as a convenience, a command line convention that allows you to specify which Conscript files actually get ``built''--that is, incorporated into the build tree. For example:

  % cons build +world

The + argument introduces a Perl regular expression. This must, of course, be quoted at the shell level if there are any shell meta-characters within the expression. The expression is matched against each Conscript file which has been mentioned in a Build statement, and only those scripts with matching names are actually incorporated into the build tree. Multiple such arguments are allowed, in which case a match against any of them is sufficient to cause a script to be included.

In the example above, the hello program will not be built, since Cons will have no knowledge of the script hello/Conscript. The libworld.a archive will be built, however, if need be.

There are a couple of uses for build pruning via the command line. Perhaps the most useful is the ability to make local changes, and then, with sufficient knowledge of the consequences of those changes, restrict the size of the build tree in order to speed up the rebuild time. A second use for build pruning is to actively prevent the recompilation of certain files that you know will recompile due to, for example, a modified header file. You may know that either the changes to the header file are immaterial, or that the changes may be safely ignored for most of the tree, for testing purposes. With Cons, the view is that it is pragmatic to admit this type of behavior, with the understanding that on the next full build everything that needs to be rebuilt will be. There is no equivalent of a ``make touch'' command to mark files as permanently up-to-date, so any risk incurred by build pruning is mitigated. For release-quality work, obviously, we recommend that you do not use build pruning (it's perfectly OK to use during integration, however, for checking compilation, etc.; just be sure to do an unconstrained build before committing the integration).


Temporary overrides

Cons provides a very simple mechanism for overriding aspects of a build. The essence is that you write an override file containing one or more Override commands, and you specify this on the command line, when you run cons:

  % cons -o over export

will build the export directory, with all derived files subject to the overrides present in the over file. If you leave out the -o option, then everything necessary to remove all overrides will be rebuilt.

Overriding environment variables

The override file can contain two types of overrides. The first is incoming environment variables. These are normally accessible by the Construct file from the %ENV hash variable. These can trivially be overridden in the override file by setting the appropriate elements of %ENV (these could also be overridden in the user's environment, of course).
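
For example, an override file might pin down the execution environment seen by the Construct file by setting an element of %ENV directly (the path value here is only illustrative):

  # Contents of the override file "over":
  $ENV{PATH} = '/usr/local/bin:/bin:/usr/bin';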

The Override command

The second type of override is accomplished with the Override command, which looks like this:

  Override <regexp>, <var1> => <value1>, <var2> => <value2>, ...;

The regular expression regexp is matched against every derived file that is a candidate for the build. If the derived file matches, then the variable/value pairs are used to override the values in the construction environment associated with the derived file.

Let's suppose that we have a construction environment like this:

  $CONS = new cons(
        COPT => '',
        CDBG => '-g',
        CFLAGS => '%COPT %CDBG',
  );

Then if we have an override file over containing this command:

  Override '\.o$', COPT => '-O', CDBG => '';

then any cons invocation with -o over that creates .o files via this environment will cause them to be compiled with -O and no -g. The override could, of course, be restricted to a single directory by the appropriate selection of a regular expression.

Here's the original version of the Hello, World! program, built with this environment. Note that Cons rebuilds the appropriate pieces when the override is applied or removed:

  % cons hello
  cc -g -c hello.c -o hello.o
  cc -o hello hello.o
  % cons -o over hello
  cc -O -c hello.c -o hello.o
  cc -o hello hello.o
  % cons -o over hello
  cons: "hello" is up-to-date.
  % cons hello
  cc -g -c hello.c -o hello.o
  cc -o hello hello.o

It's important that the Override command be used only for temporary, on-the-fly overrides necessary for development, because the overrides are not platform-independent and because they rely too much on intimate knowledge of the workings of the scripts. For temporary use, however, they are exactly what you want.

Note that it is still useful to provide, say, the ability to create a fully optimized version of a system for production use--from the Construct and Conscript files. This way you can tailor the optimized system to the platform. Where optimizer trade-offs need to be made (particular files may not be compiled with full optimization, for example), then these can be recorded for posterity (and reproducibility) directly in the scripts.


More on construction environments

Default construction variables

We have mentioned, and used, the concept of a construction environment, many times in the preceding pages. Now it's time to make this a little more concrete. With the following statement:

  $env = new cons();

a reference to a new, default construction environment is created. This contains a number of construction variables and some methods. At the present writing, the default list of construction variables is defined as follows:

  CC            => 'cc',
  CFLAGS        => '',
  CCCOM         => '%CC %CFLAGS %_IFLAGS -c %< -o %>',
  INCDIRPREFIX  => '-I',
  CXX           => '%CC',
  CXXFLAGS      => '%CFLAGS',
  CXXCOM        => '%CXX %CXXFLAGS %_IFLAGS -c %< -o %>',
  LINK          => '%CXX',
  LINKCOM       => '%LINK %LDFLAGS -o %> %< %_LDIRS %LIBS',
  LINKMODULECOM => '%LD -r -o %> %<',
  LIBDIRPREFIX  => '-L',
  AR            => 'ar',
  ARFLAGS       => 'r',
  ARCOM         => "%AR %ARFLAGS %> %<\n%RANLIB %>",
  RANLIB        => 'ranlib',
  AS            => 'as',
  ASFLAGS       => '',
  ASCOM         => '%AS %ASFLAGS %< -o %>',
  LD            => 'ld',
  LDFLAGS       => '',
  PREFLIB       => 'lib',
  SUFLIB        => '.a',
  SUFLIBS       => '.so:.a',
  SUFOBJ        => '.o',
  ENV           => { 'PATH' => '/bin:/usr/bin' },

On Win32 systems (Windows NT), the following construction variables are overridden in the default:

  CC            => 'cl',
  CFLAGS        => '/nologo',
  CCCOM         => '%CC %CFLAGS %_IFLAGS /c %< /Fo%>',
  CXXCOM        => '%CXX %CXXFLAGS %_IFLAGS /c %< /Fo%>',
  INCDIRPREFIX  => '/I',
  LINK          => 'link',
  LINKCOM       => '%LINK %LDFLAGS /out:%> %< %_LDIRS %LIBS',
  LINKMODULECOM => '%LD /r /o %> %<',
  LIBDIRPREFIX  => '/LIBPATH:',
  AR            => 'lib',
  ARFLAGS       => '/nologo ',
  ARCOM         => "%AR %ARFLAGS /out:%> %<",
  RANLIB        => '',
  LD            => 'link',
  LDFLAGS       => '/nologo ',
  PREFLIB       => '',
  SUFEXE        => '.exe',
  SUFLIB        => '.lib',
  SUFLIBS       => '.dll:.lib',
  SUFOBJ        => '.obj',

These variables are used by the various methods associated with the environment; in particular, any method that ultimately invokes an external command will substitute these variables into the final command, as appropriate. For example, the Objects method takes a number of source files and arranges to derive, if necessary, the corresponding object files:

  Objects $env 'foo.c', 'bar.c';

This will arrange to produce, if necessary, foo.o and bar.o. The command invoked is simply %CCCOM, which expands through substitution, to the appropriate external command required to build each object. We will explore the substitution rules further under the Command method, below.

The construction variables are also used for other purposes. For example, CPPPATH is used to specify a colon-separated path of include directories. These are intended to be passed to the C preprocessor and are also used by the C-file scanning machinery to determine the dependencies involved in a C compilation. Variables beginning with an underscore are created by various methods, and should normally be considered ``internal'' variables. For example, when a method is called which calls for the creation of an object from a C source, the variable _IFLAGS is created: this corresponds to the -I switches required by the C compiler to represent the directories specified by CPPPATH.
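
As a sketch (the directory names here are hypothetical), setting CPPPATH causes the corresponding -I options to appear in compilations via the internal _IFLAGS variable:

  $env = new cons(
        CPPPATH => 'include:/usr/project/include',
  );
  Objects $env 'hello.c';
  # _IFLAGS becomes roughly: -Iinclude -I/usr/project/include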

Note that, for any particular environment, the value of a variable is set once, and then never reset (to change a variable, you must create a new environment; methods are provided for copying existing environments for this purpose). Some internal variables, such as _IFLAGS, are created on demand, but once set, they remain fixed for the life of the environment.

The CFLAGS, LDFLAGS, and ARFLAGS variables all supply a place for passing options to the compiler, loader, and archiver, respectively. Less obviously, the INCDIRPREFIX variable specifies the option string to be prepended to each include directory so that the compiler knows where to find .h files. Similarly, the LIBDIRPREFIX variable specifies the option string to be prepended to each directory that the linker should search for libraries.

Another variable, ENV, is used to determine the system environment during the execution of an external command. By default, the only environment variable that is set is PATH, which is the execution path for a UNIX command. For the utmost reproducibility, you should really arrange to set your own execution path, in your top-level Construct file (or perhaps by importing an appropriate construction package with the Perl use command). The default variables are intended to get you off the ground.
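
For instance, a top-level Construct file might pin the execution path explicitly (the path shown is only an example):

  $env = new cons(
        ENV => { PATH => '/bin:/usr/bin:/usr/local/bin' },
  );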

Interpolating construction variables

Construction environment variables may be interpolated in the source and target file names by prefixing the construction variable name with %.

  $env = new cons(
        DESTDIR =>      'programs',
        SRCDIR  =>      'src',
  );
  Program $env '%DESTDIR/hello', '%SRCDIR/hello.c';

Expansion of construction variables is recursive--that is, the file name(s) will be re-expanded until no more substitutions can be made. If a construction variable is not defined in the environment, then the null string will be substituted.
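
Because expansion is recursive, one construction variable may refer to another (the names here are hypothetical):

  $env = new cons(
        PREFIX  => '/usr/project',
        DESTDIR => '%PREFIX/programs',
  );
  Program $env '%DESTDIR/hello', 'hello.c';
  # %DESTDIR/hello expands to /usr/project/programs/hello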


Default construction methods

The list of default construction methods includes the following:

The new constructor

The new method is a Perl object constructor. That is, it is not invoked via a reference to an existing construction environment, but rather statically, using the name of the Perl package in which the constructor is defined. The method is invoked like this:

  $env = new cons(<overrides>);

The environment you get back is blessed into the package cons, which means that it will have associated with it the default methods described below. Individual construction variables can be overridden by providing name/value pairs in an override list. Note that to override any command environment variable (i.e. anything under ENV), you will have to override all of them. You can get around this difficulty by using the copy method on an existing construction environment.
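
For example, to select a different compiler and optimization flags in a new environment (the values here are illustrative):

  $env = new cons(
        CC     => 'gcc',
        CFLAGS => '-O2',
  );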

The clone method

The clone method creates a clone of an existing construction environment, and can be called as in the following example:

  $env2 = $env1->clone(<overrides>);

You can provide overrides in the usual manner to create a different environment from the original. If you just want a new name for the same environment (which may be helpful when exporting environments to existing components), you can just use simple assignment.

The copy method

The copy method extracts the externally defined construction variables from an environment and returns them as a list of name/value pairs. Overrides can also be provided, in which case, the overridden values will be returned, as appropriate. The returned list can be assigned to a hash, as shown in the prototype, below, but it can also be manipulated in other ways:

  %env = $env1->copy(<overrides>);

The value of ENV, which is itself a hash, is also copied to a new hash, so this may be changed without fear of affecting the original environment. So, for example, if you really want to override just the PATH variable in the default environment, you could do the following:

  %cons = new cons()->copy();
  $cons{ENV}{PATH} = "<your path here>";
  $cons = new cons(%cons);

This will leave anything else that might be in the default execution environment undisturbed.

The Install method

The Install method arranges for the specified files to be installed in the specified directory. The installation is optimized: the file is not copied if it can be linked. If this is not the desired behavior, you will need to use a different method to install the file. It is called as follows:

  Install $env <directory>, <names>;

Note that, while the files to be installed may be arbitrarily named, only the last component of each name is used for the installed target name. So, for example, if you arrange to install foo/bar in baz, this will create a bar file in the baz directory (not foo/bar).

The InstallAs method

The InstallAs method arranges for the specified source file(s) to be installed as the specified target file(s). Multiple files should be specified as a file list. The installation is optimized: the file is not copied if it can be linked. If this is not the desired behavior, you will need to use a different method to install the file.

InstallAs works in two ways:

Single file install:

  InstallAs $env TgtFile, SrcFile;

Multiple file install:

  InstallAs $env ['tgt1', 'tgt2'], ['src1', 'src2'];

Or, even as:

  @srcs = qw(src1 src2 src3);
  @tgts = qw(tgt1 tgt2 tgt3);
  InstallAs $env [@tgts], [@srcs];

The target and source lists must be of the same length.

The Precious method

The Precious method asks cons not to delete the specified file or list of files before building them again. It is invoked as:

  Precious <files>;

This is especially useful for allowing incremental updates to libraries or debug information files which are updated rather than rebuilt anew each time. Cons will still delete the files when the -r flag is specified.

The Command method

The Command method is a catchall method which can be used to arrange for any external command to be called to update the target. For this command, a target file and a list of inputs are provided. In addition, a construction command line (or lines) is provided as a string (this string may have multiple commands embedded within it, separated by newlines). Command is called as follows:

  Command $env <target>, <inputs>, <construction command>;

The target is made dependent upon the list of input files specified, and the inputs must be built successfully or Cons will not attempt to build the target.

Within the construction command, any variable from the construction environment may be introduced by prefixing the name of the construction variable with %. This is recursive: the command is expanded until no more substitutions can be made. If a construction variable is not defined in the environment, then the null string will be substituted. A doubled %% will be replaced by a single % in the construction command.

There are several pseudo variables which will also be expanded:

%>
The target file name (in a multi-target command, this is always the first target mentioned).

%0
Same as %>.

%1, %2, ..., %9
These refer to the first through ninth input file, respectively.

%<
The full set of inputs. If any of these have been used anywhere else in the current command line (via %1, %2, etc.), then those will be deleted from the list provided by %<. Consider the following command found in a Conscript file in the test directory:
  Command $env 'tgt', qw(foo bar baz), qq(
        echo %< -i %1 > %>
        echo %< -i %2 >> %>
        echo %< -i %3 >> %>
  );

If tgt needed to be updated, then this would result in the execution of the following commands, assuming that no remapping has been established for the test directory:

  echo test/bar test/baz -i test/foo > test/tgt
  echo test/foo test/baz -i test/bar >> test/tgt
  echo test/foo test/bar -i test/baz >> test/tgt

Any of the above pseudo variables may be followed immediately by one of the following suffixes to select a portion of the expanded path name:

  :a    the absolute path to the file name
  :b    the directory plus the file name stripped of any suffix
  :d    the directory
  :f    the file name
  :s    the file name suffix
  :F    the file name stripped of any suffix

Continuing with the above example, %<:f would expand to foo bar baz, and %<:d would expand to test.

It is possible to programmatically rewrite part of the command by enclosing part of it between %[ and %]. This will call the construction variable named as the first word enclosed in the brackets as a Perl code reference; the results of this call will be used to replace the contents of the brackets in the command line. For example, given an existing input file named tgt.in:

  @keywords = qw(foo bar baz);
  $env = new cons(X_COMMA => sub { join(",", @_) });
  Command $env 'tgt', 'tgt.in', qq(
        echo '# Keywords: %[X_COMMA @keywords %]' > %>
        cat %< >> %>
  );

This will execute:

  echo '# Keywords: foo,bar,baz' > tgt
  cat tgt.in >> tgt

After substitution occurs, strings of white space are converted into single blanks, and leading and trailing white space is eliminated. It is therefore not possible to introduce variable length white space in strings passed into a command, without resorting to some sort of shell quoting.

If a multi-line command string is provided, the commands are executed sequentially. If any of the commands fails, then none of the rest are executed, and the target is not marked as updated, i.e. a new signature is not stored for the target.

Normally, if all the commands succeed, and return a zero status (or whatever platform-specific indication of success is required), then a new signature is stored for the target. If a command erroneously reports success even after a failure, then Cons will assume that the target file created by that command is accurate and up-to-date.

The first word of each command string, after expansion, is assumed to be an executable command looked up on the PATH environment variable (which is, in turn, specified by the ENV construction variable). If this command is found on the path, then the target will depend upon it: the command will therefore be automatically built, as necessary. It's possible to pass multi-part commands, separated by semi-colons, to some shells. Only the first command word will be depended upon, however, so if you write your command strings this way, you must either explicitly set up a dependency (with the Depends method), or be sure that the command you are using is a system command which is expected to be available. If it isn't available, you will, of course, get an error.
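
As a sketch, if a later word in a semi-colon-separated command string names a derivable program (the names here are hypothetical), the dependency can be recorded explicitly:

  Command $env 'out', 'in', 'prep %< ; gen %>';
  # Only "prep" is depended upon automatically;
  # record the dependency on "gen" by hand:
  Depends $env 'out', 'gen';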

If any command (even one within a multi-line command) begins with [perl], the remainder of that command line will be evaluated by the running Perl instead of being forked by the shell. If an error occurs in parsing the Perl or if the Perl expression returns 0 or undef, the command will be considered to have failed. For example, here is a simple command which creates a file foo directly from Perl:

  $env = new cons();
  Command $env 'foo',
    qq([perl] open(FOO,'>foo');print FOO "hi\\n"; close(FOO); 1);

Note that when the command is executed, you are in the same package as when the Construct or Conscript file was read, so you can call Perl functions you've defined in the same Construct or Conscript file in which the Command appears:

  $env = new cons();
  sub create_file {
        my $file = shift;
        open(FILE, ">$file");
        print FILE "hi\n";
        close(FILE);
        return 1;
  }
  Command $env 'foo', "[perl] &create_file('%>')";

The Perl string will be used to generate the signature for the derived file, so if you change the string, the file will be rebuilt. The contents of any subroutines you call, however, are not part of the signature, so if you modify a called subroutine such as create_file above, the target will not be rebuilt. Caveat user.

Cons normally prints a command before executing it. This behavior is suppressed if the first character of the command is @. Note that you may need to separate the @ from the command name or escape it to prevent @cmd from looking like an array to Perl quote operators that perform interpolation:

  # The first command line is incorrect,
  # because "@cp" looks like an array
  # to the Perl qq// function.
  # Use the second form instead.
  Command $env 'foo', 'foo.in', qq(
        @cp %< tempfile
        @ cp tempfile %>
  );

If there are shell meta characters anywhere in the expanded command line, such as <, >, quotes, or semi-colon, then the command will actually be executed by invoking a shell. This means that a command such as:

  cd foo

alone will typically fail, since there is no command cd on the path. But the command string:

  cd %<:d; tar cf %>:f %<:f

when expanded will still contain the shell meta character semi-colon, and a shell will be invoked to interpret the command. Since cd is interpreted by this sub-shell, the command will execute as expected.

To specify a command with multiple targets, you can specify a reference to a list of targets. In Perl, a list reference can be created by enclosing a list in square brackets. Hence the following command:

  Command $env ['foo.h', 'foo.c'], 'foo.template', q(
        gen %1
  );

could be used in a case where the command gen creates two files, both foo.h and foo.c.

The Objects method

The Objects method arranges to create the object files that correspond to the specified source files. It is invoked as shown below:

  @files = Objects $env <source or object files>;

Under Unix, source files ending in .s and .c are currently supported, and will be compiled into object files of the same name ending in .o. By default, all files are created by invoking the external command which results from expanding the CCCOM construction variable, with %< and %> set to the source and object files, respectively (see the Command method for expansion details). The variable CPPPATH is also used when scanning source files for dependencies. This is a colon-separated list of pathnames, and is also used to create the construction variable _IFLAGS, which will contain the appropriate list of -I options for the compilation. Any relative pathnames in CPPPATH are interpreted relative to the directory in which the associated construction environment was created (absolute and top-relative names may also be used). This variable is used by CCCOM. The behavior of this command can be modified by changing any of the variables which are interpolated into CCCOM, such as CC, CFLAGS, and, indirectly, CPPPATH. It's also possible to replace the value of CCCOM itself. As a convenience, this method returns the list of object filenames.

The Program method

The Program method arranges to link the specified program with the specified object files. It is invoked in the following manner:

  Program $env <program name>, <source or object files>;

The program name will have the value of the SUFEXE construction variable appended (by default, .exe on Win32 systems, nothing on Unix systems) if the suffix is not already present.

Source files may be specified in place of object files--the Objects method will be invoked to arrange the conversion of all the files into object files, and hence all the observations about the Objects method, above, apply to this method also.

The actual linking of the program will be handled by an external command which results from expanding the LINKCOM construction variable, with %< set to the object files to be linked (in the order presented), and %> set to the target (see the Command method for expansion details). The user may set additional variables in the construction environment, including LINK, to define which program to use for linking, LIBPATH, a colon-separated list of library search paths, for use with library specifications of the form -llib, and LIBS, specifying the list of libraries to link against (in either -llib form or just as pathnames). Relative pathnames in both LIBPATH and LIBS are interpreted relative to the directory in which the associated construction environment is created (absolute and top-relative names may also be used). Cons automatically sets up dependencies on any libraries mentioned in LIBS: those libraries will be built before the program is linked.

The Library method

The Library method arranges to create the specified library from the specified object files. It is invoked as follows:

  Library $env <library name>, <source or object files>;

The library name will have the value of the SUFLIB construction variable appended (by default, .lib on Win32 systems, .a on Unix systems) if the suffix is not already present.

Source files may be specified in place of object files--the Objects method will be invoked to arrange the conversion of all the files into object files, and hence all the observations about the Objects method, above, apply to this method also.

The actual creation of the library will be handled by an external command which results from expanding the ARCOM construction variable, with %< set to the library members (in the order presented), and %> to the library to be created (see the Command method for expansion details). The user may set variables in the construction environment which will affect the operation of the command. These include AR, the archive program to use, ARFLAGS, which can be used to modify the flags given to the program specified by AR, and RANLIB, the name of an archive index generation program, if needed (if the particular need does not require the latter functionality, then ARCOM must be redefined to not reference RANLIB).

The Library method allows the same library to be specified in multiple method invocations. All of the contributing objects from all the invocations (which may be from different directories) are combined and generated by a single archive command. Note, however, that if you prune a build so that only part of a library is specified, then only that part of the library will be generated (the rest will disappear!).
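
For example, two Conscript files in different directories might each contribute members to a single library (the file and directory names here are hypothetical):

  # In src/alpha/Conscript:
  Library $env 'common', 'a.c', 'b.c';
  # In src/beta/Conscript:
  Library $env '../alpha/common', 'c.c';
  # All three objects are archived by a single command.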

The Module method

The Module method is a combination of the Program and Command methods. Rather than generating an executable program directly, this command allows you to specify your own command to actually generate a module. The method is invoked as follows:

  Module $env <module name>, <source or object files>, <construction command>;

This command is useful in instances where you wish to create, for example, dynamically loaded modules, or statically linked code libraries.

The Depends method

The Depends method allows you to specify additional dependencies for a target. It is invoked as follows:

  Depends $env <target>, <dependencies>;

This may occasionally be useful, especially in cases where no scanner exists (or can be written) for particular types of files. Normally, dependencies are calculated automatically from a combination of the explicit dependencies set up by method invocations and the implicit dependencies found by scanning source files.

A set of identical dependencies for multiple targets may be specified using a reference to a list of targets. In Perl, a list reference can be created by enclosing a list in square brackets. Hence the following command:

  Depends $env ['foo', 'bar'], 'input_file_1', 'input_file_2';

specifies that both the foo and bar files depend on the listed input files.

The Ignore method

The Ignore method allows you to explicitly ignore dependencies that Cons infers on its own. It is invoked as follows:

  Ignore <patterns>;

This can be used to avoid recompilations due to changes in system header files or utilities that are known to not affect the generated targets.

If, for example, a program is built in an NFS-mounted directory on multiple systems that have different copies of stdio.h, the differences will affect the signatures of all derived targets built from source files that #include <stdio.h>. This will cause all those targets to be rebuilt when changing systems. If this is not desirable behavior, then the following line will remove the dependencies on the stdio.h file:

  Ignore '^/usr/include/stdio\.h$';

Note that the arguments to the Ignore method are regular expressions, so special characters must be escaped and you may wish to anchor the beginning or end of the expression with ^ or $ characters.

The Salt method

The Salt method adds a constant value to the signature calculation for every derived file. It is invoked as follows:

  Salt $string;

Changing the Salt value will force a complete rebuild of every derived file. This can be used to force rebuilds in certain desired circumstances. For example,

  Salt `uname -s`;

would force a complete rebuild of every derived file whenever the operating system on which the build is performed (as reported by uname -s) changes.

The UseCache method

The UseCache method instructs Cons to maintain a cache of derived files, to be shared among separate build trees of the same project.

  UseCache("cache/<buildname>") || warn("cache directory not found");

The SourcePath method

The SourcePath method returns the real source path name of a file, as opposed to the path name within a build directory. It is invoked as follows:

  $path = SourcePath <buildpath>;

The ConsPath method

The ConsPath method returns true if the supplied path is a derivable file, and returns undef (false) otherwise. It is invoked as follows:

  $result = ConsPath <path>;
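This can be used, for example, to guard references to files that may or may not be produced by the build (the path shown is illustrative):

  if (ConsPath 'export/include/foo.h') {
      # foo.h is a derivable file within this build tree
  }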

The SplitPath method

The SplitPath method looks up multiple path names in a string separated by the default path separator for the operating system (':' on UNIX systems, ';' on Windows NT), and returns the fully-qualified names. It is invoked as follows:

  @paths = SplitPath <pathlist>;

The SplitPath method will convert names prefixed '#' to the appropriate top-level build name (without the '#') and will convert relative names to top-level names.
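For example, on a UNIX system (with illustrative directory names):

  @paths = SplitPath '#export/include:/usr/include';

@paths will then contain the fully-qualified names of the two directories, with the '#'-prefixed name resolved relative to the top-level build directory.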

The DirPath method

The DirPath method returns the build path name(s) of a directory or list of directories. It is invoked as follows:

  $cwd = DirPath <paths>;

The most common use for the DirPath method is:

  $cwd = DirPath '.';

to fetch the path to the current directory of a subsidiary Conscript file.

The FilePath method

The FilePath method returns the build path name(s) of a file or list of files. It is invoked as follows:

  $file = FilePath <path>;

The Help method

The Help method specifies help text that will be displayed when the user invokes cons -h. This can be used to provide documentation of specific targets, values, build options, etc. for the build tree. It is invoked as follows:

  Help <helptext>;

The Help method may only be called once, and should typically be specified in the top-level Construct file.
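A typical invocation passes a single multi-line string (the targets described here are illustrative):

  Help "
  Useful targets in this tree:
    export    build and install the libraries and headers
    test      build and run the regression tests
  ";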


Extending Cons

Overriding construction variables

There are several ways of extending Cons, which vary in degree of difficulty. The simplest method is to define your own construction environment, based on the default environment, but modified to reflect your particular needs. This will often suffice for C-based applications. You can use the new constructor, and the clone and copy methods to create hybrid environments. These changes can be entirely transparent to the underlying Conscript files.
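For example, a default environment might be defined once and then cloned with overrides for a debugging variant (the compiler and flag values are illustrative):

  $env = new cons(
        CC     => 'gcc',
        CFLAGS => '-O2',
  );
  $debug = $env->clone(CFLAGS => '-g');

Conscript files that accept the environment as a parameter need not know which variant they are building.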

Adding new methods

For slightly more demanding changes, you may wish to add new methods to the cons package. Here's an example of a very simple extension, InstallScript, which installs a tcl script in a requested location, but edits the script first to reflect a platform-dependent path that needs to be installed in the script:

  # cons::InstallScript - Create a platform dependent version of a shell
  # script by replacing string ``#!your-path-here'' with platform specific
  # path $BIN_DIR.
  sub cons::InstallScript {
        my ($env, $dst, $src) = @_;
        Command $env $dst, $src, qq(
                sed s+your-path-here+$BIN_DIR+ %< > %>
                chmod oug+x %>
        );
  }

Notice that this method is defined directly in the cons package (by prefixing the name with cons::). A change made in this manner will be globally visible to all environments, and could be called as in the following example:

  InstallScript $env "$BIN/foo", "foo.tcl";

For a small improvement in generality, the $BIN_DIR value could be passed in as an argument or taken from the construction environment, as %BINDIR.
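A sketch of that more general version, taking the path from the construction environment, might look like this (assuming each calling environment defines a BINDIR variable):

  sub cons::InstallScript {
        my ($env, $dst, $src) = @_;
        Command $env $dst, $src, qq(
                sed s+your-path-here+%BINDIR+ %< > %>
                chmod oug+x %>
        );
  }

The %BINDIR reference is then expanded by the Command method from whichever environment is passed in.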

Overriding methods

Instead of adding the method to the cons name space, you could define a new package which inherits existing methods from the cons package and overrides or adds others. This can be done using Perl's inheritance mechanisms.

The following example defines a new package cons::switch which overrides the standard Library method. The overridden method builds linked library modules, rather than library archives. A new constructor is provided. Environments created with this constructor will have the new library method; others won't.

  package cons::switch;
  BEGIN {@ISA = 'cons'}
  sub new {
        shift;
        bless new cons(@_);
  }
  sub Library {
        my($env) = shift;
        my($lib) = shift;
        my(@objs) = Objects $env @_;
        Command $env $lib, @objs, q(
                %LD -r %LDFLAGS %< -o %>
        );
  }

This functionality could be invoked as in the following example:

  $env = new cons::switch(@overrides);
  ...
  Library $env 'lib.o', 'foo.c', 'bar.c';


Invoking Cons

The cons command is usually invoked from the root of the build tree. A Construct file must exist in that directory. If the -f argument is used, then an alternate Construct file may be used (and, possibly, an alternate root, since cons will cd to the Construct file's containing directory).

If cons is invoked from a child of the root of the build tree with the -t argument, it will walk up the directory hierarchy looking for a Construct file. (An alternate name may still be specified with -f.) The targets supplied on the command line will be modified to be relative to the discovered Construct file. For example, from a directory containing a top-level Construct file, the following invocation:

  % cd libfoo/subdir
  % cons -t target

is exactly equivalent to:

  % cons libfoo/subdir/target

If there are any Default targets specified in the directory hierarchy's Construct or Conscript files, only the default targets at or below the directory from which cons -t was invoked will be built.

The command is invoked as follows:

  cons <arguments> -- <construct-args>

where arguments can be any of the following, in any order:

target
Build the specified target. If target is a directory, then recursively build everything within that directory.

+pattern
Limit the Conscript files considered to just those that match pattern, which is a Perl regular expression. Multiple + arguments are accepted.

name=<val>
Sets name to value val in the ARG hash passed to the top-level Construct file.

-cc
Show the command that would have been executed when retrieving from cache. No indication that the file has been retrieved is given; this is useful for generating build logs that can be compared with real build logs.

-cd
Disable all caching. Do not retrieve from cache nor flush to cache.

-cr
Build dependencies in random order. This is useful when building multiple similar trees with caching enabled.

-cs
Synchronize existing build targets that are found to be up-to-date with cache. This is useful if caching has been disabled with -cd or just recently enabled with UseCache.

-d
Enable dependency debugging.

-f <file>
Use the specified file instead of Construct (but first change to containing directory of file).

-h
Show a help message local to the current build, if one is defined, and exit.

-k
Keep going as far as possible after errors.

-o <file>
Read override file file.

-p
Show construction products in specified trees. No build is attempted.

-pa
Show construction products and associated actions. No build is attempted.

-pw
Show products and where they are defined. No build is attempted.

-q
Don't be verbose about Installing and Removing targets.

-r
Remove construction products associated with <targets>. No build is attempted.

-R <repos>
Search for files in repos. Multiple -R repos directories are searched in the order specified.

-t
Traverse up the directory hierarchy looking for a Construct file, if none exists in the current directory. Targets will be modified to be relative to the Construct file.

-v
Show cons version and continue processing.

-V
Show cons version and exit.

-wf <file>
Write all filenames considered into file.

-x
Show a help message similar to this one, and exit.

And construct-args can be any arguments that you wish to process in the Construct file. Note that there should be a -- separating the arguments to cons and the arguments that you wish to process in the Construct file.

Processing of construct-args can be done by any standard package like Getopt or its variants, or any user defined package. cons will pass in the construct-args as @ARGV and will not attempt to interpret anything after the --.
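For example, the top-level Construct file could process the construct-args with the standard Getopt::Long module (the option names here are illustrative):

  use Getopt::Long;
  my %opt;
  GetOptions(\%opt, 'c=s', 'f=s');   # parses @ARGV, filling in %opt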

  % cons -R /usr/local/repository -d os=solaris +driver -- -c test -f DEBUG

would pass the following to cons:

  -R /usr/local/repository -d os=solaris +driver

and the following, as @ARGV, to the top-level Construct file:

  -c test -f DEBUG

Note that cons -r . is equivalent to a full recursive make clean, but requires no support in the Construct file or any Conscript files. This is most useful if you are compiling files into source directories (if you separate the build and export directories, then you can just remove the directories).

The options -p, -pa, and -pw are extremely useful as an aid in reading scripts or debugging them. If you want to know what script installs export/include/foo.h, for example, just type:

  % cons -pw export/include/foo.h


Using and writing dependency scanners

QuickScan allows simple target-independent scanners to be set up for source files. Only one QuickScan scanner may be associated with any given source file and environment.

QuickScan is invoked as follows:

  QuickScan CONSENV CODEREF, FILENAME [, PATH]

The subroutine referenced by CODEREF is expected to return a list of filenames included directly by FILENAME. These filenames will, in turn, be scanned. The optional PATH argument supplies a lookup path for finding FILENAME and/or files returned by the user-supplied subroutine. The PATH may be a reference to an array of lookup-directory names, or a string of names separated by the system's separator character (':' on UNIX systems, ';' on Windows NT).

The subroutine is called once for each line in the file, with $_ set to the current line. If the subroutine needs to look at additional lines, or, for that matter, the entire file, then it may read them itself, from the filehandle SCAN. It may also terminate the loop, if it knows that no further include information is available, by closing the filehandle.

Whether or not a lookup path is provided, QuickScan first tries to look up the file relative to the current directory (for the top-level file supplied directly to QuickScan), or relative to the directory containing the file that referenced it. This is not very general, but seems good enough, especially if you have the luxury of writing your own utilities and can control the use of the search path in a standard way. As noted above, the search path separator is ':' on UNIX systems and ';' on Windows NT, or the path may be supplied as an array reference.

Here's a real example, taken from a Construct file here:

  sub cons::SMFgen {
      my($env, @tables) = @_;
      foreach $t (@tables) {
          $env->QuickScan(sub { /\b\S*?\.smf\b/g }, "$t.smf",
                          $env->{SMF_INCLUDE_PATH});
          $env->Command(
              ["$t.smdb.cc","$t.smdb.h","$t.snmp.cc","$t.ami.cc", "$t.http.cc"],
              "$t.smf",
              q(
                smfgen %( %SMF_INCLUDE_OPT %) %<
              )
          );
      }
  }

[NOTE that the form $env->QuickScan ... and $env->Command ... should not be necessary, but, for some reason, is required for this particular invocation. This appears to be a bug in Perl or a misunderstanding on my part; this invocation style does not always appear to be necessary.]

This finds all names of the form <name>.smf in the file. It will return the names even if they're found within comments, but that's OK (the mechanism is forgiving of extra files; they're just ignored on the assumption that the missing file will be noticed when the program, in this example, smfgen, is actually invoked).

A scanner is only invoked for a given source file if it is needed by some target in the tree. It is only ever invoked once for a given source file.

Here is another way to build the same scanner. This one uses a named subroutine, and also (unnecessarily, in this case) reads the whole file itself:

  sub myscan {
      my(@includes);
      do {
          push(@includes, /\b\S*?\.smf\b/g);
      } while <SCAN>;
      @includes
  }

Note that the order of the loop is reversed, with the loop test at the end. This is because the first line is already read for you. This scanner can be attached to a source file by:

    QuickScan $env \&myscan, "$_.smf";


SUPPORT AND SUGGESTIONS

Cons is maintained by the user community. To subscribe, send mail to cons-discuss-request@gnu.org with the word subscribe in the body.

Please report any suggestions through the cons-discuss@gnu.org mailing list.


BUGS

Sure to be some. Please report any bugs through the bug-cons@gnu.org mailing list.


INFORMATION ABOUT CONS

Information about Cons can be obtained from the official cons web site http://www.dsmit.com/cons/ or its mirrors listed there.

The cons maintainers can be contacted by email at cons-maintainers@gnu.org.


AUTHORS

Cons was originally written by Bob Sidebotham, and has since been significantly enriched by the members of the Cons community, cons-discuss@gnu.org.

The Cons community would like to thank Ulrich Pfeifer for the original pod documentation derived from the cons.html file. Cons documentation is now a part of the program itself.