fsvs-fsvs-1.2.12 (git commit a311665120b2e18a8b529a431ac8582892fa579c)

fsvs-fsvs-1.2.12/.gitignore:

autom4te.cache/
config.log
config.status
configure
src/Makefile
src/config.h
tests/Makefile
src/.*.d
src/*.o
src/fsvs
src/tags
doxygen
src/.vimrc.syntax
src/.ycm_extra_conf.py

fsvs-fsvs-1.2.12/CHANGES:

Changes in 1.2.12
- Don't use pcre2_get_match_data_size (github issue #2).
- Allow comments in "fsvs ignore load" lists.
- Repair webdav: close the root directory handle as well.
  https://bugs.launchpad.net/ubuntu/+source/fsvs/+bug/1875642
  https://github.com/phmarek/fsvs/issues/18

Changes in 1.2.11
- (Potentially) fixed a long-standing bug (only on webdav): removed duplicated open calls.
  (https://bugs.launchpad.net/ubuntu/+source/fsvs/+bug/1875642)

Changes in 1.2.10
- Restore properties like the update- and commit-pipe on "sync-repos".
- When using a commit-pipe, store the original (non-transformed) MD5 in the local properties as well, so that they match the remote data.
- Tigris.org was shut down without warning; moved to github.
- Switched to pcre2 (Debian bug #1000123).

Changes in 1.2.9
- Various small fixes that became visible with new compiler versions and/or LLVM.

Changes in 1.2.8
- Fixed URI canonicalization (UTF-8).

Changes in 1.2.7
- Updates for Clang.
- Fixed some compiler warnings.
- Fixed an "INTERNAL BUG" (issue 21) that didn't exist previously (?).

Changes in 1.2.6
- Updates for GCC 5.

Changes in 1.2.5
- Fix for segfault on deleted properties, e.g. "svn:owner".
- configure.in fix for OS X Lion with clang; thanks, Ryan!
  http://fsvs.tigris.org/issues/show_bug.cgi?id=16
- Removed nested functions, to make the stack non-executable (gcc needs trampoline code).
  See http://fsvs.tigris.org/issues/show_bug.cgi?id=17.

Changes in 1.2.4
- Bugfix: auto-props were not applied for explicitly specified entries. Thanks to Peter for the detailed bug report!
  Please note that the auto-props _only_ get applied if there are _no_ properties set on an entry (yet); so, after
      fsvs prop-set file property...
  the auto-props will _not_ be applied (as they might overwrite the manually set properties).

Changes in 1.2.3
- Compilation fixes for MacOS 10.6; thanks, Thomas!
- Added "password" option, as sent by Mark. Thank you!
- Workarounds for gcc-4.5 and gcc-4.6 regressions. Thank you, Brian!
- Compatibility with autoconf 2.68.

Changes in 1.2.2
- Tried to get configuration/compilation to work with OSX 10.6. Thanks, Florian.
- Fix for a stray "fstat64", which broke compilation for MacOSX 10.4. Thank you, Mike.
- Fixed a length calculation bug, found by Mark via a (bad?) compilation warning. Thank you!

Changes in 1.2.1
- Documentation fixes. Thank you, Gunnar.
- Fixed config_dir, so that using other authentication paths works. Previously $CONF/auth was hardcoded; now it is only the default.
- Fix "unversion" on the wc root.
- Fix "." as only parameter when started from the root.
- Two compile fixes; thank you, Stan!
- Solaris 10 compatibility fixes. Thank you, Stan!
- Fix SIGPIPE handling.
- Don't do the "_base" symlink; it breaks e.g. "grep -r /etc". Write a readme instead.
- Fix ENOMEM because of still-mapped file data; thank you, Mark!
- New option "dir_exclude_mtime". Thank you, Gunnar!

Changes in 1.2.0
- Documentation updates.
- Fixed some small bugs.
- The secondary URL/revision file doesn't have to exist. Thank you, Mark!
- Fix recursive behaviour of "_build-new-list".
- Now supports arbitrary "svn+" tunnels, like subversion does. Thank you, Jake.
- "fsvs log -v" for now filters the changed entries list, and shows the paths relative to the parameter. - Fixed "-o verbose=all" output; would be interpreted as "totally silent" because of signed compares. - Better out-of-date messages. - Make 'ext-tests' work with debian /bin/sh => dash, too. - Compatibility fixes for subversion 1.6.4. - Fix tempfile being left after FSVS run. - Bugfix: on commit empty property hashes got created. Thank you, Bogdan. - Bugfix for selection of entries (filter bit) - Bugfixes for non-UTF8 locales and update/sync. Thank you, Gunnar. - Additional configure check for Solaris. Thank you, Mark. fsvs-fsvs-1.2.12/LICENSE000066400000000000000000001045131453631713700145600ustar00rootroot00000000000000 GNU GENERAL PUBLIC LICENSE Version 3, 29 June 2007 Copyright (C) 2007 Free Software Foundation, Inc. Everyone is permitted to copy and distribute verbatim copies of this license document, but changing it is not allowed. Preamble The GNU General Public License is a free, copyleft license for software and other kinds of works. The licenses for most software and other practical works are designed to take away your freedom to share and change the works. By contrast, the GNU General Public License is intended to guarantee your freedom to share and change all versions of a program--to make sure it remains free software for all its users. We, the Free Software Foundation, use the GNU General Public License for most of our software; it applies also to any other work released this way by its authors. You can apply it to your programs, too. When we speak of free software, we are referring to freedom, not price. Our General Public Licenses are designed to make sure that you have the freedom to distribute copies of free software (and charge for them if you wish), that you receive source code or can get it if you want it, that you can change the software or use pieces of it in new free programs, and that you know you can do these things. 
To protect your rights, we need to prevent others from denying you these rights or asking you to surrender the rights. Therefore, you have certain responsibilities if you distribute copies of the software, or if you modify it: responsibilities to respect the freedom of others. For example, if you distribute copies of such a program, whether gratis or for a fee, you must pass on to the recipients the same freedoms that you received. You must make sure that they, too, receive or can get the source code. And you must show them these terms so they know their rights. Developers that use the GNU GPL protect your rights with two steps: (1) assert copyright on the software, and (2) offer you this License giving you legal permission to copy, distribute and/or modify it. For the developers' and authors' protection, the GPL clearly explains that there is no warranty for this free software. For both users' and authors' sake, the GPL requires that modified versions be marked as changed, so that their problems will not be attributed erroneously to authors of previous versions. Some devices are designed to deny users access to install or run modified versions of the software inside them, although the manufacturer can do so. This is fundamentally incompatible with the aim of protecting users' freedom to change the software. The systematic pattern of such abuse occurs in the area of products for individuals to use, which is precisely where it is most unacceptable. Therefore, we have designed this version of the GPL to prohibit the practice for those products. If such problems arise substantially in other domains, we stand ready to extend this provision to those domains in future versions of the GPL, as needed to protect the freedom of users. Finally, every program is threatened constantly by software patents. 
States should not allow patents to restrict development and use of software on general-purpose computers, but in those that do, we wish to avoid the special danger that patents applied to a free program could make it effectively proprietary. To prevent this, the GPL assures that patents cannot be used to render the program non-free. The precise terms and conditions for copying, distribution and modification follow. TERMS AND CONDITIONS 0. Definitions. "This License" refers to version 3 of the GNU General Public License. "Copyright" also means copyright-like laws that apply to other kinds of works, such as semiconductor masks. "The Program" refers to any copyrightable work licensed under this License. Each licensee is addressed as "you". "Licensees" and "recipients" may be individuals or organizations. To "modify" a work means to copy from or adapt all or part of the work in a fashion requiring copyright permission, other than the making of an exact copy. The resulting work is called a "modified version" of the earlier work or a work "based on" the earlier work. A "covered work" means either the unmodified Program or a work based on the Program. To "propagate" a work means to do anything with it that, without permission, would make you directly or secondarily liable for infringement under applicable copyright law, except executing it on a computer or modifying a private copy. Propagation includes copying, distribution (with or without modification), making available to the public, and in some countries other activities as well. To "convey" a work means any kind of propagation that enables other parties to make or receive copies. Mere interaction with a user through a computer network, with no transfer of a copy, is not conveying. 
An interactive user interface displays "Appropriate Legal Notices" to the extent that it includes a convenient and prominently visible feature that (1) displays an appropriate copyright notice, and (2) tells the user that there is no warranty for the work (except to the extent that warranties are provided), that licensees may convey the work under this License, and how to view a copy of this License. If the interface presents a list of user commands or options, such as a menu, a prominent item in the list meets this criterion. 1. Source Code. The "source code" for a work means the preferred form of the work for making modifications to it. "Object code" means any non-source form of a work. A "Standard Interface" means an interface that either is an official standard defined by a recognized standards body, or, in the case of interfaces specified for a particular programming language, one that is widely used among developers working in that language. The "System Libraries" of an executable work include anything, other than the work as a whole, that (a) is included in the normal form of packaging a Major Component, but which is not part of that Major Component, and (b) serves only to enable use of the work with that Major Component, or to implement a Standard Interface for which an implementation is available to the public in source code form. A "Major Component", in this context, means a major essential component (kernel, window system, and so on) of the specific operating system (if any) on which the executable work runs, or a compiler used to produce the work, or an object code interpreter used to run it. The "Corresponding Source" for a work in object code form means all the source code needed to generate, install, and (for an executable work) run the object code and to modify the work, including scripts to control those activities. 
However, it does not include the work's System Libraries, or general-purpose tools or generally available free programs which are used unmodified in performing those activities but which are not part of the work. For example, Corresponding Source includes interface definition files associated with source files for the work, and the source code for shared libraries and dynamically linked subprograms that the work is specifically designed to require, such as by intimate data communication or control flow between those subprograms and other parts of the work. The Corresponding Source need not include anything that users can regenerate automatically from other parts of the Corresponding Source. The Corresponding Source for a work in source code form is that same work. 2. Basic Permissions. All rights granted under this License are granted for the term of copyright on the Program, and are irrevocable provided the stated conditions are met. This License explicitly affirms your unlimited permission to run the unmodified Program. The output from running a covered work is covered by this License only if the output, given its content, constitutes a covered work. This License acknowledges your rights of fair use or other equivalent, as provided by copyright law. You may make, run and propagate covered works that you do not convey, without conditions so long as your license otherwise remains in force. You may convey covered works to others for the sole purpose of having them make modifications exclusively for you, or provide you with facilities for running those works, provided that you comply with the terms of this License in conveying all material for which you do not control copyright. Those thus making or running the covered works for you must do so exclusively on your behalf, under your direction and control, on terms that prohibit them from making any copies of your copyrighted material outside their relationship with you. 
Conveying under any other circumstances is permitted solely under the conditions stated below. Sublicensing is not allowed; section 10 makes it unnecessary. 3. Protecting Users' Legal Rights From Anti-Circumvention Law. No covered work shall be deemed part of an effective technological measure under any applicable law fulfilling obligations under article 11 of the WIPO copyright treaty adopted on 20 December 1996, or similar laws prohibiting or restricting circumvention of such measures. When you convey a covered work, you waive any legal power to forbid circumvention of technological measures to the extent such circumvention is effected by exercising rights under this License with respect to the covered work, and you disclaim any intention to limit operation or modification of the work as a means of enforcing, against the work's users, your or third parties' legal rights to forbid circumvention of technological measures. 4. Conveying Verbatim Copies. You may convey verbatim copies of the Program's source code as you receive it, in any medium, provided that you conspicuously and appropriately publish on each copy an appropriate copyright notice; keep intact all notices stating that this License and any non-permissive terms added in accord with section 7 apply to the code; keep intact all notices of the absence of any warranty; and give all recipients a copy of this License along with the Program. You may charge any price or no price for each copy that you convey, and you may offer support or warranty protection for a fee. 5. Conveying Modified Source Versions. You may convey a work based on the Program, or the modifications to produce it from the Program, in the form of source code under the terms of section 4, provided that you also meet all of these conditions: a) The work must carry prominent notices stating that you modified it, and giving a relevant date. 
b) The work must carry prominent notices stating that it is released under this License and any conditions added under section 7. This requirement modifies the requirement in section 4 to "keep intact all notices". c) You must license the entire work, as a whole, under this License to anyone who comes into possession of a copy. This License will therefore apply, along with any applicable section 7 additional terms, to the whole of the work, and all its parts, regardless of how they are packaged. This License gives no permission to license the work in any other way, but it does not invalidate such permission if you have separately received it. d) If the work has interactive user interfaces, each must display Appropriate Legal Notices; however, if the Program has interactive interfaces that do not display Appropriate Legal Notices, your work need not make them do so. A compilation of a covered work with other separate and independent works, which are not by their nature extensions of the covered work, and which are not combined with it such as to form a larger program, in or on a volume of a storage or distribution medium, is called an "aggregate" if the compilation and its resulting copyright are not used to limit the access or legal rights of the compilation's users beyond what the individual works permit. Inclusion of a covered work in an aggregate does not cause this License to apply to the other parts of the aggregate. 6. Conveying Non-Source Forms. You may convey a covered work in object code form under the terms of sections 4 and 5, provided that you also convey the machine-readable Corresponding Source under the terms of this License, in one of these ways: a) Convey the object code in, or embodied in, a physical product (including a physical distribution medium), accompanied by the Corresponding Source fixed on a durable physical medium customarily used for software interchange. 
b) Convey the object code in, or embodied in, a physical product (including a physical distribution medium), accompanied by a written offer, valid for at least three years and valid for as long as you offer spare parts or customer support for that product model, to give anyone who possesses the object code either (1) a copy of the Corresponding Source for all the software in the product that is covered by this License, on a durable physical medium customarily used for software interchange, for a price no more than your reasonable cost of physically performing this conveying of source, or (2) access to copy the Corresponding Source from a network server at no charge. c) Convey individual copies of the object code with a copy of the written offer to provide the Corresponding Source. This alternative is allowed only occasionally and noncommercially, and only if you received the object code with such an offer, in accord with subsection 6b. d) Convey the object code by offering access from a designated place (gratis or for a charge), and offer equivalent access to the Corresponding Source in the same way through the same place at no further charge. You need not require recipients to copy the Corresponding Source along with the object code. If the place to copy the object code is a network server, the Corresponding Source may be on a different server (operated by you or a third party) that supports equivalent copying facilities, provided you maintain clear directions next to the object code saying where to find the Corresponding Source. Regardless of what server hosts the Corresponding Source, you remain obligated to ensure that it is available for as long as needed to satisfy these requirements. e) Convey the object code using peer-to-peer transmission, provided you inform other peers where the object code and Corresponding Source of the work are being offered to the general public at no charge under subsection 6d. 
A separable portion of the object code, whose source code is excluded from the Corresponding Source as a System Library, need not be included in conveying the object code work. A "User Product" is either (1) a "consumer product", which means any tangible personal property which is normally used for personal, family, or household purposes, or (2) anything designed or sold for incorporation into a dwelling. In determining whether a product is a consumer product, doubtful cases shall be resolved in favor of coverage. For a particular product received by a particular user, "normally used" refers to a typical or common use of that class of product, regardless of the status of the particular user or of the way in which the particular user actually uses, or expects or is expected to use, the product. A product is a consumer product regardless of whether the product has substantial commercial, industrial or non-consumer uses, unless such uses represent the only significant mode of use of the product. "Installation Information" for a User Product means any methods, procedures, authorization keys, or other information required to install and execute modified versions of a covered work in that User Product from a modified version of its Corresponding Source. The information must suffice to ensure that the continued functioning of the modified object code is in no case prevented or interfered with solely because modification has been made. If you convey an object code work under this section in, or with, or specifically for use in, a User Product, and the conveying occurs as part of a transaction in which the right of possession and use of the User Product is transferred to the recipient in perpetuity or for a fixed term (regardless of how the transaction is characterized), the Corresponding Source conveyed under this section must be accompanied by the Installation Information. 
But this requirement does not apply if neither you nor any third party retains the ability to install modified object code on the User Product (for example, the work has been installed in ROM). The requirement to provide Installation Information does not include a requirement to continue to provide support service, warranty, or updates for a work that has been modified or installed by the recipient, or for the User Product in which it has been modified or installed. Access to a network may be denied when the modification itself materially and adversely affects the operation of the network or violates the rules and protocols for communication across the network. Corresponding Source conveyed, and Installation Information provided, in accord with this section must be in a format that is publicly documented (and with an implementation available to the public in source code form), and must require no special password or key for unpacking, reading or copying. 7. Additional Terms. "Additional permissions" are terms that supplement the terms of this License by making exceptions from one or more of its conditions. Additional permissions that are applicable to the entire Program shall be treated as though they were included in this License, to the extent that they are valid under applicable law. If additional permissions apply only to part of the Program, that part may be used separately under those permissions, but the entire Program remains governed by this License without regard to the additional permissions. When you convey a copy of a covered work, you may at your option remove any additional permissions from that copy, or from any part of it. (Additional permissions may be written to require their own removal in certain cases when you modify the work.) You may place additional permissions on material, added by you to a covered work, for which you have or can give appropriate copyright permission. 
Notwithstanding any other provision of this License, for material you add to a covered work, you may (if authorized by the copyright holders of that material) supplement the terms of this License with terms: a) Disclaiming warranty or limiting liability differently from the terms of sections 15 and 16 of this License; or b) Requiring preservation of specified reasonable legal notices or author attributions in that material or in the Appropriate Legal Notices displayed by works containing it; or c) Prohibiting misrepresentation of the origin of that material, or requiring that modified versions of such material be marked in reasonable ways as different from the original version; or d) Limiting the use for publicity purposes of names of licensors or authors of the material; or e) Declining to grant rights under trademark law for use of some trade names, trademarks, or service marks; or f) Requiring indemnification of licensors and authors of that material by anyone who conveys the material (or modified versions of it) with contractual assumptions of liability to the recipient, for any liability that these contractual assumptions directly impose on those licensors and authors. All other non-permissive additional terms are considered "further restrictions" within the meaning of section 10. If the Program as you received it, or any part of it, contains a notice stating that it is governed by this License along with a term that is a further restriction, you may remove that term. If a license document contains a further restriction but permits relicensing or conveying under this License, you may add to a covered work material governed by the terms of that license document, provided that the further restriction does not survive such relicensing or conveying. 
If you add terms to a covered work in accord with this section, you must place, in the relevant source files, a statement of the additional terms that apply to those files, or a notice indicating where to find the applicable terms. Additional terms, permissive or non-permissive, may be stated in the form of a separately written license, or stated as exceptions; the above requirements apply either way. 8. Termination. You may not propagate or modify a covered work except as expressly provided under this License. Any attempt otherwise to propagate or modify it is void, and will automatically terminate your rights under this License (including any patent licenses granted under the third paragraph of section 11). However, if you cease all violation of this License, then your license from a particular copyright holder is reinstated (a) provisionally, unless and until the copyright holder explicitly and finally terminates your license, and (b) permanently, if the copyright holder fails to notify you of the violation by some reasonable means prior to 60 days after the cessation. Moreover, your license from a particular copyright holder is reinstated permanently if the copyright holder notifies you of the violation by some reasonable means, this is the first time you have received notice of violation of this License (for any work) from that copyright holder, and you cure the violation prior to 30 days after your receipt of the notice. Termination of your rights under this section does not terminate the licenses of parties who have received copies or rights from you under this License. If your rights have been terminated and not permanently reinstated, you do not qualify to receive new licenses for the same material under section 10. 9. Acceptance Not Required for Having Copies. You are not required to accept this License in order to receive or run a copy of the Program. 
Ancillary propagation of a covered work occurring solely as a consequence of using peer-to-peer transmission to receive a copy likewise does not require acceptance. However, nothing other than this License grants you permission to propagate or modify any covered work. These actions infringe copyright if you do not accept this License. Therefore, by modifying or propagating a covered work, you indicate your acceptance of this License to do so. 10. Automatic Licensing of Downstream Recipients. Each time you convey a covered work, the recipient automatically receives a license from the original licensors, to run, modify and propagate that work, subject to this License. You are not responsible for enforcing compliance by third parties with this License. An "entity transaction" is a transaction transferring control of an organization, or substantially all assets of one, or subdividing an organization, or merging organizations. If propagation of a covered work results from an entity transaction, each party to that transaction who receives a copy of the work also receives whatever licenses to the work the party's predecessor in interest had or could give under the previous paragraph, plus a right to possession of the Corresponding Source of the work from the predecessor in interest, if the predecessor has it or can get it with reasonable efforts. You may not impose any further restrictions on the exercise of the rights granted or affirmed under this License. For example, you may not impose a license fee, royalty, or other charge for exercise of rights granted under this License, and you may not initiate litigation (including a cross-claim or counterclaim in a lawsuit) alleging that any patent claim is infringed by making, using, selling, offering for sale, or importing the Program or any portion of it. 11. Patents. A "contributor" is a copyright holder who authorizes use under this License of the Program or a work on which the Program is based. 
The work thus licensed is called the contributor's "contributor version".

A contributor's "essential patent claims" are all patent claims owned or controlled by the contributor, whether already acquired or hereafter acquired, that would be infringed by some manner, permitted by this License, of making, using, or selling its contributor version, but do not include claims that would be infringed only as a consequence of further modification of the contributor version. For purposes of this definition, "control" includes the right to grant patent sublicenses in a manner consistent with the requirements of this License.

Each contributor grants you a non-exclusive, worldwide, royalty-free patent license under the contributor's essential patent claims, to make, use, sell, offer for sale, import and otherwise run, modify and propagate the contents of its contributor version.

In the following three paragraphs, a "patent license" is any express agreement or commitment, however denominated, not to enforce a patent (such as an express permission to practice a patent or covenant not to sue for patent infringement). To "grant" such a patent license to a party means to make such an agreement or commitment not to enforce a patent against the party.

If you convey a covered work, knowingly relying on a patent license, and the Corresponding Source of the work is not available for anyone to copy, free of charge and under the terms of this License, through a publicly available network server or other readily accessible means, then you must either (1) cause the Corresponding Source to be so available, or (2) arrange to deprive yourself of the benefit of the patent license for this particular work, or (3) arrange, in a manner consistent with the requirements of this License, to extend the patent license to downstream recipients.

"Knowingly relying" means you have actual knowledge that, but for the patent license, your conveying the covered work in a country, or your recipient's use of the covered work in a country, would infringe one or more identifiable patents in that country that you have reason to believe are valid.

If, pursuant to or in connection with a single transaction or arrangement, you convey, or propagate by procuring conveyance of, a covered work, and grant a patent license to some of the parties receiving the covered work authorizing them to use, propagate, modify or convey a specific copy of the covered work, then the patent license you grant is automatically extended to all recipients of the covered work and works based on it.

A patent license is "discriminatory" if it does not include within the scope of its coverage, prohibits the exercise of, or is conditioned on the non-exercise of one or more of the rights that are specifically granted under this License. You may not convey a covered work if you are a party to an arrangement with a third party that is in the business of distributing software, under which you make payment to the third party based on the extent of your activity of conveying the work, and under which the third party grants, to any of the parties who would receive the covered work from you, a discriminatory patent license (a) in connection with copies of the covered work conveyed by you (or copies made from those copies), or (b) primarily for and in connection with specific products or compilations that contain the covered work, unless you entered into that arrangement, or that patent license was granted, prior to 28 March 2007.

Nothing in this License shall be construed as excluding or limiting any implied license or other defenses to infringement that may otherwise be available to you under applicable patent law.

12. No Surrender of Others' Freedom.

If conditions are imposed on you (whether by court order, agreement or otherwise) that contradict the conditions of this License, they do not excuse you from the conditions of this License. If you cannot convey a covered work so as to satisfy simultaneously your obligations under this License and any other pertinent obligations, then as a consequence you may not convey it at all. For example, if you agree to terms that obligate you to collect a royalty for further conveying from those to whom you convey the Program, the only way you could satisfy both those terms and this License would be to refrain entirely from conveying the Program.

13. Use with the GNU Affero General Public License.

Notwithstanding any other provision of this License, you have permission to link or combine any covered work with a work licensed under version 3 of the GNU Affero General Public License into a single combined work, and to convey the resulting work. The terms of this License will continue to apply to the part which is the covered work, but the special requirements of the GNU Affero General Public License, section 13, concerning interaction through a network will apply to the combination as such.

14. Revised Versions of this License.

The Free Software Foundation may publish revised and/or new versions of the GNU General Public License from time to time. Such new versions will be similar in spirit to the present version, but may differ in detail to address new problems or concerns.

Each version is given a distinguishing version number. If the Program specifies that a certain numbered version of the GNU General Public License "or any later version" applies to it, you have the option of following the terms and conditions either of that numbered version or of any later version published by the Free Software Foundation. If the Program does not specify a version number of the GNU General Public License, you may choose any version ever published by the Free Software Foundation.

If the Program specifies that a proxy can decide which future versions of the GNU General Public License can be used, that proxy's public statement of acceptance of a version permanently authorizes you to choose that version for the Program.

Later license versions may give you additional or different permissions. However, no additional obligations are imposed on any author or copyright holder as a result of your choosing to follow a later version.

15. Disclaimer of Warranty.

THERE IS NO WARRANTY FOR THE PROGRAM, TO THE EXTENT PERMITTED BY APPLICABLE LAW. EXCEPT WHEN OTHERWISE STATED IN WRITING THE COPYRIGHT HOLDERS AND/OR OTHER PARTIES PROVIDE THE PROGRAM "AS IS" WITHOUT WARRANTY OF ANY KIND, EITHER EXPRESSED OR IMPLIED, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE. THE ENTIRE RISK AS TO THE QUALITY AND PERFORMANCE OF THE PROGRAM IS WITH YOU. SHOULD THE PROGRAM PROVE DEFECTIVE, YOU ASSUME THE COST OF ALL NECESSARY SERVICING, REPAIR OR CORRECTION.

16. Limitation of Liability.

IN NO EVENT UNLESS REQUIRED BY APPLICABLE LAW OR AGREED TO IN WRITING WILL ANY COPYRIGHT HOLDER, OR ANY OTHER PARTY WHO MODIFIES AND/OR CONVEYS THE PROGRAM AS PERMITTED ABOVE, BE LIABLE TO YOU FOR DAMAGES, INCLUDING ANY GENERAL, SPECIAL, INCIDENTAL OR CONSEQUENTIAL DAMAGES ARISING OUT OF THE USE OR INABILITY TO USE THE PROGRAM (INCLUDING BUT NOT LIMITED TO LOSS OF DATA OR DATA BEING RENDERED INACCURATE OR LOSSES SUSTAINED BY YOU OR THIRD PARTIES OR A FAILURE OF THE PROGRAM TO OPERATE WITH ANY OTHER PROGRAMS), EVEN IF SUCH HOLDER OR OTHER PARTY HAS BEEN ADVISED OF THE POSSIBILITY OF SUCH DAMAGES.

17. Interpretation of Sections 15 and 16.
If the disclaimer of warranty and limitation of liability provided above cannot be given local legal effect according to their terms, reviewing courts shall apply local law that most closely approximates an absolute waiver of all civil liability in connection with the Program, unless a warranty or assumption of liability accompanies a copy of the Program in return for a fee.

END OF TERMS AND CONDITIONS

How to Apply These Terms to Your New Programs

If you develop a new program, and you want it to be of the greatest possible use to the public, the best way to achieve this is to make it free software which everyone can redistribute and change under these terms.

To do so, attach the following notices to the program. It is safest to attach them to the start of each source file to most effectively state the exclusion of warranty; and each file should have at least the "copyright" line and a pointer to where the full notice is found.

	<one line to give the program's name and a brief idea of what it does.>
	Copyright (C) <year> <name of author>

	This program is free software: you can redistribute it and/or modify it under the terms of the GNU General Public License as published by the Free Software Foundation, either version 3 of the License, or (at your option) any later version.

	This program is distributed in the hope that it will be useful, but WITHOUT ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU General Public License for more details.

	You should have received a copy of the GNU General Public License along with this program. If not, see <https://www.gnu.org/licenses/>.

Also add information on how to contact you by electronic and paper mail.

If the program does terminal interaction, make it output a short notice like this when it starts in an interactive mode:

	<program> Copyright (C) <year> <name of author>
	This program comes with ABSOLUTELY NO WARRANTY; for details type `show w'.
	This is free software, and you are welcome to redistribute it under certain conditions; type `show c' for details.
The hypothetical commands `show w' and `show c' should show the appropriate parts of the General Public License. Of course, your program's commands might be different; for a GUI interface, you would use an "about box".

You should also get your employer (if you work as a programmer) or school, if any, to sign a "copyright disclaimer" for the program, if necessary. For more information on this, and how to apply and follow the GNU GPL, see <https://www.gnu.org/licenses/>.

The GNU General Public License does not permit incorporating your program into proprietary programs. If your program is a subroutine library, you may consider it more useful to permit linking proprietary applications with the library. If this is what you want to do, use the GNU Lesser General Public License instead of this License. But first, please read <https://www.gnu.org/licenses/why-not-lgpl.html>.

fsvs-fsvs-1.2.12/Makefile

default-target: src/config.h
	@$(MAKE) --no-print-directory -C src

%:
	@$(MAKE) --no-print-directory -C src $@

src/config.h: configure
	@echo ''
	@echo 'You have to run "./configure" before compiling, which might need'
	@echo 'some options depending on your system.'
	@echo ''
	@echo 'See "./configure --help" for a listing of the parameters.'
	@echo ''
	@false

configure: configure.in
	@echo Generating configure.
	autoconf

distclean:
	rm -f config.cache config.log config.status 2> /dev/null || true
	rm -f src/Makefile src/tags tests/Makefile 2> /dev/null || true
	rm -f src/config.h src/*.[os] src/.*.d src/fsvs 2> /dev/null || true

test-shell: src/fsvs
	@$(MAKE) BINARY=$(CURDIR)/$< --no-print-directory -C tests shell

tags:
	ctags -R src /usr/include/apr-1.0/ /usr/include/subversion-1/

.PHONY: tags

fsvs-fsvs-1.2.12/README

FSVS - a fast system versioning tool.
https://github.com/phmarek/fsvs
(C)opyrights by philipp@marek.priv.at 2005-2020

This program is free software; you can redistribute it and/or modify it under the terms of the GNU General Public License version 3 as published by the Free Software Foundation.

This program is distributed in the hope that it will be useful, but WITHOUT ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU General Public License for more details.

You should have received a copy of the GNU General Public License along with this program; if not, write to the Free Software Foundation, Inc., 59 Temple Place, Suite 330, Boston, MA 02111-1307 USA


What does it do?
----------------

FSVS is a backup/archival/versioning tool, which uses subversion backends for storage. This means that previous versions of all files are available in case of hardware problems, data loss, virus infections, user problems etc.

FSVS is used to take snapshots of the current machine and restore them; all advanced operations (taking diffs, merging, etc.) should be done via some repository browser.

FSVS currently runs on Linux, OpenBSD and OS X, and I think it works with Solaris, too - in short, UNIX should be fine.


Why was it written?
-------------------

Well, mostly to scratch an itch :-) Backup methods using rsync have no or very limited history, svn saves no metadata and needs double local storage, svk doesn't understand all file types and is (IMO) too slow for full system versioning.


How is it used?
---------------

Please take a look at subversion [1]'s documentation; subversion's libraries (and by implication apr [2]) are needed for operation. See also the subversion book [3].

First install subversion (and, by implication, apr). Next compile fsvs:

	cd src
	make

And install the binary (the man-pages are not automatically installed yet):

	make install

Make a repository somewhere, preferably on another machine.
	svnadmin create /path/to/repos

Create a local directory for the "working copy administrative area". If you'd like to use another path, just set the environment variable WAA to it.

	mkdir -p /var/spool/fsvs /etc/fsvs

Go to the base path for versioning:

	cd /

Tell fsvs which URL it should use:

	fsvs url svn+ssh://username@machine/path/to/repos

Define ignore patterns - all virtual filesystems (/proc, /sys, etc.), and (assuming that you're in / currently) the temporary files in /tmp:

	fsvs ignore DEVICE:0 ./tmp/*

And you're ready to play! Check your data in:

	fsvs commit -m "First import"

See the files in doc for more details; here, as (ordered) list:

	fsvs.1              - Manual page; describes FSVS' commands
	USAGE               - Manual page in ASCII
	IGNORING            - Why/how to ignore entries
	fsvs-url-format.5   - Detailed description of FSVS' URLs definitions
	fsvs-options.5      - Options for FSVS (command line, config file)
	fsvs-howto-backup.5 - A short HOWTO.

These documents can be browsed in HTML on http://doc.fsvs-software.org/, too. (And they're a bit more readable there.)

If it bails out with an error, I'd appreciate if you'd run the failing command with the option "-v" (verbose) and send the last lines to the developers mailing list; sometimes it may be necessary to see the complete debug log file, which you can get by using "-v -d".


Notes/Links
-----------
1: http://subversion.tigris.org/
2: http://apr.apache.org/
3: http://svnbook.red-bean.com/

fsvs-fsvs-1.2.12/configure.in

# -*- Autoconf -*-
# Process this file with autoconf to produce a configure script.
AC_PREREQ(2.60)
AC_INIT(fsvs,
	[esyscmd(make --quiet --no-print-directory -f Makefile.in version-nnl 2>/dev/null)],
	https://github.com/phmarek/fsvs)
AC_GNU_SOURCE

# if [[ "x$cache_file" == /dev/null ]]
# then
#   cache_file=config.cache
# fi
# AC_CACHE_LOAD

AC_CONFIG_SRCDIR([src/actions.c])
AC_CONFIG_HEADERS([src/config.h])

AC_MSG_NOTICE([*** Now configuring FSVS ]AC_PACKAGE_VERSION[ ***])

# Checks for programs.
AC_PROG_CC
AC_PROG_CPP

AC_DEFINE(APR_PATH)
AC_SUBST(APR_PATH)

##################################### Header files
INCDIRS="/usr/local/include /usr/include /openpkg/include "

# The subversion headers do a #include , so the APR libraries
# *have* to be directly specified.
# Furthermore there's apr-1/ as directory name, depending on apr version.
# Is there something like this available for subversion?
AC_ARG_WITH(aprinc,
	AC_HELP_STRING([--with-aprinc=PATH],
		[Specify an include directory for the APR headers.]),
	[ INCDIRS="$INCDIRS $withval" ],
	[ if APR_PATH=`apr-1-config --includedir || apr-config --includedir`
	  then
	    INCDIRS="$INCDIRS $APR_PATH"
	  fi ])

AC_ARG_WITH(svninc,
	AC_HELP_STRING([--with-svninc=PATH],
		[Specify an include directory for the subversion headers.]),
	[ INCDIRS="$INCDIRS $withval" ])

AC_ARG_WITH(waa_md5,
	AC_HELP_STRING([--with-waa_md5=NUMBER],
		[Specifies how many hex characters of the MD5 of the working copy
		 root should be used to address the data in the WAA. This may be
		 increased if you have a lot of different working copies on a
		 single machine. The default is 0; useful values are 0, and from
		 6 to 32.]),
	[
		# The shell gives an error on numeric comparison with a non-numeric
		# value.
		# We allow from 3 characters on, although it might not make much
		# sense.
		WAA_WC_MD5_CHARS=`perl -e '$_=0+shift; print $_+0 if $_==0 || ($_>3 && $_<=16)' "$withval"`
		if [[ "$WAA_WC_MD5_CHARS" = "" ]]
		then
			AC_MSG_ERROR([[The given value for --with-waa_md5 is invalid.]])
		fi
	],
	[ WAA_WC_MD5_CHARS=0 ])
AC_DEFINE_UNQUOTED(WAA_WC_MD5_CHARS, $WAA_WC_MD5_CHARS,
	[Number of bytes for WAA addressing is $WAA_WC_MD5_CHARS.])
AC_SUBST(WAA_WC_MD5_CHARS)

CFLAGS="$CFLAGS -D_GNU_SOURCE=1 -D_FILE_OFFSET_BITS=64 -DPCRE2_CODE_UNIT_WIDTH=8"
for dir in $INCDIRS
do
	# using -I would result in the files being _non_ system include
	# directories, ie. they'd clutter the dependency files.
	# That's why -idirafter is used.
	CFLAGS="$CFLAGS -idirafter $dir"
done
AC_DEFINE_UNQUOTED(CFLAGS, [$CFLAGS])
AC_SUBST(CFLAGS)
AC_MSG_NOTICE(["CFLAGS=$CFLAGS"])

##################################### Linker
LIBDIRS="/usr/local/lib /openpkg/lib"

AC_ARG_WITH(aprlib,
	AC_HELP_STRING([--with-aprlib=PATH],
		[Specify a directory containing APR libraries.]),
	[ LIBDIRS="$LIBDIRS $withval" ])

AC_ARG_WITH(svnlib,
	AC_HELP_STRING([--with-svnlib=PATH],
		[Specify a directory containing subversion libraries.]),
	[ LIBDIRS="$LIBDIRS $withval" ])

for dir in $LIBDIRS
do
	LDFLAGS="$LDFLAGS -L$dir"
done
AC_DEFINE_UNQUOTED(LDFLAGS, [$LDFLAGS])
AC_SUBST(LDFLAGS)
AC_MSG_NOTICE(["LDFLAGS=$LDFLAGS"])

EXTRALIBS="-laprutil-1 -lapr-1"
if [[ `uname -s` = "SunOS" ]]
then
	# Solaris 10, thanks Peter.
	EXTRALIBS="-lsocket -lnsl $EXTRALIBS"
fi
if [[ `uname -s` = "Darwin" ]]
then
	# OSX 10.6 - thanks, Florian.
	EXTRALIBS="-liconv $EXTRALIBS"
	have_fmemopen=no
fi
AC_DEFINE_UNQUOTED(EXTRALIBS, [$EXTRALIBS])
AC_SUBST(EXTRALIBS)

##################################### Checks
# Checks for libraries.
AC_CHECK_LIB([pcre2-8], [pcre2_compile_8], [],
	[AC_MSG_FAILURE([Sorry, can't find PCRE2-8.])])
AC_CHECK_LIB([aprutil-1], [apr_md5_init], [],
	[AC_MSG_FAILURE([Sorry, can't find APR.])])
AC_CHECK_LIB([svn_delta-1], [svn_txdelta_apply], [],
	[AC_MSG_FAILURE([Sorry, can't find subversion.])])
AC_CHECK_LIB([svn_ra-1], [svn_ra_initialize], [],
	[AC_MSG_FAILURE([Sorry, can't find subversion.])])
AC_CHECK_LIB([gdbm], [gdbm_firstkey], [],
	[AC_MSG_FAILURE([Sorry, can't find gdbm.])])

# Checks for header files.
AC_HEADER_STDC
AC_CHECK_HEADERS([fcntl.h stddef.h stdlib.h string.h sys/time.h unistd.h pcre2.h ],
	[], [AC_MSG_FAILURE([Needed header file not found.])])
#apr_file_io.h subversion-1/svn_md5.h])
AC_HEADER_DIRENT

AC_CHECK_MEMBERS([struct stat.st_mtim])

AC_COMPILE_IFELSE(
	[AC_LANG_PROGRAM(
		[[ #include <valgrind/memcheck.h> ]],
		[[ VALGRIND_MAKE_MEM_DEFINED(0, 2); ]] )],
	[have_valgrind=yes], [have_valgrind=no])
if test x$have_valgrind = xyes ; then
	AC_DEFINE(HAVE_VALGRIND, 1, compatible valgrind version found)
else
	AC_MSG_NOTICE([No compatible valgrind version.])
fi

# Check whether S_IFMT is dense, ie. a single block of binary ones.
# If it isn't, the bitcount wouldn't tell the needed bits to represent the
# data.
# If S_IFMT is dense, the increment results in a single carry bit.
# Checked via changing /usr/include/bits/stat.h.
AC_RUN_IFELSE([AC_LANG_SOURCE([
	#include "src/preproc.h"

	int main(int argc, char **args)
	{
		if (_BITCOUNT( (S_IFMT >> MODE_T_SHIFT_BITS) + 1) == 1)
			return 0;
		else
			return 1;
	}
	])],
	[AC_MSG_NOTICE([S_IFMT is ok.])],
	[AC_MSG_FAILURE([You have a sparse S_IFMT. Please create a github issue.])])

AC_CHECK_HEADERS([linux/kdev_t.h])
AC_ARG_ENABLE(dev-fake,
	AC_HELP_STRING([--enable-dev-fake],
		[Include fake definitions for MAJOR(), MINOR() and MKDEV().
		 Needed if none found.]),
	[AC_DEFINE([ENABLE_DEV_FAKE]) ENABLE_DEV_FAKE=1], [])
AC_SUBST(ENABLE_DEV_FAKE)

AC_ARG_ENABLE(debug,
	AC_HELP_STRING([--enable-debug],
		[compile some extra debug checks in (valgrind, gdb) (default is no)]),
	[AC_DEFINE([ENABLE_DEBUG]) ENABLE_DEBUG=1], [])
AC_SUBST(ENABLE_DEBUG)

AC_ARG_ENABLE(gcov,
	AC_HELP_STRING([--enable-gcov],
		[whether to compile with instrumentation for gcov (default is no)
		 (needs --enable-debug)]),
	[AC_DEFINE([ENABLE_GCOV]) ENABLE_GCOV=1], [])
AC_DEFINE([ENABLE_GCOV])
AC_SUBST(ENABLE_GCOV)

AC_COMPILE_IFELSE(
	[AC_LANG_PROGRAM(
		[[ #include <fcntl.h> ]],
		[[ int i=O_DIRECTORY; ]] )],
	[have_o_directory=yes], [have_o_directory=no])
if test x$have_o_directory = xyes ; then
	AC_DEFINE(HAVE_O_DIRECTORY, 1, O_DIRECTORY found)
fi
AC_SUBST(HAVE_O_DIRECTORY)

AC_LINK_IFELSE(
	[AC_LANG_PROGRAM(
		[[ #include <unistd.h> ]],
		[[ char **environ;
		   int main(void) { return environ == NULL; } ]] )],
	[need_environ_extern=no], [need_environ_extern=yes])
if test x$need_environ_extern = xyes ; then
	AC_DEFINE(NEED_ENVIRON_EXTERN, 1, "char **environ" needs "extern")
fi
AC_SUBST(NEED_ENVIRON_EXTERN)

if test x$have_fmemopen = x
then
	AC_LINK_IFELSE(
		[AC_LANG_PROGRAM(
			[[ #include <stdio.h> ]],
			[[ int main(int argc, char *args[])
			   { return fmemopen(args[0], 2, args[1]) == NULL; } ]] )],
		[have_fmemopen=yes], [have_fmemopen=no])
fi
if test x$have_fmemopen = xyes
then
	AC_DEFINE(HAVE_FMEMOPEN, 1, [fmemopen() found])
else
	AC_MSG_WARN([fmemopen() not found.
		debug_buffer option not available.])
fi
AC_SUBST(HAVE_FMEMOPEN)

if locale -a > /dev/null 2>&1
then
	AC_DEFINE([HAVE_LOCALES],[1])
fi
AC_SUBST(HAVE_LOCALES)

AC_ARG_WITH(chroot,
	AC_HELP_STRING([--with-chroot=PATH],
		[Specify a chroot environment for the fsvs-chrooter helper.]),
	[
		if test "$withval" = "yes" ; then
			AC_MSG_ERROR([--with-chroot requires an argument.])
		else
			CHROOTER_JAIL=$withval
			AC_DEFINE_UNQUOTED(CHROOTER_JAIL, "$CHROOTER_JAIL",
				[The path of a chroot jail.])
		fi
	])
AC_SUBST(CHROOTER_JAIL)

AC_ARG_ENABLE(release,
	AC_HELP_STRING([--enable-release],
		[whether to compile without debug messages. Makes image smaller
		 (to about half size), but makes -d and -D inoperative.
		 (Default is no)]),
	[AC_DEFINE([ENABLE_RELEASE]) ENABLE_RELEASE=1], [])
AC_SUBST(ENABLE_RELEASE)

if [[ "$ENABLE_RELEASE$ENABLE_DEBUG" = "11" ]]
then
	AC_MSG_ERROR([[--enable-debug and --enable-release are incompatible.
		Use one or the other.]])
fi

AC_CHECK_FUNCS([getdents64])
AC_CHECK_HEADERS([linux/types.h])
AC_CHECK_HEADERS([linux/unistd.h])
AC_CHECK_TYPES([comparison_fn_t])
AC_SYS_LARGEFILE

# Checks for typedefs, structures, and compiler characteristics.
AC_C_CONST
AC_C_INLINE
AC_CHECK_MEMBERS([struct stat.st_rdev])
AC_HEADER_TIME
AC_STRUCT_TM

AC_DEFINE([HAS_FASTCALL])
AC_SUBST(HAS_FASTCALL)
# Only i386 (32bit) has fastcall.
if [[ `uname -m` = i?86 ]]
then
	HAS_FASTCALL=1
fi

AC_TYPE_UINT32_T
AC_SUBST(HAVE_UINT32_T)
# See config.h for an explanation.
if [[ "$ac_cv_c_uint32_t" = "yes" ]]
then
	ac_cv_c_uint32_t=uint32_t
fi
AC_DEFINE_UNQUOTED(AC_CV_C_UINT32_T, [$ac_cv_c_uint32_t])

AC_TYPE_UINT64_T
AC_SUBST(HAVE_UINT64_T)
if [[ "$ac_cv_c_uint64_t" = "yes" ]]
then
	ac_cv_c_uint64_t=uint64_t
fi
AC_DEFINE_UNQUOTED(AC_CV_C_UINT64_T, [$ac_cv_c_uint64_t])

# Checks for library functions.
AC_FUNC_CHOWN
AC_FUNC_FORK
AC_FUNC_MALLOC
AC_FUNC_MEMCMP
AC_FUNC_MMAP
AC_FUNC_REALLOC
AC_TYPE_SIGNAL
AC_FUNC_VPRINTF
AC_CHECK_FUNCS([fchdir getcwd gettimeofday memmove memset mkdir munmap rmdir strchr strdup strerror strrchr strtoul strtoull alphasort dirfd lchown lutimes strsep])

# AC_CACHE_SAVE

AC_CONFIG_FILES([src/Makefile tests/Makefile src/.ycm_extra_conf.py])
AC_OUTPUT

# Cause a recompile
touch src/config.h

if [ [ "$ac_cv_header_linux_kdev_t_h" = "no" -a "x$ENABLE_DEV_FAKE" = "x" ] ]
then
	AC_MSG_WARN([
	* MAJOR(), MINOR() and MAKEDEV() definitions not found.
	* Fake a definition, but that could make problems for ignore patterns
	* and commits/updates of device nodes, so these will be disabled.
	* Please use a github issue for help or, if you know your
	* systems' way, to report the correct header name.
	*
	* If you *really* need to use device compares, and have *no* other way,
	* you could try using the --enable-dev-fake option on ./configure.])
fi

# vi: ts=3 sw=3

fsvs-fsvs-1.2.12/doc/FAQ

Q: What does fsvs mean?
A: Fast System VerSioning

Q: How do you pronounce it?
A: [fisvis]

Q: Why are the listings not sorted?
A: Because of speed considerations the program does the files in
   hard disk-order, ie. in the order they're on the hard disk.
   Doing the run and output later would leave the user without
   feedback for some time.

Q: What meta-data is versioned?
A: Currently modification time, user, group, and permissions are saved.

Q: What kind of files are versioned?
A: Files, directories, device nodes (block and char), symbolic links.
   Sockets and pipes are normally regenerated upon opening and are
   therefore not stored.

Q: I don't like XYZ.
A: Unified patches are welcome.

Q: Why is it called fsvs and not X?
A: I've had several great ideas myself, but after discarding SYSV
   (SYStem Versioning) and SUBS (SUbversion Backup System) I just
   searched for a unique string to describe this project.

Q: Can I use some subdirectory of my repository instead of the root?
A: Of course. You can use the normal subversion structures /trunk,
   /tags, /branches - you just have to create them and point your
   working copy there. So you do
	svn mkdir $URL/branches $URL/tags $URL/trunk
   and use
	fsvs init $URL/trunk
   to use the trunk, or
	fsvs export $URL/tags/tag2
   to export your "tag2".
   Note: There's no way currently to "switch" between directories,
   although there might/should be.

fsvs-fsvs-1.2.12/doc/IGNORING

fsvs-fsvs-1.2.12/doc/PERFORMANCE

- Program size is 280kB with debug information, 96kB without. Needed libraries not counted. Some debug code could be eliminated by "configure --enable-release".

- Initial checkin can take a while - there's a lot of data to transfer.

- Memory usage: my test machine with 150000 files never grew over 34MB in memory usage. (That is, with apr_pool_destroy(); with apr_pool_clean() I had to kill the process at 170MB)

- The fsfs backend makes two files out of one data file - one for meta-data (properties) and one for the real file-data. So 300000 files are created for a commit of 130000 files.
  ** On ext3 enable dir_index ** (see "tune2fs", "fsck.ext3 -D") or use bdb.

- "fsvs status" is (on cold caches) faster than "find"! See here:

Script started on Mon 09 Jul 2007 16:48:34 CEST

# How many entries are here?
dolly:/example# find . | wc -l
22147

# Initialize fsvs, so that it knows its basepath
dolly:/example# fsvs urls file:////

# Warm up caches
dolly:/example# find . > /dev/null

# find with hot cache:
dolly:/example# time find .
	> /dev/null

real	0m0.096s
user	0m0.052s
sys	0m0.044s

# Warm up cache (should already be done by find)
dolly:/example# fsvs st > /dev/null

# fsvs with hot cache:
dolly:/example# time fsvs st > /dev/null

real	0m0.175s
user	0m0.088s
sys	0m0.088s

# Clear cache
dolly:/example# echo 3 > /proc/sys/vm/drop_caches

# find with cold cache - harddisk must seek a fair bit.
dolly:/example# time find . > /dev/null

real	0m8.279s
user	0m0.084s
sys	0m0.212s

# Clear cache
dolly:/example# echo 3 > /proc/sys/vm/drop_caches

# fsvs with cold cache - harddisk must seek again
dolly:/example# time fsvs st > /dev/null

real	0m7.333s
user	0m0.148s
sys	0m0.372s

# Now build a list of entries, like the one that exists after commit
dolly:/example# fsvs _build > /dev/null

# Clear cache
dolly:/example# echo 3 > /proc/sys/vm/drop_caches

# fsvs with cold cache, but using a sorted list of existing entries -
# harddisk doesn't need to seek as much
dolly:/example# time fsvs st > /dev/null

real	0m6.000s
user	0m0.240s
sys	0m0.372s

# Result:
dolly:/example# bc -l
bc 1.06
Copyright 1991-1994, 1997, 1998, 2000 Free Software Foundation, Inc.
This is free software with ABSOLUTELY NO WARRANTY.
For details type `warranty'.
fsvs-fsvs-1.2.12/doc/USAGE000066400000000000000000001212521453631713700151060ustar00rootroot00000000000000SYNOPSIS fsvs command [options] [args] The following commands are understood by FSVS: Local configuration and information: urls Define working copy base directories by their URL(s) status Get a list of changed entries info Display detailed information about single entries log Fetch the log messages from the repository diff Get differences between files (local and remote) copyfrom-detect Ask FSVS about probably copied/moved/renamed entries; see cp Defining which entries to take: ignore and rign Define ignore patterns unversion Remove entries from versioning add Add entries that would be ignored cp, mv Tell FSVS that entries were copied Commands working with the repository: commit Send changed data to the repository update Get updates from the repository checkout Fetch some part of the repository, and register it as working copy cat Get a file from the directory revert and uncp Undo local changes and entry markings remote-status Ask what an update would bring Property handling: prop-set Set user-defined properties prop-get Ask value of user-defined properties prop-list Get a list of user-defined properties Additional commands used for recovery and debugging: export Fetch some part of the repository sync-repos Drop local information about the entries, and fetch the current list from the repository. Note Multi-url-operations are relatively new; there might be rough edges. The return code is 0 for success, or 2 for an error. 1 is returned if the option Checking for changes in a script is used, and changes are found; see also Filtering entries. Universal options -V -- show version -V makes FSVS print the version and a copyright notice, and exit. -d and -D -- debugging If FSVS was compiled using –enable-debug you can enable printing of debug messages (to STDOUT) with -d. 
	Per default all messages are printed; if you're only interested in a subset, you can use -D start-of-function-name.

		fsvs -d -D waa_ status

	would call the status action, printing all debug messages of all WAA functions - waa__init, waa__open, etc.

	For more details on the other debugging options debug_output and debug_buffer please see the options list.

-N, -R -- recursion
	The -N and -R switches in effect just decrement/increment a counter; the behaviour is chosen depending on that. So a command line of -N -N -N -R -R is equivalent to -3 +2 = -1, which results in -N.

-q, -v -- verbose/quiet
	-v / -q set/clear verbosity flags, and so give more/less output. Please see the verbose option for more details.

-C -- checksum
	-C chooses to use more change detection checks; please see the change_check option for more details.

-f -- filter entries
	This parameter allows a bit of filtering of entries, or, for some operations, modification of the work done on given entries.

	It requires a specification at the end, which can be any combination of any, text, new, deleted (or removed), meta, mtime, group, mode, changed or owner; default or def use the default value.

	By giving eg. the value text, with a status action only entries that are new or changed are shown; with mtime,group only entries whose group or modification time has changed are printed.

	Note: Please see "Change detection" for some more information. If an entry gets replaced with an entry of a different type (eg. a directory gets replaced by a file), that counts as deleted and new.

	If you use -v, it's used as an any internally.

	If you use the string none, it resets the bitmask to no entries shown; then you can build a new mask. So owner,none,any,none,delete would show deleted entries. If the value after all commandline parsing is none, it is reset to the default.

-W warning=action -- set warnings
	Here you can define the behaviour for certain situations that should not normally happen, but which you might encounter.
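As a rough sketch of the prefix semantics that follow - the warning names are the documented ones, but the shell function is purely illustrative, not FSVS code:

```shell
# Illustration only: a specification like "meta" in "-W meta=ignore"
# selects every situation whose name starts with it.
matches_spec() {
	# $1 = specification, $2 = situation name
	case "$2" in
		"$1"*) return 0 ;;
		*)     return 1 ;;
	esac
}

for w in meta-mtime meta-user meta-group meta-umask chmod-eperm
do
	# chmod-eperm is the only name here that "meta" does not prefix.
	matches_spec meta "$w" && echo "$w: ignore"
done
```

So "-W meta=ignore" fans out to meta-mtime, meta-user, meta-group and meta-umask at once.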
	The general format here is specification = action, where specification is a string matching the start of at least one of the defined situations, and action is one of these:

	* once to print only a single warning,
	* always to print a warning message every time,
	* stop to abort the program,
	* ignore to simply ignore this situation, or
	* count to just count the number of occurrences.

	If specification matches more than one situation, all of them are set; eg. for meta=ignore all of meta-mtime, meta-user etc. are ignored.

	If at least a single warning that is not ignored is encountered during the program run, a list of warnings along with the number of messages it would have printed with the setting always is displayed, to inform the user of possible problems.

	The following situations can be handled with this:

	meta-mtime, meta-user, meta-group, meta-umask
		These warnings are issued if a meta-data property that was fetched from the repository couldn't be parsed. This can only happen if some other program or a user changes properties on entries. In this case you can use -Wmeta=always or -Wmeta=count, until the repository is clean again.

	no-urllist
		This warning is issued if an info action is executed, but no URLs have been defined yet.

	charset-invalid
		If the function nl_langinfo(3) couldn't return the name of the current character encoding, a default of UTF-8 is used. You might need that for a minimal system installation, eg. on recovery.

	chmod-eperm, chown-eperm
		If you update a working copy as normal user, and get to update a file which has another owner but which you may modify, you'll get errors because neither the user, group, nor mode can be set. This way you can make the errors non-fatal.

	chmod-other, chown-other
		If you get another error than EPERM in the situation above, you might find these useful.

	mixed-rev-wc
		If you specify some revision number on a revert, it will complain that mixed-revision working copies are not allowed. While you cannot enable mixed-revision working copies (I'm working on that) you can avoid being told every time.
While you cannot enable mixed-revision working copies (I'm working on that) you can avoid being told every time. propname-reserved It is normally not allowed to set a property with the prop-set action with a name matching some reserved prefixes. ignpat-wcbase This warning is issued if an absolute ignore pattern" does not match the working copy base directory. \n See \ref ignpat_shell_abs "absolute shell patterns" for more details. diff-status GNU diff has defined that it returns an exit code 2 in case of an error; sadly it returns that also for binary files, so that a simply fsvs diff some-binary-file text-file would abort without printing the diff for the second file. Because of this FSVS currently ignores the exit status of diff per default, but this can be changed by setting this option to eg. stop. Also an environment variable FSVS_WARNINGS is used and parsed; it is simply a whitespace-separated list of option specifications. -u URLname[@revision[:revision]] -- select URLs Some commands can be reduced to a subset of defined URLs; the update command is a example. If you have more than a single URL in use for your working copy, update normally updates all entries from all URLs. By using this parameter you can tell FSVS to update only the specified URLs. The parameter can be used repeatedly; the value can have multiple URLs, separated by whitespace or one of ",;". fsvs up -u base_install,boot@32 -u gcc This would get HEAD of base_install and gcc, and set the target revision of the boot URL for this command at 32. -o [name[=value]] -- other options This is used for setting some seldom used option, for which default can be set in a configuration file (to be implemented, currently only command-line). For a list of these please see Further options for FSVS.. Signals If you have a running FSVS, and you want to change its verbosity, you can send the process either SIGUSR1 (to make it more verbose) or SIGUSR2 (more quiet). add fsvs add [-u URLNAME] PATH [PATH...] 
With this command you can explicitly define entries to be versioned, even if they have a matching ignore pattern. They will be sent to the repository on the next commit, just like other new entries, and will therefore be reported as New.

The -u option can be used if you have more than one URL defined for this working copy and want to have the entries pinned to this URL.

Example
Say, you're versioning your home directory, and gave an ignore pattern of ./.* to ignore all .* entries in your home-directory. Now you want .bashrc, .ssh/config, and your complete .kde3-tree saved, just like other data.

So you tell fsvs to not ignore these entries:

  fsvs add .bashrc .ssh/config .kde3

Now the entries below .kde3 would match your earlier ./.* pattern (as a match at the beginning is sufficient), so you have to insert a negative ignore pattern (a take pattern):

  fsvs ignore prepend t./.kde3

Now a fsvs st would show your entries as New, and the next commit will send them to the repository.

unversion
fsvs unversion PATH [PATH...]

This command flags the given paths locally as removed. On the next commit they will be deleted in the repository, and the local information about them will be removed, but not the entries themselves. So they will show up as New again, and you get another chance at ignoring them.

Example
Say, you're versioning your home directory, and found that you no longer want .bash_history and .sh_history versioned. So you do

  fsvs unversion .bash_history .sh_history

and these files will be reported as d (will be deleted, but only in the repository). Then you do a

  fsvs commit

Now fsvs would report these files as New, as it no longer knows anything about them; but that can be cured by

  fsvs ignore "./.*sh_history"

Now these two files won't be shown as New, either.

The example also shows why the given paths are not just entered as separate ignore patterns - they are just single cases of a (probably) much broader pattern.
Note
If you didn't use some kind of escaping for the pattern, the shell would expand it to the actual filenames, which is (normally) not what you want.

_build_new_list
This is used mainly for debugging. It traverses the filesystem and builds a new entries file. In production it should not be used; as neither URLs nor the revision of the entries is known, information is lost by calling this function! Look at sync-repos.

delay
This command delays execution until time has passed at least to the next second after writing the data files used by FSVS (dir and urls).

This command is for use in scripts; where previously the delay option was used, this can be substituted by the given command followed by the delay command.

The advantage over the delay option is that read-only commands can be used in the meantime.

An example:

  fsvs commit /etc/X11 -m "Backup of X11"
  ... read-only commands, like "status"
  fsvs delay /etc/X11
  ... read-write commands, like "commit"

The optional path can point to any path in the WC.

In the testing framework it is used to save a bit of time; in normal operation, where FSVS commands are not so tightly packed, it is normally preferable to use the delay option.

cat
fsvs cat [-r rev] path

Fetches a file from the repository, and outputs it to STDOUT. If no revision is specified, it defaults to BASE, ie. the current local revision number of the entry.

checkout
fsvs checkout [path] URL [URLs...]

Sets one or more URLs for the current working directory (or the directory path), and does a checkout of these URLs.

Example:

  fsvs checkout . http://svn/repos/installation/machine-1/trunk

The distinction whether a directory is given or not is done based on the result of URL-parsing – if it looks like an URL, it is used as an URL. Please mind that at most a single path is allowed; as soon as two non-URLs are found an error message is printed.

If no directory is given, "."
is used; this differs from the usual subversion usage, but might be better suited for usage as a recovery tool (where versioning / is common). Opinions welcome.

The given path must exist, and should be empty – FSVS will abort on conflicts, ie. if files that should be created already exist. If there's a need to create that directory, please say so; patches for some parameter like -p are welcome.

For a format definition of the URLs please see the chapter Format of URLs and the urls and update commands.

Furthermore you might be interested in Using an alternate root directory and Recovery for a non-booting system.

commit
fsvs commit [-m "message"|-F filename] [-v] [-C [-C]] [PATH [PATH ...]]

Commits (parts of) the current state of the working copy into the repository.

Example
The working copy is /etc, and it is set up and committed already. Then /etc/hosts and /etc/inittab got modified. Since these are unrelated changes, you'd like them to be in separate commits. So you simply run these commands:

  fsvs commit -m "Added some host" /etc/hosts
  fsvs commit -m "Tweaked default runlevel" /etc/inittab

If the current directory is /etc you could even drop the /etc/ in front, and use just the filenames.

Please see status for explanations on -v and -C. For advanced backup usage see also the commit-pipe property.

cp
fsvs cp [-r rev] SRC DEST
fsvs cp dump
fsvs cp load

The copy command marks DEST as a copy of SRC at revision rev, so that on the next commit of DEST the corresponding source path is sent as copy source. The default value for rev is BASE, ie. the revision the SRC (locally) is at.

Please note that this command always works on a directory structure - if you say to copy a directory, the whole structure is marked as copy. That means that if some entries below the copy are missing, they are reported as removed from the copy on the next commit.
(Of course it is possible to mark files as copied, too; non-recursive copies are not possible, but can be emulated by having parts of the destination tree removed.)

Note
TODO: There will be differences in the exact usage - copy will try to run the cp command, whereas copied will just remember the relation.

If this command is used without parameters, the currently defined relations are printed; please keep in mind that the key is the destination name, ie. the 2nd line of each pair!

The input format for load is newline-separated - first a SRC line, followed by a DEST line, then a line with just a dot (".") as delimiter. If you've got filenames with newlines or other special characters, you have to give the paths as arguments.

Internally the paths are stored relative to the working copy base directory, and they're printed that way, too.

Later definitions are appended to the internal database; to undo mistakes, use the uncopy action.

Note
Important: User-defined properties like fsvs:commit-pipe are not copied to the destinations, because of space/time issues (traversing through entire subtrees, copying a lot of property-files) and because it's not sure that this is really wanted. TODO: option for copying properties?

Todo: -0 like for xargs?

Todo: Are different revision numbers for load necessary? Should dump print the source revision number?

Todo: Copying from URLs means update from there

Note
As subversion currently treats a rename as copy+delete, the mv command is an alias to cp.

If you have a need to give the filenames dump or load as first parameter for copyfrom relations, give some path, too, as in "./dump".

Note
The source is internally stored as URL with revision number, so that operations like these

  $ fsvs cp a b
  $ rm a/1
  $ fsvs ci a
  $ fsvs ci b

work - FSVS sends the old (too recent!) revision number as source, and so the local filelist stays consistent with the repository.
But it is not implemented (yet) to give an URL as copyfrom source directly - we'd have to fetch a list of entries (and possibly the data!) from the repository. Todo: Filter for dump (patterns?). copyfrom-detect fsvs copyfrom-detect [paths...] This command tells FSVS to look through the new entries, and see whether it can find some that seem to be copied from others already known. It will output a list with source and destination path and why it could match. This is just for information purposes and doesn't change any FSVS state, (TODO: unless some option/parameter is set). The list format is on purpose incompatible with the load syntax, as the best match normally has to be taken manually. Todo: some parameter that just prints the "best" match, and outputs the correct format. If verbose is used, an additional value giving the percentage of matching blocks, and the count of possibly copied entries is printed. Example: $ fsvs copyfrom-list -v newfile1 md5:oldfileA newfile2 md5:oldfileB md5:oldfileC md5:oldfileD newfile3 inode:oldfileI manber=82.6:oldfileF manber=74.2:oldfileG manber=53.3:oldfileH ... 3 copyfrom relations found. The abbreviations are: md5 The MD5 of the new file is identical to that of one or more already committed files; there is no percentage. inode The device/inode number is identical to the given known entry; this could mean that the old entry has been renamed or hardlinked. Note: Not all filesystems have persistent inode numbers (eg. NFS) - so depending on your filesystems this might not be a good indicator! name The entry has the same name as another entry. manber Analysing files of similar size shows some percentage of (variable-sized) common blocks (ignoring the order of the blocks). dirlist The new directory has similar files to the old directory. The percentage is (number_of_common_entries)/(files_in_dir1 + files_in_dir2 - number_of_common_entries). Note manber matching is not implemented yet. 
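The dirlist percentage formula given above can be sketched in a few lines. This is an illustrative Python sketch, not FSVS's actual implementation; the function name dirlist_match_percent is made up for this example.

```python
# Sketch of the "dirlist" similarity measure described above:
# (number_of_common_entries) /
# (files_in_dir1 + files_in_dir2 - number_of_common_entries),
# expressed as a percentage. Purely illustrative.
def dirlist_match_percent(dir1_entries, dir2_entries):
    common = len(set(dir1_entries) & set(dir2_entries))
    total = len(dir1_entries) + len(dir2_entries) - common
    return 100.0 * common / total if total else 0.0

# Two directories sharing two of three file names each:
print(round(dirlist_match_percent(["a", "b", "c"], ["a", "b", "x"]), 1))  # 50.0
```

So two directories with three entries each, two of them shared, match at 2/(3+3-2) = 50%.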
If too many possible matches for an entry are found, not all are printed; only an indicator ... is shown at the end. uncp fsvs uncopy DEST [DEST ...] The uncopy command removes a copyfrom mark from the destination entry. This will make the entry unknown again, and reported as New on the next invocations. Only the base of a copy can be un-copied; if a directory structure was copied, and the given entry is just implicitly copied, this command will return an error. This is not folded in revert, because it's not clear whether revert on copied, changed entries should restore the original copyfrom data or remove the copy attribute; by using another command this is no longer ambiguous. Example: $ fsvs copy SourceFile DestFile # Whoops, was wrong! $ fsvs uncopy DestFile diff fsvs diff [-v] [-r rev[:rev2]] [-R] PATH [PATH...] This command gives you diffs between local and repository files. With -v the meta-data is additionally printed, and changes shown. If you don't give the revision arguments, you get a diff of the base revision in the repository (the last commit) against your current local file. With one revision, you diff this repository version against your local file. With both revisions given, the difference between these repository versions is calculated. You'll need the diff program, as the files are simply passed as parameters to it. The default is to do non-recursive diffs; so fsvs diff . will output the changes in all files in the current directory and below. The output for special files is the diff of the internal subversion storage, which includes the type of the special file, but no newline at the end of the line (which diff complains about). For entries marked as copy the diff against the (clean) source entry is printed. Please see also Options relating to the "diff" action and Using colordiff. 
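The diff exit-status behaviour referred to above (and by the diff-status warning) can be observed directly: GNU diff returns 0 for identical files and 1 for differing files, reserving 2 for trouble. The following sketch assumes a POSIX diff binary on the PATH; the helper name diff_status is made up for this example.

```python
# Show the diff exit codes FSVS has to cope with:
# 0 = identical, 1 = different, 2 = trouble.
# Assumes a POSIX "diff" binary is installed.
import os
import subprocess
import tempfile

def diff_status(a_text, b_text):
    with tempfile.TemporaryDirectory() as d:
        a, b = os.path.join(d, "a"), os.path.join(d, "b")
        with open(a, "w") as f:
            f.write(a_text)
        with open(b, "w") as f:
            f.write(b_text)
        # Discard the diff output; only the exit status matters here.
        return subprocess.run(["diff", a, b],
                              stdout=subprocess.DEVNULL).returncode

print(diff_status("same\n", "same\n"))    # 0
print(diff_status("one\n", "another\n"))  # 1
```

Because 2 can mean either "error" or (with some versions, as described above) "binary files differ", FSVS ignores the exit status by default.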
Todo: Two revisions diff is buggy in that it (currently) always fetches the full trees from the repository; this is not only a performance degradation, but you'll see more changed entries than you want (like changes A to B to A). This will be fixed. export fsvs export REPOS_URL [-r rev] If you want to export a directory from your repository without storing any FSVS-related data you can use this command. This restores all meta-data - owner, group, access mask and modification time; its primary use is for data recovery. The data gets written (in the correct directory structure) below the current working directory; if entries already exist, the export will stop, so this should be an empty directory. help help [command] This command shows general or specific help (for the given command). A similar function is available by using -h or -? after a command. groups fsvs groups dump|load fsvs groups [prepend|append|at=n] group-definition [group-def ...] fsvs ignore [prepend|append|at=n] pattern [pattern ...] fsvs groups test [-v|-q] [pattern ...] This command adds patterns to the end of the pattern list, or, with prepend, puts them at the beginning of the list. With at=x the patterns are inserted at the position x , counting from 0. The difference between groups and ignore is that groups requires a group name, whereas the latter just assumes the default group ignore. For the specification please see the related documentation . fsvs dump prints the patterns to STDOUT . If there are special characters like CR or LF embedded in the pattern without encoding (like \r or \n), the output will be garbled. The patterns may include * and ? as wildcards in one directory level, or ** for arbitrary strings. These patterns are only matched against new (not yet known) files; entries that are already versioned are not invalidated. 
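The wildcard semantics just described - ? and * matching within one directory level, ** crossing levels - can be approximated by translating a pattern to a regular expression. This is a simplified illustration only (it omits character classes and backslash escapes), not FSVS's actual matcher.

```python
# Approximate the shell-pattern wildcards described above as a regex:
# "?" and "*" stay within one directory level, "**" crosses levels.
import re

def pattern_to_regex(pattern):
    out, i = [], 0
    while i < len(pattern):
        if pattern.startswith("**", i):
            out.append(".*")        # arbitrary string, may contain "/"
            i += 2
        elif pattern[i] == "*":
            out.append("[^/]*")     # anything except a directory separator
            i += 1
        elif pattern[i] == "?":
            out.append("[^/]")      # a single non-separator character
            i += 1
        else:
            out.append(re.escape(pattern[i]))
            i += 1
    return re.compile("^" + "".join(out) + "$")

rx = pattern_to_regex("./var/log/*-*")
print(bool(rx.match("./var/log/mail-err")))      # True
print(bool(rx.match("./var/log/sub/mail-err")))  # False: "*" stops at "/"
print(bool(pattern_to_regex("./**~").match("./a/b/file~")))  # True
```

This also shows why ./tmp/** (below) matches everything underneath tmp, while a plain * would stop at the first directory separator.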
If the given path matches a new directory, entries below aren't found, either; but if this directory or entries below are already versioned, the pattern doesn't work, as the match is restricted to the directory. So:

  fsvs ignore ./tmp

ignores the directory tmp; but if it has already been committed, existing entries would have to be unmarked with fsvs unversion. Normally it's better to use

  fsvs ignore ./tmp/**

as that takes the directory itself (which might be needed after restore as a mount point anyway), but ignores all entries below. Currently this has the drawback that mtime changes will be reported and committed; this is not the case if the whole directory is ignored.

Examples:

  fsvs group group:unreadable,mode:4:0
  fsvs group 'group:secrets,/etc/*shadow'
  fsvs ignore /proc
  fsvs ignore /dev/pts
  fsvs ignore './var/log/*-*'
  fsvs ignore './**~'
  fsvs ignore './**/*.bak'
  fsvs ignore prepend 'take,./**.txt'
  fsvs ignore append 'take,./**.svg'
  fsvs ignore at=1 './**.tmp'
  fsvs group dump
  fsvs group dump -v
  echo "./**.doc" | fsvs ignore load
  # Replaces the whole list

Note
Please take care that your wildcard patterns are not expanded by the shell!

Testing patterns
To see more easily what different patterns do you can use the test subcommand. The following combinations are available:

* fsvs groups test pattern
  Tests only the given pattern against all new entries in your working copy, and prints the matching paths. The pattern is not stored in the pattern list.

* fsvs groups test
  Uses the already defined patterns on the new entries, and prints the group name, a tab, and the path. With -v you can see the matching pattern in the middle column, too.

By using -q you can avoid getting the whole list; this makes sense if you use the group_stats option at the same time.

rign
fsvs rel-ignore [prepend|append|at=n] path-spec [path-spec ...]
fsvs ri [prepend|append|at=n] path-spec [path-spec ...]
If you keep the same repository data at more than one working copy on the same machine, it will be stored in different paths - and that makes absolute ignore patterns infeasible. But relative ignore patterns are anchored at the beginning of the WC root - which is a bit tiring to type if you're deep in your WC hierarchy and want to ignore some files.

To make that easier you can use the rel-ignore (abbreviated as ri) command; this converts all given path-specifications (which may include wildcards as per the shell pattern specification above) to WC-relative values before storing them.

Example for /etc as working copy root:

  fsvs rel-ignore '/etc/X11/xorg.conf.*'

  cd /etc/X11
  fsvs rel-ignore 'xorg.conf.*'

Both commands would store the pattern "./X11/xorg.conf.*".

Note
This works only for shell patterns.

For more details about ignoring files please see the ignore command and Specification of groups and patterns.

info
fsvs info [-R [-R]] [PATH...]

Use this command to show information regarding one or more entries in your working copy. You can use -v to obtain slightly more information.

This may sometimes be helpful for locating bugs, or to obtain the URL and revision a working copy is currently at.

Example:

  $ fsvs info
  URL: file: ....
  200 .
  Type: directory
  Status: 0x0
  Flags: 0x100000
  Dev: 0
  Inode: 24521
  Mode: 040755
  UID/GID: 1000/1000
  MTime: Thu Aug 17 16:34:24 2006
  CTime: Thu Aug 17 16:34:24 2006
  Revision: 4
  Size: 200

The default is to print information about the given entry only. With a single -R you'll get this data about all entries of a given directory; with another -R you'll get the whole (sub-)tree.

log
fsvs log [-v] [-r rev1[:rev2]] [-u name] [path]

This command views the revision log information associated with the given path at its topmost URL, or, if none is given, the highest priority URL.
The optional rev1 and rev2 can be used to restrict the revisions that are shown; if no values are given, the logs are given starting from HEAD downwards, and then a limit on the number of revisions is applied (but see the limit option).

If you use the -v option, you get the files changed in each revision printed, too.

There is an option controlling the output format; see the log_output option.

Optionally the name of an URL can be given after -u; then the log of this URL, instead of the topmost one, is shown.

TODOs:
* --stop-on-copy
* Show revision for all URLs associated with a working copy? In which order?

prop-get
fsvs prop-get PROPERTY-NAME PATH...

Prints the data of the given property to STDOUT.

Note
Be careful! This command will dump the property as it is, ie. with any special characters! If there are escape sequences or binary data in the property, your terminal might get messed up! If you want a safe way to look at the properties, use prop-list with the -v parameter.

prop-set
fsvs prop-set [-u URLNAME] PROPERTY-NAME VALUE PATH...

This command sets an arbitrary property value for the given path(s).

Note
Some property prefixes are reserved; currently everything starting with svn: throws a (fatal) warning, and fsvs: is already used, too. See Special property names.

If you're using a multi-URL setup, and the entry you'd like to work on should be pinned to a specific URL, you can use the -u parameter; this is like the add command, see there for more details.

prop-del
fsvs prop-del PROPERTY-NAME PATH...

This command removes a property for the given path(s). See also prop-set.

prop-list
fsvs prop-list [-v] PATH...

Lists the names of all properties for the given entry. With -v, the value is printed as well; special characters will be translated, as arbitrary binary sequences could interfere with your terminal settings.

If you need raw output, post a patch for --raw, or write a loop with prop-get.
remote-status
fsvs remote-status PATH [-r rev]

This command looks into the repository and tells you which files would get changed on an update - it's a dry-run for update.

Per default it compares to HEAD, but you can choose another revision with the -r parameter.

Please see the update documentation for details regarding multi-URL usage.

resolve
fsvs resolve PATH [PATH...]

When FSVS tries to update local files which have been changed, a conflict might occur. (For various ways of handling these please see the conflict option.) This command lets you mark such conflicts as resolved.

revert
fsvs revert [-rRev] [-R] PATH [PATH...]

This command undoes local modifications:

* An entry that is marked to be unversioned gets this flag removed.
* For an already versioned entry (existing in the repository) the local entry is replaced with its repository version, and its status and flags are cleared.
* An entry that is a modified copy destination gets reverted to the copy source data.
* Manually added entries are changed back to "N"ew.

Please note that implicitly copied entries, ie. entries that are marked as copied because some parent directory is the base of a copy, can not be un-copied; they can only be reverted to their original (copied-from) data, or removed. If you want to undo a copy operation, please see the uncopy command.

See also HOWTO: Understand the entries' statii.

If a directory is given on the command line all versioned entries in this directory are reverted to the old state; this behaviour can be modified with -R/-N, or see below.

The reverted entries are printed, along with the status they had before the revert (because the new status is per definition unchanged).

If a revision is given, the entries' data is taken from this revision; furthermore, the new status of that entry is shown.

Note
Please note that mixed revision working copies are not (yet) possible; the BASE revision is not changed, and a simple revert without a revision argument gives you that.
By giving a revision parameter you can just choose to get the text from a different revision.

Difference to update
If something doesn't work as it should in the installation you can revert entries until you are satisfied, and directly commit the new state.

In contrast, if you update to an older version, you
* cannot choose single entries (no mixed revision working copies yet),
* and you cannot commit the old version with changes, as the "skipped" (later) changes will create conflicts in the repository.

Currently only known entries are handled. If you need a switch (like --delete in rsync(1)) to remove unknown (new, not yet versioned) entries, to get the directory in the exact state it is in the repository, please tell the dev@ mailing list.

Todo:
Another limitation is that just-deleted or just-committed entries cannot be fetched via revert, as FSVS no longer knows about them. TODO: If a revision is given, take a look there, and ignore the local data?

As a workaround you could use the cat and/or checkout commands to fetch repository-only data.

Removed directory structures
If a path is specified whose parent is missing, fsvs complains. We plan to provide a switch (probably -p), which would create (a sparse) tree up to this entry.

Recursive behaviour
When the user specifies a non-directory entry (file, device, symlink), this entry is reverted to the old state.

If the user specifies a directory entry, these definitions should apply:

  command line switch    result
  -N                     this directory only (meta-data),
  none                   this directory, and direct children of the directory,
  -R                     this directory, and the complete tree below.

Working with copied entries
If an entry is marked as copied from another entry (and not committed!), a revert will fetch the original copyfrom source. To undo the copy setting use the uncopy command.

status
fsvs status [-C [-C]] [-v] [-f filter] [PATHs...]

This command shows the entries that have been changed locally since the last commit.
The most important output formats are:

* A status column of four (or, with -v, six) characters. There are either flags or a "." printed, so that it's easily parsed by scripts – the number of columns is only changed by -q, -v – verbose/quiet.
* The size of the entry, in bytes, or "dir" for a directory, or "dev" for a device.
* The path and name of the entry, formatted by the path option.

Normally only changed entries are printed; with -v all are printed, but see the filter option for more details.

The status column can show the following flags:

* 'D' and 'N' are used for deleted and new entries.
* 'd' and 'n' are used for entries which are to be unversioned or added on the next commit; the characters were chosen as little delete (only in the repository, not removed locally) and little new (although ignored). See add and unversion.
  If such an entry does not exist, it is marked with an "!" in the last column – because it has been manually marked, and so the removal is unexpected.
* A changed type (character device to symlink, file to directory etc.) is given as 'R' (replaced), ie. as removed and newly added.
* If the entry has been modified, the change is shown as 'C'.
  If the modification or status change timestamps (mtime, ctime) are changed, but the size is still the same, the entry is marked as possibly changed (a question mark '?' in the last column) - but see change detection for details.
* An 'x' signifies a conflict.
* The meta-data flag 'm' shows meta-data changes like properties, modification timestamp and/or the rights (owner, group, mode); depending on the -v/-q command line parameters, it may be split into 'P' (properties), 't' (time) and 'p' (permissions).
  If 'P' is shown for the non-verbose case, it means only property changes, ie. the entries' filesystem meta-data is unchanged.
* A '+' is printed for files with a copy-from history; to see the URL of the copyfrom source, see the verbose option.
Here's a table with the characters and their positions:

  Without -v    With -v
  ....          ......
  NmC?          NtpPC?
  DPx!          D  x!
  R  +          R    +
  d             d
  n             n

Furthermore please take a look at the stat_color option, and for more information about displayed data the verbose option.

sync-repos
fsvs sync-repos [-r rev] [working copy base]

This command loads the file list afresh from the repository. A following commit will send all differences and make the repository data identical to the local.

This is normally not needed; the only use cases are
* debugging and
* recovering from data loss in the $FSVS_WAA area.

It might be of use if you want to backup two similar machines. Then you could commit one machine into a subdirectory of your repository, make a copy of that directory for another machine, and sync this other directory on the other machine. A commit then will transfer only changed files; so if the two machines share 2GB of binaries (/usr, /bin, /lib, ...) then these 2GB are still shared in the repository, although over time they will deviate (as both committing machines know nothing of the other path with identical files).

This kind of backup could be substituted by two or more levels of repository paths, which get overlaid in a defined priority. So the base directory, which all machines derive from, will be committed from one machine, and it's no longer necessary for all machines to send identical files into the repository.

The revision argument should only ever be used for debugging; if you fetch a filelist for a revision, and then commit against later revisions, problems are bound to occur.

Note
There's issue 2286 in subversion which describes sharing identical files in the repository in unrelated paths. Using this would relax the storage needs; but the network transfers would still be much larger than with the overlaid paths.

update
fsvs update [-r rev] [working copy base]
fsvs update [-u url@rev ...]
[working copy base]

This command does an update on the current working copy; per default for all defined URLs, but you can restrict that via -u.

It first reads all filelist changes from the repositories, overlays them (so that only the highest-priority entries are used), and then fetches all necessary changes.

Updating to zero
If you start an update with a target revision of zero, the entries belonging to that URL will be removed from your working copy, and the URL deleted from your URL list. This is a convenient way to replace an URL with another.

Note
As FSVS has no full mixed revision support yet, it doesn't know whether under the removed entry is a lower-priority one with the same path, which should get visible now. Directories get changed to the highest priority URL that has an entry below (which might be hidden!).

Because of this you're advised to either use that only for completely distinct working copies, or do a sync-repos (and possibly one or more revert calls) after the update.

urls
fsvs urls URL [URLs...]
fsvs urls dump
fsvs urls load

Initializes a working copy administrative area and connects the current working directory to REPOS_URL. All commits and updates will be done to this directory and against the given URL.

Example:

  fsvs urls http://svn/repos/installation/machine-1/trunk

For a format definition of the URLs please see the chapter Format of URLs.

Note
If there are already URLs defined, and you use that command later again, please note that as of 1.0.18 the older URLs are not overwritten as before, but that the new URLs are appended to the given list! If you want to start afresh, use something like

  true | fsvs urls load

Loading URLs
You can load a list of URLs from STDIN; use the load subcommand for that.

Example:

  ( echo 'N:local,prio:10,http://svn/repos/install/machine-1/trunk' ;
    echo 'P:50,name:common,http://svn/repos/install/common/trunk' ) | fsvs urls load

Empty lines are ignored.
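The load line format shown in the example can be sketched as a small parser. This is only an illustration: the handling of the abbreviations N:/P: for name:/prio: is inferred from the example above, and the real parser accepts more settings (target, readonly, ...) than this sketch covers.

```python
# Parse "fsvs urls load"-style lines as in the example above,
# e.g. 'N:local,prio:10,http://svn/...'. Illustrative sketch only;
# parse_url_line is a made-up helper name, not part of FSVS.
def parse_url_line(line):
    name, prio, url = None, None, None
    rest = line.strip()
    while rest and url is None:
        if rest.startswith(("name:", "N:")):
            _, rest = rest.split(":", 1)       # drop the key
            name, rest = rest.split(",", 1)    # value up to next comma
        elif rest.startswith(("prio:", "P:")):
            _, rest = rest.split(":", 1)
            val, rest = rest.split(",", 1)
            prio = int(val)
        else:
            url = rest                         # the remainder is the URL
    return {"name": name, "prio": prio, "url": url}

print(parse_url_line("N:local,prio:10,http://svn/repos/install/machine-1/trunk"))
```

The key point is that the settings are comma-separated key:value pairs in front of the URL, which always comes last.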
Dumping the defined URLs
To see which URLs are in use for the current WC, you can use dump.

As an optional parameter you can give a format statement:

  p   priority
  n   name
  r   current revision
  t   target revision
  R   readonly-flag
  u   URL
  I   internal number for this URL

Note
That's not a real printf()-format; only these and a few \ sequences are recognized.

Example:

  fsvs urls dump " %u %n:%p\\n"
  http://svn/repos/installation/machine-1/trunk local:10
  http://svn/repos/installation/common/trunk common:50

The default format is "name:%n,prio:%p,target:%t,ro:%r,%u\\n"; for a more readable version you can use -v.

Changing defined URLs
You can change the various parameters of the defined URLs like this:

  # Define an URL
  fsvs urls name:url1,target:77,readonly:1,http://anything/...
  # Change values
  fsvs urls name:url1,target:HEAD
  fsvs urls readonly:0,http://anything/...
  fsvs urls name:url1,prio:88,target:32

Note
FSVS as yet doesn't store the whole tree structures of all URLs. So if you change the priority of an URL, and re-mix the directory trees that way, you'll need a sync-repos and some revert commands. I'd suggest avoiding this until FSVS handles that case better.

UTF8 in FSVS
------------

Some points which trouble me a bit, and some random thoughts; everything connected with UTF-8:

- Properties we get from the repository might be easiest stored locally as UTF8, if we don't do anything with them (eg. svn:entry).
- In which properties can be non-ASCII-characters? Does someone define user/group names in UTF-8? Can eg. xattr have Unicode characters in them? Does that happen in practical usage?

- The currently used properties should be safe. I've never heard from non-ASCII groups or users, and the mtime should always be in the same format.

- I thought whether I should just do *everything* in UTF-8. But that is a performance trade off; on a simple "fsvs status" we'd have to convert all filenames from the waa-directory. It may not be much work, but if it's not necessary ...

- I'd like to have the subversion headers to define a utf8_char *, which would (with gcc) be handled distinct from a normal char * ... (see linux kernel, include/linux/types.h: #define __bitwise ...) But that won't happen, as there's already too much software which relies on the current definitions.

.TH "FSVS - Group definitions" 5 "11 Mar 2010" "Version trunk:2424" "fsvs" \" -*- nroff -*-
.ad l
.nh
.SH NAME
Using grouping patterns \-
.PP
Patterns are used to define groups for new entries; a group can be used to ignore the given entries, or to automatically set properties when the entry is taken on the entry list.
.PP
So the auto-props are assigned when the entry gets put on the internal list; that happens for the \fBadd\fP, \fBprop-set\fP or \fBprop-del\fP, and of course \fBcommit\fP commands.
.br
To override the auto-props of some new entry just use the property commands.
.SH "Overview"
.PP
When \fCFSVS\fP walks through your working copy it tries to find \fBnew\fP (ie. not yet versioned) entries.
Every \fBnew\fP entry gets tested against the defined grouping patterns (in the given order!); if a pattern matches, the corresponding group is assigned to the entry, and no further matching is done. .PP See also \fBentry statii\fP. .SS "Predefined group 1: 'ignore'" If an entry gets a group named \fC'ignore'\fP assigned, it will not be considered for versioning. .PP This is the only \fBreally\fP special group name. .SS "Predefined group 2: 'take'" This group mostly specifies that no further matching is to be done, so that later \fBignore\fP patterns are not tested. .PP Basically the \fC'take'\fP group is an ordinary group like all others; it is just predefined, and available with a \fBshort-hand notation\fP. .SH "Why should I ignore files?" .PP Ignore patterns are used to ignore certain directory entries, where versioning makes no sense. If you're versioning the complete installation of a machine, you wouldn't care to store the contents of \fC/proc\fP (see \fCman 5 proc\fP), or possibly because of security reasons you don't want \fC/etc/shadow\fP , \fC/etc/sshd/ssh_host_*key\fP , and/or other password- or key-containing files. .PP Ignore patterns allow you to define which directory entries (files, subdirectories, devices, symlinks etc.) should be taken, respectively ignored. .SH "Why should I assign groups?" .PP The grouping patterns can be compared with the \fCauto-props\fP feature of subversion; it allows automatically defining properties for new entries, or ignoring them, depending on various criteria. .PP For example you might want to use encryption for the files in your users' \fC\fP.ssh directory, to secure them against unauthorized access in the repository, and completely ignore the private key files: .PP Grouping patterns: .PP .nf group:ignore,/home/*/.ssh/id* group:encrypt,/home/*/.ssh/** .fi .PP And the \fC$FSVS_CONF/groups/encrypt\fP file would have a definition for the \fCfsvs:commit-pipe\fP (see the \fBspecial properties\fP). 
.SH "Syntax of group files" .PP A group definition file looks like this: .PD 0 .IP "\(bu" 2 Whitespace on the beginning and the end of the line is ignored. .IP "\(bu" 2 Empty lines, and lines with the first non-whitespace character being \fC'#'\fP (comments) are ignored. .IP "\(bu" 2 It can have \fBeither\fP the keywords \fCignore\fP or \fCtake\fP; if neither is specified, the group \fCignore\fP has \fCignore\fP as default (surprise, surprise!), and all others use \fCtake\fP. .IP "\(bu" 2 An arbitrary (small) number of lines with the syntax .br \fCauto-prop \fIproperty-name\fP \fIproperty-value\fP\fP can be given; \fIproperty-name\fP may not include whitespace, as there's no parsing of any quote characters yet. .PP .PP An example: .PP .nf # This is a comment # This is another auto-props fsvs:commit-pipe gpg -er admin@my.net # End of definition .fi .PP .SH "Specification of groups and patterns" .PP While an ignore pattern just needs the pattern itself (in one of the formats below), there are some modifiers that can be additionally specified: .PP .nf [group:{name},][dir-only,][insens|nocase,][take,][mode:A:C,]pattern .fi .PP These are listed in the section \fBModifiers\fP below. .PP These kinds of ignore patterns are available: .SH "Shell-like patterns" .PP These must start with \fC./\fP, just like a base-directory-relative path. \fC\fP? , \fC*\fP as well as character classes \fC\fP[a-z] have their usual meaning, and \fC**\fP is a wildcard for directory levels. .PP You can use a backslash \fC\\\fP outside of character classes to match some common special characters literally, eg. \fC\\*\fP within a pattern will match a literal asterisk character within a file or directory name. Within character classes all characters except \fC\fP] are treated literally. If a literal \fC\fP] should be included in a character class, it can be placed as the first character or also be escaped using a backslash. 
.PP Example for \fC/\fP as the base-directory .PP .nf ./[oa]pt ./sys ./proc/* ./home/**~ .fi .PP .PP This would ignore files and directories called \fCapt\fP or \fCopt\fP in the root directory (and files below, in the case of a directory), the directory \fC/sys\fP and everything below, the contents of \fC/proc\fP (but take the directory itself, so that upon restore it gets created as a mountpoint), and all entries matching \fC*~\fP in and below \fC/home\fP . .PP \fBNote:\fP .RS 4 The patterns are anchored at the beginning and the end. So a pattern \fC./sys\fP will match \fBonly\fP a file or directory named \fCsys\fP. If you want to exclude a directories' files, but not the directory itself, use something like \fC./dir/*\fP or \fC./dir/**\fP .RE .PP If you're deep within your working copy and you'd like to ignore some files with a WC-relative ignore pattern, you might like to use the \fBrel-ignore\fP command. .SS "Absolute shell patterns" There is another way to specify shell patterns - using absolute paths. .br The syntax is similar to normal shell patterns; but instead of the \fC./\fP prefix the full path, starting with \fC/\fP, is used. .PP .PP .nf /etc/**.dpkg-old /etc/**.dpkg-bak /**.bak /**~ .fi .PP .PP The advantage of using full paths is that a later \fCdump\fP and \fCload\fP in another working copy (eg. when moving from versioning \fC/etc\fP to \fC/\fP) does simply work; the patterns don't have to be modified. .PP Internally this simply tries to remove the working copy base directory at the start of the patterns (on loading); then they are processed as usual. .PP If a pattern does \fBnot\fP match the wc base, and neither has the wild-wildcard prefix \fC/**\fP, a \fBwarning\fP is issued. .SH "PCRE-patterns" .PP PCRE stands for Perl Compatible Regular Expressions; you can read about them with \fCman pcre2\fP (if the manpages are installed), and/or \fCperldoc perlre\fP (if perldoc is installed). .br If both fail for you, just google it. 
.PP These patterns have the form \fCPCRE:{pattern}\fP, with \fCPCRE\fP in uppercase. .PP An example: .PP .nf PCRE:./home/.*~ .fi .PP This one achieves exactly the same as \fC./home/**~\fP . .PP Another example: .PP .nf PCRE:./home/[a-s] .fi .PP .PP This would match \fC/home/anthony\fP , \fC/home/guest\fP , \fC/home/somebody\fP and so on, but would not match \fC/home/theodore\fP . .PP One more: .PP .nf PCRE:./.*(\.(tmp|bak|sik|old|dpkg-\w+)|~)$ .fi .PP .PP Note that the pathnames start with \fC\fP./ , just like above, and that the patterns are anchored at the beginning. To additionally anchor at the end you could use a \fC$\fP at the end. .SH "Ignoring all files on a device" .PP Another form to discern what is needed and what not is possible with \fCDEVICE:[<|<=|>|>=]major[:minor]\fP. .PP This takes advantage of the major and minor device numbers of inodes (see \fCman 1 stat\fP and \fCman 2 stat\fP). .PP The rule is as follows: .IP "\(bu" 2 Directories have their parent matched against the given string .IP "\(bu" 2 All other entries have their own device matched. .PP .PP This is because mount-points (ie. directories where other filesystems get attached) show the device of the mounted device, but should be versioned (as they are needed after restore); all entries (and all binding mounts) below should not. .PP The possible options \fC<=\fP or \fC>=\fP define a less-or-equal-than respectively a bigger-or-equal-than relationship, to ignore a set of device classes. .PP Examples: .PP .nf tDEVICE:3 ./* .fi .PP These patterns would define that all filesystems on IDE-devices (with major number 3) are \fItaken\fP , and all other files are ignored.
.PP Mind NFS and smb-mounts, check if you're using \fImd\fP , \fIlvm\fP and/or \fIdevice-mapper\fP ! .PP Note: The values are parsed with \fCstrtoul()\fP , so you can use decimal, hexadecimal (by prepending \fC'0x'\fP, like \fC'0x102'\fP) and octal (by prepending \fC'0'\fP, like \fC'0777'\fP) notation. .SH "Ignoring a single file, by inode" .PP Finally, another form to ignore entries is to specify them via the device they are on and their inode: .PP .nf INODE:major:minor:inode .fi .PP This can be used if a file can be hardlinked to many places, but only one copy should be stored. Then one path can be marked as to \fItake\fP , and other instances can get ignored. .PP \fBNote:\fP .RS 4 That's probably a bad example. There should be a better mechanism for handling hardlinks, but that needs some help from subversion. .RE .PP .SH "Modifiers" .PP All of these patterns can have one or more of these modifiers \fBbefore\fP them, with (currently) optional \fC','\fP as separators; not all combinations make sense. .PP For patterns with the \fCm\fP (mode match) or \fCd\fP (dironly) modifiers the filename pattern is optional; so you don't have to give an all-match wildcard pattern (\fC./**\fP) for these cases. .SS "'take': Take pattern" This modifier is just a short-hand for assigning the group \fBtake\fP. .SS "'ignore': Ignore pattern" This modifier is just a short-hand for assigning the group \fBignore\fP. .SS "'insens' or 'nocase': Case insensitive" With this modifier you can force the match to be case-insensitive; this can be useful if other machines use eg. \fCsamba\fP to access files, and you cannot be sure about them leaving \fC'.BAK'\fP or \fC'.bak'\fP behind. .SS "'dironly': Match only directories" This is useful if you have a directory tree in which only certain files should be taken; see below.
.SS "'mode': Match entries' mode" This expects a specification of two octal values in the form \fCm:\fIand_value\fP:\fIcompare_value\fP\fP, like \fCm:04:00\fP; the bits set in \fCand_value\fP get isolated from the entries' mode, and compared against \fCcompare_value\fP. .PP As an example: the file has mode \fC0750\fP; a specification of .PD 0 .IP "\(bu" 2 \fCm:0700:0700\fP matches, .IP "\(bu" 2 \fCm:0700:0500\fP doesn't; and .IP "\(bu" 2 \fCm:0007:0000\fP matches, but .IP "\(bu" 2 \fCm:0007:0007\fP doesn't. .PP .PP A real-world example: \fCm:0007:0000\fP would match all entries that have \fBno\fP right bits set for \fI'others'\fP, and could be used to exclude private files (like \fC/etc/shadow\fP). (Alternatively, the \fIothers-read\fP bit could be used: \fCm:0004:0000\fP. .PP FSVS will reject invalid specifications, ie. when bits in \fCcompare_value\fP are set that are cleared in \fCand_value:\fP these patterns can never match. .br An example would be \fCm:0700:0007\fP. .SS "Examples" .PP .nf take,dironly,./var/vmail/** take,./var/vmail/**/.*.sieve ./var/vmail/** .fi .PP This would take all \fC'.*.sieve'\fP files (or directories) below \fC/var/vmail\fP, in all depths, and all directories there; but no other files. .PP If your files are at a certain depth, and you don't want all other directories taken, too, you can specify that exactly: .PP .nf take,dironly,./var/vmail/* take,dironly,./var/vmail/*/* take,./var/vmail/*/*/.*.sieve ./var/vmail/** .fi .PP .PP .PP .nf mode:04:0 take,./etc/ ./** .fi .PP This would take all files from \fC/etc\fP, but ignoring the files that are not world-readable (\fCother-read\fP bit cleared); this way only 'public' files would get taken. .SH "Author" .PP Generated automatically by Doxygen for fsvs from the source code. 
.TH "FSVS - Backup HOWTO" 5 "11 Mar 2010" "Version trunk:2424" "fsvs" \" -*- nroff -*- .ad l .nh .SH NAME HOWTO: Backup \- .PP This document is a step-by-step explanation of how to do backups using FSVS. .SH "Preparation" .PP If you're going to back up your system, you have to decide what you want to have stored in your backup, and what should be left out. .PP Depending on your system usage and environment you first have to decide: .PD 0 .IP "\(bu" 2 Do you only want to back up your data in \fC/home\fP? .PD 0 .IP " \(bu" 4 Less storage requirements .IP " \(bu" 4 In case of hardware crash the OS must be set up again .PP .IP "\(bu" 2 Do you want to keep track of your configuration in \fC/etc\fP? .PD 0 .IP " \(bu" 4 Very small storage overhead .IP " \(bu" 4 Not much use for backup/restore, but shows what has been changed .PP .IP "\(bu" 2 Or do you want to back up your whole installation, from \fC/\fP on? .PD 0 .IP " \(bu" 4 Whole system versioned, restore is only a few commands .IP " \(bu" 4 Much more storage space needed - typically you'd need at least a few GB free space. .PP .PP The next few moments should be spent thinking about the storage space for the repository - will it be on the system harddisk, a secondary or an external harddisk, or even off-site? .PP \fBNote:\fP .RS 4 If you just created a fresh repository, you probably should create the 'default' directory structure for subversion - \fCtrunk\fP, \fCbranches\fP, \fCtags\fP; this layout might be useful for your backups. .br The URL you'd use in fsvs would go to \fCtrunk\fP. .RE .PP Possibly you'll have to take the available bandwidth into consideration; a single home directory may be backed up on a 56k modem, but a complete system installation would likely need at least some kind of DSL or LAN.
.PP \fBNote:\fP .RS 4 If this is a production box with sparse, small changes, you could take the initial backup on a local harddisk, transfer the directory with some media to the target machine, and switch the URLs. .RE .PP A fair bit of time should go into a small investigation of which file patterns and paths you do \fBnot\fP want to back up. .PD 0 .IP "\(bu" 2 Backup files like \fC*\fP.bak, \fC*~\fP, \fC*\fP.tmp, and similar .IP "\(bu" 2 History files: \fC.sh-history\fP and similar in the home-directories .IP "\(bu" 2 Cache directories: your favourite browser might store many MB of cached data in your home-directories .IP "\(bu" 2 Virtual system directories, like \fC/proc\fP and \fC/sys\fP, \fC/dev/shmfs\fP. .PP .SH "Telling FSVS what to do" .PP Given \fC$WC\fP as the \fIworking directory\fP - the base of the data you'd like backed up (\fC/\fP, \fC/home\fP) - and \fC$URL\fP as a valid subversion URL to your (already created) repository path. .PP Independent of all these details the first steps look like these: .PP .nf cd $WC fsvs urls $URL .fi .PP Now you have to say what should be ignored - that'll differ depending on your needs/wishes. .PP .nf fsvs ignore './**~' './**.tmp' './**.bak' fsvs ignore ./proc/ ./sys/ ./tmp/ fsvs ignore ./var/tmp/ ./var/spool/lpd/ fsvs ignore './var/log/*.gz' fsvs ignore ./var/run/ /dev/pts/ fsvs ignore './etc/*.dpkg-dist' './etc/*.dpkg-new' fsvs ignore './etc/*.dpkg-old' './etc/*.dpkg-bak' .fi .PP .PP \fBNote:\fP .RS 4 \fC/var/run\fP is for transient files; I've heard reports that \fBreverting\fP files there can cause problems with running programs. .br Similar for \fC/dev/pts\fP - if that's a \fCdevpts\fP filesystem, you'll run into problems on \fBupdate\fP or \fBrevert\fP - as FSVS won't be allowed to create entries in this directory. .RE .PP Now you may find that you'd like to have some files encrypted in your backup - like \fC/etc/shadow\fP, or your \fC\fP.ssh/id_* files.
So you tell fsvs to en/decrypt these files: .PP .nf fsvs propset fsvs:commit-pipe 'gpg -er {your backup key}' /etc/shadow /etc/gshadow fsvs propset fsvs:update-pipe 'gpg -d' /etc/shadow /etc/gshadow .fi .PP .PP \fBNote:\fP .RS 4 These are just examples. You'll probably have to exclude some other paths and patterns from your backup, and mark some others as to-be-filtered. .RE .PP .SH "The first backup" .PP .PP .nf fsvs commit -m 'First commit.' .fi .PP That's all there is to it! .SH "Further use and maintenance" .PP The further usage is more or less the \fCcommit\fP command from the last section. .br When do you have to do some manual work? .PD 0 .IP "\(bu" 2 When ignore patterns change. .PD 0 .IP " \(bu" 4 New filesystems that should be ignored, or would be ignored but shouldn't .IP " \(bu" 4 You find that your favorite word-processor leaves many *.segv files behind, and similar things .PP .IP "\(bu" 2 If you get an error message from fsvs, check the arguments and retry. In desperate cases (or just because it's quicker than debugging yourself) ask on \fCdev [at] fsvs.tigris.org\fP. .PP .SH "Restoration in a working system" .PP Depending on the circumstances you can take different ways to restore data from your repository. .PD 0 .IP "\(bu" 2 \fC 'fsvs export'\fP allows you to just dump some repository data into your filesystem - eg. into a temporary directory to sort things out. .IP "\(bu" 2 Using \fC'fsvs revert'\fP you can get older revisions of a given file, directory or directory tree inplace. .br .IP "\(bu" 2 Or you can do a fresh checkout - set an URL in an (empty) directory, and update to the needed revision. .IP "\(bu" 2 If everything else fails (no backup media with fsvs on it), you can use subversion commands (eg. \fCexport\fP) to restore needed parts, and update the rest with fsvs.
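As a concrete sketch of the restore paths listed above - the URL, revision number and file paths are placeholders, and the fsvs calls are only echoed here instead of executed:

```shell
#!/bin/sh
# Sketch only: URL, revision and paths are placeholders;
# 'run' echoes the commands instead of executing them.
URL="http://server/repos/trunk"   # placeholder
run() { echo "+ $*"; }

# Dump an old tree somewhere to sort things out:
run fsvs export "$URL" -r 1200

# Get an older revision of a single file back in place:
run fsvs revert -r 1200 /etc/apache2/apache2.conf

# Or check that revision out into an empty directory:
run mkdir /tmp/restore
run fsvs checkout "$URL" -r 1200
```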
.PP .SH "Recovery for a non-booting system" .PP In case of a real emergency, when your harddisks crashed or your filesystem was eaten and you have to re-partition or re-format, you should get your system working again by .PD 0 .IP "\(bu" 2 booting from a knoppix or some other Live-CD (with FSVS on it), .IP "\(bu" 2 partition/format as needed, .IP "\(bu" 2 mount your harddisk partitions below eg. \fC/mnt\fP, .IP "\(bu" 2 and then recovering by .PP .PP .nf $ cd /mnt $ export FSVS_CONF=/etc/fsvs # if non-standard $ export FSVS_WAA=/var/spool/fsvs # if non-standard $ fsvs checkout -o softroot=/mnt .fi .PP .PP If somebody asks really nice I'd possibly even create a \fCrecovery\fP command that deduces the \fCsoftroot\fP parameter from the current working directory. .PP For more information please take a look at \fBUsing an alternate root directory\fP. .SH "Feedback" .PP If you've got any questions, ideas, wishes or other feedback, please tell us in the mailing list \fCusers [at] fsvs.tigris.org\fP. .PP Thank you! .SH "Author" .PP Generated automatically by Doxygen for fsvs from the source code. fsvs-fsvs-1.2.12/doc/fsvs-howto-master_local.5000066400000000000000000000252261453631713700211730ustar00rootroot00000000000000.TH "FSVS - Master/Local HOWTO" 5 "11 Mar 2010" "Version trunk:2424" "fsvs" \" -*- nroff -*- .ad l .nh .SH NAME HOWTO: Master/Local repositories \- .PP This HOWTO describes how to use a single working copy with multiple repositories. This HOWTO describes how to use a single working copy with multiple repositories. Please read the \fBHOWTO: Backup\fP first, to know about basic steps using FSVS. .SH "Rationale" .PP If you manage a lot of machines with similar or identical software, you might notice that it's a bit of work keeping them all up-to-date. Sure, automating distribution via rsync or similar is easy; but then you get identical machines, or you have to play with lots of exclude patterns to keep the needed differences. 
.PP Here another way is presented; and even if you don't want to use FSVS for distributing your files, the ideas presented here might help you keep your machines under control. .SH "Preparation, repository layout" .PP In this document the basic assumption is that there is a group of (more or less identical) machines that share most of their filesystems. .PP Some planning should be done beforehand; while the ideas presented here might suffice for simple versioning, your setup can require a bit of thinking ahead. .PP This example uses some distinct repositories, to achieve a bit more clarity; of course these can simply be different paths in a single repository (see \fBUsing a single repository\fP for an example configuration). .PP Repository in URL \fCbase:\fP .PP .nf trunk/ bin/ ls true lib/ libc6.so modules/ sbin/ mkfs usr/ local/ bin/ sbin/ tags/ branches/ .fi .PP .PP Repository in URL \fCmachine1\fP (similar for machine2): .PP .nf trunk/ etc/ HOSTNAME adjtime network/ interfaces passwd resolv.conf shadow var/ log/ auth.log messages tags/ branches/ .fi .PP .SS "User data versioning" If you want to keep the user data versioned, too, an idea might be to start a new working copy in \fBevery\fP home directory; this way .IP "\(bu" 2 the system- and (several) user-commits can be run in parallel, .IP "\(bu" 2 the intermediate \fChome\fP directory in the repository is not needed, and .IP "\(bu" 2 you get a bit more isolation (against FSVS failures, out-of-space errors and similar). .IP "\(bu" 2 Furthermore FSVS can work with smaller file sets, which helps performance a bit (fewer dentries to cache at once, less memory used, etc.). .PP .PP .PP .nf A/ Andrew/ .bashrc .ssh/ .kde/ Alexander/ .bashrc .ssh/ .kde/ B/ Bertram/ .fi .PP .PP A cronjob could simply loop over the directories in \fC/home\fP, and call fsvs for each one; giving a target URL name is not necessary if every home-directory is its own working copy.
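Such a cronjob loop could be sketched like this; a temporary directory stands in for \fC/home\fP, and \fCecho\fP stands in for the real fsvs invocation, since committing needs configured working copies:

```shell
#!/bin/sh
# Sketch of a per-home-directory commit loop; 'echo' stands in for fsvs.
HOMES=$(mktemp -d)                  # stand-in for /home
mkdir -p "$HOMES/A/Andrew" "$HOMES/A/Alexander" "$HOMES/B/Bertram"

for wc in "$HOMES"/*/*/; do
    # every home directory is its own working copy, so no URL name is needed
    ( cd "$wc" && echo fsvs commit -m "nightly backup of $wc" )
done

rm -r "$HOMES"
```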
.PP \fBNote:\fP .RS 4 URL names can include a forward slash \fC/\fP in their name, so you might give the URLs names like \fChome/Andrew\fP - although that should not be needed, if every home directory is a distinct working copy. .RE .PP .SH "Using master/local repositories" .PP Imagine having 10 similar machines with the same base-installation. .PP Then you install one machine, commit that into the repository as \fCbase/trunk\fP, and make a copy as \fCbase/released\fP. .PP The other machines get \fCbase/released\fP as checkout source, and another (overlaid) from eg. \fCmachine1/trunk\fP. .br Per-machine changes are always committed into the \fCmachineX/trunk\fP of the per-machine repository; this would be the host name, IP address, and similar things. .PP On the development machine all changes are stored into \fCbase/trunk\fP; if you're satisfied with your changes, you merge them (see \fBBranching, tagging, merging\fP) into \fCbase/released\fP, whereupon all other machines can update to this latest version. .PP So by looking at \fCmachine1/trunk\fP you can see the history of the machine-specific changes; and in \fCbase/released\fP you can check out every old version to verify problems and bugs. .PP \fBNote:\fP .RS 4 You can take this system a bit further: optional software packages could be stored in other subtrees. They should be of lower priority than the base tree, so that in case of conflicts the base should always be preferred (but see \fB1\fP). .RE .PP Here is a small example; \fCmachine1\fP is the development machine, \fCmachine2\fP is a \fIclient\fP. 
.PP .nf machine1$ fsvs urls name:local,P:200,svn+ssh://lserver/per-machine/machine1/trunk machine1$ fsvs urls name:base,P:100,http://bserver/base-install1/trunk # Determine differences, and commit them machine1$ fsvs ci -o commit_to=local /etc/HOSTNAME /etc/network/interfaces /var/log machine1$ fsvs ci -o commit_to=base / .fi .PP .PP Now you've got a base-install in your repository, and can use that on the other machine: .PP .nf machine2$ fsvs urls name:local,P:200,svn+ssh://lserver/per-machine/machine2/trunk machine2$ fsvs urls name:base,P:100,http://bserver/base-install1/trunk machine2$ fsvs sync-repos # Now you see differences of this machines' installation against the other: machine2$ fsvs st # You can see what is different: machine2$ fsvs diff /etc/X11/xorg.conf # You can take the base installations files: machine2$ fsvs revert /bin/ls # And put the files specific to this machine into its repository: machine2$ fsvs ci -o commit_to=local /etc/HOSTNAME /etc/network/interfaces /var/log .fi .PP .PP Now, if this machine has a harddisk failure or needs setup for any other reason, you boot it (eg. via PXE, Knoppix or whatever), and do (\fB3\fP) .PP .nf # Re-partition and create filesystems (if necessary) machine2-knoppix$ fdisk ... machine2-knoppix$ mkfs ... # Mount everything below /mnt machine2-knoppix$ mount /mnt/[...] machine2-knoppix$ cd /mnt # Do a checkout below /mnt machine2-knoppix$ fsvs co -o softroot=/mnt .fi .PP .SH "Branching, tagging, merging" .PP Other names for your branches (instead of \fCtrunk\fP, \fCtags\fP and \fCbranches\fP) could be \fCunstable\fP, \fCtesting\fP, and \fCstable\fP; your production machines would use \fCstable\fP, your testing environment \fCtesting\fP, and in \fCunstable\fP you'd commit all your daily changes. .PP \fBNote:\fP .RS 4 Please note that there's no merging mechanism in FSVS; and as far as I'm concerned, there won't be. Subversion just gets automated merging mechanisms, and these should be fine for this usage too. 
(\fB4\fP) .RE .PP .SS "Thoughts about tagging" Tagging works just like normally; although you need to remember to tag more than a single branch. Maybe FSVS should get some knowledge about the subversion repository layout, so a \fCfsvs tag\fP would tag all repositories at once? It would have to check for duplicate tag-names (eg. on the \fCbase\fP -branch), and just keep it if it had the same copyfrom-source. .PP But how would tags be used? Define them as source URL, and checkout? Would be a possible case. .PP Or should \fCfsvs tag\fP do a \fImerge\fP into the repository, so that a single URL contains all files currently checked out, with copyfrom-pointers to the original locations? Would require using a single repository, as such pointers cannot be across different repositories. If the committed data includes the \fC$FSVS_CONF/\fP.../Urls file, the original layout would be known, too - although to use it a \fBsync-repos\fP would be necessary. .SH "Using a single repository" .PP A single repository would have to be partitioned in the various branches that are needed for bookkeeping; see these examples. .PP Depending on the number of machines it might make sense to put them in a 1- or 2 level deep hierarchy; named by the first character, like .PP .PP .nf machines/ A/ Axel/ Andreas/ B/ Berta/ G/ Gandalf/ .fi .PP .SS "Simple layout" Here only the base system gets branched and tagged; the machines simply backup their specific/localized data into the repository. 
.PP .PP .nf # For the base-system: trunk/ bin/ usr/ sbin/ tags/ tag-1/ branches/ branch-1/ # For the machines: machines/ machine1/ etc/ passwd HOSTNAME machine2/ etc/ passwd HOSTNAME .fi .PP .SS "Per-area" Here every part gets its \fCtrunk\fP, \fCbranches\fP and \fCtags:\fP .PP .PP .nf base/ trunk/ bin/ sbin/ usr/ tags/ tag-1/ branches/ branch-1/ machine1/ trunk/ etc/ passwd HOSTNAME tags/ tag-1/ branches/ machine2/ trunk/ etc/ passwd HOSTNAME tags/ branches/ .fi .PP .SS "Common trunk, tags, and branches" Here the base-paths \fCtrunk\fP, \fCtags\fP and \fCbranches\fP are shared: .PP .PP .nf trunk/ base/ bin/ sbin/ usr/ machine2/ etc/ passwd HOSTNAME machine1/ etc/ passwd HOSTNAME tags/ tag-1/ branches/ branch-1/ .fi .PP .SH "Other notes" .PP .SS "1" Conflicts should not be automatically merged. If two or more trees bring the same file, the file from the \fIhighest\fP tree wins - this way you always know the file data on your machines. It's better if a single software doesn't work, compared to a machine that no longer boots or is no longer accessible (eg. by SSH)). .PP So keep your base installation at highest priority, and you've got good chances that you won't loose control in case of conflicting files. .SS "2" If you don't know which files are different in your installs, .IP "\(bu" 2 install two machines, .IP "\(bu" 2 commit the first into fsvs, .IP "\(bu" 2 do a \fBsync-repos\fP on the second, .IP "\(bu" 2 and look at the \fBstatus\fP output. .PP .SS "3" As debian includes FSVS in the near future, it could be included on the next KNOPPIX, too! .PP Until then you'd need a custom boot CD, or copy the absolute minimum of files to the harddisk before recovery. .PP There's a utility \fCsvntar\fP available; it allows you to take a snapshot of a subversion repository directly into a \fC\fP.tar -file, which you can easily export to destination machine. (Yes, it knows about the meta-data properties FSVS uses, and stores them into the archive.) 
.SS "4" Why no file merging? Because all real differences are in the per-machine files -- the files that are in the \fCbase\fP repository are changed only on a single machine, and so there's an unidirectional flow. .PP BTW, how would you merge your binaries, eg. \fC/bin/ls\fP? .SH "Feedback" .PP If you've got any questions, ideas, wishes or other feedback, please tell us in the mailing list \fCusers [at] fsvs.tigris.org\fP. .PP Thank you! .SH "Author" .PP Generated automatically by Doxygen for fsvs from the source code. fsvs-fsvs-1.2.12/doc/fsvs-options.5000066400000000000000000000713511453631713700170630ustar00rootroot00000000000000.TH "FSVS - Options and configfile" 5 "11 Mar 2010" "Version trunk:2424" "fsvs" \" -*- nroff -*- .ad l .nh .SH NAME Further options for FSVS. \- .PP List of settings that modify FSVS' behaviour. List of settings that modify FSVS' behaviour. FSVS understands some options that modify its behaviour in various small ways. .SH "Overview" .PP .SS "This document" This document lists all available options in FSVS, in an \fBfull listing\fP and in \fBgroups\fP. .PP Furthermore you can see their \fBrelative priorities\fP and some \fBexamples\fP. .SS "Semantic groups" .PD 0 .IP "\(bu" 2 \fBOutput settings and entry filtering\fP .IP "\(bu" 2 \fBDiffing and merging on update\fP .IP "\(bu" 2 \fBOptions for commit\fP .IP "\(bu" 2 \fBPerformance and tuning related options\fP .IP "\(bu" 2 \fBBase configuration\fP .IP "\(bu" 2 \fBDebugging and diagnosing\fP .PP .SS "Sorted list of options" FSVS currently knows: .PD 0 .IP "\(bu" 2 \fCall_removed\fP - \fBTrimming the list of deleted entries\fP .IP "\(bu" 2 \fCauthor\fP - \fBAuthor\fP .IP "\(bu" 2 \fCchange_check\fP - \fBChange detection\fP .IP "\(bu" 2 \fCcolordiff\fP - \fBUsing colordiff\fP .IP "\(bu" 2 \fCcommit_to\fP - \fBDestination URL for commit\fP .IP "\(bu" 2 \fCconflict\fP - \fBHow to resolve conflicts on update\fP .IP "\(bu" 2 \fCconf\fP - \fBPath definitions for the config and WAA area\fP. 
.IP "\(bu" 2 \fCconfig_dir\fP - \fBConfiguration directory for the subversion libraries\fP. .IP "\(bu" 2 \fCcopyfrom_exp\fP - \fBAvoiding expensive compares on \fBcopyfrom-detect\fP\fP .IP "\(bu" 2 \fCdebug_output\fP - \fBDestination for debug output\fP .IP "\(bu" 2 \fCdebug_buffer\fP - \fBUsing a debug buffer\fP .IP "\(bu" 2 \fCdelay\fP - \fBWaiting for a time change after working copy operations\fP .IP "\(bu" 2 \fCdiff_prg\fP, \fCdiff_opt\fP, \fCdiff_extra\fP - \fBOptions relating to the 'diff' action\fP .IP "\(bu" 2 \fCdir_exclude_mtime\fP - \fBIgnore mtime-metadata changes for directories\fP .IP "\(bu" 2 \fCdir_sort\fP - \fBDirectory sorting\fP .IP "\(bu" 2 \fCempty_commit\fP - \fBDoing empty commits\fP .IP "\(bu" 2 \fCempty_message\fP - \fBAvoid commits without a commit message\fP .IP "\(bu" 2 \fCfilter\fP - \fBFiltering entries\fP, but see \fB-f\fP. .IP "\(bu" 2 \fCgroup_stats\fP - \fBGetting grouping/ignore statistics\fP. .IP "\(bu" 2 \fClimit\fP - \fB'fsvs log' revision limit\fP .IP "\(bu" 2 \fClog_output\fP - \fB'fsvs log' output format\fP .IP "\(bu" 2 \fCmerge_prg\fP, \fCmerge_opt\fP - \fBOptions regarding the 'merge' program\fP .IP "\(bu" 2 \fCmkdir_base\fP - \fBCreating directories in the repository above the URL\fP .IP "\(bu" 2 \fCpath\fP - \fBDisplaying paths\fP .IP "\(bu" 2 \fCsoftroot\fP - \fBUsing an alternate root directory\fP .IP "\(bu" 2 \fCstat_color\fP - \fBStatus output coloring\fP .IP "\(bu" 2 \fCstop_change\fP - \fBChecking for changes in a script\fP .IP "\(bu" 2 \fCverbose\fP - \fBVerbosity flags\fP .IP "\(bu" 2 \fCwarning\fP - \fBSetting warning behaviour\fP, but see \fB-W\fP. .IP "\(bu" 2 \fCwaa\fP - \fBwaa\fP. .PP .SS "Priorities for option setting" The priorities are .PD 0 .IP "\(bu" 2 Command line \fI\fP(highest) .IP "\(bu" 2 Environment variables. These are named as \fCFSVS_\fP\fI{upper-case option name}\fP. 
.IP "\(bu" 2 \fC$HOME/.fsvs/wc-dir/config\fP .IP "\(bu" 2 \fC$FSVS_CONF/wc-dir/config\fP .IP "\(bu" 2 \fC$HOME/.fsvs/config\fP .IP "\(bu" 2 \fC$FSVS_CONF/config\fP .IP "\(bu" 2 Default value, compiled in \fI\fP(lowest) .PP .PP \fBNote:\fP .RS 4 The \fC$HOME-dependent\fP configuration files are not implemented currently. Volunteers? .RE .PP Furthermore there are 'intelligent' run-time dependent settings, like turning off colour output when the output is redirected. Their priority is just below the command line - so they can always be overridden if necessary. .SS "Examples" Using the commandline: .PP .nf fsvs -o path=environment fsvs -opath=environment .fi .PP Using environment variables: .PP .nf FSVS_PATH=absolute fsvs st .fi .PP A configuration file, from \fC$FSVS_CONF/config\fP or in a WC-specific path below \fC$FSVS_CONF\fP: .PP .nf # FSVS configuration file path=wcroot .fi .PP .SH "Output settings and entry filtering" .PP .SS "Trimming the list of deleted entries" If you remove a directory, all entries below are implicitly known to be deleted, too. To make the \fBstatus\fP output shorter there's the \fCall_removed\fP option which, if set to \fCno\fP, will cause children of removed entries to be omitted. .PP Example for the config file: .PP .nf all_removed=no .fi .PP .SS "Ignore mtime-metadata changes for directories" When this option is enabled, directories where only the mtime changed are not reported on \fBstatus\fP anymore. .PP This is useful in situations where temporary files are created in directories, eg. by text editors. (Example: \fCVIM\fP swapfiles when no \fCdirectory\fP option is configured). .PP Example for the config file: .PP .nf dir_exclude_mtime=yes .fi .PP .SS "Directory sorting" If you'd like to have the output of \fBstatus\fP sorted, you can use the option \fCdir_sort=yes\fP. FSVS will do a run through the tree, to read the status of the entries, and then go through it again, but sorted by name. 
.PP \fBNote:\fP .RS 4 If FSVS aborts with an error during \fBstatus\fP output, you might want to turn this option off again, to see where FSVS stops; the easiest way is on the command line with \fC-odir_sort=no\fP. .RE .PP .SS "Filtering entries" Please see the command line parameter for \fB-f\fP, which is identical. .PP .PP .nf fsvs -o filter=mtime .fi .PP .SS "'fsvs log' revision limit" There are some defaults for the number of revisions that are shown on a \fC'fsvs log'\fP command: .PD 0 .IP "\(bu" 2 2 revisions given (\fC-rX:Y\fP): \fCabs\fP(X-Y)+1, ie. all revisions in that range. .IP "\(bu" 2 1 revision given: exactly that one. .IP "\(bu" 2 no revisions given: from \fCHEAD\fP to 1, with a maximum of 100. .PP .PP As this option can only be used to set an upper limit of revisions, it makes most sense for the no-revision-arguments case. .SS "'fsvs log' output format" You can modify aspects of the \fBfsvs log\fP output format by setting the \fClog_output\fP option to a combination of these flags: .PD 0 .IP "\(bu" 2 \fCcolor:\fP This uses color in the output, similar to \fCcg-log\fP (\fCcogito-log\fP); the header and separator lines are highlighted. .PP \fBNote:\fP .RS 4 This uses ANSI escape sequences, and tries to restore the default color; if you know how to do that better (and more compatibly), please tell the developer mailing list. .RE .PP .IP "\(bu" 2 \fCindent:\fP Additionally you can shift the log message itself a space to the right, to make the borders clearer. .PP .PP Furthermore the value \fCnormal\fP is available; this turns off all special handling. .PP \fBNote:\fP .RS 4 As soon as this option is given, the previous value is reset; so if you specify \fClog_output=color,indent\fP in the global config file, and use \fClog_output=color\fP on the commandline, only colors are used. This is different from the \fBFiltering entries\fP option, which is cumulative. 
.RE .PP .SS "Displaying paths" You can specify how paths printed by FSVS should look like; this is used for the entry status output of the various actions, and for the diff header lines. .PP There are several possible settings, of which one can be chosen via the \fCpath\fP option. .PP .PD 0 .IP "\(bu" 2 \fCwcroot\fP .br This is the old, traditional FSVS setting, where all paths are printed relative to the working copy root. .PP .IP "\(bu" 2 \fCparameter\fP .br With this setting FSVS works like most other programs - it uses the first best-matching parameter given by the user, and appends the rest of the path. .br This is the new default. .PP \fBNote:\fP .RS 4 Internally FSVS still first parses all arguments, and then does a single run through the entries. So if some entry matches more than one parameter, it is printed using the first match. .RE .PP .IP "\(bu" 2 \fCabsolute\fP .br All paths are printed in absolute form. This is useful if you want to paste them into other consoles without worrying whether the current directory matches, or for using them in pipelines. .PP .PP The next two are nearly identical to \fCabsolute\fP, but the beginning of paths are substituted by environment variables. This makes sense if you want the advantage of full paths, but have some of them abbreviated. .PD 0 .IP "\(bu" 2 \fCenvironment\fP .br Match variables to directories after reading the known entries, and use this cached information. This is faster, but might miss the best case if new entries are found (which would not be checked against possible longer hits). .br Furthermore, as this works via associating environment variables to entries, the environment variables must at least match the working copy base - shorter paths won't be substituted. .IP "\(bu" 2 \fCfull-environment\fP .br Check for matches just before printing the path. .br This is slower, but finds the best fit. 
.PP \fBNote:\fP .RS 4 The string of the environment variables must match a directory name; the filename is always printed literally, and partial string matches are not allowed. Feedback wanted. .PP Only environment variables whose names start with \fCWC\fP are used for substitution, to avoid using variables like \fC$PWD\fP, \fC$OLDPWD\fP, \fC$HOME\fP and similar which might differ between sessions. Maybe the allowed prefixes for the environment variables should be settable in the configuration. Opinions to the users mailing list, please. .RE .PP .PP .PP Example, with \fC/\fP as working copy base: .PP .nf $ cd /etc $ fsvs -o path=wcroot st .mC. 1001 ./etc/X11/xorg.conf $ fsvs -o path=absolute st .mC. 1001 /etc/X11/xorg.conf $ fsvs -o path=parameters st .mC. 1001 X11/xorg.conf $ fsvs -o path=parameters st . .mC. 1001 ./X11/xorg.conf $ fsvs -o path=parameters st / .mC. 1001 /etc/X11/xorg.conf $ fsvs -o path=parameters st X11 .mC. 1001 X11/xorg.conf $ fsvs -o path=parameters st ../dev/.. .mC. 1001 ../dev/../etc/X11/xorg.conf $ fsvs -o path=parameters st X11 ../etc .mC. 1001 X11/xorg.conf $ fsvs -o path=parameters st ../etc X11 .mC. 1001 ../etc/X11/xorg.conf $ fsvs -o path=environ st .mC. 1001 ./etc/X11/xorg.conf $ WCBAR=/etc fsvs -o path=wcroot st .mC. 1001 $WCBAR/X11/xorg.conf $ WCBAR=/etc fsvs -o path=wcroot st / .mC. 1001 $WCBAR/X11/xorg.conf $ WCBAR=/e fsvs -o path=wcroot st .mC. 1001 /etc/X11/xorg.conf $ WCBAR=/etc WCFOO=/etc/X11 fsvs -o path=wcroot st .mC. 1001 $WCFOO/xorg.conf $ touch /etc/X11/xinit/xinitrc $ fsvs -o path=parameters st .mC. 1001 X11/xorg.conf .m.? 1001 X11/xinit/xinitrc $ fsvs -o path=parameters st X11 /etc/X11/xinit .mC. 1001 X11/xorg.conf .m.? 1001 /etc/X11/xinit/xinitrc .fi .PP .PP \fBNote:\fP .RS 4 At least for the command line options the strings can be abbreviated, as long as they're still identifiable. Please use the full strings in the configuration file, to avoid having problems in future versions when more options are available. 
.RE .PP .SS "Status output coloring" FSVS can colorize the output of the status lines; removed entries will be printed in red, new ones in green, and otherwise changed in blue. Unchanged entries (for \fC-v\fP) will be given in the default color. .PP For this you can set \fCstat_color=yes\fP; this is turned \fCoff\fP per default. .PP As with the other colorizing options this gets turned \fCoff\fP automatically if the output is not on a tty; on the command line you can override this, though. .SS "Checking for changes in a script" If you want to use FSVS in scripts, you might simply want to know whether anything was changed. .PP In this case use the \fCstop_change\fP option, possibly combined with \fBFiltering entries\fP; this gives you no output on \fCSTDOUT\fP, but an error code on the first change seen: .PP .nf fsvs -o stop_change=yes st /etc if fsvs status -o stop_change=yes -o filter=text /etc/init.d then echo No change found ... else echo Changes seen. fi .fi .PP .SS "Verbosity flags" If you want a bit more control over the data you're getting, you can use some specific flags for the \fCverbose\fP option. .PP .PD 0 .IP "\(bu" 2 \fCnone\fP, \fCveryquiet\fP - reset the bitmask, don't display anything. .IP "\(bu" 2 \fCquiet\fP - only a few output lines. .IP "\(bu" 2 \fCchanges\fP - the characters showing what has changed for an entry. .IP "\(bu" 2 \fCsize\fP - the size for files, or the textual description (like \fC'dir'\fP). .IP "\(bu" 2 \fCpath\fP - the path of the file, formatted according to \fBthe path option\fP. .IP "\(bu" 2 \fCdefault\fP - The default value, ie. \fCchanges\fP, \fCsize\fP and \fCname\fP. .IP "\(bu" 2 \fCmeta\fP - One more than the default, so that it can be selected via a single \fC'-v'\fP; it makes the mtime and owner/group changes get reported as two characters. If \fC'-v'\fP is used to achieve that, even entries without changes are reported, unless overridden by \fBFiltering entries\fP. 
.IP "\(bu" 2 \fCurl\fP - Displays the entries' top priority URL .IP "\(bu" 2 \fCcopyfrom\fP - Displays the URL this entry has been copied from (see \fBcopy\fP). .IP "\(bu" 2 \fCgroup\fP - The group this entry belongs to, see \fBgroup\fP .IP "\(bu" 2 \fCurls\fP - Displays all known URLs of this entry .IP "\(bu" 2 \fCstacktrace\fP - Print the full stacktrace when reporting errors; useful for debugging. .IP "\(bu" 2 \fCall\fP - Sets all flags. Mostly useful for debugging. .PP .PP Please note that if you want to display \fBfewer\fP items than per default, you'll have to clear the bitmask first, like this: .PP .nf fsvs status -o verbose=none,changes,path .fi .PP .SH "Diffing and merging on update" .PP .SS "Options relating to the 'diff' action" The diff is not done internally in FSVS, but some other program is called, to get the highest flexibility. .PP There are several option values: .PD 0 .IP "\(bu" 2 \fCdiff_prg\fP: The executable name, default \fC'diff'\fP. .IP "\(bu" 2 \fCdiff_opt\fP: The default options, default \fC'-pu'\fP. .IP "\(bu" 2 \fCdiff_extra\fP: Extra options, no default. .PP .PP The call is done as .PP .nf $diff_prg $diff_opt $file1 --label '$label1' $file2 --label '$label2' $diff_extra .fi .PP .PP \fBNote:\fP .RS 4 In \fCdiff_opt\fP you should only use command line flags without parameters; in \fCdiff_extra\fP you can encode a single flag with a parameter (like \fC'-U5'\fP). If you need more flexibility, write a shell script and pass its name as \fCdiff_prg\fP. .RE .PP Advanced users might be interested in \fBexported environment variables\fP, too; with their help you can eg. start different \fCdiff\fP programs depending on the filename. .SS "Using colordiff" If you have \fCcolordiff\fP installed on your system, you might be interested in the \fCcolordiff\fP option. .PP It can take one of these values: .PD 0 .IP "\(bu" 2 \fCno\fP, \fCoff\fP or \fCfalse:\fP Don't use \fCcolordiff\fP. 
.IP "\(bu" 2 empty (default value): Try to use \fCcolordiff\fP as executable, but don't throw an error if it can't be started; just pipe the data as-is to \fCSTDOUT\fP. (\fIAuto\fP mode.) .IP "\(bu" 2 anything else: Pipe the output of the \fCdiff\fP program (see \fBOptions relating to the 'diff' action\fP) to the given executable. .PP .PP Please note that if \fCSTDOUT\fP is not a tty (eg. is redirected into a file), this option must be given on the command line to take effect. .SS "How to resolve conflicts on update" If you start an update, but one of the entries that was changed in the repository is changed locally too, you get a conflict. .PP There are some ways to resolve a conflict: .PD 0 .IP "\(bu" 2 \fClocal\fP - Just take the local entry, ignore the repository. .IP "\(bu" 2 \fCremote\fP - Overwrite any local change with the remote version. .PP .IP "\(bu" 2 \fCboth\fP - Keep the local modifications in the file renamed to \fC\fIfilename\fP.mine\fP, and save the repository version as \fC\fIfilename\fP.r\fIXXX\fP\fP, ie. put the revision number after the filename. .PP The conflict must be solved manually, and the solution made known to FSVS via the \fBresolve\fP command. .PP \fBNote:\fP .RS 4 As there's no known \fIgood\fP version after this renaming, a zero byte file gets created. .br Any \fBresolve\fP or \fBrevert\fP command would make that current, and the changes that are kept in \fC\fIfilename\fP.mine\fP would be lost! .br You should only \fBrevert\fP to the last repository version, ie. the data of \fC\fIfilename\fP.r\fIXXX\fP\fP. .RE .PP .IP "\(bu" 2 \fCmerge\fP - Call the program \fCmerge\fP with the common ancestor, the local and the remote version. .PP If it is a clean merge, no further work is necessary; else you'll get the (partly) merged file, and the two other versions just like with the \fCboth\fP variant, and (again) have to tell FSVS that the conflict is solved, by using the \fBresolve\fP command. 
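The renaming scheme used by the \fCboth\fP and \fCmerge\fP variants above can be sketched like this; this is an illustrative Python model, not FSVS source code, and the helper name \fCconflict_names\fP is made up:

```python
# Illustrative only -- not FSVS code. Models the documented renaming:
# local changes are kept as <filename>.mine, the repository version
# is saved as <filename>.r<revision>.
def conflict_names(filename, revision):
    """Return (local_copy, repository_copy) names for a conflicted entry."""
    return ("%s.mine" % filename, "%s.r%d" % (filename, revision))
```

For example, a conflict on `/etc/motd` against revision 1001 would leave `/etc/motd.mine` and `/etc/motd.r1001` behind.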
.PP .PP \fBNote:\fP .RS 4 As in the subversion command line client \fCsvn\fP the auxiliary files are seen as new, although that might change in the future (so that they automatically get ignored). .RE .PP .SS "Options regarding the 'merge' program" Like with \fBdiff\fP, the \fCmerge\fP operation is not done internally in FSVS. .PP To have better control there are two options: .PD 0 .IP "\(bu" 2 \fCmerge_prg\fP: The executable name, default \fC'merge'\fP. .IP "\(bu" 2 \fCmerge_opt\fP: The default options, default \fC'-A'\fP. .PP .PP The option \fC'-p'\fP is always used: .PP .nf $merge_prg $merge_opt -p $file1 $common $file2 .fi .PP .SH "Options for commit" .PP .SS "Author" You can specify an author to be used on commit. This option has a special behaviour; if the first character of the value is a \fC'$'\fP, the value is replaced by the named environment variable. .PP Empty strings are ignored; that allows an \fC/etc/fsvs/config\fP like this: .PP .nf author=unknown author=$LOGNAME author=$SUDO_USER .fi .PP where the last non-empty value is taken; and if your \fC.authorized_keys\fP has lines like .PP .nf environment='FSVS_AUTHOR=some_user' ssh-rsa ... .fi .PP that would override the config values. .PP \fBNote:\fP .RS 4 Your \fCsshd_config\fP needs the \fCPermitUserEnvironment\fP setting; you can also take a look at the \fCAcceptEnv\fP and \fCSendEnv\fP documentation. .RE .PP .SS "Destination URL for commit" If you have defined multiple URLs for your working copy, FSVS needs to know which URL to commit to. .PP For this you would set \fCcommit_to\fP to the \fBname\fP of the URL; see this example: .PP .nf fsvs urls N:master,P:10,http://... N:local,P:20,file:///... fsvs ci /etc/passwd -m 'New user defined' -ocommit_to=local .fi .PP .SS "Doing empty commits" In the default settings FSVS will happily create empty commits, ie. revisions without any changed entry. These just have a revision number, an author and a timestamp; this is nice if FSVS is run via CRON, and you want to see when FSVS gets run. 
.PP If you would like to avoid such revisions, set this option to \fCno\fP; then such commits will be avoided. .PP Example: .PP .nf fsvs commit -o empty_commit=no -m 'cron' /etc .fi .PP .SS "Avoid commits without a commit message" If you don't like the behaviour that FSVS does commits with an empty message received from \fC$EDITOR\fP (eg if you found out that you don't want to commit after all), you can change this option to \fCno\fP; then FSVS won't allow empty commit messages. .PP Example for the config file: .PP .nf empty_message=no .fi .PP .SS "Creating directories in the repository above the URL" If you want to keep some data versioned, the first commit is normally the creation of the base directories \fBabove\fP the given URL (to keep that data separate from the other repository data). .PP Previously this had to be done manually, ie. with a \fCsvn mkdir $URL --parents\fP or similar command. .br With the \fCmkdir_base\fP option you can tell FSVS to create directories as needed; this is mostly useful on the first commit. .PP .PP .nf fsvs urls ... fsvs group 'group:ignore,./**' fsvs ci -m 'First post!' -o mkdir_base=yes .fi .PP .SS "Waiting for a time change after working copy operations" If you're using FSVS in automated systems, you might see that changes that happen in the same second as a commit are not seen with \fBstatus\fP later; this is because the timestamp granularity of FSVS is 1 second. .PP For backward compatibility the default value is \fCno\fP (don't delay). You can set it to any combination of .PD 0 .IP "\(bu" 2 \fCcommit\fP, .IP "\(bu" 2 \fCupdate\fP, .IP "\(bu" 2 \fCrevert\fP and/or .IP "\(bu" 2 \fCcheckout\fP; .PP for \fCyes\fP all of these actions are delayed until the clock seconds change. .PP Example how to set that option via an environment variable: .PP .nf export FSVS_DELAY=commit,revert .fi .PP .SH "Performance and tuning related options" .PP .SS "Change detection" This options allows to specify the trade-off between speed and accuracy. 
.PP A file with a changed size can immediately be known as changed; but if only the modification time is changed, this is not so easy. Per default FSVS does an MD5 check on the file in this case; if you don't want that, or if you want to do the checksum calculation for \fBevery\fP file (in case a file has changed, but its mtime not), you can use this option to change FSVS' behaviour. .PP On the command line there's a shortcut for that: for every \fC'-C'\fP another check in this option is chosen. .PP The recognized specifications are: .PD 0 .IP "\(bu" 2 \fCnone\fP - Resets the check bitmask to 'no checks'. .IP "\(bu" 2 \fCfile_mtime\fP - Check files for modifications (via MD5) and directories for new entries, if the mtime is different - default. .IP "\(bu" 2 \fCdir\fP - Check all directories for new entries, regardless of the timestamp. .IP "\(bu" 2 \fCallfiles\fP - Check \fBall\fP files with MD5 for changes (\fCtripwire\fP-like operation). .IP "\(bu" 2 \fCfull\fP - All available checks. .PP .PP You can give multiple options; they're accumulated unless overridden by \fCnone\fP. .PP .nf fsvs -o change_check=allfiles status .fi .PP .PP \fBNote:\fP .RS 4 \fIcommit\fP and \fIupdate\fP additionally set the \fCdir\fP option, to avoid missing new files. .RE .PP .SS "Avoiding expensive compares on 'copyfrom-detect'" If you've got big files that are seen as new, doing the MD5 comparison can be time consuming. So there's the option \fCcopyfrom_exp\fP (for \fI'expensive'\fP), which takes the usual \fCyes\fP (default) and \fCno\fP arguments. .PP .PP .nf fsvs copyfrom-detect -o copyfrom_exp=no some_directory .fi .PP .SS "Getting grouping/ignore statistics" If you need to ignore many entries of your working copy, you might find that the ignore pattern matching takes some valuable time. .br In order to optimize the order of your patterns you can specify this option to print the number of tests and matches for each pattern. .PP .PP .nf $ fsvs status -o group_stats=yes -q Grouping statistics (tested, matched, groupname, pattern): 4705 80 ignore group:ignore,. 
.fi .PP .PP For optimizing you'll want to put often matching patterns at the front (to make them match sooner, and avoid unnecessary tests); but if you are using other groups than \fCignore\fP (like \fCtake\fP), you will have to take care to keep the patterns within a group together. .PP Please note that the first line shows how many entries were tested, and that each following line differs by the number of entries matched by the previous pattern, as all entries tested against some pattern get tested against the next one too, \fBunless they match the current pattern\fP. .PP This option is available for \fBstatus\fP and the \fBignore test\fP commands. .SH "Base configuration" .PP .SS "Path definitions for the config and WAA area" .PP The paths given here are used to store the persistent configuration data needed by FSVS; please see \fBFiles used by fsvs\fP and \fBPriorities for option setting\fP for more details, and the \fBUsing an alternate root directory\fP parameter as well as the \fBRecovery for a non-booting system\fP for further discussion. .PP .PP .nf FSVS_CONF=/home/user/.fsvs-conf fsvs -o waa=/home/user/.fsvs-waa st .fi .PP .PP \fBNote:\fP .RS 4 Please note that these paths can be given \fBonly\fP as environment variables (\fC$FSVS_CONF\fP resp. \fC$FSVS_WAA\fP) or as command line parameters; settings in config files are ignored. .RE .PP .SS "Configuration directory for the subversion libraries" This path specifies where the subversion libraries should take their configuration data from; the most important aspect of that is authentication data, especially for certificate authentication. .PP The default value is \fC$FSVS_CONF/svn/\fP. .PP \fC/etc/fsvs/config\fP could have eg. .PP .nf config_dir=/root/.subversion .fi .PP .PP Please note that this directory can hold an \fCauth\fP directory, and the \fCservers\fP and \fCconfig\fP files. 
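The priority rules referenced above ('Priorities for option setting') amount to a first-match lookup over the possible sources. The following is an illustrative Python model of those documented rules, not FSVS source code; all names are invented for the sketch:

```python
# Sketch of the documented option priorities: first non-None source wins.
def resolve_option(name, cmdline=None, environ=None,
                   wc_config=None, global_config=None, default=None):
    """Return the effective value for option `name`."""
    sources = (
        cmdline,                                      # command line (highest)
        (environ or {}).get("FSVS_" + name.upper()),  # FSVS_<NAME> env var
        (wc_config or {}).get(name),                  # $FSVS_CONF/wc-dir/config
        (global_config or {}).get(name),              # $FSVS_CONF/config
        default,                                      # compiled-in (lowest)
    )
    for value in sources:
        if value is not None:
            return value
    return None
```

So `fsvs -o path=environment` beats `FSVS_PATH=absolute`, which in turn beats a `path=wcroot` line in the config file.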
.SS "Using an alternate root directory" This is a path that is prepended to \fC$FSVS_WAA\fP and \fC$FSVS_CONF\fP (or their default values, see \fBFiles used by fsvs\fP), if they do not already start with it, and it is cut off for the directory-name MD5 calculation. .PP When is that needed? Imagine that you've booted from some Live-CD like Knoppix; if you want to setup or restore a non-working system, you'd have to transfer all files needed by the FSVS binary to it, and then start in some kind of \fCchroot\fP environment. .PP With this parameter you can tell FSVS that it should load its libraries from the current filesystem, but use the given path as root directory for its administrative data. .PP This is used for recovery; see the example in \fBRecovery for a non-booting system\fP. .PP So how does this work? .PD 0 .IP "\(bu" 2 The internal data paths derived from \fC$FSVS_WAA\fP and \fC$FSVS_CONF\fP use the value given for \fCsoftroot\fP as a base directory, if they do not already start with it. .br (If that creates a conflict for you, eg. in that you want to use \fC/var\fP as the \fCsoftroot\fP, and your \fC$FSVS_WAA\fP should be \fC/var/fsvs\fP, you can make the string comparison fail by using \fC/./var\fP for either path.) .PP .IP "\(bu" 2 When a directory name for \fC$FSVS_CONF\fP or \fC$FSVS_WAA\fP is derived from some file path, the part matching \fCsoftroot\fP is cut off, so that the generated names match the situation after rebooting. .PP .PP Previously you'd have to \fBexport\fP your data back to the filesystem and call \fBurls\fP \fC'fsvs urls'\fP and FSVS \fBsync-repos\fP again, to get the WAA data back. .PP \fBNote:\fP .RS 4 A plain \fCchroot()\fP would not work, as some needed programs (eg. the decoder for update, see \fBSpecial property names\fP) would not be available. .PP The easy way to understand \fCsoftroot\fP is: If you want to do a \fCchroot()\fP into the given directory (or boot with it as \fC/\fP), you'll want this set. 
.PP As this value is used for finding the correct working copy root (by trying to find a \fBconf-path\fP), it cannot be set from a per-wc config file. Only the environment, global configuration or command line parameter make sense. .RE .PP .SH "Debugging and diagnosing" .PP The next two options could be set in the global configuration file, to automatically get the last debug messages when an error happens. .PP To provide an easy way to get on-line debugging again, \fCdebug_output\fP and \fCdebug_buffer\fP are both reset to non-redirected, on-line output, if more than a single \fC-d\fP is specified on the command line, like this: .PP .nf fsvs commit -m '...' -d -d filenames .fi .PP .PP In this case you'll get a message telling you about that. .SS "Destination for debug output" You can specify the debug output destination with the option \fCdebug_output\fP. This can be a simple filename (which gets truncated on open), or, if it starts with a \fC|\fP, a command that the output gets piped into. .PP If the destination cannot be opened (or none is given), debug output goes to \fCSTDOUT\fP (for easier tracing via \fCless\fP). .PP Example: .PP .nf fsvs -o debug_output=/tmp/debug.out -d st /etc .fi .PP .PP \fBNote:\fP .RS 4 That string is taken only once - at the first debug output line. So you have to use the correct order of parameters: \fC-o debug_output=... -d\fP. .RE .PP An example: writing the last 200 lines of debug output into a file. .PP .nf fsvs -o debug_output='| tail -200 > /tmp/debug.log' -d .... .fi .PP .SS "Using a debug buffer" With the \fCdebug_buffer\fP option you can specify the size of a buffer (in kB) that is used to capture the output, and which gets printed automatically if an error occurs. .PP This must be done \fBbefore\fP debugging starts, like with the \fBdebug_output\fP specification. .PP .PP .nf fsvs -o debug_buffer=128 ... 
.fi .PP .PP \fBNote:\fP .RS 4 If this option is specified in the configuration file or via the environment, only the buffer is allocated; if it is used on the command line, debugging is automatically turned on, too. .RE .PP .SS "Setting warning behaviour" Please see the command line parameter \fB-W\fP, which is identical. .PP .PP .nf fsvs -o warning=diff-status=ignore .fi .PP .SH "Author" .PP Generated automatically by Doxygen for fsvs from the source code. 
fsvs-fsvs-1.2.12/doc/fsvs-ssl-setup

Repository Access with SSL Client Certificate (passwordless)
============================================================

This small guide explains the creation of an svn repository that is accessible via https with client certificate authentication. With client certificate authentication you neither need to supply a password on access nor have to worry about storing your password on that machine.

Prerequisites: The basic configuration for access to a repository via http is explained in http://svnbook.red-bean.com/en/1.5/svn-book.html#svn.serverconfig.httpd

The steps are:
a) install webdav and svn support
b) configure apache2 to point to the repository
c) setup of basic authentication

For https access the following additional steps are necessary:
a) enable the ssl module for the webserver
b) install the ssl certificate and authority
c) for passwordless access install the host key (pkcs12)

If the repository is open to the public it is recommended to get a certificate / host key from an external ca-authority. Otherwise self-signed keys can be used.

Creating self-signed keys
=========================

Creation of self-signed keys can be done with the openssl toolkit. It contains a script CA.pl to perform ca/certificate creation. Within Ubuntu/Debian the script can be found in /usr/lib/ssl/misc. 
CA.pl has a few options:

$ CA.pl -h
usage: CA -newcert|-newreq|-newreq-nodes|-newca|-sign|-verify
usage: CA -signcert certfile keyfile|-newcert|-newreq|-newca|-sign|-verify

To create a new authority use

$ CA.pl -newca

First a key is created. Afterwards a few questions about locality and company information will be asked. The ca-certificate and index files for ca-management are stored in ./default of the current directory.

Creating the certificate is done via

$ CA.pl -newcert

This creates a new certificate. The ca-authority as well as the certificate and key will be used on the server where the repository is installed.

Additionally a host certificate is created for the individual hosts to access the repository.

$ CA.pl -newcert

For use with subversion/fsvs the key first needs to be converted to pkcs12:

$ openssl pkcs12 -in newcert.pem -export -out $(hostname).p12

Replace $(hostname) with the hostname of your server.

Installation of SSL certificate for SVN repository
==================================================

A certificate .pem file contains both the x509 certificate and the key. Before installation of the .pem file the password of the key should be removed; otherwise on bootup the server will prompt for the password, which is not convenient in HA environments. Of course the password should be removed from the server's ssl certificate in trusted environments only.

This command removes the password from a pem file:

$ openssl rsa -in newcert.pem -out server.pem

On Debian/Ubuntu, the ca-authority and the certificate should be placed in the /etc/ssl folder. The authority file should be moved to /etc/ssl/certs. The certificate that contains the key should be moved to /etc/ssl/private. The folders are created with installation of the openssl package. 
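Before installing a .pem file it can be worth checking that the key part really is passwordless. The following is a small heuristic sketch in Python (not part of FSVS or OpenSSL), based on the standard PEM markers that OpenSSL writes for passphrase-protected keys:

```python
# Heuristic check: encrypted keys carry either a "Proc-Type: 4,ENCRYPTED"
# header (traditional PEM) or an "ENCRYPTED PRIVATE KEY" block (PKCS#8).
def pem_key_is_encrypted(pem_text):
    """Return True if the PEM text still looks passphrase-protected."""
    return ("Proc-Type: 4,ENCRYPTED" in pem_text
            or "BEGIN ENCRYPTED PRIVATE KEY" in pem_text)
```

Run it over server.pem after the `openssl rsa` step above; it should report the key as unencrypted.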
Configuration of CA-Authority and Certificate
=============================================

The SSL configuration part for the apache server (using the standard mod_ssl directive names):

SSLCertificateKeyFile /etc/ssl/private/newkey.pem
SSLCertificateFile /etc/ssl/private/newkey.pem
SSLCACertificateFile /etc/ssl/certs/ca.crt
SSLCipherSuite HIGH:MEDIUM
SSLVerifyClient require
SSLVerifyDepth 1
SSLRequireSSL
# ... SVN related config

Setup Authentication
====================

Authentication is not necessary because we rely on the client certificate. The only issue left is that the names of users who perform checkins will not be shown in the commit messages. For this one can use anonymous authentication.

First make sure the module is enabled:

$ a2enmod authn_anon

Global configuration for a host with an fsvs client, /etc/fsvs/svn/servers:

[groups]
fsvs = fsvs.repository.host

[fsvs]
ssl-client-cert-file = /etc/ssl/private/myhost.p12
ssl-client-cert-password = mysecretpass

[global]
ssl-authority-files = /etc/ssl/default/cacert.pem
store-plaintext-passwords=yes

The global svn access configuration takes place by default in /etc/fsvs/svn/servers. This can be changed at compile time with DEFAULT_CONFIGDIR_SUB in interface.h

The configuration for the authentication credentials is stored in ~/.subversion. If the folder does not exist it will be created. Be aware that the initial creation takes place with root privileges; so if another svn client, running with user-only privileges, needs write access later, that access should be restored e.g. via:

$ chown -R username: ~/.subversion

fsvs-fsvs-1.2.12/doc/fsvs-url-format.5
.TH "FSVS - URL format" 5 "11 Mar 2010" "Version trunk:2424" "fsvs" \" -*- nroff -*- .ad l .nh .SH NAME Format of URLs \- .PP FSVS can use more than one URL; the given URLs are \fIoverlaid\fP according to their priority. 
For easier managing they get a name, and can optionally take a target revision. .PP Such an \fIextended URL\fP has the form .PP .nf ['name:'{name},]['target:'{t-rev},]['prio:'{prio},]URL .fi .PP where URL is a standard URL known by subversion -- something like \fChttp://....\fP, \fCsvn://...\fP or \fCsvn+ssh://...\fP. .PP The arguments before the URL are optional and can be in any order; the URL must be last. .PP Example: .PP .nf name:perl,prio:5,svn://... .fi .PP or, using abbreviations, .PP .nf N:perl,P:5,T:324,svn://... .fi .PP .PP Please mind that the full syntax is in lower case, whereas the abbreviations are capitalized! .br Internally the \fC:\fP is looked for, and if the part before this character is a known keyword, it is used. .br As soon as we find an unknown keyword we treat it as a URL, ie. stop processing. .PP The priority is in reverse numeric order - the lower the number, the higher the priority. (See \fC\fBurl__current_has_precedence()\fP\fP ) .SH "Why a priority?" .PP When we have to overlay several URLs, we have to know \fBwhich\fP URL takes precedence - in case the same entry is in more than one. \fB(Which is \fBnot\fP recommended!)\fP .SH "Why a name?" .PP We need a name, so that the user can say \fB'commit all outstanding changes to the repository at URL x'\fP, without having to remember the full URL. After all, this URL should already be known, as there's a list of URLs to update from. .PP You should only use alphanumeric characters and the underscore here; or, in other words, \fC\\w\fP or \fC[a-zA-Z0-9_]\fP. (Whitespace, comma and semicolon get used as separators.) .SH "What can I do with the target revision?" .PP Using the target revision you can tell fsvs that it should use the given revision number as destination revision - so update would go there, but not further. Please note that the given revision number overrides the \fC-r\fP parameter; this sets the destination for all URLs. .PP The default target is \fCHEAD\fP. 
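The parsing rules described above (look for the ':', consume known keywords, and stop at the first unknown one, which starts the URL) can be sketched as follows. This is an illustrative Python model, not FSVS's C implementation:

```python
# Full keywords are lower-case, the abbreviations are capitalized,
# exactly as documented.
KEYWORDS = {"name": "name", "N": "name",
            "target": "target", "T": "target",
            "prio": "prio", "P": "prio"}

def parse_extended_url(spec):
    """Split e.g. 'N:perl,P:5,T:324,svn://...' into its components."""
    result = {"name": None, "target": None, "prio": None, "url": None}
    parts = spec.split(",")
    for i, part in enumerate(parts):
        key, sep, value = part.partition(":")
        field = KEYWORDS.get(key) if sep else None
        if field:
            result[field] = value
        else:
            # first unknown keyword: the URL starts here (rejoin any commas)
            result["url"] = ",".join(parts[i:])
            break
    return result
```

A plain URL without any prefixes (e.g. `http://...`) falls straight through to the `url` field, since `http` is not a known keyword.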
.PP \fBNote:\fP .RS 4 In subversion you can enter \fCURL@revision\fP - this syntax may be implemented in fsvs too. (But it has the problem that as soon as you have a \fC@\fP in the URL, you \fBmust\fP give the target revision every time!) .RE .PP .SH "There's an additional internal number - why that?" .PP This internal number is not for use by the user. .br It is just used to have a unique identifier for a URL, without using the full string. .PP On my system the package names are on average 12.3 characters long (1024 packages with 12629 bytes, including newline): .PP .nf COLUMNS=200 dpkg-query -l | cut -c5- | cut -f1 -d' ' | wc .fi .PP .PP So if we store an \fIid\fP of the url instead of the name, we have approx. 4 bytes per entry (length of the strings of the numbers from 1 to 1024). Whereas using the name needs 12.3 characters; that's a difference of 8.3 bytes per entry. .PP Multiplied by 150 000 entries we get about 1MB difference in the filesize of the dir-file. Not really small ... .br And using the whole URL would inflate that much more. .PP Currently we use about 92 bytes per entry. So we'd (unnecessarily) increase the size by about 10%. .PP That's why there's an \fBurl_t::internal_number\fP. .SH "Author" .PP Generated automatically by Doxygen for fsvs from the source code. 
fsvs-fsvs-1.2.12/doc/fsvs.1
.TH "FSVS - fast versioning tool" 1 "11 Mar 2010" "Version trunk:2424" "fsvs" \" -*- nroff -*- .ad l .nh .SH NAME Commands and command line parameters \- .PP fsvs is a client for subversion repositories; it is designed for fast versioning of big directory trees. 
.SH "SYNOPSIS" .PP \fCfsvs command [options] [args]\fP .PP The following commands are understood by FSVS: .SH "Local configuration and information:" .PP .IP "\fB\fBurls\fP\fP" 1c \fCDefine working copy base directories by their URL(s)\fP .IP "\fB\fBstatus\fP\fP" 1c \fCGet a list of changed entries\fP .IP "\fB\fBinfo\fP\fP" 1c \fCDisplay detailed information about single entries\fP .IP "\fB\fBlog\fP\fP" 1c \fCFetch the log messages from the repository\fP .IP "\fB\fBdiff\fP\fP" 1c \fCGet differences between files (local and remote)\fP .IP "\fB\fBcopyfrom-detect\fP\fP" 1c \fCAsk FSVS about probably copied/moved/renamed entries; see \fBcp\fP\fP .PP .SH "Defining which entries to take:" .PP .IP "\fB\fBignore\fP and \fBrign\fP\fP" 1c \fCDefine ignore patterns\fP .IP "\fB\fBunversion\fP\fP" 1c \fCRemove entries from versioning\fP .IP "\fB\fBadd\fP\fP" 1c \fCAdd entries that would be ignored\fP .IP "\fB\fBcp\fP, \fBmv\fP\fP" 1c \fCTell FSVS that entries were copied\fP .PP .SH "Commands working with the repository:" .PP .IP "\fB\fBcommit\fP\fP" 1c \fCSend changed data to the repository\fP .IP "\fB\fBupdate\fP\fP" 1c \fCGet updates from the repository\fP .IP "\fB\fBcheckout\fP\fP" 1c \fCFetch some part of the repository, and register it as working copy\fP .IP "\fB\fBcat\fP\fP" 1c \fCGet a file from the repository\fP .IP "\fB\fB\fCrevert\fP\fP and \fB\fCuncp\fP\fP\fP" 1c \fC\fCUndo local changes and entry markings\fP \fP .IP "\fB\fB\fCremote-status\fP\fP\fP" 1c \fC\fCAsk what an \fBupdate\fP would bring\fP \fP .PP .PP .SH "Property handling:" .PP \fC .IP "\fB\fBprop-set\fP\fP" 1c \fCSet user-defined properties\fP .IP "\fB\fBprop-get\fP\fP" 1c \fCAsk value of user-defined properties\fP .IP "\fB\fBprop-list\fP\fP" 1c \fCGet a list of user-defined properties\fP .PP \fP .PP .SH "Additional commands used for recovery and debugging:" .PP \fC .IP "\fB\fBexport\fP\fP" 1c \fCFetch some part of the repository\fP .IP "\fB\fBsync-repos\fP\fP" 1c \fCDrop local information about the
entries, and fetch the current list from the repository.\fP .PP \fP .PP \fC .PP \fBNote:\fP .RS 4 Multi-url-operations are relatively new; there might be rough edges. .RE .PP The \fBreturn code\fP is \fC0\fP for success, or \fC2\fP for an error. \fC1\fP is returned if the option \fBChecking for changes in a script\fP is used, and changes are found; see also \fBFiltering entries\fP.\fP .PP .SH "Universal options" .PP .SS "-V -- show version" \fC \fC-V\fP makes FSVS print the version and a copyright notice, and exit.\fP .PP .SS "-d and -D -- debugging" \fC If FSVS was compiled using \fC--enable-debug\fP you can enable printing of debug messages (to \fCSTDOUT\fP) with \fC-d\fP. Per default all messages are printed; if you're only interested in a subset, you can use \fC-D\fP \fIstart-of-function-name\fP. .PP .nf fsvs -d -D waa_ status .fi .PP would call the \fIstatus\fP action, printing all debug messages of all WAA functions - \fCwaa__init\fP, \fCwaa__open\fP, etc.\fP .PP \fC For more details on the other debugging options \fBdebug_output\fP and \fBdebug_buffer\fP please see the options list.\fP .PP .SS "-N, -R -- recursion" \fC The \fC-N\fP and \fC-R\fP switches in effect just decrement/increment a counter; the behaviour is chosen depending on that. 
So a command line of \fC-N -N -N -R -R\fP is equivalent to \fC-3 +2 = -1\fP, which results in \fC-N\fP.\fP .PP .SS "-q, -v -- verbose/quiet" \fC \fC-v\fP/\fC-q\fP set/clear verbosity flags, and so give more/less output.\fP\fP\fP\fP\fP .PP \fC\fC\fC\fC\fC Please see \fBthe verbose option\fP for more details.\fP\fP\fP\fP\fP .PP \fC\fC\fC\fC\fC \fP\fP\fP\fP\fP .SS "-C -- checksum" \fC\fC\fC\fC\fC \fC-C\fP chooses to use more change detection checks; please see \fBthe change_check option\fP for more details.\fP\fP\fP\fP\fP .PP \fC\fC\fC\fC\fC \fP\fP\fP\fP\fP .SS "-f -- filter entries" \fC\fC\fC\fC\fC This parameter allows you to do a bit of filtering of entries, or, for some operations, modification of the work done on given entries.\fP\fP\fP\fP\fP .PP \fC\fC\fC\fC\fC It requires a specification at the end, which can be any combination of \fCany\fP, \fCtext\fP, \fCnew\fP, \fCdeleted\fP (or \fCremoved\fP), \fCmeta\fP, \fCmtime\fP, \fCgroup\fP, \fCmode\fP, \fCchanged\fP or \fCowner\fP; \fCdefault\fP or \fCdef\fP use the default value.\fP\fP\fP\fP\fP .PP \fC\fC\fC\fC\fC By giving eg. the value \fCtext\fP, with a \fBstatus\fP action only entries that are new or changed are shown; with \fCmtime,group\fP only entries whose group or modification time has changed are printed.\fP\fP\fP\fP\fP .PP \fC\fC\fC\fC\fC .PP \fBNote:\fP .RS 4 Please see \fBChange detection\fP for some more information. .PP If an entry gets replaced with an entry of a different type (eg. a directory gets replaced by a file), that counts as \fCdeleted\fP \fBand\fP \fCnew\fP. .RE .PP If you use \fC-v\fP, it's used as \fCany\fP internally.\fP\fP\fP\fP\fP .PP \fC\fC\fC\fC\fC If you use the string \fCnone\fP, it resets the bitmask to \fBno\fP entries shown; then you can build a new mask. So \fCowner,none,any,none,delete\fP would show deleted entries.
If the value after all commandline parsing is \fCnone\fP, it is reset to the default.\fP\fP\fP\fP\fP .PP \fC\fC\fC\fC\fC \fP\fP\fP\fP\fP .SS "-W warning=action -- set warnings" \fC\fC\fC\fC\fC Here you can define the behaviour for certain situations that should not normally happen, but which you might encounter.\fP\fP\fP\fP\fP .PP \fC\fC\fC\fC\fC The general format here is \fIspecification\fP = \fIaction\fP, where \fIspecification\fP is a string matching the start of at least one of the defined situations, and \fIaction\fP is one of these: .IP "\(bu" 2 \fIonce\fP to print only a single warning, .IP "\(bu" 2 \fIalways\fP to print a warning message \fBevery\fP time, .IP "\(bu" 2 \fIstop\fP to abort the program, .IP "\(bu" 2 \fIignore\fP to simply ignore this situation, or .IP "\(bu" 2 \fIcount\fP to just count the number of occurrences. .PP \fP\fP\fP\fP\fP .PP \fC\fC\fC\fC\fC If \fIspecification\fP matches more than one situation, all of them are set; eg. for \fImeta=ignore\fP all of \fImeta-mtime\fP, \fImeta-user\fP etc. are ignored.\fP\fP\fP\fP\fP .PP \fC\fC\fC\fC\fC If at least one warning that is \fBnot\fP ignored is encountered during the program run, a list of warnings, along with the number of messages each would have printed with the setting \fIalways\fP, is displayed, to inform the user of possible problems.\fP\fP\fP\fP\fP .PP \fC\fC\fC\fC\fC The following situations can be handled with this: \fImeta-mtime\fP, \fImeta-user\fP, \fImeta-group\fP, \fImeta-umask\fP These warnings are issued if a meta-data property that was fetched from the repository couldn't be parsed. This can only happen if some other program or a user changes properties on entries. .br In this case you can use \fC-Wmeta=always\fP or \fC-Wmeta=count\fP, until the repository is clean again. .PP \fIno-urllist\fP This warning is issued if an \fBinfo\fP action is executed, but no URLs have been defined yet.
.PP \fIcharset-invalid\fP If the function \fCnl_langinfo(3)\fP couldn't return the name of the current character encoding, a default of UTF-8 is used. You might need that for a minimal system installation, eg. on recovery. .PP \fIchmod-eperm\fP, \fIchown-eperm\fP If you update a working copy as a normal user, and get to update a file which has another owner but which you may modify, you'll get errors because neither the user, group, nor mode can be set. .br This way you can make the errors non-fatal. .PP \fIchmod-other\fP, \fIchown-other\fP If you get an error other than \fCEPERM\fP in the situation above, you might find these useful. .PP \fImixed-rev-wc\fP If you specify some revision number on a \fBrevert\fP, it will complain that mixed-revision working copies are not allowed. .br While you cannot enable mixed-revision working copies (I'm working on that) you can avoid being told every time. .PP \fIpropname-reserved\fP It is normally not allowed to set a property with the \fBprop-set\fP action with a name matching some reserved prefixes. .PP \fIignpat-wcbase\fP This warning is issued if an \fBabsolute ignore pattern\fP does not match the working copy base directory. See \fBabsolute shell patterns\fP for more details. .PP \fIdiff-status\fP GNU diff has defined that it returns an exit code 2 in case of an error; sadly it returns that also for binary files, so that a simple \fCfsvs diff some-binary-file text-file\fP would abort without printing the diff for the second file. .br Because of this FSVS currently ignores the exit status of diff per default, but this can be changed by setting this option to eg. \fIstop\fP.
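The prefix-matching rule described above (a \fIspecification\fP sets the \fIaction\fP for every situation whose name starts with it) can be sketched as follows. This is an illustrative Python model only, not the fsvs implementation; \fCset_warning\fP and \fCfrom_env\fP are invented names for this example.

```python
# Illustrative model of -W specification=action resolution.

SITUATIONS = ["meta-mtime", "meta-user", "meta-group", "meta-umask",
              "no-urllist", "charset-invalid", "chmod-eperm", "chown-eperm",
              "chmod-other", "chown-other", "mixed-rev-wc",
              "propname-reserved", "ignpat-wcbase", "diff-status"]
ACTIONS = {"once", "always", "stop", "ignore", "count"}

def set_warning(spec, table=None):
    """Apply one 'specification=action' string to a warning table."""
    table = {} if table is None else table
    name, _, action = spec.partition("=")
    if action not in ACTIONS:
        raise ValueError("unknown action: %r" % action)
    matches = [s for s in SITUATIONS if s.startswith(name)]
    if not matches:
        raise ValueError("no situation matches %r" % name)
    for situation in matches:  # all prefix matches get the action
        table[situation] = action
    return table

def from_env(value):
    """FSVS_WARNINGS is a whitespace-separated list of specifications."""
    table = {}
    for spec in value.split():
        set_warning(spec, table)
    return table
```

With this model, \fCset_warning('meta=ignore')\fP sets all four \fCmeta-*\fP situations at once, matching the \fImeta=ignore\fP example in the text.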
.PP \fP\fP\fP\fP\fP .PP \fC\fC\fC\fC\fC Also an environment variable FSVS_WARNINGS is used and parsed; it is simply a whitespace-separated list of option specifications.\fP\fP\fP\fP\fP .PP \fC\fC\fC\fC\fC \fP\fP\fP\fP\fP .SS "-u URLname[@revision[:revision]] -- select URLs" \fC\fC\fC\fC\fC Some commands can be restricted to a subset of the defined URLs; the \fBupdate\fP command is an example.\fP\fP\fP\fP\fP .PP \fC\fC\fC\fC\fC If you have more than a single URL in use for your working copy, \fCupdate\fP normally updates \fBall\fP entries from \fBall\fP URLs. By using this parameter you can tell FSVS to update only the specified URLs.\fP\fP\fP\fP\fP .PP \fC\fC\fC\fC\fC The parameter can be used repeatedly; the value can have multiple URLs, separated by whitespace or one of \fC',;'\fP.\fP\fP\fP\fP\fP .PP \fC\fC\fC\fC\fC .PP .nf fsvs up -u base_install,boot@32 -u gcc .fi .PP \fP\fP\fP\fP\fP .PP \fC\fC\fC\fC\fC This would get \fCHEAD\fP of \fCbase_install\fP and \fCgcc\fP, and set the target revision of the \fCboot\fP URL \fBfor this command\fP to 32.\fP\fP\fP\fP\fP .PP \fC\fC\fC\fC\fC \fP\fP\fP\fP\fP .SS "-o [name[=value]] -- other options" \fC\fC\fC\fC\fC This is used for setting some seldom-used options, for which defaults can be set in a configuration file (to be implemented, currently only command-line).\fP\fP\fP\fP\fP .PP \fC\fC\fC\fC\fC For a list of these please see \fBFurther options for FSVS\fP.\fP\fP\fP\fP\fP .PP \fC\fC\fC\fC\fC \fP\fP\fP\fP\fP .SH "Signals" .PP \fC\fC\fC\fC\fC If you have a running FSVS, and you want to change its verbosity, you can send the process either \fCSIGUSR1\fP (to make it more verbose) or \fCSIGUSR2\fP (more quiet).\fP\fP\fP\fP\fP .PP \fC\fC\fC\fC\fC\fP\fP\fP\fP\fP .SH "add" .PP \fC\fC\fC\fC\fC .PP .nf fsvs add [-u URLNAME] PATH [PATH...] .fi .PP \fP\fP\fP\fP\fP .PP \fC\fC\fC\fC\fC With this command you can explicitly define entries to be versioned, even if they have a matching ignore pattern.
They will be sent to the repository on the next commit, just like other new entries, and will therefore be reported as \fINew\fP .\fP\fP\fP\fP\fP .PP \fC\fC\fC\fC\fC The \fC-u\fP option can be used if you have more than one URL defined for this working copy and want to have the entries pinned to this URL.\fP\fP\fP\fP\fP .PP \fC\fC\fC\fC\fC \fP\fP\fP\fP\fP .SS "Example" \fC\fC\fC\fC\fC Say, you're versioning your home directory, and gave an ignore pattern of \fC./.*\fP to ignore all \fC.*\fP entries in your home-directory. Now you want \fC.bashrc\fP, \fC.ssh/config\fP, and your complete \fC.kde3-tree\fP saved, just like other data.\fP\fP\fP\fP\fP .PP \fC\fC\fC\fC\fC So you tell fsvs to not ignore these entries: .PP .nf fsvs add .bashrc .ssh/config .kde3 .fi .PP Now the entries below \fC.kde3\fP would match your earlier \fC./.*\fP pattern (as a match at the beginning is sufficient), so you have to insert a negative ignore pattern (a \fItake\fP pattern): .PP .nf fsvs ignore prepend t./.kde3 .fi .PP Now a \fCfsvs st\fP would show your entries as \fINew\fP , and the next commit will send them to the repository.\fP\fP\fP\fP\fP .PP \fC\fC\fC\fC\fC\fP\fP\fP\fP\fP .SH "unversion" .PP \fC\fC\fC\fC\fC .PP .nf fsvs unversion PATH [PATH...] .fi .PP \fP\fP\fP\fP\fP .PP \fC\fC\fC\fC\fC This command flags the given paths locally as removed. On the next commit they will be deleted in the repository, and the local information about them will be removed, but not the entries themselves. So they will show up as \fINew\fP again, and you get another chance at ignoring them.\fP\fP\fP\fP\fP .PP \fC\fC\fC\fC\fC \fP\fP\fP\fP\fP .SS "Example" \fC\fC\fC\fC\fC Say, you're versioning your home directory, and found that you no longer want \fC.bash_history\fP and \fC.sh_history\fP versioned.
So you do .PP .nf fsvs unversion .bash_history .sh_history .fi .PP and these files will be reported as \fCd\fP (will be deleted, but only in the repository).\fP\fP\fP\fP\fP .PP \fC\fC\fC\fC\fC Then you do a .PP .nf fsvs commit .fi .PP \fP\fP\fP\fP\fP .PP \fC\fC\fC\fC\fC Now fsvs would report these files as \fCNew\fP , as it no longer knows anything about them; but that can be cured by .PP .nf fsvs ignore './.*sh_history' .fi .PP Now these two files won't be shown as \fINew\fP , either.\fP\fP\fP\fP\fP .PP \fC\fC\fC\fC\fC The example also shows why the given paths are not just entered as separate ignore patterns - they are just single cases of a (probably) much broader pattern.\fP\fP\fP\fP\fP .PP \fC\fC\fC\fC\fC .PP \fBNote:\fP .RS 4 If you didn't use some kind of escaping for the pattern, the shell would expand it to the actual filenames, which is (normally) not what you want. .RE .PP \fP\fP\fP\fP\fP .SH "_build_new_list" .PP \fC\fC\fC\fC\fC This is used mainly for debugging. It traverses the filesystem and builds a new entries file. In production it should not be used; as neither URLs nor the revision of the entries is known, information is lost by calling this function!\fP\fP\fP\fP\fP .PP \fC\fC\fC\fC\fC Look at \fBsync-repos\fP.\fP\fP\fP\fP\fP .PP \fC\fC\fC\fC\fC\fP\fP\fP\fP\fP .SH "delay" .PP \fC\fC\fC\fC\fC This command delays execution until time has passed at least to the next second after writing the data files used by FSVS (\fBdir\fP and \fBurls\fP).\fP\fP\fP\fP\fP .PP \fC\fC\fC\fC\fC This command is for use in scripts; where previously the \fBdelay\fP option was used, this can be substituted by the given command followed by the \fCdelay\fP command.\fP\fP\fP\fP\fP .PP \fC\fC\fC\fC\fC The advantage over the \fBdelay\fP option is that read-only commands can be used in the meantime.\fP\fP\fP\fP\fP .PP \fC\fC\fC\fC\fC An example: .PP .nf fsvs commit /etc/X11 -m 'Backup of X11' ... read-only commands, like 'status' fsvs delay /etc/X11 ...
read-write commands, like 'commit' .fi .PP \fP\fP\fP\fP\fP .PP \fC\fC\fC\fC\fC The optional path can point to any path in the WC.\fP\fP\fP\fP\fP .PP \fC\fC\fC\fC\fC In the testing framework it is used to save a bit of time; in normal operation, where FSVS commands are not so tightly packed, it is normally preferable to use the \fBdelay\fP option.\fP\fP\fP\fP\fP .PP \fC\fC\fC\fC\fC\fP\fP\fP\fP\fP .SH "cat" .PP \fC\fC\fC\fC\fC .PP .nf fsvs cat [-r rev] path .fi .PP \fP\fP\fP\fP\fP .PP \fC\fC\fC\fC\fC Fetches a file from the repository, and outputs it to \fCSTDOUT\fP. If no revision is specified, it defaults to BASE, ie. the current local revision number of the entry.\fP\fP\fP\fP\fP .PP \fC\fC\fC\fC\fC\fP\fP\fP\fP\fP .SH "checkout" .PP \fC\fC\fC\fC\fC .PP .nf fsvs checkout [path] URL [URLs...] .fi .PP \fP\fP\fP\fP\fP .PP \fC\fC\fC\fC\fC Sets one or more URLs for the current working directory (or the directory \fCpath\fP), and does a \fBcheckout\fP of these URLs.\fP\fP\fP\fP\fP .PP \fC\fC\fC\fC\fC Example: .PP .nf fsvs checkout . http://svn/repos/installation/machine-1/trunk .fi .PP \fP\fP\fP\fP\fP .PP \fC\fC\fC\fC\fC The distinction whether a directory is given or not is done based on the result of URL-parsing -- if it looks like an URL, it is used as an URL. .br Please mind that at most a single path is allowed; as soon as two non-URLs are found an error message is printed.\fP\fP\fP\fP\fP .PP \fC\fC\fC\fC\fC If no directory is given, \fC'.'\fP is used; this differs from the usual subversion usage, but might be better suited for usage as a recovery tool (where versioning \fC/\fP is common). Opinions welcome.\fP\fP\fP\fP\fP .PP \fC\fC\fC\fC\fC The given \fCpath\fP must exist, and \fBshould\fP be empty -- FSVS will abort on conflicts, ie. if files that should be created already exist.
.br If there's a need to create that directory, please say so; patches for some parameter like \fC-p\fP are welcome.\fP\fP\fP\fP\fP .PP \fC\fC\fC\fC\fC For a format definition of the URLs please see the chapter \fBFormat of URLs\fP and the \fBurls\fP and \fBupdate\fP commands.\fP\fP\fP\fP\fP .PP \fC\fC\fC\fC\fC Furthermore you might be interested in \fBUsing an alternate root directory\fP and \fBRecovery for a non-booting system\fP.\fP\fP\fP\fP\fP .PP \fC\fC\fC\fC\fC\fP\fP\fP\fP\fP .SH "commit" .PP \fC\fC\fC\fC\fC .PP .nf fsvs commit [-m 'message'|-F filename] [-v] [-C [-C]] [PATH [PATH ...]] .fi .PP \fP\fP\fP\fP\fP .PP \fC\fC\fC\fC\fC Commits (parts of) the current state of the working copy into the repository.\fP\fP\fP\fP\fP .PP \fC\fC\fC\fC\fC \fP\fP\fP\fP\fP .SS "Example" \fC\fC\fC\fC\fC The working copy is \fC/etc\fP , and it is set up and committed already. .br Then \fC/etc/hosts\fP and \fC/etc/inittab\fP got modified. Since these are unrelated changes, you'd like them to be in separate commits.\fP\fP\fP\fP\fP .PP \fC\fC\fC\fC\fC So you simply run these commands: .PP .nf fsvs commit -m 'Added some host' /etc/hosts fsvs commit -m 'Tweaked default runlevel' /etc/inittab .fi .PP \fP\fP\fP\fP\fP .PP \fC\fC\fC\fC\fC If the current directory is \fC/etc\fP you could even drop the \fC/etc/\fP in front, and use just the filenames.\fP\fP\fP\fP\fP .PP \fC\fC\fC\fC\fC Please see \fBstatus\fP for explanations on \fC-v\fP and \fC-C\fP . .br For advanced backup usage see also \fBthe 'commit-pipe' property\fP.\fP\fP\fP\fP\fP .PP \fC\fC\fC\fC\fC \fP\fP\fP\fP\fP .SH "cp" .PP \fC\fC\fC\fC\fC .PP .nf fsvs cp [-r rev] SRC DEST fsvs cp dump fsvs cp load .fi .PP \fP\fP\fP\fP\fP .PP \fC\fC\fC\fC\fC The \fCcopy\fP command marks \fCDEST\fP as a copy of \fCSRC\fP at revision \fCrev\fP, so that on the next commit of \fCDEST\fP the corresponding source path is sent as copy source.\fP\fP\fP\fP\fP .PP \fC\fC\fC\fC\fC The default value for \fCrev\fP is \fCBASE\fP, ie.
the revision the \fCSRC\fP (locally) is at.\fP\fP\fP\fP\fP .PP \fC\fC\fC\fC\fC Please note that this command \fBalways\fP works on a directory \fBstructure\fP - if you tell it to copy a directory, the \fBwhole\fP structure is marked as copy. That means that if some entries below the copy are missing, they are reported as removed from the copy on the next commit. .br (Of course it is possible to mark files as copied, too; non-recursive copies are not possible, but can be emulated by having parts of the destination tree removed.)\fP\fP\fP\fP\fP .PP \fC\fC\fC\fC\fC .PP \fBNote:\fP .RS 4 TODO: There will be differences in the exact usage - \fCcopy\fP will try to run the \fCcp\fP command, whereas \fCcopied\fP will just remember the relation. .RE .PP If this command is used without parameters, the currently defined relations are printed; please keep in mind that the \fBkey\fP is the destination name, ie. the 2nd line of each pair!\fP\fP\fP\fP\fP .PP \fC\fC\fC\fC\fC The input format for \fCload\fP is newline-separated - first a \fCSRC\fP line, followed by a \fCDEST\fP line, then a line with just a dot (\fC'.'\fP) as delimiter. If you've got filenames with newlines or other special characters, you have to give the paths as arguments.\fP\fP\fP\fP\fP .PP \fC\fC\fC\fC\fC Internally the paths are stored relative to the working copy base directory, and they're printed that way, too.\fP\fP\fP\fP\fP .PP \fC\fC\fC\fC\fC Later definitions are \fBappended\fP to the internal database; to undo mistakes, use the \fBuncopy\fP action.\fP\fP\fP\fP\fP .PP \fC\fC\fC\fC\fC .PP \fBNote:\fP .RS 4 \fBImportant:\fP User-defined properties like \fBfsvs:commit-pipe\fP are \fBnot\fP copied to the destinations, because of space/time issues (traversing through entire subtrees, copying a lot of property-files) and because it's not certain that this is really wanted. \fBTODO:\fP option for copying properties?
.PP As subversion currently treats a rename as copy+delete, the \fBmv\fP command is an alias to \fBcp\fP. .RE .PP If you need to give the filenames \fCdump\fP or \fCload\fP as first parameter for copyfrom relations, give some path, too, as in \fC'./dump'\fP.\fP\fP\fP\fP\fP .PP \fC\fC\fC\fC\fC .PP \fBNote:\fP .RS 4 The source is internally stored as URL with revision number, so that operations like these .PP .nf $ fsvs cp a b $ rm a/1 $ fsvs ci a $ fsvs ci b .fi .PP work - FSVS sends the old (too recent!) revision number as source, and so the local filelist stays consistent with the repository. .br But it is not implemented (yet) to give an URL as copyfrom source directly - we'd have to fetch a list of entries (and possibly the data!) from the repository. .RE .PP \fP\fP\fP\fP\fP .PP \fC\fC\fC\fC\fC\fP\fP\fP\fP\fP .SH "copyfrom-detect" .PP \fC\fC\fC\fC\fC .PP .nf fsvs copyfrom-detect [paths...] .fi .PP \fP\fP\fP\fP\fP .PP \fC\fC\fC\fC\fC This command tells FSVS to look through the new entries, and see whether it can find some that seem to be copied from others already known. .br It will output a list with source and destination path and why it could match.\fP\fP\fP\fP\fP .PP \fC\fC\fC\fC\fC This is just for information purposes and doesn't change any FSVS state (TODO: unless some option/parameter is set).\fP\fP\fP\fP\fP .PP \fC\fC\fC\fC\fC The list format is \fBon purpose\fP incompatible with the \fCload\fP syntax, as the best match normally has to be taken manually.\fP\fP\fP\fP\fP .PP \fC\fC\fC\fC\fC If \fBverbose\fP is used, an additional value giving the percentage of matching blocks, and the count of possibly copied entries is printed.\fP\fP\fP\fP\fP .PP \fC\fC\fC\fC\fC Example: .PP .nf $ fsvs copyfrom-list -v newfile1 md5:oldfileA newfile2 md5:oldfileB md5:oldfileC md5:oldfileD newfile3 inode:oldfileI manber=82.6:oldfileF manber=74.2:oldfileG manber=53.3:oldfileH ... 3 copyfrom relations found.
.fi .PP \fP\fP\fP\fP\fP .PP \fC\fC\fC\fC\fC The abbreviations are: \fImd5\fP The \fBMD5\fP of the new file is identical to that of one or more already committed files; there is no percentage. .PP \fIinode\fP The \fBdevice/inode\fP number is identical to the given known entry; this could mean that the old entry has been renamed or hardlinked. \fBNote:\fP Not all filesystems have persistent inode numbers (eg. NFS) - so depending on your filesystems this might not be a good indicator! .PP \fIname\fP The entry has the same name as another entry. .PP \fImanber\fP Analysing files of similar size shows some percentage of (variable-sized) \fBcommon blocks\fP (ignoring the order of the blocks). .PP \fIdirlist\fP The new directory has similar files to the old directory. .br The percentage is (number_of_common_entries)/(files_in_dir1 + files_in_dir2 - number_of_common_entries). .PP \fP\fP\fP\fP\fP .PP \fC\fC\fC\fC\fC .PP \fBNote:\fP .RS 4 \fBmanber\fP matching is not implemented yet. .PP If too many possible matches for an entry are found, not all are printed; only an indicator \fC...\fP is shown at the end. .RE .PP \fP\fP\fP\fP\fP .SH "uncp" .PP \fC\fC\fC\fC\fC .PP .nf fsvs uncopy DEST [DEST ...] .fi .PP \fP\fP\fP\fP\fP .PP \fC\fC\fC\fC\fC The \fCuncopy\fP command removes a \fCcopyfrom\fP mark from the destination entry. This will make the entry unknown again, and it will be reported as \fCNew\fP on the next invocations.\fP\fP\fP\fP\fP .PP \fC\fC\fC\fC\fC Only the base of a copy can be un-copied; if a directory structure was copied, and the given entry is just implicitly copied, this command will return an error.\fP\fP\fP\fP\fP .PP \fC\fC\fC\fC\fC This is not folded into \fBrevert\fP, because it's not clear whether \fCrevert\fP on copied, changed entries should restore the original copyfrom data or remove the copy attribute; by using another command this is no longer ambiguous.\fP\fP\fP\fP\fP .PP \fC\fC\fC\fC\fC Example: .PP .nf $ fsvs copy SourceFile DestFile # Whoops, was wrong!
$ fsvs uncopy DestFile .fi .PP \fP\fP\fP\fP\fP .PP \fC\fC\fC\fC\fC\fP\fP\fP\fP\fP .SH "diff" .PP \fC\fC\fC\fC\fC .PP .nf fsvs diff [-v] [-r rev[:rev2]] [-R] PATH [PATH...] .fi .PP \fP\fP\fP\fP\fP .PP \fC\fC\fC\fC\fC This command gives you diffs between local and repository files.\fP\fP\fP\fP\fP .PP \fC\fC\fC\fC\fC With \fC-v\fP the meta-data is additionally printed, and changes shown.\fP\fP\fP\fP\fP .PP \fC\fC\fC\fC\fC If you don't give the revision arguments, you get a diff of the base revision in the repository (the last commit) against your current local file. With one revision, you diff this repository version against your local file. With both revisions given, the difference between these repository versions is calculated.\fP\fP\fP\fP\fP .PP \fC\fC\fC\fC\fC You'll need the \fCdiff\fP program, as the files are simply passed as parameters to it.\fP\fP\fP\fP\fP .PP \fC\fC\fC\fC\fC The default is to do non-recursive diffs; so \fCfsvs diff .\fP will output the changes in all files \fBin the current directory\fP and below.\fP\fP\fP\fP\fP .PP \fC\fC\fC\fC\fC The output for special files is the diff of the internal subversion storage, which includes the type of the special file, but no newline at the end of the line (which \fCdiff\fP complains about).\fP\fP\fP\fP\fP .PP \fC\fC\fC\fC\fC For entries marked as copy the diff against the (clean) source entry is printed.\fP\fP\fP\fP\fP .PP \fC\fC\fC\fC\fC Please see also \fBOptions relating to the 'diff' action\fP and \fBUsing colordiff\fP.\fP\fP\fP\fP\fP .PP \fC\fC\fC\fC\fC \fP\fP\fP\fP\fP .PP \fC\fC\fC\fC\fC\fP\fP\fP\fP\fP .SH "export" .PP \fC\fC\fC\fC\fC .PP .nf fsvs export REPOS_URL [-r rev] .fi .PP \fP\fP\fP\fP\fP .PP \fC\fC\fC\fC\fC If you want to export a directory from your repository \fBwithout\fP storing any FSVS-related data you can use this command.\fP\fP\fP\fP\fP .PP \fC\fC\fC\fC\fC This restores all meta-data - owner, group, access mask and modification time; its primary use is for data 
recovery.\fP\fP\fP\fP\fP .PP \fC\fC\fC\fC\fC The data gets written (in the correct directory structure) below the current working directory; if entries already exist, the export will stop, so this should be an empty directory.\fP\fP\fP\fP\fP .PP \fC\fC\fC\fC\fC\fP\fP\fP\fP\fP .SH "help" .PP \fC\fC\fC\fC\fC .PP .nf help [command] .fi .PP \fP\fP\fP\fP\fP .PP \fC\fC\fC\fC\fC This command shows general or specific \fBhelp\fP (for the given command). A similar function is available by using \fC-h\fP or \fC-?\fP after a command.\fP\fP\fP\fP\fP .PP \fC\fC\fC\fC\fC\fP\fP\fP\fP\fP .SH "groups" .PP \fC\fC\fC\fC\fC \fP\fP\fP\fP\fP .PP \fC\fC\fC\fC\fC .PP .nf fsvs groups dump|load fsvs groups [prepend|append|at=n] group-definition [group-def ...] fsvs ignore [prepend|append|at=n] pattern [pattern ...] fsvs groups test [-v|-q] [pattern ...] .fi .PP \fP\fP\fP\fP\fP .PP \fC\fC\fC\fC\fC This command adds patterns to the end of the pattern list, or, with \fCprepend\fP, puts them at the beginning of the list. With \fCat=x\fP the patterns are inserted at the position \fCx\fP , counting from 0.\fP\fP\fP\fP\fP .PP \fC\fC\fC\fC\fC The difference between \fCgroups\fP and \fCignore\fP is that \fCgroups\fP \fBrequires\fP a group name, whereas the latter just assumes the default group \fCignore\fP.\fP\fP\fP\fP\fP .PP \fC\fC\fC\fC\fC For the specification please see the related \fBdocumentation\fP .\fP\fP\fP\fP\fP .PP \fC\fC\fC\fC\fC \fCfsvs dump\fP prints the patterns to \fCSTDOUT\fP . If there are special characters like \fCCR\fP or \fCLF\fP embedded in the pattern \fBwithout encoding\fP (like \fC\\r\fP or \fC\\n\fP), the output will be garbled.\fP\fP\fP\fP\fP .PP \fC\fC\fC\fC\fC The patterns may include \fC*\fP and \fC?\fP as wildcards in one directory level, or \fC**\fP for arbitrary strings.\fP\fP\fP\fP\fP .PP \fC\fC\fC\fC\fC These patterns are only matched against new (not yet known) files; entries that are already versioned are not invalidated.
.br If the given path matches a new directory, entries below aren't found, either; but if this directory or entries below are already versioned, the pattern doesn't work, as the match is restricted to the directory.\fP\fP\fP\fP\fP .PP \fC\fC\fC\fC\fC So: .PP .nf fsvs ignore ./tmp .fi .PP ignores the directory \fCtmp\fP; but if it has already been committed, existing entries would have to be unmarked with \fBfsvs unversion\fP. Normally it's better to use .PP .nf fsvs ignore ./tmp/** .fi .PP as that takes the directory itself (which might be needed after restore as a mount point anyway), but ignores \fBall\fP entries below. .br Currently this has the drawback that mtime changes will be reported and committed; this is not the case if the whole directory is ignored.\fP\fP\fP\fP\fP .PP \fC\fC\fC\fC\fC Examples: .PP .nf fsvs group group:unreadable,mode:4:0 fsvs group 'group:secrets,/etc/*shadow' fsvs ignore /proc fsvs ignore /dev/pts fsvs ignore './var/log/*-*' fsvs ignore './**~' fsvs ignore './**/*.bak' fsvs ignore prepend 'take,./**.txt' fsvs ignore append 'take,./**.svg' fsvs ignore at=1 './**.tmp' fsvs group dump fsvs group dump -v echo './**.doc' | fsvs ignore load # Replaces the whole list .fi .PP \fP\fP\fP\fP\fP .PP \fC\fC\fC\fC\fC .PP \fBNote:\fP .RS 4 Please take care that your wildcard patterns are not expanded by the shell! .RE .PP \fP\fP\fP\fP\fP .SS "Testing patterns" \fC\fC\fC\fC\fC To see more easily what different patterns do you can use the \fCtest\fP subcommand. The following combinations are available: .PD 0 .IP "\(bu" 2 \fCfsvs groups test \fIpattern\fP\fP Tests \fBonly\fP the given pattern against all new entries in your working copy, and prints the matching paths. The pattern is not stored in the pattern list. .IP "\(bu" 2 \fCfsvs groups test\fP .br Uses the already defined patterns on the new entries, and prints the group name, a tab, and the path. .br With \fC-v\fP you can see the matching pattern in the middle column, too.
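The wildcard semantics described above -- \fC*\fP and \fC?\fP match within one directory level, \fC**\fP matches arbitrary strings, and a match at the beginning is sufficient -- can be approximated by a translation to regular expressions. This is an illustrative Python sketch only, not the matcher fsvs actually uses; \fCpattern_to_regex\fP and \fCmatches\fP are invented names.

```python
import re

# Illustrative translation of the shell-pattern wildcards:
#   '**' -> '.*'     (arbitrary strings, may cross '/')
#   '*'  -> '[^/]*'  (within one directory level)
#   '?'  -> '[^/]'   (one character within a level)

def pattern_to_regex(pattern):
    out = []
    i = 0
    while i < len(pattern):
        if pattern.startswith("**", i):
            out.append(".*")
            i += 2
        elif pattern[i] == "*":
            out.append("[^/]*")
            i += 1
        elif pattern[i] == "?":
            out.append("[^/]")
            i += 1
        else:
            out.append(re.escape(pattern[i]))
            i += 1
    # re.match anchors at the start only: a match at the
    # beginning of the path is sufficient.
    return re.compile("".join(out))

def matches(pattern, path):
    return pattern_to_regex(pattern).match(path) is not None
```

Under this model \fC./tmp/**\fP matches everything below \fC./tmp\fP, while \fC./var/log/*-*\fP stays within the \fC./var/log\fP level, mirroring the examples above.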
.PP \fP\fP\fP\fP\fP .PP \fC\fC\fC\fC\fC By using \fC-q\fP you can avoid getting the whole list; this makes sense if you use the \fBgroup_stats\fP option at the same time.\fP\fP\fP\fP\fP .PP \fC\fC\fC\fC\fC\fP\fP\fP\fP\fP .SH "rign" .PP \fC\fC\fC\fC\fC .PP .nf fsvs rel-ignore [prepend|append|at=n] path-spec [path-spec ...] fsvs ri [prepend|append|at=n] path-spec [path-spec ...] .fi .PP \fP\fP\fP\fP\fP .PP \fC\fC\fC\fC\fC If you keep the same repository data at more than one working copy on the same machine, it will be stored in different paths - and that makes absolute ignore patterns infeasible. But relative ignore patterns are anchored at the beginning of the WC root - which is a bit tiring to type if you're deep in your WC hierarchy and want to ignore some files.\fP\fP\fP\fP\fP .PP \fC\fC\fC\fC\fC To make that easier you can use the \fCrel-ignore\fP (abbreviated as \fCri\fP) command; this converts all given path-specifications (which may include wildcards as per the shell pattern specification above) to WC-relative values before storing them.\fP\fP\fP\fP\fP .PP \fC\fC\fC\fC\fC Example for \fC/etc\fP as working copy root: .PP .nf fsvs rel-ignore '/etc/X11/xorg.conf.*' cd /etc/X11 fsvs rel-ignore 'xorg.conf.*' .fi .PP Both commands would store the pattern './X11/xorg.conf.*'.\fP\fP\fP\fP\fP .PP \fC\fC\fC\fC\fC .PP \fBNote:\fP .RS 4 This works only for \fBshell patterns\fP. .RE .PP For more details about ignoring files please see the \fBignore\fP command and \fBSpecification of groups and patterns\fP.\fP\fP\fP\fP\fP .PP \fC\fC\fC\fC\fC\fP\fP\fP\fP\fP .SH "info" .PP \fC\fC\fC\fC\fC .PP .nf fsvs info [-R [-R]] [PATH...] .fi .PP \fP\fP\fP\fP\fP .PP \fC\fC\fC\fC\fC Use this command to show information regarding one or more entries in your working copy. 
.br You can use \fC-v\fP to obtain slightly more information.\fP\fP\fP\fP\fP .PP \fC\fC\fC\fC\fC This may sometimes be helpful for locating bugs, or to obtain the URL and revision a working copy is currently at.\fP\fP\fP\fP\fP .PP \fC\fC\fC\fC\fC Example: .PP .nf $ fsvs info URL: file: .... 200 . Type: directory Status: 0x0 Flags: 0x100000 Dev: 0 Inode: 24521 Mode: 040755 UID/GID: 1000/1000 MTime: Thu Aug 17 16:34:24 2006 CTime: Thu Aug 17 16:34:24 2006 Revision: 4 Size: 200 .fi .PP \fP\fP\fP\fP\fP .PP \fC\fC\fC\fC\fC The default is to print information about the given entry only. With a single \fC-R\fP you'll get this data about \fBall\fP entries of a given directory; with another \fC-R\fP you'll get the whole (sub-)tree.\fP\fP\fP\fP\fP .PP \fC\fC\fC\fC\fC\fP\fP\fP\fP\fP .SH "log" .PP \fC\fC\fC\fC\fC .PP .nf fsvs log [-v] [-r rev1[:rev2]] [-u name] [path] .fi .PP \fP\fP\fP\fP\fP .PP \fC\fC\fC\fC\fC This command views the revision log information associated with the given \fIpath\fP at its topmost URL, or, if none is given, the highest priority URL.\fP\fP\fP\fP\fP .PP \fC\fC\fC\fC\fC The optional \fIrev1\fP and \fIrev2\fP can be used to restrict the revisions that are shown; if no values are given, the logs are shown starting from \fCHEAD\fP downwards, and then a limit on the number of revisions is applied (but see the \fBlimit\fP option).\fP\fP\fP\fP\fP .PP \fC\fC\fC\fC\fC If you use the \fB-v\fP option, you get the files changed in each revision printed, too.\fP\fP\fP\fP\fP .PP \fC\fC\fC\fC\fC There is an option controlling the output format; see the \fBlog_output option\fP.\fP\fP\fP\fP\fP .PP \fC\fC\fC\fC\fC Optionally the name of an URL can be given after \fC-u\fP; then the log of this URL, instead of the topmost one, is shown.\fP\fP\fP\fP\fP .PP \fC\fC\fC\fC\fC TODOs: .IP "\(bu" 2 \fC--stop-on-copy\fP .IP "\(bu" 2 Show revision for \fBall\fP URLs associated with a working copy? In which order?
.PP \fP\fP\fP\fP\fP .PP \fC\fC\fC\fC\fC\fP\fP\fP\fP\fP .SH "prop-get" .PP \fC\fC\fC\fC\fC .PP .nf fsvs prop-get PROPERTY-NAME PATH... .fi .PP \fP\fP\fP\fP\fP .PP \fC\fC\fC\fC\fC Prints the data of the given property to \fCSTDOUT\fP.\fP\fP\fP\fP\fP .PP \fC\fC\fC\fC\fC .PP \fBNote:\fP .RS 4 Be careful! This command will dump the property \fBas it is\fP, ie. with any special characters! If there are escape sequences or binary data in the property, your terminal might get messed up! .br If you want a safe way to look at the properties, use prop-list with the \fC-v\fP parameter. .RE .PP \fP\fP\fP\fP\fP .SH "prop-set" .PP \fC\fC\fC\fC\fC .PP .nf fsvs prop-set [-u URLNAME] PROPERTY-NAME VALUE PATH... .fi .PP \fP\fP\fP\fP\fP .PP \fC\fC\fC\fC\fC This command sets an arbitrary property value for the given path(s).\fP\fP\fP\fP\fP .PP \fC\fC\fC\fC\fC .PP \fBNote:\fP .RS 4 Some property prefixes are reserved; currently everything starting with \fCsvn:\fP throws a (fatal) warning, and \fCfsvs:\fP is already used, too. See \fBSpecial property names\fP. .RE .PP If you're using a multi-URL setup, and the entry you'd like to work on should be pinned to a specific URL, you can use the \fC-u\fP parameter; this is like the \fBadd\fP command, see there for more details.\fP\fP\fP\fP\fP .PP \fC\fC\fC\fC\fC\fP\fP\fP\fP\fP .SH "prop-del" .PP \fC\fC\fC\fC\fC .PP .nf fsvs prop-del PROPERTY-NAME PATH... .fi .PP \fP\fP\fP\fP\fP .PP \fC\fC\fC\fC\fC This command removes a property for the given path(s).\fP\fP\fP\fP\fP .PP \fC\fC\fC\fC\fC See also \fBprop-set\fP.\fP\fP\fP\fP\fP .PP \fC\fC\fC\fC\fC\fP\fP\fP\fP\fP .SH "prop-list" .PP \fC\fC\fC\fC\fC .PP .nf fsvs prop-list [-v] PATH... .fi .PP \fP\fP\fP\fP\fP .PP \fC\fC\fC\fC\fC Lists the names of all properties for the given entry. 
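The safe display that \fCprop-list\fP \fC-v\fP provides (as mentioned in the \fCprop-get\fP note above) boils down to escaping control characters before printing. This is an illustrative Python sketch only; fsvs's actual translation rules may differ:

```python
def terminal_safe(value):
    """Escape control characters (and backslash, to stay unambiguous)
    so an arbitrary property value can be printed to a terminal."""
    out = []
    for ch in value:
        if ch == "\\":
            out.append("\\\\")               # escape the escape character
        elif ch in "\n\t" or ch.isprintable():
            out.append(ch)                   # harmless as-is
        else:
            out.append("\\x%02x" % ord(ch))  # e.g. ESC becomes \x1b
    return "".join(out)
```

A value containing an ANSI clear-screen sequence, for example, is shown as the literal text instead of clearing the screen.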
.br With \fC-v\fP, the value is printed as well; special characters will be translated, as arbitrary binary sequences could interfere with your terminal settings.\fP\fP\fP\fP\fP .PP \fC\fC\fC\fC\fC If you need raw output, post a patch for \fC--raw\fP, or write a loop with \fBprop-get\fP.\fP\fP\fP\fP\fP .PP \fC\fC\fC\fC\fC\fP\fP\fP\fP\fP .SH "remote-status" .PP \fC\fC\fC\fC\fC .PP .nf fsvs remote-status PATH [-r rev] .fi .PP \fP\fP\fP\fP\fP .PP \fC\fC\fC\fC\fC This command looks into the repository and tells you which files would get changed on an \fBupdate\fP - it's a dry-run for \fBupdate\fP.\fP\fP\fP\fP\fP .PP \fC\fC\fC\fC\fC Per default it compares to \fCHEAD\fP, but you can choose another revision with the \fC-r\fP parameter.\fP\fP\fP\fP\fP .PP \fC\fC\fC\fC\fC Please see the \fBupdate\fP documentation for details regarding multi-URL usage.\fP\fP\fP\fP\fP .PP \fC\fC\fC\fC\fC\fP\fP\fP\fP\fP .SH "resolve" .PP \fC\fC\fC\fC\fC .PP .nf fsvs resolve PATH [PATH...] .fi .PP \fP\fP\fP\fP\fP .PP \fC\fC\fC\fC\fC When FSVS tries to update local files which have been changed, a conflict might occur. (For various ways of handling these please see the \fBconflict\fP option.)\fP\fP\fP\fP\fP .PP \fC\fC\fC\fC\fC This command lets you mark such conflicts as resolved.\fP\fP\fP\fP\fP .PP \fC\fC\fC\fC\fC\fP\fP\fP\fP\fP .SH "revert" .PP \fC\fC\fC\fC\fC .PP .nf fsvs revert [-rRev] [-R] PATH [PATH...] .fi .PP \fP\fP\fP\fP\fP .PP \fC\fC\fC\fC\fC This command undoes local modifications: .IP "\(bu" 2 An entry that is marked to be unversioned gets this flag removed. .IP "\(bu" 2 For an already versioned entry (existing in the repository) the local entry is replaced with its repository version, and its status and flags are cleared. .IP "\(bu" 2 An entry that is a \fBmodified\fP copy destination gets reverted to the copy source data. 
.IP "\(bu" 2 Manually added entries are changed back to \fI'N'\fPew.\fB\fP .PP \fP\fP\fP\fP\fP .PP \fC\fC\fC\fC\fC Please note that implicitly copied entries, ie. entries that are marked as copied because some parent directory is the base of a copy, \fBcan not\fP be un-copied; they can only be reverted to their original (copied-from) data, or removed.\fP\fP\fP\fP\fP .PP \fC\fC\fC\fC\fC If you want to undo a \fCcopy\fP operation, please see the \fBuncopy\fP command.\fP\fP\fP\fP\fP .PP \fC\fC\fC\fC\fC See also \fBHOWTO: Understand the entries' statii\fP.\fP\fP\fP\fP\fP .PP \fC\fC\fC\fC\fC If a directory is given on the command line, \fBall versioned entries in this directory\fP are reverted to the old state; this behaviour can be modified with \fB-R/-N\fP, or see below.\fP\fP\fP\fP\fP .PP \fC\fC\fC\fC\fC The reverted entries are printed, along with the status they had \fBbefore\fP the revert (because the new status is by definition \fIunchanged\fP).\fP\fP\fP\fP\fP .PP \fC\fC\fC\fC\fC If a revision is given, the entries' data is taken from this revision; furthermore, the \fBnew\fP status of that entry is shown.\fP\fP\fP\fP\fP .PP \fC\fC\fC\fC\fC .PP \fBNote:\fP .RS 4 Please note that mixed revision working copies are not (yet) possible; the \fIBASE\fP revision is not changed, and a simple \fCrevert\fP without a revision argument gives you that. .br By giving a revision parameter you can just choose to get the text from a different revision. .RE .PP \fP\fP\fP\fP\fP .SS "Difference to update" \fC\fC\fC\fC\fC If something doesn't work as it should in the installation you can revert entries until you are satisfied, and directly \fBcommit\fP the new state.\fP\fP\fP\fP\fP .PP \fC\fC\fC\fC\fC In contrast, if you \fBupdate\fP to an older version, you .IP "\(bu" 2 cannot choose single entries (no mixed revision working copies yet), .IP "\(bu" 2 and you cannot commit the old version with changes, as the 'skipped' (later) changes will create conflicts in the repository. 
.PP \fP\fP\fP\fP\fP .PP \fC\fC\fC\fC\fC \fP\fP\fP\fP\fP .SS "Currently only known entries are handled." \fC\fC\fC\fC\fC If you need a switch (like \fC--delete\fP in \fCrsync(1)\fP ) to remove unknown (new, not yet versioned) entries, to get the directory in the exact state it is in the repository, please tell the \fCdev@\fP mailing list.\fP\fP\fP\fP\fP .PP \fC\fC\fC\fC\fC \fP\fP\fP\fP\fP .SS "Removed directory structures" \fC\fC\fC\fC\fC If a path is specified whose parent is missing, \fCfsvs\fP complains. .br We plan to provide a switch (probably \fC-p\fP), which would create (a sparse) tree up to this entry.\fP\fP\fP\fP\fP .PP \fC\fC\fC\fC\fC \fP\fP\fP\fP\fP .SS "Recursive behaviour" \fC\fC\fC\fC\fC When the user specifies a non-directory entry (file, device, symlink), this entry is reverted to the old state.\fP\fP\fP\fP\fP .PP \fC\fC\fC\fC\fC If the user specifies a directory entry, these definitions should apply (command line switch, and its result): .IP "\(bu" 2 \fC-N\fP: this directory only (meta-data), .IP "\(bu" 2 none: this directory, and direct children of the directory, .IP "\(bu" 2 \fC-R\fP: this directory, and the complete tree below. \fP\fP\fP\fP\fP .PP \fC\fC\fC\fC\fC \fP\fP\fP\fP\fP .SS "Working with copied entries" \fC\fC\fC\fC\fC If an entry is marked as copied from another entry (and not committed!), a \fCrevert\fP will fetch the original copyfrom source. To undo the copy setting use the \fBuncopy\fP command.\fP\fP\fP\fP\fP .PP \fC\fC\fC\fC\fC\fP\fP\fP\fP\fP .SH "status" .PP \fC\fC\fC\fC\fC .PP .nf fsvs status [-C [-C]] [-v] [-f filter] [PATHs...] .fi .PP \fP\fP\fP\fP\fP .PP \fC\fC\fC\fC\fC This command shows the entries that have been changed locally since the last commit.\fP\fP\fP\fP\fP .PP \fC\fC\fC\fC\fC The most important output formats are: .IP "\(bu" 2 A status column of four (or, with \fC-v\fP , six) characters. There are either flags or a '.' printed, so that it's easily parsed by scripts -- the number of columns is only changed by \fB-q, -v -- verbose/quiet\fP. 
.IP "\(bu" 2 The size of the entry, in bytes, or \fC'dir'\fP for a directory, or \fC'dev'\fP for a device. .IP "\(bu" 2 The path and name of the entry, formatted by the \fBpath\fP option. .PP \fP\fP\fP\fP\fP .PP \fC\fC\fC\fC\fC Normally only changed entries are printed; with \fC-v\fP all are printed, but see the \fBfilter\fP option for more details.\fP\fP\fP\fP\fP .PP \fC\fC\fC\fC\fC The status column can show the following flags: .IP "\(bu" 2 \fC 'D'\fP and \fC'N'\fP are used for \fIdeleted\fP and \fInew\fP entries. .IP "\(bu" 2 \fC 'd'\fP and \fC'n'\fP are used for entries which are to be unversioned or added on the next commit; the characters were chosen as \fIlittle delete\fP (only in the repository, not removed locally) and \fIlittle new\fP (although \fBignored\fP). See \fBadd\fP and \fBunversion\fP. .br If such an entry does not exist, it is marked with an \fC'!'\fP in the last column -- because it has been manually marked, and so the removal is unexpected. .IP "\(bu" 2 A changed type (character device to symlink, file to directory etc.) is given as \fC'R'\fP (replaced), ie. as removed and newly added. .IP "\(bu" 2 If the entry has been modified, the change is shown as \fC'C'\fP. .br If the modification or status change timestamps (mtime, ctime) are changed, but the size is still the same, the entry is marked as possibly changed (a question mark \fC'?'\fP in the last column) - but see \fBchange detection\fP for details. .IP "\(bu" 2 An \fC'x'\fP signifies a conflict. .IP "\(bu" 2 The meta-data flag \fC'm'\fP shows meta-data changes like properties, modification timestamp and/or the rights (owner, group, mode); depending on the \fB-v/-q\fP command line parameters, it may be split into \fC'P'\fP (properties), \fC't'\fP (time) and \fC'p'\fP (permissions). .br If \fC'P'\fP is shown for the non-verbose case, it means \fBonly\fP property changes, ie. the entry's filesystem meta-data is unchanged. 
.IP "\(bu" 2 A \fC'+'\fP is printed for files with a copy-from history; to see the URL of the copyfrom source, see the \fBverbose\fP option. .PP \fP\fP\fP\fP\fP .PP \fC\fC\fC\fC\fC Here's a table with the characters and their positions: .PP .nf * Without -v With -v * .... ...... * NmC? NtpPC? * DPx! D x! * R + R + * d d * n n * .fi .PP \fP\fP\fP\fP\fP .PP \fC\fC\fC\fC\fC Furthermore please take a look at the \fBstat_color\fP option, and for more information about displayed data the \fBverbose\fP option.\fP\fP\fP\fP\fP .PP \fC\fC\fC\fC\fC\fP\fP\fP\fP\fP .SH "sync-repos" .PP \fC\fC\fC\fC\fC .PP .nf fsvs sync-repos [-r rev] [working copy base] .fi .PP \fP\fP\fP\fP\fP .PP \fC\fC\fC\fC\fC This command loads the file list afresh from the repository. .br A following commit will send all differences and make the repository data identical to the local.\fP\fP\fP\fP\fP .PP \fC\fC\fC\fC\fC This is normally not needed; the only use cases are .IP "\(bu" 2 debugging and .IP "\(bu" 2 recovering from data loss in the \fB$FSVS_WAA\fP area. .PP \fP\fP\fP\fP\fP .PP \fC\fC\fC\fC\fC It might be of use if you want to backup two similar machines. Then you could commit one machine into a subdirectory of your repository, make a copy of that directory for another machine, and \fCsync\fP this other directory on the other machine.\fP\fP\fP\fP\fP .PP \fC\fC\fC\fC\fC A commit then will transfer only _changed_ files; so if the two machines share 2GB of binaries (\fC/usr\fP , \fC/bin\fP , \fC/lib\fP , ...) then these 2GB are still shared in the repository, although over time they will deviate (as both committing machines know nothing of the other path with identical files).\fP\fP\fP\fP\fP .PP \fC\fC\fC\fC\fC This kind of backup could be substituted by two or more levels of repository paths, which get \fIoverlaid\fP in a defined priority. 
So the base directory, which all machines derive from, will be committed from one machine, and it's no longer necessary for all machines to send identical files into the repository.\fP\fP\fP\fP\fP .PP \fC\fC\fC\fC\fC The revision argument should only ever be used for debugging; if you fetch a filelist for a revision, and then commit against later revisions, problems are bound to occur.\fP\fP\fP\fP\fP .PP \fC\fC\fC\fC\fC .PP \fBNote:\fP .RS 4 There's issue 2286 in subversion which describes sharing identical files in the repository in unrelated paths. Using that would relax the storage needs; but the network transfers would still be much larger than with the overlaid paths. .RE .PP \fP\fP\fP\fP\fP .SH "update" .PP \fC\fC\fC\fC\fC .PP .nf fsvs update [-r rev] [working copy base] fsvs update [-u url@rev ...] [working copy base] .fi .PP \fP\fP\fP\fP\fP .PP \fC\fC\fC\fC\fC This command does an update on the current working copy; per default for all defined URLs, but you can restrict that via \fB-u\fP.\fP\fP\fP\fP\fP .PP \fC\fC\fC\fC\fC It first reads all filelist changes from the repositories, overlays them (so that only the highest-priority entries are used), and then fetches all necessary changes.\fP\fP\fP\fP\fP .PP \fC\fC\fC\fC\fC \fP\fP\fP\fP\fP .SS "Updating to zero" \fC\fC\fC\fC\fC If you start an update with a target revision of zero, the entries belonging to that URL will be removed from your working copy, and the URL deleted from your URL list. .br This is a convenient way to replace an URL with another. .br \fP\fP\fP\fP\fP .PP \fC\fC\fC\fC\fC .PP \fBNote:\fP .RS 4 As FSVS has no full mixed revision support yet, it doesn't know whether there is a lower-priority entry with the same path beneath the removed one, which should become visible now. .br Directories get changed to the highest priority URL that has an entry below (which might be hidden!). 
.RE .PP Because of this you're advised to either use that only for completely distinct working copies, or do a \fBsync-repos\fP (and possibly one or more \fBrevert\fP calls) after the update.\fP\fP\fP\fP\fP .PP \fC\fC\fC\fC\fC\fP\fP\fP\fP\fP .SH "urls" .PP \fC\fC\fC\fC\fC .PP .nf fsvs urls URL [URLs...] fsvs urls dump fsvs urls load .fi .PP \fP\fP\fP\fP\fP .PP \fC\fC\fC\fC\fC Initializes a working copy administrative area and connects \fCthe\fP current working directory to \fCREPOS_URL\fP. All commits and updates will be done to this directory and against the given URL.\fP\fP\fP\fP\fP .PP \fC\fC\fC\fC\fC Example: .PP .nf fsvs urls http://svn/repos/installation/machine-1/trunk .fi .PP \fP\fP\fP\fP\fP .PP \fC\fC\fC\fC\fC For a format definition of the URLs please see the chapter \fBFormat of URLs\fP.\fP\fP\fP\fP\fP .PP \fC\fC\fC\fC\fC .PP \fBNote:\fP .RS 4 If there are already URLs defined, and you use that command later again, please note that as of 1.0.18 \fBthe older URLs are not overwritten\fP as before, but that the new URLs are \fBappended\fP to the given list! 
If you want to start afresh, use something like .PP .nf true | fsvs urls load .fi .PP .RE .PP \fP\fP\fP\fP\fP .SS "Loading URLs" \fC\fC\fC\fC\fC You can load a list of URLs from \fCSTDIN\fP; use the \fCload\fP subcommand for that.\fP\fP\fP\fP\fP .PP \fC\fC\fC\fC\fC Example: .PP .nf ( echo 'N:local,prio:10,http://svn/repos/install/machine-1/trunk' ; echo 'P:50,name:common,http://svn/repos/install/common/trunk' ) | fsvs urls load .fi .PP \fP\fP\fP\fP\fP .PP \fC\fC\fC\fC\fC Empty lines are ignored.\fP\fP\fP\fP\fP .PP \fC\fC\fC\fC\fC \fP\fP\fP\fP\fP .SS "Dumping the defined URLs" \fC\fC\fC\fC\fC To see which URLs are in use for the current WC, you can use \fCdump\fP.\fP\fP\fP\fP\fP .PP \fC\fC\fC\fC\fC As an optional parameter you can give a format statement: \fCp\fP priority, \fCn\fP name, \fCr\fP current revision, \fCt\fP target revision, \fCR\fP readonly-flag, \fCu\fP URL, \fCI\fP internal number for this URL \fP\fP\fP\fP\fP .PP \fC\fC\fC\fC\fC .PP \fBNote:\fP .RS 4 That's not a real \fCprintf()-format\fP; only these and a few \fC\\\fP sequences are recognized. .RE .PP Example: .PP .nf fsvs urls dump ' %u %n:%p\\n' http://svn/repos/installation/machine-1/trunk local:10 http://svn/repos/installation/common/trunk common:50 .fi .PP \fP\fP\fP\fP\fP .PP \fC\fC\fC\fC\fC The default format is \fC'name:%n,prio:%p,target:%t,ro:%r,%u\\\\n'\fP; for a more readable version you can use \fB-v\fP.\fP\fP\fP\fP\fP .PP \fC\fC\fC\fC\fC \fP\fP\fP\fP\fP .SS "Modifying URLs" \fC\fC\fC\fC\fC You can change the various parameters of the defined URLs like this: .PP .nf # Define an URL fsvs urls name:url1,target:77,readonly:1,http://anything/... # Change values fsvs urls name:url1,target:HEAD fsvs urls readonly:0,http://anything/... fsvs urls name:url1,prio:88,target:32 .fi .PP \fP\fP\fP\fP\fP .PP \fC\fC\fC\fC\fC .PP \fBNote:\fP .RS 4 FSVS as yet doesn't store the whole tree structures of all URLs. 
So if you change the priority of an URL, and re-mix the directory trees that way, you'll need a \fBsync-repos\fP and some \fBrevert\fP commands. I'd suggest to avoid this, until FSVS does handle that case better. .RE .PP \fP\fP\fP\fP\fP .SH "Author" .PP Generated automatically by Doxygen for fsvs from the source code. fsvs-fsvs-1.2.12/doc/notice.txt000066400000000000000000000004441453631713700163400ustar00rootroot00000000000000Many of the files in this directory are autogenerated from the comments in the source files. It might be better to change them; but I'll accept documentation patches, too. (I just have to put the changes back into the source files). If you want to help, just ask on the dev@ mailing list. fsvs-fsvs-1.2.12/example/000077500000000000000000000000001453631713700152025ustar00rootroot00000000000000fsvs-fsvs-1.2.12/example/README000066400000000000000000000007351453631713700160670ustar00rootroot00000000000000 This is an example setup for versioning /etc in debian-based systems. --- NO WARRANTY OF ANY KIND --- BEWARE! You might end up putting configuration files, that should be kept secret, into the repository! I defined filtering for the few I know - but you'd better check yourself! If you want to see the fruits of your versioning efforts, put yourself into the sysver group, and use subcommander or any other GUI you like. TODO: Write some better documentation. fsvs-fsvs-1.2.12/example/debian/000077500000000000000000000000001453631713700164245ustar00rootroot00000000000000fsvs-fsvs-1.2.12/example/debian/README000066400000000000000000000005161453631713700173060ustar00rootroot00000000000000The described files are part of a monitoring-etc-setup on a Debian/Ubuntu linuxhost. 
- ./apt.conf.d contains configuration for the apt-hook - ./cron.d contains a trigger for the fsvs cron-job - ./etc contains fsvs config files - ./etc/ssl, fsvs config file for access of secured repositories - ./ignore, a sample ignore ruleset fsvs-fsvs-1.2.12/example/debian/apt.conf.d/000077500000000000000000000000001453631713700203565ustar00rootroot00000000000000fsvs-fsvs-1.2.12/example/debian/apt.conf.d/75fsvs000066400000000000000000000000761453631713700214410ustar00rootroot00000000000000Dpkg::Post-Invoke { "/usr/share/fsvs/scripts/apt-hook.py"; }; fsvs-fsvs-1.2.12/example/debian/cron.d/000077500000000000000000000000001453631713700176075ustar00rootroot00000000000000fsvs-fsvs-1.2.12/example/debian/cron.d/fsvs000066400000000000000000000003751453631713700205200ustar00rootroot00000000000000# # Cron Job for fsvs # SHELL=/bin/sh MAILTO="" MAILFROM="From: FSVS Monitoring " MAILSUBJECT="fsvs file monitoring on localhost" # */1 * * * * root /usr/share/fsvs/scripts/fsvs-cron 2>&1 | mail -a "$MAILFROM" -s "$MAILSUBJECT" $MAILTO fsvs-fsvs-1.2.12/example/debian/etc/000077500000000000000000000000001453631713700171775ustar00rootroot00000000000000fsvs-fsvs-1.2.12/example/debian/etc/config000066400000000000000000000000221453631713700203610ustar00rootroot00000000000000author=$SUDO_USER fsvs-fsvs-1.2.12/example/debian/etc/ssl/000077500000000000000000000000001453631713700200005ustar00rootroot00000000000000fsvs-fsvs-1.2.12/example/debian/etc/ssl/servers000066400000000000000000000003351453631713700214150ustar00rootroot00000000000000[groups] fsvs = fsvs.repository.host [fsvs] ssl-client-cert-file = /etc/ssl/private/newcert.p12 ssl-client-cert-password = 1k3kl0aU [global] ssl-authority-files = /etc/ssl/default/cacert.pem store-plaintext-passwords=yes fsvs-fsvs-1.2.12/example/debian/ignore000066400000000000000000000002601453631713700176300ustar00rootroot00000000000000ignore,m:004:000 ignore,/**.gz ignore,/**.bz2 ignore,/**.zip ignore,/**.rar ignore,/etc/fsvs ignore,/etc/resolv.conf 
ignore,/etc/mtab ignore,/etc/adjtime take,/etc/ ignore,/** fsvs-fsvs-1.2.12/example/debian/scripts/000077500000000000000000000000001453631713700201135ustar00rootroot00000000000000fsvs-fsvs-1.2.12/example/debian/scripts/apt-hook.py000077500000000000000000000050501453631713700222120ustar00rootroot00000000000000#!/usr/bin/env python import sys, commands, subprocess from os import stat from os import path import string msg_prfx = 'fsvs-apt-hook_' def getLine(list): result = [] for i in list: if ('Removing' in i) or ('Setting up' in i) or ('Purging' in i) or ('Configuring' in i): line = string.replace(i, '\r', '') result.append(line) return result def getLastAptAction(): logfn = '/var/log/apt/term.log' try: FILE = open(logfn, 'r') except: print 'could not open file' lineList = FILE.readlines() length = len(lineList) FILE.close() result = [] curline = lineList[-1] if 'Log ended:' in curline: cond = False i = 1 while cond == False and (length-i)>0: i+=1 curline = lineList[length-i] if not 'Log started:' in curline: result.insert(1,curline) else: cond = True msg = getLine(result) msg.insert(0, msg_prfx + 'last-apt-action:\n') return(msg) def getDpkgFiles(): cmd = 'dpkg-deb --contents %s' % pkg_file print cmd try: out = commands.getoutput(cmd) except: print 'exception running %s' % cmd exit() list = string.split(out, '\n') print list[1] """ gets "fsvs st" state for working copy / """ def getFsvsStatus(): cmd = 'fsvs st /' out = commands.getoutput(cmd) list = string.split(out, '\n') return list def getConfigChanges(): list = getFsvsStatus() if len(list) > 0: print('The following is a list of files that are changed on dpkg-tasks:') for i in list: print i res = raw_input('Do you want to commit these files? 
(y/N)') if res.lower() == 'y': return True else: return False def ciConfigChanges(msg): ci_file = '/tmp/fsvs_cm' try: FILE = open(ci_file, 'w') except: print 'could not open file %s' % ci_file for line in msg: FILE.write(line) FILE.close() args =['fsvs', 'ci', '/', '-F', ci_file] res = subprocess.Popen(args) def checkFsvsEnviron(): """ check fsvs bin availability """ if not path.exists('/usr/bin/fsvs'): print msg_prfx + 'error: no instance of fsvs found' quit() """ check fsvs configuration """ cmd = 'fsvs / urls dump' if not len(commands.getoutput(cmd)) > 0: print msg_prfx + 'error: no urls defined for /' quit() """ check fsvs connectivitiy to repo """ cmd = 'fsvs / remote-status' if commands.getstatusoutput(cmd) == '1': print msg_prfx + 'error: no repo available' quit() if __name__ == '__main__': checkFsvsEnviron() commitmsg = getLastAptAction() if getConfigChanges(): ciConfigChanges(commitmsg) fsvs-fsvs-1.2.12/example/debian/scripts/fsvs-cron000066400000000000000000000011301453631713700217510ustar00rootroot00000000000000#!/bin/sh set -e FSVS_BIN=$(which fsvs) FSVS_OPTS="-ostop_change=true -odir_exclude_mtime=true -ofilter=mtime,text,owner,mode,group,new,deleted" if ! $FSVS_BIN st / $FSVS_OPTS;then echo "fsvs has detected changes in monitored directories." 
echo "" echo "changed files:" echo "---------------------------------------------------" echo "" $FSVS_BIN st / echo "" echo "user last logged in:" echo "---------------------------------------------------" echo "" last -n 3 echo "" echo "diff of the files changed:" echo "----------------------------------------------------" echo "" $FSVS_BIN diff / fi fsvs-fsvs-1.2.12/example/etc/000077500000000000000000000000001453631713700157555ustar00rootroot00000000000000fsvs-fsvs-1.2.12/example/etc/apt/000077500000000000000000000000001453631713700165415ustar00rootroot00000000000000fsvs-fsvs-1.2.12/example/etc/apt/apt.conf.d/000077500000000000000000000000001453631713700204735ustar00rootroot00000000000000fsvs-fsvs-1.2.12/example/etc/apt/apt.conf.d/50fsvs-system-versioning000066400000000000000000000001301453631713700252410ustar00rootroot00000000000000DPkg::Post-Invoke ""; DPkg::Post-Invoke:: "/var/lib/fsvs-versioning/scripts/commit.sh"; fsvs-fsvs-1.2.12/example/etc/cron.daily/000077500000000000000000000000001453631713700200175ustar00rootroot00000000000000fsvs-fsvs-1.2.12/example/etc/cron.daily/fsvs-versioning000066400000000000000000000001161453631713700231020ustar00rootroot00000000000000#!/bin/sh /var/lib/fsvs-versioning/scripts/commit.sh "Commit per cron.daily" fsvs-fsvs-1.2.12/example/etc/fsvs/000077500000000000000000000000001453631713700167365ustar00rootroot00000000000000fsvs-fsvs-1.2.12/example/etc/fsvs/groups/000077500000000000000000000000001453631713700202555ustar00rootroot00000000000000fsvs-fsvs-1.2.12/example/etc/fsvs/groups/unreadable000066400000000000000000000010101453631713700222720ustar00rootroot00000000000000# This is an example for a FSVS group definition file. # See fsvs(1) for more details. # # This file is used for unreadable files, ie. files without the others-read # bit set. # There are two main choices for them: # - ignore them # - or keep them versioned, but encrypted. # As long as the "ignore" line is present, the entries will be ignored. 
ignore # If you want to encrypt the data, you have to change the example key-ID to # the one you want to use. auto-prop fsvs:commit-pipe gpg -er root 0x12345678 fsvs-fsvs-1.2.12/example/setup.sh000077500000000000000000000044471453631713700167120ustar00rootroot00000000000000#!/bin/sh location=/var/lib/fsvs-versioning/repository scripts=/var/lib/fsvs-versioning/scripts group=sysver set -e set -x cd /etc # Ignore if group already exists addgroup $group || true if fsvs info > /dev/null 2>&1 then echo Already configured for /etc. else if ! svnlook info -r0 $location >/dev/null 2>&1 then # Keep the data secret mkdir -m 2770 -p $location # BDB is faster than FSFS, especially for many small files in # many revisions. svnadmin create --fs-type bdb $location # branches might not be needed, but tags could make sense. svn mkdir file://$location/trunk file://$location/tags -m "create trunk and tags" # We keep the directory structure 1:1, so it could easily be # moved to include the full system. # # Note: If we'd do the versioning at the root, we'd have either # to exclude everything except /etc (tricky, and error-prone), or # have some take pattern - but then new ignore patterns (by other # packages) couldn't simply be appended. svn mkdir file://$location/trunk/etc -m "create etc" chown 0.$group $location -R fi # Create local filelist, to make "fsvs ps" work. fsvs checkout file://$location/trunk/etc conf_path=`fsvs info . | grep Conf-Path | cut -f2 -d:` fsvs ignore '/etc/**.dpkg-old' '/etc/**.dpkg-new' '/etc/**.dpkg-dist' '/etc/**.dpkg-bak' fsvs ignore '/etc/**.bak' '/etc/**.old' '/etc/**~' '/**.swp' # easy to remake, no big deal (?) fsvs ignore '/etc/ssh/ssh_host_*key' # Not used? fsvs ignore /etc/apt/secring.gpg fsvs ignore /etc/mtab fsvs ignore /etc/ld.so.cache /etc/adjtime # Just compiled data? fsvs ignore '/etc/selinux/*.pp' # unknown whether that should be backuped. 
fsvs ignore '/etc/identd.key' fsvs ignore '/etc/ppp/*-secrets' fsvs ps fsvs:commit-pipe /var/lib/fsvs-versioning/scripts/remove-password-line.pl ddclient.conf || true # Are there non-shadow systems? # fsvs ignore './shadow' './gshadow' fsvs ps fsvs:commit-pipe /var/lib/fsvs-versioning/scripts/shadow-clean.pl shadow gshadow # Match entries that are not world-readable. fsvs group 'group:unreadable,m:4:0' # Lock-files are not needed, are they? fsvs ignore './**.lock' './**.LOCK' # Should we commit the current ignore list? # fsvs commit -m "Initial import" # Should we ignore the "Urls" file changing? # Having it in shows which revision /etc was at. fi fsvs-fsvs-1.2.12/example/var/000077500000000000000000000000001453631713700157725ustar00rootroot00000000000000fsvs-fsvs-1.2.12/example/var/lib/000077500000000000000000000000001453631713700165405ustar00rootroot00000000000000fsvs-fsvs-1.2.12/example/var/lib/fsvs-versioning/000077500000000000000000000000001453631713700217025ustar00rootroot00000000000000fsvs-fsvs-1.2.12/example/var/lib/fsvs-versioning/scripts/000077500000000000000000000000001453631713700233715ustar00rootroot00000000000000fsvs-fsvs-1.2.12/example/var/lib/fsvs-versioning/scripts/commit.sh000077500000000000000000000007221453631713700252210ustar00rootroot00000000000000#!/bin/sh # So that the defined group can access the data umask 007 # In case the process calling apt-get had some paths defined, they might # not be what FSVS expects. # Re-set the defaults. export FSVS_CONF=/etc/fsvs export FSVS_WAA=/var/spool/fsvs/ # Possibly run this script or FSVS via env(1)? # Would clean *all* FSVS_* variables. # Tell the author as "apt", because we're called by apt-get. 
fsvs ci -o author=apt /etc -m "${1:-Auto-commit after dpkg}" -q fsvs-fsvs-1.2.12/example/var/lib/fsvs-versioning/scripts/remove-password-line.pl000077500000000000000000000001651453631713700300150ustar00rootroot00000000000000#!/usr/bin/perl while (<>) { # No substitution value, could be used wrongly s#^(\s*password\s*=).*#\1#; print; } fsvs-fsvs-1.2.12/example/var/lib/fsvs-versioning/scripts/shadow-clean.pl000077500000000000000000000003151453631713700262750ustar00rootroot00000000000000#!/usr/bin/perl # Replaces the password in shadow-like files # Keeps single-character values (for deactivated etc.) while (<>) { @f=split(/(:)/); $f[2]='-' if length($f[2]) > 1; print join("", @f); } fsvs-fsvs-1.2.12/src/000077500000000000000000000000001453631713700143365ustar00rootroot00000000000000fsvs-fsvs-1.2.12/src/.vimrc000066400000000000000000000006001453631713700154530ustar00rootroot00000000000000set exrc " ignore FSVS messages in "make run-tests" set efm^=%-Gcommitted\ revision%\\t%\\d%#\ on\ %.%#\ as\ %.%# set efm^=%-GAn\ error\ occurred\ at\ %.%# set tags+=tags set tags+=src/tags set errorformat^=\ \ "%f":1:\ (%s set errorformat^=\ \ in\ %m\ \[%f:%l\] set efm^=%-Gmake%.%#Makefile%.%#run%.%#tests%.%#Fehler%.%# execute "source " . expand(":h") . "/.vimrc.syntax" fsvs-fsvs-1.2.12/src/.ycm_extra_conf.py.in000066400000000000000000000003251453631713700203730ustar00rootroot00000000000000def Settings( **kwargs ): return { 'flags': [ '-x', 'c', '-Wall', '-Wextra', '-Werror', '-DPCRE2_CODE_UNIT_WIDTH=8', '-Iconfig/', '-Iinclude/', '-I@APR_PATH@' ], } fsvs-fsvs-1.2.12/src/Makefile.in000066400000000000000000000176731453631713700164210ustar00rootroot00000000000000########################################################################### # Copyright (C) 2005-2009 Philipp Marek. # # # # This program is free software; you can redistribute it and/or modify # # it under the terms of the GNU General Public License version 2 as # # published by the Free Software Foundation. 
# ########################################################################### ################################ Definitions ################################ DIR := /usr/share/doc VERSION := $(shell git describe --tags --always) CFLAGS := @CFLAGS@ CFLAGS += -Wall -funsigned-char -Os -DFSVS_VERSION='"$(VERSION)"' -Wno-deprecated-declarations LDFLAGS := @LDFLAGS@ FSVS_LDFLAGS = $(LDFLAGS) BASELIBS := -lsvn_subr-1 -lsvn_delta-1 -lsvn_ra-1 -lpcre2-8 -lgdbm -ldl EXTRALIBS := @EXTRALIBS@ WAA_CHARS?= @WAA_WC_MD5_CHARS@ ifdef RPATH LDFLAGS += -Wl,-rpath,$(RPATH) endif ifeq (@ENABLE_DEBUG@, 1) # The component tests need all local variables, ie. no optimization. CFLAGS += -O0 CFLAGS += -DDEBUG -g LDFLAGS += -g ifeq (@ENABLE_GCOV@, 1) CFLAGS += -fprofile-arcs -ftest-coverage LDFLAGS += -fprofile-arcs endif endif # CFLAGS += -m64 -Wpadded # LDFLAGS += -m64 C_FILES := $(sort $(wildcard *.c)) H_FILES := $(wildcard *.h) D_FILES := $(C_FILES:%.c=.%.d) DEST := fsvs ################################ Targets ################################### ifeq (@CHROOTER_JAIL@, ) all: deps tags check-version check-dox $(DEST) lsDEST else all: tools/fsvs-chrooter endif check-version: config.h fsvs.c @dev/check-version-output.pl $^ check-dox: options.c dox/options.dox @dev/check-option-docs.pl $^ tags: $(C_FILES) $(wildcard *.h) @echo " $@" @-ctags $^ @echo ":au BufNewFile,BufRead *.c syntax keyword Constant" $(shell grep -v "^!" 
< $@ | cut -f1 | grep _) > .vimrc.syntax .IGNORE: tags clean: rm -f *.o *.s $(D_FILES) $(DEST) 2> /dev/null || true lsDEST: $(DEST) @ls -la $< version: @echo $(VERSION) version-nnl: @perl -e '$$_=shift; s/\s+$$//; print;' $(VERSION) .SILENT: version.nnl version .PHONY: version-nnl version ################################ Distribution ############################### bindir = @bindir@ exec_prefix= @exec_prefix@ prefix = @prefix@ mandir = @mandir@ install: mkdir -p $(DESTDIR)/etc/fsvs $(DESTDIR)/var/spool/fsvs $(DESTDIR)$(bindir) $(DESTDIR)/etc/fsvs/svn/auth/svn.simple $(DESTDIR)/etc/fsvs/svn/auth/svn.ssl.server $(DESTDIR)/etc/fsvs/svn/auth/svn.ssl.client-passphrase install -m 0755 $(DEST) $(DESTDIR)/$(bindir) # install -m 0644 ../doc/fsvs.1 $(DESTDIR)/(mandir) # No automatic rebuild (?) #../doc/USAGE: $(C_FILES) $(H_FILES) #.PHONY: ../doc/USAGE DOXDIR=../doxygen/html/ MANDIR=../doxygen/man/man1/ MANDEST=../doc/ DOXFLAG=../doxygen/html/index.html $(DOXFLAG): ( cat doxygen-data/Doxyfile-man ; echo PROJECT_NUMBER=$(VERSION)) | doxygen - ( cat doxygen-data/Doxyfile ; echo PROJECT_NUMBER=$(VERSION)) | doxygen - # Change the /§* to the correct /* cd $(DOXDIR) && perl -i.bak -pe '1 while s#([/*])\xc2?\xa7([/*])#\1\2#;' *.html cd $(MANDIR) && perl -i.bak -pe '1 while s#([/*])\xc2?\xa7([/*])#\1\2#;' *.? rm $(DOXDIR)/*.bak $(DOXDIR)/html-doc.zip || true cd $(DOXDIR)/.. && zip -rq9 html-doc.zip html -x 'html/.svn/*' && tar -cf html-doc.tar --exclude .svn html && bzip2 -vkf9 html-doc.tar && gzip -vf9 html-doc.tar $(DOXDIR)/group__cmds.html: $(DOXFLAG) touch $@ $(DOXDIR)/group__ignpat.html: $(DOXFLAG) touch $@ # Fix for badly generated man page (Doxygen) # Some other idea? Is there some other workaround? 
$(MANDEST)/fsvs.1: $(MANDIR)/cmds.1 tools/man-repair.pl $@ "FSVS - fast versioning tool" < $< $(MANDEST)/fsvs-howto-backup.5: $(MANDIR)/howto_backup.1 tools/man-repair.pl $@ "FSVS - Backup HOWTO" < $< $(MANDEST)/fsvs-howto-master_local.5: $(MANDIR)/howto_master_local.1 tools/man-repair.pl $@ "FSVS - Master/Local HOWTO" < $< $(MANDEST)/fsvs-options.5: $(MANDIR)/options.1 tools/man-repair.pl $@ "FSVS - Options and configfile" < $< $(MANDEST)/fsvs-url-format.5: $(MANDIR)/url_format.1 tools/man-repair.pl $@ "FSVS - URL format" < $< $(MANDEST)/fsvs-groups.5: $(MANDIR)/groups_spec.1 tools/man-repair.pl $@ "FSVS - Group definitions" < $< $(MANDEST)/fsvs-ignore-patterns.5: $(MANDIR)/ignpat.1 tools/man-repair.pl $@ "FSVS - Ignore definitions" < $< ../doc/USAGE: $(DOXDIR)/group__cmds.html dev/dox2txt.pl $< > $@ ../doc/IGNORING: $(DOXDIR)/group__ignpat.html dev/dox2txt.pl $< > $@ doc.g-c: ../doc/USAGE # Generate static text strings ( cat $< ; echo "end" ) | dev/make_doc.pl > $@ docs: $(DOXFLAG) ../doc/USAGE ../doc/IGNORING doc.g-c docs: $(MANDEST)/fsvs.1 $(MANDEST)/fsvs-options.5 docs: $(MANDEST)/fsvs-url-format.5 $(MANDEST)/fsvs-groups.5 docs: $(MANDEST)/fsvs-howto-backup.5 $(MANDEST)/fsvs-howto-master_local.5 .PHONY: docs $(DOXFLAG) ################################ Rules ###################################### %.o: %.c @echo " CC $<" @$(CC) $(CPPFLAGS) $(CFLAGS) -c -o $@ $< # if the Makefile has changed, the output will (at least sometimes) # change, too. 
$(DEST): $(C_FILES:%.c=%.o) @echo " Link $@" @$(CC) $(FSVS_LDFLAGS) $(LDLIBS) $(LIBS) -o $@ $^ $(BASELIBS) $(EXTRALIBS) ifeq (@ENABLE_RELEASE@, 1) -strip $@ endif # For debugging: generate preprocessed, generate assembler %.s: %.c $(CC) $(CFLAGS) -S -fverbose-asm -o $@ $< || true %.P : %.c $(CC) $(CFLAGS) -E -o $@ $< ############################### Dependencies ################################ deps: $(D_FILES) .%.d: %.c @echo " deps for $<" @$(CC) $(CPPFLAGS) $(CFLAGS) -MM $< | perl -pe 's#\bdoc.g-c\b##' > $@ include $(D_FILES) tools/fsvs-chrooter: tools/fsvs-chrooter.c tools/fsvs-chrooter: interface.h config.h ############################### GCov Usage ################################ ifeq (@ENABLE_GCOV@, 1) GCOV_FILES := $(C_FILES:%.c=%.c.gcov) GCOV_SMRY_FILES := $(GCOV_FILES:%.gcov=%.gcov.smry) GCOV_DATA := $(C_FILES:%.c=%.gcda) $(C_FILES:%.c=%.gcno) gcov: $(GCOV_FILES) @dev/gcov-summary.pl $(GCOV_SMRY_FILES) %.c.gcov: %.c @gcov -f $< > $<.gcov.smry # -b -c gcov-clean: rm -f *.gcov *.gcov.smry *.gcda 2> /dev/null || true gcov-unused-funcs: grep -B1 ":0.00%" *.gcov.smry .PHONY: gcov gcov-clean endif ################################ Statistics ################################# diffstat: svk diff | diffstat count: @echo "sum of lines: "`cat $(C_FILES) $(H_FILES) | wc -l -` @echo "sum w/o comments, {, }, empty lines: "`perl -e 'undef $$/; while (<>) { 1 while s#//.*##; 1 while s#/\\*[\\x00-\\xff]*?\\*/##; 1 while s#\s*[{}]\s*##; $$c++ while s#[\r\n]+# #; }; sub END { print $$c,"\n" } ' $(C_FILES) $(H_FILES)` revcount: count @last_rev=$(shell svk info | grep Revision | cut -d" " -f2) ; echo "number of edits up to revision $$last_rev:" ; for r in `seq 2 $$last_rev` ; do svk diff -r`expr $$r - 1`:$$r /svn2/trunk ; done | perl -pe 's#\ssrc/# #g;' | diffstat structs: $(DEST) @for a in `perl -ne 'print $$1,"\n" if m#^\s*struct\s+(\w+)\s+{\s*$$#' $(C_FILES) $(H_FILES)` ; do printf "%-30s " "struct $$a" ; gdb --batch -ex "printf \"\t%6d\", sizeof(struct $$a)" $(DEST) | cut 
-f2 -d= ; done 2>&1 | sort -k3 -n .PHONY: revcount count diffstat ################################ Testing #################################### run-tests: $(DEST) WAA_CHARS=$(WAA_CHARS) $(MAKE) -C ../tests BINARY=$(shell pwd)/$(DEST) VERBOSE=$(VERBOSE) $(TESTS) ifeq (@ENABLE_GCOV@, 1) # I don't know why, but gcov wants to open the .gcda and .gcno # files Read-Write. I filed a bug report for this. # If the tests are run as root (which is currently necessary because # of the devices and device-tests), the normal user who compiled # the sources will not be allowed to open these files ... # # Not all files have code .. and so not all files (of the generated list) # will exist; therefore "true". -@chmod 777 $(GCOV_DATA) > /dev/null 2>&1 endif ext-tests: $(DEST) dev/permutate-all-tests .PHONY: run-tests ext-tests ################################ -- THE END -- ############################## ## vi: ts=8 sw=8

fsvs-fsvs-1.2.12/src/ac_list.c

/************************************************************************ * Copyright (C) 2005-2009 Philipp Marek. * * This program is free software; you can redistribute it and/or modify * it under the terms of the GNU General Public License version 3 as * published by the Free Software Foundation. ************************************************************************/ #include "global.h" #include "actions.h" #include "status.h" #include "commit.h" #include "update.h" #include "export.h" #include "log.h" #include "cat.h" #include "ignore.h" #include "cp_mv.h" #include "sync.h" #include "checkout.h" #include "diff.h" #include "url.h" #include "add_unvers.h" #include "props.h" #include "info.h" #include "revert.h" #include "remote.h" #include "resolve.h" #include "build.h" /** \file * List of actions, their command line names, and corresponding flags. */ /** Array of command name pointers. * The \c acl at the beginning means ACtion List.
*/ static const char *acl_status[] = { "status", NULL }, *acl_commit[] = { "commit", "checkin", "ci", NULL }, *acl_update[] = { "update", NULL }, *acl_export[] = { "export", NULL }, *acl_build[] = { "_build-new-list", NULL }, *acl_delay[] = { "delay", NULL }, *acl_remote[] = { "remote-status", "rs", NULL }, *acl_ignore[] = { "ignore", NULL }, *acl_rign[] = { "rel-ignore", "ri", "r-i", NULL }, *acl_groups[] = { "groups", "groupings", "grps", NULL }, *acl_add[] = { "add", NULL }, *acl_copyfr[] = { "copyfrom-detect", "copy-detect", NULL }, *acl_cp[] = { "copy", "move", "cp", "mv", NULL }, *acl_uncp[] = { "uncopy", NULL }, *acl_unvers[] = { "unversion", NULL }, *acl_log[] = { "log", NULL }, *acl_cat[] = { "cat", NULL }, *acl_resolv[] = { "resolved", NULL }, *acl_checko[] = { "checkout", "co", NULL }, *acl_sync_r[] = { "sync-repos", NULL }, *acl_revert[] = { "revert", "undo", NULL }, *acl_prop_l[] = { "prop-list", "pl", NULL }, *acl_prop_g[] = { "prop-get", "pg", NULL }, *acl_prop_s[] = { "prop-set", "ps", NULL }, *acl_prop_d[] = { "prop-del", "pd", NULL }, *acl_diff[] = { "diff", NULL }, *acl_help[] = { "help", "?", NULL }, *acl_info[] = { "info", NULL }, /** \todo: remove initialize */ *acl_urls[] = { "urls", "initialize", NULL }; /* A generated file. */ #include "doc.g-c" /** This \#define is used to save us from writing the member names, in * order to get a nice tabular layout. * Simply writing the initializations in structure order is not good; * a simple re-arrange could make problems. */ #define ACT(nam, _work, _act, ...) 
\ { .name=acl_##nam, .help_text=hlp_##nam, \ .work=_work, .local_callback=_act, \ __VA_ARGS__ } /** Use the progress uninitializer */ #define UNINIT .local_uninit=st__progress_uninit /** Store update-pipe strings */ #define DECODER .needs_decoder=1 /** Commands obeys filtering via -f */ #define FILTER .only_opt_filter=1 /** Wants a current value in estat::st */ #define STS_WRITE .overwrite_sts_st=1 /** waa__update_dir() may look for new entries */ #define DIR_UPD .do_update_dir=1 /** Action doesn't write into WAA, may be used by unprivileged user */ #define RO .is_readonly=1 /** -. */ struct actionlist_t action_list[]= { /* The first action is the default. */ ACT(status, st__work, st__action, FILTER, STS_WRITE, DIR_UPD, RO), ACT(commit, ci__work, ci__action, UNINIT, FILTER, DIR_UPD), ACT(update, up__work, st__progress, UNINIT, DECODER), ACT(export, exp__work, NULL, .is_import_export=1, DECODER), ACT(unvers, au__work, au__action, .i_val=RF_UNVERSION, STS_WRITE), ACT( add, au__work, au__action, .i_val=RF_ADD, STS_WRITE), ACT( diff, df__work, NULL, DECODER, STS_WRITE, RO), ACT(sync_r, sync__work, NULL, .repos_feedback=sync__progress, .keep_user_prop=1), ACT( urls, url__work, NULL), ACT(revert, rev__work, NULL, UNINIT, DECODER, .keep_children=1), ACT(groups, ign__work, NULL, .i_val=0, DIR_UPD), ACT(ignore, ign__work, NULL, .i_val=HAVE_GROUP, DIR_UPD), ACT( rign, ign__rign, NULL, .i_val=HAVE_GROUP, DIR_UPD), ACT(copyfr, cm__detect, st__progress, UNINIT, DIR_UPD, STS_WRITE), ACT( cp, cm__work, NULL), ACT( cat, cat__work, NULL), ACT( uncp, cm__uncopy, NULL), ACT(resolv, res__work, res__action, .is_compare=1), ACT( log, log__work, NULL, RO), ACT(checko, co__work, NULL, DECODER, .repos_feedback=st__rm_status), ACT( build, bld__work, st__status, DIR_UPD), ACT( delay,delay__work, st__status, RO), /* For help we set import_export, to avoid needing a WAA * (default /var/spool/fsvs) to exist. 
*/ ACT( help, ac__Usage, NULL, .is_import_export=1, RO), ACT( info, info__work, info__action, RO), ACT(prop_g,prp__g_work, NULL, RO), ACT(prop_s,prp__s_work, NULL, .i_val=FS_NEW), ACT(prop_d,prp__s_work, NULL, .i_val=FS_REMOVED), ACT(prop_l,prp__l_work, NULL, RO), ACT(remote, up__work, NULL, .is_compare=1, .repos_feedback=st__rm_status), }; /** -. */ const int action_list_count = sizeof(action_list)/sizeof(action_list[0]); /** -. */ struct actionlist_t *action=action_list;

fsvs-fsvs-1.2.12/src/actions.c

/************************************************************************ * Copyright (C) 2005-2009 Philipp Marek. * * This program is free software; you can redistribute it and/or modify * it under the terms of the GNU General Public License version 3 as * published by the Free Software Foundation. ************************************************************************/ #include <string.h> /* strncmp(), strlen(); header inferred from usage */ #include <errno.h> /* ENOENT, EINVAL; header inferred from usage */ #include "actions.h" #include "est_ops.h" #include "global.h" #include "checksum.h" #include "options.h" #include "waa.h" /** \file * Common functions for action (name) handling. */ /** This wrapper-callback for the current action callback calculates the * path and fills in the \c entry_type for the current \a sts, if * necessary. */ int ac__dispatch(struct estat *sts) { int status; status=0; if (!action->local_callback) goto ex; /* We cannot really test the type here; on update we might only know that * it's a special file, but not which type exactly. */ #if 0 BUG_ON(!( S_ISDIR(sts->st.mode) || S_ISREG(sts->st.mode) || S_ISCHR(sts->st.mode) || S_ISBLK(sts->st.mode) || S_ISLNK(sts->st.mode) ), "%s has mode 0%o", sts->name, sts->st.mode); #endif if (ops__allowed_by_filter(sts) || (sts->entry_status & FS_CHILD_CHANGED)) { /* If * - we want to see all entries, * - there's no parent that could be removed ("." is always there), or * - the parent still exists, * we print the entry.
*/ if (opt__get_int(OPT__ALL_REMOVED) || !sts->parent || (sts->parent->entry_status & FS_REPLACED)!=FS_REMOVED) STOPIF( action->local_callback(sts), NULL); } else DEBUGP("%s is not the entry you're looking for", sts->name); ex: return status; } /** Given a string \a cmd, return the corresponding action entry. * Used by commandline parsing - finding the current action, and * which help text to show. */ int act__find_action_by_name(const char *cmd, struct actionlist_t **action_p) { int i, status; struct actionlist_t *action_v; int match_nr; char const* const* cp; size_t len; status=0; len=strlen(cmd); match_nr=0; action_v=action_list; for (i=action_list_count-1; i >=0; i--) { cp=action_list[i].name; while (*cp) { if (strncmp(*cp, cmd, len) == 0) { action_v=action_list+i; /* If it's an exact match, we're done. * Needed for "co" (checkout) vs. "commit". */ if (len == strlen(*cp)) goto done; match_nr++; break; } cp++; } } STOPIF_CODE_ERR( match_nr <1, ENOENT, "!Action \"%s\" not found. Try \"help\".", cmd); STOPIF_CODE_ERR( match_nr >=2, EINVAL, "!Action \"%s\" is ambiguous. Try \"help\".", cmd); done: *action_p=action_v; ex: return status; }

fsvs-fsvs-1.2.12/src/actions.h

/************************************************************************ * Copyright (C) 2005-2009 Philipp Marek. * * This program is free software; you can redistribute it and/or modify * it under the terms of the GNU General Public License version 3 as * published by the Free Software Foundation. ************************************************************************/ #ifndef __ACTION_H__ #define __ACTION_H__ #include "global.h" /** \file * Action handling header file. */ /** \anchor callbacks \name callbacks Action callbacks. */ /** @{ */ /** Callback that gets called for each entry.
* * Entries get read from the entry list in global [device, inode] order; in * the normal action callback (\ref actionlist_t::local_callback and \ref * actionlist_t::repos_feedback) the parent entries are handled \b after child * entries (but the parent \c struct \ref estat "estats" exist, of course), * so that the list of children is correct. * * * See also \ref waa__update_tree. * * The full (wc-based) path can be built as required by \ref * ops__build_path().*/ /* unused, wrong doc * As in the entry list file (\ref dir) there is a callback \ref * actionlist_t::early_entry that's done \b before the child entries; * Clearing \ref estat::do_this_entry and \ref estat::do_tree in this * callback will skip calling \ref actionlist_t::local_callback for this and * the child entries (see \ref ops__set_to_handle_bits()). */ typedef int (action_t)(struct estat *sts); /** Callback for initializing the action. */ typedef int (work_t)(struct estat *root, int argc, char *argv[]); /** One after all progress has been made. */ typedef int (action_uninit_t)(void); /** @} */ /** The action wrapper. */ action_t ac__dispatch; /** The always allowed action - printing general or specific help. */ work_t ac__Usage; /** For convenience: general help, and help for the current action. */ #define ac__Usage_dflt() do { ac__Usage(NULL, 0, NULL); } while (0) /** Print help for the current action. */ #define ac__Usage_this() do { ac__Usage(NULL, 1, (char**)action->name); } while (0) /** Definition of an \c action. */ struct actionlist_t { /** Array of names this action will be called on the command line. */ const char** name; /** The function doing the setup, tear down, and in-between - the * worker main function. * * See \ref callbacks. */ work_t *work; /** The output function for repository accesses. * Currently only used in cb__record_changes(). * * See \ref callbacks. */ action_t *repos_feedback; /** The local callback. * Called for each entry, just after it's been checked for changes. 
* Should give the user feedback about individual entries and what * happens with them. * * For directories this gets called when they're finished; so immediately * for empty directories, or after all children are loaded. * \note A removed directory is taken as empty (as no more elements are * here) - this is used in \ref revert so that revert gets called twice * (once for restoring the directory itself, and again after it's * populated). * * See \ref callbacks. */ action_t *local_callback; /** The progress reporter needs a callback to clear the line after printing * the progress. */ action_uninit_t *local_uninit; /** A pointer to the verbose help text. */ char const *help_text; /** Flag for usage in the action handler itself. */ int i_val; /** Is this an import or export, ie do we need a WAA? * We don't cache properties, manber-hashes, etc., if is_import_export * is set. */ int is_import_export:1; /** This is set if it's a compare operation (remote-status). * The properties are parsed, but instead of writing them into the * \c struct \c estat they are compared, and \c entry_status set * accordingly. */ int is_compare:1; /** Whether we need fsvs:update-pipe cached. * Do we install files from the repository locally? Then we need to know * how to decode them. * We don't do that in every case, to avoid wasting memory. */ int needs_decoder:1; /** Whether the entries should be filtered on opt_filter. */ int only_opt_filter:1; /** Whether user properties should be stored in estat::user_prop while * running cb__record_changes(). */ int keep_user_prop:1; /** Makes ops__update_single_entry() keep the children of removed * directories. */ int keep_children:1; /** Says that we want the \c estat::st overwritten while looking for * local changes. */ int overwrite_sts_st:1; /** Whether waa__update_dir() may happen. * (It must not for updates, as we'd store local changes as "from * repository"). */ int do_update_dir:1; /** Says that this is a read-only operation (like "status").
*/ int is_readonly:1; }; /** Find the action structure by name. * * Returns in \c * \a action_p the action matching (a prefix of) \a cmd. * */ int act__find_action_by_name(const char *cmd, struct actionlist_t **action_p); /** Array of all known actions. */ extern struct actionlist_t action_list[]; /** Gets set to the \e current action after commandline parsing. */ extern struct actionlist_t *action; /** How many actions we know. */ extern const int action_list_count; #endif

fsvs-fsvs-1.2.12/src/add_unvers.c

/*********************************************************************** * Copyright (C) 2005-2009 Philipp Marek. * * This program is free software; you can redistribute it and/or modify * it under the terms of the GNU General Public License version 3 as * published by the Free Software Foundation. ************************************************************************/ #include <errno.h> /* EINVAL; header inferred from usage */ #include "global.h" #include "add_unvers.h" #include "status.h" #include "ignore.h" #include "warnings.h" #include "est_ops.h" #include "url.h" #include "helper.h" #include "waa.h" /** \file * \ref add and \ref unversion action. * */ /** * \addtogroup cmds * * \section add * * \code * fsvs add [-u URLNAME] PATH [PATH...] * \endcode * * With this command you can explicitly define entries to be versioned, * even if they have a matching ignore pattern. * They will be sent to the repository on the next commit, just like * other new entries, and will therefore be reported as \e New . * * The \c -u option can be used if you have more than one URL defined * for this working copy and want to have the entries pinned to this * URL. * * \subsection add_ex Example * Say, you're versioning your home directory, and gave an ignore pattern * of ./.* to ignore all .* entries in your home-directory. * Now you want .bashrc, .ssh/config, and your complete * .kde3-tree saved, just like other data.
* * So you tell fsvs to not ignore these entries: * \code * fsvs add .bashrc .ssh/config .kde3 * \endcode * Now the entries below .kde3 would match your earlier * ./.* pattern (as a match at the beginning is sufficient), * so you have to insert a negative ignore pattern (a \e take pattern): * \code * fsvs ignore prepend t./.kde3 * \endcode * Now a fsvs st would show your entries as * \e New , and the next commit will send them to the repository. * * */ /** * \addtogroup cmds * * \section unversion * * \code * fsvs unversion PATH [PATH...] * \endcode * * This command flags the given paths locally as removed. * On the next commit they will be deleted in the repository, and the local * information of them will be removed, but not the entries themselves. * So they will show up as \e New again, and you get another chance * at ignoring them. * * \subsection unvers_ex Example * * Say, you're versioning your home directory, and found that you no longer * want .bash_history and .sh_history versioned. * So you do * \code * fsvs unversion .bash_history .sh_history * \endcode * and these files will be reported as \c d (will be deleted, but only in the * repository). * * Then you do a * \code * fsvs commit * \endcode * * Now fsvs would report these files as \c New , as it does no longer know * anything about them; but that can be cured by * \code * fsvs ignore "./.*sh_history" * \endcode * Now these two files won't be shown as \e New , either. * * The example also shows why the given paths are not just entered as * separate ignore patterns - they are just single cases of a * (probably) much broader pattern. * * \note If you didn't use some kind of escaping for the pattern, the shell * would expand it to the actual filenames, which is (normally) not what you * want. * * */ /** \defgroup howto_add_unv Semantics for an added/unversioned entry * \ingroup userdoc * * Here's a small overview for the \ref add and \ref unversion actions. 
* * - Unversion: * The entry to-be-unversioned will be deleted from the repository and the * local waa cache, but not from disk. It should match an ignore pattern, * so that it doesn't get committed the next time. * - Add: * An added entry is required on commit - else the user told us to store * something which does not exist, and that's an error. * * \section add_unvers_status Status display * * *
<table>
 * <tr><td>Exists in fs -></td> <td>YES</td> <td>NO</td></tr>
 * <tr><td>not seen before</td> <td>\c N</td> <td>\c -</td></tr>
 * <tr><td>committed</td> <td>\c C, \c R</td> <td>\c D</td></tr>
 * <tr><td>unversioned</td> <td>\c d</td> <td>\c d (or D?, or with !?)</td></tr>
 * <tr><td>added</td> <td>\c n</td> <td>\c n (with !)</td></tr>
 * </table>
* * * If an entry is added, then unversioned, we remove it completely * from our list. We detect that by the RF_NOT_COMMITTED flag. * Likewise for an unversioned, then added, entry. * * Please see also the \ref add command and the \ref unversion command. * */ /** General function for \ref add and \ref unversion actions. * This one really handles the entries. */ int au__action(struct estat *sts) { int status; int old; int mask=RF_ADD | RF_UNVERSION; STOPIF_CODE_ERR(!sts->parent, EINVAL, "!Using %s on the working copy root doesn't make sense.", action->name[0]); status=0; /* This should only be done once ... but as it could be called by others, * without having action->i_val the correct value, we check on every * call. * After all it's just two compares, and only for debugging ... */ BUG_ON( action->i_val != RF_UNVERSION && action->i_val != RF_ADD ); old=sts->flags & mask; /* We set the new value for output, and possibly remove the entry * afterwards. */ sts->flags = (sts->flags & ~mask) | action->i_val; DEBUGP("changing flags: has now %X", sts->flags); STOPIF( st__status(sts), NULL); /* If we have an entry which was added *and* unversioned (or vice versa), * but * 1) has never been committed, we remove it from the list; * 2) is a normal, used entry, we delete the flags. * * Should we print "....." in case 2? Currently we show that it's being * added/unversioned again. */ if (((sts->flags ^ old) & mask) == mask) { if (!sts->repos_rev) STOPIF( ops__delete_entry(sts->parent, sts, UNKNOWN_INDEX, UNKNOWN_INDEX), NULL); else sts->flags &= ~mask; } if (sts->flags & RF_ADD) { /* Get the group. */ STOPIF( ign__is_ignore(sts, &old), NULL); /* We don't want to know whether it's ignored, so we just discard the * ignore flag. */ STOPIF( ops__apply_group(sts, NULL, NULL), NULL); /* And we don't want to ignore it, even if ops__apply_group() only * found an ignore pattern, thank you so much. 
*/ sts->to_be_ignored=0; } if ((sts->flags & mask) == RF_ADD) sts->url=current_url; ex: return status; } /** -. * */ int au__prepare_for_added(void) { int status; STOPIF( url__load_list(NULL, 0), NULL); STOPIF( url__mark_todo(), NULL); STOPIF_CODE_ERR( url__parm_list_used>1, EINVAL, "!At most a single destination URL may be given."); if (url__parm_list_used) { STOPIF(url__find_by_name(url__parm_list[0], &current_url), "!No URL with name \"%s\" defined.", url__parm_list[0]); DEBUGP("URL to add to: %s", current_url->url); } else current_url=NULL; /* We need the groups, to assign the auto-props. */ STOPIF( ign__load_list(NULL), NULL); ex: return status; } /** -. * */ int au__work(struct estat *root, int argc, char *argv[]) { int status; char **normalized; /* *Only* do the selected elements. */ opt_recursive=-1; /* Would it make sense to do "-=2" instead, so that the user could override * that and really add/unversion more than single elements? */ STOPIF( waa__find_common_base(argc, argv, &normalized), NULL); STOPIF( au__prepare_for_added(), NULL); STOPIF( waa__read_or_build_tree(root, argc, normalized, argv, NULL, 0), NULL); STOPIF( waa__output_tree(root), NULL); ex: return status; }

fsvs-fsvs-1.2.12/src/add_unvers.h

/************************************************************************ * Copyright (C) 2006-2009 Philipp Marek. * * This program is free software; you can redistribute it and/or modify * it under the terms of the GNU General Public License version 3 as * published by the Free Software Foundation. ************************************************************************/ #ifndef __ADD_UNVERS_H__ #define __ADD_UNVERS_H__ #include "actions.h" /** \file * \ref add and \ref unversion action header file. * */ /** For adding/unversioning files. */ work_t au__work; /** Worker function.
*/ action_t au__action; /** In case we need to handle new entries we might have to assign an URL to * them. */ int au__prepare_for_added(void); #endif

fsvs-fsvs-1.2.12/src/build.c

/************************************************************************ * Copyright (C) 2006-2009 Philipp Marek. * * This program is free software; you can redistribute it and/or modify * it under the terms of the GNU General Public License version 3 as * published by the Free Software Foundation. ************************************************************************/ #include <time.h> /* time_t; header inferred from usage */ #include "global.h" #include "waa.h" #include "helper.h" #include "options.h" #include "url.h" #include "build.h" /** \file * \ref _build_new_list action file. * * */ /** \addtogroup cmds * * \section _build_new_list _build_new_list * * This is used mainly for debugging. * It traverses the filesystem and builds a new entries file. * In production it should not be used; as neither URLs nor the revision of * the entries is known, information is lost by calling this function! * * Look at \ref sync-repos. */ /** Traverse the filesystem, build a tree, and store it as WC. * Doesn't do anything with the repository. */ int bld__work(struct estat *root, int argc, char *argv[]) { int status; STOPIF( waa__find_base(root, &argc, &argv), NULL); STOPIF( url__load_list(NULL, 0), NULL); /* If there are any URLs, we use the lowest-priority. * Any sync-repos will correct that. */ current_url=urllist[urllist_count-1]; root->do_userselected = 1; opt_recursive=1; STOPIF( waa__build_tree(root), NULL); DEBUGP("build tree, now saving"); STOPIF( waa__output_tree(root), NULL); ex: return status; } /** \addtogroup cmds * \section delay * * This command delays execution until time has passed at least to the next * second after writing the data files used by FSVS (\ref dir "dir" and * \ref urls "urls").
* * This command is for use in scripts; where previously the \ref delay * "delay" option was used, this can be substituted by the given command * followed by the \c delay command. * * The advantage against the \ref o_delay "delay" option is that read-only * commands can be used in the meantime. * * An example: * \code * fsvs commit /etc/X11 -m "Backup of X11" * ... read-only commands, like "status" * fsvs delay /etc/X11 * ... read-write commands, like "commit" * \endcode * * The optional path can point to any path in the WC. * * In the testing framework it is used to save a bit of time; in normal * operation, where FSVS commands are not so tightly packed, it is normally * preferable to use the \ref o_delay "delay" option. * */ /** Waits until the \c dir and \c Urls files have been modified in the * past, ie their timestamp is lower than the current time (rounded to * seconds.) */ int delay__work(struct estat *root, int argc, char *argv[]) { int status; int i; time_t last; struct sstat_t st; char *filename, *eos; char *list[]= { WAA__DIR_EXT, WAA__URLLIST_EXT }; STOPIF( waa__find_base(root, &argc, &argv), NULL); if (opt__is_verbose() > 0) printf("Waiting on WC root \"%s\"\n", wc_path); last=0; for(i=0; i last) last=st.mtim.tv_sec; } } DEBUGP("waiting until %llu", (t_ull)last); opt__set_int(OPT__DELAY, PRIO_MUSTHAVE, -1); STOPIF( hlp__delay(last, 1), NULL); ex: return status; }

fsvs-fsvs-1.2.12/src/build.h

/************************************************************************ * Copyright (C) 2005-2008 Philipp Marek. * * This program is free software; you can redistribute it and/or modify * it under the terms of the GNU General Public License version 3 as * published by the Free Software Foundation.
************************************************************************/ #ifndef __BUILD_H__ #define __BUILD_H__ #include "actions.h" #include "global.h" /** \file * \ref _build_new_list action header file. * * This action is not normally used; as it throws away data from the WAA * it is dangerous if simply called without \b exactly knowing the * implications. * * The only use is for debugging - all other disaster recovery is better done * via \c sync-repos. * */ /** Build action. */ work_t bld__work; /** Delay action. */ work_t delay__work; #endif

fsvs-fsvs-1.2.12/src/cache.c

/************************************************************************ * Copyright (C) 2005-2008 Philipp Marek. * * This program is free software; you can redistribute it and/or modify * it under the terms of the GNU General Public License version 3 as * published by the Free Software Foundation. ************************************************************************/ #include <stdlib.h> /* free(); header inferred from usage */ #include <string.h> /* memcpy(), memmove(), strlen(); header inferred from usage */ #include <errno.h> /* ENOENT; header inferred from usage */ #include "global.h" #include "cache.h" /** \file * Some small caching primitives. * * We have to do some caching - neither the APR-functions nor glibc caches * results of \c getpwnam() and similar. * On update or commit we call them many, many times ... there it's good to * have these values cached. * * It's not necessary for performance; but simply getting a \c char* back * from some function and using it, knowing that it's valid for a few more * calls of the same function, eases life tremendously. * * \todo Convert the other caches. * * \todo Let the \c apr_uid_get() calls from \c update.c go into that - * but they need a hash or something like that. Maybe reverse the test and * look whether the number (eg. uid) matches the string (username)? * */ /** -. * If a struct \ref cache_entry_t is used as a string, this might be * useful. * * If memory should be allocated, but not copied, specify \a data as \c * NULL.
* For \a len \c ==-1 calls \c strlen(). * * If \a copy_old_data is set, the old value in this cache entry is kept. * * Please note that memory may have to be reallocated, causing \c *cache to * change! */ int cch__entry_set(struct cache_entry_t **cache, cache_value_t id, const char *data, int len, int copy_old_data, char **copy) { int status; struct cache_entry_t *ce; int alloc_len; if (len == -1) len=strlen(data); status=0; ce=*cache; alloc_len=len+sizeof(struct cache_entry_t); if (!ce || alloc_len > ce->len || (ce->len - len) > 1024) { /* Round up a bit (including the struct). */ alloc_len = (alloc_len + 96-1) & ~64; if (copy_old_data) STOPIF( hlp__realloc( &ce, alloc_len), NULL); else { /* Note: realloc() copies the old data to the new location, but most * of the time we'd overwrite it completely just afterwards. */ free(*cache); STOPIF( hlp__alloc( &ce, alloc_len), NULL); } ce->len = alloc_len-sizeof(struct cache_entry_t)-1; *cache=ce; } ce->id = id; if (data) memcpy(ce->data, data, len); /* Just to be safe ... */ ce->data[len]=0; if (copy) *copy=ce->data; ex: return status; } /** -. * Can return \c ENOENT if not found. */ int cch__find(struct cache_t *cache, cache_value_t id, int *index, char **data, int *len) { int i; for(i=0; i<cache->used; i++) if (cache->entries[i]->id == id) { if (data) *data= cache->entries[i]->data; if (len) *len= cache->entries[i]->len; if (index) *index=i; return 0; } return ENOENT; } /** -. * * The given data is just inserted into the cache and marked as LRU. * An old entry is removed if necessary. */ int cch__add(struct cache_t *cache, cache_value_t id, const char *data, int len, char **copy) { int i; if ( cache->used >= cache->max) { i=cache->lru+1; if (i >= cache->max) i=0; } else i= cache->used++; cache->lru=i; /* Set data */ return cch__entry_set(cache->entries + i, id, data, len, 0, copy); } /** -. * \a id is a distinct numeric value for addressing this item. * The entry is set as LRU, eventually discarding older entries.
* */ int cch__set_by_id(struct cache_t *cache, cache_value_t id, const char *data, int len, int copy_old_data, char **copy) { int i; /* Entry with same ID gets overwritten. */ if (cch__find(cache, id, &i, NULL, NULL) == ENOENT) { return cch__add(cache, id, data, len, copy); } /* Found, move to LRU */ cch__set_active(cache, i); /* Set data */ return cch__entry_set(cache->entries + i, id, data, len, copy_old_data, copy); } /** -. * */ void cch__set_active(struct cache_t *cache, int i) { struct cache_entry_t *tmp, **entries; entries=cache->entries; /* observe these 2 cases: */ if (i < cache->lru) { /* from | 6 5 i 3 2 1 LRU 9 8 7 | * to | 6 5 3 2 1 LRU i 9 8 7 | * -> move [i+1 to LRU] to i, i is the new LRU. */ tmp=entries[i]; memmove(entries+i, entries+i+1, (cache->lru-i) * sizeof(entries[0])); entries[cache->lru]=tmp; } else if (i > cache->lru) { /* from | 2 1 LRU 9 8 7 i 5 4 3 | * to | 2 1 LRU i 9 8 7 5 4 3 | * -> move [LRU+1 to i] to LRU+2; LRU++ */ cache->lru++; BUG_ON(cache->lru >= cache->max); /* lru is already incremented */ tmp=entries[i]; memmove(entries+cache->lru+1, entries+cache->lru, (i-cache->lru) * sizeof(entries[0])); entries[cache->lru]=tmp; } } /** A simple hash. * Copies the significant bits ' ' .. 'Z' (or, really, \\x20 .. \\x60) of * at most 6 bytes of \a stg into a packed bitfield, so that 30bits are * used. */ cache_value_t cch___string_to_cv(const char *stg) { union { cache_value_t cv; struct { unsigned int c0:5; unsigned int c1:5; unsigned int c2:5; unsigned int c3:5; unsigned int c4:5; unsigned int c5:5; unsigned int ignore_me:2; }; } __attribute__((packed)) result; result.cv=0; if (*stg) { result.c0 = *(stg++) - 0x20; if (*stg) { result.c1 = *(stg++) - 0x20; if (*stg) { result.c2 = *(stg++) - 0x20; if (*stg) { result.c3 = *(stg++) - 0x20; if (*stg) { result.c4 = *(stg++) - 0x20; if (*stg) { result.c5 = *(stg++) - 0x20; } } } } } } return result.cv; } /** -. 
* */ int cch__hash_find(struct cache_t *cache, const char *key, cache_value_t *data) { int status; cache_value_t id; int i; id=cch___string_to_cv(key); DEBUGP("looking for %lX = %s", id, key); if (cch__find(cache, id, &i, NULL, NULL) == 0 && strcmp(key, cache->entries[i]->data) == 0) { *data = cache->entries[i]->hash_data; DEBUGP("found %s=%ld", key, *data); status=0; } else status=ENOENT; return status; } /** -. * */ int cch__hash_add(struct cache_t *cache, const char *key, cache_value_t value) { int status; cache_value_t id; id=cch___string_to_cv(key); STOPIF( cch__add(cache, id, key, strlen(key), NULL), NULL); cache->entries[ cache->lru ]->hash_data = value; ex: return status; } fsvs-fsvs-1.2.12/src/cache.h000066400000000000000000000106661453631713700155630ustar00rootroot00000000000000/************************************************************************ * Copyright (C) 2005-2009 Philipp Marek. * * This program is free software; you can redistribute it and/or modify * it under the terms of the GNU General Public License version 3 as * published by the Free Software Foundation. ************************************************************************/ #ifndef __CACHE_H__ #define __CACHE_H__ #include "helper.h" /** \file * Cache header file. */ /** Type of data we're caching; must be size-compatible with a pointer, as * such is stored in some cases (eg ops__build_path()). * */ typedef long cache_value_t; /** What an internal cache entry looks like. * Is more or less a buffer with (allocated) length; the real length is * normally specified via some \\0 byte, by the caller. (A string.) */ struct cache_entry_t { /** ID of entry */ cache_value_t id; /** User-data for hashes */ cache_value_t hash_data; /** Length of data */ int len; #if 0 /** Measurement of accesses */ short accessed; #endif /** Copy of data. */ char data[1]; }; #define CACHE_DEFAULT (4) /** Cache structure. * The more \b active an entry is, the more at the start of the array. 
* * If a \c struct \ref cache_t is allocated, its \c .max member should be * set to the default \ref CACHE_DEFAULT value. * * For a \c struct \ref cache_t* the function \ref cch__new_cache() must be * used. */ struct cache_t { /** For how many entries is space allocated? */ int max; /** How many entries are used. */ int used; /** Which entry was the last accessed. * * If the array of entries looked like this, with \c B accessed after \c * C after \c D: * \dot * digraph { * rank=same; * D -> C -> B -> Z -> Y -> ppp -> E; * ppp [label="..."]; * B [label="B=LRU", style=bold]; * } * \enddot * After setting a new entry \c A it looks like that: * \dot * digraph { * rank=same; * D -> C -> B -> A -> Y -> ppp -> E; * ppp [label="..."]; * A [label="A=LRU", style=bold]; * } * \enddot * */ int lru; /** Cache entries, \c NULL terminated. */ struct cache_entry_t *entries[CACHE_DEFAULT+1]; }; /** Adds a copy of the given data (\a id, \a data with \a len) to the \a * cache; return the new allocated data pointer in \a copy. * */ int cch__add(struct cache_t *cache, cache_value_t id, const char *data, int len, char **copy); /** Find an entry, return index and/or others. */ int cch__find(struct cache_t *cache, cache_value_t id, int *index, char **data, int *len); /** Copy the given data into the given cache entry. */ int cch__entry_set(struct cache_entry_t **cache, cache_value_t id, const char *data, int len, int copy_old_data, char **copy); /** Look for the same \a id in the \a cache, and overwrite or append the * given data. */ int cch__set_by_id(struct cache_t *cache, cache_value_t id, const char *data, int len, int copy_old_data, char **copy); /** Makes the given index the head of the LRU list. */ void cch__set_active(struct cache_t *cache, int index); /** Create a new \a cache, with a user-defined size. * * I'd liked to do something like * \code * static struct cache_t *cache=cch__new_cache(32); * \endcode * but that couldn't return error codes (eg. \c ENOMEM). 
* We'd need something like exceptions .... * * So I take the easy route with an inline function. Additional cost: a * single "test if zero". * * \note Another way could have been: * \code * static struct cache_t *cache; * static int status2=cch__new_cache(&cache, 32); * * if (status2) { status=status2; goto ex; } * * ex: * return status; * \endcode * But that's not exactly "better", and still does a "test if zero" on each * run. * * */ __attribute__((gnu_inline, always_inline)) static inline int cch__new_cache(struct cache_t **cache, int max) { int status, len; status=0; if (!*cache) { len= sizeof(struct cache_entry_t*)*(max-CACHE_DEFAULT)+ sizeof(struct cache_t); STOPIF( hlp__alloc( cache, len), NULL); memset(*cache, 0, len); (*cache)->max=max; } ex: return status; } /** Interpret the \a cache as a hash, look for the \a key, returning the * \ref cache_entry_t::hash_data or \c ENOENT. */ int cch__hash_find(struct cache_t *cache, const char *key, cache_value_t *data); /** Interpret the \a cache as a hash and store the given \a value to the \a * key. */ int cch__hash_add(struct cache_t *cache, const char *key, cache_value_t value); #endif fsvs-fsvs-1.2.12/src/cat.c000066400000000000000000000037461453631713700152630ustar00rootroot00000000000000/************************************************************************ * Copyright (C) 2008-2009 Philipp Marek. * * This program is free software; you can redistribute it and/or modify * it under the terms of the GNU General Public License version 3 as * published by the Free Software Foundation. ************************************************************************/ #include #include #include "global.h" #include "waa.h" #include "revert.h" #include "url.h" #include "est_ops.h" /** \file * \ref cat action. 
* * \todo \code * fsvs cat [-r rev] [-u URLname] path * fsvs cat [-u URLname:rev] path * \endcode * */ /** * \addtogroup cmds * * \section cat * * \code * fsvs cat [-r rev] path * \endcode * * Fetches a file from the repository, and outputs it to \c STDOUT. * If no revision is specified, it defaults to BASE, ie. the current local * revision number of the entry. * */ /** -. * Main function. */ int cat__work(struct estat *root, int argc, char *argv[]) { int status; char **normalized; struct estat *sts; struct svn_stream_t *output; svn_error_t *status_svn; status=0; STOPIF_CODE_ERR( argc != 1, EINVAL, "!Exactly a single path must be given."); STOPIF_CODE_ERR( opt_target_revisions_given > 1, EINVAL, "!At most a single revision is allowed."); STOPIF( waa__find_common_base(argc, argv, &normalized), NULL); STOPIF( url__load_list(NULL, 0), NULL); STOPIF( waa__input_tree( root, NULL, NULL), NULL); STOPIF( ops__traverse(root, normalized[0], OPS__FAIL_NOT_LIST, 0, &sts), NULL); current_url=sts->url; STOPIF_CODE_ERR( !current_url, EINVAL, "!For this entry no URL is known."); STOPIF( url__open_session(NULL, NULL), NULL); STOPIF_SVNERR( svn_stream_for_stdout, (&output, global_pool)); STOPIF( rev__get_text_to_stream( normalized[0], opt_target_revisions_given ? opt_target_revision : sts->repos_rev, DECODER_UNKNOWN, output, NULL, NULL, NULL, global_pool), NULL); ex: return status; } fsvs-fsvs-1.2.12/src/cat.h000066400000000000000000000007621453631713700152630ustar00rootroot00000000000000/************************************************************************ * Copyright (C) 2005-2008 Philipp Marek. * * This program is free software; you can redistribute it and/or modify * it under the terms of the GNU General Public License version 3 as * published by the Free Software Foundation. ************************************************************************/ #ifndef __CAT_H__ #define __CAT_H__ /** \file * \ref cat action header file. 
*/ work_t cat__work; #endif fsvs-fsvs-1.2.12/src/checkout.c000066400000000000000000000075541453631713700163220ustar00rootroot00000000000000/************************************************************************ * Copyright (C) 2007-2009 Philipp Marek. * * This program is free software; you can redistribute it and/or modify * it under the terms of the GNU General Public License version 3 as * published by the Free Software Foundation. ************************************************************************/ #include #include #include #include #include #include #include "url.h" #include "waa.h" #include "helper.h" #include "commit.h" #include "export.h" /** \file * \ref checkout action * */ /** \addtogroup cmds * * \section checkout * * \code * fsvs checkout [path] URL [URLs...] * \endcode * * Sets one or more URLs for the current working directory (or the * directory \c path), and does an \ref checkout of these URLs. * * Example: * \code * fsvs checkout . http://svn/repos/installation/machine-1/trunk * \endcode * * The distinction whether a directory is given or not is done based on the * result of URL-parsing -- if it looks like an URL, it is used as an URL. * \n Please mind that at most a single path is allowed; as soon as two * non-URLs are found an error message is printed. * * If no directory is given, \c "." is used; this differs from the usual * subversion usage, but might be better suited for usage as a recovery * tool (where versioning \c / is common). Opinions welcome. * * The given \c path must exist, and \b should be empty -- FSVS will * abort on conflicts, ie. if files that should be created already exist. * \n If there's a need to create that directory, please say so; patches * for some parameter like \c -p are welcome. * * For a format definition of the URLs please see the chapter \ref * url_format and the \ref urls and \ref update commands. * * Furthermore you might be interested in \ref o_softroot and \ref * howto_backup_recovery. * */ /** -. 
* Writes the given URLs into the WAA, and gets the files from the * repository. */ int co__work(struct estat *root, int argc, char *argv[]) { int status; int l; char *path; time_t delay_start; path=NULL; /* The allocation uses calloc(), so current_rev is initialized to 0. */ STOPIF( url__allocate(argc+1), NULL); /* Append URLs. */ for(l=0; larg=path ? path : "."; STOPIF( waa__save_cwd( &path, NULL, 0), NULL); STOPIF( waa__create_working_copy(path), NULL); free(path); /* We don't use the loop above, because the user might give the same URL * twice - and we'd overwrite the fetched files. */ for(l=0; lcurrent_rev = target_revision; STOPIF( ci__set_revision(root, target_revision), NULL); printf("Checked out %s at revision\t%ld.\n", urllist[l]->url, urllist[l]->current_rev); } /* Store where we are ... */ delay_start=time(NULL); STOPIF( url__output_list(), NULL); STOPIF( waa__output_tree(root), NULL); STOPIF( hlp__delay(delay_start, DELAY_CHECKOUT), NULL); ex: return status; } fsvs-fsvs-1.2.12/src/checkout.h000066400000000000000000000011111453631713700163060ustar00rootroot00000000000000/************************************************************************ * Copyright (C) 2006-2008 Philipp Marek. * * This program is free software; you can redistribute it and/or modify * it under the terms of the GNU General Public License version 3 as * published by the Free Software Foundation. ************************************************************************/ #ifndef __CHECKOUT_H__ #define __CHECKOUT_H__ #include "actions.h" #include "global.h" /** \file * Header file for the \ref checkout action. */ /** \ref checkout action. */ work_t co__work; #endif fsvs-fsvs-1.2.12/src/checksum.c000066400000000000000000000746501453631713700163200ustar00rootroot00000000000000/************************************************************************ * Copyright (C) 2005-2009 Philipp Marek. 
* * This program is free software; you can redistribute it and/or modify * it under the terms of the GNU General Public License version 3 as * published by the Free Software Foundation. ************************************************************************/ #include #include #include #include #include #include #include "checksum.h" #include "helper.h" #include "global.h" #include "est_ops.h" #include "waa.h" /** \file * CRC, manber functions. */ #define MAPSIZE (32*1024*1024) /** CRC table. * We calculate it once, and reuse it. */ struct t_manber_parms { AC_CV_C_UINT32_T values[256]; }; /** Everything needed to calculate manber-hashes out of a stream. * */ struct t_manber_data { /** The entry this calculation is for. */ struct estat *sts; /** The stream we're filtering. */ svn_stream_t *input; /** Start of the current block. */ off_t last_fpos; /** The current position in the file. Is always >= \a last_fpos. */ off_t fpos; /** MD5-Context of full file. */ apr_md5_ctx_t full_md5_ctx; /** MD5 of full file. */ md5_digest_t full_md5; /** MD5-Context of current block. */ apr_md5_ctx_t block_md5_ctx; /** MD5 of last block. */ md5_digest_t block_md5; /** The file descriptor where the manber-block-MD5s will be written to. */ int manber_fd; /** The internal manber-state. */ AC_CV_C_UINT32_T state; /** The previous manber-state. */ AC_CV_C_UINT32_T last_state; /** Count of bytes in backtrack buffer. */ int bktrk_bytes; /** The last byte in the rotating backtrack-buffer. */ int bktrk_last; /** The backtrack buffer. */ unsigned char backtrack[CS__MANBER_BACKTRACK]; /** Flag to see whether we're in a zero-bytes block. * If there are large blocks with only \c \\0 in them, we don't CRC * or MD5 them - just output as zero blocks with a MD5 of \c \\0*16. * Useful for sparse files. */ int data_bits; }; /** The precalculated CRC-table. */ struct t_manber_parms manber_parms; /** \b The Manber-structure. 
* Currently only a single instance of manber-hashing runs at once, * so we simply use a static structure. */ static struct t_manber_data cs___manber; /** The write format string for \ref md5s. */ const char cs___mb_wr_format[]= "%s %08x %10llu %10llu\n"; /** The read format string for \ref md5s. */ const char cs___mb_rd_format[]= "%*s%n %x %llu %llu\n"; /** The maximum line length in \ref md5s : * - MD5 as hex (constant-length), * - state as hex (constant-length), * - offset of block, * - length of block, * - \\n, * - \\0 * */ #define MANBER_LINELEN (APR_MD5_DIGESTSIZE*2+1 + 8+1 + 10+1 +10+1 + 1) /** Initializes a Manber-data structure from a struct \a estat. */ int cs___manber_data_init(struct t_manber_data *mbd, struct estat *sts); /** Returns the position of the last byte of a manber-block. */ int cs___end_of_block(const unsigned char *data, int maxlen, int *eob, struct t_manber_data *mb_f); /** Hex-character to ascii. * Faster than sscanf(). * Returns -1 on error. */ static int cs__hex2val(char ch) { /* I thought a bit whether I should store the values+1, ie. keep most of * the array as 0 - but that doesn't save any memory, it only takes more * time. * Sadly the various is*() functions (like isxdigit()) don't seem to * include that information yet, and I couldn't find some table in the * glibc sources. * (I couldn't find anything in that #define mess, TBH.) */ static const signed char values[256]={ /* 0x00 */ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, /* 0x20 = space ... */ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, /* 0x30 = "012345" */ +0, 1, 2, 3, 4, 5, 6, 7, 8, 9, -1, -1, -1, -1, -1, -1, /* 0x40 = "@ABCD" ... */ -1, 10, 11, 12, 13, 14, 15, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, /* 0x60 = "`abcd" ... 
*/ -1, 10, 11, 12, 13, 14, 15, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, /* 0x80 */ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, }; /* To avoid having problems with "negative" characters */ return values[ ch & 0xff ]; } /** -. * Faster than sscanf(). * Returns -1 on error. */ inline int cs__two_ch2bin(char *stg) { return (cs__hex2val(stg[0]) << 4) | (cs__hex2val(stg[1]) << 0); } /** -. * Exactly the right number of characters must be present. */ int cs__char2md5(const char *input, char **eos, md5_digest_t md5) { int i, status, x, y; status=0; for(i=0; i= sizeof(stg)/sizeof(stg[0])) last=0; cur=stg[last]; return cs__md5tohex(md5, cur); } /** Finish manber calculations. * * Calculates the full-file MD5 hash, and copies it into the associated * struct \a estat . - see comment at \a cs__new_manber_filter() . */ int cs___finish_manber(struct t_manber_data *mb_f) { int status; status=0; STOPIF( apr_md5_final(mb_f->full_md5, & mb_f->full_md5_ctx), "apr_md5_final failed"); if (mb_f->sts) memcpy(mb_f->sts->md5, mb_f->full_md5, sizeof(mb_f->sts->md5)); mb_f->sts=NULL; ex: return status; } /** * -. * \param sts Which entry to check * \param fullpath The path to the file (optionally, else \c NULL). If the * file has been checked already and fullpath is \c NULL, a debug message * can write \c (null), as then even the name calculation * is skipped. * \param result is set to \c 0 for identical to old and \c >0 for * changed. 
* As a special case this function returns \c <0 for don't know * if the file is unreadable due to a \c EACCESS. * * \note Performance optimization * In normal circumstances not the whole file has to be read to get the * result. On update a checksum is written for each manber-block of about * 128k (but see \ref CS__APPROX_BLOCKSIZE_BITS); as soon as one is seen as * changed the verification is stopped. * */ int cs__compare_file(struct estat *sts, char *fullpath, int *result) { int i, status, fh; unsigned length_mapped, map_pos, hash_pos; off_t current_pos; struct cs__manber_hashes mbh_data; unsigned char *filedata; int do_manber; char *cp; struct sstat_t actual; md5_digest_t old_md5 = { 0 }; static struct t_manber_data mb_dat; /* Default is "don't know". */ if (result) *result = -1; /* It doesn't matter whether we test this or old_rev_mode_packed - if * they're different, this entry was replaced, and we never get here. */ if (S_ISDIR(sts->st.mode)) return 0; fh=-1; /* hash already done? */ if (sts->change_flag != CF_UNKNOWN) { DEBUGP("change flag for %s: %d", fullpath, sts->change_flag); goto ret_result; } status=0; if (!fullpath) STOPIF( ops__build_path(&fullpath, sts), NULL); DEBUGP("checking for modification on %s", fullpath); DEBUGP("hashing %s",fullpath); memcpy(old_md5, sts->md5, sizeof(old_md5)); /* We'll open and read the file now, so the additional lstat() doesn't * really hurt - and it makes sure that we see the current values (or at * least the _current_ ones :-). */ STOPIF( hlp__lstat(fullpath, &actual), NULL); if (S_ISREG(actual.mode)) { do_manber=1; /* Open the file and read the stream from there, comparing the blocks * as necessary. * If a difference is found, stop, and mark file as different. */ /* If this call returns ENOENT, this entry simply has no md5s-file. * We'll have to MD5 it completely. 
*/ if (actual.size < CS__MIN_FILE_SIZE) do_manber=0; else { status=cs__read_manber_hashes(sts, &mbh_data); if (status == ENOENT) do_manber=0; else STOPIF(status, "reading manber-hash data for %s", fullpath); } hash_pos=0; STOPIF( cs___manber_data_init(&mb_dat, sts), NULL ); /* We map windows of the file into main memory. Never more than 256MB. */ current_pos=0; fh=open(fullpath, O_RDONLY); /* We allow a single special case on error handling: EACCES, which * could simply mean that the file has mode 000. */ if (fh<0) { /* The debug statement might change errno, so we have to save the * value. */ status=errno; DEBUGP("File %s is unreadable: %d", fullpath, status); if (status == EACCES) { status=0; goto ex; } /* Can that happen? */ if (!status) status=EBUSY; STOPIF(status, "open(\"%s\", O_RDONLY) failed", fullpath); } status=0; while (current_pos < actual.size) { if (actual.size-current_pos < MAPSIZE) length_mapped=actual.size-current_pos; else length_mapped=MAPSIZE; DEBUGP("mapping %u bytes from %llu", length_mapped, (t_ull)current_pos); filedata=mmap(NULL, length_mapped, PROT_READ, MAP_SHARED, fh, current_pos); STOPIF_CODE_ERR( filedata == MAP_FAILED, errno, "comparing the file %s failed (mmap)", fullpath); map_pos=0; while (map_posmd5[0] ^= 0x1; i=-2; break; } DEBUGP("block #%u ok...", hash_pos); hash_pos++; /* If this gets true (which it should never), we must not * print the hash values etc. ... The index [hash_pos] is outside * the array boundaries. */ if (hash_pos > mbh_data.count) goto changed; } /* We have to reset the blocks even if we have no manber hashes ... * so the eg. data_bits value gets reset. 
*/ STOPIF( cs___end_of_block(NULL, 0, NULL, &mb_dat), NULL ); map_pos+=i; } STOPIF_CODE_ERR( munmap((void*)filedata, length_mapped) == -1, errno, "unmapping of file failed"); current_pos+=length_mapped; if (i==-2) break; } STOPIF( cs___finish_manber( &mb_dat), NULL); } else if (S_ISLNK(sts->st.mode)) { STOPIF( ops__link_to_string(sts, fullpath, &cp), NULL); apr_md5(sts->md5, cp, strlen(cp)); } else { DEBUGP("nothing to hash for %s", fullpath); } sts->change_flag = memcmp(old_md5, sts->md5, sizeof(sts->md5)) == 0 ? CF_NOTCHANGED : CF_CHANGED; DEBUGP("change flag for %s set to %d", fullpath, sts->change_flag); ret_result: if (result) *result = sts->change_flag == CF_CHANGED; DEBUGP("comparing %s=%d: md5 %s", fullpath, sts->change_flag == CF_CHANGED, cs__md5tohex_buffered(sts->md5)); status=0; ex: if (fh>=0) close(fh); return status; } /** -. * If a file has been committed, this is where various checksum-related * uninitializations can happen. */ int cs__set_file_committed(struct estat *sts) { int status; status=0; if (S_ISDIR(sts->st.mode)) goto ex; /* Now we can drop the check flag. 
*/ sts->flags &= ~(RF_CHECK | RF_PUSHPROPS); sts->repos_rev=SET_REVNUM; ex: return status; } /* * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * Stream functions and callbacks * for manber-filtering * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * */ int cs___manber_data_init(struct t_manber_data *mbd, struct estat *sts) { int status; memset(mbd, 0, sizeof(*mbd)); mbd->manber_fd=-1; BUG_ON(mbd->sts, "manber structure already in use!"); mbd->sts=sts; mbd->fpos= mbd->last_fpos= 0; apr_md5_init(& mbd->full_md5_ctx); STOPIF( cs___end_of_block(NULL, 0, NULL, mbd), NULL ); ex: return status; } void cs___manber_init(struct t_manber_parms *mb_d) { int i; AC_CV_C_UINT32_T p; if (mb_d->values[0]) return; /* Calculate the CS__MANBER_BACKTRACK power of the prime */ /* TODO: speedup like done in RSA - log2(power) */ for(p=1,i=0; ivalues[i]=(i*p) & CS__MANBER_MODULUS; } /* This function finds the position which * * a b c d e f g h i j k l m n * |..Block.1..| |..Block.2... * Here it would return h; ie. the number of characters * found in this data block belonging to the current block. * * If the whole data buffer belongs to the current block -1 is returned * in *eob. * */ int cs___end_of_block(const unsigned char *data, int maxlen, int *eob, struct t_manber_data *mb_f) { int status; int i; status=0; if (!data) { DEBUGP("manber reinit"); mb_f->state=0; mb_f->last_state=0; mb_f->bktrk_bytes=0; mb_f->bktrk_last=0; mb_f->data_bits=0; apr_md5_init(& mb_f->block_md5_ctx); memset(mb_f->block_md5, 0, sizeof(mb_f->block_md5)); cs___manber_init(&manber_parms); goto ex; } *eob = -1; i=0; /* If we haven't had at least this many bytes in the current block, * read up to this amount. */ while (ibktrk_bytes < CS__MANBER_BACKTRACK) { /* In this initialization, we simply \c OR the bytes together. * On block end detection we see if this is at least a * \c CS__MANBER_BACKTRACK bytes long zero-byte block. 
*/ mb_f->data_bits |= data[i]; mb_f->state = (mb_f->state * CS__MANBER_PRIME + data[i] ) % CS__MANBER_MODULUS; mb_f->backtrack[ mb_f->bktrk_last ] = data[i]; /* The reason why CS__MANBER_BACKTRACK must be a power of 2: * bitwise-AND is much faster than a modulo. * In this loop the & is redundant - in a new block we should * start from bktrk_last==0, but the AND is only 1 or 2 cycles, * and we hope that gcc optimizes that. */ mb_f->bktrk_last = ( mb_f->bktrk_last + 1 ) & (CS__MANBER_BACKTRACK - 1); mb_f->bktrk_bytes++; i++; } if (!mb_f->data_bits) { /* No bits in the data set - only zeroes so far. * Look for the next non-zero byte; there's a block border. */ /* memchr is the exact opposite of what we need. */ while (ilast_state gets the previous CRC, and this gets stored. * This is because the ->state has, on a block border, a lot of * zeroes (per definition); so we store the previous value, which * may be better suited for comparison. If the blocks are equal * up to byte N, they're equal up to N-1, too. */ /* This need not be calculated in the previous loop, as we do no * border-checking there. Only here, in this loop, * is the value needed. */ mb_f->last_state=mb_f->state; mb_f->state = (mb_f->state*CS__MANBER_PRIME + data[i] - manber_parms.values[ mb_f->backtrack[ mb_f->bktrk_last ] ] ) % CS__MANBER_MODULUS; mb_f->backtrack[ mb_f->bktrk_last ] = data[i]; mb_f->bktrk_last = ( mb_f->bktrk_last + 1 ) & (CS__MANBER_BACKTRACK - 1); /* This value has already been used. */ i++; /* special value ? */ if ( !(mb_f->state & CS__MANBER_BITMASK) ) { *eob=i; apr_md5_update(& mb_f->block_md5_ctx, data, i); apr_md5_final( mb_f->block_md5, & mb_f->block_md5_ctx); DEBUGP("manber found a border: %u %08X %08X %s", i, mb_f->last_state, mb_f->state, cs__md5tohex_buffered(mb_f->block_md5)); break; } } /* Update md5 up to current byte. 
*/ if (*eob == -1) apr_md5_update(& mb_f->block_md5_ctx, data, i); } /* Update file global information */ apr_md5_update(& mb_f->full_md5_ctx, data, i); mb_f->fpos += (*eob == -1) ? maxlen : *eob; ex: DEBUGP("on return at fpos=%llu: %08X (databits=%2x)", (t_ull)mb_f->fpos, mb_f->state, mb_f->data_bits); return status; } int cs___update_manber(struct t_manber_data *mb_f, const unsigned char *data, apr_size_t len) { int status; int eob, i; /* MD5 as hex (constant-length), * state as hex (constant-length), * offset of block, length of block, * \n, \0, reserve */ char buffer[MANBER_LINELEN+10]; char *filename; status=0; /* We tried to avoid doing this calculation for small files. * * But: this does not work. * As *on update* we don't know how many bytes we'll * have to process, and the buffer size is not specified, * we might be legitimately called with single-byte values. * * On my machine I get for commit/update requests of 100kB * (yes, 102400 bytes), so I thought to have at least some chance. * But: svn_ra_get_file uses 16k blocks ... * * A full solution would either have to * - know how many bytes we get (update; on commit we know), or to * - buffer all to-be-written-data unless we have more bytes * than limited. * The second variant might be easier. * * As we now check on end-of-stream for the size and remove the file if * necessary, this is currently deactivated. */ DEBUGP("got a block with %llu bytes", (t_ull)len); while (1) { #if 0 /* If first call, and buffer smaller than 32kB, avoid this calling ... 
*/ if (mb_f->fpos == 0 && len<32*1024) eob=-1; else #endif STOPIF( cs___end_of_block(data, len, &eob, mb_f), NULL ); if (eob == -1) { DEBUGP("block continues after %lu.", (unsigned long)mb_f->fpos); break; } data += eob; len -= eob; DEBUGP("block ends after %lu; size %lu bytes (border=%u).", (unsigned long)mb_f->fpos, (unsigned long)(mb_f->fpos - mb_f->last_fpos), eob); /* write new line to data file */ i=sprintf(buffer, cs___mb_wr_format, cs__md5tohex_buffered(mb_f->block_md5), mb_f->last_state, (t_ull)mb_f->last_fpos, (t_ull)(mb_f->fpos - mb_f->last_fpos)); BUG_ON(i > sizeof(buffer)-3, "Buffer too small - stack overrun"); if (mb_f->manber_fd == -1) { /* The file has not been opened yet. * Do it now. * */ STOPIF( ops__build_path(&filename, mb_f->sts), NULL); STOPIF( waa__open_byext(filename, WAA__FILE_MD5s_EXT, WAA__WRITE, & cs___manber.manber_fd), NULL ); DEBUGP("now doing manber-hashing for %s...", filename); } STOPIF_CODE_ERR( write( mb_f->manber_fd, buffer, i) != i, errno, "writing to manber hash file"); /* re-init manber state */ STOPIF( cs___end_of_block(NULL, 0, NULL, mb_f), NULL ); mb_f->last_fpos = mb_f->fpos; } ex: return status; } svn_error_t *cs___mnbs_close(void *baton); svn_error_t *cs___mnbs_read(void *baton, char *data, apr_size_t *len) { int status; svn_error_t *status_svn; struct t_manber_data *mb_f=baton; status=0; /* Get the bytes, then process them. */ STOPIF_SVNERR( svn_stream_read, (mb_f->input, data, len) ); if (*len && data) STOPIF( cs___update_manber(mb_f, (unsigned char*)data, *len), NULL); else STOPIF_SVNERR( cs___mnbs_close, (baton)); ex: RETURN_SVNERR(status); } svn_error_t *cs___mnbs_write(void *baton, const char *data, apr_size_t *len) { int status; svn_error_t *status_svn; struct t_manber_data *mb_f=baton; status=0; /* We first write to the output stream, to know how many bytes could * be processed. Then we use that bytes. 
* If we just processed the incoming bytes, then fewer would get written, * and the remaining would be re-done we'd hash them twice. */ STOPIF_SVNERR( svn_stream_write, (mb_f->input, data, len) ); if (*len && data) STOPIF( cs___update_manber(mb_f, (unsigned char*)data, *len), NULL); else STOPIF_SVNERR( cs___mnbs_close, (baton)); ex: RETURN_SVNERR(status); } svn_error_t *cs___mnbs_close(void *baton) { int status; svn_error_t *status_svn; struct t_manber_data *mb_f=baton; status=0; /* If there have been less than CS__MIN_FILE_SIZE bytes, we * don't keep that file. */ if (mb_f->manber_fd != -1) { STOPIF( waa__close(mb_f->manber_fd, mb_f->fpos < CS__MIN_FILE_SIZE ? ECANCELED : status != 0), NULL ); mb_f->manber_fd=-1; } if (mb_f->input) { STOPIF_SVNERR( svn_stream_close, (mb_f->input) ); mb_f->input=NULL; } STOPIF( cs___finish_manber(mb_f), NULL); ex: RETURN_SVNERR(status); } /** -. * On commit and update we run the stream through a filter, to create the * manber-hash data (see \ref md5s) on the fly. * * \note * We currently give the caller no chance to say whether he wants the * full MD5 or not. * If we ever need to let him decide, he must either * - save the old MD5 * - or (better!) says where the MD5 should be stored - this pointer * would replace \c mb_f->full_md5 . * */ int cs__new_manber_filter(struct estat *sts, svn_stream_t *stream_input, svn_stream_t **filter_stream, apr_pool_t *pool) { int status; svn_stream_t *new_str; char *filename; status=0; STOPIF( cs___manber_data_init(&cs___manber, sts), "manber-data-init failed"); cs___manber.input=stream_input; new_str=svn_stream_create(&cs___manber, pool); STOPIF_ENOMEM( !new_str ); svn_stream_set_read(new_str, cs___mnbs_read); svn_stream_set_write(new_str, cs___mnbs_write); svn_stream_set_close(new_str, cs___mnbs_close); STOPIF( ops__build_path(&filename, sts), NULL); DEBUGP("initiating MD5 streaming for %s", filename); *filter_stream=new_str; /* The file with the hashes for the blocks is not immediately opened. 
* only when we detect that we have at least a minimum file size * we do the whole calculation.*/ ex: return status; } /** \defgroup md5s_overview Overview * \ingroup perf * * When we compare a file with its last version, we read all the * manber-hashes into memory. * When we use them on commit for constructing a delta stream, we'll * have to have them sorted and/or indexed for fast access; then we * can't read them from disk or something like that. * * * \section md5s_perf Performance considerations * * \subsection md5s_count Count of records, memory requirements * * We need about 16+4+8 (28, with alignment 32) bytes per hash value, * and that's for approx. 128kB. So a file of * 1M needs 8*32 => 512 bytes, * 1G needs 8k*32 => 512 kB, * 1T needs 8M*32 => 512 MB. * If this is too much, you'll have to increase CS__APPROX_BLOCKSIZE_BITS * and use bigger blocks. * * (Although, if you've got files greater than 1TB, you'll have other * problems than getting >512MB RAM) * And still, there's (nearly) always a swap space ... * * * \subsection md5s_alloc Allocation * * To avoid the costs of the unused 4 bytes (which accumulate to 32MB * on a 1TB file) and to get the manber-hashes better into * L2 cache (only the 32bit value - the rest is looked up after we * found the correct hash) we allocate 3 memory regions - * one for each data. * * * \subsection Reading for big files * * It's a tiny bit disturbing that we read the whole file at once and not * as-needed. * On the hypothetical 1TB file we'll be reading 512MB before seeing that * the first block had changed... * Perhaps this should be expanded later, to say something like "already * open, return next N entries" - file handle, last N, etc. should be stored * in struct cs__manber_hashes. * For now I'll just ignore that, as a (practical) 4GB file (a DVD) * will lead to 2MB read - and on average we'll find a difference after * 2GB more reading. * * A better way than read()/lseek() would be mmap of binary files. 
* But that wouldn't allow having the data compressed. * I don't know if that matters; the 1TB file has * 8M lines * 60 Bytes => 480MB on ASCII-data. * * If we assume that the CRCs and lengths can be compressed away, * we still need the offsets and MD5s, so we would still end with * 8M * 24 Bytes, ie. at least 192MB. * I don't think that's necessary. * * * \section Hash-collisions on big files * * Please note that on 1TB files you'll have 8M hash blocks, which * have a significant collision-chance on 32bit hash values! * (see the discussion about rsync and its bugs in block change detection). * We'd either have to use bigger blocks or a bigger hash value - * the second might be easier and better, esp. as files this big should * be on a 64bit platform, where a 64bit hash won't be slow. * * * \section The last block * * The last block in a file ends by definition *not* on a manber-block- * border (or only by chance). This block is not written into the md5s file. * The data is verified by the full-file MD5 that we've been calculating. * * \todo When we do an rsync-copy from the repository, we'll have to look at * that again! Either we write the last block too, or we'll have to ask for * the last few bytes extra. * * */ /** -. * \param sts The entry whose md5-data to load * \param data An allocated struct \c cs__manber_hashes; its arrays get * allocated, and, on error, deallocated. * If no error code is returned, freeing of the arrays has to be done by * the caller. * */ int cs__read_manber_hashes(struct estat *sts, struct cs__manber_hashes *data) { int status; char *filename; int fh, i, spp; unsigned estimated, count; t_ull length, start; char buffer[MANBER_LINELEN+10], *cp; AC_CV_C_UINT32_T value; status=0; memset(data, 0, sizeof(*data)); fh=-1; STOPIF( ops__build_path(&filename, sts), NULL); /* It's ok if there's no md5s file; simply return ENOENT.
*/ status=waa__open_byext(filename, WAA__FILE_MD5s_EXT, WAA__READ, &fh); if (status == ENOENT) goto ex; STOPIF( status, "reading md5s-file for %s", filename); DEBUGP("reading manber-hashes for %s", filename); /* We don't know in advance how many lines (i.e. manber-hashes) * there will be. * So we just interpolate from the file size and the (near-constant) * line-length and add a bit for good measure. * The rest is freed as soon as we've got all entries. */ length=lseek(fh, 0, SEEK_END); STOPIF_CODE_ERR( length==-1, errno, "Cannot get length of file %s", filename); STOPIF_CODE_ERR( lseek(fh, 0, SEEK_SET), errno, "Cannot seek in file %s", filename); /* We add 5%; due to integer arithmetic the factors have to be separated */ estimated = length*21/(MANBER_LINELEN*20)+4; DEBUGP("estimated %u manber-hashes from filelen %llu", estimated, length); /* Allocate memory ... */ STOPIF( hlp__calloc( &data->hash, estimated, sizeof(*data->hash)), NULL); STOPIF( hlp__calloc( & data->md5, estimated, sizeof( *data->md5)), NULL); STOPIF( hlp__calloc( & data->end, estimated, sizeof( *data->end)), NULL); count=0; while (1) { i=read(fh, buffer, sizeof(buffer)-1); STOPIF_CODE_ERR( i==-1, errno, "reading manber-hash data"); if (i==0) break; /* ensure strchr() stops */ buffer[i]=0; cp=strchr(buffer, '\n'); STOPIF_CODE_ERR(!cp, EINVAL, "line %u is invalid", count+1 ); /* reposition to start of next line */ STOPIF_CODE_ERR( lseek(fh, -i // start of this line + (cp-buffer) // end of this line + 1, // over \n SEEK_CUR) == -1, errno, "lseek back failed"); *cp=0; i=sscanf(buffer, cs___mb_rd_format, &spp, &value, &start, &length); STOPIF_CODE_ERR( i != 3, EINVAL, "cannot parse line %u for %s", count+1, filename); data->hash[count]=value; data->end[count]=start+length; buffer[spp]=0; STOPIF( cs__char2md5(buffer, NULL, data->md5[count]), NULL); count++; BUG_ON(count > estimated, "lines should have syntax errors - bug in estimation."); } data->count=count; DEBUGP("read %u entry tuples.", count); if 
(estimated-count > 3) { DEBUGP("reallocating..."); /* reallocate memory = free */ STOPIF( hlp__realloc( &data->hash, count*sizeof(*data->hash)), NULL); STOPIF( hlp__realloc( & data->md5, count*sizeof(* data->md5)), NULL); STOPIF( hlp__realloc( & data->end, count*sizeof(* data->end)), NULL); } /* The index is not always needed. Don't generate now. */ ex: if (status) { IF_FREE(data->hash); IF_FREE(data->md5); IF_FREE(data->end); } if (fh != -1) STOPIF_CODE_ERR( close(fh) == -1, errno, "Cannot close manber hash file (fd=%d)", fh); return status; } fsvs-fsvs-1.2.12/src/checksum.h000066400000000000000000000040671453631713700163200ustar00rootroot00000000000000/************************************************************************ * Copyright (C) 2005-2009 Philipp Marek. * * This program is free software; you can redistribute it and/or modify * it under the terms of the GNU General Public License version 3 as * published by the Free Software Foundation. ************************************************************************/ #ifndef __CHECKSUM_H__ #define __CHECKSUM_H__ #include "global.h" #include "interface.h" #include /** \file * CRC, manber function header file. */ /** This structure is used for one big file. * It stores the CRCs and MD5s of the manber-blocks of this file. */ struct cs__manber_hashes { /** The manber-hashes */ AC_CV_C_UINT32_T *hash; /** The MD5-digests */ md5_digest_t *md5; /** The position of the first byte of the next block, ie. * N for a block which ends at byte N-1. */ off_t *end; /** The index into the above arrays - sorted by manber-hash. */ AC_CV_C_UINT32_T *index; /** Number of manber-hash-entries stored */ unsigned count; }; /** Checks whether a file has changed. */ int cs__compare_file(struct estat *sts, char *fullpath, int *result); /** Puts the hex string of \a md5 into \a dest, and returns \a dest. */ char* cs__md5tohex(const md5_digest_t md5, char *dest); /** Converts an MD5 digest to an ASCII string in a self-managed buffer. 
*/ char *cs__md5tohex_buffered(const md5_digest_t md5); /** Converts an ASCII string to an MD5 digest. */ int cs__char2md5(const char *input, char **eos, md5_digest_t md5); /** Callback for the checksum layer. */ int cs__set_file_committed(struct estat *sts); /** Creates a \c svn_stream_t pipe, which writes the checksums of the * manber hash blocks to the \ref md5s file. */ int cs__new_manber_filter(struct estat *sts, svn_stream_t *stream_input, svn_stream_t **filter_stream, apr_pool_t *pool); /** Reads the \ref md5s file into memory. */ int cs__read_manber_hashes(struct estat *sts, struct cs__manber_hashes *data); /** Hex-character pair to ascii. */ int cs__two_ch2bin(char *stg); #endif fsvs-fsvs-1.2.12/src/commit.c000066400000000000000000001077731453631713700160110ustar00rootroot00000000000000/************************************************************************ * Copyright (C) 2005-2009,2015 Philipp Marek. * * This program is free software; you can redistribute it and/or modify * it under the terms of the GNU General Public License version 3 as * published by the Free Software Foundation. ************************************************************************/ /** \file * \ref commit action. * * This is a bit hairy in that the order in which we process files (sorted * by inode, not in the directory structure) is not allowed * for a subversion editor. * * We have to read the complete tree, get the changes and store what we * want to do, and send these changes in a second run. * * * \section commit_2_revs Committing two revisions at once * Handling identical files; using hardlinks; creating two revisions on * commit. * * There are some use-cases where we'd like to store the data only a single * time in the repository, so that multiple files are seen as identical: * - hardlinks should be stored as hardlink; but subversion doesn't allow * something like that currently. 
Using some property pointing to the * "original" file would be some way; but for compatibility with other * subversion clients the data would have to be here, too. \n * Using copy-from would mess up the history of the file. * - Renames of changed files. Subversion doesn't accept copy-from links to * new files; we'd have to create two revisions: one with the data, and * the other with copyfrom information (or the other way around). * */ /** \addtogroup cmds * * \section commit * * \code * fsvs commit [-m "message"|-F filename] [-v] [-C [-C]] [PATH [PATH ...]] * \endcode * * Commits (parts of) the current state of the working copy into the * repository. * * * \subsection Example * * The working copy is \c /etc , and it is set up and committed already. \n * Then \c /etc/hosts and \c /etc/inittab got modified. Since these are * non-related changes, you'd like them to be in separate commits. * * So you simply run these commands: * \code * fsvs commit -m "Added some host" /etc/hosts * fsvs commit -m "Tweaked default runlevel" /etc/inittab * \endcode * * If the current directory is \c /etc you could even drop the \c /etc/ in * front, and use just the filenames. * * Please see \ref status for explanations on \c -v and \c -C . \n * For advanced backup usage see also \ref FSVS_PROP_COMMIT_PIPE * "the commit-pipe property". */ #include #include #include #include #include #include #include #include #include #include #include #include #include #include #include "global.h" #include "status.h" #include "checksum.h" #include "waa.h" #include "cache.h" #include "est_ops.h" #include "props.h" #include "options.h" #include "ignore.h" #include "cp_mv.h" #include "racallback.h" #include "url.h" #include "helper.h" /** Typedef needed for \a ci___send_user_props(). See there. */ typedef svn_error_t *(*change_any_prop_t) (void *baton, const char *name, const svn_string_t *value, apr_pool_t *pool); /** Counts the entries committed on the current URL. 
*/ unsigned committed_entries; /** Remembers the to-be-made path in the repository, in UTF-8. */ char *missing_path_utf8; /** The precalculated length. */ int missing_path_utf8_len; /** -. * */ int ci__set_revision(struct estat *this, svn_revnum_t rev) { uint32_t i; /* should be benchmarked. * perhaps use better locality by doing levels at once. */ if (this->url == current_url) this->repos_rev=rev; if (S_ISDIR(this->st.mode)) for(i=0; ientry_count; i++) ci__set_revision(this->by_inode[i], rev); return 0; } /** Callback for successful commits. * * This is the only place that gets the new revision number * told. * * \c svn_ra.h does not tell whether these strings are really UTF8. I think * they must be ASCII, except if someone uses non-ASCII-user names ... * which nobody does. */ svn_error_t * ci__callback ( svn_revnum_t new_revision, const char *utf8_date, const char *utf8_author, void *baton) { struct estat *root UNUSED=baton; int status; status=0; if (opt__verbosity() > VERBOSITY_VERYQUIET) printf("committed revision\t%ld on %s as %s\n", new_revision, utf8_date, utf8_author); /* recursively set the new revision */ // STOPIF( ci__set_revision(root, new_revision), NULL); current_url->current_rev = new_revision; //ex: RETURN_SVNERR(status); } /** -. * * This callback is called by input_tree and build_tree. */ int ci__action(struct estat *sts) { int status; char *path; STOPIF( ops__build_path(&path, sts), NULL); STOPIF_CODE_ERR( sts->flags & RF_CONFLICT, EBUSY, "!The entry \"%s\" is still marked as conflict.", path); if (sts->entry_status || (sts->flags & RF___COMMIT_MASK) ) ops__mark_parent_cc(sts, entry_status); STOPIF( st__progress(sts), NULL); ex: return status; } /** Removes the flags saying that this entry was copied, recursively. * * Does stop on new copy-bases. * * Is needed because a simple "cp -a" wouldn't even go down into * the child-entries - there's nothing to do there! 
*/ void ci___unset_copyflags(struct estat *root) { struct estat **sts; /* Delete the RF_ADD and RF_COPY_BASE flag, but set the FS_NEW status * instead. */ root->flags &= ~(RF_ADD | RF_COPY_BASE | RF_COPY_SUB); /* Set the current url for this entry. */ root->url=current_url; if (ops__has_children(root)) { sts=root->by_inode; while (*sts) { if (! ( (*sts)->flags & RF_COPY_BASE) ) { ci___unset_copyflags(*sts); } sts++; } } } #define TEST_FOR_OUT_OF_DATE(_sts, _s_er, ...) \ do { if (_s_er) { \ if (_s_er->apr_err == SVN_ERR_FS_TXN_OUT_OF_DATE) \ { \ char *filename; \ if (ops__build_path(&filename, _sts)) \ filename="(internal error)"; \ STOPIF( EBUSY, \ "!The entry \"%s\" is out-of-date;\n" \ "Please update your working copy.", \ filename); \ goto ex; \ } \ STOPIF( EBUSY, __VA_ARGS__); \ } } while (0) /* Convenience function; checks for \c FSVS_PROP_COMMIT_PIPE. * By putting that here we can avoid sending most of the parameters. */ // inline int send_a_prop(void *baton, int store_encoder, struct estat *sts, change_any_prop_t function, char *key, svn_string_t *value, apr_pool_t *pool) { int status; svn_error_t *status_svn; status=0; /* We could tell the parent whether we need this property value, to avoid * copying and freeing; but it's no performance problem, I think. */ if (store_encoder && strcmp(key, propval_commitpipe) == 0) { if (value) STOPIF( hlp__strdup( &sts->decoder, value->data), NULL); else sts->decoder=NULL; } status_svn=function(baton, key, value, pool); TEST_FOR_OUT_OF_DATE(sts, status_svn, "send user props"); ex: return status; } /** Send the user-defined properties. * * The property table is left cleaned up, ie. any deletions that were * ordered by the user have been done -- no properties with \c * prp__prop_will_be_removed() will be here. * * If \a store_encoder is set, \c sts->decoder gets set from the value of * the commit-pipe. * * \c auto-props from groupings are sent, too. 
* */ int ci___send_user_props(void *baton, struct estat *sts, change_any_prop_t function, int store_encoder, apr_pool_t *pool) { int status; datum key, value; hash_t db; svn_string_t *str; db=NULL; /* First do auto-props. */ STOPIF( ops__apply_group(sts, &db, pool), NULL); /* Do user-defined properties. * Could return ENOENT if none. */ if (db) { status=prp__first(db, &key); while (status==0) { STOPIF( prp__fetch(db, key, &value), NULL); if (hlp__is_special_property_name(key.dptr)) { DEBUGP("ignoring %s - should not have been taken?", key.dptr); } else if (prp__prop_will_be_removed(value)) { DEBUGP("removing property %s", key.dptr); STOPIF( send_a_prop(baton, store_encoder, sts, function, key.dptr, NULL, pool), NULL); STOPIF( hsh__register_delete(db, key), NULL); } else { DEBUGP("sending property %s=(%d)%.*s", key.dptr, value.dsize, value.dsize, value.dptr); str=svn_string_ncreate(value.dptr, value.dsize-1, pool); STOPIF( send_a_prop(baton, store_encoder, sts, function, key.dptr, str, pool), NULL); } status=prp__next( db, &key, &key); } /* Anything but ENOENT spells trouble. */ if (status != ENOENT) STOPIF(status, NULL); status=0; } /* A hsh__close() does the garbage collection. */ STOPIF( hsh__close(db, status), NULL); ex: return status; } /** Send the meta-data-properties for \a baton. * * We hope that group/user names are ASCII; the names of "our" properties * are known, and contain no characters above \\x80. * * We get the \a function passed, because subversion has different property * setters for files and directories. * * If \a props is not \c NULL, we return the properties' handle. */ svn_error_t *ci___set_props(void *baton, struct estat *sts, change_any_prop_t function, apr_pool_t *pool) { const char *ccp; svn_string_t *str; int status; svn_error_t *status_svn; status=0; /* The unix-mode property is not sent for a symlink, as there's no * lchmod(). 
*/ if (!S_ISLNK(sts->st.mode)) { /* mode */ str=svn_string_createf (pool, "0%03o", (int)(sts->st.mode & 07777)); status_svn=function(baton, propname_umode, str, pool); if (status_svn) goto error; } /* owner */ str=svn_string_createf (pool, "%lu %s", (unsigned long)sts->st.uid, hlp__get_uname(sts->st.uid, "") ); status_svn=function(baton, propname_owner, str, pool); if (status_svn) goto error; /* group */ str=svn_string_createf (pool, "%lu %s", (unsigned long)sts->st.gid, hlp__get_grname(sts->st.gid, "") ); status_svn=function(baton, propname_group, str, pool); if (status_svn) goto error; /* mtime. Extra const char * needed. */ ccp=(char *)svn_time_to_cstring ( apr_time_make( sts->st.mtim.tv_sec, sts->st.mtim.tv_nsec/1000), pool); str=svn_string_create(ccp, pool); status_svn=function(baton, propname_mtime, str, pool); if (status_svn) goto error; ex: RETURN_SVNERR(status); error: TEST_FOR_OUT_OF_DATE(sts, status_svn, "set meta-data"); goto ex; } /** Commit function for non-directory entries. * * Here we handle devices, symlinks and files. * * The given \a baton is already for the item; we got it from \a add_file * or \a open_file. * We just have to put data in it. * */ svn_error_t *ci__nondir(const svn_delta_editor_t *editor, struct estat *sts, void *baton, apr_pool_t *pool) { svn_txdelta_window_handler_t delta_handler; void *delta_baton; svn_error_t *status_svn; svn_stream_t *s_stream; char *cp; char *filename; int status; svn_string_t *stg; apr_file_t *a_stream; svn_stringbuf_t *str; struct encoder_t *encoder; int transfer_text, has_manber; hash_t db; str=NULL; a_stream=NULL; s_stream=NULL; encoder=NULL; STOPIF( ops__build_path(&filename, sts), NULL); /* The only "real" information symlinks have is the target * they point to. We don't set properties which won't get used on * update anyway - that saves a tiny bit of space. * What we need to send (for symlinks) are the user-defined properties. * */ /* Should we possibly send the properties only if changed? 
Would not make * much difference, bandwidth-wise. */ /* if ((sts->flags & RF_PUSHPROPS) || (sts->entry_status & (FS_META_CHANGED | FS_NEW)) ) */ STOPIF( ci___send_user_props(baton, sts, editor->change_file_prop, 1, pool), NULL); STOPIF_SVNERR( ci___set_props, (baton, sts, editor->change_file_prop, pool) ); /* By now we should know if our file really changed. */ BUG_ON( sts->entry_status & FS_LIKELY ); /* However, sending fulltext only if it really changed DOES make * a difference if you do not have a gigabit pipe to your * server. ;) * The RF_ADD was replaced by FS_NEW above. */ DEBUGP("%s: status %s; flags %s", sts->name, st__status_string(sts), st__flags_string_fromint(sts->flags)); transfer_text= sts->entry_status & (FS_CHANGED | FS_NEW | FS_REMOVED); /* In case the file is identical to the original copy source, we need * not send the data to the server. * BUT we have to store the correct MD5 locally; as the source file may * have changed, we re-calculate it - that has the additional advantage * that the manber-hashes get written, for faster comparison next time. * * I thought about using cs__compare_file() in the local check sequence * to build a new file; but if anything goes wrong later, the file would * be overwritten with the wrong data. * That's true if something goes wrong here, too. * * Another idea would be to build the new manber file with another name, * and only rename if it actually was committed ... but there's a race, * too. And we couldn't abort the check on the first changed bytes, and * we'd need doubly the space, ... * * TODO: run the whole fsvs commit process against an unionfs, and use * that for local transactions. 
*/ if (!transfer_text && !(sts->flags & RF___IS_COPY)) { DEBUGP("hasn't changed, and no copy."); } else { has_manber=0; switch (sts->st.mode & S_IFMT) { case S_IFLNK: STOPIF( ops__link_to_string(sts, filename, &cp), NULL); STOPIF( hlp__local2utf8(cp, &cp, -1), NULL); /* It is not defined whether svn_stringbuf_create copies the string, * takes the character pointer into the pool, or whatever. * Knowing people wanted. */ str=svn_stringbuf_create(cp, pool); break; case S_IFBLK: case S_IFCHR: /* See above */ /* We only put ASCII in this string */ str=svn_stringbuf_create( ops__dev_to_filedata(sts), pool); break; case S_IFREG: STOPIF( apr_file_open(&a_stream, filename, APR_READ, 0, pool), "open file \"%s\" for reading", filename); s_stream=svn_stream_from_aprfile (a_stream, pool); /* We need the local manber hashes and MD5s to detect changes; * the remote values would be needed for delta transfers. */ has_manber= (sts->st.size >= CS__MIN_FILE_SIZE); if (has_manber) STOPIF( cs__new_manber_filter(sts, s_stream, &s_stream, pool), NULL ); /* That's needed only for actually putting the data in the * repository - for local re-calculating it isn't. */ if (transfer_text && sts->decoder) { /* The user-defined properties have already been sent, so the * propval_commitpipe would already be cleared; we don't need to * check for prp__prop_will_be_removed(). */ STOPIF( hlp__encode_filter(s_stream, sts->decoder, 0, filename, &s_stream, &encoder, pool), NULL ); encoder->output_md5= &(sts->md5); IF_FREE(sts->decoder); } break; default: BUG("invalid/unknown file type 0%o", sts->st.mode); } /* for special nodes */ if (str) s_stream=svn_stream_from_stringbuf (str, pool); BUG_ON(!s_stream); if (transfer_text) { DEBUGP("really sending ..."); STOPIF_SVNERR( editor->apply_textdelta, (baton, NULL, // checksum of old file, pool, &delta_handler, &delta_baton)); /* If we're transferring the data, we always get an MD5 here. We can * take the local value, if it had to be encoded. 
*/ STOPIF_SVNERR( svn_txdelta_send_stream, (s_stream, delta_handler, delta_baton, sts->md5, pool) ); DEBUGP("after sending encoder=%p", encoder); } else { DEBUGP("doing local MD5."); /* For a non-changed entry, simply pass the data through the MD5 (and, * depending on filesize, the manber filter). * If the manber filter already does the MD5, we don't need it a second * time. */ STOPIF( hlp__stream_md5(s_stream, has_manber ? NULL : sts->md5), NULL); } STOPIF_SVNERR( svn_stream_close, (s_stream) ); /* If it's a special entry (device/symlink), set the special flag. */ if (str) { stg=svn_string_create(propval_special, pool); STOPIF_SVNERR( editor->change_file_prop, (baton, propname_special, stg, pool) ); } /* If the entry was encoded, send the original MD5 as well. */ if (encoder) { cp=cs__md5tohex_buffered(sts->md5); DEBUGP("Sending original MD5 as %s", cp); stg=svn_string_create(cp, pool); STOPIF_SVNERR( editor->change_file_prop, (baton, propname_origmd5, stg, pool) ); STOPIF( prp__open_byestat(sts, GDBM_WRCREAT | HASH_REMEMBER_FILENAME, &db), NULL); STOPIF( prp__set(db, propname_origmd5, cp, -1), NULL); STOPIF( hsh__close(db, 0), NULL); } } STOPIF( cs__set_file_committed(sts), NULL); ex: if (a_stream) { /* As this file was opened read only, we can dismiss any errors. * We could give them only if everything else worked ... */ apr_file_close(a_stream); } RETURN_SVNERR(status); } /** Commit function for directories. 
* */ svn_error_t *ci__directory(const svn_delta_editor_t *editor, struct estat *dir, void *dir_baton, apr_pool_t *pool) { void *baton; int status; struct estat *sts; apr_pool_t *subpool; uint32_t i, exists_now; char *filename; char *utf8_filename, *tmp; svn_error_t *status_svn; char *src_path; svn_revnum_t src_rev; struct sstat_t stat; struct cache_entry_t *utf8fn_plus_missing; int utf8fn_len; status=0; utf8fn_plus_missing=NULL; subpool=NULL; DEBUGP("commit_dir with baton %p", dir_baton); for(i=0; ientry_count; i++) { sts=dir->by_inode[i]; /* The flags are stored persistently; we have to check whether this * entry shall be committed. */ if ( (sts->flags & RF___COMMIT_MASK) && sts->do_this_entry) { /* Did we change properties since last commit? Then we have something * to do. */ if (sts->flags & RF_PUSHPROPS) sts->entry_status |= FS_PROPERTIES; } else if (sts->entry_status) { /* The entry_status is set depending on the do_this_entry already; * if it's not 0, it's got to be committed. */ /* Maybe a child needs attention (with FS_CHILD_CHANGED), so we have * to recurse. */ } else /* Completely ignore item if nothing to be done. 
*/ continue; /* clear an old pool */ if (subpool) apr_pool_destroy(subpool); /* get a fresh pool */ STOPIF( apr_pool_create_ex(&subpool, pool, NULL, NULL), "no pool"); STOPIF( ops__build_path(&filename, sts), NULL); /* As the path needs to be canonical we strip the ./ in front, and * possibly have to prepend some path (see option mkdir_base) */ STOPIF( hlp__local2utf8(filename+2, &utf8_filename, -1), NULL ); if (missing_path_utf8) { utf8fn_len=strlen(utf8_filename); STOPIF( cch__entry_set(&utf8fn_plus_missing, 0, NULL, missing_path_utf8_len + 1 + utf8fn_len + 1, 0, &tmp), NULL); strcpy(tmp, missing_path_utf8); tmp[missing_path_utf8_len]='/'; strcpy(tmp + missing_path_utf8_len +1, utf8_filename); utf8_filename=tmp; } DEBUGP("%s: action (%s), updated mode 0%o, flags %X, filter %d", filename, st__status_string(sts), sts->st.mode, sts->flags, ops__allowed_by_filter(sts)); if (ops__allowed_by_filter(sts)) STOPIF( st__status(sts), NULL); exists_now= !(sts->flags & RF_UNVERSION) && ( (sts->entry_status & (FS_NEW | FS_CHANGED | FS_META_CHANGED)) || (sts->flags & (RF_ADD | RF_PUSHPROPS | RF_COPY_BASE)) ); if ( (sts->flags & RF_UNVERSION) || (sts->entry_status & FS_REMOVED) ) { DEBUGP("deleting %s", sts->name); /* that's easy :-) */ STOPIF_SVNERR( editor->delete_entry, (utf8_filename, SVN_INVALID_REVNUM, dir_baton, subpool) ); committed_entries++; if (!exists_now) { DEBUGP("%s=%d doesn't exist anymore", sts->name, i); /* remove from data structures */ STOPIF( ops__delete_entry(dir, NULL, i, UNKNOWN_INDEX), NULL); STOPIF( waa__delete_byext(filename, WAA__FILE_MD5s_EXT, 1), NULL); STOPIF( waa__delete_byext(filename, WAA__PROP_EXT, 1), NULL); i--; continue; } } /* If there is something to do, get a baton. * Else we're finished with this one. */ if (!exists_now && !(sts->entry_status & FS_CHILD_CHANGED)) continue; /* If we would send some data, verify the state of the entry. * Maybe it's a temporary file, which is already deleted.
* As we'll access this entry in a few moments, the additional lookup * doesn't hurt much. * * (Although I'd be a bit happier if I found a way to do that better ... * currently I split by new/existing entry, and then by * directory/everything else. * Maybe I should change that logic to *only* split by entry type. * But then I'd still have to check for directories ...) * * So "Just Do It" (tm). */ /* access() would possibly be a bit lighter, but doesn't work * for broken symlinks. */ /* TODO: Could we use FS_REMOVED here?? */ if (hlp__lstat(filename, &stat)) { /* If an entry doesn't exist, but *should*, as it's marked RF_ADD, * we fail (currently). * Could be a warning with a default action of STOP. */ STOPIF_CODE_ERR( sts->flags & RF_ADD, ENOENT, "Entry %s should be added, but doesn't exist.", filename); DEBUGP("%s doesn't exist, ignoring (%d)", filename, errno); continue; } /* In case this entry is a directory that's only done because of its * children we shouldn't change its known data - we'd silently change * eg. the mtime. */ if (sts->do_this_entry && ops__allowed_by_filter(sts)) { sts->st=stat; DEBUGP("set st for %s", sts->name); } /* We need a baton. */ baton=NULL; /* If this entry has the RF_ADD or RF_COPY_BASE flag set, or is FS_NEW, * it is new (as far as subversion is concerned). * If this is an implicitly copied entry, subversion already knows * about it, so use open_* instead of add_*. */ if ((sts->flags & (RF_ADD | RF_COPY_BASE) ) || (sts->entry_status & FS_NEW) ) { /* New entry, fetch handle via add_* below. */ } else { status_svn= (S_ISDIR(sts->st.mode) ? editor->open_directory : editor->open_file) ( utf8_filename, dir_baton, current_url->current_rev, subpool, &baton); DEBUGP("opening %s with base %llu", filename, (t_ull)current_url->current_rev); TEST_FOR_OUT_OF_DATE(sts, status_svn, "%s(%s) returns %d", S_ISDIR(sts->st.mode) ? 
"open_directory" : "open_file", filename, status_svn->apr_err); DEBUGP("baton for mod %s %p (parent %p)", sts->name, baton, dir_baton); } if (!baton) { DEBUGP("new %s (parent %p)", sts->name, dir_baton); /* Maybe that test should be folded into cm__get_source -- that would * save the assignments in the else-branch. * But we'd have to check for ENOENT again - it's not allowed if * RF_COPY_BASE is set, but possible if this flag is not set. So we'd * not actually get much. */ if (sts->flags & RF_COPY_BASE) { status=cm__get_source(sts, filename, &src_path, &src_rev, 1); BUG_ON(status == ENOENT, "copy but not copied?"); STOPIF(status, NULL); } else { /* Set values to "not copied". */ src_path=NULL; src_rev=SVN_INVALID_REVNUM; } /* TODO: src_sts->entry_status newly added? Then remember for second * commit! * */ DEBUGP("adding %s with %s:%ld", filename, src_path, src_rev); /** \name STOPIF_SVNERR_INDIR */ status_svn = (S_ISDIR(sts->st.mode) ? editor->add_directory : editor->add_file) (utf8_filename, dir_baton, src_path, src_rev, subpool, &baton); TEST_FOR_OUT_OF_DATE(sts, status_svn, "%s(%s, source=\"%s\"@%s) returns %d", S_ISDIR(sts->st.mode) ? "add_directory" : "add_file", filename, src_path, hlp__rev_to_string(src_rev), status_svn->apr_err); DEBUGP("baton for new %s %p (parent %p)", sts->name, baton, dir_baton); /* Copied entries need their information later in ci__nondir(). */ if (!(sts->flags & RF_COPY_BASE)) { sts->flags &= ~RF_ADD; sts->entry_status |= FS_NEW | FS_META_CHANGED; } } committed_entries++; DEBUGP("doing changes, flags=%X", sts->flags); /* Now we have a baton. Do changes. */ if (S_ISDIR(sts->st.mode)) { STOPIF_SVNERR( ci__directory, (editor, sts, baton, subpool) ); STOPIF_SVNERR( editor->close_directory, (baton, subpool) ); } else { STOPIF_SVNERR( ci__nondir, (editor, sts, baton, subpool) ); STOPIF_SVNERR( editor->close_file, (baton, NULL, subpool) ); } /* If it's copy base, we need to clean up all flags below; else we * just remove an (ev. 
set) add-flag. * We cannot do that earlier, because eg. ci__nondir() needs this * information. */ if (sts->flags & RF_COPY_BASE) ci___unset_copyflags(sts); /* Now this paths exists in this URL. */ if (url__current_has_precedence(sts->url)) { DEBUGP("setting URL of %s", filename); sts->url=current_url; sts->repos_rev = SET_REVNUM; } } /* When a directory has been committed (with all changes), * we can drop the check flag. * If we only do parts of the child list, we must set it, so that we know * to check for newer entries on the next status. (The directory * structure must possibly be built in the repository, so we have to do * each layer, and after a commit we take the current timestamp -- so we * wouldn't see changes that happened before the partly commit.) */ if (! (dir->do_this_entry && ops__allowed_by_filter(dir)) ) dir->flags |= RF_CHECK; else dir->flags &= ~RF_CHECK; /* That this entry belongs to this URL has already been set by the * parent loop. */ /* Given this example: * $ mkdir -p dir/sub/subsub * $ touch dir/sub/subsub/file * $ fsvs ci dir/sub/subsub * * Now "sub" gets committed because of its children; as having a * directory *without* meta-data in the repository is worse than having * valid data set, we push the meta-data properties for *new* * directories, and otherwise if they should be done and got something * changed. */ /* Regarding the "dir->parent" check: If we try to send properties for * the root directory, we get "out of date" ... even if nothing changed. * So don't do that now, until we know a way to make that work. * * Problem case: user creates an empty directory in the repository "svn * mkdir url:///", then sets this directory as base, and we try to commit - * "it's empty, after all". * Needing an update is not nice - but maybe what we'll have to do. */ if ((dir->do_this_entry && ops__allowed_by_filter(dir) && dir->parent && /* Are there properties to push? 
*/ (dir->entry_status & (FS_META_CHANGED | FS_PROPERTIES))) || (dir->entry_status & FS_NEW)) { STOPIF_SVNERR( ci___set_props, (dir_baton, dir, editor->change_dir_prop, pool) ); STOPIF( ci___send_user_props(dir_baton, dir, editor->change_dir_prop, 0, pool), NULL); } ex: if (subpool) apr_pool_destroy(subpool); RETURN_SVNERR(status); } /** Start an editor, to get a commit message. * * We look for \c $EDITOR and \c $VISUAL -- to fall back on good ol' vi. */ int ci__getmsg(char **filename) { char *editor_cmd, *cp; int l,status; apr_file_t *af; status=0; STOPIF( waa__get_tmp_name( NULL, filename, &af, global_pool), NULL); /* we close the file, as an editor might delete the file and * write a new. */ STOPIF( apr_file_close(af), "close commit message file"); editor_cmd=getenv("EDITOR"); if (!editor_cmd) editor_cmd=getenv("VISUAL"); if (!editor_cmd) editor_cmd="vi"; l=strlen(editor_cmd) + 1 + strlen(opt_commitmsgfile) + 1; STOPIF( hlp__strmnalloc(l, &cp, editor_cmd, " ", opt_commitmsgfile, NULL), NULL); l=system(cp); STOPIF_CODE_ERR(l == -1, errno, "fork() failed"); STOPIF_CODE_ERR(l, WEXITSTATUS(l), "spawned editor exited with %d, signal %d", WEXITSTATUS(l), WIFSIGNALED(l) ? WTERMSIG(l) : 0); status=0; ex: return status; } /** Creates base directories from \c missing_path_utf8, if necessary, and * calls \c ci__directory(). * * \a current_missing points into \c missing_path_utf8_len, to the current * path spec; \a editor, \a root and \a dir_baton are as in * ci__directory(). * * As the number of directories created this way is normally 0, and for * typical non-zero use I'd believe about 3 or 4 levels (maximum), we don't * use an extra recursion pool here. */ svn_error_t *ci___base_dirs(char *current_missing, const svn_delta_editor_t *editor, struct estat *root, void *dir_baton) { int status; svn_error_t *status_svn; char *delim; void *child_baton; status=0; if (current_missing && *current_missing) { /* Create one level of the hierarchy. 
 */
		delim=strchr(current_missing, '/');
		if (delim)
		{
			*delim=0;
			delim++;
			/* There must not be a "/" at the end, or two slashes. */
			BUG_ON(!*delim || *delim=='/');
		}

		DEBUGP("adding %s", missing_path_utf8);
		STOPIF_SVNERR( editor->add_directory,
				(missing_path_utf8, dir_baton, NULL, SVN_INVALID_REVNUM,
				 current_url->pool, &child_baton));

		if (delim)
			delim[-1]='/';

		STOPIF_SVNERR( ci___base_dirs,
				(delim, editor, root, child_baton));

		STOPIF_SVNERR( editor->close_directory,
				(child_baton, current_url->pool));
	}
	else
		STOPIF_SVNERR( ci__directory,
				(editor, root, dir_baton, current_url->pool));

ex:
	RETURN_SVNERR(status);
}


/** The main commit function.
 *
 * It does as much setup as possible before traversing the tree - to find
 * errors (no network, etc.) as soon as possible.
 *
 * The message file gets opened here to verify its existence,
 * and to get a handle to it. If we're doing \c chdir()s later we don't
 * mind; the open handle lets us read when we need it. And the contents
 * are cached only as long as necessary. */
int ci__work(struct estat *root, int argc, char *argv[])
{
	int status;
	svn_error_t *status_svn;
	const svn_delta_editor_t *editor;
	void *edit_baton;
	void *root_baton;
	struct stat st;
	int commitmsg_fh, commitmsg_is_temp;
	char *utf8_commit_msg;
	char **normalized;
	const char *url_name;
	time_t delay_start;
	char *missing_dirs;


	status=0;
	status_svn=NULL;
	edit_baton=NULL;
	editor=NULL;
	/* This cannot be used uninitialized, but gcc doesn't know */
	commitmsg_fh=-1;

	opt__set_int(OPT__CHANGECHECK, PRIO_MUSTHAVE,
			opt__get_int(OPT__CHANGECHECK) | CHCHECK_DIRS | CHCHECK_FILE);

	/* This must be done before opening the file. */
	commitmsg_is_temp=!opt_commitmsg && !opt_commitmsgfile;
	if (commitmsg_is_temp)
		STOPIF( ci__getmsg(&opt_commitmsgfile), NULL);

	/* If there's a message file, open it here. (Bug out early, if
	 * necessary).
	 *
	 * This must be done before waa__find_common_base(), as this does a
	 * chdir() and would make relative paths invalid.
*/ if (opt_commitmsgfile) { commitmsg_fh=open(opt_commitmsgfile, O_RDONLY); STOPIF_CODE_ERR( commitmsg_fh<0, errno, "cannot open file %s", opt_commitmsgfile); } STOPIF( waa__find_common_base(argc, argv, &normalized), NULL); /* Check if there's an URL defined before asking for a message */ STOPIF( url__load_nonempty_list(NULL, 0), NULL); if (urllist_count==1) current_url=urllist[0]; else { url_name=opt__get_string(OPT__COMMIT_TO); STOPIF_CODE_ERR( !url_name || !*url_name, EINVAL, "!Which URL would you like to commit to?\n" "Please choose one (config option \"commit_to\")."); STOPIF( url__find_by_name(url_name, ¤t_url), "!No URL named \"%s\" could be found.", url_name); } STOPIF_CODE_ERR( current_url->is_readonly, EROFS, "!Cannot commit to \"%s\",\n" "because it is marked read-only.", current_url->url); STOPIF(ign__load_list(NULL), NULL); STOPIF( url__open_session(NULL, &missing_dirs), NULL); /* Warn early. */ if (missing_dirs) STOPIF_CODE_ERR( opt__get_int(OPT__MKDIR_BASE) == OPT__NO, ENOENT, "!The given URL \"%s\" does not exist (yet).\n" "The missing directories \"%s\" could possibly be created, if\n" "you enable the \"mkdir_base\" option (with \"-o mkdir_base=yes\").", current_url->url, missing_dirs); opt__set_int( OPT__CHANGECHECK, PRIO_MUSTHAVE, opt__get_int(OPT__CHANGECHECK) | CHCHECK_DIRS | CHCHECK_FILE); /* This is the first step that needs some wall time - descending * through the directories, reading inodes */ STOPIF( waa__read_or_build_tree(root, argc, normalized, argv, NULL, 0), NULL); if (opt_commitmsgfile) { STOPIF_CODE_ERR( fstat(commitmsg_fh, &st) == -1, errno, "cannot estimate size of %s", opt_commitmsgfile); if (st.st_size == 0) { /* We're not using some mapped memory. 
*/ DEBUGP("empty file"); opt_commitmsg=""; } else { DEBUGP("file is %llu bytes", (t_ull)st.st_size); opt_commitmsg=mmap(NULL, st.st_size, PROT_READ, MAP_SHARED, commitmsg_fh, 0); STOPIF_CODE_ERR(!opt_commitmsg, errno, "mmap commit message (%s, %llu bytes)", opt_commitmsgfile, (t_ull)st.st_size); } close(commitmsg_fh); } if (!*opt_commitmsg) { STOPIF_CODE_ERR( opt__get_int(OPT__EMPTY_MESSAGE)==OPT__NO, EINVAL, "!Empty commit messages are defined as invalid, " "see \"empty_message\" option."); } STOPIF( hlp__local2utf8(opt_commitmsg, &utf8_commit_msg, -1), "Conversion of the commit message to utf8 failed"); if (opt__verbosity() > VERBOSITY_VERYQUIET) printf("Committing to %s\n", current_url->url); STOPIF_SVNERR( svn_ra_get_commit_editor, (current_url->session, &editor, &edit_baton, utf8_commit_msg, ci__callback, root, NULL, // apr_hash_t *lock_tokens, FALSE, // svn_boolean_t keep_locks, global_pool) ); if (opt_commitmsgfile && st.st_size != 0) STOPIF_CODE_ERR( munmap(opt_commitmsg, st.st_size) == -1, errno, "munmap()"); if (commitmsg_is_temp) STOPIF_CODE_ERR( unlink(opt_commitmsgfile) == -1, errno, "Cannot remove temporary message file %s", opt_commitmsgfile); /* The whole URL is at the same revision - per definition. */ STOPIF_SVNERR( editor->open_root, (edit_baton, current_url->current_rev, global_pool, &root_baton) ); /* Only children are updated, not the root. Do that here. */ if (ops__allowed_by_filter(root)) STOPIF( hlp__lstat( root->name, &root->st), NULL); committed_entries=0; if (missing_dirs) { STOPIF( hlp__local2utf8( missing_dirs, &missing_dirs, -1), NULL); /* As we're doing a lot of local->utf8 conversions we have to copy the * result. */ missing_path_utf8_len=strlen(missing_dirs); STOPIF( hlp__strnalloc(missing_path_utf8_len+1, &missing_path_utf8, missing_dirs), NULL); } /* This is the second step that takes time. */ STOPIF_SVNERR( ci___base_dirs, (missing_path_utf8, editor, root, root_baton)); /* If an error occurred, abort the commit. 
 */
	if (!status)
	{
		if (opt__get_int(OPT__EMPTY_COMMIT)==OPT__NO &&
				committed_entries==0)
		{
			if (opt__verbosity() > VERBOSITY_VERYQUIET)
				printf("Avoiding empty commit as requested.\n");
			goto abort_commit;
		}

		STOPIF_SVNERR( editor->close_directory, (root_baton, global_pool) );
		root_baton=NULL;

		STOPIF_SVNERR( editor->close_edit, (edit_baton, global_pool) );
		edit_baton=NULL;

		delay_start=time(NULL);

		/* Has to write new file, if commit succeeded. */
		if (!status)
		{
			/* We possibly have to use some generation counter:
			 * - write the URLs to a temporary file,
			 * - write the entries,
			 * - rename the temporary file.
			 * Although, if we're cut off anywhere, we're not consistent with
			 * the data.
			 * Just use unionfs - that's easier. */
			STOPIF( waa__output_tree(root), NULL);
			STOPIF( url__output_list(), NULL);
		}

		/* We do the delay here ... here we've got a chance that the second
		 * wrap has already happened because of the IO above. */
		STOPIF( hlp__delay(delay_start, DELAY_COMMIT), NULL);
	}

ex:
	STOP_HANDLE_SVNERR(status_svn);
ex2:
	if (status && edit_baton)
	{
abort_commit:
		/* If there has already something bad happened, it probably
		 * makes no sense checking the error code. */
		editor->abort_edit(edit_baton, global_pool);
	}

	return status;
}

fsvs-fsvs-1.2.12/src/commit.h

/************************************************************************
 * Copyright (C) 2005-2008 Philipp Marek.
 *
 * This program is free software;  you can redistribute it and/or modify
 * it under the terms of the GNU General Public License version 3 as
 * published by the Free Software Foundation.
 ************************************************************************/

#ifndef __COMMIT_H__
#define __COMMIT_H__

#include
#include "actions.h"

/** \file
 * \ref commit action header file. */

/** Mark entries' parents as to-be-traversed. */
action_t ci__action;

/* Main commit function.
 */
work_t ci__work;

/** Sets the given revision \a rev recursive on all entries correlating to
 * \a current_url. */
int ci__set_revision(struct estat *this, svn_revnum_t rev);

#endif

fsvs-fsvs-1.2.12/src/config.h.in

/************************************************************************
 * Copyright (C) 2006-2009 Philipp Marek.
 *
 * This program is free software;  you can redistribute it and/or modify
 * it under the terms of the GNU General Public License version 3 as
 * published by the Free Software Foundation.
 ************************************************************************/

#ifndef __CONFIG_H__
#define __CONFIG_H__

/** \file
 * \c Autoconf results storage. */

/** \defgroup compat Compatibility and interfaces
 *
 * For a reference on how to use FSVS on older systems, please see also
 * \ref howto_chroot. */

/** \defgroup compati Compilation-only
 * \ingroup compat */

/** \defgroup autoconf Autoconf results.
 * \ingroup compati
 *
 * Here are the results of \c configure stored.
 * They get defined by the system and are used to tell FSVS whether
 * optional parts can be activated or not. */
/** @{ */

/** Whether the valgrind headers were found.
 * Then some initializers can specifically mark areas as initialized. */
#undef HAVE_VALGRIND

/** If this is defined, some re-arrangements in struct-layout are made,
 * and additional bug-checking code may be included. */
#undef ENABLE_DEBUG

/** Whether gcov test-coverage is wanted. */
#undef ENABLE_GCOV

/** If set to 1, disable debug messages. */
#undef ENABLE_RELEASE

/** How many characters of the MD5(wc_path) are used to distinguish the WAA
 * paths. */
#undef WAA_WC_MD5_CHARS

#if WAA_WC_MD5_CHARS >=0 && WAA_WC_MD5_CHARS <=32
/* nothing, ok. */
#else
#error "WAA_WC_MD5_CHARS invalid."
#endif

/** OpenBSD has no locales support. */
#undef HAVE_LOCALES

/** Unsigned 32bit type.
 * The value of \c AC_CV_C_UINT32_T changed between autoconf 2.59e and 2.60.
 * Since 2.60 we get \c yes instead of the type.
 * And there's no \c HAVE_UINT32_T ...
 * I don't seem to get that to work properly.
 * So I changed configure.in to substitute \c yes to \c uint32_t. */
#undef HAVE_UINT32_T
/*
#if HAVE_UINT32_T
#include
#include
#endif
*/
#undef AC_CV_C_UINT32_T

/** Whether \c linux/types.h was found. */
#undef HAVE_LINUX_TYPES_H
/** Whether \c linux/unistd.h was found. */
#undef HAVE_LINUX_UNISTD_H

/** Whether \c dirfd() was found (\ref dir__get_dir_size()). */
#undef HAVE_DIRFD

/** Whether there's an additional microsecond field in struct stat. */
#undef HAVE_STRUCT_STAT_ST_MTIM

/** The chroot jail path given at configure time. */
#undef CHROOTER_JAIL

#undef NEED_ENVIRON_EXTERN

/** Comparison function definition (for \c qsort()) */
#undef HAVE_COMPARISON_FN_T
#ifndef HAVE_COMPARISON_FN_T
typedef int (*comparison_fn_t) (__const void *, __const void *);
#endif

#undef HAVE_O_DIRECTORY
#ifndef HAVE_O_DIRECTORY
#define O_DIRECTORY (0)
#endif

/** Does \c linux/kdev_t.h exist?
 * Needed for \a MAJOR() and \a MINOR() macros. */
#undef HAVE_LINUX_KDEV_T_H
/** Should we fake definitions? */
#undef ENABLE_DEV_FAKE
/** Error macro if no device definitions available. */
#undef DEVICE_NODES_DISABLED

#ifdef HAVE_LINUX_KDEV_T_H
#include <linux/kdev_t.h>
#else

#ifdef ENABLE_DEV_FAKE
/** \name fakedev Fake definitions, as reported with configure.
 * Taken from \c linux/kdev_t.h. */
/** @{ */
#define MAJOR(dev) ((dev)>>8)
#define MINOR(dev) ((dev) & 0xff)
#define MKDEV(ma,mi) ((ma)<<8 | (mi))
/** @} */

#else
/** No definitions, disable some code. */
#define DEVICE_NODES_DISABLED() BUG("No MAJOR(), MINOR() or MKDEV() found at configure time.")
#undef MAJOR
#undef MINOR
#undef MKDEV
#endif

#endif

/** @} */

/** i386 has the attribute fastcall; this is used for a few
 * small functions.
 */
#undef HAVE_FASTCALL
#ifdef HAVE_FASTCALL
#define FASTCALL __attribute__((fastcall))
#else
#define FASTCALL
#endif

/** Changing owner/group for symlinks possible? */
#undef HAVE_LCHOWN
/** Changing timestamp for symlinks? */
#undef HAVE_LUTIMES

/** For Solaris 10, thanks Peter. */
#ifndef NAME_MAX
#define NAME_MAX (FILENAME_MAX)
#endif

#undef HAVE_STRSEP
#ifndef HAVE_STRSEP
char * strsep (char **stringp, const char *delim);
#endif

#undef HAVE_FMEMOPEN
#ifdef HAVE_FMEMOPEN
#define ENABLE_DEBUGBUFFER 1
#else
#undef ENABLE_DEBUGBUFFER
#endif

/** Check for doors; needed for Solaris 10, thanks XXX */
#ifndef S_ISDOOR
#define S_ISDOOR(x) (0)
#endif

#endif

fsvs-fsvs-1.2.12/src/cp_mv.c

/************************************************************************
 * Copyright (C) 2007-2009 Philipp Marek.
 *
 * This program is free software;  you can redistribute it and/or modify
 * it under the terms of the GNU General Public License version 3 as
 * published by the Free Software Foundation.
 ************************************************************************/

#include
#include

#include "global.h"
#include "cp_mv.h"
#include "status.h"
#include "est_ops.h"
#include "url.h"
#include "checksum.h"
#include "options.h"
#include "props.h"
#include "cache.h"
#include "helper.h"
#include "waa.h"

/** \file
 * \ref cp and \ref mv actions.
 *
 * Various thoughts ...
 * - Can we construct relations between 2 new files?
 *   We'd just have to write the MD5 of the new files into the hash, then
 *   we'd find the first file on commit of the 2nd file ... and we see that
 *   the other one is new, too. \n
 *   But see \ref commit_2_revs "creating 2 revisions on commit".
 *
 * */

/**
 * \addtogroup cmds
 *
 * \anchor mv
 * \section cp
 *
 * \code
 * fsvs cp [-r rev] SRC DEST
 * fsvs cp dump
 * fsvs cp load
 * \endcode
 *
 * The \c copy command marks \c DEST as a copy of \c SRC at revision
 * \c rev, so that on the next commit of \c DEST the corresponding source
 * path is sent as copy source.
 *
 * The default value for \c rev is \c BASE, ie. the revision the \c SRC
 * (locally) is at.
 *
 * Please note that this command \b always works on a directory
 * \b structure - if you say to copy a directory, the \b whole structure is
 * marked as copy. That means that if some entries below the copy are
 * missing, they are reported as removed from the copy on the next commit.
 * \n (Of course it is possible to mark files as copied, too; non-recursive
 * copies are not possible, but can be emulated by having parts of the
 * destination tree removed.)
 *
 * \note TODO: There will be differences in the exact usage - \c copy will
 * try to run the \c cp command, whereas \c copied will just remember the
 * relation.
 *
 * If this command is used without parameters, the currently defined
 * relations are printed; please keep in mind that the \b key is the
 * destination name, ie. the 2nd line of each pair!
 *
 * The input format for \c load is newline-separated - first a \c SRC line,
 * followed by a \c DEST line, then a line with just a dot (".")
 * as delimiter. If you've got filenames with newlines or other special
 * characters, you have to give the paths as arguments.
 *
 * Internally the paths are stored relative to the working copy base
 * directory, and they're printed that way, too.
 *
 * Later definitions are \b appended to the internal database; to undo
 * mistakes, use the \ref uncopy action.
 *
 * \note Important: User-defined properties like \ref
 * FSVS_PROP_COMMIT_PIPE "fsvs:commit-pipe" are \b not copied to the
 * destinations, because of space/time issues (traversing through entire
 * subtrees, copying a lot of property-files) and because it's not sure
 * that this is really wanted. \b TODO: option for copying properties?
 *
 * \todo -0 like for xargs?
 *
 * \todo Are different revision numbers for \c load necessary? Should \c
 * dump print the source revision number?
 *
 * \todo Copying from URLs means update from there
 *
 * \note As subversion currently treats a rename as copy+delete, the \ref
 * mv command is an alias to \ref cp.
 *
 * If you have a need to give the filenames \c dump or \c load as first
 * parameter for copyfrom relations, give some path, too, as in
 * "./dump".
 *
 * \note The source is internally stored as URL with revision number, so
 * that operations like these \code
 * $ fsvs cp a b
 * $ rm a/1
 * $ fsvs ci a
 * $ fsvs ci b
 * \endcode
 * work - FSVS sends the old (too recent!) revision number as source, and
 * so the local filelist stays consistent with the repository. \n But it is
 * not implemented (yet) to give an URL as copyfrom source directly - we'd
 * have to fetch a list of entries (and possibly the data!) from the
 * repository.
 *
 * \todo Filter for dump (patterns?).
 * */

/**
 * \addtogroup cmds
 *
 * \section cpfd copyfrom-detect
 *
 * \code
 * fsvs copyfrom-detect [paths...]
 * \endcode
 *
 * This command tells FSVS to look through the new entries, and see
 * whether it can find some that seem to be copied from others already
 * known. \n
 * It will output a list with source and destination path and why it could
 * match.
 *
 * This is just for information purposes and doesn't change any FSVS state
 * (TODO: unless some option/parameter is set).
 *
 * The list format is on purpose incompatible with the \c load
 * syntax, as the best match normally has to be taken manually.
 *
 * \todo some parameter that just prints the "best" match, and outputs the
 * correct format.
 *
 * If \ref glob_opt_verb "verbose" is used, an additional value giving the
 * percentage of matching blocks, and the count of possibly copied entries
 * is printed.
 *
 * Example:
 * \code
 * $ fsvs copyfrom-list -v
 * newfile1
 *   md5:oldfileA
 * newfile2
 *   md5:oldfileB
 *   md5:oldfileC
 *   md5:oldfileD
 * newfile3
 *   inode:oldfileI
 *   manber=82.6:oldfileF
 *   manber=74.2:oldfileG
 *   manber=53.3:oldfileH
 *   ...
 * 3 copyfrom relations found.
 * \endcode
 *
 * The abbreviations are:
 *
 * - \e md5 \n
 *   The \b MD5 of the new file is identical to that of one or more already
 *   committed files; there is no percentage.
 *
 * - \e inode \n
 *   The \b device/inode number is identical to the given known entry; this
 *   could mean that the old entry has been renamed or hardlinked.
 *   \b Note: Not all filesystems have persistent inode numbers (eg. NFS) -
 *   so depending on your filesystems this might not be a good indicator!
 *
 * - \e name \n
 *   The entry has the same name as another entry.
 *
 * - \e manber \n
 *   Analysing files of similar size shows some percentage of
 *   (variable-sized) common blocks (ignoring the order of the
 *   blocks).
 *
 * - \e dirlist \n
 *   The new directory has similar files to the old directory.\n
 *   The percentage is (number_of_common_entries)/(files_in_dir1 +
 *   files_in_dir2 - number_of_common_entries).
 *
 * \note \b manber matching is not implemented yet.
 *
 * \note If too many possible matches for an entry are found, not all are
 * printed; only an indicator ... is shown at the end.
 *
 * */

/**
 * \addtogroup cmds
 *
 * \section uncp
 *
 * \code
 * fsvs uncopy DEST [DEST ...]
 * \endcode
 *
 * The \c uncopy command removes a \c copyfrom mark from the destination
 * entry. This will make the entry unknown again, so that it is reported
 * as \c New on the next invocations.
 *
 * Only the base of a copy can be un-copied; if a directory structure was
 * copied, and the given entry is just implicitly copied, this command
 * will return an error.
 *
 * This is not folded into \ref revert, because it's not clear whether \c
 * revert on copied, changed entries should restore the original copyfrom
 * data or remove the copy attribute; by using another command this is no
 * longer ambiguous.
 *
 * Example:
 * \code
 * $ fsvs copy SourceFile DestFile
 * # Whoops, was wrong!
 * $ fsvs uncopy DestFile
 * \endcode
 * */

/* Or should for dirlist just the raw data be shown - common_files,
 * files_in_new_dir? */

/* for internal use only, not visible
 *
 * \section cp_mv_data Storing and reading the needed data
 *
 * For copy/move detection we need fast access to other files with the
 * same or similar data.
 * - To find identical files we just take a GDBM database, indexed by the
 *   MD5, and having a (\c \\0 separated) list of filenames.
 *   Why GDBM?
 *   - For partial commits we need to remove/use parts of the data; a
 *     textfile would have to be completely re-written.
 * - We need the list of copies/moves identified by the user.
 *   - This should be quickly editable
 *   - No big blobs to stay after a commit (empty db structure?)
 *   - Using a file/symlink in the WAA for the new entry seems bad. We'd
 *     have to try to find such a file/symlink for each committed entry.
 *     - But might be fast by just doing readlink()? = a single syscall
 *     - The number of copy-entries is typically small.
* - Easy to remove * - Uses inodes, though * - Easily readable - just points to source * - GDBM? * - easy to update and delete * - might uses some space * - a single entry * - not as easy to read * - Text-file is out, because of random access for partial commits. * - Maybe we should write the manber hash of the first two blocks there, * too? -- No, manber-hashes would be done on all files at once. * * * Facts: * - we have to copy directory structures *after* waa__input_tree(), but * before waa__partial_update(). * - While running waa__input_tree() the source tree might still be in * work * - While running waa__partial_update() the "old" data of the copied * entries might already be destroyed * - But here we don't know the new entries yet! * - We have to do *all* entries there * - As we do not yet know which part of the tree we'll be working with * - We must not save these temporary entries * - Either mark FT_IGNORE * - Or don't load the copies for actions calling waa__output_tree(). * - But what about the property functions? They need access to copied * entries. * - Can do the entries as RF_ADD, as now. * * So a "copy" does copy all entries for the list, too; but that means that * a bit more data has to be written out. * * */ /* As the temporary databases (just indizes for detection) are good only * for a single run, we can easily store the address directly. For the real * copy-from db we have to use the path. * * Temporary storage: * value.dptr=sts; * value.dsize=sizeof(sts); * Persistent: * value.dptr=path; * value.dsize=sts->path_len; * * I don't think we want to keep these files up-to-date ... would mean * constant runtime overhead. */ /** Maximum number of entries that are stored. * The -1 is for overflow detection \c "...". */ #define MAX_DUPL_ENTRIES (HASH__LIST_MAX -1) #if 0 /** Files smaller than this are not automatically bound to some ancestor; * their data is not unique enough. 
*/ #define MIN_FILE_SIZE (32) #endif /** How many entries could be correlated. */ int copydetect_count; /** Structure for candidate retrieval. */ struct cm___candidate_t { /** Candidate entry */ struct estat *sts; /** Bitmask to tell which matching criteria apply. */ int matches_where; /** Count, or for percent display, a number from 0 to 1000. */ int match_count; }; /** Function and type declarations for entry-to-hash-key conversions. * @{ */ typedef datum (cm___to_datum_t)(const struct estat *sts); cm___to_datum_t cm___md5_datum; cm___to_datum_t cm___inode_datum; cm___to_datum_t cm___name_datum; /** @} */ /** Structure for storing simple ways of file matching. * * This is the predeclaration. */ struct cm___match_t; /** Enters the given entry into the database */ typedef int (cm___register_fn)(struct estat *, struct cm___match_t *); /** Queries the database for the given entry. * * Output is (the address of) an array of cm___candidates_t, and the number * of elements. */ typedef int (cm___get_list_fn)(struct estat *, struct cm___match_t *, struct cm___candidate_t **, int *count); /** Format function for verbose output. * Formats the candidate \a cand in match \a match into a buffer, and * returns this buffer. */ typedef char* (cm___format_fn)(struct cm___match_t * match, struct cm___candidate_t *cand); /** Inserts into hash tables. */ cm___register_fn cm___hash_register; /** Queries the database for the given entry. */ cm___get_list_fn cm___hash_list; /** Match directories by their children. */ cm___get_list_fn cm___match_children; /** Outputs percent of match. */ cm___format_fn cm___output_pct; /** -. */ struct cm___match_t { /** Name for this way of matching */ char name[8]; /** Which entry type is allowed? */ mode_t entry_type; /** Whether this can be avoided by an option. */ int is_expensive:1; /** Whether this match is allowed. 
*/ int is_enabled:1; /** Callback function for inserting elements */ cm___register_fn *insert; /** Callback function for querying elements */ cm___get_list_fn *get_list; /** Callback function to format the amount of similarity. */ cm___format_fn *format; /** For simple, GDBM-based database matches */ /** How to get a key from an entry */ cm___to_datum_t *to_key; /** Filename for this database. */ char filename[8]; /** Database handle */ hash_t db; /** Last queried key. * Needed if a single get_list call isn't sufficient (TODO). */ datum key; }; /** Enumeration for (some) matching criteria */ enum cm___match_e { CM___NAME_F=0, CM___NAME_D, CM___DIRLIST, }; /** Array with ways for simple matches. * * We keep file and directory matching separated; a file cannot be the * copyfrom source of a directory, and vice-versa. * * The \e important match types are at the start, as they're directly * accessed, too. */ struct cm___match_t cm___match_array[]= { [CM___NAME_F] = { .name="name", .to_key=cm___name_datum, .insert=cm___hash_register, .get_list=cm___hash_list, .entry_type=S_IFREG, .filename=WAA__FILE_NAME_EXT}, [CM___NAME_D] = { .name="name", .to_key=cm___name_datum, .insert=cm___hash_register, .get_list=cm___hash_list, .entry_type=S_IFDIR, .filename=WAA__DIR_NAME_EXT}, [CM___DIRLIST] = { .name="dirlist", .get_list=cm___match_children, .format=cm___output_pct, .entry_type=S_IFDIR, }, { .name="md5", .to_key=cm___md5_datum, .is_expensive=1, .insert=cm___hash_register, .get_list=cm___hash_list, .entry_type=S_IFREG, .filename=WAA__FILE_MD5s_EXT}, { .name="inode", .to_key=cm___inode_datum, .insert=cm___hash_register, .get_list=cm___hash_list, .entry_type=S_IFDIR, .filename=WAA__FILE_INODE_EXT}, { .name="inode", .to_key=cm___inode_datum, .insert=cm___hash_register, .get_list=cm___hash_list, .entry_type=S_IFREG, .filename=WAA__DIR_INODE_EXT}, }; #define CM___MATCH_NUM (sizeof(cm___match_array)/sizeof(cm___match_array[0])) /** Gets a \a datum from a struct estat::md5. 
*/ datum cm___md5_datum(const struct estat *sts) { datum d; d.dsize=APR_MD5_DIGESTSIZE*2+1; d.dptr=cs__md5tohex_buffered(sts->md5); return d; } /** Gets a \a datum from the name of an entry; the \\0 gets included (for * easier dumping). */ datum cm___name_datum(const struct estat *sts) { datum d; d.dptr=sts->name; /* We have only the full path length stored. */ d.dsize=strlen(d.dptr)+1; return d; } /** Gets a \a datum from the filesystem addressing - device and inode. */ datum cm___inode_datum(const struct estat *sts) { static struct { ino_t ino; dev_t dev; } tmp; datum d; tmp.ino=sts->st.ino; tmp.dev=sts->st.dev; d.dptr=(char*)&tmp; d.dsize=sizeof(tmp); return d; } /** Compare function for cm___candidate_t. */ static int cm___cand_compare(const void *_a, const void *_b) { const struct cm___candidate_t *a=_a; const struct cm___candidate_t *b=_b; return a->sts - b->sts; } /** Compare function for cm___candidate_t. */ static int cm___cand_comp_count(const void *_a, const void *_b) { const struct cm___candidate_t *a=_a; const struct cm___candidate_t *b=_b; return a->match_count - b->match_count; } /** -. */ int cm___hash_register(struct estat *sts, struct cm___match_t *match) { int status; status=hsh__insert_pointer( match->db, (match->to_key)(sts), sts); /* If there is no more space available ... just ignore it. */ if (status == EFBIG) status=0; return status; } static int common; int both(struct estat *a, struct estat *b) { common++; return 0; } /** -. * * The big question is - should this work recursively? Would mean that the * topmost directory would be descended, and the results had to be cached. * */ int cm___match_children(struct estat *sts, struct cm___match_t *match, struct cm___candidate_t **list, int *found) { int status; /* We take a fair bit more, to get *all* (or at least most) possible * matches. 
*/ static struct cm___candidate_t similar_dirs[MAX_DUPL_ENTRIES*4]; struct cm___candidate_t *cur, tmp_cand={0}; size_t simil_dir_count; struct estat **children, *curr; struct estat **others, *other_dir; int other_count, i; datum key; struct cm___match_t *name_match; status=0; DEBUGP("child matching for %s", sts->name); /* No children => cannot be matched */ if (!sts->entry_count) goto ex; simil_dir_count=0; children=sts->by_inode; while (*children) { curr=*children; /* Find entries with the same name. Depending on the type of the entry * we have to look in one of the two hashes. */ if (S_ISDIR(curr->st.mode)) name_match=cm___match_array+CM___NAME_D; else if (S_ISREG(curr->st.mode)) name_match=cm___match_array+CM___NAME_F; else goto next_child; key=(name_match->to_key)(curr); status=hsh__list_get(name_match->db, key, &key, &others, &other_count); /* If there are too many entries with the same name, we ignore this * hint. */ if (status != ENOENT && other_count && other_countparent; cur=lsearch(&tmp_cand, similar_dirs, &simil_dir_count, sizeof(similar_dirs[0]), cm___cand_compare); cur->match_count++; DEBUGP("dir %s has count %d", cur->sts->name, cur->match_count); BUG_ON(simil_dir_count > sizeof(similar_dirs)/sizeof(similar_dirs[0])); } } next_child: children++; } /* Now do the comparisons. */ for(i=0; ientry_count + other_dir->entry_count - common); } /* Now sort, and return a few. */ qsort( similar_dirs, simil_dir_count, sizeof(similar_dirs[0]), cm___cand_comp_count); *found=simil_dir_count > HASH__LIST_MAX ? HASH__LIST_MAX : simil_dir_count; *list=similar_dirs; ex: return status; } /** -. 
* */ int cm___hash_list(struct estat *sts, struct cm___match_t *match, struct cm___candidate_t **output, int *found) { int status; static struct cm___candidate_t arr[MAX_DUPL_ENTRIES]; struct estat **list; int i; match->key=(match->to_key)(sts); status=hsh__list_get(match->db, match->key, &match->key, &list, found); if (status == 0) { for(i=0; i<*found; i++) { /** The other members are touched by upper layers, so we have to * re-initialize them. */ memset(arr+i, 0, sizeof(*arr)); arr[i].sts=list[i]; } *output=arr; } return status; } /** Puts cm___candidate_t::match_count formatted into \a buffer. */ char* cm___output_pct(struct cm___match_t *match, struct cm___candidate_t *cand) { static char buffer[16]; BUG_ON(cand->match_count > 1000 || cand->match_count < 0); /* GCC notes * ‘sprintf’ output between 6 and 16 bytes into a destination of size 8 * which isn't correct (see the BUG_ON above), * but it can be 1+4+1+1+1 == 8 characters. */ sprintf(buffer, "=%d.%1d%%", cand->match_count/10, cand->match_count % 10); return buffer; } /** Inserts the given entry in all allowed matching databases. */ int cm___register_entry(struct estat *sts) { int status; int i; struct cm___match_t *match; status=0; if (!(sts->entry_status & FS_NEW)) { for(i=0; iis_enabled && match->insert && (sts->st.mode & S_IFMT) == match->entry_type ) { STOPIF( (match->insert)(sts, match), NULL); DEBUGP("inserted %s for %s", sts->name, match->name); } } } ex: return status; } /** Shows possible copyfrom sources for the given entry. * */ static int cm___match(struct estat *entry) { int status; char *path, *formatted; int i, count, have_match, j, overflows; struct estat *sts; struct cm___match_t *match; struct cm___candidate_t candidates[HASH__LIST_MAX*CM___MATCH_NUM]; struct cm___candidate_t *cur, *list; size_t candidate_count; FILE *output=stdout; /* #error doesn't work with sizeof() ? * But doesn't matter much, this gets removed by the compiler. 
*/ BUG_ON(sizeof(candidates[0].matches_where) *4 < CM___MATCH_NUM, "Wrong datatype chosen for matches_where."); formatted=NULL; status=0; candidate_count=0; overflows=0; path=NULL; /* Down below status will get the value ENOENT from the hsh__list_get() * lookups; we change it back to 0 shortly before leaving. */ for(i=0; ist.mode & S_IFMT) != match->entry_type) continue; /* \todo Loop if too many for a single call. */ status=match->get_list(entry, match, &list, &count); /* ENOENT = nothing to see */ if (status == ENOENT) continue; STOPIF(status, NULL); if (count > MAX_DUPL_ENTRIES) { /* We show one less than we store, so we have the overflow * information. */ overflows++; count=MAX_DUPL_ENTRIES; } for(j=0; j sizeof(candidates)/sizeof(candidates[0])); cur->matches_where |= 1 << i; /* Copy dirlist value */ if (i == CM___DIRLIST) cur->match_count=list[j].match_count; DEBUGP("got %s for %s => 0x%X", cur->sts->name, match->name, cur->matches_where); } } status=0; if (candidate_count) { copydetect_count++; STOPIF( ops__build_path(&path, entry), NULL); STOPIF( hlp__format_path(entry, path, &formatted), NULL); /* Print header line for this file. */ STOPIF_CODE_EPIPE( fprintf(output, "%s\n", formatted), NULL); /* Output list of matches */ for(j=0; jname, output), NULL); if (opt__is_verbose()>0 && match->format) STOPIF_CODE_EPIPE( fputs( match->format(match, candidates+j), output), NULL); } } STOPIF( ops__build_path(&path, sts), NULL); STOPIF( hlp__format_path(sts, path, &formatted), NULL); STOPIF_CODE_EPIPE( fprintf(output, ":%s\n", formatted), NULL); } if (overflows) STOPIF_CODE_EPIPE( fputs(" ...\n", output), NULL); } else { /* cache might be overwritten again when we're here. 
*/ STOPIF( ops__build_path(&path, entry), NULL); if (opt__is_verbose() > 0) { STOPIF( hlp__format_path(entry, path, &formatted), NULL); STOPIF_CODE_EPIPE( fprintf(output, "- No copyfrom relation found for %s\n", formatted), NULL); } else DEBUGP("No sources found for %s", path); } STOPIF_CODE_EPIPE( fflush(output), NULL); ex: return status; } int cm__find_dir_source(struct estat *dir) { int status; status=0; STOPIF( cm___match( dir ), NULL); ex: return status; } int cm__find_file_source(struct estat *file) { int status; char *path; status=0; STOPIF( ops__build_path(&path, file), NULL); DEBUGP("finding source of %s", file->name); STOPIF( cs__compare_file(file, path, NULL), NULL); /* TODO: EPIPE and similar for output */ STOPIF( cm___match( file ), NULL); ex: return status; } /** After loading known entries try to find some match for every new entry. * */ int cm__find_copied(struct estat *root) { int status; struct estat *sts, **child; status=0; child=root->by_inode; if (!child) goto ex; while (*child) { sts=*child; /* Should we try to associate the directory after all children have been * done? We could simply take a look which parent the children's sources * point to ... * * Otherwise, if there's some easy way to see the source of a directory, * we could maybe save some searching for all children.... */ if (sts->entry_status & FS_NEW) { switch (sts->st.mode & S_IFMT) { case S_IFDIR: STOPIF( cm__find_dir_source(sts), NULL); break; case S_IFLNK: case S_IFREG: STOPIF( cm__find_file_source(sts), NULL); break; default: DEBUGP("Don't handle entry %s", sts->name); } } if (S_ISDIR(sts->st.mode) && (sts->entry_status & (FS_CHILD_CHANGED | FS_CHANGED)) ) STOPIF( cm__find_copied(sts), NULL); child++; } ex: return status; } /** -. * */ int cm__detect(struct estat *root, int argc, char *argv[]) { int status, st2; char **normalized; int i; struct cm___match_t *match; hash_t hash; /* Operate recursively. 
*/ opt_recursive++; /* But do not allow to get current MD5s - we need the data from the * repository. */ opt__set_int(OPT__CHANGECHECK, PRIO_MUSTHAVE, CHCHECK_NONE); STOPIF( waa__find_common_base(argc, argv, &normalized), NULL); /** \todo Do we really need to load the URLs here? They're needed for * associating the entries - but maybe we should do that two-way: * - just read \c intnum , and store it again * - or process to (struct url_t*). * * Well, reading the URLs doesn't cost that much ... */ STOPIF( url__load_list(NULL, 0), NULL); for(i=0; iis_enabled= !match->is_expensive || opt__get_int(OPT__COPYFROM_EXP); if (!match->filename[0]) continue; DEBUGP("open hash for %s as %s", match->name, match->filename); /* Create database file for WC root. */ STOPIF( hsh__new(wc_path, match->filename, HASH_TEMPORARY, & match->db), NULL); } /* We read all entries, and show some progress. */ status=waa__read_or_build_tree(root, argc, normalized, argv, cm___register_entry, 1); if (status == -ENOENT) STOPIF(status, "!No committed working copy found."); STOPIF(status, NULL); copydetect_count=0; STOPIF( cm__find_copied(root), NULL); if (!copydetect_count) STOPIF_CODE_EPIPE( printf("No copyfrom relations found.\n"), NULL); else if (opt__is_verbose() > 0) STOPIF_CODE_EPIPE( printf("%d copyfrom relation%s found.\n", copydetect_count, copydetect_count == 1 ? 
"" : "s"), NULL); ex: for(i=0; i= buflen); *string=buffer; ex: return status; } /** Returns the absolute path * */ int cm___absolute_path(char *path, char **output) { static struct cache_t *cache; int status, len; char *cp; STOPIF( cch__new_cache(&cache, 8), NULL); STOPIF( cch__add(cache, 0, NULL, // wc_path_len + 1 + strlen(path) + 1, &cp), NULL); start_path_len + 1 + strlen(path) + 1, &cp), NULL); DEBUGP("norm from: %s", path); hlp__pathcopy(cp, &len, path, NULL); DEBUGP("norm to: %s", cp); BUG_ON(len > cache->entries[cache->lru]->len); *output=cp; ex: return status; } /** Checks whether a path is below \c wc_path, and returns the relative * part. * * If that isn't possible (because \a path is not below \c wc_path), * \c EINVAL is returned. * The case \c path==wc_path is not allowed, either. */ int cm___not_below_wcpath(char *path, char **out) { if (strncmp(path, wc_path, wc_path_len) != 0 || path[wc_path_len] != PATH_SEPARATOR) return EINVAL; *out=path+wc_path_len+1; return 0; } /** Dump a list of copyfrom relations to the stream. * * TODO: filter by wildcards (?) */ int cm___dump_list(FILE *output, int argc, char *normalized[]) { int status; hash_t db; datum key, value; int have; char *path; svn_revnum_t rev; /* TODO: Use some filter, eg. by pathnames. */ db=NULL; /* Create database file for WC root. */ status=hsh__new(wc_path, WAA__COPYFROM_EXT, GDBM_READER, &db); if (status==ENOENT) { status=0; goto no_copyfrom; } have=0; status=hsh__first(db, &key); while (status == 0) { STOPIF( hsh__fetch(db, key, &value), NULL); /* The . at the end is suppressed; therefore we print it from the * second dataset onwards. 
*/ if (have) status=fputs(".\n", output); STOPIF( cm___string_to_rev_path( value.dptr, &path, &rev), NULL); status |= fprintf(output, "%s\n%s\n", path, key.dptr); IF_FREE(value.dptr); STOPIF_CODE_ERR( status < 0, -EPIPE, "output error"); status=hsh__next(db, &key, &key); have++; } if (!have) { no_copyfrom: fprintf(output, "No copyfrom information was written.\n"); } else if (opt__is_verbose() > 0) fprintf(output, "%d copyfrom relation%s.\n", have, have == 1 ? "" : "s"); ex: if (db) STOPIF( hsh__close(db, status), NULL); return status; } /** Make the copy in the tree started at \a root. * * The destination must not already exist in the tree; it can exist in the * filesystem. * * If \a revision is not \c 0 (which corresponds to \c BASE), the correct * list of entries must be taken from the corresponding repository. * * Uninitialization is done via \c root==NULL. * * If the flag \a paths_are_wc_relative is set, the paths \a cp_src and \a * cp_dest are taken as-is. * Else they're are converted to wc-relative paths by making them absolute * (eventually using \ref start_path as anchor), and cutting the wc-root * off. */ int cm___make_copy(struct estat *root, char *cp_src, svn_revnum_t revision, char *cp_dest, int paths_are_wc_relative) { int status; static const char err[]="!The %s path \"%s\" is not below the wc base."; struct estat *src, *dest; static hash_t db=NULL; char *abs_src, *abs_dest; char *wc_src, *wc_dest; char *buffer, *url; if (!root) { STOPIF( hsh__close(db, 0), NULL); goto ex; } /* That we shuffle the characters back and forth can be excused because * - either we are cmdline triggered, in which case we have the full task * starting overhead, and don't do it here again, and * - if we're doing a list of entries, we have to do it at least here. 
*/ if (paths_are_wc_relative) { wc_dest=cp_dest; wc_src=cp_src; } else { STOPIF( cm___absolute_path(cp_dest, &abs_dest), NULL); STOPIF( cm___absolute_path(cp_src, &abs_src), NULL); STOPIF( cm___not_below_wcpath(abs_dest, &wc_dest), err, "destination", abs_dest); STOPIF( cm___not_below_wcpath(abs_src, &wc_src), err, "source", abs_src); } STOPIF( ops__traverse(root, cp_src, 0, 0, &src), NULL); /* TODO: Make copying copied entries possible. * But as we only know the URL, we'd either have to do a checkout, or try * to parse back to the original entry. */ STOPIF_CODE_ERR( src->flags & RF___IS_COPY, EINVAL, "!Copied entries must be committed before using them as copyfrom source."); /* The directories above must be added; the entries below get RF_COPY_SUB * set (by waa__copy_entries), and this entry gets overridden to * RF_COPY_BASE below. */ STOPIF( ops__traverse(root, cp_dest, OPS__CREATE, RF_ADD, &dest), NULL); STOPIF_CODE_ERR( !(dest->flags & RF_ISNEW), EINVAL, "!The destination is already known - must be a new entry."); if (!db) STOPIF( hsh__new(wc_path, WAA__COPYFROM_EXT, GDBM_WRCREAT, &db), NULL); if (revision) { BUG_ON(1, "fetch list of entries from the repository"); } else { STOPIF( waa__copy_entries(src, dest), NULL); revision=src->repos_rev; } /* Mark as base entry for copy; the RF_ADD flag was removed by * copy_entries above, but the entry really is *new*. */ dest->flags |= RF_COPY_BASE; dest->flags &= ~RF_COPY_SUB; STOPIF( url__full_url( src, &url), NULL); STOPIF( cm___rev_path_to_string(url, revision, &buffer), NULL); STOPIF( hsh__store_charp(db, wc_dest, buffer), NULL); ex: return status; } /** Sets all entries that are just implicitly copied to ignored. * Explicitly added entries (because of \ref add, or \ref prop-set) are * kept. * * Returns a \c 0 or \c 1, with \c 1 saying that \b all entries below are * ignored, and so whether \a cur can (perhaps) be completely ignored, too. 
* */ int cm___ignore_impl_copied(struct estat *cur) { struct estat **sts; int all_ign; all_ign=1; cur->flags &= ~RF_COPY_SUB; if (cur->flags & (RF_ADD | RF_PUSHPROPS)) all_ign=0; if (ops__has_children(cur)) { sts=cur->by_inode; while (*sts) { all_ign &= cm___ignore_impl_copied(*sts); sts++; } } if (all_ign) cur->to_be_ignored=1; else /* We need that because of its children, and we have to check. */ cur->flags |= RF_ADD | RF_CHECK; DEBUGP("%s: all_ignore=%d", cur->name, all_ign); return all_ign; } /** -. * */ int cm__uncopy(struct estat *root, int argc, char *argv[]) { int status; char **normalized; struct estat *dest; /* Do only the selected elements. */ opt_recursive=-1; if (!argc) ac__Usage_this(); STOPIF( waa__find_common_base(argc, argv, &normalized), NULL); STOPIF( url__load_nonempty_list(NULL, 0), NULL); /* Load the current data, without updating */ status=waa__input_tree(root, NULL, NULL); if (status == ENOENT) STOPIF( EINVAL, "!No working copy could be found."); else STOPIF( status, NULL); while (*normalized) { DEBUGP("uncopy %s %s", *normalized, normalized[1]); STOPIF( ops__traverse(root, *normalized, OPS__FAIL_NOT_LIST, 0, &dest), "!The entry \"%s\" is not known.", *normalized); STOPIF_CODE_ERR( !(dest->flags & RF_COPY_BASE), EINVAL, "!The entry \"%s\" is not a copy base.", *normalized); /* Directly copied, unchanged entry. * Make it unknown - remove copy relation (ie. mark hash value for * deletion), and remove entry from local list. */ STOPIF( cm__get_source(dest, NULL, NULL, NULL, 1), NULL); dest->flags &= ~RF_COPY_BASE; /* That removes all not explicitly added entries from this subtree. */ cm___ignore_impl_copied(dest); normalized++; } STOPIF( waa__output_tree(root), NULL); /* Purge. */ STOPIF( cm__get_source(NULL, NULL, NULL, NULL, 0), NULL); ex: return status; } /** -. 
* */ int cm__work(struct estat *root, int argc, char *argv[]) { int status; char **normalized; int count; FILE *input=stdin; char *src, *dest, *cp; int is_dump, is_load; svn_revnum_t revision; status=0; is_load=is_dump=0; /* We have to do the parameter checking in two halves, because we must not * use "dump" or "load" as working copy path. So we first check what to do, * eventually remove these strings from the parameters, and then look for * the wc base. */ /* If there's \b no parameter given, we default to dump. */ if (argc==0) is_dump=1; else if (strcmp(argv[0], parm_dump) == 0) { is_dump=1; argv++; argc--; } else if (strcmp(argv[0], parm_load) == 0) { is_load=1; argv++; argc--; } STOPIF( waa__find_common_base(argc, argv, &normalized), NULL); if (is_dump) { STOPIF( cm___dump_list(stdout, argc, normalized), NULL); /* To avoid the indentation */ goto ex; } switch (opt_target_revisions_given) { case 0: /* Default is \c BASE. */ revision=0; break; case 1: revision=opt_target_revision; default: STOPIF( EINVAL, "!Only a single revision number may be given."); } STOPIF( url__load_nonempty_list(NULL, 0), NULL); /* Load the current data, without updating; so st.mode equals * st.local_mode_packed and so on. */ status=waa__input_tree(root, NULL, NULL); if (status == -ENOENT) STOPIF(status, "!No entries are currently known, " "so you can't define copy or move relations yet.\n"); STOPIF(status, NULL); hlp__string_from_filep(NULL, NULL, NULL, SFF_RESET_LINENUM); if (is_load) { /* Load copyfrom data. 
*/ count=0; while (1) { status=hlp__string_from_filep(input, &cp, NULL, 0); if (status == EOF) { status=0; break; } STOPIF( status, "Failed to read copyfrom source"); STOPIF_CODE_ERR( !*cp, EINVAL, "!Copyfrom source must not be empty."); STOPIF( hlp__strdup( &src, cp), NULL); status=hlp__string_from_filep(input, &cp, NULL, 0); STOPIF_CODE_ERR( status == EOF, EINVAL, "!Expected a target specification, got EOF!"); STOPIF( status, "Failed to read copyfrom destination"); STOPIF( hlp__strdup( &dest, cp), NULL); /* Get the empty line */ status=hlp__string_from_filep(input, &cp, NULL, SFF_WHITESPACE); if (status == EOF) DEBUGP("delimiter line missing - EOF"); else if (status == 0 && cp[0] == '.' && cp[1] == 0) DEBUGP("delimiter line ok"); else { STOPIF(status, "Cannot read delimiter line"); /* status == 0 ? not empty. */ STOPIF(EINVAL, "Expected delimiter line - got %s", cp); } DEBUGP("read %s => %s", src, dest); /* These paths were given relative to the cwd, which is changed now, as * we're in the wc base. Calculate correct names. */ STOPIF( cm___make_copy(root, src, revision, dest, 0), NULL); count++; free(dest); free(src); } if (opt__is_verbose() >= 0) printf("%d copyfrom relation%s loaded.\n", count, count==1 ? "" : "s"); } else { STOPIF_CODE_ERR(argc != 2, EINVAL, "!At least source and destination, " "or \"dump\" resp. \"load\" must be given."); /* Create database file for WC root. */ STOPIF( cm___make_copy(root, normalized[0], revision, normalized[1], 1), "Storing \"%s\" as source of \"%s\" failed.", normalized[0], normalized[1]); } STOPIF( cm___make_copy(NULL, NULL, 0, NULL, 0), NULL); STOPIF( waa__output_tree(root), NULL); ex: return status; } /** Get the source of an entry with \c RF_COPY_BASE set. * See cm__get_source() for details. 
* */ int cm___get_base_source(struct estat *sts, char *name, char **src_url, svn_revnum_t *src_rev, int alloc_extra, int register_for_cleanup) { int status; datum key, value; static hash_t hash; static int init=0; char *url; value.dptr=NULL; status=0; if (src_url) *src_url=NULL; if (src_rev) *src_rev=SVN_INVALID_REVNUM; if (!sts) { /* uninit */ STOPIF( hsh__close(hash, register_for_cleanup), NULL); hash=NULL; init=0; goto ex; } if (!init) { /* We cannot simply use !hash as condition; if there is no database with * copyfrom information, we'd try to open it *every* time we're asked for * a source, which is clearly not optimal for performance. * So we use an static integer. */ init=1; /* In case there's a cleanup at the end we have to open read/write. */ status=hsh__new(wc_path, WAA__COPYFROM_EXT, GDBM_WRITER | HASH_REMEMBER_FILENAME, &hash); /* If we got an ENOENT here, hash==NULL; so we'll re-set the * *parameters below and return. */ if (status != ENOENT) STOPIF( status, NULL); } /* Normal proceedings, this is a direct target of a copy definition. */ if (!name) STOPIF( ops__build_path( &name, sts), NULL); if (name[0]=='.' && name[1]==PATH_SEPARATOR) name+=2; key.dptr=name; key.dsize=strlen(name)+1; status=hsh__fetch(hash, key, &value); if (status) { DEBUGP("no source for %s found", name); goto ex; } if (register_for_cleanup) STOPIF( hsh__register_delete(hash, key), NULL); /* Extract the revision number. */ STOPIF( cm___string_to_rev_path( value.dptr, &url, src_rev), NULL); if (src_url) { BUG_ON(!url); status=strlen(url); /* In case the caller wants to do something with this buffer, we return * more. We need at least the additional \0; and we give a few byte * extra, gratis, free for nothing (and that's cutting my own throat)! * */ STOPIF( hlp__strnalloc( status + 1 +alloc_extra + 4, src_url, url), NULL); status=0; } ex: IF_FREE(value.dptr); return status; } /** Recursively creating the URL. 
* As most of the parameters are constant, we could store them statically * ... don't know whether it would make much difference, this function * doesn't get called very often. * \a length_to_add is increased while going up the tree; \a eobuffer gets * handed back down. */ int cm___get_sub_source_rek(struct estat *cur, int length_to_add, char **dest_buffer, svn_revnum_t *src_rev, char **eobuffer) { int status; struct estat *copied; int len; /* Get source of parent. * Just because this entry should be removed from the copyfrom database * that isn't automatically true for the corresponding parent. */ copied=cur->parent; BUG_ON(!copied, "Copy-sub but no base?"); len=strlen(cur->name); length_to_add+=len+1; if (copied->flags & RF_COPY_BASE) { /* Silent error return. */ status=cm___get_base_source(copied, NULL, dest_buffer, src_rev, length_to_add, 0); if (status) goto ex; *eobuffer=*dest_buffer+strlen(*dest_buffer); DEBUGP("after base eob-5=%s", *eobuffer-5); } else { /* Maybe better do (sts->path_len - copied->path_len))? * Would be faster. */ status=cm___get_sub_source_rek(copied, length_to_add, dest_buffer, src_rev, eobuffer); if (status) goto ex; } /* Now we have the parent's URL ... put cur->name after it. */ /* Not PATH_SEPARATOR, it's an URL and not a pathname. */ **eobuffer = '/'; strcpy( *eobuffer +1, cur->name ); *eobuffer += len+1; DEBUGP("sub source of %s is %s", cur->name, *dest_buffer); ex: return status; } /** Get the source of an entry with \c RF_COPY_SUB set. * See cm__get_source() for details. * * This function needs no cleanup. 
* */ int cm___get_sub_source(struct estat *sts, char *name, char **src_url, svn_revnum_t *src_rev) { int status; char *eob; /* As we only store the URL in the hash database, we have to proceed as * follows: * - Look which parent is the copy source, * - Get its URL * - Append the path after that to the URL of the copy source: * root / dir1 / dir2 / / dir3 / * and * root / dir1 / dir3 / * Disadvantage: * - we have to traverse the entries, and make the estat::by_name * arrays for all intermediate nodes. * * We do that as a recursive sub-function, to make bookkeeping easier. */ STOPIF( cm___get_sub_source_rek(sts, 0, src_url, src_rev, &eob), NULL); ex: return status; } /** -. * * Wrapper around cm___get_base_source() and cm___get_sub_source(). * * If \c *src_url is needed, it is allocated and must be \c free()ed after * use. * * If \a name is not given, it has to be calculated. * * Both \a src_name and \a src_rev are optional. * These are always set; if no source is defined, they're set to \c NULL, * \c NULL and \c SVN_INVALID_REVNUM. * * Uninitializing should be done via calling with \c sts==NULL; in this * case the \a register_for_cleanup value is used as success flag. * * If no source could be found, \c ENOENT is returned. */ int cm__get_source(struct estat *sts, char *name, char **src_url, svn_revnum_t *src_rev, int register_for_cleanup) { int status; if (!sts) { status=cm___get_base_source(NULL, NULL, NULL, NULL, 0, 0); goto ex; } if (sts->flags & RF_COPY_BASE) { status= cm___get_base_source(sts, name, src_url, src_rev, 0, register_for_cleanup); } else if (sts->flags & RF_COPY_SUB) { status= cm___get_sub_source(sts, name, src_url, src_rev); } else { status=ENOENT; goto ex; } if (src_url) DEBUGP("source of %s is %s", sts->name, *src_url); if (status) { /* That's a bug ... the bit is set, but no source was found? * Could some stale entry cause that? Don't error out now; perhaps at a * later time. 
*/ DEBUGP("bit set, no source!"); /* BUG_ON(1,"bit set, no source!"); */ goto ex; } ex: return status; } fsvs-fsvs-1.2.12/src/cp_mv.h000066400000000000000000000015351453631713700156170ustar00rootroot00000000000000/************************************************************************ * Copyright (C) 2007-2008 Philipp Marek. * * This program is free software; you can redistribute it and/or modify * it under the terms of the GNU General Public License version 3 as * published by the Free Software Foundation. ************************************************************************/ #ifndef __CP_MV_H #define __CP_MV_H #include "actions.h" /** \file * \ref cp and \ref mv actions header file. * */ /** For defining copyfrom relations. */ work_t cm__work; /** For automatically finding relations. */ work_t cm__detect; /** For removing copyfrom relations. */ work_t cm__uncopy; /** Returns the source of the given entry. */ int cm__get_source(struct estat *sts, char *name, char **src_name, svn_revnum_t *src_rev, int register_for_cleanup); #endif fsvs-fsvs-1.2.12/src/dev/000077500000000000000000000000001453631713700151145ustar00rootroot00000000000000fsvs-fsvs-1.2.12/src/dev/FAQ000066400000000000000000000014161453631713700154500ustar00rootroot00000000000000Why do the functions have a "-." in their comment block? - Ask doxygen. The first part of documentation is in the .h file, so doxygen throws away the "brief" part in the corresponding .c file. We need to have an empty sentence. How to run the tests? - Simple case: "make run-tests". - Running some tests: "make run-tests TEST_LIST=001*" - Running with valgrind: "make run-tests CHECKER=valgrind". It's strongly recommended to use TEST_LIST and test only single calls. What about gcov? - Configure with "--enable-debug --enable-gcov"; compile with make. Start a clean environment with "make gcov-clean". Start one or more tests. Look at the summary with "make gcov"; the details are in .gcov, and per-file summaries in .gcov.smry. 
Example: "fsvs.c.gcov.smry". fsvs-fsvs-1.2.12/src/dev/check-option-docs.pl000077500000000000000000000011721453631713700207660ustar00rootroot00000000000000#!/usr/bin/perl # # Checks whether all defined options have some documentation. %opt=(); open(O, "< " . shift()) || die "can't read options.c: $!"; while (<O>) { chomp; if ( /^struct \s+ opt__list_t \s+ opt__list/x .. /^\S;/ ) { # print("found option $1\n"), $opt{$1}++ if /\.name \s* = \s* " ([^"]+) "/x; } } while (<>) { chomp; if (/\\\c (.+) - /) { map { # print("documented: $_\n"); delete $opt{$_}; } split(/, \\c /, $1); $opt{$1}++ if /\\ref\s+(\w+)/; } delete $opt{$2} if / \\(subsection|anchor) \s+ (\w+) /x; } exit if !keys %opt; die "Doc missing for ". join(", ", sort keys %opt) . "\n"; fsvs-fsvs-1.2.12/src/dev/check-version-output.pl000077500000000000000000000011261453631713700215520ustar00rootroot00000000000000#!/usr/bin/perl $config = shift || die "which config.h?"; $output = shift || die "which .c?"; %ignore=map { ($_,1); } qw(__CONFIG_H__ FASTCALL MKDEV MAJOR MINOR); %syms=(); open(F,"<", $config) || die "open $config: $!"; while (<F>) { $syms{$1}++ if /^\s*#(?:define|undef)\s+(\w+)/ && !$ignore{$1}; } open(F,"<", $output) || die "open $output: $!"; undef $/; $file=<F>; close F; ($_) = ($file =~ /\s Version \s* \( [^)]* \) \s* \n \{ ([\x00-\xff]*) \n \} /xm); die "No Version() found." unless $_; study($_); for $sym (keys %syms) { warn("Not seen: $sym\n") unless m#\b$sym\b#; } fsvs-fsvs-1.2.12/src/dev/dox2txt.pl000077500000000000000000000013221453631713700170660ustar00rootroot00000000000000#!/usr/bin/perl use open qw(:std :utf8); use IPC::Open2; $input=shift; $output=shift; $pid = open2(my $r, my $w, qw"lynx -dump -nolist -nonumbers -stdin") or die $!; binmode($r, "encoding(UTF-8)"); binmode($w, "encoding(UTF-8)"); open(STDIN, "<", $input) or die $!; binmode(STDIN, "encoding(UTF-8)"); while (<STDIN>) { # There's no space around anymore.
s,(\S)(\),$1 $2,g; print $w $_; } close $w; #open(STDOUT, "> $output") || die $!; # Cut until first
header while (<$r>) { # I'd thought lynx had an option to not print these? # yes ... -nonumbers. s#\[\d+\]##; next if m#^\[#; s/\xe2/--/g; # $p=m#^SYNOPSIS# .. m#^\s*-{30,}#; $p=m#^\w# .. m#^\s*_{30,}#; print if ($p =~ m#^\d+$#); } fsvs-fsvs-1.2.12/src/dev/gcov-summary.pl000066400000000000000000000062311453631713700201070ustar00rootroot00000000000000#!/usr/bin/perl # read whole files undef $/; $exe_lines=$sum_lines=0; %runs=(); while (<>) { ($c_file=$ARGV) =~ s#\.gcov\.smry$##; # File 'warnings.c' # Lines executed:85.71% of 28 # warnings.c:creating 'warnings.c.gcov' ($pct, $lines) = (m#File '$c_file'\s+Lines executed:([\d\.]+)% of (\d+)#); if (!$lines) { warn "Cannot parse (or no lines executed) for $ARGV.\n"; next; } open(SRC, "< " . $c_file) || die $!; @funcs_to_ignore = map { m#\s*/// FSVS GCOV MARK: (\w+)# ? $1 : (); } split(/\n/, <SRC>); close(SRC); $ignored=0; for $func (@funcs_to_ignore) { ($fexec, $flines) = m#Function '$func'\s+Lines executed:([\d\.]+)\% of (\d+)#; if (!defined($flines)) { warn "Function $func should be ignored, but was not found!\n"; } elsif ($fexec>0) { warn "Function $func should be ignored, but was run!\n"; } else { $ignored += $flines; } } # #####: 77: STOPIF( st__status(sts, path), NULL); # TODO: Count the whole block; eg. DEBUG normally has more than a single # line. open(GCOV, "< $c_file.gcov"); { local($/)="\n"; $last_line=$cur=0; # find untested lines, and count them $this_run=0; while (<GCOV>) { $cur++; if (/^\s*(#####|-):\s+\d+:\s+(STOPIF|BUG|BUG_ON|DEBUGP)?/) { $stopif_lines++ if $2; if ($last_line == $cur -1) { $old=delete $runs{$c_file . "\0" . $last_line}; # A line without executable code (mark '-') is taken as continuation, but # doesn't add to unexecuted lines. $runs{$c_file . "\0" . $cur} = [ $old->[0] + ($1 eq "#####" ?
1 : 0), $old->[1] || $cur ]; } $last_line=$cur; } } } $covered=int($lines*$pct/100.0+0.5); $lines -= $ignored; $pct=$covered/$lines*100.0; $cover{sprintf("%9.5f-%s",$pct,$ARGV)} = [$lines, $pct, $ARGV, $covered, $ignored]; $sum_lines+=$lines; $exe_lines+=$covered; } die "No useful information found!!\n" if !$sum_lines; $delim="---------+--------+--------+--------------------------------------------------\n"; print "\n\n", $delim; for (reverse sort keys %cover) { ($lines, $pct, $name, $covered, $ignored)=@{$cover{$_}}; $ntest=$lines-$covered; $name =~ s#\.gcov\.smry$##i; write; } format STDOUT_TOP= Percent | exec'd | #lines | #!test | #ignrd | Filename ---------+--------+--------+--------+--------+---------------------------- . format STDOUT= @##.##% | @##### | @##### | @##### | @##### | @<<<<<<<<<<<<<<<<<<<<<<<<<< $pct, $covered, $lines, $ntest, $ignored, $name . print $delim; $pct=100.0*$exe_lines/$sum_lines; $covered=$exe_lines; $lines=$sum_lines; $name="Total"; write; print $delim; printf " %6.2f%% coverage when counting %d error handling lines as executed\n", 100.0*($exe_lines+$stopif_lines)/$sum_lines, $stopif_lines; print "-" x (length($delim)-1), "\n\n"; # Print runs @runs_by_length=(); map { $runs_by_length[$runs{$_}[0]]{$_}=$runs{$_}; } keys %runs; $max=10; print "Longest runs:\n"; while ($max>0 && @runs_by_length) { $this_length=$#runs_by_length; printf " %3d# ",$this_length; $length_arr=delete $runs_by_length[$this_length]; for (sort keys %$length_arr) { ($file, $last)=split(/\0/); print " ",$file,":",$length_arr->{$_}[1]; $max--; } print "\n"; } print "\n\n"; fsvs-fsvs-1.2.12/src/dev/make_doc.pl000077500000000000000000000012441453631713700172170ustar00rootroot00000000000000#!/usr/bin/perl print "/* This file is generated, do not edit!\n", " * Last done on ", scalar(gmtime(time())),"\n", " * */\n", "\n\n"; while (<>) { chomp; next if /(_{30,})/; next if /^\s*$/ && !@text; $sect=$1 if /^_?([\w\-]{1,5}[a-zA-Z0-9])/; # print STDERR "sect=$sect 
old=$old_sect\n"; if ($sect ne $old_sect) { print "const char hlp_${old_sect}[]=\"" . join("\"\n \"", @text),"\";\n\n" if ($old_sect && $old_sect =~ /^[a-z]/); @text=(); $sect =~ s#-#_#g; $old_sect=$sect; } else { # make \ safe s#\\#\\\\#g; # make " safe s#"#\\"#g; # remove space at beginning # s#^ ##; push(@text,$_ . "\\n"); } } print "\n\n// vi: filetype=c\n"; fsvs-fsvs-1.2.12/src/dev/make_fsvs_release.pl000077500000000000000000000025211453631713700211320ustar00rootroot00000000000000#!/usr/bin/perl $version=shift() || die "Welche Version??\n"; $version =~ m#^(\d+\.)+\d+$# || die "Version ungültig!!\n"; $tagdir="fsvs-$version"; system("git tag '$tagdir'"); warn "Fehler $? beim Taggen!" if $?; #print "Getaggt!! Warte auf Bestätigung.\n"; $_=<STDIN>; srand(); $tempdir="/tmp/" . $$ . ".tmp.".rand(); mkdir ($tempdir) || die "mkdir($tempdir): $!"; sub C { system("rm -rf '$tempdir'"); }; $SIG{"__DIE__"}=sub { print @_; C(); exit($! || 1); }; system("git archive --prefix '$tagdir/' | tar -xf -C '$tempdir'"); die "Fehler $?" if $?; chdir($tempdir); system("cd $tagdir && autoconf"); if ($?) { #die "Fehler $?" if $?; print "Fehler $?!!\n"; system("/bin/bash"); } # open(CH, "< $tagdir/CHANGES") || die $!; # open(CHHTML,"> CHANGES.html") || die $!; # while(<CH>) # { # chomp; # last if /^\s*$/; # # print(CHHTML "$_\n
    \n"), next if (/^\w/); # s#^- #
  • #; # print CHHTML $_, "\n"; # } # print CHHTML "
\n"; # close CH; close CHHTML; chdir($tempdir); system("tar -cvf $tagdir.tar $tagdir"); die "Fehler $?" if $?; system("bzip2 -v9k $tagdir.tar"); die "Fehler $?" if $?; system("gzip -v9 $tagdir.tar"); die "Fehler $?" if $?; system("md5sum *.tar.* > MD5SUM"); die "Fehler $?" if $?; system("sha256sum *.tar.* > SHA256SUM"); die "Fehler $?" if $?; print "ok\n\n cd $tempdir\n\n"; #C(); exit(0); fsvs-fsvs-1.2.12/src/dev/permutate-all-tests000077500000000000000000000204171453631713700207620ustar00rootroot00000000000000#!/usr/bin/perl # vim: sw=2 ts=2 expandtab # # Runs the tests in various configurations # To be started from the src/ directory, to have matching paths # # If there's an environment variable MAKEFLAGS set, and it includes a # -j parameter, the tests are run in parallel. # # ########################################################################## # Copyright (C) 2005-2008 Philipp Marek. # # This program is free software; you can redistribute it and/or modify # it under the terms of the GNU General Public License version 3 as # published by the Free Software Foundation. ########################################################################## use Encode qw(from_to); use Fcntl qw(FD_CLOEXEC F_SETFD F_GETFD); # ############################################################################# # Detection and preparation ############################################################################# { @locales=`locale -a`; # look for UTF8 ($utf8_locale)=grep(/\.utf-?8/i,@locales); chomp $utf8_locale; # look for non-utf8 ($loc_locale)=grep(!/(POSIX|C|utf-?8$)/i, @locales); chomp $loc_locale; ($cur_locale)=map { /LC_CTYPE="(.*)"/ ? ($1) : (); } `locale`; @test_locales=($utf8_locale, $loc_locale); ($cur_locale_norm = $cur_locale) =~ s#utf-8#utf8#i; push @test_locales, $cur_locale unless grep(lc($cur_locale_norm) eq lc($_), @test_locales); # Test the locales. 
($utf8, $loc)=`make -C ../tests locale_strings BINARY=/bin/true`; # print $utf8,$loc; $target_enc="ISO-8859-1"; from_to($utf8, "utf-8", $target_enc, Encode::FB_CROAK); from_to($loc, $target_enc, "utf-8", Encode::FB_CROAK); # print $utf8,$loc; exit; # Use special directories, so that normal system operation is not harmed. $PTESTBASE="/tmp/fsvs-tests-permutated"; mkdir($PTESTBASE, 0777) || die $! if !-d $PTESTBASE; open(CSV, "> /tmp/fsvs-tests.csv") || die $!; select((select(CSV), $|=1)[0]); print CSV qq("Nr","Prot","LANG","priv","config","Result"\n); # To get some meaningful test name outputted $ENV{"CURRENT_TEST"}="ext-tests"; $start=time(); $ENV{"MAKEFLAGS"} =~ /-j\s*(\d*)\b/; $parallel=$ENV{"PARALLEL"} || ($1+0) || 1; MSG("INFO", "Parallel found as $parallel") if $parallel; # Used for status output $fail=0; # Used for counting $sum=0; # For parallel execution $running=0; # Wait for children $SIG{"CHLD"}="DEFAULT"; %results=(); %pid_to_result=(); MSG("INFO", StartText($start)); # We don't want no cache. $| =1; } ############################################################################# # Run the tests ############################################################################# { # My default is debug - so do that last, to have a # correctly configured environment :-) # Furthermore the "unusual" configurations are done first. # for $release ("--enable-debug") # for $release ("--enable-release") for $release ("--with-waa_md5=8", "", "--enable-release", "--enable-debug") { # make sure that the binary gets recompiled $conf_cmd="( cd .. && ./configure $release ) && ". "touch config.h && make -j$parallel"; system("( $conf_cmd ) > /tmp/fsvs-conf.txt 2>&1") && die "configure problem: $?"; # Start the slow, uncommon tasks first. 
    for $prot ("svn+ssh", "file://")
    {
      for $user ("sudo", "")
      {
        for $lang (@test_locales)
        {
          $sum++;
          # We have to make the conf and waa directory depend on the
          # user, so that root and normal user don't share the same base -
          # the user would get some EPERM.
          # Furthermore parallel tests shouldn't collide.
          $PTESTBASE2="$PTESTBASE/u.$user" . ($parallel ? ".$sum" : "");

          # Start the test asynchronously, and wait if the limit is reached.
          $pid=StartTest();
          $running++;

          {
            my($tmp);
            $tmp="?";
            $results{$lang}{$user}{$prot}{$release}=\$tmp;
            $pid_to_result{$pid}=\$tmp;
          }

          WaitForChilds($parallel);
        }
      }
    }

    # As we reconfigure on the next run, we have to wait for *all* pending
    # children.
    WaitForChilds(1);
  }
}

#############################################################################
# Summary
#############################################################################
{
  $end=time();
  MSG("INFO", EndText($start, $end));
  if ($fail)
  {
    MSG("ERROR","$fail of $sum tests failed.");
  }
  else
  {
    MSG("SUCCESS", "All $sum tests passed.");
  }
  close CSV;
}

system qq(make gcov);
exit;

#############################################################################
# Functions
#############################################################################
sub MSG
{
  my($type, @text)=@_;
  # We use the same shell functions, to get a nice consistent output.
  Bash(". ../tests/test_functions\n\$$type '" . join(" ",@text) . "'");
}

# Gets all parameters from global variables.
sub StartTest
{
  $pid=fork();
  die $! unless defined($pid);
  return $pid if ($pid);
  # $x=(0.5 < rand())+0; print "$$: exit with $x\n"; exit($x);

  # this is the child ...
  pipe(FAILREAD, FAILWRITE) || die "pipe: $!";
  # sudo closes the filehandles above 2, and I found no way to get it to
  # keep them open.
  # So we have to give a path name to the children.
  $tl=$ENV{"TEST_LIST"};
  $parms="LANG=$lang" .
    " LC_MESSAGES=C" .
    " 'TESTBASEx=$PTESTBASE2'" .
    " 'PROTOCOL=$prot'" .
    " RANDOM_ORDER=1" .
    ($tl ? " 'TEST_LIST=$tl'" : "") .
" TEST_FAIL_WRITE_HDL=/proc/$$/fd/".fileno(FAILWRITE) . # And it can have our STDERR. " TEST_TTY_HDL=/proc/$$/fd/2"; # To avoid getting N*N running tasks for a "-j N", we explicitly say 1. # Parallel execution within the tests is not done yet, but better safe # than sorry. $cmd="$user make run-tests -j1 $parms"; $start=time(); # Output on STDOUT is short; the logfile says it all. print "#$sum ", StartText($start); open(LOG, "> /tmp/fsvs-test-$sum.log"); select((select(LOG), $|=1)[0]); print LOG "Testing #$sum: (configure=$release) $parms\n", StartText($start), "\n$conf_cmd &&\n\t$cmd\n\n"; # The sources are already configured; just the tests have to be run. $pid=fork(); die $! unless defined($pid); if (!$pid) { close FAILREAD; $ENV{"MAKEFLAGS"}=""; open(STDIN, "< /dev/null") || die $!; open(STDOUT, ">&LOG") || die $!; open(STDERR, ">&LOG") || die $!; system("make -C ../tests diag BINARY=true LC_ALL=$lang"); $x=fcntl(FAILWRITE, F_GETFD, 0); fcntl(FAILWRITE, F_SETFD, $x & ~FD_CLOEXEC); # sudo removes some environment variables, so set all options via make. exec $cmd; die; } # Give the child some time to take the write side. # If we ever get more than 4/64 kB of failed tests this will hang. die $! if waitpid($pid, 0) == -1; $error=$?; # We have to close the write side of the pipe, so that on reading we'll # see an EOF. close FAILWRITE; @failed=map { chomp; $_; } ; close FAILREAD; $end=time(); $t=EndText($start, $end); if ($error) { $status="FAIL"; open(F, "< /proc/loadavg") && print(LOG "LoadAvg: ", ) && close(F); MSG("WARN", "#$sum failed; $t"); } else { $status="OK"; MSG("INFO", "#$sum done; $t"); system("sudo rm -rf $PTESTBASE2"); } print LOG "\n", "$t\n", "$status $error: $user $parms\n", "got failed as (", join(" ", @failed), ")\n", "\n", "$conf_cmd && $cmd\n"; close LOG; $u = $user || "user"; print CSV join(",", $sum, map { "'$_'"; } ($prot, $lang, $u, $release, $status, sort(@failed))), "\n"; close CSV; # We cannot return $error directly ... 
  # only the low 8bit would be taken, and these are the signal the
  # process exited with.
  # A normal error status would be discarded!
  exit($error ? 1 : 0);
}

sub WaitForChilds
{
  my($allowed)=@_;
  my($pid, $ret);

  while ($running >= $allowed)
  {
    $pid=wait();
    $ret=$?;
    die $! if $pid == -1;
    ${$pid_to_result{$pid}}=$ret;
    $fail++ if $ret;
    $running--;
  }
}

# Some of the things done via the shell only work with bash; since
# Debian has moved to dash recently, we make sure to use the correct
# program.
sub Bash
{
  die unless @_ == 1;
  system '/bin/bash', '-c', @_;
}

# The \n don't matter for the shell, and they help for direct output.
sub StartText
{
  my($start)=@_;
  return "Started at (" . localtime($start) . ").\n";
}

sub EndText
{
  my($start, $end)=@_;
  return "Finished after ". ($end - $start) .
    " seconds (" . localtime($end) . ").";
}

fsvs-fsvs-1.2.12/src/diff.c

/************************************************************************
 * Copyright (C) 2006-2009 Philipp Marek.
 *
 * This program is free software; you can redistribute it and/or modify
 * it under the terms of the GNU General Public License version 3 as
 * published by the Free Software Foundation.
 ************************************************************************/

#include
#include
#include
#include
#include
#include
#include

#include "global.h"
#include "revert.h"
#include "helper.h"
#include "interface.h"
#include "url.h"
#include "status.h"
#include "options.h"
#include "est_ops.h"
#include "ignore.h"
#include "waa.h"
#include "racallback.h"
#include "cp_mv.h"
#include "warnings.h"
#include "diff.h"

/** \file
 * The \ref diff command source file.
 *
 * Currently only diffing single files is possible; recursive diffing
 * of trees has to be done.
 *
 * For trees it might be better to fetch all files in a kind of
 * update-scenario; then we'd avoid the many round-trips we'd have with
 * single-file-fetching.
* Although an optimized file-fetching (rsync-like block transfers) would * probably save a lot of bandwidth. * */ /** \addtogroup cmds * * \section diff * * \code * fsvs diff [-v] [-r rev[:rev2]] [-R] PATH [PATH...] * \endcode * * This command gives you diffs between local and repository files. * * With \c -v the meta-data is additionally printed, and changes shown. * * If you don't give the revision arguments, you get a diff of the base * revision in the repository (the last commit) against your current local file. * With one revision, you diff this repository version against your local * file. With both revisions given, the difference between these repository * versions is calculated. * * You'll need the \c diff program, as the files are simply passed as * parameters to it. * * The default is to do non-recursive diffs; so fsvs diff . will * output the changes in all files in the current directory and * below. * * The output for special files is the diff of the internal subversion * storage, which includes the type of the special file, but no newline at * the end of the line (which \c diff complains about). * * For entries marked as copy the diff against the (clean) source entry is * printed. * * Please see also \ref o_diff and \ref o_colordiff. * * \todo Two revisions diff is buggy in that it (currently) always fetches * the full trees from the repository; this is not only a performance * degradation, but you'll see more changed entries than you want (like * changes A to B to A). This will be fixed. * */ int cdiff_pipe=STDOUT_FILENO; pid_t cdiff_pid=0; /** A number that cannot be a valid pointer. */ #define META_DIFF_DELIMITER (0xf44fee31) /** How long may a meta-data diff string be? */ #define META_DIFF_MAXLEN (256) /** Diff the given meta-data. * The given \a format string is used with the va-args to generate two * strings. If they are equal, one is printed (with space at front); else * both are shown (with '-' and '+'). 
* The delimiter between the two argument lists is via \ref * META_DIFF_DELIMITER. (NULL could be in the data, eg. as integer \c 0.) * * It would be faster to simply compare the values given to \c vsnprintf(); * that could even be done here, by using two \c va_list variables and * comparing. But it's not a performance problem. */ int df___print_meta(char *format, ... ) { int status; va_list va; char buf_old[META_DIFF_MAXLEN], buf_new[META_DIFF_MAXLEN]; int l1, l2; status=0; va_start(va, format); l1=vsnprintf(buf_old, META_DIFF_MAXLEN-1, format, va); DEBUGP("meta-diff: %s", buf_old); l2=0; while (va_arg(va, int) != META_DIFF_DELIMITER) { l2++; BUG_ON(l2>5, "Parameter list too long"); } l2=vsnprintf(buf_new, META_DIFF_MAXLEN-1, format, va); DEBUGP("meta-diff: %s", buf_new); STOPIF_CODE_ERR( l1<0 || l2<0 || l1>=META_DIFF_MAXLEN || l2>=META_DIFF_MAXLEN, EINVAL, "Printing meta-data strings format error"); /* Different */ STOPIF_CODE_EPIPE( printf( (l1 != l2 || strcmp(buf_new, buf_old) !=0) ? "-%s\n+%s\n" : " %s\n", buf_old, buf_new), NULL); ex: return status; } /** Get a file from the repository, and initiate a diff. * * Normally rev1 == root->repos_rev; to diff against * the \e base revision of the file. * * If the user specified only a single revision (rev2 == 0), * the local file is diffed against this; else against the * other repository version. * * \a rev2_file is meaningful only if \a rev2 is 0; this file gets removed * after printing the difference! 
* */ int df__do_diff(struct estat *sts, svn_revnum_t rev1, svn_revnum_t rev2, char *rev2_file) { int status; int ch_stat; static pid_t last_child=0; static char *last_tmp_file=NULL; static char *last_tmp_file2=NULL; pid_t tmp_pid; char *path, *disp_dest, *disp_source; int len_d, len_s; char *b1, *b2; struct estat sts_r2; char short_desc[10]; char *new_mtime_string, *other_mtime_string; char *url_to_fetch, *other_url; int is_copy; int fdflags; apr_hash_t *props_r1, *props_r2; status=0; /* Check whether we have an active child; wait for it. */ if (last_child) { /* Keep the race window small. */ tmp_pid=last_child; last_child=0; STOPIF_CODE_ERR( waitpid(tmp_pid, &ch_stat, 0) == -1, errno, "Waiting for child gave an error"); DEBUGP("child %d exitcode %d - status 0x%04X", tmp_pid, WEXITSTATUS(ch_stat), ch_stat); STOPIF_CODE_ERR( !WIFEXITED(ch_stat), EIO, "!Child %d terminated abnormally", tmp_pid); if (WEXITSTATUS(ch_stat) == 1) DEBUGP("exit code 1 - file has changed."); else { STOPIF( wa__warn(WRN__DIFF_EXIT_STATUS, EIO, "Child %d gave an exit status %d", tmp_pid, WEXITSTATUS(ch_stat)), NULL); } } /* \a last_tmp_file should only be set when last_child is set; * but who knows. * * This cleanup must be done \b after waiting for the child - else we * might delete the file before it was opened! * */ if (last_tmp_file) { STOPIF_CODE_ERR( unlink(last_tmp_file) == -1, errno, "Cannot remove temporary file %s", last_tmp_file); last_tmp_file=NULL; } if (last_tmp_file2) { STOPIF_CODE_ERR( unlink(last_tmp_file2) == -1, errno, "Cannot remove temporary file %s", last_tmp_file2); last_tmp_file2=NULL; } /* Just uninit? */ if (!sts) goto ex; STOPIF( ops__build_path( &path, sts), NULL); url_to_fetch=NULL; /* If this entry is freshly copied, get it's source URL. */ is_copy=sts->flags & RF___IS_COPY; if (is_copy) { /* Should we warn if any revisions are given? Can we allow one? 
*/ STOPIF( cm__get_source(sts, NULL, &url_to_fetch, &rev1, 0), NULL); /* \TODO: That doesn't work for unknown URLs - but that's needed as * soon as we allow "fsvs cp URL path". */ STOPIF( url__find(url_to_fetch, &sts->url), NULL); } else url_to_fetch=path+2; current_url = sts->url; /* We have to fetch a file and do the diff, so open a session. */ STOPIF( url__open_session(NULL, NULL), NULL); /* The function rev__get_file() overwrites the data in \c *sts with * the repository values - mtime, ctime, etc. * We use this as an advantage and remember the current time - so that * we can print both. */ /* \e From is always the "old" - base revision, or first given revision. * \e To is the newer version - 2nd revision, or local file. */ /* TODO: use delta transfers for 2nd file. */ sts_r2=*sts; if (rev2 != 0) { STOPIF( url__full_url(sts, &other_url), NULL); STOPIF( url__canonical_rev(current_url, &rev2), NULL); STOPIF( rev__get_text_to_tmpfile(other_url, rev2, DECODER_UNKNOWN, NULL, &last_tmp_file2, NULL, &sts_r2, &props_r2, current_url->pool), NULL); } else if (rev2_file) { DEBUGP("diff against %s", rev2_file); /* Let it get removed. */ last_tmp_file2=rev2_file; } /* Now fetch the \e old version. */ STOPIF( url__canonical_rev(current_url, &rev1), NULL); STOPIF( rev__get_text_to_tmpfile(url_to_fetch, rev1, DECODER_UNKNOWN, NULL, &last_tmp_file, NULL, sts, &props_r1, current_url->pool), NULL); /* If we didn't flush the stdio buffers here, we'd risk getting them * printed a second time from the child. */ fflush(NULL); last_child=fork(); STOPIF_CODE_ERR( last_child == -1, errno, "Cannot fork diff program"); if (!last_child) { STOPIF( hlp__format_path(sts, path, &disp_dest), NULL); /* Remove the ./ at the front */ setenv(FSVS_EXP_CURR_ENTRY, path+2, 1); disp_source= is_copy ? 
url_to_fetch : disp_dest; len_d=strlen(disp_dest); len_s=strlen(disp_source); if (cdiff_pipe != STDOUT_FILENO) { STOPIF_CODE_ERR( dup2(cdiff_pipe, STDOUT_FILENO) == -1, errno, "Redirect output"); /* Problem with svn+ssh - see comment below. */ fdflags=fcntl(STDOUT_FILENO, F_GETFD); fdflags &= ~FD_CLOEXEC; /* Does this return errors? */ fcntl(STDOUT_FILENO, F_SETFD, fdflags); } /* We need not be nice with memory usage - we'll be replaced soon. */ /* 30 chars should be enough for everyone */ b1=malloc(len_s + 60 + 30); b2=malloc(len_d + 60 + 30); STOPIF( hlp__strdup( &new_mtime_string, ctime(& sts_r2.st.mtim.tv_sec)), NULL); STOPIF( hlp__strdup( &other_mtime_string, ctime(&sts->st.mtim.tv_sec)), NULL); sprintf(b1, "%s \tRev. %llu \t(%-24.24s)", disp_source, (t_ull) rev1, other_mtime_string); if (rev2 == 0) { sprintf(b2, "%s \tLocal version \t(%-24.24s)", disp_dest, new_mtime_string); strcpy(short_desc, "local"); } else { sprintf(b2, "%s \tRev. %llu \t(%-24.24s)", disp_dest, (t_ull) rev2, new_mtime_string); sprintf(short_desc, "r%llu", (t_ull) rev2); } /* Print header line, just like a recursive diff does. */ STOPIF_CODE_EPIPE( printf("diff -u %s.r%llu %s.%s\n", disp_source, (t_ull)rev1, disp_dest, short_desc), "Diff header"); if (opt__is_verbose() > 0) // TODO: && !symlink ...) { STOPIF( df___print_meta( "Mode: 0%03o", sts->st.mode & 07777, META_DIFF_DELIMITER, sts_r2.st.mode & 07777), NULL); STOPIF( df___print_meta( "MTime: %.24s", other_mtime_string, META_DIFF_DELIMITER, new_mtime_string), NULL); STOPIF( df___print_meta( "Owner: %d (%s)", sts->st.uid, hlp__get_uname(sts->st.uid, "undefined"), META_DIFF_DELIMITER, sts_r2.st.uid, hlp__get_uname(sts_r2.st.uid, "undefined") ), NULL); STOPIF( df___print_meta( "Group: %d (%s)", sts->st.gid, hlp__get_grname(sts->st.gid, "undefined"), META_DIFF_DELIMITER, sts_r2.st.gid, hlp__get_grname(sts_r2.st.gid, "undefined") ), NULL); } fflush(NULL); // TODO: if special_dev ... 
/* Checking \b which return value we get is unnecessary ... On \b * every error we get \c -1 .*/ execlp( opt__get_string(OPT__DIFF_PRG), opt__get_string(OPT__DIFF_PRG), opt__get_string(OPT__DIFF_OPT), last_tmp_file, "--label", b1, (rev2 != 0 ? last_tmp_file2 : rev2_file ? rev2_file : path), "--label", b2, opt__get_string(OPT__DIFF_EXTRA), NULL); STOPIF_CODE_ERR( 1, errno, "Starting the diff program \"%s\" failed", opt__get_string(OPT__DIFF_PRG)); } ex: return status; } /** Cleanup rests. */ int df___cleanup(void) { int status; int ret; if (cdiff_pipe != STDOUT_FILENO) STOPIF_CODE_ERR( close(cdiff_pipe) == -1, errno, "Cannot close colordiff pipe"); if (cdiff_pid) { /* Should we kill colordiff? Let it stop itself? Wait for it? * It should terminate itself, because STDIN gets no more data. * * But if we don't wait, it might get scheduled after the shell printed * its prompt ... and that's not fine. But should we ignore the return * code? */ STOPIF_CODE_ERR( waitpid( cdiff_pid, &ret, 0) == -1, errno, "Can't wait"); DEBUGP("child %d exitcode %d - status 0x%04X", cdiff_pid, WEXITSTATUS(ret), ret); } STOPIF( df__do_diff(NULL, 0, 0, 0), NULL); ex: return status; } /// FSVS GCOV MARK: df___signal should not be executed /** Signal handler function. * If the user wants us to quit, we remove the temporary files, and exit. * * Is there a better/cleaner way? * */ static void df___signal(int sig) { DEBUGP("signal %d arrived!", sig); df___cleanup(); exit(0); } /** Does a diff of the local non-directory against the given revision. * */ int df___type_def_diff(struct estat *sts, svn_revnum_t rev, apr_pool_t *pool) { int status; char *special_stg, *fn; apr_file_t *apr_f; apr_size_t wr_len, exp_len; status=0; special_stg=NULL; switch (sts->st.mode & S_IFMT) { case S_IFREG: STOPIF( df__do_diff(sts, rev, 0, NULL), NULL); break; case S_IFCHR: case S_IFBLK: case S_IFANYSPECIAL: special_stg=ops__dev_to_filedata(sts); /* Fallthrough, ignore first statement. 
*/ case S_IFLNK: if (!special_stg) STOPIF( ops__link_to_string(sts, NULL, &special_stg), NULL); STOPIF( ops__build_path( &fn, sts), NULL); STOPIF_CODE_EPIPE( printf("Special entry changed: %s\n", fn), NULL); /* As "diff" cannot handle special files directly, we have to * write the expected string into a file, and diff against * that. * The remote version is fetched into a temporary file anyway. */ STOPIF( waa__get_tmp_name(NULL, &fn, &apr_f, pool), NULL); wr_len=exp_len=strlen(special_stg); STOPIF( apr_file_write(apr_f, special_stg, &wr_len), NULL); STOPIF_CODE_ERR( wr_len != exp_len, ENOSPC, NULL); STOPIF( apr_file_close(apr_f), NULL); STOPIF( df__do_diff(sts, rev, 0, fn), NULL); break; default: BUG("type?"); } ex: return status; } /** -. */ int df___direct_diff(struct estat *sts) { int status; svn_revnum_t rev1; char *fn; STOPIF( ops__build_path( &fn, sts), NULL); status=0; if (!S_ISDIR(sts->st.mode)) { DEBUGP("doing %s", fn); /* Has to be set per sts. */ rev1=sts->repos_rev; if ( (sts->entry_status & FS_REMOVED)) { STOPIF_CODE_EPIPE( printf("Only in repository: %s\n", fn), NULL); goto ex; } if (sts->to_be_ignored) goto ex; if ( (sts->entry_status & FS_NEW) || !sts->url) { if (sts->flags & RF___IS_COPY) { /* File was copied, we have a source */ } else { if (opt__is_verbose() > 0) STOPIF_CODE_EPIPE( printf("Only in local filesystem: %s\n", fn), NULL); goto ex; } } /* Local files must have changed; for repos-only diffs do always. */ if (sts->entry_status || opt_target_revisions_given) { DEBUGP("doing diff rev1=%llu", (t_ull)rev1); if (S_ISDIR(sts->st.mode)) { /* TODO: meta-data diff? */ } else { /* TODO: Some kind of pool handling in recursion. */ STOPIF( df___type_def_diff(sts, rev1, global_pool), NULL); } } } else { /* Nothing to do for directories? */ } ex: return status; } /** A cheap replacement for colordiff. * Nothing more than a \c cat. 
*/ int df___cheap_colordiff(void) { int status; char *tmp; const int tmp_size=16384; status=0; tmp=alloca(tmp_size); while ( (status=read(STDIN_FILENO,tmp, tmp_size)) > 0 ) if ( (status=write(STDOUT_FILENO, tmp, status)) == -1) break; if (status == -1) { STOPIF_CODE_ERR(errno != EPIPE, errno, "Getting or pushing diff data"); status=0; } ex: return status; } /** Tries to start colordiff. * If colordiff can not be started, but the option says \c auto, we just * forward the data. Sadly neither \c splice nor \c sendfile are available * everywhere. * */ int df___colordiff(int *handle, pid_t *cd_pid) { const char *program; int status; int pipes[2], fdflags, success[2]; status=0; program=opt__get_int(OPT__COLORDIFF) ? opt__get_string(OPT__COLORDIFF) : "colordiff"; STOPIF_CODE_ERR( pipe(pipes) == -1, errno, "No more pipes"); STOPIF_CODE_ERR( pipe(success) == -1, errno, "No more pipes, case 2"); /* There's a small problem if the parent gets scheduled before the child, * and the child doesn't find the colordiff binary; then the parent might * only find out when it tries to send the first data across the pipe. * * But the successfully spawned colordiff won't report success, so the * parent would have to wait for a fail message - which delays execution * unnecessary - or simply live with diff getting EPIPE. * * Trying to get it scheduled by sending it a signal (which will be * ignored) doesn't work reliably, too. * * The only way I can think of is opening a second pipe in reverse * direction; if there's nothing to be read but EOF, the program could be * started - else we get a single byte, signifying an error. 
*/ *cd_pid=fork(); STOPIF_CODE_ERR( *cd_pid == -1, errno, "Cannot fork colordiff program"); if (!*cd_pid) { close(success[0]); fdflags=fcntl(success[1], F_GETFD); fdflags |= FD_CLOEXEC; fcntl(success[1], F_SETFD, fdflags); STOPIF_CODE_ERR( ( dup2(pipes[0], STDIN_FILENO) | close(pipes[1]) | close(pipes[0]) ) == -1, errno, "Redirecting IO didn't work"); execlp( program, program, NULL); /* "" as value means best effort, so no error; any other string should * give an error. */ if (opt__get_int(OPT__COLORDIFF) != 0) { fdflags=errno; if (!fdflags) fdflags=EINVAL; /* Report an error to the parent. */ write(success[1], &fdflags, sizeof(fdflags)); STOPIF_CODE_ERR_GOTO(1, fdflags, quit, "!Cannot start colordiff program \"%s\"", program); } close(success[1]); /* Well ... do the best. */ /* We cannot use STOPIF() and similar, as that would return back up to * main - and possibly cause problems somewhere else. */ status=df___cheap_colordiff(); quit: exit(status ? 1 : 0); } close(pipes[0]); close(success[1]); status=read(success[0], &fdflags, sizeof(fdflags)); close(success[0]); STOPIF_CODE_ERR( status>0, fdflags, "!The colordiff program \"%s\" doesn't accept any data.\n" "Maybe it couldn't be started, or stopped unexpectedly?", opt__get_string(OPT__COLORDIFF) ); /* For svn+ssh connections a ssh process is spawned off. * If we don't set the CLOEXEC flag, it inherits the handle, and so the * colordiff child will never terminate - it might get data from ssh, after * all. */ fdflags=fcntl(pipes[1], F_GETFD); fdflags |= FD_CLOEXEC; /* Does this return errors? */ fcntl(pipes[1], F_SETFD, fdflags); *handle=pipes[1]; DEBUGP("colordiff is %d", *cd_pid); ex: return status; } /** Prints diffs for all entries with estat::entry_status or * estat::remote_status set. 
*/ int df___diff_wc_remote(struct estat *entry, apr_pool_t *pool) { int status; struct estat **sts; int removed; char *fn; apr_pool_t *subpool; status=0; subpool=NULL; STOPIF( apr_pool_create(&subpool, pool), NULL); removed = ( ((entry->remote_status & FS_REPLACED) == FS_REMOVED) ? 1 : 0 ) | ( ((entry->remote_status & FS_REPLACED) == FS_NEW) ? 2 : 0 ) | ( ((entry->entry_status & FS_REPLACED) == FS_REMOVED) ? 2 : 0 ); STOPIF( ops__build_path(&fn, entry), NULL); DEBUGP_dump_estat(entry); /* TODO: option to print the whole lot of removed and "new" lines for * files existing only at one point? */ switch (removed) { case 3: /* Removed both locally and remote; no change to print. (?) */ break; case 1: /* Remotely removed. */ STOPIF_CODE_EPIPE( printf("Only locally: %s\n", fn), NULL); break; case 2: /* Locally removed. */ STOPIF_CODE_EPIPE( printf("Only in the repository: %s\n", fn), NULL); break; case 0: /* Exists on both; show (recursive) differences. */ if ((entry->local_mode_packed != entry->new_rev_mode_packed)) { /* Another type, so a diff doesn't make much sense, does it? */ STOPIF_CODE_EPIPE( printf("Type changed from local %s to %s: %s\n", st__type_string(PACKED_to_MODE_T(entry->local_mode_packed)), st__type_string(PACKED_to_MODE_T(entry->new_rev_mode_packed)), fn), NULL); /* Should we print some message that sub-entries are available? if (opt__is_verbose() > 0) { } */ } else if (entry->entry_status || entry->remote_status) { /* Local changes, or changes to repository. */ if (S_ISDIR(entry->st.mode)) { /* TODO: meta-data diff? */ if (entry->entry_count) { sts=entry->by_inode; while (*sts) { STOPIF( df___diff_wc_remote(*sts, subpool), NULL); sts++; } } } else STOPIF( df___type_def_diff(entry, entry->repos_rev, subpool), NULL); } break; } ex: /* This is of type (void), so we don't have any status to check. */ if (subpool) apr_pool_destroy(subpool); return status; } /** Set the entry as BASE (has no changes). 
*/ int df___reset_remote_st(struct estat *sts) { sts->remote_status=0; return 0; } /** Does a repos/repos diff. * Currently works only for files. */ int df___repos_repos(struct estat *sts) { int status; char *fullpath, *path; struct estat **children; STOPIF( ops__build_path( &fullpath, sts), NULL); DEBUGP("%s: %s", fullpath, st__status_string_fromint(sts->remote_status)); STOPIF( hlp__format_path( sts, fullpath, &path), NULL); if ((sts->remote_status & FS_REPLACED) == FS_REPLACED) STOPIF_CODE_EPIPE( printf("Completely replaced: %s\n", path), NULL); else if (sts->remote_status & FS_NEW) STOPIF_CODE_EPIPE( printf("Only in r%llu: %s\n", (t_ull)opt_target_revision2, path), NULL); else if ((sts->remote_status & FS_REPLACED) == FS_REMOVED) STOPIF_CODE_EPIPE( printf("Only in r%llu: %s\n", (t_ull)opt_target_revision, path), NULL); else if (sts->remote_status) switch (sts->st.mode & S_IFMT) { case S_IFDIR: /* TODO: meta-data diff? */ if (sts->entry_count) { children=sts->by_inode; while (*children) STOPIF( df___repos_repos(*(children++)), NULL); } break; /* Normally a repos-repos diff can only show symlinks changing - * all other types of special entries get *replaced*. */ case S_IFANYSPECIAL: /* We don't know yet which special type it is. */ case S_IFLNK: case S_IFBLK: case S_IFCHR: STOPIF_CODE_EPIPE( printf("Special entry changed: %s\n", path), NULL); /* Fallthrough */ case S_IFREG: STOPIF( df__do_diff(sts, opt_target_revision, opt_target_revision2, NULL), NULL); break; default: BUG("type?"); } ex: return status; } /** -. * * We get the WC status, fetch the named changed entries, and call * an external diff program for each. * * As a small performance optimization we do that kind of parallel - * while we're fetching a file, we run the diff. 
*/ int df__work(struct estat *root, int argc, char *argv[]) { int status; int i, deinit; char **normalized; svn_revnum_t rev, base; char *norm_wcroot[2]= {".", NULL}; status=0; deinit=1; STOPIF( waa__find_common_base(argc, argv, &normalized), NULL); STOPIF( url__load_nonempty_list(NULL, 0), NULL); STOPIF(ign__load_list(NULL), NULL); signal(SIGINT, df___signal); signal(SIGTERM, df___signal); signal(SIGHUP, df___signal); signal(SIGCHLD, SIG_DFL); /* check for colordiff */ if (( opt__get_int(OPT__COLORDIFF)==0 || opt__doesnt_say_off(opt__get_string(OPT__COLORDIFF)) ) && (isatty(STDOUT_FILENO) || opt__get_prio(OPT__COLORDIFF) > PRIO_PRE_CMDLINE) ) { DEBUGP("trying to use colordiff"); STOPIF( df___colordiff(&cdiff_pipe, &cdiff_pid), NULL); } /* TODO: If we get "-u X@4 Y@4:3 Z" we'd have to do different kinds of * diff for the URLs. * What about filenames? */ STOPIF( url__mark_todo(), NULL); switch (opt_target_revisions_given) { case 0: /* Diff WC against BASE. */ action->local_callback=df___direct_diff; /* We know that we've got a wc base because of * waa__find_common_base() above. */ STOPIF( waa__read_or_build_tree(root, argc, normalized, argv, NULL, 1), NULL); break; case 1: /* WC against rX. */ /* Fetch local changes ... */ action->local_callback=st__progress; action->local_uninit=st__progress_uninit; STOPIF( waa__read_or_build_tree(root, argc, normalized, argv, NULL, 1), NULL); // Has to set FS_CHILD_CHANGED somewhere /* Fetch remote changes ... */ while ( ! ( status=url__iterator(&rev) ) ) { STOPIF( cb__record_changes(root, rev, current_url->pool), NULL); } STOPIF_CODE_ERR( status != EOF, status, NULL); STOPIF( df___diff_wc_remote(root, current_url->pool), NULL); break; case 2: /* rX:Y. * This works in a single loop because the URLs are sorted in * descending priority, and an entry removed at a higher priority * could be replaced by one at a lower. */ /* TODO: 2 revisions per-URL. */ /* If no entries are given, do the whole working copy. 
*/ if (!argc) normalized=norm_wcroot; while ( ! ( status=url__iterator(&rev) ) ) { STOPIF( url__canonical_rev(current_url, &opt_target_revision), NULL); STOPIF( url__canonical_rev(current_url, &opt_target_revision2), NULL); /* Take the values at the first revision as base; say that we've * got nothing. */ current_url->current_rev=0; action->repos_feedback=df___reset_remote_st; STOPIF( cb__record_changes(root, opt_target_revision, current_url->pool), NULL); /* Now get changes. We cannot do diffs directly, because * we must not use the same connection for two requests * simultaneously. */ action->repos_feedback=NULL; /* We say that the WC root is at the target revision, but that some * paths are not. */ base=current_url->current_rev; current_url->current_rev=opt_target_revision2; STOPIF( cb__record_changes_mixed(root, opt_target_revision2, normalized, base, current_url->pool), NULL); } STOPIF_CODE_ERR( status != EOF, status, NULL); /* If we'd use the log functions to get a list of changed files * we'd be slow for large revision ranges; for the various * svn_ra_do_update, svn_ra_do_diff2 and similar functions we'd * need the (complete) working copy base to get deltas against (as * we don't know which entries are changed). * * This way seems to be the fastest, and certainly the easiest for * now. */ /* "time fsvs diff -r4:4" on "ssh+svn://localhost/..." for 8400 * files gives a real time of 3.6sec. * "time fsvs diff > /dev/null" on "ssh+svn://localhost/..." for 840 * of 8400 files changed takes 1.8sec. * */ /* A possible idea would be to have a special delta-editor that * accepts (not already known) directories as unchanged. * Then it should be possible [1] to ask for the *needed* parts * only, which should save a fair bit of bandwidth. * * Ad 1: Ignoring "does not exist" messages when we say "directory * 'not-needed' is already at revision 'target'" and this isn't * true. TODO: Test whether all ra layers make that possible. 
 */
			STOPIF( df___repos_repos(root), NULL);
			status=0;
			break;

		default:
			BUG("what?");
	}

	STOPIF( df__do_diff(NULL, 0, 0, 0), NULL);

ex:
	if (deinit)
	{
		deinit=0;
		i=df___cleanup();
		if (!status && i)
			STOPIF(i, NULL);
	}

	return status;
}

fsvs-fsvs-1.2.12/src/diff.h

/************************************************************************
 * Copyright (C) 2006-2008 Philipp Marek.
 *
 * This program is free software; you can redistribute it and/or modify
 * it under the terms of the GNU General Public License version 3 as
 * published by the Free Software Foundation.
 ************************************************************************/

#ifndef __DIFF_H__
#define __DIFF_H__

/** \file
 * \ref diff action header file. */

#include "global.h"
#include "actions.h"

/** Diff command main function. */
work_t df__work;

#endif

fsvs-fsvs-1.2.12/src/direnum.c

/************************************************************************
 * Copyright (C) 2005-2009 Philipp Marek.
 *
 * This program is free software; you can redistribute it and/or modify
 * it under the terms of the GNU General Public License version 3 as
 * published by the Free Software Foundation.
 ************************************************************************/

#include
#include
#include
#include
#include
#include
#include
#include
#include
#include

#include "est_ops.h"
#include "direnum.h"
#include "warnings.h"
#include "global.h"
#include "helper.h"

/** \file
 * Directory enumerator functions. */

/** \defgroup getdents Directory reading
 * \ingroup perf
 * How to read a million inodes as fast as possible
 *
 * \section getdents_why Why?
 * Why do we care for \a getdents64 instead of simply using the
 * (portable) \a readdir()?
* - \a getdents64 gives 64bit inodes (which we need on big * filesystems) * - as \a getdents64 gives up to (currently) 4096 bytes of directory * data, we save some amount of library and/or kernel calls - * for 32 byte per directory entry (estimated, measured, averaged) * we get a maximum of about 128 directory entries per call - which * saves many syscalls and much time. * Not counting the overhead of the apr- and libc-layers ... which we * should (have to) use for eg. windows. * * \section getdents_how How? * We have two kinds of directory reading codes. * - A fast one with \a getdents64() (linux-specific) * - A compatibility layer using \a opendir() / \a readdir() / \a closedir(). * * Which one to use is defined by \c configure. * */ /** @{ */ #undef HAVE_GETDENTS64 #ifdef HAVE_LINUX_TYPES_H #ifdef HAVE_LINUX_UNISTD_H /** If the system fulfills all necessary checks to use getdents(), this macro * is set. */ #define HAVE_GETDENTS64 1 #endif #endif #ifdef HAVE_GETDENTS64 /* Fast linux version. */ #include #include /** The type of handle. */ typedef int dir__handle; /** A compatibility structure. * It has an inode; a name; and a record length in it, to get from one * record to the next. */ typedef struct dirent64 fsvs_dirent; /** Starts enumeration of the given \a path. The directory handle is returned * in \a *dirp. * \return 0 for success, or an error code. */ int dir__start_enum(dir__handle *dh, char *path) { int status; status=0; *dh=open(path, O_RDONLY | O_DIRECTORY); STOPIF_CODE_ERR( *dh <= 0, errno, "open directory %s for reading", path); ex: return status; } /** The enumeration function. * \param dh The handle given by dir__start_enum. * \param dirp The space where data should be returned * \param count The maximum number of bytes in \a dirp. * * \return The number of bytes used in \a dirp. */ int dir__enum(dir__handle dh, fsvs_dirent *dirp, unsigned int count) { return syscall(__NR_getdents64, dh, dirp, count); } /** Simply closes the handle \a dh. 
 * */
int dir__close(dir__handle dh)
{
	int status;

	status=0;
	STOPIF_CODE_ERR( close(dh) == -1, errno,
			"closing dir-handle");

ex:
	return status;
}


/** How to get the length of a directory (in bytes), from a handle \a dh,
 * into \a st->size. */
int dir__get_dir_size(dir__handle dh, struct sstat_t *st)
{
	int status;

	status=0;
	STOPIF( hlp__fstat(dh, st),
			"Get directory size");

ex:
	return status;
}

#else

/* We fake something compatible with what we need.
 * That's not the finest way, but it works (TM). */
#include <dirent.h>
#include <limits.h>

struct fsvs_dirent_t {
	uint64_t d_ino;
	int d_reclen;
	char d_name[NAME_MAX+1];
};

typedef struct fsvs_dirent_t fsvs_dirent;
typedef DIR* dir__handle;


int dir__start_enum(dir__handle *dh, char *path)
{
	int status;

	status=0;
	STOPIF_CODE_ERR( (*dh=opendir(path)) == NULL, errno,
			"Error opening directory %s", path);

ex:
	return status;
}


/* Impedance matching .. don't like it. */
int dir__enum(dir__handle dh, fsvs_dirent *dirp, unsigned int count)
{
	struct dirent *de;

	de=readdir(dh);
	/* EOD ? */
	if (!de) return 0;

	dirp[0].d_ino = de->d_ino;
	strcpy( dirp[0].d_name, de->d_name);
	dirp[0].d_reclen = sizeof(dirp[0])-sizeof(dirp[0].d_name) +
		strlen(dirp[0].d_name) + 1;
	return dirp[0].d_reclen;
}


int dir__close(dir__handle dh)
{
	int status;

	status=0;
	STOPIF_CODE_ERR( closedir(dh) == -1, errno,
			"Error closing directory handle");

ex:
	return status;
}


int dir__get_dir_size(dir__handle dh, struct sstat_t *st)
{
	int status;

	status=0;
	st->size=0;
#ifdef HAVE_DIRFD
	STOPIF( hlp__fstat(dirfd(dh), st),
			"Get directory size()");
#endif

ex:
	return status;
}

#endif

/** @} */


/** The amount of memory that should be allocated for directory reading.
 * This value should be bigger than (or at least equal to) the number of
 * bytes returned by \a getdents().
 * For the compatibility layer it's more or less the maximum filename length
 * plus the inode and record length lengths.
 *
 * This many bytes \b more will also be allocated for the filenames in a
 * directory; if we get this close to the end of the buffer,
 * the memory area will be reallocated. */
#define FREE_SPACE (4096)


/** Compares two struct estat pointers by device/inode.
 * \return +2, +1, 0, -1, -2, suitable for \a qsort().
 *
 * That is now an inline function; but without force gcc doesn't inline it
 * on 32bit, because of the size (64bit compares, 0x6b bytes).
 * [ \c __attribute__((always_inline)) in declaration]. */
int dir___f_sort_by_inodePP(struct estat *a, struct estat *b)
{
	register const struct sstat_t* __a=&(a->st);
	register const struct sstat_t* __b=&(b->st);

	if (__a->dev > __b->dev) return +2;
	if (__a->dev < __b->dev) return -2;
	if (__a->ino > __b->ino) return +1;
	if (__a->ino < __b->ino) return -1;
	return 0;
}


/** Compares the data inside two struct estat pointers to pointers by
 * device/inode.
 * \return +2, +1, 0, -1, -2, suitable for \a qsort(). */
int dir___f_sort_by_inode(struct estat **a, struct estat **b)
{
	return dir___f_sort_by_inodePP(*a, *b);
}


/** Compares two names/strings.
 * Used for type checking cleanliness.
 * 'C' as for 'Const'.
 * \return +2, +1, 0, -1, -2, suitable for \a qsort(). */
int dir___f_sort_by_nameCC(const void *a, const void *b)
{
	return strcoll(a,b);
}


/** Compares the data inside two struct estat pointers to pointers
 * by name.
 * \return +2, +1, 0, -1, -2, suitable for \a qsort(). */
int dir___f_sort_by_name(const void *a, const void *b)
{
	register const struct estat * const *_a=a;
	register const struct estat * const *_b=b;

	return dir___f_sort_by_nameCC((*_a)->name, (*_b)->name);
}


/** Compares a pointer to name (string) with a struct estat pointer
 * to pointer.
 * \return +2, +1, 0, -1, -2, suitable for \a qsort(). */
int dir___f_sort_by_nameCS(const void *a, const void *b)
{
	register const struct estat * const *_b=b;

	return dir___f_sort_by_nameCC(a, (*_b)->name);
}


/** -.
 * If it has no entries, an array with NULL is nonetheless allocated. */
int dir__sortbyname(struct estat *sts)
{
	int count, status;

	// BUG_ON(!S_ISDIR(sts->st.mode));
	count=sts->entry_count+1;

	/* After copying we can release some space, as 64bit inodes
	 * are smaller than 32bit pointers.
	 * Or otherwise we may have to allocate space anyway - this
	 * happens automatically on reallocating a NULL pointer. */
	STOPIF( hlp__realloc( &sts->by_name,
				count*sizeof(*sts->by_name)), NULL);

	if (sts->entry_count!=0)
	{
		memcpy(sts->by_name, sts->by_inode,
				count*sizeof(*sts->by_name));
		qsort(sts->by_name, sts->entry_count, sizeof(*sts->by_name),
				dir___f_sort_by_name);
	}

	sts->by_name[sts->entry_count]=NULL;
	status=0;

ex:
	return status;
}


/** -.
 * */
int dir__sortbyinode(struct estat *sts)
{
	// BUG_ON(!S_ISDIR(sts->st.mode));
	if (sts->entry_count)
	{
		BUG_ON(!sts->by_inode);
		qsort(sts->by_inode, sts->entry_count, sizeof(*sts->by_inode),
				(comparison_fn_t)dir___f_sort_by_inode);
	}

	return 0;
}


/** -.
 * The entries are sorted by inode number and stat()ed.
 *
 * \param this a pointer to this directory's stat - for estimating
 * the number of entries. Only this->st.st_size is used for that -
 * it may have to be zeroed before calling.
 * \param est_count is used to give an approximate number of entries, to
 * avoid many realloc()s.
 * \param give_by_name simply tells whether the ->by_name array should be
 * created, too.
 *
 * The result is written back into the sub-entry array in \a this.
 *
 * To avoid reallocating (and copying!) large amounts of memory,
 * this function fills some arrays from the directory, then allocates the
 * needed space, sorts the data (see note below) and adds all other data.
 * See \a sts_array, \a names and \a inode_numbers.
 *
 * \note Sorting by inode number brings about 30% faster lookup
 * times on my test environment (8 to 5 seconds) on an \b empty cache.
 * Once the cache is filled, it won't make a difference.
 *
 * \return 0 for success, else an errorcode.
 */
int dir__enumerator(struct estat *this, int est_count, int give_by_name)
{
	dir__handle dirhandle;
	int size;
	int count;
	int i,j,l;
	int sts_free;
	int status;
	/* Struct \a estat pointer for temporary use. */
	struct estat *sts=NULL;
	/* The estimated number of entries. */
	int alloc_count;
	/* Stores the index of the next free byte in \a strings. */
	int mark;
	/* Filename storage space. Gets stored in the directories \a ->strings
	 * for memory management purposes. */
	void *strings=NULL;
	/* Array of filenames. As the data space potentially has to be
	 * reallocated, at first only the offsets into \a *strings are stored.
	 * These entries must be of the same size as a pointer, because the array
	 * is reused as \c sts_array[] . */
	long *names=NULL;
	/* The buffer space, used as a struct \a fsvs_dirent */
	char buffer[FREE_SPACE];
	/* points into and walks over the \a buffer */
	fsvs_dirent *p_de;
	/* Array of the struct \a estat pointers. Reuses the storage space
	 * of the \a names array. */
	struct estat **sts_array=NULL;
	/* Array of inodes. */
	ino_t *inode_numbers=NULL;


	STOPIF( dir__start_enum(&dirhandle, "."), NULL);

	if (!this->st.size)
		STOPIF( dir__get_dir_size(dirhandle, &(this->st)), NULL);

	/* At least a long for the inode number, and 3 characters +
	 * a \0 per entry. But assume an average of 11 characters + \0.
	 * If that's incorrect, we'll have to do a realloc. Oh, well.
	 *
	 * Another estimate which this function gets is the number of files
	 * last time this directory was traversed.
	 *
	 * Should maybe be tunable in the future.
	 *
	 * (On my system I have an average of 13.9 characters per entry,
	 * without the \0) */
	alloc_count=this->st.size/(sizeof(*p_de) - sizeof(p_de->d_name) +
			ESTIMATED_ENTRY_LENGTH +1);
	/* + ca. 20% */
	est_count= (est_count*19)/16 +1;
	if (alloc_count > est_count)
		est_count=alloc_count;

	/* on /proc, which gets reported with 0 bytes,
	 * only 1 entry is allocated. This entry multiplied with 19/16
	 * is still 1 ... crash.
	 * So all directories reported with 0 bytes are likely virtual
	 * file systems, which can have _many_ entries ... */
	if (est_count < 32)
		est_count=32;

	size=FREE_SPACE + est_count*( ESTIMATED_ENTRY_LENGTH + 1 );
	STOPIF( hlp__alloc( &strings, size), NULL);

	mark=count=0;
	inode_numbers=NULL;
	names=NULL;
	alloc_count=0;

	/* read the directory and count entries */
	while ( (i=dir__enum(dirhandle, (fsvs_dirent*)buffer, sizeof(buffer))) >0)
	{
		/* count entries, copy name and inode nr */
		j=0;
		while (j<i)
		{
			if (count >= alloc_count)
			{
				/* If we already started, put a bit more space here.
				 * Should maybe be configurable. */
				if (!alloc_count)
					alloc_count=est_count;
				else
					alloc_count=alloc_count*19/16;

				STOPIF( hlp__realloc( &names,
							alloc_count*sizeof(*names)), NULL);
				/* temporarily we store the inode number in the *entries_by_inode
				 * space; that changes when we've sorted them. */
				STOPIF( hlp__realloc( &inode_numbers,
							alloc_count*sizeof(*inode_numbers)), NULL);
			}

			p_de=(fsvs_dirent*)(buffer+j);
			DEBUGP("found %llu %s",
					(t_ull)p_de->d_ino, p_de->d_name);

			if (p_de->d_name[0] == '.' &&
					((p_de->d_name[1] == '\0') ||
					 (p_de->d_name[1] == '.' &&
						p_de->d_name[2] == '\0')) )
			{
				/* just ignore . and .. */
			}
			else
			{
				/* store inode for sorting */
				inode_numbers[count] = p_de->d_ino;
				/* Store pointer to name.
				 * In case of a realloc all pointers to the strings would get
				 * invalid. So don't store the addresses now - only offsets. */
				names[count] = mark;
				/* copy name, mark space as used */
				l=strlen(p_de->d_name);
				strcpy(strings+mark, p_de->d_name);
				mark += l+1;

				count++;
			}

			/* next */
			j += p_de->d_reclen;
		}

		/* Check for free space.
		 * We read at most FREE_SPACE bytes at once,
		 * so it's enough to have FREE_SPACE bytes free.
		 * Especially because there are some padding and pointer bytes
		 * which get discarded. */
		if (size-mark < FREE_SPACE)
		{
			/* Oh no. Have to reallocate.
			 * But we can hope that this (big) chunk is on the top
			 * of the heap, so that it won't be copied elsewhere.
			 *
			 * How much should we add?
			 * For now, just give about 30%. */
			/* size*21: Let's hope that this won't overflow :-) */
			size=(size*21)/16;
			/* If +20% is not at least the buffer size (FREE_SPACE),
			 * take at least that much memory. */
			if (size < mark+FREE_SPACE)
				size=mark+FREE_SPACE;

			STOPIF( hlp__realloc( &strings, size), NULL);
			DEBUGP("strings realloc(%p, %d)", strings, size);
		}
	}
	STOPIF_CODE_ERR(i<0, errno, "getdents64");
	DEBUGP("after loop found %d entries, %d bytes string-space", count, mark);

	this->entry_count=count;

	/* Free allocated, but not used, memory. */
	STOPIF( hlp__realloc( &strings, mark), NULL);
	/* If a _down_-sizing ever gives an error, we're really botched.
	 * But if it's an empty directory, a NULL pointer will be returned. */
	BUG_ON(mark && !strings);

	this->strings=strings;
	/* Now this space is used - don't free. */
	strings=NULL;

	/* Same again. Should never be NULL, as the size is never 0. */
	STOPIF( hlp__realloc( &inode_numbers,
				(count+1)*sizeof(*inode_numbers)), NULL);
	STOPIF( hlp__realloc( &names,
				(count+1)*sizeof(*names)), NULL);

	/* Store end-of-array markers */
	inode_numbers[count]=0;
	names[count]=0;

	/* Now we know exactly how many entries, we build the array for sorting.
	 * We don't do that earlier, because resizing (and copying!)
	 * is slow. Doesn't matter as much if it's just pointers,
	 * but for bigger structs it's worth avoiding.
	 * Most of the structures get filled only after sorting! */

	/* We reuse the allocated array for names (int**) for storing
	 * the (struct estat**). */
	sts_array=(struct estat**)names;
	sts_free=0;
	for(i=0; i<count; i++)
	{
		if (!sts_free)
			STOPIF( ops__allocate(count-i, &sts, &sts_free), NULL);

		sts->name=this->strings + names[i];
		sts->st.ino=inode_numbers[i];

		/* now the data is copied, we store the pointer. */
		sts_array[i] = sts;

		sts++;
		sts_free--;
	}
	/* now names is no longer valid - space was taken by sts_array. */
	names=NULL;

	this->by_inode=sts_array;
	/* Now the space is claimed otherwise - so don't free.
	 */
	sts_array=NULL;

	/* See inodeSort */
	STOPIF( dir__sortbyinode(this), NULL);

	// for(i=0; i<count; i++) DEBUGP("%llu %s", (t_ull)de[i]->d_ino, de[i]->d_name);

	for(i=0; i<count; i++)
	{
		sts=this->by_inode[i];
		sts->parent=this;
		sts->repos_rev=SVN_INVALID_REVNUM;

		status=hlp__lstat(sts->name, &(sts->st));
		if (abs(status) == ENOENT)
		{
			DEBUGP("entry \"%s\" not interesting - maybe a fifo or socket?",
					sts->name);
			sts->to_be_ignored=1;
		}
		else
			STOPIF( status, "lstat(%s)", sts->name);

		/* New entries get that set, because they're "updated". */
		sts->old_rev_mode_packed = sts->local_mode_packed=
			MODE_T_to_PACKED(sts->st.mode);
	}

	/* Possibly return list sorted by name. */
	if (give_by_name)
		STOPIF(dir__sortbyname(this), NULL);
	else
		/* should not be needed - but it doesn't hurt, either. */
		this->by_name=NULL;

	status=0;

ex:
	IF_FREE(strings);
	IF_FREE(names);
	IF_FREE(inode_numbers);
	IF_FREE(sts_array);

	if (dirhandle>=0) dir__close(dirhandle);
	return status;
}
fsvs-fsvs-1.2.12/src/direnum.h
/************************************************************************
 * Copyright (C) 2005-2008 Philipp Marek.
 *
 * This program is free software; you can redistribute it and/or modify
 * it under the terms of the GNU General Public License version 3 as
 * published by the Free Software Foundation.
 ************************************************************************/

#ifndef __DIRENUM_H__
#define __DIRENUM_H__

/** \file
 * Directory enumerator header file. */

// for alphasort
#include <dirent.h>

#include "global.h"

/** This function reads a directory into a self-allocated memory area. */
int dir__enumerator(struct estat *this, int est_count, int by_name) ;

/** Sorts the entries of the directory \a sts by name into the
 * estat::by_name array, which is reallocated and NULL-terminated. */
int dir__sortbyname(struct estat *sts);
/** Sorts the existing estat::by_inode array afresh, by device/inode.
 */
int dir__sortbyinode(struct estat *sts);

int dir___f_sort_by_inode(struct estat **a, struct estat **b);
int dir___f_sort_by_inodePP(struct estat *a, struct estat *b);
int dir___f_sort_by_name(const void *a, const void *b);
int dir___f_sort_by_nameCC(const void *a, const void *b);
int dir___f_sort_by_nameCS(const void *a, const void *b);

/** How many bytes an average filename needs.
 * Measured on a debian system:
 * \code
 *   find / -printf "%f\n" | wc
 * \endcode
 * */
#define ESTIMATED_ENTRY_LENGTH (15)

#endif
fsvs-fsvs-1.2.12/src/doc.g-c
/* This file is generated, do not edit!
 * Last done on Mon Nov 7 09:41:43 2022
 * */

const char hlp_add[]=
" fsvs add [-u URLNAME] PATH [PATH...]\n"
"\n"
" With this command you can explicitly define entries to be versioned,\n"
" even if they have a matching ignore pattern. They will be sent to the\n"
" repository on the next commit, just like other new entries, and will\n"
" therefore be reported as New .\n"
"\n"
" The -u option can be used if you have more than one URL defined for\n"
" this working copy and want to have the entries pinned to this URL.\n"
"\n";

const char hlp_unvers[]=
" fsvs unversion PATH [PATH...]\n"
"\n"
" This command flags the given paths locally as removed. On the next\n"
" commit they will be deleted in the repository, and the local\n"
" information of them will be removed, but not the entries themselves. So\n"
" they will show up as New again, and you get another chance at ignoring\n"
" them.\n"
"\n";

const char hlp_build[]=
" This is used mainly for debugging. It traverses the filesystem and\n"
" builds a new entries file. 
In production it should not be used; as\n" " neither URLs nor the revision of the entries is known, information is\n" " lost by calling this function!\n" "\n" " Look at sync-repos.\n" "\n"; const char hlp_delay[]=" This command delays execution until time has passed at least to the\n" " next second after writing the data files used by FSVS (dir and urls).\n" "\n" " This command is for use in scripts; where previously the delay option\n" " was used, this can be substituted by the given command followed by the\n" " delay command.\n" "\n" " The advantage against the delay option is that read-only commands can\n" " be used in the meantime.\n" "\n" " An example:\n" " fsvs commit /etc/X11 -m \"Backup of X11\"\n" " ... read-only commands, like \"status\"\n" " fsvs delay /etc/X11\n" " ... read-write commands, like \"commit\"\n" "\n" " The optional path can point to any path in the WC.\n" "\n" " In the testing framework it is used to save a bit of time; in normal\n" " operation, where FSVS commands are not so tightly packed, it is\n" " normally preferable to use the delay option.\n" "\n"; const char hlp_cat[]=" fsvs cat [-r rev] path\n" "\n" " Fetches a file repository, and outputs it to STDOUT. If no revision is\n" " specified, it defaults to BASE, ie. the current local revision number\n" " of the entry.\n" "\n"; const char hlp_checko[]=" fsvs checkout [path] URL [URLs...]\n" "\n" " Sets one or more URLs for the current working directory (or the\n" " directory path), and does an checkout of these URLs.\n" "\n" " Example:\n" " fsvs checkout . 
http://svn/repos/installation/machine-1/trunk\n" "\n" " The distinction whether a directory is given or not is done based on\n" " the\n" " result of URL-parsing – if it looks like an URL, it is used as an URL.\n" " Please mind that at most a single path is allowed; as soon as two\n" " non-URLs are found an error message is printed.\n" "\n" " If no directory is given, \".\" is used; this differs from the usual\n" " subversion usage, but might be better suited for usage as a recovery\n" " tool (where versioning / is common). Opinions welcome.\n" "\n" " The given path must exist, and should be empty – FSVS will abort on\n" " conflicts, ie. if files that should be created already exist.\n" " If there's a need to create that directory, please say so; patches for\n" " some parameter like -p are welcome.\n" "\n" " For a format definition of the URLs please see the chapter Format of\n" " URLs and the urls and update commands.\n" "\n" " Furthermore you might be interested in Using an alternate root\n" " directory and Recovery for a non-booting system.\n" "\n"; const char hlp_commit[]=" fsvs commit [-m \"message\"|-F filename] [-v] [-C [-C]] [PATH [PATH ...]]\n" " filename\n" " static char * filename\n" " Definition: update.c:90\n" "\n" " Commits (parts of) the current state of the working copy into the\n" " repository.\n" "\n"; const char hlp_cp[]=" fsvs cp [-r rev] SRC DEST\n" " fsvs cp dump\n" " fsvs cp load\n" "\n" " The copy command marks DEST as a copy of SRC at revision rev, so that\n" " on the next commit of DEST the corresponding source path is sent as\n" " copy source.\n" "\n" " The default value for rev is BASE, ie. 
the revision the SRC (locally)\n" " is at.\n" "\n" " Please note that this command works always on a directory structure -\n" " if you say to copy a directory, the whole structure is marked as copy.\n" " That means that if some entries below the copy are missing, they are\n" " reported as removed from the copy on the next commit.\n" " (Of course it is possible to mark files as copied, too; non-recursive\n" " copies are not possible, but can be emulated by having parts of the\n" " destination tree removed.)\n" "\n" " Note\n" " TODO: There will be differences in the exact usage - copy will\n" " try to run the cp command, whereas copied will just remember the\n" " relation.\n" "\n" " If this command are used without parameters, the currently defined\n" " relations are printed; please keep in mind that the key is the\n" " destination name, ie. the 2nd line of each pair!\n" "\n" " The input format for load is newline-separated - first a SRC line,\n" " followed by a DEST line, then an line with just a dot ( \".\") as\n" " delimiter. If you've got filenames with newlines or other special\n" " characters, you have to give the paths as arguments.\n" "\n" " Internally the paths are stored relative to the working copy base\n" " directory, and they're printed that way, too.\n" "\n" " Later definitions are appended to the internal database; to undo\n" " mistakes, use the uncopy action.\n" "\n" " Note\n" " Important: User-defined properties like fsvs:commit-pipe are not\n" " copied to the destinations, because of space/time issues\n" " (traversing through entire subtrees, copying a lot of\n" " property-files) and because it's not sure that this is really\n" " wanted. TODO: option for copying properties?\n" "\n" " Todo:\n" " -0 like for xargs?\n" "\n" " Todo:\n" " Are different revision numbers for load necessary? 
Should dump\n" " print the source revision number?\n" "\n" " Todo:\n" " Copying from URLs means update from there\n" "\n" " Note\n" " As subversion currently treats a rename as copy+delete, the mv\n" " command is an alias to cp.\n" "\n" " If you have a need to give the filenames dump or load as first\n" " parameter for copyfrom relations, give some path, too, as in \"./dump\".\n" "\n" " Note\n" " The source is internally stored as URL with revision number, so\n" " that operations like these\n" "\n" " $ fsvs cp a b\n" "\n" " $ rm a/1\n" "\n" " $ fsvs ci a\n" "\n" " $ fsvs ci b\n" " work - FSVS sends the old (too recent!) revision number as\n" " source, and so the local filelist stays consistent with the\n" " repository.\n" " But it is not implemented (yet) to give an URL as copyfrom\n" " source directly - we'd have to fetch a list of entries (and\n" " possibly the data!) from the repository.\n" "\n" " Todo:\n" " Filter for dump (patterns?).\n" "\n"; const char hlp_copyfr[]=" fsvs copyfrom-detect [paths...]\n" "\n" " This command tells FSVS to look through the new entries, and see\n" " whether it can find some that seem to be copied from others already\n" " known.\n" " It will output a list with source and destination path and why it could\n" " match.\n" "\n" " This is just for information purposes and doesn't change any FSVS\n" " state, (TODO: unless some option/parameter is set).\n" "\n" " The list format is on purpose incompatible with the load syntax, as the\n" " best match normally has to be taken manually.\n" "\n" " Todo:\n" " some parameter that just prints the \"best\" match, and outputs\n" " the correct format.\n" "\n" " If verbose is used, an additional value giving the percentage of\n" " matching blocks, and the count of possibly copied entries is printed.\n" "\n" " Example:\n" " $ fsvs copyfrom-list -v\n" " newfile1\n" " md5:oldfileA\n" " newfile2\n" " md5:oldfileB\n" " md5:oldfileC\n" " md5:oldfileD\n" " newfile3\n" " inode:oldfileI\n" " 
manber=82.6:oldfileF\n" " manber=74.2:oldfileG\n" " manber=53.3:oldfileH\n" " ...\n" " 3 copyfrom relations found.\n" "\n" " The abbreviations are:\n" " md5\n" "\n" " The MD5 of the new file is identical to that of one or more already\n" " committed files; there is no percentage.\n" "\n" " inode\n" "\n" " The device/inode number is identical to the given known entry; this\n" " could mean that the old entry has been renamed or hardlinked. Note: Not\n" " all filesystems have persistent inode numbers (eg. NFS) - so depending\n" " on your filesystems this might not be a good indicator!\n" "\n" " name\n" "\n" " The entry has the same name as another entry.\n" "\n" " manber\n" "\n" " Analysing files of similar size shows some percentage of\n" " (variable-sized) common blocks (ignoring the order of the blocks).\n" "\n" " dirlist\n" "\n" " The new directory has similar files to the old directory.\n" " The percentage is (number_of_common_entries)/(files_in_dir1 +\n" " files_in_dir2 - number_of_common_entries).\n" "\n" " Note\n" " manber matching is not implemented yet.\n" " If too many possible matches for an entry are found, not all are\n" " printed; only an indicator ... 
is shown at the end.\n" "\n"; const char hlp_uncp[]=" fsvs uncopy DEST [DEST ...]\n" "\n" " The uncopy command removes a copyfrom mark from the destination entry.\n" " This will make the entry unknown again, and reported as New on the next\n" " invocations.\n" "\n" " Only the base of a copy can be un-copied; if a directory structure was\n" " copied, and the given entry is just implicitly copied, this command\n" " will return an error.\n" "\n" " This is not folded in revert, because it's not clear whether revert on\n" " copied, changed entries should restore the original copyfrom data or\n" " remove the copy attribute; by using another command this is no longer\n" " ambiguous.\n" "\n" " Example:\n" " $ fsvs copy SourceFile DestFile\n" " # Whoops, was wrong!\n" " $ fsvs uncopy DestFile\n" "\n"; const char hlp_diff[]=" fsvs diff [-v] [-r rev[:rev2]] [-R] PATH [PATH...]\n" "\n" " This command gives you diffs between local and repository files.\n" "\n" " With -v the meta-data is additionally printed, and changes shown.\n" "\n" " If you don't give the revision arguments, you get a diff of the base\n" " revision in the repository (the last commit) against your current local\n" " file. With one revision, you diff this repository version against your\n" " local file. With both revisions given, the difference between these\n" " repository versions is calculated.\n" "\n" " You'll need the diff program, as the files are simply passed as\n" " parameters to it.\n" "\n" " The default is to do non-recursive diffs; so fsvs diff . 
will output\n" " the changes in all files in the current directory and below.\n" "\n" " The output for special files is the diff of the internal subversion\n" " storage, which includes the type of the special file, but no newline at\n" " the end of the line (which diff complains about).\n" "\n" " For entries marked as copy the diff against the (clean) source entry is\n" " printed.\n" "\n" " Please see also Options relating to the \"diff\" action and Using\n" " colordiff.\n" "\n" " Todo:\n" " Two revisions diff is buggy in that it (currently) always\n" " fetches the full trees from the repository; this is not only a\n" " performance degradation, but you'll see more changed entries\n" " than you want (like changes A to B to A). This will be fixed.\n" "\n"; const char hlp_export[]=" fsvs export REPOS_URL [-r rev]\n" "\n" " If you want to export a directory from your repository without storing\n" " any FSVS-related data you can use this command.\n" "\n" " This restores all meta-data - owner, group, access mask and\n" " modification time; its primary use is for data recovery.\n" "\n" " The data gets written (in the correct directory structure) below the\n" " current working directory; if entries already exist, the export will\n" " stop, so this should be an empty directory.\n" "\n"; const char hlp_help[]=" help [command]\n" "\n" " This command shows general or specific help (for the given command). A\n" " similar function is available by using -h or -? after a command.\n" "\n"; const char hlp_groups[]=" fsvs groups dump|load\n" " fsvs groups [prepend|append|at=n] group-definition [group-def ...]\n" " fsvs ignore [prepend|append|at=n] pattern [pattern ...]\n" " fsvs groups test [-v|-q] [pattern ...]\n" "\n" " This command adds patterns to the end of the pattern list, or, with\n" " prepend, puts them at the beginning of the list. 
With at=x the patterns\n" " are inserted at the position x , counting from 0.\n" "\n" " The difference between groups and ignore is that groups requires a\n" " group name, whereas the latter just assumes the default group ignore.\n" "\n" " For the specification please see the related documentation .\n" "\n" " fsvs dump prints the patterns to STDOUT . If there are special\n" " characters like CR or LF embedded in the pattern without encoding (like\n" " \\r or \\n), the output will be garbled.\n" "\n" " The patterns may include * and ? as wildcards in one directory level,\n" " or ** for arbitrary strings.\n" "\n" " These patterns are only matched against new (not yet known) files;\n" " entries that are already versioned are not invalidated.\n" " If the given path matches a new directory, entries below aren't found,\n" " either; but if this directory or entries below are already versioned,\n" " the pattern doesn't work, as the match is restricted to the directory.\n" "\n" " So:\n" " fsvs ignore ./tmp\n" "\n" " ignores the directory tmp; but if it has already been committed,\n" " existing entries would have to be unmarked with fsvs unversion.\n" " Normally it's better to use\n" " fsvs ignore ./tmp/**\n" "\n" " as that takes the directory itself (which might be needed after restore\n" " as a mount point anyway), but ignore all entries below.\n" " Currently this has the drawback that mtime changes will be reported and\n" " committed; this is not the case if the whole directory is ignored.\n" "\n" " Examples:\n" " fsvs group group:unreadable,mode:4:0\n" " fsvs group 'group:secrets,/etc/*shadow'\n" " fsvs ignore /proc\n" " fsvs ignore /dev/pts\n" " fsvs ignore './var/log/*-*'\n" " fsvs ignore './**~'\n" " fsvs ignore './**/*.bak'\n" " fsvs ignore prepend 'take,./**.txt'\n" " fsvs ignore append 'take,./**.svg'\n" " fsvs ignore at=1 './**.tmp'\n" " fsvs group dump\n" " fsvs group dump -v\n" " echo \"./**.doc\" | fsvs ignore load\n" " # Replaces the whole list\n" "\n" " 
Note\n" " Please take care that your wildcard patterns are not expanded by\n" " the shell!\n" "\n"; const char hlp_rign[]=" fsvs rel-ignore [prepend|append|at=n] path-spec [path-spec ...]\n" " fsvs ri [prepend|append|at=n] path-spec [path-spec ...]\n" "\n" " If you keep the same repository data at more than one working copy on\n" " the same machine, it will be stored in different paths - and that makes\n" " absolute ignore patterns infeasible. But relative ignore patterns are\n" " anchored at the beginning of the WC root - which is a bit tiring to\n" " type if you're deep in your WC hierarchy and want to ignore some files.\n" "\n" " To make that easier you can use the rel-ignore (abbreviated as ri)\n" " command; this converts all given path-specifications (which may include\n" " wildcards as per the shell pattern specification above) to WC-relative\n" " values before storing them.\n" "\n" " Example for /etc as working copy root:\n" " fsvs rel-ignore '/etc/X11/xorg.conf.*'\n" " cd /etc/X11\n" " fsvs rel-ignore 'xorg.conf.*'\n" "\n" " Both commands would store the pattern \"./X11/xorg.conf.*\".\n" "\n" " Note\n" " This works only for shell patterns.\n" "\n" " For more details about ignoring files please see the ignore command and\n" " Specification of groups and patterns.\n" "\n"; const char hlp_info[]=" fsvs info [-R [-R]] [PATH...]\n" "\n" " Use this command to show information regarding one or more entries in\n" " your working copy.\n" " You can use -v to obtain slightly more information.\n" "\n" " This may sometimes be helpful for locating bugs, or to obtain the URL\n" " and revision a working copy is currently at.\n" "\n" " Example:\n" " $ fsvs info\n" " URL: file:\n" " .... 
200 .\n" " Type: directory\n" " Status: 0x0\n" " Flags: 0x100000\n" " Dev: 0\n" " Inode: 24521\n" " Mode: 040755\n" " UID/GID: 1000/1000\n" " MTime: Thu Aug 17 16:34:24 2006\n" " CTime: Thu Aug 17 16:34:24 2006\n" " Revision: 4\n" " Size: 200\n" "\n" " The default is to print information about the given entry only. With a\n" " single -R you'll get this data about all entries of a given directory;\n" " with another -R you'll get the whole (sub-)tree.\n" "\n"; const char hlp_log[]=" fsvs log [-v] [-r rev1[:rev2]] [-u name] [path]\n" "\n" " This command views the revision log information associated with the\n" " given path at its topmost URL, or, if none is given, the highest\n" " priority URL.\n" "\n" " The optional rev1 and rev2 can be used to restrict the revisions that\n" " are shown; if no values are given, the logs are given starting from\n" " HEAD downwards, and then a limit on the number of revisions is applied\n" " (but see the limit option).\n" "\n" " If you use the -v -option, you get the files changed in each revision\n" " printed, too.\n" "\n" " There is an option controlling the output format; see the log_output\n" " option.\n" "\n" " Optionally the name of an URL can be given after -u; then the log of\n" " this URL, instead of the topmost one, is shown.\n" "\n" " TODOs:\n" " * --stop-on-copy\n" " * Show revision for all URLs associated with a working copy? In which\n" " order?\n" "\n"; const char hlp_prop_g[]=" fsvs prop-get PROPERTY-NAME PATH...\n" "\n" " Prints the data of the given property to STDOUT.\n" "\n" " Note\n" " Be careful! This command will dump the property as it is, ie.\n" " with any special characters! 
If there are escape sequences or\n" " binary data in the property, your terminal might get messed up!\n" " If you want a safe way to look at the properties, use prop-list\n" " with the -v parameter.\n" "\n"; const char hlp_prop_s[]=" fsvs prop-set [-u URLNAME] PROPERTY-NAME VALUE PATH...\n" "\n" " This command sets an arbitrary property value for the given path(s).\n" "\n" " Note\n" " Some property prefixes are reserved; currently everything\n" " starting with svn: throws a (fatal) warning, and fsvs: is\n" " already used, too. See Special property names.\n" "\n" " If you're using a multi-URL setup, and the entry you'd like to work on\n" " should be pinned to a specific URL, you can use the -u parameter; this\n" " is like the add command, see there for more details.\n" "\n"; const char hlp_prop_d[]=" fsvs prop-del PROPERTY-NAME PATH...\n" "\n" " This command removes a property for the given path(s).\n" "\n" " See also prop-set.\n" "\n"; const char hlp_prop_l[]=" fsvs prop-list [-v] PATH...\n" "\n" " Lists the names of all properties for the given entry.\n" " With -v, the value is printed as well; special characters will be\n" " translated, as arbitrary binary sequences could interfere with your\n" " terminal settings.\n" "\n" " If you need raw output, post a patch for --raw, or write a loop with\n" " prop-get.\n" "\n"; const char hlp_remote[]=" fsvs remote-status PATH [-r rev]\n" "\n" " This command looks into the repository and tells you which files would\n" " get changed on an update - it's a dry-run for update .\n" "\n" " Per default it compares to HEAD, but you can choose another revision\n" " with the -r parameter.\n" "\n" " Please see the update documentation for details regarding multi-URL\n" " usage.\n" "\n"; const char hlp_resolv[]=" fsvs resolve PATH [PATH...]\n" "\n" " When FSVS tries to update local files which have been changed, a\n" " conflict might occur. 
(For various ways of handling these please see\n" " the conflict option.)\n" "\n" " This command lets you mark such conflicts as resolved.\n" "\n"; const char hlp_revert[]=" fsvs revert [-rRev] [-R] PATH [PATH...]\n" "\n" " This command undoes local modifications:\n" " * An entry that is marked to be unversioned gets this flag removed.\n" " * For an already versioned entry (existing in the repository) the\n" " local entry is replaced with its repository version, and its status\n" " and flags are cleared.\n" " * An entry that is a modified copy destination gets reverted to the\n" " copy source data.\n" " * Manually added entries are changed back to \"N\"ew.\n" "\n" " Please note that implicitly copied entries, ie. entries that are marked\n" " as copied because some parent directory is the base of a copy, can not\n" " be un-copied; they can only be reverted to their original (copied-from)\n" " data, or removed.\n" "\n" " If you want to undo a copy operation, please see the uncopy command.\n" "\n" " See also HOWTO: Understand the entries' statii.\n" "\n" " If a directory is given on the command line all versioned entries in\n" " this directory are reverted to the old state; this behaviour can be\n" " modified with -R/-N, or see below.\n" "\n" " The reverted entries are printed, along with the status they had before\n" " the revert (because the new status is per definition unchanged).\n" "\n" " If a revision is given, the entries' data is taken from this revision;\n" " furthermore, the new status of that entry is shown.\n" "\n" " Note\n" " Please note that mixed revision working copies are not (yet)\n" " possible; the BASE revision is not changed, and a simple revert\n" " without a revision argument gives you that.\n" " By giving a revision parameter you can just choose to get the\n" " text from a different revision.\n" "\n"; const char hlp_status[]=" fsvs status [-C [-C]] [-v] [-f filter] [PATHs...]\n" "\n" " This command shows the entries that have been changed
locally since the\n" " last commit.\n" "\n" " The most important output formats are:\n" " * A status column of four (or, with -v , six) characters. There are\n" " either flags or a \".\" printed, so that it's easily parsed by\n" " scripts – the number of columns is only changed by -q, -v –\n" " verbose/quiet.\n" " * The size of the entry, in bytes, or \"dir\" for a directory, or \"dev\"\n" " for a device.\n" " * The path and name of the entry, formatted by the path option.\n" "\n" " Normally only changed entries are printed; with -v all are printed, but\n" " see the filter option for more details.\n" "\n" " The status column can show the following flags:\n" " * 'D' and 'N' are used for deleted and new entries.\n" " * 'd' and 'n' are used for entries which are to be unversioned or\n" " added on the next commit; the characters were chosen as little\n" " delete (only in the repository, not removed locally) and little new\n" " (although ignored). See add and unversion.\n" " If such an entry does not exist, it is marked with an \"!\" in the\n" " last column – because it has been manually marked, and so the\n" " removal is unexpected.\n" " * A changed type (character device to symlink, file to directory\n" " etc.) is given as 'R' (replaced), ie. as removed and newly added.\n" " * If the entry has been modified, the change is shown as 'C'.\n" " If the modification or status change timestamps (mtime, ctime) are\n" " changed, but the size is still the same, the entry is marked as\n" " possibly changed (a question mark '?' in the last column) - but see\n" " change detection for details.\n" " * An 'x' signifies a conflict.\n" " * The meta-data flag 'm' shows meta-data changes like properties,\n" " modification timestamp and/or the rights (owner, group, mode);\n" " depending on the -v/-q command line parameters, it may be split\n" " into 'P' (properties), 't' (time) and 'p' (permissions).\n" " If 'P' is shown for the non-verbose case, it means only property\n" " changes, ie.
the entry's filesystem meta-data is unchanged.\n" " * A '+' is printed for files with a copy-from history; to see the URL\n" " of the copyfrom source, see the verbose option.\n" "\n" " Here's a table with the characters and their positions:\n" "* Without -v With -v\n" "* .... ......\n" "* NmC? NtpPC?\n" "* DPx! D x!\n" "* R + R +\n" "* d d\n" "* n n\n" "*\n" "\n" " Furthermore please take a look at the stat_color option, and for more\n" " information about displayed data the verbose option.\n" "\n"; const char hlp_sync_r[]=" fsvs sync-repos [-r rev] [working copy base]\n" "\n" " This command loads the file list afresh from the repository.\n" " A following commit will send all differences and make the repository\n" " data identical to the local.\n" "\n" " This is normally not needed; the only use cases are\n" " * debugging and\n" " * recovering from data loss in the $FSVS_WAA area.\n" "\n" " It might be of use if you want to back up two similar machines. Then you\n" " could commit one machine into a subdirectory of your repository, make a\n" " copy of that directory for another machine, and sync this other\n" " directory on the other machine.\n" "\n" " A commit then will transfer only changed files; so if the two machines\n" " share 2GB of binaries ( /usr , /bin , /lib , ...) then these 2GB are\n" " still shared in the repository, although over time they will deviate\n" " (as both committing machines know nothing of the other path with\n" " identical files).\n" "\n" " This kind of backup could be substituted by two or more levels of\n" " repository paths, which get overlaid in a defined priority.
So the base\n" " directory, which all machines derive from, will be committed from one\n" " machine, and it's no longer necessary for all machines to send\n" " identical files into the repository.\n" "\n" " The revision argument should only ever be used for debugging; if you\n" " fetch a filelist for a revision, and then commit against later\n" " revisions, problems are bound to occur.\n" "\n" " Note\n" " There's issue 2286 in subversion which describes sharing\n" " identical files in the repository in unrelated paths. Using\n" " this relaxes the storage needs; but the network transfers would\n" " still be much larger than with the overlaid paths.\n" "\n"; const char hlp_update[]=" fsvs update [-r rev] [working copy base]\n" " fsvs update [-u url@rev ...] [working copy base]\n" "\n" " This command does an update on the current working copy; per default\n" " for all defined URLs, but you can restrict that via -u.\n" "\n" " It first reads all filelist changes from the repositories, overlays\n" " them (so that only the highest-priority entries are used), and then\n" " fetches all necessary changes.\n" "\n"; const char hlp_urls[]=" fsvs urls URL [URLs...]\n" " fsvs urls dump\n" " fsvs urls load\n" "\n" " Initializes a working copy administrative area and connects the current\n" " working directory to REPOS_URL. All commits and updates will be done to\n" " this directory and against the given URL.\n" "\n" " Example:\n" " fsvs urls http://svn/repos/installation/machine-1/trunk\n" "\n" " For a format definition of the URLs please see the chapter Format of\n" " URLs.\n" "\n" " Note\n" " If there are already URLs defined, and you use that command\n" " later again, please note that as of 1.0.18 the older URLs are\n" " not overwritten as before, but that the new URLs are appended to\n" " the given list!
If you want to start afresh, use something like\n" "\n" " true | fsvs urls load\n" "\n"; // vi: filetype=c

/** \defgroup howto A small collection of HOW-TOs \ingroup userdoc Here you see a small collection of HOW-TOs. These aim to give you a small overview about common tasks. The paths and examples are based on a current Debian/Testing, but should be easily transferable to other Linux distributions or other UNIXes. */ /** \defgroup howto_backup HOWTO: Backup \ingroup howto This document is a step-by-step explanation how to do backups using FSVS. \section howto_backup_prep Preparation If you're going to back up your system, you have to decide what you want to have stored in your backup, and what should be left out. Depending on your system usage and environment you first have to decide:
  • Do you only want to back up your data in \c /home?
    • Less storage requirements
    • In case of hardware crash the OS must be set up again
  • Do you want to keep track of your configuration in \c /etc?
    • Very small storage overhead
    • Not much use for backup/restore, but shows what has been changed
  • Or do you want to back up your whole installation, from \c / on?
    • Whole system versioned, restore is only a few commands
    • Much more storage space needed - typically you'd need at least a few GB free space.
The next few moments should be spent thinking about the storage space for the repository - will it be on the system harddisk, a secondary or an external harddisk, or even off-site? \note If you just created a fresh repository, you probably should create the "default" directory structure for subversion - \c trunk, \c branches, \c tags; this layout might be useful for your backups.\n The URL you'd use in fsvs would go to \c trunk. Possibly you'll have to take the available bandwidth into consideration; a single home directory may be backed up on a 56k modem, but a complete system installation would likely need at least some kind of DSL or LAN. \note If this is a production box with sparse, small changes, you could take the initial backup on a local harddisk, transfer the directory with some media to the target machine, and switch the URLs. A fair bit of time should go into investigating which file patterns and paths you do \b not want to back up.
  • Backup files like \c *.bak, \c *~, \c *.tmp, and similar
  • History files: .sh-history and similar in the home-directories
  • Cache directories: your favourite browser might store many MB of cached data in your home-directories
  • Virtual system directories, like \c /proc and \c /sys, \c /dev/shmfs.
\section howto_backup_first_steps Telling FSVS what to do Given \c $WC as the working directory - the base of the data you'd like backed up (\c /, \c /home), and \c $URL as a valid subversion URL to your (already created) repository path. Independent of all these details the first steps look like these: \code cd $WC fsvs urls $URL \endcode Now you have to say what should be ignored - that'll differ depending on your needs/wishes. \code fsvs ignore './§**~' './§**.tmp' './§**.bak' fsvs ignore ./proc/ ./sys/ ./tmp/ fsvs ignore ./var/tmp/ ./var/spool/lpd/ fsvs ignore './var/log/§*.gz' fsvs ignore ./var/run/ /dev/pts/ fsvs ignore './etc/*.dpkg-dist' './etc/*.dpkg-new' fsvs ignore './etc/*.dpkg-old' './etc/*.dpkg-bak' \endcode \note \c /var/run is for transient files; I've heard reports that \ref revert "reverting" files there can cause problems with running programs.\n Similar for \c /dev/pts - if that's a \c devpts filesystem, you'll run into problems on \ref update or \ref revert - as FSVS won't be allowed to create entries in this directory. Now you may find that you'd like to have some files encrypted in your backup - like \c /etc/shadow, or your \c .ssh/id_* files. So you tell fsvs to en/decrypt these files: \code fsvs propset fsvs:commit-pipe 'gpg -er {your backup key}' /etc/shadow /etc/gshadow fsvs propset fsvs:update-pipe 'gpg -d' /etc/shadow /etc/gshadow \endcode \note These are just examples. You'll probably have to exclude some other paths and patterns from your backup, and mark some others as to-be-filtered.
  • When ignore patterns change.
    • New filesystems that should be ignored, or would be ignored but shouldn't
    • You find that your favorite word-processor leaves many *.segv files behind, and similar things
  • If you get an error message from fsvs, check the arguments and retry. In desperate cases (or just because it's quicker than debugging yourself) create a github issue.
\section howto_backup_restore Restoration in a working system Depending on the circumstances you can use different approaches to restore data from your repository.
  • "fsvs export" allows you to just dump some repository data into your filesystem - eg. into a temporary directory to sort things out.
  • Using "fsvs revert" you can get older revisions of a given file, directory or directory tree in place. \n
  • Or you can do a fresh checkout - set an URL in an (empty) directory, and update to the needed revision.
  • If everything else fails (no backup media with fsvs on it), you can use subversion commands (eg. \c export) to restore needed parts, and update the rest with fsvs.
\section howto_backup_recovery Recovery for a non-booting system In case of a real emergency, when your harddisks crashed or your filesystem was eaten and you have to re-partition or re-format, you should get your system working again by
  • booting from a Knoppix or some other Live-CD (with FSVS on it),
  • partition/format as needed,
  • mount your harddisk partitions below eg. \c /mnt,
  • and then recovering by
\code $ cd /mnt $ export FSVS_CONF=/etc/fsvs # if non-standard $ export FSVS_WAA=/var/spool/fsvs # if non-standard $ fsvs checkout -o softroot=/mnt \endcode If somebody asks really nicely I'd possibly even create a \c recovery command that deduces the \c softroot parameter from the current working directory. For more information please take a look at \ref o_softroot. \section howto_backup_feedback Feedback If you've got any questions, ideas, wishes or other feedback, please tell me. Thank you! */ // vi: filetype=doxygen spell spelllang=en_us

/** \defgroup howto_master_local HOWTO: Master/Local repositories \ingroup howto This HOWTO describes how to use a single working copy with multiple repositories. Please read the \ref howto_backup first, to know about basic steps using FSVS. \section howto_masloc_ratio Rationale If you manage a lot of machines with similar or identical software, you might notice that it's a bit of work keeping them all up-to-date. Sure, automating distribution via rsync or similar is easy; but then you get identical machines, or you have to play with lots of exclude patterns to keep the needed differences. Here another way is presented; and even if you don't want to use FSVS for distributing your files, the ideas presented here might help you keep your machines under control. \section howto_masloc_prep Preparation, repository layout In this document the basic assumption is that there is a group of (more or less identical) machines, that share most of their filesystems. Some planning should be done beforehand; while the ideas presented here might suffice for simple versioning, your setup can require a bit of thinking ahead.
This example uses some distinct repositories, to achieve a bit more clarity; of course these can simply be different paths in a single repository (see \ref howto_masloc_single_repos for an example configuration). Repository in URL \c base: \code trunk/ bin/ ls true lib/ libc6.so modules/ sbin/ mkfs usr/ local/ bin/ sbin/ tags/ branches/ \endcode Repository in URL \c machine1 (similar for machine2): \code trunk/ etc/ HOSTNAME adjtime network/ interfaces passwd resolv.conf shadow var/ log/ auth.log messages tags/ branches/ \endcode \subsection howto_masloc_prep_user User data versioning If you want to keep the user data versioned, too, an idea might be to start a new working copy in \b every home directory; this way - the system- and (several) user-commits can be run in parallel, - the intermediate \c home directory in the repository is not needed, and - you get a bit more isolation (against FSVS failures, out-of-space errors and similar). - Furthermore FSVS can work with smaller file sets, which helps performance a bit (fewer dentries to cache at once, less memory used, etc.). \code A/ Andrew/ .bashrc .ssh/ .kde/ Alexander/ .bashrc .ssh/ .kde/ B/ Bertram/ \endcode A cronjob could simply loop over the directories in \c /home, and call fsvs for each one; giving a target URL name is not necessary if every home-directory is its own working copy. \note URL names can include a forward slash \c / in their name, so you might give the URLs names like \c home/Andrew - although that should not be needed, if every home directory is a distinct working copy. \section howto_masloc_using Using master/local repositories Imagine having 10 similar machines with the same base-installation. Then you install one machine, commit that into the repository as \c base/trunk, and make a copy as \c base/released. The other machines get \c base/released as checkout source, and another (overlaid) from eg. \c machine1/trunk.
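The overlay rule just introduced (several URLs contribute entries, and for a path present in more than one URL a single winner is chosen) can be sketched in C. This is a hypothetical simplification with invented names, not FSVS's actual code; it assumes, matching the P:100 base / P:200 local values used in the example commands of this HOWTO, that the numerically lower priority value wins:

```c
#include <assert.h>
#include <stddef.h>

/* Hypothetical, simplified model of URL overlaying: each candidate
 * entry for the same working-copy path carries the priority value of
 * the URL it came from.  Assumption: a numerically lower value means
 * a higher priority, so base (P:100) beats per-machine (P:200). */
struct overlay_candidate
{
	const char *url_name;   /* eg. "base" or "machine1" */
	int priority;           /* the P:... value of that URL */
};

/* Return the candidate that would be visible in the working copy,
 * or NULL if there is none. */
const struct overlay_candidate *overlay_winner(
		const struct overlay_candidate *cand, size_t count)
{
	const struct overlay_candidate *best = NULL;
	size_t i;

	for (i = 0; i < count; i++)
		if (!best || cand[i].priority < best->priority)
			best = &cand[i];
	return best;
}
```

In FSVS itself this decision is of course made per entry while overlaying the file lists; the sketch only illustrates the priority comparison.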
\n Per-machine changes are always committed into the \c machineX/trunk of the per-machine repository; this would be the host name, IP address, and similar things. On the development machine all changes are stored into \c base/trunk; if you're satisfied with your changes, you merge them (see \ref howto_masloc_branches) into \c base/released, whereupon all other machines can update to this latest version. So by looking at \c machine1/trunk you can see the history of the machine-specific changes; and in \c base/released you can check out every old version to verify problems and bugs. \note You can take this system a bit further: optional software packages could be stored in other subtrees. They should be of lower priority than the base tree, so that in case of conflicts the base should always be preferred (but see \ref howto_masloc_note_1). Here is a small example; \c machine1 is the development machine, \c machine2 is a \e client. \code machine1$ fsvs urls name:local,P:200,svn+ssh://lserver/per-machine/machine1/trunk machine1$ fsvs urls name:base,P:100,http://bserver/base-install1/trunk # Determine differences, and commit them machine1$ fsvs ci -o commit_to=local /etc/HOSTNAME /etc/network/interfaces /var/log machine1$ fsvs ci -o commit_to=base / \endcode Now you've got a base-install in your repository, and can use that on the other machine: \code machine2$ fsvs urls name:local,P:200,svn+ssh://lserver/per-machine/machine2/trunk machine2$ fsvs urls name:base,P:100,http://bserver/base-install1/trunk machine2$ fsvs sync-repos # Now you see differences of this machines' installation against the other: machine2$ fsvs st # You can see what is different: machine2$ fsvs diff /etc/X11/xorg.conf # You can take the base installations files: machine2$ fsvs revert /bin/ls # And put the files specific to this machine into its repository: machine2$ fsvs ci -o commit_to=local /etc/HOSTNAME /etc/network/interfaces /var/log \endcode Now, if this machine has a harddisk failure or 
needs setup for any other reason, you boot it (eg. via PXE, Knoppix or whatever), and do (\ref howto_masloc_note_3) \code # Re-partition and create filesystems (if necessary) machine2-knoppix$ fdisk ... machine2-knoppix$ mkfs ... # Mount everything below /mnt machine2-knoppix$ mount /mnt/[...] machine2-knoppix$ cd /mnt # Do a checkout below /mnt machine2-knoppix$ fsvs co -o softroot=/mnt \endcode \section howto_masloc_branches Branching, tagging, merging Other names for your branches (instead of \c trunk, \c tags and \c branches) could be \c unstable, \c testing, and \c stable; your production machines would use \c stable, your testing environment \c testing, and in \c unstable you'd commit all your daily changes. \note Please note that there's no merging mechanism in FSVS; and as far as I'm concerned, there won't be. Subversion just gets automated merging mechanisms, and these should be fine for this usage too. (\ref howto_masloc_note_4) \subsection howto_masloc_branch_tags Thoughts about tagging Tagging works just like normally; although you need to remember to tag more than a single branch. Maybe FSVS should get some knowledge about the subversion repository layout, so a fsvs tag would tag all repositories at once? It would have to check for duplicate tag-names (eg. on the \c base -branch), and just keep it if it had the same copyfrom-source. But how would tags be used? Define them as source URL, and checkout? Would be a possible case. Or should fsvs tag do a \e merge into the repository, so that a single URL contains all files currently checked out, with copyfrom-pointers to the original locations? Would require using a single repository, as such pointers cannot be across different repositories. If the committed data includes the \c $FSVS_CONF/.../Urls file, the original layout would be known, too - although to use it a \ref sync-repos would be necessary. 
\section howto_masloc_single_repos Using a single repository A single repository would have to be partitioned into the various branches that are needed for bookkeeping; see these examples. Depending on the number of machines it might make sense to put them in a 1- or 2-level deep hierarchy; named by the first character, like \code machines/ A/ Axel/ Andreas/ B/ Berta/ G/ Gandalf/ \endcode \subsection howto_masloc_single_simple Simple layout Here only the base system gets branched and tagged; the machines simply back up their specific/localized data into the repository. \code # For the base-system: trunk/ bin/ usr/ sbin/ tags/ tag-1/ branches/ branch-1/ # For the machines: machines/ machine1/ etc/ passwd HOSTNAME machine2/ etc/ passwd HOSTNAME \endcode \subsection howto_masloc_single_per_area Per-area Here every part gets its \c trunk, \c branches and \c tags: \code base/ trunk/ bin/ sbin/ usr/ tags/ tag-1/ branches/ branch-1/ machine1/ trunk/ etc/ passwd HOSTNAME tags/ tag-1/ branches/ machine2/ trunk/ etc/ passwd HOSTNAME tags/ branches/ \endcode \subsection howto_masloc_single_common_ttb Common trunk, tags, and branches Here the base-paths \c trunk, \c tags and \c branches are shared: \code trunk/ base/ bin/ sbin/ usr/ machine2/ etc/ passwd HOSTNAME machine1/ etc/ passwd HOSTNAME tags/ tag-1/ branches/ branch-1/ \endcode \section howto_masloc_notes Other notes \subsection howto_masloc_note_1 1 Conflicts should not be automatically merged. If two or more trees bring the same file, the file from the \e highest tree wins - this way you always know the file data on your machines. It's better if a single piece of software doesn't work, compared to a machine that no longer boots or is no longer accessible (eg. by SSH). So keep your base installation at highest priority, and you've got good chances that you won't lose control in case of conflicting files.
\subsection howto_masloc_note_2 2 If you don't know which files are different in your installs, - install two machines, - commit the first into fsvs, - do a \ref sync-repos on the second, - and look at the \ref status output. \subsection howto_masloc_note_3 3 As Debian will include FSVS in the near future, it could be included on the next KNOPPIX, too! Until then you'd need a custom boot CD, or copy the absolute minimum of files to the harddisk before recovery. There's a utility \c svntar available; it allows you to take a snapshot of a subversion repository directly into a \c .tar -file, which you can easily export to the destination machine. (Yes, it knows about the meta-data properties FSVS uses, and stores them into the archive.) \subsection howto_masloc_note_4 4 Why no file merging? Because all real differences are in the per-machine files -- the files that are in the \c base repository are changed only on a single machine, and so there's a unidirectional flow. BTW, how would you merge your binaries, eg. \c /bin/ls? \section howto_masloc_feedback Feedback If you've got any questions, ideas, wishes or other feedback, please tell me. Thank you! */ // vi: filetype=doxygen spell spelllang=en_us

/** \defgroup tips Tips and tricks \ingroup userdoc This is a list of tips and tricks that you might find useful. \section tip_verbose Seeing the verbose status, but only changed entries Sometimes the status \ref status_meta_changed "meta-data changed" is not enough - the differentiation between \c mtime and the permission attributes is needed. For that the command line option \ref glob_opt_verb "-v" is used; but this \e verbose mode also prints all entries, not only the changed ones.
To solve that the \ref glob_opt_filter "filter option" gets set; with the value \c none (to reset the mask), and then with the wanted mask - to restore the default the string \c "text,meta" could be set. Example: \code $ fsvs status -v -f none,text,meta $ fsvs status -v -f none,text,meta /etc $ fsvs status -v -f none,text,meta some/dir another_dir and_a_file \endcode \section tip_perf Performance points Some effort has been taken to get FSVS as fast as possible. With 1.1.17 the default for checking for changes on files was altered, to do an MD5-check of files with a changed modification time but the same size (to avoid printing a \c "?" \ref status_possibly "as status"); if that affects your use-case badly you can use the \ref o_chcheck "option" to get the old (fast) behavior. Please note that not the whole file has to be read - the first changed Manber block (averaging 128kB) terminates the check. */ // vi: filetype=doxygen spell spelllang=en_us

/** \addtogroup dev \section dev_welcome Dear developers/debuggers, thank you for your interest in fsvs. I highly appreciate any help, tips and tricks, and even if it's just a bug report I want to know that. I'm also looking forward to documentation updates, and notifying me about mistakes will be politely answered, too. */ /** \defgroup dev_debug What to do in case of errors \ingroup dev First, please read the documentation to rule out the possibility that it's just a badly written sentence that caused misunderstanding. If you can't figure it out yourself, don't hesitate to write a bug report. Please include the version you're running (output of fsvs -V), the command line you're calling fsvs with, and the output it gives.
Furthermore it might help diagnosing if you tried with the \ref glob_opt_verb "-v" parameter, and/or with \ref glob_opt_deb "-d"; but please mind that there might be data in the dump that you don't want to make public! Send these things along with a description of what you wanted to do to me or, if you like that alternative better, just file an issue. \n (The bugs I find and the things on my \c TODO are not in the issue tracker, as I can't access it while on the train - and that's where I spend the most time working on fsvs). Please be aware that I possibly need more details or some other tries to find out what goes wrong. \section dev_devs People that like to help If you know C and want to help with fsvs, Just Do It (R) :-) Look into the \c TODO file, pick your favorite point, and implement it. If you don't know C, but another programming language (like perl, python, or shell-programming), you can help, too -- help write test scripts. \n I mostly checked the positive behavior (ie. that something should happen given a set of predefined state and parameters), but testing for wrong and unexpected input makes sense, too. If you don't know any programming language, you can still look at the documentation and point me to parts which need clarifying, write documents yourself, or just fix mistakes. All contributions should \b please be sent as a unified diff, along with a description of the change, and there's a good chance to have it integrated into the fsvs code-base. \note How to generate such a diff? \n If you're using svn or svk to track fsvs usage, the "svn diff" or "svk diff" commands should do what you want. If you downloaded a \c .tar.gz or \c .tar.bz2, keep a pristine version in some directory and make your changes in another copy. \n When you're finished making changes, run the command \code diff -ur \e original \e new > \e my-changes.patch \endcode and send me that file. 
*/ /** \defgroup dev_design The internal design \ingroup dev \section dev_design_terms Terms used in this document \subsection dev_des_t_entry Entry In subversion speak an entry is either a directory, a symlink or a file; In FSVS it can additionally be a block or character device. \n Sockets and pipes are currently ignored, as they're typically re-created by the various applications. \subsection dev_des_t_waa WAA, CONF Please see \ref waa_file. \section dev_des_memory_layout In-memory layout In memory fsvs builds a complete tree of the needed entries (\c struct \c estat). They are referenced with the \c parent pointers upwards to the root, and the \c estat::by_inode and \c estat::by_name downwards. \subsection dev_des_mem_alloc Storage and allocation Every directory entry can have a string space allocated, ie. space needed for the filenames in this directory (and possibly sub-directories, too.) On loading of the list in \c waa__input_tree() two memory ranges are allocated - one for the struct estats read, and one for the filenames. Because of this \c free()ing of part of the entries is not possible; a freelist for the struct estats is used, but the string space is more or less permanent. \section dev_des_algo Algorithms and assumption in the code Generally I tried to use fast and simple algorithms better than \c O(n); but it's possible that I forgot something. \subsection dev_des_algo_dsearch Searching for an entry Searching for an entry in a directory (in memory) is \c O(log2(n)), as I use \c bsearch(). \subsection dev_des_algo_output Writing the entry list Determining the correct order for writing the entries (in \c waa__output_tree()) is optimized by having all lists pre-sorted; about half the time (tested) a single compare is enough to determine the next written entry. \note Of course, to keep the lists sorted, a lot of comparisons have to be made before waa__output_tree(). 
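The bsearch()-based directory lookup mentioned above can be illustrated with a small stand-alone sketch. The struct below is an invented, much-simplified stand-in for FSVS's struct estat, and the array plays the role of a NULL-terminated by_name array sorted by entry name:

```c
#include <assert.h>
#include <stdlib.h>
#include <string.h>

/* Hypothetical stand-in for struct estat: only the name matters here. */
struct entry
{
	const char *name;
};

/* Comparison function for bsearch() over an array of entry pointers
 * sorted by name; the key is the searched-for name itself. */
static int cmp_name_key(const void *key, const void *member)
{
	const struct entry *const *e = member;
	return strcmp(key, (*e)->name);
}

/* O(log2(n)) lookup of a child entry by name, as described above;
 * returns NULL if there is no entry with that name. */
struct entry *dir_find(struct entry **by_name, size_t count,
		const char *name)
{
	struct entry **found = bsearch(name, by_name, count,
			sizeof(*by_name), cmp_name_key);
	return found ? *found : NULL;
}
```

The indirection (array of pointers, not of structs) mirrors the by_name layout described above, where the same entries are also reachable via by_inode in a different order.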
\subsection dev_des_algo_by estat::by_inode and estat::by_name The \c by_inode and \c by_name members are pointers to arrays of pointers to entries (:-); they must reference the same entries, only the order may differ. \c by_inode must (nearly) always be valid ; \c by_name is optional. The flag \c estat::to_be_sorted tells \c waa__output_tree() that the order of the \c by_inode array might be wrong, and has to be re-sorted before use. While scanning for changes we use a global \c by_inode ordering, as this is \b much faster than naive traversal; the \c by_name array is used for comparing directories, to determine whether there are any new entries. Both arrays \b must include a \c NULL -pointer at the end of the array. \subsection dev_des_algo_manber Manber-Hash and MD5 To quickly find whether a given file has changed, and to send only the changed bytes over the wire, we take a running hash (a Manber-Hash), and whenever we find a "magic" value we take that as buffer end. We calculate the MD5 of each buffer, and store them along with their start offset in the file. So on commit we can find identical blocks and send only those, and while comparing we can return "changed" as soon as we find a difference. \section dev_des_errors Error checking and stopping Return codes are checked everywhere. The return value of functions in this program is normally (int); 0 means success, something else an error. Either this error is expected (like ENOENT for some operations) and handled, or it must be returned to the caller. Most of this is already defined in macros. Typical function layout is like this (taken from waa.c): \code int waa__make_info_link(char *directory, char *name, char *dest) { int status; char *path, *eos; STOPIF( waa___get_waa_directory(directory, &path, &eos), NULL); strcpy(eos, name); ... 
if (access(path, F_OK) != 0) STOPIF_CODE_ERR( symlink(dest, path) == -1, errno, "cannot create informational symlink '%s' -> '%s'", path, dest); ex: return status; } \endcode When a function gets called by subversion libraries, we have to use their return type. Here an example from \c commit.c: \code svn_error_t *ci___set_props(void *baton, struct estat *sts, change_any_prop_t function, apr_pool_t *pool) { const char *ccp; svn_string_t *str; int status; svn_error_t *status_svn; status=0; ... if (sts->entry_type != FT_SYMLINK) { ... str=svn_string_createf (pool, "%u %s", sts->st.uid, hlp__get_uname(sts->st.uid, "") ); STOPIF_SVNERR( function, (baton, propname_owner, str, pool) ); ... } ex: RETURN_SVNERR(status); } \endcode The various \c STOPIF() -macros automatically print an error message and, depending on the debug- and verbosity-flags given on the command line, a back trace too. Another special case is output to \c STDOUT; if we get an error \c EPIPE here, we pass it up to main() as \c -EPIPE (to avoid confusion with writing some other data), where it gets ignored. To avoid printing an error message this is hardcoded in the \c STOPIF() macros. Assertions should be checked by \c BUG_ON(condition, format_string, ...). This will cause a segmentation violation, which (for debug builds) will automatically attach a debugger (\c gdb, only if present on the system). \section dev_des_comments Comments and documentation FSVS is normalized to use doxygen format for the documentation: "/§** ... *§/". For non-trivial things it's practical to document the thoughts, too; such internal documentation uses the normal C-style comments ("/§* ... *§/"). \subsection dev_des_slash_star /§* in documentation In cases where a slash \c / and a star \c * have to be used in the documentation, there's a hack by putting a paragraph symbol (\c \\c2\\xa7 in UTF-8) between them, so that it doesn't break the comment block. There's a perl hack for documentation generation, where these get removed. 
\note For C this would not be strictly necessary; there's always the way of putting an \c #if 0 block around that comment block. Doxygen doesn't allow this; even when using a shell script (with comments indicated by \c #), doxygen doesn't allow /§* or *§/. \section dev_tests About the tests \subsection dev_tests_delay Delays after commit There have been a lot of "sleep 1" commands in the tests, to get directories' mtime to change for new entries. Now they are mostly changed to a simple "-o delay=yes" on the commit just above, which should give us about half a second on average. \note If FSVS has to be run for the check, it must wait until the other instance has finished - else the dir-list file and so on won't be written; so parallel checking via \c & and \c wait doesn't really work. Simply putting delay=yes in the FSVS configuration file more than doubled the run time of the tests - this was unacceptable to me. */ // vi: filetype=doxygen spell spelllang=en_us

/** \defgroup options Further options for FSVS. \ingroup userdoc List of settings that modify FSVS' behaviour. FSVS understands some options that modify its behaviour in various small ways. \section oh_overview Overview \subsection o__hlist This document This document lists all available options in FSVS, in an \ref o__list "full listing" and in \ref o__groups "groups". Furthermore you can see their \ref o__prio "relative priorities" and some \ref o__examples "examples". \subsection o__groups Semantic groups
  • \ref oh_display
  • \ref oh_diff
  • \ref oh_commit
  • \ref oh_performance
  • \ref oh_base
  • \ref oh_debug
\subsection o__list Sorted list of options FSVS currently knows:
  • \c all_removed - \ref o_all_removed
  • \c author - \ref o_author
  • \c change_check - \ref o_chcheck
  • \c colordiff - \ref o_colordiff
  • \c commit_to - \ref o_commit_to
  • \c conf - \ref o_conf.
  • \c conflict - \ref o_conflict
  • \c config_dir - \ref o_configdir.
  • \c copyfrom_exp - \ref o_copyfrom_exp
  • \c debug_buffer - \ref o_debug_buffer
  • \c debug_output - \ref o_debug_output
  • \c delay - \ref o_delay
  • \c diff_prg, \c diff_opt, \c diff_extra - \ref o_diff
  • \c dir_exclude_mtime - \ref o_dir_exclude_mtime
  • \c dir_sort - \ref o_dir_sort
  • \c empty_commit - \ref o_empty_commit
  • \c empty_message - \ref o_empty_msg
  • \c filter - \ref o_filter, but see \ref glob_opt_filter "-f".
  • \c group_stats - \ref o_group_stats.
  • \c limit - \ref o_logmax
  • \c log_output - \ref o_logoutput
  • \c merge_prg, \c merge_opt - \ref o_merge
  • \c mkdir_base - \ref o_mkdir_base
  • \c password - \ref o_passwd
  • \c path - \ref o_opt_path
  • \c softroot - \ref o_softroot
  • \c stat_color - \ref o_status_color
  • \c stop_change - \ref o_stop_change
  • \c verbose - \ref o_verbose
  • \c waa - \ref o_waa "waa".
  • \c warning - \ref o_warnings, but see \ref glob_opt_warnings "-W".
\subsection o__prio Priorities for option setting The priorities are
  • Command line \e (highest)
  • Environment variables. These are named as FSVS_{upper-case option name}.
  • $HOME/.fsvs/wc-dir/config
  • $FSVS_CONF/wc-dir/config
  • $HOME/.fsvs/config
  • $FSVS_CONF/config
  • Default value, compiled in \e (lowest)
\note The \c $HOME-dependent configuration files are not implemented currently. Volunteers? Furthermore there are "intelligent" run-time dependent settings, like turning off colour output when the output is redirected. Their priority is just below the command line - so they can always be overridden if necessary. \subsection o__examples Examples Using the commandline: \code fsvs -o path=environment fsvs -opath=environment \endcode Using environment variables: \code FSVS_PATH=absolute fsvs st \endcode A configuration file, from $FSVS_CONF/config or in a WC-specific path below $FSVS_CONF: \code # FSVS configuration file path=wcroot \endcode \section oh_display Output settings and entry filtering \subsection o_all_removed Trimming the list of deleted entries If you remove a directory, all entries below are implicitly known to be deleted, too. To make the \ref status output shorter there's the \c all_removed option which, if set to \c no, will cause children of removed entries to be omitted. Example for the config file: \code all_removed=no \endcode \subsection o_dir_exclude_mtime Ignore mtime-metadata changes for directories When this option is enabled, directories where only the mtime changed are not reported on \ref status anymore. This is useful in situations where temporary files are created in directories, eg. by text editors. (Example: \c VIM swapfiles when no \c directory option is configured). Example for the config file: \code dir_exclude_mtime=yes \endcode \subsection o_dir_sort Directory sorting If you'd like to have the output of \ref status sorted, you can use the option \c dir_sort=yes. FSVS will do a run through the tree, to read the status of the entries, and then go through it again, but sorted by name. \note If FSVS aborts with an error during \ref status output, you might want to turn this option off again, to see where FSVS stops; the easiest way is on the command line with \c -odir_sort=no. 
\subsection o_filter Filtering entries Please see the command line parameter for \ref glob_opt_filter "-f", which is identical. \code fsvs -o filter=mtime \endcode \subsection o_logmax "fsvs log" revision limit There are some defaults for the number of revisions that are shown on a "fsvs log" command:
  • 2 revisions given (-rX:Y): \c abs(X-Y)+1, ie. all revisions in that range.
  • 1 revision given: exactly that one.
  • no revisions given: from \c HEAD to 1, with a maximum of 100.
As this option can only be used to set an upper limit of revisions, it makes most sense for the no-revision-arguments case. \subsection o_logoutput "fsvs log" output format You can modify aspects of the \ref log "fsvs log" output format by setting the \c log_output option to a combination of these flags:
  • \c color: This uses color in the output, similar to \c cg-log (\c cogito-log); the header and separator lines are highlighted. \note This uses ANSI escape sequences, and tries to restore the default color; if you know how to do that better (and more compatible), please tell the developer mailing list.
  • \c indent: Additionally you can shift the log message itself a space to the right, to make the borders clearer.
Furthermore the value \c normal is available; this turns off all special handling. \note Whenever this option is set, the value is reset; so if you specify \c log_output=color,indent in the global config file, and use \c log_output=color on the commandline, only colors are used. This is different from the \ref o_filter option, which is cumulative.
  • \anchor pd_wcroot \c wcroot \n This is the old, traditional FSVS setting, where all paths are printed relative to the working copy root.
  • \anchor pd_parm \c parameter \n With this setting FSVS works like most other programs - it uses the first best-matching parameter given by the user, and appends the rest of the path.\n This is the new default. \note Internally FSVS still first parses all arguments, and then does a single run through the entries. So if some entry matches more than one parameter, it is printed using the first match.
  • \anchor pd_absolute \c absolute \n All paths are printed in absolute form. This is useful if you want to paste them into other consoles without worrying whether the current directory matches, or for using them in pipelines.
The next two are nearly identical to \c absolute, but the beginning of paths are substituted by environment variables. This makes sense if you want the advantage of full paths, but have some of them abbreviated.
  • \anchor pd_env \c environment \n Match variables to directories after reading the known entries, and use this cached information. This is faster, but might miss the best case if new entries are found (which would not be checked against possible longer hits). \n Furthermore, as this works via associating environment variables to entries, the environment variables must at least match the working copy base - shorter paths won't be substituted.
  • \c full-environment \n Check for matches just before printing the path. \n This is slower, but finds the best fit. \note The string of the environment variables must match a directory name; the filename is always printed literally, and partial string matches are not allowed. Feedback wanted. \note Only environment variables whose names start with \c WC are used for substitution, to avoid using variables like \c $PWD, \c $OLDPWD, \c $HOME and similar which might differ between sessions. Maybe the allowed prefixes for the environment variables should be settable in the configuration. Opinions to the users mailing list, please.
Example, with \c / as working copy base: \code
$ cd /etc

$ fsvs -o path=wcroot st
.mC. 1001 ./etc/X11/xorg.conf

$ fsvs -o path=absolute st
.mC. 1001 /etc/X11/xorg.conf

$ fsvs -o path=parameters st
.mC. 1001 X11/xorg.conf
$ fsvs -o path=parameters st .
.mC. 1001 ./X11/xorg.conf
$ fsvs -o path=parameters st /
.mC. 1001 /etc/X11/xorg.conf
$ fsvs -o path=parameters st X11
.mC. 1001 X11/xorg.conf
$ fsvs -o path=parameters st ../dev/..
.mC. 1001 ../dev/../etc/X11/xorg.conf
$ fsvs -o path=parameters st X11 ../etc
.mC. 1001 X11/xorg.conf
$ fsvs -o path=parameters st ../etc X11
.mC. 1001 ../etc/X11/xorg.conf

$ fsvs -o path=environ st
.mC. 1001 ./etc/X11/xorg.conf
$ WCBAR=/etc fsvs -o path=environ st
.mC. 1001 $WCBAR/X11/xorg.conf
$ WCBAR=/etc fsvs -o path=environ st /
.mC. 1001 $WCBAR/X11/xorg.conf
$ WCBAR=/e fsvs -o path=environ st
.mC. 1001 /etc/X11/xorg.conf
$ WCBAR=/etc WCFOO=/etc/X11 fsvs -o path=environ st
.mC. 1001 $WCFOO/xorg.conf

$ touch /etc/X11/xinit/xinitrc
$ fsvs -o path=parameters st
.mC. 1001 X11/xorg.conf
.m.? 1001 X11/xinit/xinitrc
$ fsvs -o path=parameters st X11 /etc/X11/xinit
.mC. 1001 X11/xorg.conf
.m.? 1001 /etc/X11/xinit/xinitrc
\endcode \note At least for the command line options the strings can be abbreviated, as long as they're still identifiable. Please use the full strings in the configuration file, to avoid having problems in future versions when more options are available. \subsection o_status_color Status output coloring FSVS can colorize the output of the status lines; removed entries will be printed in red, new ones in green, and otherwise changed in blue. Unchanged (for \c -v) will be given in the default color. For this you can set \c stat_color=yes; this is turned \c off per default. As with the other colorizing options this gets turned \c off automatically if the output is not on a tty; on the command line you can override this, though.
\subsection o_stop_change Checking for changes in a script If you want to use FSVS in scripts, you might simply want to know whether anything was changed. In this case use the \c stop_change option, possibly combined with \ref o_filter; this gives you no output on \c STDOUT, but an error code on the first change seen: \code
fsvs -o stop_change=yes st /etc
if fsvs status -o stop_change=yes -o filter=text /etc/init.d
then echo No change found ...
else echo Changes seen.
fi
\endcode \subsection o_verbose Verbosity flags If you want a bit more control over the data you're getting you can use some specific flags for the \c verbose option.
  • \c none,veryquiet - reset the bitmask, don't display anything.
  • \c quiet - only a few output lines.
  • \c changes - the characters showing what has changed for an entry.
  • \c size - the size for files, or the textual description (like \c "dir").
  • \c path - the path of the file, formatted according to \ref o_opt_path "the path option".
  • \c default - The default value, ie. \c changes, \c size and \c path.
  • \c meta - One more than the default so it can be used via a single \c "-v", it marks that the mtime and owner/group changes get reported as two characters. If \c "-v" is used to achieve that, even entries without changes are reported, unless overridden by \ref o_filter.
  • \c url - Displays the entries' top priority URL
  • \c copyfrom - Displays the URL this entry has been copied from (see \ref copy).
  • \c group - The group this entry belongs to
  • \c urls - Displays all known URLs of this entry
  • \c stacktrace - Print the full stacktrace when reporting errors; useful for debugging.
  • \c all - Sets all flags. Mostly useful for debugging.
Please note that if you want to display \b fewer items than per default, you'll have to clear the bitmask first, like this: \code fsvs status -o verbose=none,changes,path \endcode \section oh_diff Diffing and merging on update \subsection o_diff Options relating to the "diff" action The diff is not done internally in FSVS, but some other program is called, to get the highest flexibility. There are several option values:
  • diff_prg: The executable name, default "diff".
  • diff_opt: The default options, default "-pu".
  • diff_extra: Extra options, no default.
The call is done as \code
$diff_prg $diff_opt $file1 --label "$label1" $file2 --label "$label2" $diff_extra
\endcode \note In \c diff_opt you should only use command line flags without parameters; in \c diff_extra you can encode a single flag with a parameter (like "-U5"). If you need more flexibility, write a shell script and pass its name as \c diff_prg. Advanced users might be interested in \ref exp_env "exported environment variables", too; with their help you can eg. start different \c diff programs depending on the filename.
  • \c no, \c off or \c false: Don't use \c colordiff.
  • empty (default value): Try to use \c colordiff as executable, but don't throw an error if it can't be started; just pipe the data as-is to \c STDOUT. (\e Auto mode.)
  • anything else: Pipe the output of the \c diff program (see \ref o_diff) to the given executable.
Please note that if \c STDOUT is not a tty (eg. is redirected into a file), this option must be given on the command line to take effect. \subsection o_conflict How to resolve conflicts on update If you start an update, but one of the entries that was changed in the repository is changed locally too, you get a conflict. There are some ways to resolve a conflict:
  • \c local - Just take the local entry, ignore the repository.
  • \c remote - Overwrite any local change with the remote version.
  • \c both - Keep the local modifications in the file renamed to filename.mine, and save the repository version as filename.rXXX, ie. put the revision number after the filename. The conflict must be solved manually, and the solution made known to FSVS via the \ref resolve command. \note As there's no known \e good version after this renaming, a zero byte file gets created. \n Any \ref resolve "resolve" or \ref revert "revert" command would make that current, and the changes that are kept in filename.mine would be lost! \n You should only \ref revert to the last repository version, ie. the data of filename.rXXX.
  • \c merge - Call the program \c merge with the common ancestor, the local and the remote version. If it is a clean merge, no further work is necessary; else you'll get the (partly) merged file, and the two other versions just like with the \c both variant, and (again) have to tell FSVS that the conflict is solved, by using the \ref resolve command.
\note As in the subversion command line client \c svn the auxiliary files are seen as new, although that might change in the future (so that they automatically get ignored). \subsection o_merge Options regarding the "merge" program Like with \ref o_diff "diff", the \c merge operation is not done internally in FSVS. To have better control over the call, there are these options:
  • merge_prg: The executable name, default "merge".
  • merge_opt: The default options, default "-A".
The option \c "-p" is always used: \code $merge_prg $merge_opt -p $file1 $common $file2 \endcode \section oh_commit Options for commit \subsection o_author Author You can specify an author to be used on commit. This option has a special behaviour; if the first character of the value is an \c '$', the value is replaced by the environment variable named. Empty strings are ignored; that allows an \c /etc/fsvs/config like this: \code author=unknown author=$LOGNAME author=$SUDO_USER \endcode where the last non-empty value is taken; and if your \c .authorized_keys has lines like \code environment="FSVS_AUTHOR=some_user" ssh-rsa ... \endcode that would override the config values. \note Your \c sshd_config needs the \c PermitUserEnvironment setting; you can also take a look at the \c AcceptEnv and \c SendEnv documentation. \subsection o_passwd Password In some scenarios like ssl-client-key-authentication it is more comfortable to use anonymous logins for checkout. In case the commit needs authentication via a password, you can use the \c password option. Please note the possible risks - on the command line it's visible via \c ps, and config files should at least be protected via \c chmod! There's no encryption or obfuscation! \code password="pa55word" \endcode \subsection o_commit_to Destination URL for commit If you defined multiple URLs for your working copy, FSVS needs to know which URL to commit to. For this you would set \c commit_to to the \b name of the URL; see this example: \code fsvs urls N:master,P:10,http://... N:local,P:20,file:///... fsvs ci /etc/passwd -m "New user defined" -ocommit_to=local \endcode \subsection o_empty_commit Doing empty commits In the default settings FSVS will happily create empty commits, ie. revisions without any changed entry. These just have a revision number, an author and a timestamp; this is nice if FSVS is run via CRON, and you want to see when FSVS gets run. 
If you would like to avoid such revisions, set this option to \c no; then such commits will be avoided. Example: \code fsvs commit -o empty_commit=no -m "cron" /etc \endcode \subsection o_empty_msg Avoid commits without a commit message If you don't like the behaviour that FSVS does commits with an empty message received from \c $EDITOR (eg if you found out that you don't want to commit after all), you can change this option to \c no; then FSVS won't allow empty commit messages. Example for the config file: \code empty_message=no \endcode \subsection o_mkdir_base Creating directories in the repository above the URL If you want to keep some data versioned, the first commit is normally the creation of the base directories \b above the given URL (to keep that data separate from the other repository data). Previously this had to be done manually, ie. with a svn mkdir $URL --parents or similar command. \n With the \c mkdir_base option you can tell FSVS to create directories as needed; this is mostly useful on the first commit. \code fsvs urls ... fsvs group 'group:ignore,./**' fsvs ci -m "First post!" -o mkdir_base=yes \endcode \subsection o_delay Waiting for a time change after working copy operations If you're using FSVS in automated systems, you might see that changes that happen in the same second as a commit are not seen with \ref status later; this is because the timestamp granularity of FSVS is 1 second. For backward compatibility the default value is \c no (don't delay). You can set it to any combination of
  • \c commit,
  • \c update,
  • \c revert and/or
  • \c checkout;
for \c yes all of these actions are delayed until the clock seconds change. Example how to set that option via an environment variable: \code export FSVS_DELAY=commit,revert \endcode \section oh_performance Performance and tuning related options \subsection o_chcheck Change detection This options allows to specify the trade-off between speed and accuracy. A file with a changed size can immediately be known as changed; but if only the modification time is changed, this is not so easy. Per default FSVS does a MD5 check on the file in this case; if you don't want that, or if you want to do the checksum calculation for \b every file (in case a file has changed, but its mtime not), you can use this option to change FSVS' behaviour. On the command line there's a shortcut for that: for every \c "-C" another check in this option is chosen. The recognized specifications are
  • \c none - Resets the check bitmask to "no checks".
  • \c file_mtime - Check files for modifications (via MD5) and directories for new entries, if the mtime is different - default.
  • \c dir - Check all directories for new entries, regardless of the timestamp.
  • \c allfiles - Check \b all files with MD5 for changes (\c tripwire -like operation).
  • \c full - All available checks.
You can give multiple options; they're accumulated unless overridden by \c none. \code
fsvs -o change_check=allfiles status
\endcode \note \a commit and \a update additionally set the \c dir option, to avoid missing new files. \subsection o_copyfrom_exp Avoiding expensive compares on \ref cpfd "copyfrom-detect" If you've got big files that are seen as new, doing the MD5 comparison can be time consuming. So there's the option \c copyfrom_exp (for \e "expensive"), which takes the usual \c yes (default) and \c no arguments. \code
fsvs copyfrom-detect -o copyfrom_exp=no some_directory
\endcode \subsection o_group_stats Getting grouping/ignore statistics If you need to ignore many entries of your working copy, you might find that the ignore pattern matching takes some valuable time. \n In order to optimize the order of your patterns you can specify this option to print the number of tests and matches for each pattern. \code
$ fsvs status -o group_stats=yes -q
Grouping statistics (tested, matched, groupname, pattern):
4705 80 ignore group:ignore,./**.bak
4625 40 ignore group:ignore,./**.tmp
\endcode For optimizing you'll want to put often-matching patterns at the front (to make them match sooner, and avoid unnecessary tests); but if you are using other groups than \c ignore (like \c take), you will have to take care to keep the patterns within a group together. Please note that the first line shows how many entries were tested, and that each following line differs by the number of matched entries of the line before, as every entry that doesn't match a pattern gets tested against the next one. This option is available for \ref status and the \ref ignore "ignore test" commands.
\section oh_base Base configuration \subsection o_conf Path definitions for the config and WAA area \anchor o_waa The paths given here are used to store the persistent configuration data needed by FSVS; please see \ref waa_files and \ref o__prio for more details, and the \ref o_softroot parameter as well as the \ref howto_backup_recovery for further discussion. \code FSVS_CONF=/home/user/.fsvs-conf fsvs -o waa=/home/user/.fsvs-waa st \endcode \note Please note that these paths can be given \b only as environment variables (\c $FSVS_CONF resp. \c $FSVS_WAA) or as command line parameter; settings in config files are ignored. \subsection o_configdir Configuration directory for the subversion libraries This path specifies where the subversion libraries should take their configuration data from; the most important aspect of that is authentication data, especially for certificate authentication. The default value is \c $FSVS_CONF/svn/. \c /etc/fsvs/config could have eg. \code config_dir=/root/.subversion \endcode Please note that this directory can hold an \c auth directory, and the \c servers and \c config files. \subsection o_softroot Using an alternate root directory This is a path that is prepended to \c $FSVS_WAA and \c $FSVS_CONF (or their default values, see \ref waa_files), if they do not already start with it, and it is cut off for the directory-name MD5 calculation. When is that needed? Imagine that you've booted from some Live-CD like Knoppix; if you want to setup or restore a non-working system, you'd have to transfer all files needed by the FSVS binary to it, and then start in some kind of \c chroot environment. With this parameter you can tell FSVS that it should load its libraries from the current filesystem, but use the given path as root directory for its administrative data. This is used for recovery; see the example in \ref howto_backup_recovery. So how does this work?
  • The internal data paths derived from \c $FSVS_WAA and \c $FSVS_CONF use the value given for \c softroot as a base directory, if they do not already start with it. \n (If that creates a conflict for you, eg. in that you want to use \c /var as the \c softroot, and your \c $FSVS_WAA should be \c /var/fsvs, you can make the string comparison fail by using /./var for either path.)
  • When a directory name for \c $FSVS_CONF or \c $FSVS_WAA is derived from some file path, the part matching \c softroot is cut off, so that the generated names match the situation after rebooting.
Previously you'd have to \ref export your data back to the filesystem and call \ref urls "fsvs urls" and FSVS \ref sync-repos "sync-repos" again, to get the WAA data back. \note A plain \c chroot() would not work, as some needed programs (eg. the decoder for update, see \ref s_p_n) would not be available. \note The easy way to understand \c softroot is: If you want to do a \c chroot() into the given directory (or boot with it as \c /), you'll want this set. \note As this value is used for finding the correct working copy root (by trying to find a \ref o_conf "conf-path"), it cannot be set from a per-wc config file. Only the environment, global configuration or command line parameter make sense. \section oh_debug Debugging and diagnosing The next two options could be set in the global configuration file, to automatically get the last debug messages when an error happens. To provide an easy way to get on-line debugging again, \c debug_output and \c debug_buffer are both reset to non-redirected, on-line output, if more than a single \c -d is specified on the command line, like this: \code
fsvs commit -m "..." -d -d filenames
\endcode In this case you'll get a message telling you about that. \subsection o_debug_output Destination for debug output You can specify the debug output destination with the option \c debug_output. This can be a simple filename (which gets truncated on open), or, if it starts with a \c |, a command that the output gets piped into. If the destination cannot be opened (or none is given), debug output goes to \c STDOUT (for easier tracing via \c less). Example: \code
fsvs -o debug_output=/tmp/debug.out -d st /etc
\endcode \note That string is taken only once - at the first debug output line. So you have to use the correct order of parameters: -o debug_output=... -d. An example: writing the last 200 lines of debug output into a file. \code
fsvs -o debug_output='| tail -200 > /tmp/debug.log' -d ....
\endcode \subsection o_debug_buffer Using a debug buffer With the \c debug_buffer option you can specify the size of a buffer (in kB) that is used to capture the output, and which gets printed automatically if an error occurs. This must be done \b before debugging starts, like with the \ref o_debug_output "debug_output" specification. \code
fsvs -o debug_buffer=128 ...
\endcode \note If this option is specified in the configuration file or via the environment, only the buffer is allocated; if it is used on the command line, debugging is automatically turned on, too. \subsection o_warnings Setting warning behaviour Please see the command line parameter \ref glob_opt_warnings "-W", which is identical. \code
fsvs -o warning=diff-status=ignore
\endcode */
// Use this for folding:
// g/^\\subsection/normal v/^\\s kkzf
// vi: filetype=doxygen spell spelllang=en_gb formatoptions+=ta :
// vi: nowrapscan foldmethod=manual foldcolumn=3 :

/** \defgroup howto_entry_statii HOWTO: Understand the entries' statii \ingroup howto Transitions between the various statii. Here is a small overview about the various entry-statii and their change conditions. If you find any mismatches between this graphic and FSVS behaviour, don't hesitate to ask on the dev@ mailing list. \dot
digraph {
	// use tooltip?
	// Note: the labelangle is manually optimized for the current
	// ordering - which isn't stable, so it might go wrong.
edge [fontname=Arial, fontsize=7, labeldistance=0]; node [shape=box, fontname=Arial, fontsize=9]; subgraph cluster_2 { color=white; // --------------- Statii { rank=same; node [style=bold]; New; Not_existing [label="Not existing"]; } Ignored; Deleted; { rank=same; Added; CopiedU [label="Copied,\nunchanged"]; } { rank=same; Changed; Committed [color=green, style=bold]; } Unversioned [label="To-be-\nunversioned"]; { rank=same; Conflicted; CopiedC [label="Copied,\nchanged"]; } // --------------- Commands edge [color=brown]; New -> Added [label="add", URL="\ref add" ]; Ignored -> Added [label="add", URL="\ref add"]; Committed -> Unversioned [label="unversion", URL="\ref unversion"]; { edge [ label="update", URL="\ref update"]; Committed -> Committed; Changed -> Conflicted; } Conflicted -> Changed [label="resolve", URL="\ref resolve"]; { edge [ color=green, URL="\ref commit", tooltip="commit"]; Added -> Committed; New -> Committed; CopiedU -> Committed; Changed -> Committed; CopiedC -> Committed; Unversioned -> New [ label="commit;\nremoved from\nrepository;\nlocally kept,\nbut forgotten."]; Deleted:w -> Not_existing [ label="commit;\nremoved from\nrepository\nand local data."]; } New -> CopiedU [label="copy", URL="\ref cp"]; CopiedU -> New [label="uncopy", URL="\ref uncopy"]; { edge [ color=blue, URL="\ref revert", tooltip="revert"]; CopiedC -> CopiedU; Changed -> Committed; Deleted -> Committed; Added -> New; Unversioned -> Committed; Conflicted -> Committed; } // Configuration edge [color=black]; New -> Ignored [label="ignore\npattern\nmatches", URL="\ref ignore"]; // External edge [color=orange, style=dashed]; CopiedU -> CopiedC [label="edit"]; Committed -> Changed [label="edit"]; Committed -> Deleted [label="rm"]; Not_existing -> New [ label="Create"]; } subgraph cluster_1 { margin=0; nodesep=0.2; ranksep=0.2; color=white; node [shape=plaintext, width=0, height=0, label=""]; { rank=same; revert1 -> revert2 [color=blue, label="revert", URL="\ref revert"]; } { 
rank=same; commit1 -> commit2 [label="commit", color=green, URL="\ref commit"]; } { rank=same; other1 -> other2 [color=brown, label="other\ncommands"]; } { rank=same; extern1 -> extern2 [color=orange, label="external\nchanges", style=dashed]; } edge [ style=invis ]; revert1 -> commit1 -> other1 -> extern1; } } \enddot */ // vi: filetype=doxygen spell spelllang=en_gb fsvs-fsvs-1.2.12/src/doxygen-data/000077500000000000000000000000001453631713700167225ustar00rootroot00000000000000fsvs-fsvs-1.2.12/src/doxygen-data/Doxyfile000077500000000000000000003552041453631713700204440ustar00rootroot00000000000000# Doxyfile 1.9.5 # This file describes the settings to be used by the documentation system # doxygen (www.doxygen.org) for a project. # # All text after a double hash (##) is considered a comment and is placed in # front of the TAG it is preceding. # # All text after a single hash (#) is considered a comment and will be ignored. # The format is: # TAG = value [value, ...] # For lists, items can also be appended using: # TAG += value [value, ...] # Values that contain spaces should be placed between quotes (\" \"). # # Note: # # Use doxygen to compare the used configuration file with the template # configuration file: # doxygen -x [configFile] # Use doxygen to compare the used configuration file with the template # configuration file without replacing the environment variables or CMake type # replacement variables: # doxygen -x_noenv [configFile] #--------------------------------------------------------------------------- # Project related configuration options #--------------------------------------------------------------------------- # This tag specifies the encoding used for all characters in the configuration # file that follow. The default is UTF-8 which is also the encoding used for all # text before the first occurrence of this tag. Doxygen uses libiconv (or the # iconv built into libc) for the transcoding. 
See # https://www.gnu.org/software/libiconv/ for the list of possible encodings. # The default value is: UTF-8. DOXYFILE_ENCODING = UTF-8 # The PROJECT_NAME tag is a single word (or a sequence of words surrounded by # double-quotes, unless you are using Doxywizard) that should identify the # project for which the documentation is generated. This name is used in the # title of most generated pages and in a few other places. # The default value is: My Project. PROJECT_NAME = fsvs # The PROJECT_NUMBER tag can be used to enter a project or revision number. This # could be handy for archiving the generated documentation or if some version # control system is used. PROJECT_NUMBER = # Using the PROJECT_BRIEF tag one can provide an optional one line description # for a project that appears at the top of each page and should give viewer a # quick idea about the purpose of the project. Keep the description short. PROJECT_BRIEF = # With the PROJECT_LOGO tag one can specify a logo or an icon that is included # in the documentation. The maximum height of the logo should not exceed 55 # pixels and the maximum width should not exceed 200 pixels. Doxygen will copy # the logo to the output directory. PROJECT_LOGO = # The OUTPUT_DIRECTORY tag is used to specify the (relative or absolute) path # into which the generated documentation will be written. If a relative path is # entered, it will be relative to the location where doxygen was started. If # left blank the current directory will be used. OUTPUT_DIRECTORY = ../doxygen/ # If the CREATE_SUBDIRS tag is set to YES then doxygen will create up to 4096 # sub-directories (in 2 levels) under the output directory of each output format # and will distribute the generated files over these directories. Enabling this # option can be useful when feeding doxygen a huge amount of source files, where # putting all generated files in the same directory would otherwise causes # performance problems for the file system. 
Adapt CREATE_SUBDIRS_LEVEL to # control the number of sub-directories. # The default value is: NO. CREATE_SUBDIRS = NO # Controls the number of sub-directories that will be created when # CREATE_SUBDIRS tag is set to YES. Level 0 represents 16 directories, and every # level increment doubles the number of directories, resulting in 4096 # directories at level 8 which is the default and also the maximum value. The # sub-directories are organized in 2 levels, the first level always has a fixed # numer of 16 directories. # Minimum value: 0, maximum value: 8, default value: 8. # This tag requires that the tag CREATE_SUBDIRS is set to YES. CREATE_SUBDIRS_LEVEL = 8 # If the ALLOW_UNICODE_NAMES tag is set to YES, doxygen will allow non-ASCII # characters to appear in the names of generated files. If set to NO, non-ASCII # characters will be escaped, for example _xE3_x81_x84 will be used for Unicode # U+3044. # The default value is: NO. ALLOW_UNICODE_NAMES = NO # The OUTPUT_LANGUAGE tag is used to specify the language in which all # documentation generated by doxygen is written. Doxygen will use this # information to generate all constant output in the proper language. # Possible values are: Afrikaans, Arabic, Armenian, Brazilian, Bulgarian, # Catalan, Chinese, Chinese-Traditional, Croatian, Czech, Danish, Dutch, English # (United States), Esperanto, Farsi (Persian), Finnish, French, German, Greek, # Hindi, Hungarian, Indonesian, Italian, Japanese, Japanese-en (Japanese with # English messages), Korean, Korean-en (Korean with English messages), Latvian, # Lithuanian, Macedonian, Norwegian, Persian (Farsi), Polish, Portuguese, # Romanian, Russian, Serbian, Serbian-Cyrillic, Slovak, Slovene, Spanish, # Swedish, Turkish, Ukrainian and Vietnamese. # The default value is: English. 
OUTPUT_LANGUAGE = English # If the BRIEF_MEMBER_DESC tag is set to YES, doxygen will include brief member # descriptions after the members that are listed in the file and class # documentation (similar to Javadoc). Set to NO to disable this. # The default value is: YES. BRIEF_MEMBER_DESC = YES # If the REPEAT_BRIEF tag is set to YES, doxygen will prepend the brief # description of a member or function before the detailed description # # Note: If both HIDE_UNDOC_MEMBERS and BRIEF_MEMBER_DESC are set to NO, the # brief descriptions will be completely suppressed. # The default value is: YES. REPEAT_BRIEF = YES # This tag implements a quasi-intelligent brief description abbreviator that is # used to form the text in various listings. Each string in this list, if found # as the leading text of the brief description, will be stripped from the text # and the result, after processing the whole list, is used as the annotated # text. Otherwise, the brief description is used as-is. If left blank, the # following values are used ($name is automatically replaced with the name of # the entity):The $name class, The $name widget, The $name file, is, provides, # specifies, contains, represents, a, an and the. ABBREVIATE_BRIEF = "The $name class" \ "The $name widget" \ "The $name file" \ is \ provides \ specifies \ contains \ represents \ a \ an \ the # If the ALWAYS_DETAILED_SEC and REPEAT_BRIEF tags are both set to YES then # doxygen will generate a detailed section even if there is only a brief # description. # The default value is: NO. ALWAYS_DETAILED_SEC = NO # If the INLINE_INHERITED_MEMB tag is set to YES, doxygen will show all # inherited members of a class in the documentation of that class as if those # members were ordinary class members. Constructors, destructors and assignment # operators of the base classes will not be shown. # The default value is: NO. 
INLINE_INHERITED_MEMB = NO # If the FULL_PATH_NAMES tag is set to YES, doxygen will prepend the full path # before files name in the file list and in the header files. If set to NO the # shortest path that makes the file name unique will be used # The default value is: YES. FULL_PATH_NAMES = NO # The STRIP_FROM_PATH tag can be used to strip a user-defined part of the path. # Stripping is only done if one of the specified strings matches the left-hand # part of the path. The tag can be used to show relative paths in the file list. # If left blank the directory from which doxygen is run is used as the path to # strip. # # Note that you can specify absolute paths here, but also relative paths, which # will be relative from the directory where doxygen is started. # This tag requires that the tag FULL_PATH_NAMES is set to YES. STRIP_FROM_PATH = ./ # The STRIP_FROM_INC_PATH tag can be used to strip a user-defined part of the # path mentioned in the documentation of a class, which tells the reader which # header file to include in order to use a class. If left blank only the name of # the header file containing the class definition is used. Otherwise one should # specify the list of include paths that are normally passed to the compiler # using the -I flag. STRIP_FROM_INC_PATH = ./ # If the SHORT_NAMES tag is set to YES, doxygen will generate much shorter (but # less readable) file names. This can be useful is your file systems doesn't # support long names like on DOS, Mac, or CD-ROM. # The default value is: NO. SHORT_NAMES = NO # If the JAVADOC_AUTOBRIEF tag is set to YES then doxygen will interpret the # first line (until the first dot) of a Javadoc-style comment as the brief # description. If set to NO, the Javadoc-style will behave just like regular Qt- # style comments (thus requiring an explicit @brief command for a brief # description.) # The default value is: NO. 
JAVADOC_AUTOBRIEF = YES # If the JAVADOC_BANNER tag is set to YES then doxygen will interpret a line # such as # /*************** # as being the beginning of a Javadoc-style comment "banner". If set to NO, the # Javadoc-style will behave just like regular comments and it will not be # interpreted by doxygen. # The default value is: NO. JAVADOC_BANNER = NO # If the QT_AUTOBRIEF tag is set to YES then doxygen will interpret the first # line (until the first dot) of a Qt-style comment as the brief description. If # set to NO, the Qt-style will behave just like regular Qt-style comments (thus # requiring an explicit \brief command for a brief description.) # The default value is: NO. QT_AUTOBRIEF = NO # The MULTILINE_CPP_IS_BRIEF tag can be set to YES to make doxygen treat a # multi-line C++ special comment block (i.e. a block of //! or /// comments) as # a brief description. This used to be the default behavior. The new default is # to treat a multi-line C++ comment block as a detailed description. Set this # tag to YES if you prefer the old behavior instead. # # Note that setting this tag to YES also means that rational rose comments are # not recognized any more. # The default value is: NO. MULTILINE_CPP_IS_BRIEF = NO # By default Python docstrings are displayed as preformatted text and doxygen's # special commands cannot be used. By setting PYTHON_DOCSTRING to NO the # doxygen's special commands can be used and the contents of the docstring # documentation blocks is shown as doxygen documentation. # The default value is: YES. PYTHON_DOCSTRING = YES # If the INHERIT_DOCS tag is set to YES then an undocumented member inherits the # documentation from any documented member that it re-implements. # The default value is: YES. INHERIT_DOCS = YES # If the SEPARATE_MEMBER_PAGES tag is set to YES then doxygen will produce a new # page for each member. If set to NO, the documentation of a member will be part # of the file/class/namespace that contains it. 
# The default value is: NO. SEPARATE_MEMBER_PAGES = NO # The TAB_SIZE tag can be used to set the number of spaces in a tab. Doxygen # uses this value to replace tabs by spaces in code fragments. # Minimum value: 1, maximum value: 16, default value: 4. TAB_SIZE = 4 # This tag can be used to specify a number of aliases that act as commands in # the documentation. An alias has the form: # name=value # For example adding # "sideeffect=@par Side Effects:^^" # will allow you to put the command \sideeffect (or @sideeffect) in the # documentation, which will result in a user-defined paragraph with heading # "Side Effects:". Note that you cannot put \n's in the value part of an alias # to insert newlines (in the resulting output). You can put ^^ in the value part # of an alias to insert a newline as if a physical newline was in the original # file. When you need a literal { or } or , in the value part of an alias you # have to escape them by means of a backslash (\), this can lead to conflicts # with the commands \{ and \} for these it is advised to use the version @{ and # @} or use a double escape (\\{ and \\}) ALIASES = # Set the OPTIMIZE_OUTPUT_FOR_C tag to YES if your project consists of C sources # only. Doxygen will then generate output that is more tailored for C. For # instance, some of the names that are used will be different. The list of all # members will be omitted, etc. # The default value is: NO. OPTIMIZE_OUTPUT_FOR_C = YES # Set the OPTIMIZE_OUTPUT_JAVA tag to YES if your project consists of Java or # Python sources only. Doxygen will then generate output that is more tailored # for that language. For instance, namespaces will be presented as packages, # qualified scopes will look different, etc. # The default value is: NO. OPTIMIZE_OUTPUT_JAVA = NO # Set the OPTIMIZE_FOR_FORTRAN tag to YES if your project consists of Fortran # sources. Doxygen will then generate output that is tailored for Fortran. # The default value is: NO. 
OPTIMIZE_FOR_FORTRAN = NO # Set the OPTIMIZE_OUTPUT_VHDL tag to YES if your project consists of VHDL # sources. Doxygen will then generate output that is tailored for VHDL. # The default value is: NO. OPTIMIZE_OUTPUT_VHDL = NO # Set the OPTIMIZE_OUTPUT_SLICE tag to YES if your project consists of Slice # sources only. Doxygen will then generate output that is more tailored for that # language. For instance, namespaces will be presented as modules, types will be # separated into more groups, etc. # The default value is: NO. OPTIMIZE_OUTPUT_SLICE = NO # Doxygen selects the parser to use depending on the extension of the files it # parses. With this tag you can assign which parser to use for a given # extension. Doxygen has a built-in mapping, but you can override or extend it # using this tag. The format is ext=language, where ext is a file extension, and # language is one of the parsers supported by doxygen: IDL, Java, JavaScript, # Csharp (C#), C, C++, Lex, D, PHP, md (Markdown), Objective-C, Python, Slice, # VHDL, Fortran (fixed format Fortran: FortranFixed, free formatted Fortran: # FortranFree, unknown formatted Fortran: Fortran. In the later case the parser # tries to guess whether the code is fixed or free formatted code, this is the # default for Fortran type files). For instance to make doxygen treat .inc files # as Fortran files (default is PHP), and .f files as C (default is Fortran), # use: inc=Fortran f=C. # # Note: For files without extension you can use no_extension as a placeholder. # # Note that for custom extensions you also need to set FILE_PATTERNS otherwise # the files are not read by doxygen. When specifying no_extension you should add # * to the FILE_PATTERNS. # # Note see also the list of default file extension mappings. EXTENSION_MAPPING = # If the MARKDOWN_SUPPORT tag is enabled then doxygen pre-processes all comments # according to the Markdown format, which allows for more readable # documentation. 
See https://daringfireball.net/projects/markdown/ for details. # The output of markdown processing is further processed by doxygen, so you can # mix doxygen, HTML, and XML commands with Markdown formatting. Disable only in # case of backward compatibilities issues. # The default value is: YES. MARKDOWN_SUPPORT = YES # When the TOC_INCLUDE_HEADINGS tag is set to a non-zero value, all headings up # to that level are automatically included in the table of contents, even if # they do not have an id attribute. # Note: This feature currently applies only to Markdown headings. # Minimum value: 0, maximum value: 99, default value: 5. # This tag requires that the tag MARKDOWN_SUPPORT is set to YES. TOC_INCLUDE_HEADINGS = 5 # When enabled doxygen tries to link words that correspond to documented # classes, or namespaces to their corresponding documentation. Such a link can # be prevented in individual cases by putting a % sign in front of the word or # globally by setting AUTOLINK_SUPPORT to NO. # The default value is: YES. AUTOLINK_SUPPORT = YES # If you use STL classes (i.e. std::string, std::vector, etc.) but do not want # to include (a tag file for) the STL sources as input, then you should set this # tag to YES in order to let doxygen match functions declarations and # definitions whose arguments contain STL classes (e.g. func(std::string); # versus func(std::string) {}). This also make the inheritance and collaboration # diagrams that involve STL classes more complete and accurate. # The default value is: NO. BUILTIN_STL_SUPPORT = NO # If you use Microsoft's C++/CLI language, you should set this option to YES to # enable parsing support. # The default value is: NO. CPP_CLI_SUPPORT = NO # Set the SIP_SUPPORT tag to YES if your project consists of sip (see: # https://www.riverbankcomputing.com/software/sip/intro) sources only. 
Doxygen # will parse them like normal C++ but will assume all classes use public instead # of private inheritance when no explicit protection keyword is present. # The default value is: NO. SIP_SUPPORT = NO # For Microsoft's IDL there are propget and propput attributes to indicate # getter and setter methods for a property. Setting this option to YES will make # doxygen to replace the get and set methods by a property in the documentation. # This will only work if the methods are indeed getting or setting a simple # type. If this is not the case, or you want to show the methods anyway, you # should set this option to NO. # The default value is: YES. IDL_PROPERTY_SUPPORT = YES # If member grouping is used in the documentation and the DISTRIBUTE_GROUP_DOC # tag is set to YES then doxygen will reuse the documentation of the first # member in the group (if any) for the other members of the group. By default # all members of a group must be documented explicitly. # The default value is: NO. DISTRIBUTE_GROUP_DOC = NO # If one adds a struct or class to a group and this option is enabled, then also # any nested class or struct is added to the same group. By default this option # is disabled and one has to add nested compounds explicitly via \ingroup. # The default value is: NO. GROUP_NESTED_COMPOUNDS = NO # Set the SUBGROUPING tag to YES to allow class member groups of the same type # (for instance a group of public functions) to be put as a subgroup of that # type (e.g. under the Public Functions section). Set it to NO to prevent # subgrouping. Alternatively, this can be done per class using the # \nosubgrouping command. # The default value is: YES. SUBGROUPING = YES # When the INLINE_GROUPED_CLASSES tag is set to YES, classes, structs and unions # are shown inside the group in which they are included (e.g. using \ingroup) # instead of on a separate page (for HTML and Man pages) or section (for LaTeX # and RTF). 
# # Note that this feature does not work in combination with # SEPARATE_MEMBER_PAGES. # The default value is: NO. INLINE_GROUPED_CLASSES = NO # When the INLINE_SIMPLE_STRUCTS tag is set to YES, structs, classes, and unions # with only public data fields or simple typedef fields will be shown inline in # the documentation of the scope in which they are defined (i.e. file, # namespace, or group documentation), provided this scope is documented. If set # to NO, structs, classes, and unions are shown on a separate page (for HTML and # Man pages) or section (for LaTeX and RTF). # The default value is: NO. INLINE_SIMPLE_STRUCTS = NO # When TYPEDEF_HIDES_STRUCT tag is enabled, a typedef of a struct, union, or # enum is documented as struct, union, or enum with the name of the typedef. So # typedef struct TypeS {} TypeT, will appear in the documentation as a struct # with name TypeT. When disabled the typedef will appear as a member of a file, # namespace, or class. And the struct will be named TypeS. This can typically be # useful for C code in case the coding convention dictates that all compound # types are typedef'ed and only the typedef is referenced, never the tag name. # The default value is: NO. TYPEDEF_HIDES_STRUCT = NO # The size of the symbol lookup cache can be set using LOOKUP_CACHE_SIZE. This # cache is used to resolve symbols given their name and scope. Since this can be # an expensive process and often the same symbol appears multiple times in the # code, doxygen keeps a cache of pre-resolved symbols. If the cache is too small # doxygen will become slower. If the cache is too large, memory is wasted. The # cache size is given by this formula: 2^(16+LOOKUP_CACHE_SIZE). The valid range # is 0..9, the default is 0, corresponding to a cache size of 2^16=65536 # symbols. At the end of a run doxygen will report the cache usage and suggest # the optimal cache size from a speed point of view. # Minimum value: 0, maximum value: 9, default value: 0. 
LOOKUP_CACHE_SIZE = 0 # The NUM_PROC_THREADS specifies the number of threads doxygen is allowed to use # during processing. When set to 0 doxygen will based this on the number of # cores available in the system. You can set it explicitly to a value larger # than 0 to get more control over the balance between CPU load and processing # speed. At this moment only the input processing can be done using multiple # threads. Since this is still an experimental feature the default is set to 1, # which effectively disables parallel processing. Please report any issues you # encounter. Generating dot graphs in parallel is controlled by the # DOT_NUM_THREADS setting. # Minimum value: 0, maximum value: 32, default value: 1. NUM_PROC_THREADS = 1 #--------------------------------------------------------------------------- # Build related configuration options #--------------------------------------------------------------------------- # If the EXTRACT_ALL tag is set to YES, doxygen will assume all entities in # documentation are documented, even if no documentation was available. Private # class members and static file members will be hidden unless the # EXTRACT_PRIVATE respectively EXTRACT_STATIC tags are set to YES. # Note: This will also disable the warnings about undocumented members that are # normally produced when WARNINGS is set to YES. # The default value is: NO. EXTRACT_ALL = YES # If the EXTRACT_PRIVATE tag is set to YES, all private members of a class will # be included in the documentation. # The default value is: NO. EXTRACT_PRIVATE = YES # If the EXTRACT_PRIV_VIRTUAL tag is set to YES, documented private virtual # methods of a class will be included in the documentation. # The default value is: NO. EXTRACT_PRIV_VIRTUAL = NO # If the EXTRACT_PACKAGE tag is set to YES, all members with package or internal # scope will be included in the documentation. # The default value is: NO. 
EXTRACT_PACKAGE = NO # If the EXTRACT_STATIC tag is set to YES, all static members of a file will be # included in the documentation. # The default value is: NO. EXTRACT_STATIC = YES # If the EXTRACT_LOCAL_CLASSES tag is set to YES, classes (and structs) defined # locally in source files will be included in the documentation. If set to NO, # only classes defined in header files are included. Does not have any effect # for Java sources. # The default value is: YES. EXTRACT_LOCAL_CLASSES = YES # This flag is only useful for Objective-C code. If set to YES, local methods, # which are defined in the implementation section but not in the interface are # included in the documentation. If set to NO, only methods in the interface are # included. # The default value is: NO. EXTRACT_LOCAL_METHODS = NO # If this flag is set to YES, the members of anonymous namespaces will be # extracted and appear in the documentation as a namespace called # 'anonymous_namespace{file}', where file will be replaced with the base name of # the file that contains the anonymous namespace. By default anonymous namespace # are hidden. # The default value is: NO. EXTRACT_ANON_NSPACES = NO # If this flag is set to YES, the name of an unnamed parameter in a declaration # will be determined by the corresponding definition. By default unnamed # parameters remain unnamed in the output. # The default value is: YES. RESOLVE_UNNAMED_PARAMS = YES # If the HIDE_UNDOC_MEMBERS tag is set to YES, doxygen will hide all # undocumented members inside documented classes or files. If set to NO these # members will be included in the various overviews, but no documentation # section is generated. This option has no effect if EXTRACT_ALL is enabled. # The default value is: NO. HIDE_UNDOC_MEMBERS = NO # If the HIDE_UNDOC_CLASSES tag is set to YES, doxygen will hide all # undocumented classes that are normally visible in the class hierarchy. If set # to NO, these classes will be included in the various overviews. 
This option # has no effect if EXTRACT_ALL is enabled. # The default value is: NO. HIDE_UNDOC_CLASSES = NO # If the HIDE_FRIEND_COMPOUNDS tag is set to YES, doxygen will hide all friend # declarations. If set to NO, these declarations will be included in the # documentation. # The default value is: NO. HIDE_FRIEND_COMPOUNDS = NO # If the HIDE_IN_BODY_DOCS tag is set to YES, doxygen will hide any # documentation blocks found inside the body of a function. If set to NO, these # blocks will be appended to the function's detailed documentation block. # The default value is: NO. HIDE_IN_BODY_DOCS = NO # The INTERNAL_DOCS tag determines if documentation that is typed after a # \internal command is included. If the tag is set to NO then the documentation # will be excluded. Set it to YES to include the internal documentation. # The default value is: NO. INTERNAL_DOCS = NO # With the correct setting of option CASE_SENSE_NAMES doxygen will better be # able to match the capabilities of the underlying filesystem. In case the # filesystem is case sensitive (i.e. it supports files in the same directory # whose names only differ in casing), the option must be set to YES to properly # deal with such files in case they appear in the input. For filesystems that # are not case sensitive the option should be set to NO to properly deal with # output files written for symbols that only differ in casing, such as for two # classes, one named CLASS and the other named Class, and to also support # references to files without having to specify the exact matching casing. On # Windows (including Cygwin) and MacOS, users should typically set this option # to NO, whereas on Linux or other Unix flavors it should typically be set to # YES. # Possible values are: SYSTEM, NO and YES. # The default value is: SYSTEM. CASE_SENSE_NAMES = YES # If the HIDE_SCOPE_NAMES tag is set to NO then doxygen will show members with # their full class and namespace scopes in the documentation. 
If set to YES, the # scope will be hidden. # The default value is: NO. HIDE_SCOPE_NAMES = NO # If the HIDE_COMPOUND_REFERENCE tag is set to NO (default) then doxygen will # append additional text to a page's title, such as Class Reference. If set to # YES the compound reference will be hidden. # The default value is: NO. HIDE_COMPOUND_REFERENCE= NO # If the SHOW_HEADERFILE tag is set to YES then the documentation for a class # will show which file needs to be included to use the class. # The default value is: YES. SHOW_HEADERFILE = YES # If the SHOW_INCLUDE_FILES tag is set to YES then doxygen will put a list of # the files that are included by a file in the documentation of that file. # The default value is: YES. SHOW_INCLUDE_FILES = YES # If the SHOW_GROUPED_MEMB_INC tag is set to YES then Doxygen will add for each # grouped member an include statement to the documentation, telling the reader # which file to include in order to use the member. # The default value is: NO. SHOW_GROUPED_MEMB_INC = NO # If the FORCE_LOCAL_INCLUDES tag is set to YES then doxygen will list include # files with double quotes in the documentation rather than with sharp brackets. # The default value is: NO. FORCE_LOCAL_INCLUDES = NO # If the INLINE_INFO tag is set to YES then a tag [inline] is inserted in the # documentation for inline members. # The default value is: YES. INLINE_INFO = YES # If the SORT_MEMBER_DOCS tag is set to YES then doxygen will sort the # (detailed) documentation of file and class members alphabetically by member # name. If set to NO, the members will appear in declaration order. # The default value is: YES. SORT_MEMBER_DOCS = YES # If the SORT_BRIEF_DOCS tag is set to YES then doxygen will sort the brief # descriptions of file, namespace and class members alphabetically by member # name. If set to NO, the members will appear in declaration order. Note that # this will also influence the order of the classes in the class list. # The default value is: NO. 
SORT_BRIEF_DOCS = NO # If the SORT_MEMBERS_CTORS_1ST tag is set to YES then doxygen will sort the # (brief and detailed) documentation of class members so that constructors and # destructors are listed first. If set to NO the constructors will appear in the # respective orders defined by SORT_BRIEF_DOCS and SORT_MEMBER_DOCS. # Note: If SORT_BRIEF_DOCS is set to NO this option is ignored for sorting brief # member documentation. # Note: If SORT_MEMBER_DOCS is set to NO this option is ignored for sorting # detailed member documentation. # The default value is: NO. SORT_MEMBERS_CTORS_1ST = NO # If the SORT_GROUP_NAMES tag is set to YES then doxygen will sort the hierarchy # of group names into alphabetical order. If set to NO the group names will # appear in their defined order. # The default value is: NO. SORT_GROUP_NAMES = NO # If the SORT_BY_SCOPE_NAME tag is set to YES, the class list will be sorted by # fully-qualified names, including namespaces. If set to NO, the class list will # be sorted only by class name, not including the namespace part. # Note: This option is not very useful if HIDE_SCOPE_NAMES is set to YES. # Note: This option applies only to the class list, not to the alphabetical # list. # The default value is: NO. SORT_BY_SCOPE_NAME = NO # If the STRICT_PROTO_MATCHING option is enabled and doxygen fails to do proper # type resolution of all parameters of a function it will reject a match between # the prototype and the implementation of a member function even if there is # only one candidate or it is obvious which candidate to choose by doing a # simple string match. By disabling STRICT_PROTO_MATCHING doxygen will still # accept a match between prototype and implementation in such cases. # The default value is: NO. STRICT_PROTO_MATCHING = NO # The GENERATE_TODOLIST tag can be used to enable (YES) or disable (NO) the todo # list. This list is created by putting \todo commands in the documentation. # The default value is: YES. 
GENERATE_TODOLIST      = YES

# The GENERATE_TESTLIST tag can be used to enable (YES) or disable (NO) the
# test list. This list is created by putting \test commands in the
# documentation.
# The default value is: YES.

GENERATE_TESTLIST      = YES

# The GENERATE_BUGLIST tag can be used to enable (YES) or disable (NO) the bug
# list. This list is created by putting \bug commands in the documentation.
# The default value is: YES.

GENERATE_BUGLIST       = YES

# The GENERATE_DEPRECATEDLIST tag can be used to enable (YES) or disable (NO)
# the deprecated list. This list is created by putting \deprecated commands in
# the documentation.
# The default value is: YES.

GENERATE_DEPRECATEDLIST= YES

# The ENABLED_SECTIONS tag can be used to enable conditional documentation
# sections, marked by \if ... \endif and \cond ... \endcond blocks.

ENABLED_SECTIONS       = html

# The MAX_INITIALIZER_LINES tag determines the maximum number of lines that
# the initial value of a variable or macro / define can have for it to appear
# in the documentation. If the initializer consists of more lines than
# specified here it will be hidden. Use a value of 0 to hide initializers
# completely. The appearance of the value of individual variables and macros /
# defines can be controlled using \showinitializer or \hideinitializer command
# in the documentation regardless of this setting.
# Minimum value: 0, maximum value: 10000, default value: 30.

MAX_INITIALIZER_LINES  = 30

# Set the SHOW_USED_FILES tag to NO to disable the list of files generated at
# the bottom of the documentation of classes and structs. If set to YES, the
# list will mention the files that were used to generate the documentation.
# The default value is: YES.

SHOW_USED_FILES        = YES

# Set the SHOW_FILES tag to NO to disable the generation of the Files page.
# This will remove the Files entry from the Quick Index and from the Folder
# Tree View (if specified).
# The default value is: YES.
SHOW_FILES             = YES

# Set the SHOW_NAMESPACES tag to NO to disable the generation of the
# Namespaces page. This will remove the Namespaces entry from the Quick Index
# and from the Folder Tree View (if specified).
# The default value is: YES.

SHOW_NAMESPACES        = YES

# The FILE_VERSION_FILTER tag can be used to specify a program or script that
# doxygen should invoke to get the current version for each file (typically
# from the version control system). Doxygen will invoke the program by
# executing (via popen()) the command <command> <input-file>, where <command>
# is the value of the FILE_VERSION_FILTER tag, and <input-file> is the name of
# an input file provided by doxygen. Whatever the program writes to standard
# output is used as the file version. For an example see the documentation.

FILE_VERSION_FILTER    =

# The LAYOUT_FILE tag can be used to specify a layout file which will be
# parsed by doxygen. The layout file controls the global structure of the
# generated output files in an output format independent way. To create the
# layout file that represents doxygen's defaults, run doxygen with the -l
# option. You can optionally specify a file name after the option, if omitted
# DoxygenLayout.xml will be used as the name of the layout file. See also
# section "Changing the layout of pages" for information.
#
# Note that if you run doxygen from a directory containing a file called
# DoxygenLayout.xml, doxygen will parse it automatically even if the
# LAYOUT_FILE tag is left empty.

LAYOUT_FILE            =

# The CITE_BIB_FILES tag can be used to specify one or more bib files
# containing the reference definitions. This must be a list of .bib files. The
# .bib extension is automatically appended if omitted. This requires the
# bibtex tool to be installed. See also https://en.wikipedia.org/wiki/BibTeX
# for more info. For LaTeX the style of the bibliography can be controlled
# using LATEX_BIB_STYLE. To use this feature you need bibtex and perl
# available in the search path.
# See also \cite for info how to create references.

CITE_BIB_FILES         =

#---------------------------------------------------------------------------
# Configuration options related to warning and progress messages
#---------------------------------------------------------------------------

# The QUIET tag can be used to turn on/off the messages that are generated to
# standard output by doxygen. If QUIET is set to YES this implies that the
# messages are off.
# The default value is: NO.

QUIET                  = YES

# The WARNINGS tag can be used to turn on/off the warning messages that are
# generated to standard error (stderr) by doxygen. If WARNINGS is set to YES
# this implies that the warnings are on.
#
# Tip: Turn warnings on while writing the documentation.
# The default value is: YES.

WARNINGS               = YES

# If the WARN_IF_UNDOCUMENTED tag is set to YES then doxygen will generate
# warnings for undocumented members. If EXTRACT_ALL is set to YES then this
# flag will automatically be disabled.
# The default value is: YES.

WARN_IF_UNDOCUMENTED   = YES

# If the WARN_IF_DOC_ERROR tag is set to YES, doxygen will generate warnings
# for potential errors in the documentation, such as documenting some
# parameters in a documented function twice, or documenting parameters that
# don't exist or using markup commands wrongly.
# The default value is: YES.

WARN_IF_DOC_ERROR      = YES

# If WARN_IF_INCOMPLETE_DOC is set to YES, doxygen will warn about incomplete
# function parameter documentation. If set to NO, doxygen will accept that
# some parameters have no documentation without warning.
# The default value is: YES.

WARN_IF_INCOMPLETE_DOC = NO

# This WARN_NO_PARAMDOC option can be enabled to get warnings for functions
# that are documented, but have no documentation for their parameters or
# return value. If set to NO, doxygen will only warn about wrong parameter
# documentation, but not about the absence of documentation. If EXTRACT_ALL is
# set to YES then this flag will automatically be disabled.
# See also WARN_IF_INCOMPLETE_DOC.
# The default value is: NO.

WARN_NO_PARAMDOC       = NO

# If the WARN_AS_ERROR tag is set to YES then doxygen will immediately stop
# when a warning is encountered. If the WARN_AS_ERROR tag is set to
# FAIL_ON_WARNINGS then doxygen will continue running as if WARN_AS_ERROR tag
# is set to NO, but at the end of the doxygen process doxygen will return with
# a non-zero status.
# Possible values are: NO, YES and FAIL_ON_WARNINGS.
# The default value is: NO.

WARN_AS_ERROR          = NO

# The WARN_FORMAT tag determines the format of the warning messages that
# doxygen can produce. The string should contain the $file, $line, and $text
# tags, which will be replaced by the file and line number from which the
# warning originated and the warning text. Optionally the format may contain
# $version, which will be replaced by the version of the file (if it could be
# obtained via FILE_VERSION_FILTER)
# See also: WARN_LINE_FORMAT
# The default value is: $file:$line: $text.

WARN_FORMAT            = "$file:$line: $text"

# In the $text part of the WARN_FORMAT command it is possible that a reference
# to a more specific place is given. To make it easier to jump to this place
# (outside of doxygen) the user can define a custom "cut" / "paste" string.
# Example:
# WARN_LINE_FORMAT = "'vi $file +$line'"
# See also: WARN_FORMAT
# The default value is: at line $line of file $file.

WARN_LINE_FORMAT       = "at line $line of file $file"

# The WARN_LOGFILE tag can be used to specify a file to which warning and
# error messages should be written. If left blank the output is written to
# standard error (stderr). In case the file specified cannot be opened for
# writing the warning and error messages are written to standard error. When
# the file - is specified the warning and error messages are written to
# standard output (stdout).
WARN_LOGFILE           =

#---------------------------------------------------------------------------
# Configuration options related to the input files
#---------------------------------------------------------------------------

# The INPUT tag is used to specify the files and/or directories that contain
# documented source files. You may enter file names like myfile.cpp or
# directories like /usr/src/myproject. Separate the files or directories with
# spaces. See also FILE_PATTERNS and EXTENSION_MAPPING
# Note: If this tag is empty the current directory is searched.

INPUT                  = . \
                         dox \
                         tools

# This tag can be used to specify the character encoding of the source files
# that doxygen parses. Internally doxygen uses the UTF-8 encoding. Doxygen
# uses libiconv (or the iconv built into libc) for the transcoding. See the
# libiconv documentation (see:
# https://www.gnu.org/software/libiconv/) for the list of possible encodings.
# See also: INPUT_FILE_ENCODING
# The default value is: UTF-8.

INPUT_ENCODING         = UTF-8

# This tag can be used to specify the character encoding of the source files
# that doxygen parses. The INPUT_FILE_ENCODING tag can be used to specify
# character encoding on a per file pattern basis. Doxygen will compare the
# file name with each pattern and apply the encoding (instead of the default
# INPUT_ENCODING) if there is a match. The character encodings are a list of
# the form: pattern=encoding (like *.php=ISO-8859-1). See
# "INPUT_ENCODING" for further information on supported encodings.

INPUT_FILE_ENCODING    =

# If the value of the INPUT tag contains directories, you can use the
# FILE_PATTERNS tag to specify one or more wildcard patterns (like *.cpp and
# *.h) to filter out the source-files in the directories.
#
# Note that for custom extensions or not directly supported extensions you
# also need to set EXTENSION_MAPPING for the extension otherwise the files are
# not read by doxygen.
#
# Note the list of default checked file patterns might differ from the list of
# default file extension mappings.
#
# If left blank the following patterns are tested: *.c, *.cc, *.cxx, *.cpp,
# *.c++, *.java, *.ii, *.ixx, *.ipp, *.i++, *.inl, *.idl, *.ddl, *.odl, *.h,
# *.hh, *.hxx, *.hpp, *.h++, *.l, *.cs, *.d, *.php, *.php4, *.php5, *.phtml,
# *.inc, *.m, *.markdown, *.md, *.mm, *.dox (to be provided as doxygen C
# comment), *.py, *.pyw, *.f90, *.f95, *.f03, *.f08, *.f18, *.f, *.for,
# *.vhd, *.vhdl, *.ucf, *.qsf and *.ice.

FILE_PATTERNS          = *.c \
                         *.h \
                         *.dox

# The RECURSIVE tag can be used to specify whether or not subdirectories
# should be searched for input files as well.
# The default value is: NO.

RECURSIVE              = NO

# The EXCLUDE tag can be used to specify files and/or directories that should
# be excluded from the INPUT source files. This way you can easily exclude a
# subdirectory from a directory tree whose root is specified with the INPUT
# tag.
#
# Note that relative paths are relative to the directory from which doxygen is
# run.

EXCLUDE                =

# The EXCLUDE_SYMLINKS tag can be used to select whether or not files or
# directories that are symbolic links (a Unix file system feature) are
# excluded from the input.
# The default value is: NO.

EXCLUDE_SYMLINKS       = NO

# If the value of the INPUT tag contains directories, you can use the
# EXCLUDE_PATTERNS tag to specify one or more wildcard patterns to exclude
# certain files from those directories.
#
# Note that the wildcards are matched against the file with absolute path, so
# to exclude all test directories for example use the pattern */test/*

EXCLUDE_PATTERNS       =

# The EXCLUDE_SYMBOLS tag can be used to specify one or more symbol names
# (namespaces, classes, functions, etc.) that should be excluded from the
# output. The symbol name can be a fully qualified name, a word, or if the
# wildcard * is used, a substring.
# Examples: ANamespace, AClass,
# ANamespace::AClass, ANamespace::*Test
#
# Note that the wildcards are matched against the file with absolute path, so
# to exclude all test directories use the pattern */test/*

EXCLUDE_SYMBOLS        =

# The EXAMPLE_PATH tag can be used to specify one or more files or directories
# that contain example code fragments that are included (see the \include
# command).

EXAMPLE_PATH           =

# If the value of the EXAMPLE_PATH tag contains directories, you can use the
# EXAMPLE_PATTERNS tag to specify one or more wildcard pattern (like *.cpp and
# *.h) to filter out the source-files in the directories. If left blank all
# files are included.

EXAMPLE_PATTERNS       = *

# If the EXAMPLE_RECURSIVE tag is set to YES then subdirectories will be
# searched for input files to be used with the \include or \dontinclude
# commands irrespective of the value of the RECURSIVE tag.
# The default value is: NO.

EXAMPLE_RECURSIVE      = NO

# The IMAGE_PATH tag can be used to specify one or more files or directories
# that contain images that are to be included in the documentation (see the
# \image command).

IMAGE_PATH             =

# The INPUT_FILTER tag can be used to specify a program that doxygen should
# invoke to filter for each input file. Doxygen will invoke the filter program
# by executing (via popen()) the command:
#
#   <filter> <input-file>
#
# where <filter> is the value of the INPUT_FILTER tag, and <input-file> is the
# name of an input file. Doxygen will then use the output that the filter
# program writes to standard output. If FILTER_PATTERNS is specified, this tag
# will be ignored.
#
# Note that the filter must not add or remove lines; it is applied before the
# code is scanned, but not when the output code is generated. If lines are
# added or removed, the anchors will not be placed correctly.
#
# Note that doxygen will use the data processed and written to standard output
# for further processing, therefore nothing else, like debug statements or
# used commands (so in case of a Windows batch file always use @echo OFF),
# should be written to standard output.
#
# Note that for custom extensions or not directly supported extensions you
# also need to set EXTENSION_MAPPING for the extension otherwise the files are
# not properly processed by doxygen.

INPUT_FILTER           =

# The FILTER_PATTERNS tag can be used to specify filters on a per file pattern
# basis. Doxygen will compare the file name with each pattern and apply the
# filter if there is a match. The filters are a list of the form:
# pattern=filter (like *.cpp=my_cpp_filter). See INPUT_FILTER for further
# information on how filters are used. If the FILTER_PATTERNS tag is empty or
# if none of the patterns match the file name, INPUT_FILTER is applied.
#
# Note that for custom extensions or not directly supported extensions you
# also need to set EXTENSION_MAPPING for the extension otherwise the files are
# not properly processed by doxygen.

FILTER_PATTERNS        =

# If the FILTER_SOURCE_FILES tag is set to YES, the input filter (if set using
# INPUT_FILTER) will also be used to filter the input files that are used for
# producing the source files to browse (i.e. when SOURCE_BROWSER is set to
# YES).
# The default value is: NO.

FILTER_SOURCE_FILES    = NO

# The FILTER_SOURCE_PATTERNS tag can be used to specify source filters per
# file pattern. A pattern will override the setting for FILTER_PATTERN (if
# any) and it is also possible to disable source filtering for a specific
# pattern using *.ext= (so without naming a filter).
# This tag requires that the tag FILTER_SOURCE_FILES is set to YES.

FILTER_SOURCE_PATTERNS =

# If the USE_MDFILE_AS_MAINPAGE tag refers to the name of a markdown file that
# is part of the input, its contents will be placed on the main page
# (index.html).
# This can be useful if you have a project on for instance GitHub
# and want to reuse the introduction page also for the doxygen output.

USE_MDFILE_AS_MAINPAGE =

# The Fortran standard specifies that for fixed formatted Fortran code all
# characters from position 72 are to be considered as comment. A common
# extension is to allow longer lines before the automatic comment starts. The
# setting FORTRAN_COMMENT_AFTER will also make it possible that longer lines
# can be processed before the automatic comment starts.
# Minimum value: 7, maximum value: 10000, default value: 72.

FORTRAN_COMMENT_AFTER  = 72

#---------------------------------------------------------------------------
# Configuration options related to source browsing
#---------------------------------------------------------------------------

# If the SOURCE_BROWSER tag is set to YES then a list of source files will be
# generated. Documented entities will be cross-referenced with these sources.
#
# Note: To get rid of all source code in the generated output, make sure that
# also VERBATIM_HEADERS is set to NO.
# The default value is: NO.

SOURCE_BROWSER         = YES

# Setting the INLINE_SOURCES tag to YES will include the body of functions,
# classes and enums directly into the documentation.
# The default value is: NO.

INLINE_SOURCES         = NO

# Setting the STRIP_CODE_COMMENTS tag to YES will instruct doxygen to hide any
# special comment blocks from generated source code fragments. Normal C, C++
# and Fortran comments will always remain visible.
# The default value is: YES.

STRIP_CODE_COMMENTS    = YES

# If the REFERENCED_BY_RELATION tag is set to YES then for each documented
# entity all documented functions referencing it will be listed.
# The default value is: NO.

REFERENCED_BY_RELATION = YES

# If the REFERENCES_RELATION tag is set to YES then for each documented
# function all documented entities called/used by that function will be
# listed.
# The default value is: NO.
REFERENCES_RELATION    = YES

# If the REFERENCES_LINK_SOURCE tag is set to YES and SOURCE_BROWSER tag is
# set to YES then the hyperlinks from functions in REFERENCES_RELATION and
# REFERENCED_BY_RELATION lists will link to the source code. Otherwise they
# will link to the documentation.
# The default value is: YES.

REFERENCES_LINK_SOURCE = YES

# If SOURCE_TOOLTIPS is enabled (the default) then hovering a hyperlink in the
# source code will show a tooltip with additional information such as
# prototype, brief description and links to the definition and documentation.
# Since this will make the HTML file larger and loading of large files a bit
# slower, you can opt to disable this feature.
# The default value is: YES.
# This tag requires that the tag SOURCE_BROWSER is set to YES.

SOURCE_TOOLTIPS        = YES

# If the USE_HTAGS tag is set to YES then the references to source code will
# point to the HTML generated by the htags(1) tool instead of doxygen built-in
# source browser. The htags tool is part of GNU's global source tagging system
# (see https://www.gnu.org/software/global/global.html). You will need version
# 4.8.6 or higher.
#
# To use it do the following:
# - Install the latest version of global
# - Enable SOURCE_BROWSER and USE_HTAGS in the configuration file
# - Make sure the INPUT points to the root of the source tree
# - Run doxygen as normal
#
# Doxygen will invoke htags (and that will in turn invoke gtags), so these
# tools must be available from the command line (i.e. in the search path).
#
# The result: instead of the source browser generated by doxygen, the links to
# source code will now point to the output of htags.
# The default value is: NO.
# This tag requires that the tag SOURCE_BROWSER is set to YES.

USE_HTAGS              = NO

# If the VERBATIM_HEADERS tag is set to YES then doxygen will generate a
# verbatim copy of the header file for each class for which an include is
# specified. Set to NO to disable this.
# See also: Section \class.
# The default value is: YES.

VERBATIM_HEADERS       = YES

# If the CLANG_ASSISTED_PARSING tag is set to YES then doxygen will use the
# clang parser (see:
# http://clang.llvm.org/) for more accurate parsing at the cost of reduced
# performance. This can be particularly helpful with template rich C++ code
# for which doxygen's built-in parser lacks the necessary type information.
# Note: The availability of this option depends on whether or not doxygen was
# generated with the -Duse_libclang=ON option for CMake.
# The default value is: NO.

CLANG_ASSISTED_PARSING = NO

# If the CLANG_ASSISTED_PARSING tag is set to YES and the CLANG_ADD_INC_PATHS
# tag is set to YES then doxygen will add the directory of each input to the
# include path.
# The default value is: YES.
# This tag requires that the tag CLANG_ASSISTED_PARSING is set to YES.

CLANG_ADD_INC_PATHS    = YES

# If clang assisted parsing is enabled you can provide the compiler with
# command line options that you would normally use when invoking the compiler.
# Note that the include paths will already be set by doxygen for the files and
# directories specified with INPUT and INCLUDE_PATH.
# This tag requires that the tag CLANG_ASSISTED_PARSING is set to YES.

CLANG_OPTIONS          =

# If clang assisted parsing is enabled you can provide the clang parser with
# the path to the directory containing a file called compile_commands.json.
# This file is the compilation database (see:
# http://clang.llvm.org/docs/HowToSetupToolingForLLVM.html) containing the
# options used when the source files were built. This is equivalent to
# specifying the -p option to a clang tool, such as clang-check. These options
# will then be passed to the parser. Any options specified with CLANG_OPTIONS
# will be added as well.
# Note: The availability of this option depends on whether or not doxygen was
# generated with the -Duse_libclang=ON option for CMake.
CLANG_DATABASE_PATH    =

#---------------------------------------------------------------------------
# Configuration options related to the alphabetical class index
#---------------------------------------------------------------------------

# If the ALPHABETICAL_INDEX tag is set to YES, an alphabetical index of all
# compounds will be generated. Enable this if the project contains a lot of
# classes, structs, unions or interfaces.
# The default value is: YES.

ALPHABETICAL_INDEX     = YES

# In case all classes in a project start with a common prefix, all classes
# will be put under the same header in the alphabetical index. The
# IGNORE_PREFIX tag can be used to specify a prefix (or a list of prefixes)
# that should be ignored while generating the index headers.
# This tag requires that the tag ALPHABETICAL_INDEX is set to YES.

IGNORE_PREFIX          =

#---------------------------------------------------------------------------
# Configuration options related to the HTML output
#---------------------------------------------------------------------------

# If the GENERATE_HTML tag is set to YES, doxygen will generate HTML output
# The default value is: YES.

GENERATE_HTML          = YES

# The HTML_OUTPUT tag is used to specify where the HTML docs will be put. If a
# relative path is entered the value of OUTPUT_DIRECTORY will be put in front
# of it.
# The default directory is: html.
# This tag requires that the tag GENERATE_HTML is set to YES.

HTML_OUTPUT            = html

# The HTML_FILE_EXTENSION tag can be used to specify the file extension for
# each generated HTML page (for example: .htm, .php, .asp).
# The default value is: .html.
# This tag requires that the tag GENERATE_HTML is set to YES.

HTML_FILE_EXTENSION    = .html

# The HTML_HEADER tag can be used to specify a user-defined HTML header file
# for each generated HTML page. If the tag is left blank doxygen will generate
# a standard header.
#
# To get valid HTML the header file that includes any scripts and style sheets
# that doxygen needs, which is dependent on the configuration options used
# (e.g. the setting GENERATE_TREEVIEW). It is highly recommended to start with
# a default header using
# doxygen -w html new_header.html new_footer.html new_stylesheet.css
# YourConfigFile
# and then modify the file new_header.html. See also section "Doxygen usage"
# for information on how to generate the default header that doxygen normally
# uses.
# Note: The header is subject to change so you typically have to regenerate
# the default header when upgrading to a newer version of doxygen. For a
# description of the possible markers and block names see the documentation.
# This tag requires that the tag GENERATE_HTML is set to YES.

##HTML_HEADER = doxygen-data/head.html
HTML_HEADER            =

# The HTML_FOOTER tag can be used to specify a user-defined HTML footer for
# each generated HTML page. If the tag is left blank doxygen will generate a
# standard footer. See HTML_HEADER for more information on how to generate a
# default footer and what special commands can be used inside the footer. See
# also section "Doxygen usage" for information on how to generate the default
# footer that doxygen normally uses.
# This tag requires that the tag GENERATE_HTML is set to YES.

##HTML_FOOTER = doxygen-data/foot.html
HTML_FOOTER            =

# The HTML_STYLESHEET tag can be used to specify a user-defined cascading
# style sheet that is used by each HTML page. It can be used to fine-tune the
# look of the HTML output. If left blank doxygen will generate a default style
# sheet. See also section "Doxygen usage" for information on how to generate
# the style sheet that doxygen normally uses.
# Note: It is recommended to use HTML_EXTRA_STYLESHEET instead of this tag, as
# it is more robust and this tag (HTML_STYLESHEET) will in the future become
# obsolete.
# This tag requires that the tag GENERATE_HTML is set to YES.
##HTML_STYLESHEET = doxygen-data/doxygen.css
HTML_STYLESHEET        =

# The HTML_EXTRA_STYLESHEET tag can be used to specify additional user-defined
# cascading style sheets that are included after the standard style sheets
# created by doxygen. Using this option one can overrule certain style
# aspects. This is preferred over using HTML_STYLESHEET since it does not
# replace the standard style sheet and is therefore more robust against future
# updates. Doxygen will copy the style sheet files to the output directory.
# Note: The order of the extra style sheet files is of importance (e.g. the
# last style sheet in the list overrules the setting of the previous ones in
# the list). For an example see the documentation.
# This tag requires that the tag GENERATE_HTML is set to YES.

HTML_EXTRA_STYLESHEET  =

# The HTML_EXTRA_FILES tag can be used to specify one or more extra images or
# other source files which should be copied to the HTML output directory. Note
# that these files will be copied to the base HTML output directory. Use the
# $relpath^ marker in the HTML_HEADER and/or HTML_FOOTER files to load these
# files. In the HTML_STYLESHEET file, use the file name only. Also note that
# the files will be copied as-is; there are no commands or markers available.
# This tag requires that the tag GENERATE_HTML is set to YES.

HTML_EXTRA_FILES       =

# The HTML_COLORSTYLE tag can be used to specify if the generated HTML output
# should be rendered with a dark or light theme. Default setting AUTO_LIGHT
# enables light output unless the user preference is dark output. Other
# options are DARK to always use dark mode, LIGHT to always use light mode,
# AUTO_DARK to default to dark mode unless the user prefers light mode, and
# TOGGLE to let the user toggle between dark and light mode via a button.
# Possible values are: LIGHT Always generate light output., DARK Always
# generate dark output., AUTO_LIGHT Automatically set the mode according to
# the user preference, use light mode if no preference is set (the default).,
# AUTO_DARK Automatically set the mode according to the user preference, use
# dark mode if no preference is set. and TOGGLE Allow the user to switch
# between light and dark mode via a button.
# The default value is: AUTO_LIGHT.
# This tag requires that the tag GENERATE_HTML is set to YES.

HTML_COLORSTYLE        = AUTO_LIGHT

# The HTML_COLORSTYLE_HUE tag controls the color of the HTML output. Doxygen
# will adjust the colors in the style sheet and background images according to
# this color. Hue is specified as an angle on a color-wheel, see
# https://en.wikipedia.org/wiki/Hue for more information. For instance the
# value 0 represents red, 60 is yellow, 120 is green, 180 is cyan, 240 is
# blue, 300 purple, and 360 is red again.
# Minimum value: 0, maximum value: 359, default value: 220.
# This tag requires that the tag GENERATE_HTML is set to YES.

HTML_COLORSTYLE_HUE    = 220

# The HTML_COLORSTYLE_SAT tag controls the purity (or saturation) of the
# colors in the HTML output. For a value of 0 the output will use gray-scales
# only. A value of 255 will produce the most vivid colors.
# Minimum value: 0, maximum value: 255, default value: 100.
# This tag requires that the tag GENERATE_HTML is set to YES.

HTML_COLORSTYLE_SAT    = 100

# The HTML_COLORSTYLE_GAMMA tag controls the gamma correction applied to the
# luminance component of the colors in the HTML output. Values below 100
# gradually make the output lighter, whereas values above 100 make the output
# darker. The value divided by 100 is the actual gamma applied, so 80
# represents a gamma of 0.8, The value 220 represents a gamma of 2.2, and 100
# does not change the gamma.
# Minimum value: 40, maximum value: 240, default value: 80.
# This tag requires that the tag GENERATE_HTML is set to YES.
HTML_COLORSTYLE_GAMMA  = 80

# If the HTML_TIMESTAMP tag is set to YES then the footer of each generated
# HTML page will contain the date and time when the page was generated.
# Setting this to YES can help to show when doxygen was last run and thus if
# the documentation is up to date.
# The default value is: NO.
# This tag requires that the tag GENERATE_HTML is set to YES.

HTML_TIMESTAMP         = NO

# If the HTML_DYNAMIC_MENUS tag is set to YES then the generated HTML
# documentation will contain a main index with vertical navigation menus that
# are dynamically created via JavaScript. If disabled, the navigation index
# will consist of multiple levels of tabs that are statically embedded in
# every HTML page. Disable this option to support browsers that do not have
# JavaScript, like the Qt help browser.
# The default value is: YES.
# This tag requires that the tag GENERATE_HTML is set to YES.

HTML_DYNAMIC_MENUS     = YES

# If the HTML_DYNAMIC_SECTIONS tag is set to YES then the generated HTML
# documentation will contain sections that can be hidden and shown after the
# page has loaded.
# The default value is: NO.
# This tag requires that the tag GENERATE_HTML is set to YES.

HTML_DYNAMIC_SECTIONS  = NO

# With HTML_INDEX_NUM_ENTRIES one can control the preferred number of entries
# shown in the various tree structured indices initially; the user can expand
# and collapse entries dynamically later on. Doxygen will expand the tree to
# such a level that at most the specified number of entries are visible
# (unless a fully collapsed tree already exceeds this amount). So setting the
# number of entries 1 will produce a full collapsed tree by default. 0 is a
# special value representing an infinite number of entries and will result in
# a full expanded tree by default.
# Minimum value: 0, maximum value: 9999, default value: 100.
# This tag requires that the tag GENERATE_HTML is set to YES.
HTML_INDEX_NUM_ENTRIES = 100

# If the GENERATE_DOCSET tag is set to YES, additional index files will be
# generated that can be used as input for Apple's Xcode 3 integrated
# development environment (see:
# https://developer.apple.com/xcode/), introduced with OSX 10.5 (Leopard). To
# create a documentation set, doxygen will generate a Makefile in the HTML
# output directory. Running make will produce the docset in that directory and
# running make install will install the docset in
# ~/Library/Developer/Shared/Documentation/DocSets so that Xcode will find it
# at startup. See https://developer.apple.com/library/archive/featuredarticles/Doxy
# genXcode/_index.html for more information.
# The default value is: NO.
# This tag requires that the tag GENERATE_HTML is set to YES.

GENERATE_DOCSET        = NO

# This tag determines the name of the docset feed. A documentation feed
# provides an umbrella under which multiple documentation sets from a single
# provider (such as a company or product suite) can be grouped.
# The default value is: Doxygen generated docs.
# This tag requires that the tag GENERATE_DOCSET is set to YES.

DOCSET_FEEDNAME        = "Doxygen generated docs"

# This tag determines the URL of the docset feed. A documentation feed
# provides an umbrella under which multiple documentation sets from a single
# provider (such as a company or product suite) can be grouped.
# This tag requires that the tag GENERATE_DOCSET is set to YES.

DOCSET_FEEDURL         =

# This tag specifies a string that should uniquely identify the documentation
# set bundle. This should be a reverse domain-name style string, e.g.
# com.mycompany.MyDocSet. Doxygen will append .docset to the name.
# The default value is: org.doxygen.Project.
# This tag requires that the tag GENERATE_DOCSET is set to YES.

DOCSET_BUNDLE_ID       = org.doxygen.Project

# The DOCSET_PUBLISHER_ID tag specifies a string that should uniquely identify
# the documentation publisher. This should be a reverse domain-name style
# string, e.g.
# com.mycompany.MyDocSet.documentation.
# The default value is: org.doxygen.Publisher.
# This tag requires that the tag GENERATE_DOCSET is set to YES.

DOCSET_PUBLISHER_ID    = org.doxygen.Publisher

# The DOCSET_PUBLISHER_NAME tag identifies the documentation publisher.
# The default value is: Publisher.
# This tag requires that the tag GENERATE_DOCSET is set to YES.

DOCSET_PUBLISHER_NAME  = Publisher

# If the GENERATE_HTMLHELP tag is set to YES then doxygen generates three
# additional HTML index files: index.hhp, index.hhc, and index.hhk. The
# index.hhp is a project file that can be read by Microsoft's HTML Help Workshop
# on Windows. In the beginning of 2021 Microsoft took the original page, with,
# among others, the download links, offline (the HTML help workshop was already
# many years in maintenance mode). You can download the HTML help workshop from
# the web archives at Installation executable (see:
# http://web.archive.org/web/20160201063255/http://download.microsoft.com/download/0/A/9/0A939EF6-E31C-430F-A3DF-DFAE7960D564/htmlhelp.exe).
#
# The HTML Help Workshop contains a compiler that can convert all HTML output
# generated by doxygen into a single compiled HTML file (.chm). Compiled HTML
# files are now used as the Windows 98 help format, and will replace the old
# Windows help format (.hlp) on all Windows platforms in the future. Compressed
# HTML files also contain an index, a table of contents, and you can search for
# words in the documentation. The HTML workshop also contains a viewer for
# compressed HTML files.
# The default value is: NO.
# This tag requires that the tag GENERATE_HTML is set to YES.

GENERATE_HTMLHELP      = NO

# The CHM_FILE tag can be used to specify the file name of the resulting .chm
# file. You can add a path in front of the file if the result should not be
# written to the html output directory.
# This tag requires that the tag GENERATE_HTMLHELP is set to YES.
CHM_FILE               =

# The HHC_LOCATION tag can be used to specify the location (absolute path
# including file name) of the HTML help compiler (hhc.exe). If non-empty,
# doxygen will try to run the HTML help compiler on the generated index.hhp.
# The file has to be specified with full path.
# This tag requires that the tag GENERATE_HTMLHELP is set to YES.

HHC_LOCATION           =

# The GENERATE_CHI flag controls whether a separate .chi index file is generated
# (YES) or whether it should be included in the main .chm file (NO).
# The default value is: NO.
# This tag requires that the tag GENERATE_HTMLHELP is set to YES.

GENERATE_CHI           = NO

# The CHM_INDEX_ENCODING is used to encode HtmlHelp index (hhk), content (hhc)
# and project file content.
# This tag requires that the tag GENERATE_HTMLHELP is set to YES.

CHM_INDEX_ENCODING     =

# The BINARY_TOC flag controls whether a binary table of contents is generated
# (YES) or a normal table of contents (NO) in the .chm file. Furthermore it
# enables the Previous and Next buttons.
# The default value is: NO.
# This tag requires that the tag GENERATE_HTMLHELP is set to YES.

BINARY_TOC             = NO

# The TOC_EXPAND flag can be set to YES to add extra items for group members to
# the table of contents of the HTML help documentation and to the tree view.
# The default value is: NO.
# This tag requires that the tag GENERATE_HTMLHELP is set to YES.

TOC_EXPAND             = NO

# If the GENERATE_QHP tag is set to YES and both QHP_NAMESPACE and
# QHP_VIRTUAL_FOLDER are set, an additional index file will be generated that
# can be used as input for Qt's qhelpgenerator to generate a Qt Compressed Help
# (.qch) of the generated HTML documentation.
# The default value is: NO.
# This tag requires that the tag GENERATE_HTML is set to YES.

GENERATE_QHP           = NO

# If the QHG_LOCATION tag is specified, the QCH_FILE tag can be used to specify
# the file name of the resulting .qch file. The path specified is relative to
# the HTML output folder.
# This tag requires that the tag GENERATE_QHP is set to YES.

QCH_FILE               =

# The QHP_NAMESPACE tag specifies the namespace to use when generating Qt Help
# Project output. For more information please see Qt Help Project / Namespace
# (see: https://doc.qt.io/archives/qt-4.8/qthelpproject.html#namespace).
# The default value is: org.doxygen.Project.
# This tag requires that the tag GENERATE_QHP is set to YES.

QHP_NAMESPACE          =

# The QHP_VIRTUAL_FOLDER tag specifies the namespace to use when generating Qt
# Help Project output. For more information please see Qt Help Project / Virtual
# Folders (see:
# https://doc.qt.io/archives/qt-4.8/qthelpproject.html#virtual-folders).
# The default value is: doc.
# This tag requires that the tag GENERATE_QHP is set to YES.

QHP_VIRTUAL_FOLDER     = doc

# If the QHP_CUST_FILTER_NAME tag is set, it specifies the name of a custom
# filter to add. For more information please see Qt Help Project / Custom
# Filters (see:
# https://doc.qt.io/archives/qt-4.8/qthelpproject.html#custom-filters).
# This tag requires that the tag GENERATE_QHP is set to YES.

QHP_CUST_FILTER_NAME   =

# The QHP_CUST_FILTER_ATTRS tag specifies the list of the attributes of the
# custom filter to add. For more information please see Qt Help Project / Custom
# Filters (see:
# https://doc.qt.io/archives/qt-4.8/qthelpproject.html#custom-filters).
# This tag requires that the tag GENERATE_QHP is set to YES.

QHP_CUST_FILTER_ATTRS  =

# The QHP_SECT_FILTER_ATTRS tag specifies the list of the attributes this
# project's filter section matches. Qt Help Project / Filter Attributes (see:
# https://doc.qt.io/archives/qt-4.8/qthelpproject.html#filter-attributes).
# This tag requires that the tag GENERATE_QHP is set to YES.

QHP_SECT_FILTER_ATTRS  =

# The QHG_LOCATION tag can be used to specify the location (absolute path
# including file name) of Qt's qhelpgenerator. If non-empty, doxygen will try to
# run qhelpgenerator on the generated .qhp file.
# This tag requires that the tag GENERATE_QHP is set to YES.

QHG_LOCATION           =

# If the GENERATE_ECLIPSEHELP tag is set to YES, additional index files will be
# generated; together with the HTML files, they form an Eclipse help plugin. To
# install this plugin and make it available under the help contents menu in
# Eclipse, the contents of the directory containing the HTML and XML files needs
# to be copied into the plugins directory of eclipse. The name of the directory
# within the plugins directory should be the same as the ECLIPSE_DOC_ID value.
# After copying, Eclipse needs to be restarted before the help appears.
# The default value is: NO.
# This tag requires that the tag GENERATE_HTML is set to YES.

GENERATE_ECLIPSEHELP   = NO

# A unique identifier for the Eclipse help plugin. When installing the plugin
# the directory name containing the HTML and XML files should also have this
# name. Each documentation set should have its own identifier.
# The default value is: org.doxygen.Project.
# This tag requires that the tag GENERATE_ECLIPSEHELP is set to YES.

ECLIPSE_DOC_ID         = org.doxygen.Project

# If you want full control over the layout of the generated HTML pages it might
# be necessary to disable the index and replace it with your own. The
# DISABLE_INDEX tag can be used to turn on/off the condensed index (tabs) at top
# of each HTML page. A value of NO enables the index and the value YES disables
# it. Since the tabs in the index contain the same information as the navigation
# tree, you can set this option to YES if you also set GENERATE_TREEVIEW to YES.
# The default value is: NO.
# This tag requires that the tag GENERATE_HTML is set to YES.

DISABLE_INDEX          = NO

# The GENERATE_TREEVIEW tag is used to specify whether a tree-like index
# structure should be generated to display hierarchical information. If the tag
# value is set to YES, a side panel will be generated containing a tree-like
# index structure (just like the one that is generated for HTML Help).
# For this to work a browser that supports JavaScript, DHTML, CSS and frames is
# required (i.e. any modern browser). Windows users are probably better off
# using the HTML help feature. Via custom style sheets (see
# HTML_EXTRA_STYLESHEET) one can further fine-tune the look of the index (see
# "Fine-tuning the output"). As an example, the default style sheet generated by
# doxygen has an example that shows how to put an image at the root of the tree
# instead of the PROJECT_NAME. Since the tree basically has the same information
# as the tab index, you could consider setting DISABLE_INDEX to YES when
# enabling this option.
# The default value is: NO.
# This tag requires that the tag GENERATE_HTML is set to YES.

GENERATE_TREEVIEW      = YES

# When both GENERATE_TREEVIEW and DISABLE_INDEX are set to YES, then the
# FULL_SIDEBAR option determines if the side bar is limited to only the treeview
# area (value NO) or if it should extend to the full height of the window (value
# YES). Setting this to YES gives a layout similar to
# https://docs.readthedocs.io with more room for contents, but less room for the
# project logo, title, and description. If either GENERATE_TREEVIEW or
# DISABLE_INDEX is set to NO, this option has no effect.
# The default value is: NO.
# This tag requires that the tag GENERATE_HTML is set to YES.

FULL_SIDEBAR           = NO

# The ENUM_VALUES_PER_LINE tag can be used to set the number of enum values that
# doxygen will group on one line in the generated HTML documentation.
#
# Note that a value of 0 will completely suppress the enum values from appearing
# in the overview section.
# Minimum value: 0, maximum value: 20, default value: 4.
# This tag requires that the tag GENERATE_HTML is set to YES.

ENUM_VALUES_PER_LINE   = 4

# If the treeview is enabled (see GENERATE_TREEVIEW) then this tag can be used
# to set the initial width (in pixels) of the frame in which the tree is shown.
# Minimum value: 0, maximum value: 1500, default value: 250.
# This tag requires that the tag GENERATE_HTML is set to YES.

TREEVIEW_WIDTH         = 250

# If the EXT_LINKS_IN_WINDOW option is set to YES, doxygen will open links to
# external symbols imported via tag files in a separate window.
# The default value is: NO.
# This tag requires that the tag GENERATE_HTML is set to YES.

EXT_LINKS_IN_WINDOW    = NO

# If the OBFUSCATE_EMAILS tag is set to YES, doxygen will obfuscate email
# addresses.
# The default value is: YES.
# This tag requires that the tag GENERATE_HTML is set to YES.

OBFUSCATE_EMAILS       = YES

# If the HTML_FORMULA_FORMAT option is set to svg, doxygen will use the pdf2svg
# tool (see https://github.com/dawbarton/pdf2svg) or inkscape (see
# https://inkscape.org) to generate formulas as SVG images instead of PNGs for
# the HTML output. These images will generally look nicer at scaled resolutions.
# Possible values are: png (the default) and svg (looks nicer but requires the
# pdf2svg or inkscape tool).
# The default value is: png.
# This tag requires that the tag GENERATE_HTML is set to YES.

HTML_FORMULA_FORMAT    = png

# Use this tag to change the font size of LaTeX formulas included as images in
# the HTML documentation. When you change the font size after a successful
# doxygen run you need to manually remove any form_*.png images from the HTML
# output directory to force them to be regenerated.
# Minimum value: 8, maximum value: 50, default value: 10.
# This tag requires that the tag GENERATE_HTML is set to YES.

FORMULA_FONTSIZE       = 10

# The FORMULA_MACROFILE can contain LaTeX \newcommand and \renewcommand commands
# to create new LaTeX commands to be used in formulas as building blocks. See
# the section "Including formulas" for details.

FORMULA_MACROFILE      =

# Enable the USE_MATHJAX option to render LaTeX formulas using MathJax (see
# https://www.mathjax.org) which uses client side JavaScript for the rendering
# instead of using pre-rendered bitmaps.
# Use this if you do not have LaTeX installed or if you want the formulas to
# look prettier in the HTML output. When enabled you may also need to install
# MathJax separately and configure the path to it using the MATHJAX_RELPATH
# option.
# The default value is: NO.
# This tag requires that the tag GENERATE_HTML is set to YES.

USE_MATHJAX            = NO

# With MATHJAX_VERSION it is possible to specify the MathJax version to be used.
# Note that the different versions of MathJax have different requirements with
# regards to the different settings, so it is possible that also other MathJax
# settings have to be changed when switching between the different MathJax
# versions.
# Possible values are: MathJax_2 and MathJax_3.
# The default value is: MathJax_2.
# This tag requires that the tag USE_MATHJAX is set to YES.

MATHJAX_VERSION        = MathJax_2

# When MathJax is enabled you can set the default output format to be used for
# the MathJax output. For more details about the output format see MathJax
# version 2 (see: http://docs.mathjax.org/en/v2.7-latest/output.html) and
# MathJax version 3 (see:
# http://docs.mathjax.org/en/latest/web/components/output.html).
# Possible values are: HTML-CSS (which is slower, but has the best
# compatibility; this is the name for MathJax version 2, for MathJax version 3
# it will be translated into chtml), NativeMML (i.e. MathML, only supported
# for MathJax 2; for MathJax version 3 chtml will be used instead), chtml (this
# is the name for MathJax version 3, for MathJax version 2 it will be
# translated into HTML-CSS) and SVG.
# The default value is: HTML-CSS.
# This tag requires that the tag USE_MATHJAX is set to YES.

MATHJAX_FORMAT         = HTML-CSS

# When MathJax is enabled you need to specify the location relative to the HTML
# output directory using the MATHJAX_RELPATH option. The destination directory
# should contain the MathJax.js script.
# For instance, if the mathjax directory is located at the same level as the
# HTML output directory, then MATHJAX_RELPATH should be ../mathjax. The default
# value points to the MathJax Content Delivery Network so you can quickly see
# the result without installing MathJax. However, it is strongly recommended to
# install a local copy of MathJax from https://www.mathjax.org before
# deployment. The default value is:
# - in case of MathJax version 2: https://cdn.jsdelivr.net/npm/mathjax@2
# - in case of MathJax version 3: https://cdn.jsdelivr.net/npm/mathjax@3
# This tag requires that the tag USE_MATHJAX is set to YES.

MATHJAX_RELPATH        =

# The MATHJAX_EXTENSIONS tag can be used to specify one or more MathJax
# extension names that should be enabled during MathJax rendering. For example,
# for MathJax version 2 (see
# https://docs.mathjax.org/en/v2.7-latest/tex.html#tex-and-latex-extensions):
# MATHJAX_EXTENSIONS = TeX/AMSmath TeX/AMSsymbols
# For example, for MathJax version 3 (see
# http://docs.mathjax.org/en/latest/input/tex/extensions/index.html):
# MATHJAX_EXTENSIONS = ams
# This tag requires that the tag USE_MATHJAX is set to YES.

MATHJAX_EXTENSIONS     =

# The MATHJAX_CODEFILE tag can be used to specify a file with javascript pieces
# of code that will be used on startup of the MathJax code. See the MathJax site
# (see: http://docs.mathjax.org/en/v2.7-latest/output.html) for more details.
# For an example see the documentation.
# This tag requires that the tag USE_MATHJAX is set to YES.

MATHJAX_CODEFILE       =

# When the SEARCHENGINE tag is enabled doxygen will generate a search box for
# the HTML output. The underlying search engine uses javascript and DHTML and
# should work on any modern browser. Note that when using HTML help
# (GENERATE_HTMLHELP), Qt help (GENERATE_QHP), or docsets (GENERATE_DOCSET)
# there is already a search function so this one should typically be disabled.
# For large projects the javascript based search engine can be slow, then
# enabling SERVER_BASED_SEARCH may provide a better solution. It is possible to
# search using the keyboard; to jump to the search box use <access key> + S
# (what the <access key> is depends on the OS and browser, but it is typically
# <CTRL>, <ALT>/