kgb-arch-posix-by-slawek_1.0b4+ds.orig/gpl.txt:

                    GNU GENERAL PUBLIC LICENSE
                       Version 2, June 1991

 Copyright (C) 1989, 1991 Free Software Foundation, Inc.
 51 Franklin St, Fifth Floor, Boston, MA 02110-1301 USA
 Everyone is permitted to copy and distribute verbatim copies
 of this license document, but changing it is not allowed.

                            Preamble

  The licenses for most software are designed to take away your freedom to share and change it. By contrast, the GNU General Public License is intended to guarantee your freedom to share and change free software--to make sure the software is free for all its users. This General Public License applies to most of the Free Software Foundation's software and to any other program whose authors commit to using it. (Some other Free Software Foundation software is covered by the GNU Library General Public License instead.) You can apply it to your programs, too.

  When we speak of free software, we are referring to freedom, not price. Our General Public Licenses are designed to make sure that you have the freedom to distribute copies of free software (and charge for this service if you wish), that you receive source code or can get it if you want it, that you can change the software or use pieces of it in new free programs; and that you know you can do these things.

  To protect your rights, we need to make restrictions that forbid anyone to deny you these rights or to ask you to surrender the rights. These restrictions translate to certain responsibilities for you if you distribute copies of the software, or if you modify it.

  For example, if you distribute copies of such a program, whether gratis or for a fee, you must give the recipients all the rights that you have. You must make sure that they, too, receive or can get the source code. And you must show them these terms so they know their rights.

  We protect your rights with two steps: (1) copyright the software, and (2) offer you this license which gives you legal permission to copy, distribute and/or modify the software.

  Also, for each author's protection and ours, we want to make certain that everyone understands that there is no warranty for this free software. If the software is modified by someone else and passed on, we want its recipients to know that what they have is not the original, so that any problems introduced by others will not reflect on the original authors' reputations.

  Finally, any free program is threatened constantly by software patents. We wish to avoid the danger that redistributors of a free program will individually obtain patent licenses, in effect making the program proprietary. To prevent this, we have made it clear that any patent must be licensed for everyone's free use or not licensed at all.

  The precise terms and conditions for copying, distribution and modification follow.

                    GNU GENERAL PUBLIC LICENSE
   TERMS AND CONDITIONS FOR COPYING, DISTRIBUTION AND MODIFICATION

  0. This License applies to any program or other work which contains a notice placed by the copyright holder saying it may be distributed under the terms of this General Public License.
The "Program", below, refers to any such program or work, and a "work based on the Program" means either the Program or any derivative work under copyright law: that is to say, a work containing the Program or a portion of it, either verbatim or with modifications and/or translated into another language. (Hereinafter, translation is included without limitation in the term "modification".) Each licensee is addressed as "you". Activities other than copying, distribution and modification are not covered by this License; they are outside its scope. The act of running the Program is not restricted, and the output from the Program is covered only if its contents constitute a work based on the Program (independent of having been made by running the Program). Whether that is true depends on what the Program does. 1. You may copy and distribute verbatim copies of the Program's source code as you receive it, in any medium, provided that you conspicuously and appropriately publish on each copy an appropriate copyright notice and disclaimer of warranty; keep intact all the notices that refer to this License and to the absence of any warranty; and give any other recipients of the Program a copy of this License along with the Program. You may charge a fee for the physical act of transferring a copy, and you may at your option offer warranty protection in exchange for a fee. 2. You may modify your copy or copies of the Program or any portion of it, thus forming a work based on the Program, and copy and distribute such modifications or work under the terms of Section 1 above, provided that you also meet all of these conditions: a) You must cause the modified files to carry prominent notices stating that you changed the files and the date of any change. b) You must cause any work that you distribute or publish, that in whole or in part contains or is derived from the Program or any part thereof, to be licensed as a whole at no charge to all third parties under the terms of this License. c) If the modified program normally reads commands interactively when run, you must cause it, when started running for such interactive use in the most ordinary way, to print or display an announcement including an appropriate copyright notice and a notice that there is no warranty (or else, saying that you provide a warranty) and that users may redistribute the program under these conditions, and telling the user how to view a copy of this License. (Exception: if the Program itself is interactive but does not normally print such an announcement, your work based on the Program is not required to print an announcement.) These requirements apply to the modified work as a whole. If identifiable sections of that work are not derived from the Program, and can be reasonably considered independent and separate works in themselves, then this License, and its terms, do not apply to those sections when you distribute them as separate works. But when you distribute the same sections as part of a whole which is a work based on the Program, the distribution of the whole must be on the terms of this License, whose permissions for other licensees extend to the entire whole, and thus to each and every part regardless of who wrote it. Thus, it is not the intent of this section to claim rights or contest your rights to work written entirely by you; rather, the intent is to exercise the right to control the distribution of derivative or collective works based on the Program. 
In addition, mere aggregation of another work not based on the Program with the Program (or with a work based on the Program) on a volume of a storage or distribution medium does not bring the other work under the scope of this License. 3. You may copy and distribute the Program (or a work based on it, under Section 2) in object code or executable form under the terms of Sections 1 and 2 above provided that you also do one of the following: a) Accompany it with the complete corresponding machine-readable source code, which must be distributed under the terms of Sections 1 and 2 above on a medium customarily used for software interchange; or, b) Accompany it with a written offer, valid for at least three years, to give any third party, for a charge no more than your cost of physically performing source distribution, a complete machine-readable copy of the corresponding source code, to be distributed under the terms of Sections 1 and 2 above on a medium customarily used for software interchange; or, c) Accompany it with the information you received as to the offer to distribute corresponding source code. (This alternative is allowed only for noncommercial distribution and only if you received the program in object code or executable form with such an offer, in accord with Subsection b above.) The source code for a work means the preferred form of the work for making modifications to it. For an executable work, complete source code means all the source code for all modules it contains, plus any associated interface definition files, plus the scripts used to control compilation and installation of the executable. However, as a special exception, the source code distributed need not include anything that is normally distributed (in either source or binary form) with the major components (compiler, kernel, and so on) of the operating system on which the executable runs, unless that component itself accompanies the executable. If distribution of executable or object code is made by offering access to copy from a designated place, then offering equivalent access to copy the source code from the same place counts as distribution of the source code, even though third parties are not compelled to copy the source along with the object code. 4. You may not copy, modify, sublicense, or distribute the Program except as expressly provided under this License. Any attempt otherwise to copy, modify, sublicense or distribute the Program is void, and will automatically terminate your rights under this License. However, parties who have received copies, or rights, from you under this License will not have their licenses terminated so long as such parties remain in full compliance. 5. You are not required to accept this License, since you have not signed it. However, nothing else grants you permission to modify or distribute the Program or its derivative works. These actions are prohibited by law if you do not accept this License. Therefore, by modifying or distributing the Program (or any work based on the Program), you indicate your acceptance of this License to do so, and all its terms and conditions for copying, distributing or modifying the Program or works based on it. 6. Each time you redistribute the Program (or any work based on the Program), the recipient automatically receives a license from the original licensor to copy, distribute or modify the Program subject to these terms and conditions. You may not impose any further restrictions on the recipients' exercise of the rights granted herein. 
You are not responsible for enforcing compliance by third parties to this License. 7. If, as a consequence of a court judgment or allegation of patent infringement or for any other reason (not limited to patent issues), conditions are imposed on you (whether by court order, agreement or otherwise) that contradict the conditions of this License, they do not excuse you from the conditions of this License. If you cannot distribute so as to satisfy simultaneously your obligations under this License and any other pertinent obligations, then as a consequence you may not distribute the Program at all. For example, if a patent license would not permit royalty-free redistribution of the Program by all those who receive copies directly or indirectly through you, then the only way you could satisfy both it and this License would be to refrain entirely from distribution of the Program. If any portion of this section is held invalid or unenforceable under any particular circumstance, the balance of the section is intended to apply and the section as a whole is intended to apply in other circumstances. It is not the purpose of this section to induce you to infringe any patents or other property right claims or to contest validity of any such claims; this section has the sole purpose of protecting the integrity of the free software distribution system, which is implemented by public license practices. Many people have made generous contributions to the wide range of software distributed through that system in reliance on consistent application of that system; it is up to the author/donor to decide if he or she is willing to distribute software through any other system and a licensee cannot impose that choice. This section is intended to make thoroughly clear what is believed to be a consequence of the rest of this License. 8. If the distribution and/or use of the Program is restricted in certain countries either by patents or by copyrighted interfaces, the original copyright holder who places the Program under this License may add an explicit geographical distribution limitation excluding those countries, so that distribution is permitted only in or among countries not thus excluded. In such case, this License incorporates the limitation as if written in the body of this License. 9. The Free Software Foundation may publish revised and/or new versions of the General Public License from time to time. Such new versions will be similar in spirit to the present version, but may differ in detail to address new problems or concerns. Each version is given a distinguishing version number. If the Program specifies a version number of this License which applies to it and "any later version", you have the option of following the terms and conditions either of that version or of any later version published by the Free Software Foundation. If the Program does not specify a version number of this License, you may choose any version ever published by the Free Software Foundation. 10. If you wish to incorporate parts of the Program into other free programs whose distribution conditions are different, write to the author to ask for permission. For software which is copyrighted by the Free Software Foundation, write to the Free Software Foundation; we sometimes make exceptions for this. Our decision will be guided by the two goals of preserving the free status of all derivatives of our free software and of promoting the sharing and reuse of software generally. NO WARRANTY 11. 
BECAUSE THE PROGRAM IS LICENSED FREE OF CHARGE, THERE IS NO WARRANTY FOR THE PROGRAM, TO THE EXTENT PERMITTED BY APPLICABLE LAW. EXCEPT WHEN OTHERWISE STATED IN WRITING THE COPYRIGHT HOLDERS AND/OR OTHER PARTIES PROVIDE THE PROGRAM "AS IS" WITHOUT WARRANTY OF ANY KIND, EITHER EXPRESSED OR IMPLIED, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE. THE ENTIRE RISK AS TO THE QUALITY AND PERFORMANCE OF THE PROGRAM IS WITH YOU. SHOULD THE PROGRAM PROVE DEFECTIVE, YOU ASSUME THE COST OF ALL NECESSARY SERVICING, REPAIR OR CORRECTION. 12. IN NO EVENT UNLESS REQUIRED BY APPLICABLE LAW OR AGREED TO IN WRITING WILL ANY COPYRIGHT HOLDER, OR ANY OTHER PARTY WHO MAY MODIFY AND/OR REDISTRIBUTE THE PROGRAM AS PERMITTED ABOVE, BE LIABLE TO YOU FOR DAMAGES, INCLUDING ANY GENERAL, SPECIAL, INCIDENTAL OR CONSEQUENTIAL DAMAGES ARISING OUT OF THE USE OR INABILITY TO USE THE PROGRAM (INCLUDING BUT NOT LIMITED TO LOSS OF DATA OR DATA BEING RENDERED INACCURATE OR LOSSES SUSTAINED BY YOU OR THIRD PARTIES OR A FAILURE OF THE PROGRAM TO OPERATE WITH ANY OTHER PROGRAMS), EVEN IF SUCH HOLDER OR OTHER PARTY HAS BEEN ADVISED OF THE POSSIBILITY OF SUCH DAMAGES. END OF TERMS AND CONDITIONS How to Apply These Terms to Your New Programs If you develop a new program, and you want it to be of the greatest possible use to the public, the best way to achieve this is to make it free software which everyone can redistribute and change under these terms. To do so, attach the following notices to the program. It is safest to attach them to the start of each source file to most effectively convey the exclusion of warranty; and each file should have at least the "copyright" line and a pointer to where the full notice is found. Copyright (C) This program is free software; you can redistribute it and/or modify it under the terms of the GNU General Public License as published by the Free Software Foundation; either version 2 of the License, or (at your option) any later version. This program is distributed in the hope that it will be useful, but WITHOUT ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU General Public License for more details. You should have received a copy of the GNU General Public License along with this program; if not, write to the Free Software Foundation, Inc., 51 Franklin St, Fifth Floor, Boston, MA 02110-1301 USA Also add information on how to contact you by electronic and paper mail. If the program is interactive, make it output a short notice like this when it starts in an interactive mode: Gnomovision version 69, Copyright (C) year name of author Gnomovision comes with ABSOLUTELY NO WARRANTY; for details type `show w'. This is free software, and you are welcome to redistribute it under certain conditions; type `show c' for details. The hypothetical commands `show w' and `show c' should show the appropriate parts of the General Public License. Of course, the commands you use may be called something other than `show w' and `show c'; they could even be mouse-clicks or menu items--whatever suits your program. You should also get your employer (if you work as a programmer) or your school, if any, to sign a "copyright disclaimer" for the program, if necessary. Here is a sample; alter the names: Yoyodyne, Inc., hereby disclaims all copyright interest in the program `Gnomovision' (which makes passes at compilers) written by James Hacker. 
  <signature of Ty Coon>, 1 April 1989
  Ty Coon, President of Vice

This General Public License does not permit incorporating your program into proprietary programs. If your program is a subroutine library, you may consider it more useful to permit linking proprietary applications with the library. If this is what you want to do, use the GNU Library General Public License instead of this License.

kgb-arch-posix-by-slawek_1.0b4+ds.orig/README:

KGB Archiver console version for all POSIX systems

To compile:
    g++ -O3 -o kgb_arch kgb_arch_posix.cpp

©2005-2006 Tomasz Pawlak
based on PAQ6 by Matt Mahoney
mod by Slawek (poczta-sn@gazeta.pl)

kgb-arch-posix-by-slawek_1.0b4+ds.orig/kgb_arch_posix_by_slawek.cpp:

/* KGB Archiver console version ©2005-2006 Tomasz Pawlak, tomekp17@gmail.com,
   mod by Slawek (poczta-sn@gazeta.pl)
   based on PAQ6 by Matt Mahoney

PAQ6v2 - File archiver and compressor.
(C) 2004, Matt Mahoney, mmahoney@cs.fit.edu

This program is free software; you can redistribute it and/or modify it under the terms of the GNU General Public License version 2 as published by the Free Software Foundation at http://www.gnu.org/licenses/gpl.txt or (at your option) any later version. This program is distributed without any warranty.

USAGE

To compress:      PAQ6 -3 archive file file...  (1 or more file names), or
  or (MSDOS):     dir/b | PAQ6 -3 archive       (read file names from input)
  or (UNIX):      ls    | PAQ6 -3 archive
To decompress:    PAQ6 archive                  (no option)
To list contents: more < archive

Compression: The files listed are compressed and stored in the archive, which is created. The archive must not already exist. File names may specify a path, which is stored. If there are no file names on the command line, then PAQ6 prompts for them, reading until the first blank line or end of file.

The -3 is optional, and is used to trade off compression vs. speed and memory. Valid options are -0 to -9. Higher numbers compress better but run slower and use more memory. -3 is the default, and gives a reasonable tradeoff. Recommended options are:

  -0 to -2  fast (2X over -3) but poor compression, uses 2-6 MB memory
  -3        reasonably fast and good compression, uses 18 MB (default)
  -4        better compression but 3.5X slower, uses 64 MB
  -5        slightly better compression, 6X slower than -3, uses 154 MB
  -6        about like -5, uses 202 MB memory
  -7 to -9  use 404 MB, 808 MB, 1616 MB, about the same speed as -5

Decompression: No file names are specified. The archive must exist. If a path is stored, the file is extracted to the appropriate directory, which must exist. PAQ6 does not create directories. If the file to be extracted already exists, it is not replaced; rather it is compared with the archived file, and the offset of the first difference is reported. The decompressor requires as much memory as was used to compress. There is no option.

It is not possible to add, remove, or update files in an existing archive. If you want to do this, extract the files, delete the archive, and create a new archive with just the files you want.

TO COMPILE

  gxx -O PAQ6.cpp       DJGPP 2.95.2
  bcc32 -O2 PAQ6.cpp    Borland 5.5.1
  sc -o PAQ6.cpp        Digital Mars 8.35n

g++ -O produces the fastest executable among free compilers, followed by Borland and Mars. However Intel 8 will produce the fastest and smallest Windows executable overall, followed by Microsoft VC++ .net 7.1 /O2 /G7.

PAQ6 DESCRIPTION

1.
OVERVIEW A PAQ6 archive has a header, listing the names and lengths of the files it contains in human-readable format, followed by the compressed data. The first line of the header is "PAQ6 -m" where -m is the memory option. The data is compressed as if all the files were concatenated into one long string. PAQ6 uses predictive arithmetic coding. The string, y, is compressed by representing it as a base 256 number, x, such that: P(s < y) <= x < P(s <= y) (1) where s is chosen randomly from the probability distribution P, and x has the minimum number of digits (bytes) needed to satisfy (1). Such coding is within 1 byte of the Shannon limit, log 1/P(y), so compression depends almost entirely on the goodness of the model, P, i.e. how well it estimates the probability distribution of strings that might be input to the compressor. Coding and decoding are illustrated in Fig. 1. An encoder, given P and y, outputs x. A decoder, given P and x, outputs y. Note that given P in equation (1), that you can find either x from y or y from x. Note also that both computations can be done incrementally. As the leading characters of y are known, the range of possible x narrows, so the leading digits can be output as they become known. For decompression, as the digits of x are read, the set of possible y satisfying (1) is restricted to an increasingly narrow lexicographical range containing y. All of the strings in this range will share a growing prefix. Each time the prefix grows, we can output a character. y +--------------------------+ Uncomp- | V ressed | +---------+ p +----------+ x Compressed Data --+--->| Model |----->| Encoder |----+ Data +---------+ +----------+ | | +----------+ V y +---------+ p +----------+ y Uncompressed +--->| Model |----->| Decoder |----+---> Data | +---------+ +----------+ | | | +-------------------------------------+ Fig. 1. Predictive arithmetic compression and decompression Note that the model, which estimates P, is identical for compression and decompression. Modeling can be expressed incrementally by the chain rule: P(y) = P(y_1) P(y_2|y_1) P(y_3|y_1 y_2) ... P(y_n|y_1 y_2 ... y_n-1) (2) where y_i means the i'th character of the string y. The output of the model is a distribution over the next character, y_i, given the context of characters seen so far, y_1 ... y_i-1. To simplify coding, PAQ6 uses a binary string alphabet. Thus, the output of a model is an estimate of P(y_i = 1 | context) (henceforth p), where y_i is the i'th bit, and the context is the previous i - 1 bits of uncompressed data. 2. PAQ6 MODEL The PAQ6 model consists of a weighted mix of independent submodels which make predictions based on different contexts. The submodels are weighted adaptively to favor those making the best predictions. The output of two independent mixers (which use sets of weights selected by different contexts) are averaged. This estimate is then adjusted by secondary symbol estimation (SSE), which maps the probability to a new probability based on previous experience and the current context. This final estimate is then fed to the encoder as illustrated in Fig. 2. Uncompressed input -----+--------------------+-------------+-------------+ | | | | V V | | +---------+ n0, n1 +----------+ | | | Model 1 |--------->| Mixer 1 |\ p | | +---------+ \ / | | \ V V \ / +----------+ \ +-----+ +------------+ +---------+ \ / \| | p | | Comp- | Model 2 | \/ + | SSE |--->| Arithmetic |--> ressed +---------+ /\ | | | Encoder | output ... 
/ \ /| | | | / \ +----------+ / +-----+ +------------+ +---------+ / \ | Mixer 2 | / | Model N |--------->| |/ p +---------+ +----------+ Fig. 2. PAQ6 Model details for compression. The model is identical for decompression, but the encoder is replaced with a decoder. In Sections 2-6, the description applies to the default memory option (-5, or MEM = 5). For smaller values of MEM, some components are omitted and the number of contexts is less. 3. MIXER The mixers compute a probability by a weighted summation of the N models. Each model outputs two numbers, n0 and n1 represeting the relative probability of a 0 or 1, respectively. These are combined using weighted summations to estimate the probability p that the next bit will be a 1: SUM_i=1..N w_i n1_i (3) p = -------------------, n_i = n0_i + n1_i SUM_i=1..N w_i n_i The weights w_i are adjusted after each bit of uncompressed data becomes known in order to reduce the cost (code length) of that bit. The cost of a 1 bit is -log(p), and the cost of a 0 is -log(1-p). We find the gradient of the weight space by taking the partial derivatives of the cost with respect to w_i, then adjusting w_i in the direction of the gradient to reduce the cost. This adjustment is: w_i := w_i + e[ny_i/(SUM_j (w_j+wo) ny_j) - n_i/(SUM_j (w_j+wo) n_j)] where e and wo are small constants, and ny_i means n0_i if the actual bit is a 0, or n1_i if the bit is a 1. The weight offset wo prevents the gradient from going to infinity as the weights go to 0. e is set to around .004, trading off between faster adaptation (larger e) and less noise for better compression of stationary data (smaller e). There are two mixers, whose outputs are averaged together before being input to the SSE stage. Each mixer maintains a set of weights which is selected by a context. Mixer 1 maintains 16 weight vectors, selected by the 3 high order bits of the previous byte and on whether the data is text or binary. Mixer 2 maintains 16 weight vectors, selected by the 2 high order bits of each of the previous 2 bytes. To distinguish text from binary data, we use the heuristic that space characters are more common in text than NUL bytes, while NULs are more common in binary data. We compare the position of the 4th from last space with the position of the 4th from last 0 byte. 4. CONTEXT MODELS Individual submodels output a prediction in the form of two numbers, n0 and n1, representing relative probabilities of 0 and 1. Generally this is done by storing a pair of counters (c0,c1) in a hash table indexed by context. When a 0 or 1 is encountered in a context, the appropriate count is increased by 1. Also, in order to favor newer data over old, the opposite count is decreased by the following heuristic: If the count > 25 then replace with sqrt(count) + 6 (rounding down) Else if the count > 1 then replace with count / 2 (rounding down) The outputs are derived from the counts in a way that favors highly predictive contexts, i.e. those where one count is large and the other is small. For the case of c1 >= c0 the following heuristic is used. If c0 = 0 then n0 = 0, n1 = 4 c0 Else n0 = 1, n1 = c1 / c0 For the case of c1 < c0 we use the same heuristic swapping 0 and 1. In the following example, we encounter a long string of zeros followed by a string of ones and show the model output. 
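As a rough illustration only (not the actual Counter class defined later in this file, which packs the pair into a single 8-bit quantized state as described in Section 4), the count-update and output heuristics just given can be sketched as:

  // Sketch: unbounded (c0,c1) counts with the PAQ6 update/output heuristics.
  struct ToyCounter {
    int c0, c1;                        // counts of 0 and 1 bits seen
    ToyCounter(): c0(0), c1(0) {}
    void add(int y) {                  // update on observed bit y
      int& inc = y ? c1 : c0;
      int& dec = y ? c0 : c1;          // the opposite count decays
      ++inc;
      if (dec > 25) dec = int(sqrt(double(dec))) + 6;
      else if (dec > 1) dec /= 2;
    }
    void get(int& n0, int& n1) const { // favor highly predictive contexts
      if (c1 >= c0) {
        if (c0 == 0) { n0 = 0; n1 = 4*c1; }
        else         { n0 = 1; n1 = c1/c0; }
      } else {
        if (c1 == 0) { n1 = 0; n0 = 4*c0; }
        else         { n1 = 1; n0 = c0/c1; }
      }
    }
  };

Feeding this sketch ten 0 bits followed by 1 bits reproduces the trajectory shown in Table 1.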
Note how n0 and n1 predict the relative outcome of 0 and 1 respectively, favoring the most recent data, with weight n = n0 + n1 Input c0 c1 n0 n1 ----- -- -- -- -- 0000000000 10 0 40 0 00000000001 5 1 5 1 000000000011 2 2 1 1 0000000000111 1 3 1 3 00000000001111 1 4 1 4 Table 1. Example of counter state (c0,c1) and outputs (n0,n1) In order to represent (c0,c1) as an 8-bit state, counts are restricted to the values 0-40, 44, 48, 56, 64, 96, 128, 160, 192, 224, or 255. Large counts are incremented probabilistically. For example, if c0 = 40 and a 0 is encountered, then c0 is set to 44 with probability 1/4. Decreases in counter values are deterministic, and are rounded down to the nearest representable state. Counters are stored in a hash table indexed by contexts starting on byte boundaries and ending on nibble (4-bit) boundaries. Each hash element contains 15 counter states, representing the 15 possible values for the 0-3 remaining bits of the context after the last nibble boundary. Hash collisions are detected by storing an 8-bit checksum of the context. Each bucket contains 4 elements in a move-to-front queue. When a new element is to be inserted, the priority of the two least recently accessed elements are compared by using n (n0+n1) of the initial counter as the priority, and the lower priority element is discarded. Hash buckets are aligned on 64 byte addresses to minimize cache misses. 5. RUN LENGTH MODELS A second type of model is used to efficiently represent runs of up to 255 identical bytes within a context. For example, given the sequence "abc...abc...abc..." then a run length model would map "ab" -> ("c", 3) using a hash table indexed by "ab". If a new value is seen, e.g. "abd", then the state is updated to the new character and a count of 1, i.e. "ab" -> ("d", 1). A run length context is accessed 8 times, once for each bit. If the bits seen so far are consistent with the modeled character, then the output of a run length model is (n0,n1) = (0,n) if the next bit is a 1, or (n,0) if the next bit is a 0, where n is the count (1 to 255). If the bits seen so far are not consistent with the predicted byte, then the output is (0,0). These counts are added to the counter state counts to produce the model output. Run lengths are stored in a hash table without collision detection, so an element occupies 2 bytes. Generally, most models store one run length for every 8 counter pairs, so 20% of the memory is allocated to them. Run lengths are used only for memory option (-MEM) of 5 or higher. 6. SUBMODEL DETAILS Submodels differ mainly in their contexts. These are as follows: a. DefaultModel. (n0,n1) = (1,1) regardless of context. b. CharModel (N-gram model). A context consists of the last 0 to N whole bytes, plus the 0 to 7 bits of the partially read current byte. The maximum N depends on the -MEM option as shown in the table below. The order 0 and 1 contexts use a counter state lookup table rather than a hash table. Order Counters Run lengths ----- -------- ----------- 0 2^8 1 2^16 2 2^(MEM+15) 2^(MEM+12), MEM >= 5 3 2^(MEM+17) 2^(MEM+14), MEM >= 5 4 2^(MEM+18) 2^(MEM+15), MEM >= 5 5 2^(MEM+18), MEM >= 1 2^(MEM+15), MEM >= 5 6 2^(MEM+18), MEM >= 3 2^(MEM+15), MEM >= 5 7 2^(MEM+18), MEM >= 3 2^(MEM+15), MEM >= 5 8 2^20, MEM = 5 2^17, MEM = 5 2^(MEM+14), MEM >= 6 2^(MEM+14), MEM >= 6 9 2^20, MEM = 5 2^17, MEM = 5 2^(MEM+14), MEM >= 6 2^(MEM+14), MEM >= 6 Table 2. Number of modeled contexts of length 0-9 c. MatchModel (long context). 
A context is the last n whole bytes (plus extra bits) where n >=8. Up to 4 matching contexts are found by indexing into a rotating input buffer whose size depends on MEM. The index is a hash table of 32-bit pointers with 1/4 as many elements as the buffer (and therefore occupying an equal amount of memory). The table is indexed by a hashes of 8 byte contexts. No collision detection is used. In order to detect very long matches at a long distance (for example, versions of a file compressed together), 1/16 of the pointers (chosen randomly) are indexed by a hash of a 32 byte context. For each match found, the output is (n0,n1) = (w,0) or (0,w) (depending on the next bit) with a weight of w = length^2 / 4 (maximum 511), depending on the length of the context in bytes. The four outputs are added together. d. RecordModel. This models data with fixed length records, such as tables. The model attempts to find the record length by searching for characters that repeat in the pattern x..x..x..x where the interval between 4 successive occurrences is identical and at least 2. Because of uncertainty in this method, the two most recent values (which must be different) are used. The following 5 contexts are modeled; 1. The two bytes above the current bit for each repeat length. 2. The byte above and the previous byte (to the left) for each repeat length. 3. The byte above and the current position modulo the repeat length, for the longer of the two lengths only. e. SparseModel. This models contexts with gaps. It considers the following contexts, where x denotes the bytes considered and ? denotes the bit being predicted (plus preceding bits, which are included in the context). x.x? (first and third byte back) x..x? x...x? x....x? xx.? x.x.? xx..? c ... c?, gap length c ... xc?, gap length Table 3. Sparse model contexts The last two examples model variable gap lengths between the last byte and its previous occurrence. The length of the gap (up to 255) is part of the context. e. AnalogModel. This is intended to model 16-bit audio (mono or stereo), 24-bit color images, 8-bit data (such as grayscale images). Contexts drop the low order bits, and include the position within the file modulo 2, 3, or 4. There are 8 models, combined into 4 by addition before mixing. An x represents those bits which are part of the context. 16 bit audio: xxxxxx.. ........ xxxxxx.. ........ ? (position mod 2) xxxx.... ........ xxxxxx.. ........ ? (position mod 2) xxxxxx.. ........ ........ ........ xxxxxx.. ........ xxxxxx.. ........ ? (position mod 4 for stereo audio) 24 bit color: xxxx.... ........ ........ xxxxxxxx ........ ........ ? (position mod 3) xxxxxx.. xxxx.... xxxx.... ? (position mod 3) 8 bit data: xxx..... xxxxx... xxxxxxx. ? CCITT images (1 bit per pixel, 216 bytes wide, e.g. calgary/pic) xxxxxxxx (skip 215 bytes...) xxxxxxxx (skip 215 bytes...) ? Table 4. Analog models. f. WordModel. This is intended to model text files. There are 3 contexts: 1. The current word 2. The previous and current words 3. The second to last and current words (skipping a word) A word is defined in two different ways, resulting in a total of 6 different contexts: 1. Any sequence of characters with ASCII code > 32 (not white space). Upper case characters are converted to lower case. 2. Any sequence of A-Z and a-z (case sensitive). g. ExeModel. This models 32-bit Intel .exe and .dll files by changing relative 32-bit CALL addresses to absolute. 
These instructions have the form (in hex) "E8 xx yy zz 00" or "E8 xx yy zz FF" where the 32-bit operand is stored least significant byte first. These are converted to absolute addresses by adding the position of the E8 byte, and then stored in a 256 element table indexed by the low order byte (xx) along with an 8-bit count. If another E8 xx ... 00/FF with the same value of xx is encountered, then the old value is replaced and the count set back to 1. During modeling, when "E8 xx" is encountered, the bytes yy, zz, and 00/FF are predicted by adjusting xx to absolute address, then looking up the address in the table indexed by xx. If the context matches the table entry up to the current bit, then the next bit from the table is predicted with weight n for yy, 4n for zz, and 16n for 00/FF, where n is the count. 7. SSE The purpose of the SSE stage is to further adjust the probability output from the mixers to agree with actual experience. Ideally this should not be necessary, but in reality this can improve compression. For example, when "compressing" random data, the output probability should be 0.5 regardless of what the models say. SSE will learn this by mapping all input probabilities to 0.5. | Output __ | p / | / | __/ | / | / | | | / |/ Input p +------------- Fig. 3. Example of an SSE mapping. SSE maps the probability p back to p using a piecewise linear function with 32 segments. Each vertex is represented by a pair of 8-bit counters (n0, n1) except that now the counters use a stationary model. When the input is p and a 0 or 1 is observed, then the corresponding count (n0 or n1) of the two vertices on either side of p are incremented. When a count exceeds the maximum of 255, both counts are halved. The output probability is a linear interpolation of n1/n between the vertices on either side. The vertices are scaled to be longer in the middle of the graph and short near the ends. The intial counts are set so that p maps to itself. SSE is context sensitive. There are 2048 separately maintained SSE functions, selected by the 0-7 bits of the current (partial) byte and the 2 high order bits of the previous byte, and on whether the data is text or binary, using the same heuristic as for selecting the mixer context. The final output to the encoder is a weighted average of the SSE input and output, with the output receiving 3/4 of the weight: p := (3 SSE(p) + p) / 4. (4) 8. MEMORY USAGE The -m option (MEM = 0 through 9) controls model and memory usage. Smaller numbers compress faster and use less memory, while higher numbers compress better. For MEM < 5, only one mixer is used. For MEM < 4, bit counts are stored in nonstationary counters, but no run length is stored (decreasing memory by 20%). For MEM < 1, SSE is not used. For MEM < 5, the record, sparse, and analog models are not used. For MEM < 4, the word model is not used. The order of the char model ranges from 4 to 9 depending on MEM for MEM as shown in Table 6. Run Memory used by........................ Total MEM Mixers Len Order Char Match Record Sparse Analog Word SSE Memory (MB) --- ------ --- ----- ---- ----- ------ ------ ------ ---- --- ----------- 0 1 no 4 .5 1 1.5 1 1 no 5 1 2 .12 3 2 1 no 5 2 4 .12 6 3 1 no 7 10 8 .12 18 4 1 no 7 20 16 6 6 1 15 .12 64 5 2 yes 9 66 32 13 11 2 30 .12 154 6 2 yes 9 112 32 13 11 4 30 .12 202 7 2 yes 9 224 64 25 22 9 60 .12 404 8 2 yes 9 448 128 50 45 18 120 .12 808 9 2 yes 9 992 256 100 90 36 240 .12 1616 Table 5. Memory usage depending on MEM (-0 to -9 option). 9. 
EXPERIMENTAL RESULTS

Results on the Calgary corpus are shown below for some top data compressors as of Dec. 30, 2003. Options are set for maximum compression. When possible, the files are all compressed into a single archive. Run times are on a 705 MHz Duron with 256 MB memory, and include 3 seconds to run WRT when applicable. PAQ6 was compiled with DJGPP (g++) 2.95.2 -O.

  Original size                 Options       3,141,622  Time  Author
  -------------                 -------       ---------  ----  ------
  gzip 1.2.4                    -9            1,017,624     2  Jean Loup Gailly
  epm r9                        c               668,115    49  Serge Osnach
  rkc                           a -M80m -td+    661,102    91  Malcolm Taylor
  slim 20                       a               659,213   159  Serge Voskoboynikov
  compressia 1.0 beta                           650,398    66  Yaakov Gringeler
  durilca v.03a (as in README)                  647,028    30  Dmitry Shkarin
  PAQ5                                          661,811   361  Matt Mahoney
  WRT11 + PAQ5                                  638,635   258  Przemyslaw Skibinski +
  PAQ6                          -0              858,954    52
                                -1              750,031    66
                                -2              725,798    76
                                -3              709,806    97
                                -4              655,694   354
                                -5              648,951   625
                                -6              648,892   636
  WRT11 + PAQ6                  -6              626,395   446
  WRT20 + PAQ6                  -6              617,734   439

  Table 6. Compressed size of the Calgary corpus.

WRT11 is a word reducing transform written by Przemyslaw Skibinski. It uses an external English dictionary to replace words with 1-3 byte symbols to improve compression. rkc, compressia, and durilca use a similar approach. WRT20 is a newer version of WRT11.

10. ACKNOWLEDGMENTS

Thanks to Serge Osnach for introducing me to SSE (in PAQ1SSE/PAQ2) and the sparse models (PAQ3N). Also, credit to Eugene Shelwein and Dmitry Shkarin for suggestions on using multiple character SSE contexts. Credit to Eugene, Serge, and Jason Schmidt for developing faster and smaller executables of previous versions. Credit to Werner Bergmans and Berto Destasio for testing and evaluating them, including modifications that improve compression at the cost of more memory. Credit to Alexander Ratushnyak, who found a bug in PAQ4 decompression, and also in PAQ6 decompression for very small files (both fixed).

Thanks to Berto for writing PAQ5-EMILCONT-DEUTERIUM, from which this program is derived (which he derived from PAQ5). His improvements to PAQ5 include a new Counter state table and additional contexts for CharModel and SparseModel. I refined the state table by adding more representable states and modified the return counts to give greater weight when there is a large difference between the two counts.

I expect there will be better versions in the future. If you make any changes, please change the name of the program (e.g. PAQ7), including the string in the archive header by redefining PROGNAME below. This will prevent any confusion about versions or archive compatibility. Also, give yourself credit in the help message.
*/

#define PROGNAME "KGB_arch"  // Please change this if you change the program

#define hash ___hash  // To avoid Digital MARS name collision
#include <cstdio>
#include <cstdlib>
#include <cctype>
#include <cmath>
#include <ctime>
#include <cstring>
#include <new>
#include <string>
#include <vector>
#include <algorithm>
#include <map>
#undef hash
using namespace std;

const int PSCALE=4096;  // Integer scale for representing probabilities
int MEM=3;              // Use about 6 MB * 2^MEM bytes of memory

template <class T> inline int size(const T& t) {return t.size();}

// 8-32 bit unsigned types, adjust as appropriate
typedef unsigned char U8;
typedef unsigned short U16;
typedef unsigned long U32;

// Fail if out of memory
void handler() {
  printf("Out of memory\n");
  exit(1);
}

// A ProgramChecker verifies some environmental assumptions and sets the
// out of memory handler. It also gets the program starting time.
// The global object programChecker should be initialized before any
// other global objects.
class ProgramChecker { clock_t start; public: ProgramChecker() { start=clock(); set_new_handler(handler); // Test the compiler for common but not guaranteed assumptions assert(sizeof(U8)==1); assert(sizeof(U16)==2); assert(sizeof(U32)==4); assert(sizeof(int)==4); } clock_t start_time() const {return start;} // When the program started } programChecker; //////////////////////////// rnd //////////////////////////// // 32-bit random number generator based on r(i) = r(i-24) ^ r(i-55) class Random { U32 table[55]; // Last 55 random values int i; // Index of current random value in table public: Random(); U32 operator()() { // Return 32-bit random number if (++i==55) i=0; if (i>=24) return table[i]^=table[i-24]; else return table[i]^=table[i+31]; } } rnd; Random::Random(): i(0) { // Seed the table table[0]=123456789; table[1]=987654321; for (int j=2; j<55; ++j) table[j]=table[j-1]*11+table[j-2]*19/16; } //////////////////////////// hash //////////////////////////// // Hash functoid, returns 32 bit hash of 1-4 chars class Hash { U32 table[8][256]; // Random number table public: Hash() { for (int i=7; i>=0; --i) for (int j=0; j<256; ++j) table[i][j]=rnd(); assert(table[0][255]==3610026313LU); } U32 operator()(U8 i0) const { return table[0][i0]; } U32 operator()(U8 i0, U8 i1) const { return table[0][i0]+table[1][i1]; } U32 operator()(U8 i0, U8 i1, U8 i2) const { return table[0][i0]+table[1][i1]+table[2][i2]; } U32 operator()(U8 i0, U8 i1, U8 i2, U8 i3) const { return table[0][i0]+table[1][i1]+table[2][i2]+table[3][i3]; } } hash; //////////////////////////// Counter //////////////////////////// /* A Counter represents a pair (n0, n1) of counts of 0 and 1 bits in a context. get0() -- returns p(0) with weight n = get0()+get1() get1() -- returns p(1) with weight n add(y) -- increments n_y, where y is 0 or 1 and decreases n_1-y priority() -- Returns a priority (n) for hash replacement such that higher numbers should be favored. */ class Counter { U8 state; struct E { // State table entry U16 n0, n1; // get0(), get1() U8 s00, s01; // Next state on input 0 without/with probabilistic incr. 
U8 s10, s11; // Next state on input 1 U32 p0, p1; // Probability of increment x 2^32 on inputs 0, 1 }; static E table[]; // State table public: Counter(): state(0) {} int get0() const {return table[state].n0;} int get1() const {return table[state].n1;} int priority() const {return get0()+get1();} void add(int y) { if (y) { if (state<208 || rnd() N ch.bpos() -- The number of bits (0-7) of the current partial byte at (0) ch[i] -- ch(pos()-i) ch.lo() -- Low order nibble so far (1-15 with leading 1) ch.hi() -- Previous nibble, 0-15 (no leading 1 bit) ch.pos(c) -- Position of the last occurrence of byte c (0-255) ch.pos(c, i) -- Position of the i'th to last occurrence, i = 0 to 3 */ class Ch { U32 N; // Buffer size U8 *buf; // [N] last N bytes U32 p; // pos() U32 bp; // bpos() U32 hi_nibble, lo_nibble; // hi(), lo() U32 lpos[256][4]; // pos(c, i) public: Ch(): N(0), buf(0), p(0), bp(0), hi_nibble(0), lo_nibble(1) { memset(lpos, 0, 256*4*sizeof(U32)); } void init() { N = 1 << 19+MEM-(MEM>=6); buf=(U8*)calloc(N, 1); if (!buf) handler(); buf[0]=1; } U32 operator()(int i) const {return buf[(p-i)&(N-1)];} U32 operator()() const {return buf[p&(N-1)];} void update(int y) { U8& r=buf[p&(N-1)]; r+=r+y; if (++bp==8) { lpos[r][3]=lpos[r][2]; lpos[r][2]=lpos[r][1]; lpos[r][1]=lpos[r][0]; lpos[r][0]=p; bp=0; ++p; buf[p&(N-1)]=1; } if ((lo_nibble+=lo_nibble+y)>=16) { hi_nibble=lo_nibble-16; lo_nibble=1; } } U32 pos() const {return p;} U32 pos(U8 c, int i=0) const {return lpos[c][i&3];} U32 bpos() const {return bp;} U32 operator[](int i) const {return buf[i&(N-1)];} U32 hi() const {return hi_nibble;} U32 lo() const {return lo_nibble;} } ch; // Global object //////////////////////////// Hashtable //////////////////////////// /* A Hashtable stores Counters. It is organized to minimize cache misses for 64-byte cache lines. The size is fixed at 2^n bytes. It uses LRU replacement for buckets of size 4, except that the next to oldest element is replaced if it has lower priority than the oldest. Each bucket represents 15 counters for a context on a half-byte boundary. Hashtable ht(n) -- Create hash table of 2^n bytes (15/16 of these are 1-byte Counters). ht.set(h) -- Set major context to h, a 32 bit hash of a context ending on a nibble (4-bit) boundary. ht(c) -- Retrieve a reference to counter associated with partial nibble c (1-15) in context h. Normally there should be 4 calls to ht(c) after each ht.set(h). */ template class Hashtable { private: const U32 N; // log2 size in bytes struct HashElement { U8 checksum; // Checksum of context, used to detect collisions T c[15]; // 1-byte counters in minor context c HashElement(): checksum(0) {} }; HashElement *table; // [2^(N-4)] U32 cxt; // major context public: Hashtable(U32 n); // Set major context to h, a 32 bit hash. Create a new element if needed. 
void set(U32 h) { // Search 4 elements for h within a 64-byte cache line const U8 checksum=(h>>24)^h; const U32 lo= (h>>(32-N)) & -4; const U32 hi=lo+4; U32 i; for (i=lo; i Hashtable::Hashtable(U32 n): N(n>4?n-4:1), table(0), cxt(0) { assert(sizeof(HashElement)==16); assert(sizeof(char)==1); // Align the hash table on a 64 byte cache page boundary char *p=(char*)calloc((16<0 && n<=N); assert(c_>=0 && c_2000000000/PSCALE) sum/=4, n1/=4; assert(sum>0); return (PSCALE-1)*n1/sum; } // Adjust the weights by gradient descent to reduce cost of bit y void Mixer::update(int y) { U32 s0=0, s1=0; for (int i=0; i0 && s1>0) { const U32 s=s0+s1; const U32 sy=y?s1:s0; const U32 sy1=0xffffffff/sy+(rnd()&1023) >> 10; const U32 s1 =0xffffffff/s +(rnd()&1023) >> 10; for (int i=0; i> 8; wt[c][i]=min(65535, max(1, int(wt[c][i]+dw))); } } n=0; } Mixer::Mixer(int C_): C(C_), bc0(new U32[N]), bc1(new U32[N]), wt(new U32[C_][N]), n(0), c(0) { for (int i=0; i=MINMEM) m2.write(n0, n1); } void add(int n0, int n1) { if (MEM>=MINMEM) { m1.add(n0, n1); m2.add(n0, n1); } else m1.add(n0, n1); } int predict() { U32 p1=m1.predict((ch(1) >> 5) + 8*(ch.pos(0, 3) < ch.pos(32, 3))); if (MEM>=MINMEM) { U32 p2=m2.predict((ch(1) >> 6)+4*(ch(2) >> 6)); return (p1+p2)/2; } else return p1; } void update(int y) { m1.update(y); if (MEM>=MINMEM) m2.update(y); } U32 getC() const {return 256;} U32 getN() const {return m1.getN();} }; MultiMixer mixer; //////////////////////////// CounterMap //////////////////////////// /* CounterMap maintains a model and one context Countermap cm(N); -- Create, size 2^N bytes cm.update(h); -- Update model, then set next context hash to h cm.write(); -- Predict next bit and write counts to mixer cm.add(); -- Predict and add to previously written counts There should be 8 calls to either write() or add() between each update(h). h is a 32-bit hash of the context which should be set after a whole number of bytes are read. */ // Stores only the most recent byte and its count per context (run length) // in a hash table without collision detection class CounterMap1 { const int N; struct S { U8 c; // char U8 n; // count }; S* t; // cxt -> c repeated last n times U32 cxt; public: CounterMap1(int n): N(n>1?n-1:1), cxt(0) { assert(sizeof(S)==2); t=(S*)calloc(1<> 32-N; } void add() { if ((U32)((t[cxt].c+256) >> 8-ch.bpos())==ch()) { if ((t[cxt].c >> 7-ch.bpos()) & 1) mixer.add(0, t[cxt].n); else mixer.add(t[cxt].n, 0); } } void write() { mixer.write(0, 0); add(); } }; // Uses a nibble-oriented hash table of contexts (counter state) class CounterMap2 { private: const U32 N2; // Size of ht2 in elements U32 cxt; // Major context Hashtable ht2; // Secondary hash table Counter* cp[8]; // Pointers into ht2 or 0 if not used public: CounterMap2(int n); // Use 2^n bytes memory void add(); void update(U32 h); void write() { mixer.write(0, 0); add(); } }; CounterMap2::CounterMap2(int n): N2(n), cxt(0), ht2(N2) { for (int i=0; i<8; ++i) cp[i]=0; } // Predict the next bit given the bits so far in ch() void CounterMap2::add() { const U32 bcount = ch.bpos(); if (bcount==4) { cxt^=hash(ch.hi(), cxt); ht2.set(cxt); } cp[bcount]=&ht2(ch.lo()); mixer.add(cp[bcount]->get0(), cp[bcount]->get1()); } // After 8 predictions, update the models with the last input char, ch(1), // then set the new context hash to h void CounterMap2::update(U32 h) { const U32 c=ch(1); // Update the secondary context for (int i=0; i<8; ++i) { if (cp[i]) { cp[i]->add((c>>(7-i))&1); cp[i]=0; } } cxt=h; ht2.set(cxt); } // Combines 1 and 2 above. 
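// (CounterMap1 keeps only a run length -- the last byte seen in the context
// and how many times it repeated -- while CounterMap2 keeps full nibble-hashed
// counter states. CounterMap3 forwards to both, using the run-length map only
// when MEM >= MINMEM.)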
class CounterMap3 { enum {MINMEM=5}; // Smallest MEM to use cm1 CounterMap1 cm1; CounterMap2 cm2; public: CounterMap3(int n): cm1(MEM>=MINMEM ? n-2 : 0), cm2(n) {} void update(U32 h) { if (MEM>=MINMEM) cm1.update(h); cm2.update(h); } void write() { cm2.write(); if (MEM>=MINMEM) cm1.add(); } void add() { cm2.add(); if (MEM>=MINMEM) cm1.add(); } }; #define CounterMap CounterMap3 //////////////////////////// Model //////////////////////////// // All models have a function model() which updates the model with the // last bit of input (in ch) then writes probabilities for the following // bit into mixer. class Model { public: virtual void model() = 0; virtual ~Model() {} }; //////////////////////////// defaultModel //////////////////////////// // DefaultModel predicts P(1) = 0.5 class DefaultModel: public Model { public: void model() {mixer.write(1, 1);} }; //////////////////////////// charModel //////////////////////////// // A CharModel contains n-gram models from 0 to 9 class CharModel: public Model { enum {N=10}; // Number of models Counter *t0, *t1; // Model orders 0, 1 [256], [65536] CounterMap t2, t3, t4, t5, t6, t7, t8, t9; // Model orders 2-9 U32 *cxt; // Context hashes [N] Counter *cp0, *cp1; // Pointers to counters in t0, t1 public: CharModel(): t0(new Counter[256]), t1(new Counter[65536]), t2(MEM+15), t3(MEM+17), t4(MEM+18), t5((MEM>=1)*(MEM+18)), t6((MEM>=3)*(MEM+18)), t7((MEM>=3)*(MEM+18)), t8((MEM>=5)*(MEM+18-(MEM>=6))), t9((MEM>=5)*(MEM+18-(MEM>=6))), cxt(new U32[N]) { cp0=&t0[0]; cp1=&t1[0]; memset(cxt, 0, N*sizeof(U32)); memset(t0, 0, 256*sizeof(Counter)); memset(t1, 0, 65536*sizeof(Counter)); } void model(); // Update and predict }; // Update with bit y, put array of 0 counts in n0 and 1 counts in n1 inline void CharModel::model() { // Update models int y = ch(ch.bpos()==0)&1; // last input bit cp0->add(y); cp1->add(y); // Update context if (ch.bpos()==0) { // Start new byte for (int i=N-1; i>0; --i) cxt[i]=cxt[i-1]^hash(ch(1), i); t2.update(cxt[2]); t3.update(cxt[3]); t4.update(cxt[4]); if (MEM>=1) t5.update(cxt[5]); if (MEM>=3) { t6.update(cxt[6]); t7.update(cxt[7]); } if (MEM>=5) { t8.update(cxt[8]); t9.update(cxt[9]); } } cp0=&t0[ch()]; cp1=&t1[ch()+256*ch(1)]; // Write predictions to the mixer mixer.write(cp0->get0(), cp0->get1()); mixer.write(cp1->get0(), cp1->get1()); t2.write(); t3.write(); t4.write(); if (MEM>=1) t5.add(); if (MEM>=3) { t6.write(); t7.add(); } if (MEM>=5) { t8.write(); t9.add(); } } //////////////////////////// matchModel //////////////////////////// /* A MatchModel looks for a match of length n >= 8 bytes between the current context and previous input, and predicts the next bit in the previous context with weight n. If the next bit is 1, then the mixer is assigned (0, n), else (n, 0). Matchies are found using an index (a hash table of pointers into ch). 
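For example, with the weight actually computed in model() below (length^2/4, capped at 511), a 20-byte match contributes 100 to the count of the bit it predicts, i.e. (n0,n1) = (100,0) or (0,100).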
*/ class MatchModel: public Model { const int N; // 2^N = hash table size enum {M=4}; // Number of strings to match U32 hash[2]; // Hashes of current context up to pos-1 U32 begin[M]; // Points to first matching byte U32 end[M]; // Points to last matching byte + 1, 0 if no match U32 *ptr; // Hash table of pointers [2^(MEM+17)] public: MatchModel(): N(17+MEM-(MEM>=6)), ptr(new U32[1 << N]) { memset(ptr, 0, (1 << N)*sizeof(U32)); hash[0]=hash[1]=0; for (int i=0; i> (32-N); if ((hash[0]>>28)==0) h=hash[1] >> (32-N); // 1/16 of 8-contexts are hashed to 32 bytes for (int i=0; i0) { begin[i]=end[i]; U32 p=ch.pos(); while (begin[i]>0 && p>0 && begin[i]!=p+1 && ch[begin[i]-1]==ch[p-1]) { --begin[i]; --p; } } if (end[i]==begin[i]) // No match found begin[i]=end[i]=0; break; } } ptr[h]=ch.pos(); } // Test whether the current context is valid in the last 0-7 bits for (int i=0; i> (8-ch.bpos())) != ch()) begin[i]=end[i]=0; } // Predict the bit found in the matching contexts int n0=0, n1=0; for (int i=0; i511) wt=511; int y=(ch[end[i]]>>(7-ch.bpos()))&1; if (y) n1+=wt; else n0+=wt; } } mixer.write(n0, n1); } //////////////////////////// recordModel //////////////////////////// /* A RecordModel finds fixed length records and models bits in the context of the two bytes above (the same position in the two previous records) and in the context of the byte above and to the left (the previous byte). The record length is assumed to be the interval in the most recent occurrence of a byte occuring 4 times in a row equally spaced, e.g. "x..x..x..x" would imply a record size of 3. There are models for the 2 most recent, different record lengths of at least 2. */ class RecordModel: public Model { const int SIZE; enum {N=2}; // Number of models CounterMap t0, t1, t2, t3, t4; // Model int repeat1, repeat2; // 2 last cycle lengths public: RecordModel(): SIZE((MEM>=4)*(16+MEM-(MEM>=6))), t0(SIZE), t1(SIZE), t2(SIZE), t3(SIZE), t4(SIZE), repeat1(2), repeat2(3) {} void model(); }; // Update the model with bit y, then put predictions of the next update // as 0 counts in n0[0..N-1] and 1 counts in n1[0..N-1] inline void RecordModel::model() { if (ch.bpos()==0) { // Check for a repeating pattern of interval 3 or more const int c=ch(1); const int d1=ch.pos(c,0)-ch.pos(c,1); const int d2=ch.pos(c,1)-ch.pos(c,2); const int d3=ch.pos(c,2)-ch.pos(c,3); if (d1>1 && d1==d2 && d2==d3) { if (d1==repeat1) swap(repeat1, repeat2); else if (d1!=repeat2) { repeat1=repeat2; repeat2=d1; } } // Compute context hashes int r1=repeat1, r2=repeat2; if (r1>r2) swap(r1, r2); t0.update(hash(ch(r1), ch(r1*2), r1)); // 2 above (shorter repeat) t1.update(hash(ch(1), ch(r1), r1)); // above and left t2.update(hash(ch(r1), ch.pos()%r1)); // above and pos t3.update(hash(ch(r2), ch(r2*2), r2)); // 2 above (longer repeat) t4.update(hash(ch(1), ch(r2), r2)); // above and left } t0.write(); t1.write(); t2.write(); t3.write(); t4.write(); } //////////////////////////// sparseModel //////////////////////////// // A SparseModel models several order-2 contexts with gaps class SparseModel: public Model { const int SIZE; enum {N=10}; // Number of models CounterMap t0, t1, t2, t3, t4, t5, t6, t7, t8; // Sparse models public: SparseModel(): SIZE((MEM>=4)*(MEM+15-(MEM>=6))), t0(SIZE), t1(SIZE), t2(SIZE), t3(SIZE), t4(SIZE), t5(SIZE), t6(SIZE), t7(SIZE), t8(SIZE) {} void model(); // Update and predict }; inline void SparseModel::model() { // Update context if (ch.bpos()==0) { t0.update(hash(ch(1), ch(3))); t1.update(hash(ch(1), ch(4))); t2.update(hash(ch(1), ch(5))); 
t3.update(hash(ch(1), ch(6))); t4.update(hash(ch(2), ch(3))); t5.update(hash(ch(2), ch(4))); t6.update(hash(ch(3), ch(4))); const int g=min(255, int(ch.pos()-ch.pos(ch(1), 2))); // gap to prior ch1 t7.update(hash(ch(1), g)); t8.update(hash(ch(1), ch(2), g)); } // Predict t0.write(); t1.write(); t2.write(); t3.write(); t4.write(); t5.write(); t6.write(); t7.write(); t8.write(); } //////////////////////////// analogModel //////////////////////////// // An AnalogModel is intended for 16-bit mono or stereo (WAV files) // 24-bit images (BMP files), and 8 bit analog data (such as grayscale // images), and CCITT images. class AnalogModel: public Model { const int SIZE; enum {N=6}; CounterMap t0, t1, t2, t3, t4, t5, t6; int pos3; // pos % 3 public: AnalogModel(): SIZE((MEM>=4)*(MEM+13)), t0(SIZE), t1(SIZE), t2(SIZE), t3(SIZE), t4(SIZE), t5(SIZE), t6(SIZE), pos3(0) {} void model() { if (ch.bpos()==0) { if (++pos3==3) pos3=0; t0.update(hash(ch(2)/4, ch(4)/4, ch.pos()%2)); // 16 bit mono model t1.update(hash(ch(2)/16, ch(4)/16, ch.pos()%2)); t2.update(hash(ch(2)/4, ch(4)/4, ch(8)/4, ch.pos()%4)); // Stereo t3.update(hash(ch(3), ch(6)/4, pos3)); // 24 bit image models t4.update(hash(ch(1)/16, ch(2)/16, ch(3)/4, pos3)); t5.update(hash(ch(1)/2, ch(2)/8, ch(3)/32)); // 8-bit data model t6.update(hash(ch(216), ch(432))); // CCITT images } t0.write(); t1.add(); t2.add(); t3.write(); t4.add(); t5.write(); t6.write(); } }; //////////////////////////// wordModel //////////////////////////// // A WordModel models words, which are any characters > 32 separated // by whitespace ( <= 32). There is a unigram, bigram and sparse // bigram model (skipping 1 word). class WordModel: public Model { const int SIZE; enum {N=3}; CounterMap t0, t1, t2, t3, t4, t5; U32 cxt[N]; // Hashes of last N words broken on whitespace U32 word[N]; // Hashes of last N words of letters only, lower case public: WordModel(): SIZE((MEM>=4)*(MEM+17-(MEM>=6))), t0(SIZE), t1(SIZE), t2(SIZE), t3(SIZE), t4(SIZE), t5(SIZE) { for (int i=0; i32) { cxt[0]^=hash(cxt[0], c); } else if (cxt[0]) { for (int i=N-1; i>0; --i) cxt[i]=cxt[i-1]; cxt[0]=0; } if (isalpha(c) || c>=192) word[0]^=hash(word[0], tolower(c), 1); else { for (int i=N-1; i>0; --i) word[i]=word[i-1]; word[0]=0; } t0.update(cxt[0]); t1.update(cxt[1]+cxt[0]); t2.update(cxt[2]+cxt[0]); t3.update(word[0]); t4.update(word[1]+word[0]); t5.update(word[2]+word[0]); } t0.write(); t1.write(); t2.write(); t3.write(); t4.write(); t5.write(); } }; //////////////////////////// exeModel //////////////////////////// // Model 32-bit Intel executables, changing relative call (E8) operands // to absolute addresses class ExeModel { struct S { U32 a; // absolute address, indexed on 8 low order bytes U8 n; // how many times? 
S(): a(0), n(0) {} }; S t[256]; // E8 history indexed on low order byte public: void model() { // Convert E8 relative little-endian address to absolute by adding // file offset, then store in table t indexed by its low byte if (ch.bpos()==0) { if (ch(5)==0xe8 && (ch(1)==0 || ch(1)==0xff)) { U32 a=ch(4)+(ch(3)<<8)+(ch(2)<<16)+(ch(1)<<24)+ch.pos()-5; int i=a&0xff; if (t[i].a==a && t[i].n<255) ++t[i].n; else { t[i].a=a; t[i].n=1; } } } int n0=0, n1=0; // Model 4th byte of address if (ch(4)==0xe8) { int i=(ch(3)+ch.pos()-4)&0xff; // index in t if (t[i].n>0) { U32 r=t[i].a-ch.pos()+4; // predicted relative address U32 ck=(((r&0xff000000)>>8)+0x1000000)>>(24-ch.bpos()); // ch(0) should be this if context matches so far int y=(r>>(31-ch.bpos()))&1; // predicted bit if (ch(0)==ck && ch(1)==((r>>16)&0xff)) { if (y) n1=t[i].n*16; else n0=t[i].n*16; } } } // Model 3rd byte of address if (ch(3)==0xe8) { int i=(ch(2)+ch.pos()-3)&0xff; if (t[i].n>0) { U32 r=t[i].a-ch.pos()+3; U32 ck=((r&0xff0000)+0x1000000)>>(24-ch.bpos()); int y=(r>>(23-ch.bpos()))&1; if (ch(0)==ck && ch(1)==((r>>8)&0xff)) { if (y) n1=t[i].n*4; else n0=t[i].n*4; } } } // Model 2nd byte of address else if (ch(2)==0xe8) { int i=(ch(1)+ch.pos()-2)&0xff; if (t[i].n>0) { U32 r=t[i].a-ch.pos()+2; U32 ck=((r&0xff00)+0x10000)>>(16-ch.bpos()); int y=(r>>(15-ch.bpos()))&1; if (ch(0)==ck) { if (y) n1=t[i].n; else n0=t[i].n; } } } mixer.write(n0, n1); } }; //////////////////////////// Predictor //////////////////////////// /* A Predictor adjusts the model probability using SSE and passes it to the encoder. An SSE model is a table of counters, sse[SSE1][SSE2] which maps a context and a probability into a new, more accurate probability. The context, SSE1, consists of the 0-7 bits of the current byte and the 2 leading bits of the previous byte. The probability to be mapped, SSE2 is first stretched near 0 and 1 using SSEMap, then quantized into SSE2=32 intervals. Each SSE element is a pair of 0 and 1 counters of the bits seen so far in the current context and probability range. Both the bin below and above the current probability is updated by adding 1 to the appropriate count (n0 or n1). The output probability for an SSE element is n1/(n0+n1) interpolated between the bins below and above the input probability. This is averaged with the original probability with 25% weight to give the final probability passed to the encoder. */ class Predictor { // Models DefaultModel defaultModel; CharModel charModel; MatchModel matchModel; RecordModel recordModel; SparseModel sparseModel; AnalogModel analogModel; WordModel wordModel; ExeModel exeModel; enum {SSE1=256*4*2, SSE2=32, // SSE dimensions (contexts, probability bins) SSESCALE=1024/SSE2}; // Number of mapped probabilities between bins // Scale probability p into a context in the range 0 to 1K-1 by // stretching the ends of the range. 
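// For example (an illustrative sketch of the SSE step described above, not
// additional functionality): if the stretched, quantized probability falls
// between bins b and b+1 with interpolation weight w in [0,1], the refined
// estimate is roughly
//   ssep = (1-w)*sse[context][b].p() + w*sse[context][b+1].p()
// and the value handed to the encoder keeps 25% of the original mixer output:
//   nextp = (3*ssep + nextp) / 4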
class SSEMap { U16 table[PSCALE]; public: int operator()(int p) const {return table[p];} SSEMap(); } ssemap; // functoid // Secondary source encoder element struct SSEContext { U8 c1, n; // Count of 1's, count of bits int p() const {return PSCALE*(c1*64+1)/(n*64+2);} void update(int y) { if (y) ++c1; if (++n>254) { // Roll over count overflows c1/=2; n/=2; } } SSEContext(): c1(0), n(0) {} }; SSEContext (*sse)[SSE2+1]; // [SSE1][SSE2+1] context, mapped probability U32 nextp; // p() U32 ssep; // Output of sse U32 context; // SSE context public: Predictor(); int p() const {return nextp;} // Returns pr(y = 1) * PSCALE void update(int y); // Update model with bit y = 0 or 1 }; Predictor::SSEMap::SSEMap() { for (int i=0; i1023) p=1023; if (p<0) p=0; table[i]=p; } } Predictor::Predictor(): sse(0), nextp(PSCALE/2), ssep(512), context(0) { ch.init(); // Initialize to sse[context][ssemap(p)] = p if (MEM>=1) { sse=(SSEContext(*)[SSE2+1]) new SSEContext[SSE1][SSE2+1]; int N=PSCALE; int oldp=SSE2+1; for (int i=N-1; i>=0; --i) { int p=(ssemap(i*PSCALE/N)+SSESCALE/2)/SSESCALE; int n=1+N*N/((i+1)*(N-i)); if (n>254) n=254; int c1=(i*n+N/2)/N; for (int j=oldp-1; j>=p; --j) { for (int k=0; k=1) { sse[context][ssep/SSESCALE].update(y); sse[context][ssep/SSESCALE+1].update(y); } // Adjust model mixing weights mixer.update(y); // Update individual models ch.update(y); defaultModel.model(); charModel.model(); if (MEM>=2) matchModel.model(); if (MEM>=4) { recordModel.model(); sparseModel.model(); analogModel.model(); wordModel.model(); } if (MEM>=3) exeModel.model(); // Combine probabilities nextp=mixer.predict(); // Get final probability, interpolate SSE and average with original if (MEM>=1) { context=(ch(0)*4+ch(1)/64)*2+(ch.pos(0,3)=0x4000000) xmid+=(xdiff>>12)*p; else if (xdiff>=0x100000) xmid+=((xdiff>>6)*p)>>6; else xmid+=(xdiff*p)>>12; // Update the range if (y) x2=xmid; else x1=xmid+1; predictor.update(y); // Shift equal MSB's out while (((x1^x2)&0xff000000)==0) { putc(x2>>24, archive); x1<<=8; x2=(x2<<8)+255; } } /* Decode one bit from the archive, splitting [x1, x2] as in the encoder and returning 1 or 0 depending on which subrange the archive point x is in. */ inline int Encoder::decode() { // Split the range const U32 p=predictor.p()*(4096/PSCALE)+2048/PSCALE; // P(1) * 4K assert(p<4096); const U32 xdiff=x2-x1; U32 xmid=x1; // = x1+p*(x2-x1) multiply without overflow, round down if (xdiff>=0x4000000) xmid+=(xdiff>>12)*p; else if (xdiff>=0x100000) xmid+=((xdiff>>6)*p)>>6; else xmid+=(xdiff*p)>>12; // Update the range int y=0; if (x<=xmid) { y=1; x2=xmid; } else x1=xmid+1; predictor.update(y); // Shift equal MSB's out while (((x1^x2)&0xff000000)==0) { x1<<=8; x2=(x2<<8)+255; int c=getc(archive); if (c==EOF) c=0; x=(x<<8)+c; } return y; } // Should be called when there is no more to compress void Encoder::flush() { // In COMPRESS mode, write out the remaining bytes of x, x1 < x < x2 if (mode==COMPRESS) { while (((x1^x2)&0xff000000)==0) { putc(x2>>24, archive); x1<<=8; x2=(x2<<8)+255; } putc(x2>>24, archive); // First unequal byte } } //////////////////////////// Transformer //////////////////////////// /* A transformer compresses 1 byte at a time. It also provides a place to insert transforms or filters in the future. 
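  For example, a whole stream can be compressed one byte at a time through
  this interface (a usage sketch only, assuming an input file in and an
  archive file f opened as described below):

    Transformer tf(COMPRESS, f);
    int c;
    while ((c=getc(in))!=EOF)
      tf.encode(c);
    tf.flush();

  The interface: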
Transformer tf(COMPRESS, f) -- Initialize for compression to archive f which must be open in "wb" mode with the header already written Transformer tf(DECOMPRESS, f) -- Initialize for decompression from f which must be open in "rb" mode past the header tf.encode(c) -- Compress byte c c = tf.decode() -- Decompress byte c tf.flush() -- Should be called when compression is finished */ class Transformer { Encoder e; public: Transformer(Mode mode, FILE* f): e(mode, f) {} void encode(int c) { for (int i=7; i>=0; --i) e.encode((c>>i)&1); } U32 decode() { U32 c=0; for (int i=0; i<8; ++i) c=c+c+e.decode(); return c; } void flush() { e.flush(); } }; //////////////////////////// main //////////////////////////// // Read and return a line of input from FILE f (default stdin) up to // first control character except tab. Skips CR in CR LF. string getline(FILE* f=stdin) { int c; string result=""; while ((c=getc(f))!=EOF && (c>=32 || c=='\t')) result+=char(c); if (c=='\r') (void) getc(f); return result; } // User interface int main(int argc, char** argv) { int _mode = 0; // Check arguments if (argc<2) { printf("KGB Archiver v1.0, (C) 2005-2006 Tomasz Pawlak\nBased on PAQ6 by Matt Mahoney\nmod by Slawek (poczta-sn@gazeta.pl)\n\n" "Compression:\t\tkgb_arch.exe -<m> archive.kgb files <@list_files>\n" "Decompression:\t\tkgb_arch.exe archive.kgb\n" "Table of contents:\tmore < archive.kgb\n\n" "m argument\tmemory usage\n" "----------\t------------------------------\n" " -0 \t 2 MB (the fastest compression)\n" " -1 \t 3 MB\n" " -2 \t 6 MB\n" " -3 \t 18 MB (default)\n" " -4 \t 64 MB\n" " -5 \t 154 MB\n" " -6 \t 202 MB\n" " -7 \t 404 MB\n" " -8 \t 808 MB\n" " -9 \t 1616 MB (the best compression)\n"); return 1; } // Read and remove -MEM option if (argc>1 && argv[1][0]=='-') { if (isdigit(argv[1][1]) && argv[1][2]==0) { MEM=argv[1][1]-'0'; } else printf("Option %s ignored\n", argv[1]); argc--; argv++; } // File names and sizes from input or archive vector<string> filename; // List of names vector<long> filesize; // Size or -1 if error int uncompressed_bytes=0, compressed_bytes=0; // Input, output sizes // Extract files FILE* archive=fopen(argv[1], "rb"); if (archive) { _mode = 0; if (argc>2) { printf("File %s already exists\n", argv[1]); return 1; } // Read PROGNAME " -m\r\n" at start of archive string s=getline(archive); if (s.substr(0, string(PROGNAME).size()) != PROGNAME) { printf("Archive %s is not in KGB Archiver format\n", argv[1]); return 1; } // Get option -m where m is a digit if (s.size()>2 && s[s.size()-2]=='-') { int c=s[s.size()-1]; if (c>='0' && c<='9') MEM=c-'0'; } printf("Extracting archive " PROGNAME " -%d %s ...\n", MEM, argv[1]); // Read "size filename" in "%d\t%s\r\n" format while (true) { string s=getline(archive); if (s.size()>1) { filesize.push_back(atol(s.c_str())); string::iterator tab=find(s.begin(), s.end(), '\t'); if (tab!=s.end()) filename.push_back(string(tab+1, s.end())); else filename.push_back(""); } else break; } // Test end of header for "\f\0" { int c1=0, c2=0; if ((c1=getc(archive))!='\f' || (c2=getc(archive))!=0) { printf("%s: Incorrect format of file header %d %d\n", argv[1], c1, c2); return 1; } } // Extract files from archive data Transformer e(DECOMPRESS, archive); for (int i=0; i2) for (int i=2; i31&&fchar<127) sWork+=fchar; else if(fchar=='\n') { if(sWork!="") { filename.push_back(sWork); sWork=""; } } else { printf("The file %s is not a valid directive file (char %d).\n",fname.c_str(),fchar); break; } } continue; } fclose(File); } else fclose(File); } filename.push_back(argv[i]); } else { printf(
"Type filenames to compression, finish empty line:\n"); while (true) { string s=getline(stdin); if (s=="") break; else filename.push_back(s); } } // Get file sizes for (int i=0; i=0) fprintf(archive, "%ld\t%s\r\n", filesize[i], filename[i].c_str()); } putc(032, archive); // MSDOS EOF putc('\f', archive); putc(0, archive); // Write data Transformer e(COMPRESS, archive); long file_start=ftell(archive); for (int i=0; i=0) { uncompressed_bytes+=size; printf("%-23s %10ldKB -> ", filename[i].c_str(), size/1024); FILE* f=fopen(filename[i].c_str(), "rb"); int c; for (long j=0; j %dKB w %1.2fs.", uncompressed_bytes/1024, compressed_bytes/1024, elapsed_time); else if(!_mode) printf("%dKB -> %dKB w %1.2fs.", compressed_bytes/1024, uncompressed_bytes/1024, elapsed_time); if (uncompressed_bytes>0 && elapsed_time>0) { printf(" (%1.2f%% czas: %1.0f KB/s)", compressed_bytes*100.0/uncompressed_bytes, uncompressed_bytes/(elapsed_time*1000.0)); } printf("\n"); return 0; }